February 15, 2019

Honest Networker

Jr Network Engineers entering the field

[video: trainspotting]

by ohseuch4aeji4xar at February 15, 2019 09:50 AM

ipSpace.net Blog (Ivan Pepelnjak)

Loop Avoidance in VXLAN Networks

Antonio Boj sent me this interesting challenge:

Is there any way to avoid, prevent or at least mitigate bridging loops when using VXLAN with EVPN? Spanning-tree is not supported when using VXLAN encapsulation so I was hoping to use EVPN duplicate MAC detection.

MAC move dampening (or anything similar) doesn’t help if you have a forwarding loop. You might be able to use it to identify there’s a loop, but that’s it… and while you’re doing that your network is melting down.
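For what it’s worth, the counting logic behind duplicate-MAC detection is simple enough to sketch in a few lines of Python; the threshold and window below are invented illustrative values, not any vendor’s defaults:

    from collections import defaultdict, deque
    import time

    MOVE_THRESHOLD = 5      # illustrative: this many moves...
    WINDOW_SECONDS = 180    # ...within this window flags the MAC

    mac_moves = defaultdict(deque)   # MAC address -> timestamps of recent moves

    def record_mac_move(mac, now=None):
        """Record one MAC move; return True when the MAC looks like it's looping."""
        now = time.time() if now is None else now
        moves = mac_moves[mac]
        moves.append(now)
        while moves and now - moves[0] > WINDOW_SECONDS:
            moves.popleft()   # forget moves that fell out of the window
        return len(moves) >= MOVE_THRESHOLD

Even when this returns True, you have only detected the loop; the flooded frames causing the moves are still circulating, which is exactly the problem described above.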

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at February 15, 2019 08:56 AM

XKCD Comics

February 14, 2019

The Networking Nerd

Managing Automation – Fighting Fear of Job Justification

Dear Employees

 

We have decided to implement automation in our environment because robots and programs are way better than people. We will need you to justify your job in the next week or we will fire you and make you work in a really crappy job that doesn’t involve computers while we light cigars with dollar bills.

 

Sincerely, Management

The above letter is the interpretation of the professional staff of your organization when you send out the following email:

We are going to implement some automation concepts next week. What are some things you wish you could automate in your job?

Interpretations differ as to the intent of automation. Management likes the idea of their engineering staff being fully tasked and working on valuable projects. They want to know their people are doing something productive. And the people that aren’t doing productive stuff should either be finding something to do or finding a new job.

Professional staff likes being fully tasked and productive too. They want to be involved in jobs and tasks that do something cool or justify their existence to management. If their job doesn’t do that they get worried they won’t have it any longer.

So, where is the disconnect?

You Do Exist (Sort of)

The problem with these interpretations comes down to the job itself. Humans can get very good at repetitive, easy jobs. Assembly line workers, quality testers, and even systems engineers are perfect examples of this. People love to do something over and over again that they are good at. They can be amazing when it comes to programming VLANs or typing out tweets for social media. And those are some pretty easy jobs!

Consistency is king when it comes to easy job tasks. When I can do it so well that I don’t have to think about things any more I’ve won. When it comes out the same way every time no matter how inattentive I am it’s even better. And if it’s a task that I can do at the same time or place every day or every week then I’m in heaven. Easy jobs that do themselves on a regular schedule are the key to being employed forever.

Automatic For The Programs

Where does that sound more familiar in today’s world? Why, automation of course! Automation is designed to make easy, repeatable jobs execute on a schedule or with a specific trigger. When that task can be done by a program that is always at work and never calls in sick or goes on vacation you can see the allure of it to management. You can also see the fear in the eyes of the professional that just found the perfect role full of easy jobs that can be scheduled on their calendar.

Hence the above interpretation of the automation email sample. People fear change. They fear automation taking away their jobs. Yet, the jobs that are perfect for automation are the kinds of things that shouldn’t be jobs in the first place. Professionals in a given discipline are much, much smarter than just doing something repetitively over and over again like VLAN modifications or IP addressing of interfaces. More importantly, automation provides consistency. Automation can be programmed to pull from the correct database or provide the correct configuration every time without worry of a transcription mistake or data entry in the wrong field.
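As a minimal sketch of that consistency argument (the inventory and template below are invented for illustration), generating configuration from a single source of truth removes the manual transcription step entirely:

    # Invented source of truth: every port's config is generated from the
    # same data, so there is no hand-typing to get wrong.
    SOURCE_OF_TRUTH = {
        "Ethernet1": {"vlan": 110, "description": "app servers"},
        "Ethernet2": {"vlan": 120, "description": "db servers"},
    }

    TEMPLATE = (
        "interface {name}\n"
        " description {description}\n"
        " switchport access vlan {vlan}\n"
    )

    def render_config():
        return "".join(
            TEMPLATE.format(name=name, **attrs)
            for name, attrs in sorted(SOURCE_OF_TRUTH.items())
        )

    print(render_config())

Run it twice, or two hundred times, and the output never varies; that is the whole point.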

People want these kinds of jobs because they afford two important things: justification of their existence and free time at work. The former ensures they get to have a paycheck. The latter gives them the chance to find some kind of joy in their job. Sure, you have some kind of repetitive task that you need to do every day like run a report or punch holes in a sheet of metal. But when you’re not doing that task you have the freedom to do fun stuff like learn about other parts of your job or just relax.

When you take away the job with automation, you take away the cover for the relaxation part. Now, you’ve upset the balance and forced people to find new things to do. And that means learning. Figuring out how to make tasks easy and repetitive again. And that’s not always possible. Hence the fear of automation and change.

Building A Better Path To Automation

How do we fix this mess? How can we motivate people to embrace automation? Well, it’s pretty simple:

  1. Help Your Team See The Need – If your teams think they’re going to lose their jobs because of automation, they’re not going to embrace it. You need to show them not only that they aren’t going to lose their jobs, but also how automation will make those jobs easier and better. Remember to frame your arguments along the lines of removing mistakes and not needing to worry about justifying your existence in a role. That should encourage everyone to look for new challenges to overcome.
  2. Show the Value – This goes with the first part somewhat, but more than showing the need for automation with mistake reduction or schedule easing, you also need to show value. If a person has never made a mistake or has built their schedule around repetitive tasks they are going to hate automation. Show them what they can do now that their roles don’t have to focus on the old stuff they did. Help them look at where they can provide additional value. Even if it starts off by monitoring the automation platform to make sure it’s executing correctly. Maybe the value they can provide is finding new things to automate!
  3. Embrace the Future – Automation allows people to learn how to do new things. They can focus on new skills or roles that help support the business in a better way. More automation means more complexity to understand but also a chance for people to shine in new roles. The right people will see a challenge as something to be overcome. Help them set new goals. Help them get where they want to be. You’ll be surprised how quickly they will get there with the right leadership.

Tom’s Take

Automation isn’t going to steal jobs. It will force people to examine their tasks and decide how important they really are. The people that were covering their basic roles and trying to skate by are going to leave no matter what. Even if your automation push fails, these marginal people are going to leave for greener pastures thanks to the examination of what they’re actually doing. Don’t let the pushback discourage you in the short term. Automation isn’t the goal. Automation is the tool to get you to the true goal of a smoother, more responsive team that accomplishes more and can reach higher goals.

by networkingnerd at February 14, 2019 04:25 PM

My Etherealmind

Some Thoughts on Digital Transformation

It was my privilege to host a panel during ONUG London 2018, and my face was shoved in front of a camera as part of the package. I surprised myself here with ‘thought lording’ about digital transformation that manages to be intelligent and coherent. I extracted the best and most relevant bits into this video. Here are […]

The post Some Thoughts on Digital Transformation appeared first on EtherealMind.

by Greg Ferro at February 14, 2019 12:29 PM

ipSpace.net Blog (Ivan Pepelnjak)

Video: Automating Simple Reports

Network automation is scary when you start using it in a brownfield environment. After all, it’s pretty easy to propagate an error to all devices in your network. However, there’s one thing you can do that’s usually pretty harmless: collect data from network devices and create summary reports or graphs.

I collected several interesting solutions created by attendees of our Building Network Automation Solutions online course and described them in a short video.
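Even a tiny read-only collection script counts as a starting point. Here’s a rough sketch using the Netmiko library (device addresses and credentials are placeholders, not taken from any of those solutions):

    from netmiko import ConnectHandler

    DEVICES = ["10.0.0.1", "10.0.0.2"]   # placeholder management addresses

    for host in DEVICES:
        conn = ConnectHandler(
            device_type="cisco_ios",
            host=host,
            username="report",           # placeholder credentials
            password="secret",
        )
        # One read-only "show" command per device, summarized on one line.
        version = conn.send_command("show version | include Version")
        conn.disconnect()
        print(f"{host}: {version.strip()}")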

Want to create something similar? No time to procrastinate – the registration for the Spring 2019 course ends tomorrow.

by Ivan Pepelnjak (noreply@blogger.com) at February 14, 2019 08:44 AM

February 13, 2019

My Etherealmind
Network Design and Architecture

Tech Field Day in Barcelona

I was in Barcelona last week; there was a Cisco Live, as you might know. Unfortunately I couldn’t meet with many people during Cisco Live, as I was invited by Tech Field Day and we recorded many great sessions together. Cisco announced ACI in the cloud, and there were presentations about it. If you don’t …

The post Tech Field Day in Barcelona appeared first on Cisco Network Design and Architecture | CCDE Bootcamp | orhanergun.net.

by Orhan Ergun at February 13, 2019 03:12 PM

XKCD Comics

February 12, 2019

My Etherealmind
ipSpace.net Blog (Ivan Pepelnjak)

Operating Cisco ACI the Right Way

This is a guest blog post by Andrea Dainese, senior network and security architect, and author of UNetLab (now EVE-NG) and Route Reflector Labs. These days you’ll find him busy automating Cisco ACI deployments.


In this post we’ll focus on a simple question that arises in numerous chats I have with colleagues and customers: how should a network engineer operate Cisco ACI? A lot of them don’t use any sort of network automation and manage their Cisco ACI deployments using the Web Interface. Is that good or evil? As you’ll see, we have a definite answer, and it’s not “it depends”.

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at February 12, 2019 08:47 AM

February 11, 2019

My Etherealmind
ipSpace.net Blog (Ivan Pepelnjak)

Last Week on ipSpace.net (2019W6)

Last week Howard Marks completed the Hyperconverged Infrastructure Deep Dive trilogy covering smaller HCI players (including Cisco’s Hyperflex) and explaining the intricacies of costing and licensing HCI solutions.

On Thursday I finally managed to start the long-overdue Data Center Interconnects update. The original webinar was recorded in 2011, and while the layer-3 technologies haven’t changed much (with LISP still being mostly a solution in search of a problem), most of the layer-2 technologies I described at that time vanished, with OTV being a notable exception. Keep that in mind the next time your favorite $vendor starts promoting another wonderful technology.

You can get access to both webinars with a standard ipSpace.net subscription.

by Ivan Pepelnjak (noreply@blogger.com) at February 11, 2019 08:41 AM

XKCD Comics

February 08, 2019

Dyn Research (Was Renesys Blog)

Last Month in Internet Intelligence: January 2019

This post is presented in conjunction with The Internet Society.

During the second half of 2018, the causes of significant Internet disruptions observed through the Oracle Internet Intelligence Map could be clustered into a few overarching areas: government-directed, cable problems, power outages, distributed denial of service (DDoS) attacks, and general technical issues. Little changed heading into 2019, with two new government-directed Internet disruptions observed in Africa, alongside disruptions caused by fiber cuts and other network issues that impacted a number of countries around the world.

Government Directed

Initially covered in last month’s overview, the Internet disruption in the Democratic Republic of Congo continued into January, lasting through the third week of the month. Government authorities reportedly cut off Internet access in the country in December to prevent “rumor mongering” in the run-up to presidential elections.

An attempted military coup in Gabon led to a day-long Internet disruption in the country. The disruption started just after 07:00 UTC on January 7, as seen in the figure below, which shows clear declines in the Traceroute Completion Ratio and BGP Routes metrics, as well as a disruption to the usual diurnal pattern seen in the DNS Query Rate metric. Although the coup was reportedly thwarted hours later, Internet connectivity was disrupted until around 11:00 UTC on January 8.

The disruption is also clearly evident in the Traffic Shifts graphs for AS16058 (Gabon Telecom), shown in the figure below. During the time that the Internet in Gabon was shut down, the number of successful traceroutes from Internet Intelligence measurement agents to endpoints in the network fell to near zero.
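The Traceroute Completion Ratio metric itself is easy to reason about. As a toy illustration (made-up records, not Oracle’s actual pipeline), it is simply the share of traceroutes toward a network that reach their target:

    # Made-up measurement records: each traceroute either completed or did not.
    results = [
        {"target_asn": 16058, "completed": False},
        {"target_asn": 16058, "completed": False},
        {"target_asn": 16058, "completed": True},
    ]

    def completion_ratio(records, asn):
        relevant = [r for r in records if r["target_asn"] == asn]
        if not relevant:
            return None
        return sum(r["completed"] for r in relevant) / len(relevant)

    print(completion_ratio(results, 16058))   # ~0.33; near zero during the shutdown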

Just a week later, protests over fuel prices in Zimbabwe led the country’s government to order a social media ban, leading telecommunications service providers including Telone and Econet to block access to applications including Twitter, Facebook, and WhatsApp. (Telone is a state-owned operator and Zimbabwe’s largest fixed Internet provider, and Econet (Liquid Telecom) is Zimbabwe’s largest mobile operator.) Shortly thereafter, these providers moved to block Internet access entirely, leading to the disruption on January 15 & 16 seen in the figure below.

The impact of these shutdowns can be seen in the Traffic Shifts graphs for the respective network providers shown in the figures below, with the number of completed traceroutes to endpoints in each network dropping significantly over January 15 & 16.

The graphs above also show another disruption occurring on January 17 & 18, lasting for approximately 18 hours. This resulted from a directive issued by the Zimbabwean government ordering another shutdown, despite a pending court action challenging the legality of the earlier shutdown. A High Court justice ultimately ruled that the government directives ordering Internet shutdowns were illegal.

In November 2017, an Internet Exchange Point (IXP) was launched in Harare, Zimbabwe, providing a venue for local network providers to come together to exchange traffic. The figure below shows incoming and outgoing traffic levels at the Harare IXP during both periods of disruption – traffic in both directions drops to effectively zero.

Fiber Cuts

On January 4, a reported fiber cut near Norilsk, Russia forced two local network providers onto backup satellite connections, significantly impacting performance. As shown in the Traffic Shifts graphs below for AS29520 (Rosintel) and AS58037 (Masterra), traceroutes targeting endpoints within the networks shifted to satellite connectivity providers with little observed disruption to service, but at the cost of much higher latency. A Facebook post on January 7 from the fiber operator reported that the damage to the fiber had been fixed, and that services had been restored.

It appears that a similar issue may have occurred at the end of January as well, forcing Masterra onto a higher latency satellite connection again for about a day and a half, as shown in the Traffic Shifts graphs below. A Facebook post from the aforementioned fiber operator references a fiber break that occurred on January 28, citing an expected 72-hour repair time, but it is not clear if that is related to this disruption.

Brief Internet disruptions in Greenland were observed on January 5 & 8, as shown in the figure below. All three measured metrics were impacted for between 90 and 120 minutes on both days. The disruptions may have been related to work being done to repair a reported break in a submarine cable connecting Greenland to Iceland – the break happened close to Qaqortoq, Greenland on December 28, 2018.

Tele Greenland is the largest telecommunications company in Greenland, and the impact of the submarine cable break on AS8818 (Tele Greenland) can be seen in the Traffic Shifts graphs below. During the two observed disruptions, the number of completed traceroutes to endpoints in the network dropped precipitously, while those that did complete successfully saw increased latency.

At approximately 07:30 GMT on January 20, a reported failure in the Tonga Cable disrupted Internet connectivity to Tonga. The Tonga Cable connects the island nation to Fiji, where Internet traffic can take advantage of Fiji’s connection to the Southern Cross Cable Network, connecting to Australia, New Zealand, and the United States. The cable’s failure resulted in a significant decrease in the number of completed traceroutes to endpoints in the country as well as a visible decline in the number of routed networks in the country, and the DNS Query Rate metric dropped to zero at times.

The figure below shows that the metrics gradually recovered over the next several days as Tonga’s Internet connectivity shifted to satellite backup. On January 28, a cable repair ship arrived in Tonga, and a day later, had located the damaged submarine cable, but found it around 100 meters south-east from where it was originally laid, and subsequently identified a second fault in the cable.

Other Network Issues

During the last few days of December 2018, and continuing into January 2019, network providers in South Sudan reportedly had to revert to backup geostationary satellite connectivity when microwave links to Uganda went down. As shown in the Traffic Shifts graphs below, the associated latency for traceroutes targeting endpoints in these networks increased by nearly 2.5x on December 28 & 29, December 31, and January 2 & 3 as they shifted to SES Astra satellite links.
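That increase is roughly what physics predicts. A quick back-of-the-envelope check of geostationary propagation delay (propagation only, ignoring queuing and processing entirely):

    ALTITUDE_KM = 35_786       # approximate geostationary orbit altitude
    LIGHT_KM_PER_S = 299_792   # speed of light in vacuum

    # Ground -> satellite -> ground, once per direction of a round trip.
    one_way_s = 2 * ALTITUDE_KM / LIGHT_KM_PER_S
    print(f"one-way: {one_way_s * 1000:.0f} ms")           # ~239 ms
    print(f"round trip: {2 * one_way_s * 1000:.0f} ms")    # ~477 ms

Adding nearly half a second of round-trip time to paths that normally run over terrestrial microwave makes a 2.5x latency jump unsurprising.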

A significant disruption in Internet connectivity was observed in the Country Statistics graphs for Gambia on January 16, as shown in the figure below. The next day, in response to user complaints regarding slow service, local provider Gamtel posted a Tweet explaining that planned maintenance by Portugal Telecom between January 16-20 could impact connection speeds.


The impact of this scheduled maintenance can be seen in the figures below, which show Traffic Shifts graphs for AS25250 (Gamtel) and AS37503 (Unique Solutions). In the graphs for Gamtel, the maintenance being performed by Portugal Telecom apparently caused a multi-hour drop in the number of completed traceroutes to endpoints within the network, along with a significant increase in associated latency. The Unique Solutions graphs show that traceroutes to endpoints within the network failed over from Portugal Telecom to an alternate provider, with only a brief disruption to connectivity, and a slight decrease in associated latency.

In addition to the disruptions reviewed above, an issue was observed in Haiti, where an unknown problem on January 5 at Digicel Haiti disrupted connectivity in the country for approximately five hours. Local providers in the Cook Islands and South Sudan apparently experienced problems with satellite connectivity to O3b on January 24 and January 27-29 respectively, resulting in multi-hour disruptions to Internet connectivity within these countries.

Conclusion

Problems with submarine cables, terrestrial fiber, and satellite/microwave links have long been threats to Internet connectivity, and it was almost expected that we’d see these occur soon after 2019 began. While arguably avoidable, their impact can generally be mitigated through disaster recovery plans that include backup connectivity through alternate providers and alternative types of connections. The new year also kicked off with two significant government-directed Internet disruptions in Africa, along with one that began in December and continued into 2019. Unfortunately, providers have minimal, if any, recourse when these types of shutdowns are ordered.

by David Belson at February 08, 2019 06:16 PM

My Etherealmind
XKCD Comics

February 07, 2019

My Etherealmind

The IPv6 Problem is IPv4

At the end of the day, most engineers want to implement IPv6 because they know, deep down, that it is an eventual necessity. One problem is that no one is talking about quitting IPv4. If you add IPv6 to your network, you increase costs, complexity and operational burden. IPv4 is going to be around for 25 […]

The post The IPv6 Problem is IPv4 appeared first on EtherealMind.

by Greg Ferro at February 07, 2019 05:00 PM

ipSpace.net Blog (Ivan Pepelnjak)

SD-WAN Security Under the Hood

A while ago we published a guest blog post by Christoph Jaggi explaining the high-level security challenges of most SD-WAN solutions… but what about the low-level details?

Sergey Gordeychik dived deep into implementation details of SD-WAN security in his 35C3 talk (slides, video).

TL&DW: some of the SD-WAN boxes are as secure as a $19.99 Chinese webcam you bought on eBay.

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at February 07, 2019 07:08 AM

The Networking Nerd

Certifications Are About Support

You may have seen this week that VMware has announced they are removing the mandatory recertification requirement for their certification program. This is a huge step from VMware. The VCP, VCAP, and VCDX are huge certifications in the virtualization and server industry. VMware has always wanted their partners and support personnel to be up-to-date on the latest and greatest software. But, as I will explain, the move to remove the mandatory recertification requirement says more about the fact that certifications are less about selling and more about supporting.

The Paper Escalator

Recertification is a big money maker for companies. Sure, you’re spending a lot of money on things like tests and books. But those aren’t usually tied to the company offering the certification. Instead, the testing fees are given to the testing center, like Pearson, and the book fees go to the publisher.

The real money maker for companies is the first-party training. If the company developing the certification is also offering the training courses, you can bet they’re raking in the cash. VMware has done this for years with the classroom requirement for the VCP. Cisco has also started doing it with their first-party CCIE training. Cisco’s example also shows how quality first-party content can drive third parties out of the industry simply by never suggesting to prospective candidates that third-party materials are another option for classroom content.

I’ve railed against the VCP classroom requirement before. I think forcing your candidates to take an in-person class as a requirement for certification is silly and feels like it’s designed to make money and not make good engineers. Thankfully, VMware seems to agree with me in the latest release of info. They’re allowing the upgrade path to be used for their recertification process, which doesn’t necessarily require attendance in a classroom offering. I’d argue that it’s important to do so, especially if you’re really out of date with the training. But not needing it for certification is a really nice touch.

Keeping the Lights On

The other big shift with this certification change from VMware is the tacit acknowledgement that people aren’t in any kind of rush to upgrade their software right after the newest version is released. Ask any system administrator out there and they’ll tell you to wait for a service pack before you upgrade anything. System admins for VMware are as cautious as anyone, if not more so. Too often, new software updates break existing functionality or cause issues that can’t be fixed without a huge time investment.

How is this affected by certification? Well, if I spent all my time learning VMware 5.x and I got my VCP on it because my company was using it you can better believe that my skill set is based around VCP5. If my company doesn’t decide to upgrade to 6.x or even 7.x for several years, my VCP is still based on 5.x technology. It shouldn’t expire just because I never upgraded to 6.x. The skills that I have are focused on what I do, not what I’m studying. If my company finally does decide to move to 6.x, then I can study for and receive my VCP on that version. Not before.

Companies love to make sure their evangelists and resellers are all on the latest version of their certifications because they see certifications as a sales tool. People certified in a technology will pick that solution over any others because they are familiar with it. Likewise, the sales process benefits from knowledgeable salespeople that understand the details behind your solution. It’s a win-win for both sides.

What this picture really ignores is the fact that a larger number of non-reseller professionals are actually using the certification as a study guide to support their organization. Perhaps they get certified as a way to get better support terms or a quicker response to support calls. Maybe they just learned so much about the product along the way that they want to show off what they’ve been doing. No matter what the reason, it’s very true that these folks are not in a sales role. They’re the support team keeping the lights on.

Support doesn’t care about upgrading at the drop of a hat. Instead, they are focused on keeping the existing stuff running as long as possible. Keeping users happy. Keeping executives happy. Keeping people from asking questions about availability or services. That’s not something that looks good on a bill of materials. But it’s what we all expect. Likewise, support isn’t focused on new things if the old things keep running. Certification, for them, is more about proving you know something instead of proving you can sell something.


Tom’s Take

I’ve had so many certifications that I don’t even remember them all. I got some of them because we needed it to sell a solution to a customer. I got others to prove I knew some esoteric command in a forgotten platform. But, no matter what else came up, I was certified on that platform. Windows 2000, NetWare 6.x, you name it. I was certified on that collection of software. I never rushed to get my certification upgraded because I knew what the reality of things really was. I got certified to keep the lights on for my customers. I got certified to help the people that believed in my skills. That’s the real value of a certification to me. Not sales. Just keeping things running another month.

by networkingnerd at February 07, 2019 01:57 AM

February 06, 2019

Honest Networker
My Etherealmind

Network Dictionary: logging

The concept of logging dates to sailing ships and they got it wrong too.

The post Network Dictionary: logging appeared first on EtherealMind.

by Greg Ferro at February 06, 2019 05:00 PM

XKCD Comics

February 05, 2019

My Etherealmind

Blessay: SDWAN and Lockin

I consider these forms of possible lock-in for SD-WAN

The post Blessay: SDWAN and Lockin appeared first on EtherealMind.

by Greg Ferro at February 05, 2019 05:00 PM

ipSpace.net Blog (Ivan Pepelnjak)

Automating Brownfield Device Configuration

Numerous network automation deployments happen in brownfield installations: you’re trying to automate parts of existing network deployment and operations processes. If you’re lucky you start automating deployment of new devices… but what if you have to automate parts of existing device configurations?
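A common first step is to diff what is actually on the device against what you want before changing anything. A minimal sketch with Python’s standard difflib (the two configuration snippets are invented):

    import difflib

    running = "interface Ethernet1\n switchport access vlan 100\n"
    desired = "interface Ethernet1\n switchport access vlan 110\n"

    # Show what would change before touching the device.
    for line in difflib.unified_diff(
        running.splitlines(), desired.splitlines(),
        fromfile="running", tofile="desired", lineterm="",
    ):
        print(line)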

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at February 05, 2019 07:58 AM

February 04, 2019

Ethan Banks on Technology

Leveraging Desktop Real Estate To Decrease Distractions

I use a dual-monitor setup. In my setup, the main screen sits centered directly in front of me. The secondary screen, which is slightly smaller, is off to one side. The real estate provided by the two screens gives me plenty of pixels across which to splash my applications–ample “screenery.”

I use my screenery productively when recording podcasts. I display a script, conferencing app, and recording tool without having to switch between them. Research productivity is also enhanced. I display a note-taking app front and center, with research subject matter like a video presentation, Kindle book, or PDF off to the side.

No Pixel Left Behind

Acres of screenery have benefits, but lots of screen space is also a potential distraction. I fight the desire to fill every pixel with an application. If I don’t use all the pixels, I must be wasting desktop space, right? I don’t want to waste my not inconsiderable investment in fancy monitors. Hmm. Sounds like an example of the sunk cost fallacy.

Desktop operating system developers have catered to my craving, adding sticky edges to windows that ensure not a single pixel is wasted. I can make my window edges stick to each other and the edge of the screen itself! That’s perfect!

Only, screens filled edge-to-edge are anything but perfect when there’s a lot of screenery to fill. A small laptop screen, where every pixel counts? Sure. There’s no space to waste. But in a dual-monitor configuration with bountiful space? I’m learning to resist screen-filling temptation.

The Lie That All Apps Must Be Tended In Real-Time

Of necessity, I am a Slack user. I interact with humans and Slackbot frequently as a standard part of my day. Although I carefully limit notifications, I leave a few notifications on to react in a timely way to people that might need me to weigh in. Lots of screenery means that I can keep Slack open all the time for near instantaneous reaction times to those important messages.

Which…is good, right?

Maybe not. Perhaps you recognize this common knowledge worker problem. Even with only carefully curated notifications making it through, chat apps are a distraction from the task at hand. Oh…a notification in my company’s #blahblah channel. I should check that!

Oops–flow state broken.

An App Windowing Strategy To Improve Focus

In a dual-monitor setup, carefully planned application placement can help avoid broken flow states, improving productivity. In my central monitor, I place the app (or perhaps apps) that are required to complete the task at hand. I do not cluster chat apps, email, or music players around the app I need for productivity, as these are distractions.

Instead, I put distracting apps in the side monitor. And then I minimize them.

1. Place productive work apps on your center screen. Ideally this is a single application–the tool you are using to create. A text editor. An IDE. The GUI you input information into. If it’s time to work on email, your email client. A web browser with minimal tabs open.

But there’s so much wasted screenery! I could pop my music player up in the corner and…

NO. Bad you (and me, to be honest). Resist the temptation to fill pixels with information that does not directly support the task at hand.

2. Place additional applications that directly support your creative work and need a lot of real estate into the secondary screen. For example, the output test screen for an application you’re developing. A report you must reference to support the piece you’re writing. A monitoring tool that helps you visualize changes you’re making.

What if there are no other applications that support your task? Then display nothing at all on your secondary monitor.

Nothing at all? You mean, a screen with nothing but wallpaper?

Exactly. And if pretty wallpaper distracts you, not even that.

Surely, my calendar can go over there…I’d hate to miss an appointment!

NO, bad you. Do you get notified before your next appointment through your operating system’s notification tool? Of course you do. You don’t need to check your calendar minute-by-minute. You’ll get an alert when you have somewhere to be.

Yeah, but my to-do list would look so good over there, and I could impress my boss when she walks by. And to-do lists are all about productivity, so it’s got to be okay.

NO, bad you. Task lists are to be used when you need to add a task, cross a task off, or get your next set of marching orders. They aren’t there to stroke your ego and make you feel good about how organized you are. Task lists are tools to help you maintain focus–not distract from your current task while you ponder the next ten tasks.

3. Minimize applications that are not directly related to the task at hand. For me, this is difficult. When I say, “NO, bad you,” in my points above, I’m foremost scolding myself. When applications are minimized, I experience the same fear of missing out (FOMO) that I feel when recovering from the addiction of checking social media.

What if there’s something crucial in my calendar? What if someone sends me a Slack message and I don’t respond to it in two minutes or less? What if I hear a really cool track in Spotify, and I can’t glance over and know exactly what it is? What if…

Stop. None of those things are crucial in the moment. All of those things can be addressed eventually, when you’re done with the task at hand. Give yourself permission to minimize non-essential apps. To waste pixels. To let some screenery run free.

4. Minimize apps in the appropriate monitor. For example, keep chat apps, music players, and your calendar in the secondary monitor, but minimized. This drives home the point that non-productivity apps are not to be front and center when you’re looking at them.

Oh, I’m looking at my secondary monitor. I’m allowing myself to be distracted by this app I opened from the task bar. I’m using an app that I normally use in between tasks. I must be on a productivity break.

There is nothing wrong with taking a break. You need to be reasonably productive, not constantly so. But your goal is to get done with the non-essential apps, put them away by minimizing them to the task bar, and then get on with your next productive task. Minimize interruption to flow state so that when you are working meaningfully, the result is your undistracted best.

I don’t think you comprehend this multi-tasking world we live in. We’ve all got to multi-task. That’s how things get done, and I can do it! I’m great at multi-tasking.

I agree that multi-tasking seems inevitable. I don’t think we’ll ever get away from it entirely. Even so, perhaps the key is in not wishing for a distraction. In allowing yourself to be fully immersed in the topic that you’re working on.

The information universe tempts you with mildly pleasant but ultimately numbing diversions. The only way to stay fully alive is to dive down to your obsessions six fathoms deep.

David Brooks in The Art Of Focus, NYTimes.com

Making screenery your friend can help you become totally immersed.

by Ethan Banks at February 04, 2019 08:17 PM

My Etherealmind

Don’t Focus on Lock-in, Focus On The Undo

Every and any decision is a form of lock-in. The decision to buy on-brand or off-brand, open or closed source, vendor A or vendor B has consequences that follow.

The post Don’t Focus on Lock-in, Focus On The Undo appeared first on EtherealMind.

by Greg Ferro at February 04, 2019 05:00 PM

Possible Analyst Scam: Your Experience for a Sample Report

Getting concerned about scammy and scummy approaches by market and financial research companies for advisory services.

The post Possible Analyst Scam: Your Experience for a Sample Report appeared first on EtherealMind.

by Greg Ferro at February 04, 2019 01:02 PM

ipSpace.net Blog (Ivan Pepelnjak)

Tech Field Day Extra @ CLEUR19 Recap

I spent most of last week with a great team of fellow networking and security engineers in a windowless room listening to good, bad and plain boring presentations from (mostly) Cisco presenters describing new technologies and solutions – the yearly Tech Field Day Extra @ Cisco Live Europe event.

This year’s hit rate (the percentage of good presentations) was about 50% and these are the ones I found worth watching (in chronological order):

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at February 04, 2019 07:51 AM

The Data Center Overlords

A Discussion On Storage Overhead

Let’s talk about transmission overhead.

For various types of communications protocols, ranging from Ethernet to Fibre Channel to SATA to PCIe, there are typically additional bits transmitted to help with error correction, error detection, and/or clock sync. These additional bits eat up some of the bandwidth and are generally referred to as just “the overhead”.

1 Gigabit Ethernet and 8 Gigabit Fibre Channel, as well as SATA I, II, and III, all use 8b/10b encoding, which means that for every eight bits of data, an additional two bits are sent.

The difference is who pays for those extra bits. With Ethernet, Ethernet pays. With Fibre Channel and SATA, the user pays.

1 Gigabit Ethernet has a raw transmit rate of 1 gigabit per second. However, the actual transmission rate (baud, the rate at which raw 1s and 0s are transmitted) for Gigabit Ethernet is 1.25 gigabaud. This is to make up for the 8/10 overhead.

SATA and Fibre Channel, however, do not up the baud rate to accommodate the 8/10 overhead. As such, even though 1,000 megabits per second / 8 bits per byte = 125 MB/s, Gigabit Fibre Channel only provides 100 MB/s; 25 MB/s is eaten up by the extra 2 bits in the encoding. The same is true for SATA. SATA III is capable of transmitting at 6 Gigabits per second, which is 750 MB/s. However, 150 MB/s of that is eaten up by the extra 2 bits, so SATA III can transmit 600 MB/s instead.
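The arithmetic is easy to verify (decimal units, matching the figures above):

    def usable_mbytes_per_s(line_rate_gbaud, data_bits=8, coded_bits=10):
        """Usable MB/s once the encoding overhead is stripped (decimal units)."""
        usable_gbit = line_rate_gbaud * data_bits / coded_bits
        return usable_gbit * 1000 / 8    # Gbit/s -> MB/s

    print(usable_mbytes_per_s(1.25))   # 1GbE at 1.25 GBd -> 125.0 MB/s for the user
    print(usable_mbytes_per_s(6.0))    # SATA III at 6 GBd -> 600.0 MB/s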

PAM 4

There’s a new type of raw data transmission hitting the networking world called PAM 4. Right now it’s used in 400 Gigabit Ethernet. 400 Gigabit Ethernet is 4 channels of 50 Gigabit links. You’ll probably notice the math on that doesn’t check out: 4 x 50 = 200, not 400. That’s where PAM 4 comes in: the signaling rate is still 50 gigabaud, but instead of the signal switching between two possible values (0, 1), it switches between 4 possible values (0, 1, 2, 3). Thus, each clock cycle can represent 2 bits of data instead of 1 bit of data, doubling the transmission rate.
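In other words, each symbol now carries log2(4) = 2 bits, so the same lane rate moves twice the data:

    import math

    baud = 50e9                      # symbols per second on one lane
    bits_per_symbol = math.log2(4)   # PAM 4: four signal levels -> 2 bits
    lane_gbps = baud * bits_per_symbol / 1e9
    print(lane_gbps)                 # 100.0 Gbit/s per lane, so 4 lanes -> 400GbE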

Higher Level Protocol Overhead

For networked storage on Ethernet, there’s also additional overhead for IP, TCP/UDP, and possibly other protocols (VXLAN, for example). In my next article, I’ll talk about why they don’t really matter that much.

by tonybourke at February 04, 2019 05:27 AM

XKCD Comics

February 03, 2019

ipSpace.net Blog (Ivan Pepelnjak)

When You Have to Deal with **** at Work

Not that it helps that much, but keep in mind: you're not the only one. Here's a wonderful blog post by Al Rasheed.

Oh, and if you still feel like a fraud after being in the industry for years (check out the Impostor Syndrome), you're not alone either.

by Ivan Pepelnjak (noreply@blogger.com) at February 03, 2019 03:32 PM

February 02, 2019

Honest Networker

Your two new colleagues coming home from Cisco Live, seeming to have enjoyed the Kool-Aid a bit too much.

[video: cleur]

by ohseuch4aeji4xar at February 02, 2019 09:07 AM

February 01, 2019

ipSpace.net Blog (Ivan Pepelnjak)

Worth Reading: Should I Write a Book?

Erik Dietrich (of Expert Beginner fame) published another great blog post explaining when and why you should write a book. For the attention-challenged, here’s my CliffsNotes version:

  • Realize you have no idea what you’re doing (see also: Dunning-Kruger effect);
  • Figure out why you’d want to spend a significant amount of your time on a major project like book writing;
  • It will take longer (and will be more expensive) than you expect, even when considering Hofstadter’s law.

by Ivan Pepelnjak (noreply@blogger.com) at February 01, 2019 08:00 AM

Honest Networker
XKCD Comics

January 31, 2019

IPEngineer.net

Event-Driven Automation: The TL;DR No One Told You About

Event-Driven automation is an umbrella term much like "coffee" (also see here; it turns out I’ve used coffee anecdotes way too much). How many times do you go to a popular chain and just blurt out "coffee"? At 5am it might be the nonsensical, mysterious noise automagically leaving one’s mouth, but once we decide it’s bean time, we get to the specifics.

There are multiple tools that give you different capabilities. Some are easier to get started with than others, and some are feature-rich and return high yields of capability against invested time.

Friendly dictator advice: try not to get wrapped up in the message bus used or the data encapsulation methodologies. Nerdy fun, but fairly pointless when it comes to convincing any person or organisation to make a fundamental shift.

Event-Driven is about receiving a WebHook and annoying people on Slack

This is a terrible measure and one we needed to have dropped yesterday. In more programming languages than I can remember, I’ve written the infamous "Hello World" and played with such variables, struct instances and objects as the infamous "foo" and the much revered "bar". Using an automation platform to receive an HTTP POST and update a support ticket or ruin someone’s day on Slack is a great 101 example for newcomers and, at the same time, a terrible measure of a system.

What about pulling data from InfluxDB on a change, gRPC streaming telemetry, or Git repo changes? Actions and integrations range across low- and high-level APIs, from Kubernetes to honking a Tesla horn, so keep your mind open to your target challenges.

What if a Human Wants to Start a Process

Event-Driven has no bearing on this. Your process is mechanised into some domain-specific language asset which a given system executes. Event-Driven implies that this logic or process can be triggered automatically. Any system entirely closed is a bad system and you should avoid it. Every one of these systems has the event-driven part as optional and usually made operational by a rule base. No rules, no event-driven. A human can input the data by hand (if introducing errors is your thing) and trigger the mechanised process.
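A sketch of that separation (all names invented): the mechanised process is a single function, a rule base decides when events may call it, and a human can invoke the very same function by hand:

    import sys

    def provision_vlan(vlan_id, name):
        """The mechanised process itself; it does not care what triggered it."""
        print(f"provisioning VLAN {vlan_id} ({name})")

    # The rule base: no rules, no event-driven behaviour.
    RULES = {"vlan_request": provision_vlan}

    def on_event(event):
        """Event-driven entry point: look up a rule, then run the process."""
        action = RULES.get(event["type"])
        if action:
            action(event["vlan_id"], event["name"])

    if __name__ == "__main__":
        # Human-driven entry point: same process, hand-supplied inputs.
        provision_vlan(int(sys.argv[1]), sys.argv[2])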

Event-Driven Should Have Batteries Included; If It Doesn’t, It Sucks

TL;DR – every business, every person and every version of the same software has variance. Product completeness will never happen. Get over it.

If two people are tasked with mechanising the same source process, I’m willing to bet money that you get two marginally different mechanised processes that display the same black-box behaviour. Looking at the sheer number of processes any given organisation can have, differences will happen. We’re humans, so even the understanding of the human-written source process can be different. Compare this batch of stuff against the combinations of processes across every applicable industry and we have an almost perfect storm of variance. We haven’t even touched on integrations yet. What about integrations with Terraform v0.11 versus 0.12, or VMware 5.0 vs 5.2? These are challenges most automation-focussed engineers deal with every day and just have to get on with. Anyone new to this will complain about these points in a seemingly valid way, but it’s mostly ignorance at play here.

  1. You will have to create your own workflows. The same with CI-CD pipelines.
  2. Integrations might have a baseline and that baseline for your variance may require shifting. Things like HTTP, gRPC and NETCONF make all the difference here. Get used to semantic versioning and learning how to release versions for your environment.
  3. These challenges are what will keep your people employed and you will also save money from your software vendors because you need to take less product from them.

Every platform and tool requires that you know how to create flow-charts and create a Rosetta stone translation mechanism from your business lingo to the language of choice for your platform or tool.

We Can’t Use Open Source

Get yourself acquainted with software houses that can provide support contracts. They exist and will remove your roadblock. If your issue is "not built here" then good luck.

I Don’t Know X…But

Then you’re not qualified to ask the question you were about to. Go away, learn about your challenges and the implemented ways of solving your challenges on readily available platforms. You should also do this if you’re at a conference. Running snippets of code isn’t just constrained to containers or serverless and you may be surprised how some of your favourite tools work if you just look a little closer.

EDA

Sometimes known as EDI (…infrastructure), it boils down to three things: input, do-logic, integrations. From this triad, we can create both open and closed-loop automations, event-driven and traditional automation amplifiers for engineer-driven automation like multi-device targeting.

Close

I’m in a weird mental phase currently. The simplest concepts in automation are decades old, and yet in our industry we’re just beginning to stoke the fire. The worry of repeating failures and accidents thanks to not passing on lessons learnt is real, and we can proactively avoid them for the greater good. Sometimes it’s good to go back over the basics so we have strong foundations. If you have a free few hours in March (2019), then also think about joining my session on ipspace.net all around getting the basics right for your automation challenge.

It might be worth checking here too for pre-recorded sessions on event-driven automation, which include lengthy discussions on signals versus events and the fundamentals of data flows.

The post Event-Driven Automation: The TL;DR No One Told You About appeared first on ipengineer.net.

by David Gee at January 31, 2019 09:49 PM

Honest Networker

Recent BGPmon.net announcement

 

[video: crushing barbie with hydraulic press]

by ohseuch4aeji4xar at January 31, 2019 12:40 PM

My Etherealmind
ipSpace.net Blog (Ivan Pepelnjak)

SRv6: One Tool to Rule Them All

I got some interesting feedback from one of my readers on Segment Routing with IPv6 extension headers:

Some people position SRv6 as the universal underlay and overlay due to its capabilities for network programming by means of feature+locator SRH separation.

Stupid me replied “SRv6 is NOT an overlay solution but a source routing solution.”
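The source-routing part is literal: the packet itself carries the list of segments to visit in an IPv6 Segment Routing Header. A rough sketch using Scapy’s IPv6ExtHdrSegmentRouting class (available in recent Scapy releases; the addresses are documentation prefixes, and the segment encoding shown is simplified):

    from scapy.all import IPv6, IPv6ExtHdrSegmentRouting, UDP

    pkt = (
        IPv6(dst="2001:db8::1")    # destination is the current (first) segment
        / IPv6ExtHdrSegmentRouting(
            addresses=["2001:db8::3", "2001:db8::2"],   # remaining segment list
            segleft=1,                                  # segments still to visit
        )
        / UDP(dport=1234)
    )
    pkt.show()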

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at January 31, 2019 07:46 AM

January 30, 2019

ShortestPathFirst

Interview with Juniper Networks Ambassador Dan Hearty

First of all, let me apologize for the long delay in getting this next Juniper Ambassador interview published. I had a death in the family in December and it consumed a good portion of my time these last two months, so I’ve had little time for social media activities. Alas, I’m very excited about this …

by Stefan Fouant at January 30, 2019 04:58 PM

My Etherealmind
Honest Networker
The Networking Nerd

Risking It All

When’s the last time you thought about risk? It’s something we have to deal with every day but hardly ever try to quantify unless we work in finance or a high-stakes job. When it comes to IT work, we take risks all the time. Some are little, like deleting files or emails thinking we won’t need them again. Or maybe they’re bigger risks, like deploying software to production or making a change that could take a site down. But risk is a part of our lives. Even when we can’t see it.

Mitigation Revelations

Mitigating risk is the most common thing we have to do when we analyze situations where risk is involved. Think about all the times you’ve had to create a backout plan for a change that you’re checking in. Even having a maintenance window is a form of risk mitigation. I was once involved in a cutover for a metro fiber deployment that had to happen between midnight and 2 am. When I asked why, the tech said, “Well, we don’t usually have any problems, but sometimes there’s a hiccup that takes the whole network down until we fix it. This way, there isn’t as much traffic on it.”

Risk is easy to manage when you compartmentalize it. That’s why we’re always trying to push risk aside or contain the potential damage from risk. In some cases, like a low-impact office that doesn’t rely on IT, risk is minimal at best. Who cares if we deploy a new wireless AP or delete some files? The impact is laughable if one computer goes offline. For other organizations, like financial trading or healthcare, the risks of downtime are far greater. Things that could induce downtime, such as patches or changes, must be submitted, analyzed, approved, and implemented in such a way as to ensure there is no downtime and no chance for failure.

Risk behaves this way no matter what we do. Sometimes our risks are hidden because we don’t know everything. Think about bugs in release code, for example. If we upgrade to a new code train to fix an existing bug or implement a new feature we are assuming the code passed some QA checks at some point. However, we’re still accepting a risk that the new code will contain a bug that is worse or force us to deal with new issues down the road. New code upgrades have even more stringent risk mitigation, such as bake-in periods or redundancy requirements before being brought online. Those protections are there to protect us.

Invisible Investments In Problems

But what about risks that we don’t know about? What if those risks were quietly minimized, in a shady way, before we ever had a chance to look for them?

For example, when I had LASIK surgery many years ago, I was handed a pamphlet that was legally required to be handed to me. I read through the procedure, which included the risks of possible side effects and complications. Some of them were downright scary, even with a low percentage chance of occurring. I was told I had to know the risks before proceeding with the surgery. That way, I knew what I was getting into in case one of those complications happened.

Now, legal reasons aside, why would the doctor want to inform me of the risks? It makes it more likely that I’m not going to go through with the procedure if there are significant risks. So why say anything at all unless you’re forced to? Many of you might say that the doctor should say something out of the goodness or morality of their own heart, but the fact a law exists that requires disclosure should tell you about the goodness in people’s hearts.

Medical providers are required to reveal risk. So are financial planners and companies that provide forward looking statements. But when was the last time that a software company disclosed potential risk in their platform? After all, their equipment or programs could have significant issues if they go offline or are made to break somehow. What if there is a bug that allows someone to hack your system or crash your network? Who assumes the risk?

If your provider doesn’t tell you about the risks or tries to hide them in the sales or installation process, they’re essentially accepting the unknown risk on your behalf. If they know there is a bug in the code that could cause a hospital core network to melt down, or maybe reboot a server every 180 days like we had to do with an unpatched CallManager 6.0 server, then they’ve accepted that risk silently and passed it along to you. And if you think you’re going to be able to sue them or get compensation back from them, you really need to read those EULAs that you agree to when you install things.

Risky Responsibility

The question now becomes about the ultimate responsibility. These are the “unknown unknowns”. We can’t ask about things we don’t know about. How could we? So it’s up to people with knowledge of the risk to disclose it. In turn, that opens them up to some kinds of risk too. If my method of mitigating the risk in your code is to choose not to purchase your product, then you have to know that it was less risky for me to choose that route. Sure, it’s more risky for you to disclose it, but the alternative could lead to a big exposure.

Risk is a balance. In order to have the best balance, we need as much information as possible so we can put plans in place to mitigate risk to our satisfaction. Some risks can’t be entirely mitigated. But failing to disclose risks just to keep a sale alive or to get a technology implemented is a huge issue. And if you find out that it happened to you, then you absolutely need to push back on it. Because letting someone else accept the risk on your behalf in secret will only lead to problems down the road.


Tom’s Take

Every change I checked into a network during production hours was a risk. Some of them were minor. Others were major. And still others were the kind that burned me. I accepted those risks for myself but I always made sure to let the customer I was working for know about them. There was no use in hiding information about a change that could take down the network or delete data. Sure, it may mean holding off on the change or making it so that we needed to find an alternative method. But the alternative was hurt feelings at best and legal troubles at worst. Risk should never be a surprise.

by networkingnerd at January 30, 2019 01:00 AM

XKCD Comics

January 29, 2019

Potaroo blog

Addressing 2018

Time for another annual roundup from the world of IP addresses. Let's see what has changed in the past 12 months in addressing the Internet and look at how IP address allocation information can inform us of the changing nature of the network itself.

January 29, 2019 07:00 PM

My Etherealmind

Day Zero, One, Two

My personal approach to technology design is to visualise three broad stages. Day Zero, Day One and Day Two. 

The post Day Zero, One, Two appeared first on EtherealMind.

by Greg Ferro at January 29, 2019 03:32 PM

ipSpace.net Blog (Ivan Pepelnjak)

Not So Fast Ansible, Cisco IOS Can’t Keep Up…

Remember how earlier releases of Nexus-OS started dropping configuration commands if you were typing them too quickly (and how it was declared a feature ;)?

Mark Fergusson had a similar experience on Cisco IOS. All he wanted to do was to use Ansible to configure a VRF, an interface in the VRF, and an OSPF routing process on a Cisco CSR 1000v running software release 15.5(3).

Here’s what he was trying to deploy. Looks like a configuration straight out of an MPLS book, right?
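(The exact configuration is in the post itself. As a stand-in for flavour, here’s the same kind of change sketched with Netmiko, paced so a slow device is less likely to drop lines; host, credentials, and addressing are placeholders, and this is not Mark’s actual playbook.)

    from netmiko import ConnectHandler

    COMMANDS = [   # an illustrative VRF + interface + OSPF trio
        "vrf definition CUSTOMER",
        " address-family ipv4",
        " exit-address-family",
        "interface GigabitEthernet2",
        " vrf forwarding CUSTOMER",
        " ip address 192.0.2.1 255.255.255.0",
        "router ospf 10 vrf CUSTOMER",
        " network 192.0.2.0 0.0.0.255 area 0",
    ]

    conn = ConnectHandler(
        device_type="cisco_ios", host="192.0.2.100",
        username="admin", password="secret",    # placeholders
    )
    # delay_factor slows Netmiko's pacing (older releases; newer ones use
    # read_timeout): a crude workaround for devices that can't keep up.
    print(conn.send_config_set(COMMANDS, delay_factor=2))
    conn.disconnect()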

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at January 29, 2019 07:44 AM

January 28, 2019

My Etherealmind

Strong Opinions, Loosely Held

I do receive feedback that I express strong opinions. Some people even make that sound like a negative thing. In my networking career, my core value proposition was to develop a strong opinion to suggest, explain and justify spending millions of company dollars. 

The post Strong Opinions, Loosely Held appeared first on EtherealMind.

by Greg Ferro at January 28, 2019 03:00 PM