September 27, 2021

ipSpace.net Blog (Ivan Pepelnjak)

State of IT Security in 2021

Patrik Schindler sent me his views on code quality and resulting security nightmares after reading the Cisco SD-WAN SQL Injection saga. Enjoy!


I think we have a global problem with code quality. Both from a security perspective, and from a less problematic but still annoying bugs-everywhere perspective. I’m not sure if the issue is largely ignored, or we’ve given up on it (see also: Cloud Complexity Lies or Cisco ACI Complexity).

September 27, 2021 07:21 AM

September 26, 2021

ipSpace.net Blog (Ivan Pepelnjak)

Open-Source DMVPN Alternatives

When I started collecting topics for the September 2021 ipSpace.net Design Clinic one of the subscribers sent me an interesting challenge: are there any open-source alternatives to Cisco’s DMVPN?

I had no idea and posted the question on Twitter, resulting in numerous responses pointing to a half-dozen alternatives. Thanks a million to @MarcelWiget, @FlorianHeigl1, @PacketGeekNet, @DubbelDelta, @Tomm3h, @Joy, @RoganDawes, @Yassers_za, @MeNotYouSharp, @Arko95, @DavidThurm, Brian Faulkner, and several others who chimed in with additional information.

Here’s what I learned:

September 26, 2021 02:48 PM

September 24, 2021

The Networking Nerd

Private 5G Needs Complexity To Thrive


I know we talk about the subject of private 5G a lot in the industry, but more players come out every day looking to add their voices to the growing chorus of support for these solutions. And despite the fact that we tend to see 5G and Wi-Fi technologies as ships passing in the night, this discussion isn’t going to go away any time soon. In part it’s because decision makers aren’t quite savvy enough to distinguish between the bands, thinking all wireless communications are pretty much the same.

I think we’re not going to see much overlap between these two technologies. But the reasons why aren’t quite what you might think.

Walking Workforces

Working from anywhere other than the traditional office is here to stay. Every major Silicon Valley company has looked at the cost-benefit analysis and decided to let workers do their thing from where they live. How can I tell it’s permanent? Because they’re reducing salaries for those who choose to stay away from the Bay Area. That carrot is pretty enticing, and taking it off the table for remote workers means the companies have no incentive left to make people want to come back and work from an office.

Mobile workers don’t care about how they connect. As long as they can get online they are able to get things done. They are the prime use case for 5G and Private 5G deployments. Who cares about the Wi-Fi at a coffee shop if you’ve got fast connectivity built into your mobile phone or tablet? Moreover, I can also see a few of the more heavily regulated companies requiring you to use a 5G uplink to connect to sensitive data through a VPN or other technology. It eliminates some of the issues with wireless protection methods and ensures that no one can easily snoop on what you’re sending.

Mobile workers will start to demand 5G in their devices. It’s a no-brainer for it to be in the phone and the tablet. As laptops go it’s a smart decision at some point, provided enough people have swapped over to using tablets by then. I use my laptop every day when I work but I’m finding myself turning to my iPad more and more. Not for any magical reason but because it’s convenient if I want to work from somewhere other than my desk. I think that when laptops hit a wall from a performance standpoint you’re going to see a lot of manufacturers start to include 5G as a connection option to lure people back to them instead of abandoning them to the big tablet competition.

However, 5G is really only a killer technology for these more complex devices. The cost of a 5G radio isn’t inconsequential to the overall cost of a device. After all, Apple raised the price of their iPad when they included a 5G radio, didn’t they? You could argue that they didn’t when they upgraded the iPhone to a 5G chipset but the cellular technology is much more integral to the iPhone than the iPad. As companies examine how they are going to move forward with their radio technology it only makes sense to put the 5G radios in things that have ample space, appropriate power, and the ability to recover the costs of including the chips. It’s going to be much more powerful but it’s also going to be a bigger portion of the bill of materials for the device. Higher selling prices and higher margins are the order of the day in that market.

Reassuringly Expensive IoT

One of the drivers for private 5G that I’ve heard of recently is the drive to have IoT sensors connected over the protocol. The thinking goes that the number of devices about to be deployed is going to create a significant amount of traffic in a dense area, which is going to require the controls present in 5G to ensure they aren’t creating issues. I would tend to agree but with a huge caveat.

The IoT sensors that people are talking about here aren’t the ones that you might think of in the consumer space. For whatever reason people tend to assume IoT is a thermostat or a small device that does simple work. That’s not the case here. These IoT devices aren’t things that you’re going to be buying one or two at a time. They are sensors connected to a larger system. Think HVAC relays and probes. Think lighting sensors or other environmental tech. You know what comes along with that kind of hardware? Monitoring. Maintenance. Subscription costs.

The IoT that is going to take advantage of private 5G isn’t something you’re going to be deploying yourself. Instead, it’s going to be something that you partner with another organization to deploy. You might “own” the tech in the sense that you control the data but you aren’t going to be the one going out to Best Buy or Tech Data to order a spare. Instead, you’re going to pay someone to deploy it and fix it when it goes wrong. So how does that differ from the IoT thermostat that comes to mind? Price. Those sensors are several hundred dollars each. You’re paying for the technology included in them with that monthly fee to monitor and maintain them. They will talk to the radio station in the building or somewhere nearby and relay that data back to your dashboard. Perhaps it’s on-site or, more likely, in a cloud instance somewhere. All those fees mean that the devices become more complex and can absorb the cost of more complicated radio technology.

What About Wireless?

Remember when wireless was something cool that you had to show off to people that bought a brand new laptop? Or the thrill of seeing your first iPhone connect to attwifi at Starbucks instead of using that data plan you paid so dearly to get? Wireless isn’t cool any more. Yes, it’s faster. Yes, it is the new edge of our world. But it’s not cool. In the same way that Ethernet isn’t cool. Or web browsers aren’t cool. Or the internal combustion engine isn’t cool. Wi-Fi isn’t cool any more because it is necessary. You couldn’t open an office today without having some form of wireless communications. Even if you tried I’m sure that someone would hop over to the nearest big box store and buy a consumer-grade router to get wireless working before the paint was even dry on the walls.

We shouldn’t think about private 5G replacing Wi-Fi because it never will. There will be use cases where 5G makes much more sense, like in high-density deployments or in areas where the contention in the wireless spectrum is just too great to make effective use of it. However, not deploying Wi-Fi in favor of deploying private 5G is a mistake. Wireless is the perfect “set it and forget it” technology. Provide an SSID for people to connect to and then let them go crazy. Public venues are going to rely on Wi-Fi for the rest of time. These places don’t have the kind of staff necessary to make private 5G economical in the long run.

Instead, think of private 5G deployments more like the way that Wi-Fi used to be. It’s an option for devices that need to be managed and controlled by the organization. They need to be provisioned. They need to consume cycles to operate properly. They need to be owned by the company and not the employee. Private 5G is more of a play for infrastructure. Wi-Fi is the default medium given the wide adoption it has today. It may not be the coolest way to connect to the network but it’s the one you can be sure is up and running without the need for the IT department to come down and make it work for you.


Tom’s Take

I’ll admit that the idea of private 5G makes me smile some days. I wish I had some kind of base station here at my house to counteract the horrible reception that I get. However, as long as my Internet connection is stable I have enough wireless coverage in the house to make the devices I have work properly. Private 5G isn’t something that is going to displace the installed base of Wi-Fi devices out there. With the amount of management that 5G requires in devices, you’re not going to see a cheap or simple method of deploying it appear any time soon. The pie-in-the-sky vision of pervasive low-power deployments in cheap devices is not realistic on the near horizon. Instead, think of private 5G as something that you need to use when your other methods won’t work, or when someone you are partnering with to deploy new technology requires it. That way you won’t be caught off-guard when the complexity of the technology comes into play.

by networkingnerd at September 24, 2021 08:03 PM

Packet Pushers

Automation For The People

Next week the Packet Pushers are hosting a free one-hour livestream with Gluware to explore how Gluware aims to democratize automation; that is, get you quick wins around common tasks such as configuration changes and OS updates, while also helping you evolve toward more complex automation capabilities without having to become a master of coding. […]

The post Automation For The People appeared first on Packet Pushers.

by Drew Conry-Murray at September 24, 2021 04:17 PM

ipSpace.net Blog (Ivan Pepelnjak)

Worth Reading: Breaking Down Silos

Here’s another masterpiece by Charity Majors: Why I hate the phrase “breaking down silos”. A teaser in case you can’t decide whether to click the link:

When someone says they are “breaking down silos”, whether in an interview, a panel, or casual conversation, it tells me jack shit about what they actually did.

Enjoy ;)

September 24, 2021 07:12 AM

Another SD-WAN Security SNAFU: SQL Injections in Cisco SD-WAN Admin Interface

Christoph Jaggi sent me a link to an interesting article describing security vulnerabilities pentesters found in Cisco SD-WAN admin/management code.

I’m positive the bugs have been fixed in the meantime, but what riled me most was the root cause: Little Bobby Tables (aka SQL injection) dropped by. Come on, it’s 2021, SD-WAN is supposed to be about building secure replacements for MPLS/VPN networks, and they couldn’t get someone who could write SQL-injection-safe code (the top web application security risk)?
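As a refresher on why Little Bobby Tables keeps dropping by: the fix has been known for decades. Here’s a minimal sketch (hypothetical table and queries, nothing to do with Cisco’s actual code) contrasting an injectable query with its parameterized equivalent:

```python
import sqlite3

# Hypothetical user table -- purely for illustration
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Vulnerable: attacker-controlled input is spliced straight into the SQL text
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized: the driver passes the value separately, never as SQL
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
# find_user_unsafe(payload) returns every row; find_user_safe(payload) returns none
```

The parameterized version costs nothing extra to write, which is exactly why injectable code in a 2021 security product is so hard to excuse.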

September 24, 2021 07:05 AM

XKCD Comics

September 23, 2021

My Etherealmind
ipSpace.net Blog (Ivan Pepelnjak)

Building a Small Data Center Fabric with Four Switches

One of my subscribers has to build a small data center fabric that’s just a tad too big for a two-switch design.

For my datacenter I would need two 48-port 10GBASE-T switches and two 48-port 10/25G fiber switches. So I was watching the Small Fabrics and Lower-Speed Interfaces part of Physical Fabric Design to make up my mind. There you talk about the possibility of doing a leaf-and-spine design with 4 switches and connecting servers to the spine.

A picture is worth a thousand words, so here’s the diagram of what I had in mind:

September 23, 2021 06:37 AM

September 22, 2021

Packet Pushers

Level Up Your Branch With Prisma SD-WAN 5.6

The following post is by Rajesh Kari, Senior Product Marketing Manager for SASE at Palo Alto Networks. We thank Palo Alto Networks for being a sponsor. The pandemic challenged the way companies operate, forcing them to rethink their branch strategy. They quickly migrated their workforce to be remote while delivering uninterrupted connectivity to applications and […]

The post Level Up Your Branch With Prisma SD-WAN 5.6 appeared first on Packet Pushers.

by Sponsored Blog Posts at September 22, 2021 03:32 PM

ipSpace.net Blog (Ivan Pepelnjak)

IS-IS Flooding Details

Last week I published an unrolled version of Peter Paluch’s explanation of flooding differences between OSPF and IS-IS. Here’s the second part of the saga: IS-IS flooding details (yet again, reposted in a more traditional format with Peter’s permission).


In IS-IS, the DIS (Designated Intermediate System) is best described as a “baseline benchmark” – a reference point that other routers compare themselves to, but it does not sit in the middle of the flow of updates (Link State PDUs, LSPs).

A quick and simplified refresher on packet types in IS-IS: An LSP carries topological information about its originating router – its System ID, its links to other routers, and its attached prefixes. It is similar to an OSPF LSU containing one or more LSAs of different types.
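To make the flooding mechanics concrete, here’s a toy Python model (emphatically not real IS-IS – no DIS election, CSNPs, or LSP aging) showing how sequence-numbered LSPs propagate through adjacencies and how duplicate LSPs stop the flood:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LSP:
    system_id: str    # originating router
    seq: int          # sequence number; higher replaces lower
    neighbors: tuple  # links to other routers
    prefixes: tuple   # attached prefixes

class Router:
    def __init__(self, system_id):
        self.system_id = system_id
        self.lsdb = {}   # system_id -> newest LSP seen
        self.links = []  # adjacent Router objects

    def receive(self, lsp):
        known = self.lsdb.get(lsp.system_id)
        if known is None or lsp.seq > known.seq:
            self.lsdb[lsp.system_id] = lsp
            for neighbor in self.links:  # re-flood newer LSPs on all adjacencies
                neighbor.receive(lsp)
        # older or duplicate LSPs are dropped, which terminates the flood

# Three routers in a row: R1 -- R2 -- R3
r1, r2, r3 = Router("R1"), Router("R2"), Router("R3")
r1.links, r2.links, r3.links = [r2], [r1, r3], [r2]
r1.receive(LSP("R1", seq=1, neighbors=("R2",), prefixes=("10.0.0.0/24",)))
# R3 now has R1's LSP in its link-state database
```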

September 22, 2021 07:24 AM

XKCD Comics

September 21, 2021

Packet Pushers

Quantum Key Distribution Explained With Dr. Joshua Slater

Josh Slater explains to Heavy Networking co-hosts Ethan Banks & Greg Ferro how distributing cryptographic keys with quantum networking works. Qubits and photons and spins, oh my! Listen to the entire episode on Heavy Networking, the podcast for network engineers on the Packet Pushers podcast network. You can subscribe to the Packet Pushers’ YouTube channel […]

The post Quantum Key Distribution Explained With Dr. Joshua Slater appeared first on Packet Pushers.

by The Video Delivery at September 21, 2021 08:34 PM

September 20, 2021

ipSpace.net Blog (Ivan Pepelnjak)

netsim-tools: Network Topology Graphs

Someone using my netsim-tools sent me an intriguing question: “Would it be possible to get network topology graphs out of the tool?”

Please note that we’re talking about creating graphs out of a network topology described as a YAML data structure, not a generic GUI or a “draw my network” tool. If you’re a GUI person, this is not what you’re looking for.

I did something similar a long while ago for a simple network automation project (and numerous networking engineers built really interesting stuff while attending the Building Network Automation Solutions course), so it seemed like a no-brainer. As always, things aren’t as easy as they look.
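The core of the idea really is simple. As an illustration (not netsim-tools’ actual code or data model – the topology structure below is hypothetical), turning a YAML-derived topology dictionary into a Graphviz DOT graph takes only a few lines:

```python
# Topology as it might look after yaml.safe_load() -- a hypothetical
# structure, not the actual netsim-tools schema
topology = {
    "nodes": ["r1", "r2", "r3"],
    "links": [
        {"endpoints": ["r1", "r2"]},
        {"endpoints": ["r2", "r3"]},
        {"endpoints": ["r1", "r3"]},
    ],
}

def to_dot(topo):
    """Render the topology as a Graphviz DOT graph."""
    lines = ["graph lab {"]
    lines += [f'  "{node}";' for node in topo["nodes"]]
    for link in topo["links"]:
        a, b = link["endpoints"]
        lines.append(f'  "{a}" -- "{b}";')
    lines.append("}")
    return "\n".join(lines)

# Feed to_dot(topology) into `dot -Tpng` to get a picture
```

The hard part, as always, is everywhere the real topology data deviates from this happy-path structure.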

September 20, 2021 07:07 AM

XKCD Comics

September 17, 2021

The Networking Nerd

APIs and Department Stores


This week I tweeted something from a discussion we had during Networking Field Day that summed up my feelings about the state of documentation of application programming interfaces (APIs):


I laughed a bit as I wrote it because I’ve worked in department stores like Walmart in the past and I know the reasons why they tend to move things around. Comparing that to the way that APIs are documented is an interesting exercise in how people think about things like new capabilities and notification of changes.

Branding Exercises

In case you weren’t aware, everything in your average department store is carefully planned out. The things placed in the main aisles are decided on weeks in advance due to high traffic. The items placed at the ends of the aisles, or endcaps, are placed there to highlight high margin items or things that are popular enough to be sought out by customers. The makeup of the rest of the store is determined by a lot of metrics.

There are a few restrictions that have to be taken into account. In department stores with grocery departments, the refrigerated sections must be located around the outside because of power requirements. Within those restrictions, the plans put high-traffic items at the back of the store so that everyone has to walk past all the other stuff in hopes they might buy it. That’s why the milk and bread and electronics areas are always the furthest away from the front of the store. You’re likely headed there anyway, so why not make you work for it?

Every few months the store employees receive new floor plans that move items to different locations. Why would they do that? Well, those metrics help them understand where people are more likely to purchase certain items. Those metrics also tell the planners which items should be located together, which is how the whole aisle is planned out. Once everything gets moved they start gathering new metrics to find out how well their planning works. Aside from the inevitable grumbles, even with some fair warning no one is happy when they find out something has moved.

Who Needs Documentation?

You might think that, on the surface, there’s not much similarity between a department store aisle and an API. One is a fixture. The other is code. Yet, think about how APIs are typically changed and you might find some of the parallels. Change is a constant in the world of software development, after all.

The APIs that we used a decade ago are almost assuredly different from the ones we program for today. Every year brings updated methods, new functions, and even changes in programming languages or access methods. How can you be sure that developers are accessing the latest and greatest technology that you’ve put into place? You can’t just ask them. Instead, you have to deprecate the methods that you don’t want them to use any longer.
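The usual mechanism for steering developers away from old methods can be sketched like this (hypothetical function names; Python’s warnings module stands in for whatever a real API platform would use):

```python
import warnings

def fetch_stats_v2(device):
    # The replacement method developers should migrate to
    return {"device": device, "uptime": 0}

def fetch_stats_v1(device):
    """Deprecated: warns, but keeps working so existing callers don't break."""
    warnings.warn(
        "fetch_stats_v1 is deprecated; use fetch_stats_v2",
        DeprecationWarning,
        stacklevel=2,
    )
    return fetch_stats_v2(device)  # delegate to the new implementation
```

Delegating to the new code keeps old callers alive during the transition; the pain starts when the delegation is eventually removed.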

Ask any developer writing for an API about deprecation and you’re probably going to hear a string of profanity. Spending time to write a perfectly good piece of software only to have it wrecked by someone’s decision to do things differently is infuriating to say the least. Trying to solve a hard problem with a novel concept is one thing. Having to do it all over again a month later when a new update is released is something else entirely.

It’s the same fury that you feel when the peanut butter is moved from aisle four to aisle eight. How dare you! It took me a week last time to remember where it was and now you’ve gone and moved it. Just like when I spent all that time learning which methods to query to pull the data I needed for my applications.

No matter how much notice you give or how much you warn people that change is coming they’re always going to be irritated at you for making those changes. It feels like a waste of effort to need to rewrite an interface or to walk a little further in the store to locate the item you wanted. Humans aren’t fond of wasted effort or of needing to learn new things without good reason.

Poor API documentation is only partly to blame for this. Even the most poorly documented API will eventually be mapped out by someone that needs the info. It’s also the fact that the constant change in methods and protocols forces people to spend a significant amount of time learning the same things over and over again for very little gain.

The Light at the End of the Aisle

Ironically enough, both of these kinds of issues are likely to be solved in a similar way. Thanks to the large explosion of people doing their shopping online or with pickup and delivery services there is a huge need to have things more strictly documented and updated very frequently. It’s not enough to move the peanut butter to a better location. Now you need to update your online ordering system so the customers as well as the staff members pulling it for a pickup order can find it quickly and get more orders done in a shorter time.

Likewise, the vast number of programs relying on API calls today necessitates that older versions of functionality be supported for longer, or that newer functions be more rigorously tested before implementation. You don’t want to disable a huge section of your user base because you deprecated something you didn’t like to maintain any longer. Unless you are the only application in the market you will find that creating chaos will just lead to users fleeing for someone who doesn’t upset their apple cart on a regular basis.


Tom’s Take

Documentation is key for us to understand change. We can’t just say we changed something. We have to give warning, ensure that people have seen the warning, tell them we’ve changed it, and then give them some way to transform the old way of things into the new one. And even that might not be enough. However, the pace of change that we’re seeing also means that rapid changes may not even be required for much longer. With people choosing to order online and never step foot inside the store, the need to change the shelves frequently may be a thing of the past. With new methods and languages being developed so rapidly today, it may be much faster to move everyone to a new API and leave the old one intact instead of forcing developers to look at technology that is years old at this point. The delicious irony of the people who force change on us having to accept change themselves is something I’d happily shop for.

by networkingnerd at September 17, 2021 09:05 PM

ipSpace.net Blog (Ivan Pepelnjak)

Interesting Concept: Time Dilation

I loved the Time Dilation blog post by Seth Godin. It explains so much, including why I won’t accept a “quick conf call to touch base and hash out ideas” from someone coming out of the blue – why should I be interested if they can’t invest the time to organize their thoughts and pour them into an email?

The concept of “creation-to-consumption” ratio is also interesting. Now I understand why I hate unedited opinionated chinwagging (many podcasts sadly fall into this category) or videos where someone blabbers into a camera while visibly trying to organize their thoughts.

Just FYI, these are some of the typical ratios I had to deal with in the past:

September 17, 2021 06:59 AM

XKCD Comics

September 16, 2021

Potaroo blog

Regulating Big Tech. This Time, for sure!

There is a growing unease within the US and elsewhere over the extraordinary rise of these technology giants, not just in monetary terms but in terms of their social power as well. There is a growing sentiment that the current situation will never be adequately corrected by competitive pressures within the market itself. Some further forms of regulatory intervention will be needed to force a fundamental realignment of these players. In so doing, regulators finally appear to be catching up with the online world in the US, in Europe, and in China.

September 16, 2021 06:00 PM

ipSpace.net Blog (Ivan Pepelnjak)

LSA/LSP Flooding in OSPF and IS-IS

Peter Paluch loves blogging in microchunks on Twitter ;) This time, he described the differences between OSPF and IS-IS, and gracefully allowed me to repost the explanation in a more traditional format.


My friends, I happen to have a different opinion. It will take a while to explain it and I will have to seemingly go off on a tangent. Please have patience. As a teaser, though: The 2Way state between DRothers does not improve flooding efficiency – in fact, it worsens it.

September 16, 2021 07:17 AM

September 15, 2021

ipSpace.net Blog (Ivan Pepelnjak)

New: ipSpace.net Design Clinic

In early September, I started yet another project that’s been on the back burner for over a year: ipSpace.net Design Clinic (aka Ask Me Anything Reasonable in a more structured format). Instead of collecting questions and answering them in a podcast (example: Deep Questions podcast), I decided to make it more interactive with a live audience and real-time discussions. I also wanted to keep it valuable to anyone interested in watching the recordings, so we won’t discuss obscure failures of broken designs or dirty tricks that should have remained in CCIE lab exams.

September 15, 2021 07:32 AM

XKCD Comics

September 14, 2021

My Etherealmind
ipSpace.net Blog (Ivan Pepelnjak)

Stateful Switchover (SSO) 101

Stateful Switchover (SSO) is another seemingly awesome technology that can help you implement high availability when facing a broken non-redundant network design. Here’s how it’s supposed to work:

  • A network device runs two copies of the control plane (primary and backup);
  • Primary control plane continuously synchronizes its state with the backup control plane;
  • When the primary control plane crashes, the backup control plane already has all the required state and is ready to take over in moments.
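The three steps above can be sketched in a few lines of Python (a toy model with hypothetical class names – real implementations checkpoint state incrementally over dedicated internal links rather than mirroring a dictionary):

```python
class ControlPlane:
    def __init__(self):
        self.state = {}  # e.g. routing adjacencies, protocol session state

class SSOPair:
    def __init__(self):
        self.primary = ControlPlane()
        self.backup = ControlPlane()

    def update(self, key, value):
        # Every state change on the primary is checkpointed to the backup
        self.primary.state[key] = value
        self.backup.state[key] = value

    def switchover(self):
        # On primary failure, the backup already holds the full state
        self.primary, self.backup = self.backup, ControlPlane()
        return self.primary.state

pair = SSOPair()
pair.update("bgp:10.0.0.1", "Established")
surviving_state = pair.switchover()  # the synced state survives the crash
```

The devil, of course, is in everything this sketch hides: which state gets synced, how fast, and what happens when the two copies disagree.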

Delighted? You might be disappointed once you start digging into the details.

September 14, 2021 06:51 AM

September 13, 2021

ipSpace.net Blog (Ivan Pepelnjak)

Configuring NSX-T Firewall with a CI/CD Pipeline

Initial implementation of Noël Boulene’s automated provisioning of NSX-T distributed firewall rules changed NSX-T firewall configuration based on Terraform configuration files. To make the deployment fully automated he went a step further and added a full-blown CI/CD pipeline using GitHub Actions and Terraform Cloud.

Not everyone is as lucky as Noël – developers in his organization already use GitHub and Terraform Cloud, making his choices totally frictionless.

September 13, 2021 06:20 AM

Potaroo blog

TLS with a side of DANE

These are some notes I took from the DNS OARC meeting held in September 2021. This was a short virtual meeting, but for those of us missing a fix of heavy-duty DNS, it was very welcome in any case!

September 13, 2021 05:00 AM

XKCD Comics

September 11, 2021

ipSpace.net Blog (Ivan Pepelnjak)

Worth Reading: Ops Questions in Software Engineering Interviews

Charity Majors published another must-read article: why every software engineering interview should include ops questions. Just a quick teaser:

The only way to unwind this is to reset expectations, and make it clear that:

  • You are still responsible for your code after it’s been deployed to production, and
  • Operational excellence is everyone’s job.

Adhering to these simple principles would remove an enormous amount of complexity from typical enterprise IT infrastructure… but I’m afraid it’s not going to happen anytime soon.

September 11, 2021 06:49 AM

September 10, 2021

The Networking Nerd

Fast Friday – Podcasts Galore!


It’s been a hectic week and I realized that I haven’t had a chance to share some of the latest stuff that I’ve been working on outside of Tech Field Day. I’ve been a guest on a couple of recent podcasts that I loved.

Art of Network Engineering

I was happy to be a guest on Episode 57 of the Art of Network Engineering podcast. AJ Murray invited me to take part with all the amazing co-hosts. We talked about some fun stuff including my CCIE study attempts, my journey through technology, and my role at Tech Field Day and how it came to be that I went from being a network engineer to an event lead.

The interplay between the hosts and me during the discussion was great. I felt like we probably could have gone another hour if we really wanted to. You should definitely take a listen and learn how I kept getting my butt kicked by the CCIE open-ended questions, or what it’s like to be a technical person on a non-technical briefing.

IPv6, Wireless, and the Buzz

I love being able to record episodes of Tomversations on YouTube. One of my latest was all about IPv6 and Wi-Fi 6E. As soon as I hit the button to publish the episode I knew I was going to get a call from my friends over at the IPv6 Buzz podcast. Sure enough, I was able to record an episode talking to them about the parallels I see between the two technologies.

What I love about this podcast is that these are the experts when it comes to IPv6. Ed and Tom and Scott are the people that I would talk to about IPv6 any day of the week. And having them challenge my assertions about what I’m seeing helps me understand the other side of the coin. Maybe the two aren’t as close as I might have thought at first but I promise you that the discussion is well worth your time.


Tom’s Take

I don’t have a regular podcast aside from Tomversations so I’m not as practiced in the art of discussion as the people above. Make sure you check out those episodes but also make sure to subscribe to the whole thing because you’re going to love all the episodes they record.

by networkingnerd at September 10, 2021 09:43 PM

ipSpace.net Blog (Ivan Pepelnjak)

Lessons Learned: Fundamentals Haven't Changed

Here’s another bitter pill to swallow if you desperately want to believe in the magic powers of unicorn dust: laws of physics and networking fundamentals haven’t changed (see also: RFC 1925 Rule 11).

Whenever someone is promising a miracle solution, it’s probably due to them working in marketing or having no clue what they’re talking about (or both)… or it might be another case of adding another layer of abstraction and pretending the problems disappeared because you can’t see them anymore.

You’ll need a Free ipSpace.net Subscription to watch the video.

September 10, 2021 07:10 AM

XKCD Comics

September 09, 2021

Honest Networker
ipSpace.net Blog (Ivan Pepelnjak)

netsim-tools Overview

In December 2020, I got sick-and-tired of handcrafting Vagrantfiles and decided to write a tool that would, given a target networking lab topology in a text file, produce the corresponding Vagrantfile for my favorite environment (libvirt on Ubuntu). Nine months later, that idea turned into a pretty comprehensive tool targeting networking engineers who like to work with CLI and text-based configuration files. If you happen to be of the GUI/mouse persuasion, please stop reading; this tool is not for you.

During those nine months, I slowly addressed most of the challenges I always had creating networking labs. Here’s how I would typically approach testing a novel technology or software feature:
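The core idea – turn a terse topology description into a Vagrantfile – can be sketched in a few lines (hypothetical node and box names; the real netsim-tools output handles interfaces, management networks, device configs, and much more):

```python
def render_vagrantfile(nodes):
    """Render a minimal libvirt Vagrantfile from (name, box) pairs.
    A sketch of the idea only -- netsim-tools generates far richer output."""
    out = ['Vagrant.configure("2") do |config|']
    for name, box in nodes:
        out += [
            f'  config.vm.define "{name}" do |node|',
            f'    node.vm.box = "{box}"',
            '    node.vm.provider :libvirt',
            '  end',
        ]
    out.append('end')
    return "\n".join(out)

# Hypothetical box name, purely for illustration
print(render_vagrantfile([("r1", "generic/ubuntu2004")]))
```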

September 09, 2021 07:16 AM

September 08, 2021

Packet Pushers

Book Review: ‘Project Hail Mary’ by Andy Weir

Project Hail Mary is the latest work of fiction from Andy Weir, best known for his debut novel The Martian. And just like in The Martian, the protagonist’s survival in this new book depends on his ability to solve problems, troubleshoot mishaps, and generally “science the sh*t” out of things. Project Hail Mary is a text-based […]

The post Book Review: ‘Project Hail Mary’ by Andy Weir appeared first on Packet Pushers.

by Drew Conry-Murray at September 08, 2021 07:49 PM

XKCD Comics

September 07, 2021

ipSpace.net Blog (Ivan Pepelnjak)

Non-Stop Forwarding 101

Non-Stop Forwarding (NSF) is one of those ideas that look great in a slide deck and marketing collaterals, but might turn into a giant can of worms once you try to implement them properly (see also: stackable switches or VMware Fault Tolerance).

NSF has been around for at least 15 years, so I’m positive at least some vendors got most of the details right; I’m also pretty sure a few people have scars to prove they’ve been around the non-optimal implementations.

September 07, 2021 06:47 AM

September 06, 2021

ipSpace.net Blog (Ivan Pepelnjak)

Comparing Forwarding Performance of Data Center Switches

One of my subscribers is trying to decide whether to buy an -EX or an -FX version of a Cisco Nexus data center switch:

I was comparing Cisco Nexus 93180YC-FX and Nexus 93180YC-EX. They have the same port distribution (48x 10/25G + 6x 40/100G) and 3.6 Tbps switching capacity, but the -FX version has just a 1200 Mpps forwarding rate while the -EX version goes up to 2600 Mpps. What could be the reason for the difference in forwarding performance?

Both switches are single-ASIC switches. They have the same total switching bandwidth, so the -FX switch must take longer to forward each packet, resulting in a lower packets-per-second figure. It looks like the ASIC in the -FX switch is configured in a more complex way: more functionality results in more complexity, which results in either reduced performance or higher cost.
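Some quick arithmetic shows what those numbers imply. This is a back-of-the-envelope sketch: it assumes the 3.6 Tbps figure is full duplex (1800 Gbps of traffic in each direction) and adds the standard 20 bytes of Ethernet preamble and inter-frame gap per frame:

```python
# Rough line-rate math for the 93180YC-EX/-FX comparison above.
GBPS = 1800e9          # unidirectional bandwidth in bits/s (3.6 Tbps full duplex)
OVERHEAD = 20          # preamble + inter-frame gap, in bytes per frame

def line_rate_mpps(frame_bytes):
    """Packet rate (Mpps) needed to saturate the switch with frames of a given size."""
    return GBPS / ((frame_bytes + OVERHEAD) * 8) / 1e6

def min_frame_size(mpps):
    """Smallest frame size (bytes) a given forwarding rate can push at line rate."""
    return GBPS / (mpps * 1e6 * 8) - OVERHEAD

print(round(line_rate_mpps(64)))    # ~2679 Mpps: the -EX is line-rate even at 64 bytes
print(round(min_frame_size(1200)))  # ~168 bytes: the -FX needs larger packets for line rate
```

In other words, the -EX forwarding rate is what you need for wire-speed 64-byte packets, while the -FX hits line rate only with packets of roughly 170 bytes and up — plenty for most real traffic mixes.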

September 06, 2021 06:34 AM

September 03, 2021

The Networking Nerd

Getting Blasted by Backdoors

<figure class="wp-block-image size-large"><figcaption>Open Door from http://viktoria-lyn.deviantart.com/</figcaption></figure>

I wanted to take a minute to talk about a story I’ve been following that’s had some new developments this week. You may have seen an article talking about a backdoor in Juniper equipment that caused some issues. The issue at hand is complicated, and the linked article does a good job of explaining some of the nuance. Here’s the short version:

  • The NSA develops a version of Dual EC random number generation that includes a pretty substantial flaw.
  • That flaw? If you know the seed value used to start the process, you can predict every value the generator produces, which means you can decrypt any traffic that uses the algorithm.
  • NIST proposes the use of Dual EC and makes supporting it a requirement for vendors to be included in future work. Don’t support this one? You don’t even get to be considered.
  • Vendors adopt the standard per the requirement but don’t make it the default for some pretty obvious reasons.
  • Netscreen, a part of Juniper, does use Dual EC as part of their default setup.
  • The Chinese APT 5 hacking group figures out the vulnerability and breaks into Juniper to add code to Netscreen’s OS.
  • They use their own seed value, which allows them to decrypt packets being encrypted through the firewall.
  • Hilarity does not ensue and we spend the better part of a decade figuring out what has happened.

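To see why a known seed is fatal, here’s a toy sketch. It uses Python’s ordinary Mersenne Twister generator rather than Dual EC, and a simple XOR “cipher”, purely to illustrate the principle: anyone who knows the internal state of a deterministic generator can reproduce every “random” value it ever emits.

```python
import random

def keystream(seed, n):
    """Deterministic byte stream: same seed in, same bytes out -- every time."""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(n))

secret = b"attack at dawn"
seed = 1337                      # the value the attacker managed to learn

# The sender encrypts by XOR-ing the message with the "random" keystream
ciphertext = bytes(a ^ b for a, b in zip(secret, keystream(seed, len(secret))))

# The eavesdropper, knowing the seed, regenerates the identical keystream
recovered = bytes(a ^ b for a, b in zip(ciphertext, keystream(seed, len(ciphertext))))
assert recovered == secret       # full decryption without ever seeing the key exchange
```

Real protocols are far more elaborate than this XOR toy, but the Dual EC weakness boils down to the same thing: whoever controls (or replaces) the generator’s starting constants can reconstruct its output and read the traffic.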
That any of this even came to light is impressive considering the government agencies involved have stonewalled reporters and it took a probe from a US Senator, Ron Wyden, to get as far as we have in the investigation.

Protecting Your Platform

My readers know that I’m a pretty staunch advocate for not weakening encryption. Backdoors and “special” keys for organizations that claim they need them are a horrible idea. The safest lock is one that can’t be bypassed. The best secret is one that no one knows about. Likewise, the best encryption algorithms are ones that can’t be reversed or calculated by anyone other than the people using them to send messages.

I get that the flood of encrypted communications today is making life difficult for law enforcement agencies all over the world. It’s tempting to make it a requirement to allow them a special code that will decrypt messages to keep us safe and secure. That’s the messaging I see every time a politician wants to compel a security company to create a vulnerability in their software just for them. It’s all about being safe.

Once you create that special key you’ve already lost. As we saw above, the intentions of creating a backdoor into an OS so that we could spy on other people using it backfired spectacularly. Once someone else figured out that you could guess the values and decrypt the traffic they set about doing it for themselves. I can only imagine the surprise at the NSA when they realized that someone had changed the values in the OS and that, while they themselves were no longer able to spy with impunity, someone else could be decrypting their communications at that very moment. If you make a key for a lock someone will figure out how to make a copy. It’s that simple.

We focus so much on the responsible use of these backdoors that we miss the bigger picture. Sure, maybe we can make it extremely difficult for someone in law enforcement to get the information needed to access the backdoor in the name of national security. But what about other nations? What about actors not tied to a political process or bound by oversight from the populace? I’m more scared that someone that actively wishes to do me harm could find a way to exploit something that I was told had to be there for my own safety.

The Juniper story gets worse the more we read into it, but they were only the unlucky dancer left standing when the music stopped. Any one of the other companies that were compelled to include Dual EC by government order could have drawn the short straw here. It’s one thing to create a known-bad version of software and hope that someone installs it. It’s an entirely different matter to force people to include it. I’m honestly shocked the government didn’t try to mandate that it be used to the exclusion of other algorithms. In some other timeline Cisco or Palo Alto or even Fortinet are having very bad days unwinding what happened.


Tom’s Take

The easiest way to avoid having your software exploited is not to create your own exploit for it. Bugs happen. Strange things occur in development. Even the most powerful algorithms must eventually yield to Moore’s Law or Shor’s Algorithm. Why accelerate the process by cutting a master key? Why weaken yourself on purpose by repeating over and over again that this is “for the greater good”? Remember that the greater good may not include people that want the best for you. If you’re willing to hand them a key to unlock the chaos that we’re seeing in this case then you have overestimated your value to the process and become the very bad actor you hoped to stop.

by networkingnerd at September 03, 2021 06:35 PM

ipSpace.net Blog (Ivan Pepelnjak)

Video: Introduction to Network Addressing

A friend of mine pointed out this quote by John Shoch when I started preparing the Network Stack Addressing slide deck for my How Networks Really Work webinar:

The name of a resource indicates what we seek, an address indicates where it is, and a route tells us how to get there.

You might wonder when that document was written… it’s from January 1978. They got it absolutely right 42 years ago, and we completely messed it up in the meantime with the crazy ideas of making IP addresses resource identifiers.

September 03, 2021 07:28 AM

September 02, 2021

ipSpace.net Blog (Ivan Pepelnjak)

Automating NSX-T Firewall Configuration

Noël Boulene decided to automate provisioning of NSX-T distributed firewall rules as part of his Building Network Automation Solutions hands-on work.

What makes his solution even more interesting is the choice of automation tool: instead of using the universal automation hammer (aka Ansible) he used Terraform, a much better choice if you want to automate service provisioning, and you happen to be using vendors that invested time into writing Terraform providers.

September 02, 2021 07:14 AM

September 01, 2021

ipSpace.net Blog (Ivan Pepelnjak)

netsim-tools: Python Package and Unified CLI

One of the major challenges of using netsim-tools was the installation process – pull the code from GitHub, install the prerequisites, set up search paths… I knew how to fix it (turn the whole thing into a Python package) but I was always too busy to open that enormous can of worms.

That omission got fixed in summer 2021; netsim-tools is now available on PyPI and installed with pip3 install netsim-tools.

September 01, 2021 06:07 AM

August 31, 2021

Ethan Banks on Technology

The Best Technologists First Try To Solve Their Own Problems

Every once in a while, I get questions from random internet folks who want me to do their homework for them. They want me to provide them with detailed technical information, solve their complex design problem, or curate content on a difficult topic so that they don’t have to do the sifting.

While I like to help folks out as much as anyone (and often do), I usually ignore these sorts of questions. Why? Partly, I don’t have enough time to fix the internet. Partly, I like to get paid for consulting. But more importantly, the best technologists first try to solve their own problems.

A Manager’s Perspective

When interviewing candidates for technical positions, one of my questions is, “If you run into a problem you’ve never faced before, how do you solve it?” There are two typical answers.

  1. “I’ll ask someone else for help. Probably you.”
  2. “I’ll search the internet, company wiki, and product documentation. I’ll set up a lab. If I’m still stuck, I’ll ask for help.”

I prefer to hire a person who first tries to figure things out. While I want neither a cowboy nor science experiments making their way into production, I do want a motivated individual who will research difficult technical challenges and grow as a result. As that person grows stronger, their team grows stronger as well.

It’s About The Team

Remember that while managers manage individuals, they also manage teams. Hiring decisions are based partly on how well a candidate will fit in with the established team. I view unmotivated technologists as a red flag for team dynamics.

You might feel that if you worked for me, you’d never be allowed to ask a question. That’s not the case. There’s no shame in asking for help at the appropriate time. Technology is hard, and the problems one faces change over time–domain-specific knowledge ages out.

Sometimes a situation is urgent, and you won’t have time to figure out for yourself why { the network is down | the server is offline | the CEO can’t login to the VPN }. All technologists need help to solve problems at certain times. Never asking for help can be just as bad as constantly nagging teammates. However, there’s a big difference between immediately leaning on others and being self-sufficient whenever possible.

When you ask another for help without first trying to aid yourself, you have added to that other person’s workload. You’re cutting into the time they have to get their own work done. Instead of contributing to the team, you’re a drag on team performance. When you make no effort to find your own answers, you weaken your team.

But It’s Also About Yourself

You want to be self-sufficient when you can. You’ll both learn & understand more. Self-sufficiency leads to technology mastery. Technology mastery leads to career opportunities. Career opportunities can transform your life.

by Ethan Banks at August 31, 2021 03:38 PM

August 30, 2021

Potaroo blog

TLS with a side of DANE

Am I really talking to you? In a networked world that’s an important question.

August 30, 2021 02:00 AM

August 27, 2021

The Networking Nerd

Sharing Failure as a Learning Model

<figure class="wp-block-image size-large"></figure>

Earlier this week there was a great tweet from my friends over at Juniper Networks about mistakes we’ve made in networking:

<figure class="wp-block-embed is-type-rich is-provider-twitter wp-block-embed-twitter">
</figure>

It got some interactions with the community, which is always nice, but it got me to thinking about how we solve problems and learn from our mistakes. I feel that we’ve reached a point where we’re learning from the things we’ve screwed up, but we’re not passing those lessons along like we used to.

Write It Down For the Future

Part of the reason why I started my blog was to capture ideas that had been floating in my head for a while. Troubleshooting steps or perhaps even ideas that I wanted to make sure I didn’t forget down the line. All of it was important to capture for the sake of posterity. After all, if you didn’t write it down did it even happen?

Along the way I found that the posts that got significant traction on my site were the ones that involved mistakes. Something I’d done that caused an issue or something I needed to look up through a lot of sources that I distilled down into an easy reference. These kinds of posts are the ones that fly right up to the top of the Google search results. They are how people know you. It could be a terminology post like defining trunks. Or perhaps it’s a question about why your SFP modules aren’t working in a switch.

Once I realized that people loved finding posts that solved problems I made sure to write more of them down. If I found a weird error message I made sure to figure out what it was and then put it up for everyone to find. When I documented weird behaviors of BPDUGuard and BPDUFilter that didn’t match the documentation I wrote it all down, including how I’d made a mistake in the way that I interpreted things. It was just part of the experience for me. Documenting my failures and my learning process could help someone in the future. My hope was that someone in the future would find my post and learn from it like I had.

Chit Chat Channels

It used to be that when you Googled error messages you got lots of results from forum sites or Reddit or other blogs detailing what went wrong and how you fixed it. I assume that is because, just like me, people were doing their research, figuring out what went wrong, and then documenting the process. Today I feel like a lot of that type of conversation is missing. I know it can’t have gone away permanently, because all networking engineers make mistakes and solve problems, and someone has to know where that knowledge went, right?

The answer came to me when I read a Reddit post about networking message boards. The suggestions in the comments weren’t about places to go to learn more. Instead, they linked to Slack channels or Discord servers where people talk about networking. That answer made me realize why the discourse around problem solving and learning from mistakes seems to have vanished.

Slack and Discord are great tools for communication. They’re also very private. I’m not talking about gatekeeping or restrictions on joining. I’m talking about the fact that the conversations that happen there don’t get posted anywhere else. You can join, ask about a problem, get advice, try it, see it fail, try something else, and succeed all without ever documenting a thing. Once you solve the problem you don’t have a paper trail of all the things you tried that didn’t work. You just have the best solution that you did and that’s that.

You know what you can’t do with Slack and Discord? Search them through Google. The logs are private. The free tiers remove messages after a fashion. All that knowledge disappears into thin air. Unlike the Wisdom of the Ancients the issues we solve in Slack are gone as soon as you hit your message limit. No one learns from the mistakes because it looks like no one has made them before.

Going the Extra Mile

I’m not advocating for removing Slack and Discord from our daily conversations. Instead, I’m proposing that when we do solve a hard problem or we make a mistake that others might learn from we say something about it somewhere that people can find it. It could be a blog post or a Reddit thread or some kind of indexable site somewhere.

Even the process of taking what you’ve done and consolidating it down into something that makes sense can be helpful. I saw X, tried Y and Z, and ended up doing B because it worked the best of all. Just the process of how you got to B through the other things that didn’t work will go a long way to help others. Yes, it can be a bit humbling and embarrassing to publish something that admits you made a mistake. But it’s also part of the way that we learn as humans. If others can see where we went and understand why that path doesn’t lead to a solution then we’ve effectively taught others too.


Tom’s Take

It may be a bit self-serving for me to say that more people need to be blogging about solutions and problems and such, but I feel that we don’t really learn from it unless we internalize it. That means figuring it out and writing it down. Whether it’s a discussion on a podcast or a back-and-forth conversation in Discord we need to find ways to get the words out into the world so that others can build on what we’ve accomplished. Google can’t search archives that aren’t on the web. If we want to leave a legacy for the DenverCoder10s of the future that means we do the work now of sharing our failures as well as our successes and letting the next generation learn from us.

by networkingnerd at August 27, 2021 07:09 PM
