March 25, 2023 Blog (Ivan Pepelnjak)

Worth Reading: The Dangers of Knowing Everything

Another interesting take on ChatGPT in networking, this time by Tom Hollingsworth in The Dangers of Knowing Everything:

In a way, ChatGPT is like a salesperson. No matter what you ask it the answer is always yes, even if it has to make something up to answer the question.

To paraphrase an old joke: It’s not that ChatGPT is lying. It’s just that what it knows isn’t necessarily true. See also: the difference between bullshit and lies.

March 25, 2023 07:08 AM

March 24, 2023 Blog (Ivan Pepelnjak)

Video: Chassis Switch Architectures

Did you know that most chassis switches look like leaf-and-spine fabrics from the inside? If you didn’t, you might want to watch the short Chassis Architectures video by Pete Lumbis (author of ASICs for Networking Engineers, part of the Data Center Fabric Architectures webinar).

You’ll need a Free Subscription to watch the video.

March 24, 2023 07:23 AM

XKCD Comics

March 23, 2023 Blog (Ivan Pepelnjak)

Will ChatGPT Replace Stack Overflow?

TL&DR: No. You can move on.

NANOG87 summary by John Kristoff prompted me to look at NANOG87 presentations, and one of them discussed ChatGPT and Network Engineering (video). I couldn’t resist the clickbait ;)

Like most of the “using ChatGPT for something” articles we’re seeing these days, the presentation is a bit too positive for my taste. After all, it’s all fine and dandy to claim ChatGPT generates working router configurations and related Jinja2 templates if you know what the correct configurations should look like and can confidently say “and this is where it made a mistake” afterwards.

March 23, 2023 07:03 AM

March 22, 2023

Potaroo blog

Hiding Behind MASQUEs

Privacy was a difficult topic for Internet protocols at the outset. Things took a very different turn some 10 years ago following the disclosures of mass surveillance programs in the US, when the IETF declared that pervasive monitoring of users constituted an attack and that Internet protocols needed to take measures to contain the way in which data was accessed in the network. The latest offerings in the area of improved privacy include Oblivious HTTP and MASQUE. Let’s look at these approaches and the way they attempt to contain the potential leakage of data.

March 22, 2023 07:00 AM Blog (Ivan Pepelnjak)

New: CI/CD in Networking Resource Page

Over the years I wrote a dozen blog posts describing various aspects of using CI/CD in network automation. These blog posts are now collected in the new CI/CD in Networking page that also includes links to related podcasts, webinars, and sample network automation solutions.

March 22, 2023 06:42 AM

XKCD Comics

March 21, 2023

Packet Pushers

Build Your K8s Environment For The Real World Part 1 – Day Zero Ops

When you’re designing a Kubernetes environment, whether it’s small or large, there are a few things that you must think about prior to writing the code to deploy the cluster or implementing the GitOps Controller for all of your Continuous Delivery needs. First, you must plan. Planning is the most important phase. In blog one […]

The post Build Your K8s Environment For The Real World Part 1 – Day Zero Ops appeared first on Packet Pushers.

by Michael Levan at March 21, 2023 06:26 PM Blog (Ivan Pepelnjak)

March 20, 2023

Packet Pushers

The Networking RFCs: To read or not to read?

I see this question rather often asked on various social media. A post on Twitter a few days ago triggered this little blog post and I deeply appreciate the poster. The question was simple “Is it really necessary for an engineer to know or understand the key RFC numbers?”. Some of the engineers I work […]

The post The Networking RFCs: To read or not to read? appeared first on Packet Pushers.

by Kam Agahian at March 20, 2023 09:10 PM

Demo Bytes: Upgrading Network Devices With BackBox – Video

The BackBox network automation platform comes with many pre-built functions to make routine tasks performed by network administrators simple & foolproof. In this demo, BackBox’s Senior Product Manager Perry Greenwood shows Packet Pushers’ Ethan Banks how to automate network device upgrades using BackBox. We look at the entire lifecycle of the upgrade process, including scheduling, […]

The post Demo Bytes: Upgrading Network Devices With BackBox – Video appeared first on Packet Pushers.

by The Video Delivery at March 20, 2023 06:51 PM

The Networking Nerd

Assume Disaster

One of the things that people have mentioned to me in the past regarding my event management skills is my reaction time. They say, “You are always on top of things when they go wrong. How do you do it?”

My response never fails to make them laugh. I offer, “I always assume something is going to go wrong. I may not know what it is but when it does happen I’m ready to fix it.”

That may sound like a cynical take on planning and operations but it’s served me well for many years. Why is it that things we spend so much time working on always seem to go off the rails?

Complexity Fails

Whether it’s an event or a network or even a carpentry project you have to assume that something is going to go wrong. Why? Because the more complex the project the more likely you are to hit a snag. Systems that build on themselves and require input to proceed are notorious for hitting blocks that cause the whole thing to snarl into a mess of missed timelines.

When I was in college studying project management I learned there’s even a term for time-saving: crashing a project. Not literally crashing the project into something, but instead looking for ways to trim the timeline and work through issues. Why is this a common term? I’d hazard a guess that very few projects actually stick to their timeline. It could be a parts delay. It could be a team taking longer to work through an issue. Mercury could be in retrograde during sunspots. Whatever the case may be, projects are designed to have floating timelines.

This imprecision built into project planning made me realize that the only way to be really sure that something would get done properly was to anticipate the errors and work through them. Part of the way to prevent these issues is to reduce complexity. You may not be able to work through every potential scenario where something is going to go sideways, but you can almost always tell where the problems will arise. Any module of work that has lots of moving parts or lots of people with specific deadlines is going to be a trouble spot. The more components depend on each other, the greater the chance that any one of them slipping will cause a delay that requires attention.

If you have a project or are planning something that has complicated steps for a specific goal, try to break those down into more simple things that don’t depend on each other. Have a team that needs to write a report based on the research from another team? Don’t bundle those together. Have the writing team working on things that aren’t dependent upon the research team just in case the data isn’t delivered. If you’re building a house and you are planning on having things done that require a roof being installed you should have a plan for what happens if the roofers are behind or the shingles don’t arrive on schedule. Finding these extra bits of complexity and eliminating them will go a long way toward solving recurring sources of frustration.

Be Prepared for Problems

The motto of the Boy Scouts is “be prepared”. It’s something I remind the youth in the program of weekly. Be prepared for what exactly? It doesn’t matter what if you’re properly prepared. You don’t have to be prepared for every possible scenario, but you need the flexibility to address a wide variety of potential problems.

Take information security as a prime example. How will your enterprise be breached? There are almost too many ways to consider. A new zero day? A backdoor password installed years ago? Phishing your key employees? Good old-fashioned malfeasance? The list is endless! But the results are always the same. Attackers look for things of value and either steal them or disable them. Thieves steal and chaotic souls cause chaos. The entry is unknown, but the results of entry can be quantified and considered.

You may not know how they’ll get in but you know how to stop them once they do. That’s why you should always assume you’re under attack or already breached. If you construct the system in such a way as to prevent lateral movement or even create policies to keep data safe at rest you’ll go a long way to preventing unauthorized users from accessing it, malicious or otherwise.

Is assuming that you’re always under attack kind of paranoid? Yes, it is. However, if you assume you’ve been breached and you are wrong all you’ve done is ensure that your data is safe and secure. If you assume you’re not and you end up being wrong you get to spend a lot of time cleaning up and sending emails to your boss and your resume to the next place where you get to make all new assumptions.

Tom’s Take

The optimist in me wants to believe that you can plan something so well that there isn’t a chance a problem can happen. The realist in me knows the optimist is crazy. That doesn’t mean I should just stop planning and hope for the best when I need to tap dance my way out of a problem. Instead, it means that I need to consider all the possibilities and try to have an answer for them, even if they’re remote. That way I’m never caught off guard by the wackiest of issues.

by networkingnerd at March 20, 2023 04:01 PM Blog (Ivan Pepelnjak)

Test VRF-Aware DHCP Relaying with netlab

After figuring out how DHCP relaying works and testing it in a simple lab, I went a step further and tested VRF-aware DHCP relaying.

Lab Topology

I had to make just a few changes to the DHCP relaying lab topology:

  • DHCP server is running on CSR 1000v. IOSv DHCP server does not support subnet selection DHCP option and thus doesn’t work with relays that do inter-VRF DHCP relaying.
  • I put the link between the DHCP client and DHCP relay into a VRF.

March 20, 2023 07:15 AM

XKCD Comics

March 19, 2023 Blog (Ivan Pepelnjak)

Worth Reading: History of 8-bit Bytes

Just in case you wondered why we have eight bits per byte: after Julia Evans investigated this mystery, Steven Bellovin published an excellent overview of the early years of bytes and words.

March 19, 2023 11:44 AM

March 18, 2023 Blog (Ivan Pepelnjak)

Worth Exploring: OSPF Watcher

Vadim Semenov created an interesting solution out of open-source tools (and some glue): a system that tracks, logs, and displays OSPF changes in your network.

It might not be exactly what you’re looking for (and purists would argue it should use BGP-LS), but that’s the beauty of open-source solutions: go and adapt it to your needs, generalize your fixes, and submit a pull request.

March 18, 2023 10:14 AM

March 17, 2023

The Data Center Overlords

Connection Types with Network Automation and Ansible

Ansible is a great platform for network automation, but one of its quirks is its sometimes obtuse errors. I was running a playbook which logs into various Arista leafs and spines and does some tests. I’m using SSH to issue the commands (versus eAPI). I got this error:

fatal: [spine1]: FAILED! => {"changed": false, "msg": "Connection type ssh is not valid for this module"}

One of the little things that trips me up when doing Ansible with network automation is the connection type.

When you’re automating servers (Ansible’s original use case) the connection type is assumed to be SSH, so the Ansible control node will log in to the node and perform some functions. The default connection type is “ssh”.

It’s a little counter-intuitive, but even if you’re using SSH to get into a network device, most network-centric modules won’t work with the default connection type. You need to use another connection type such as network_cli, which is part of the netcommon module collection. When you use network_cli, you also might have to specify a few other options such as network_os, become, and become_method.

        ansible_connection: network_cli
        ansible_network_os: eos
        ansible_become: yes
        ansible_become_method: enable

If your device has some sort of API, you can use httpapi as the connection type. You’ll almost always need to specify to use SSL and to set validate_certs to false (as in most cases devices have a self-signed certificate).

        ansible_connection: httpapi
        ansible_httpapi_use_ssl: True
        ansible_httpapi_validate_certs: False
        ansible_network_os: eos
        ansible_httpapi_port: 443

There’s also netconf and a few others.

So whether your network device is controlled via SSH/CLI or some type of API (JSON-RPC, XML-RPC, REST) make sure to set the connection type, otherwise Ansible probably won’t work with your network device.

by tonybourke at March 17, 2023 11:35 PM

Aaron's Worthless Words

Sending Slack Messages with Python

Here’s a quick summary of what we’ve talked about in the last few posts — all with Python.

This is all fine and dandy, but I would guess that you’re not the only engineer in the company and production maintenance scripts don’t run off of your laptop. We need a way to let a group of people know what’s happening when one of your scripts is run. And please don’t say email. Email has been worthless for alerting for over a decade, and there are better ways to do it. Search your feelings…you know it to be true!

At this point, we all have some magic messaging tool that someone in upper management decided we needed. There are others out there, but I would guess that the majority of companies are using Microsoft Teams or Slack, with some Webex Teams sprinkled in there. These are great tools with lots of features and are probably not yet overused to the point of making users ignore the messages, so they are great candidates for telling others what you broke through automation.

It’s also a good place to keep track of the history of the chaos you’ve caused. Instead of having a log sitting on a disk on a server somewhere, the log messages are recorded for posterity for everyone to see at any time. This obviously could be good or bad, but it’s better than someone calling you at 3am asking if your tasks have done something egregious. And, yes, logs are a part of IT life, and auditors will want to see them every year or two when they come onsite. We all have to keep our logs, but we can still send updates via Slack (or whatever) as well.

We’re going to talk about Slack here because it’s free for me to use, and I’ve already got it set up. The concepts are the same in MS Teams or WE Teams, though. Like…pretty much exactly the same.

We’re only going to talk about plaintext updates to the channel — just as we did with print statements and logging handlers. You can do fancier stuff like text formatting and sections and actions and polls if you want, and there are libraries out there that will make the fancier things easier for you. For now, though, we’re keeping it simple. Maybe we can do that later, but, for now, let’s just send messages as we would to a log or the screen.

The first thing to do is to set up a channel for an incoming webhook. I have a Slack workspace for myself and created a channel called #automation-testing where I want these messages to land. I went into the channel config and added an app called Incoming Webhooks. When it’s installed, Slack provides a long URL. Copy this down somewhere so you have it, since this is where you’re going to send your updates. There’s sort of a “security through obscurity” thing going on, so there’s no authentication involved. **cough** Security! **cough**

Anyone who has this URL can post to your channel, so it needs to be kept safe. You shouldn’t put it directly in your code. I wound up putting it in the device_creds.yml file along with the username and password for logging into the gear. The key I used is slack_url. My mother tells me I’m very creative. We’ll import all the credential information into the script so we can use that URL later. And make sure that creds file is in your .gitignore file so it doesn’t get published to your repository. Ask me about the email I got from Slack the other day that said “We see you published a webhook URL to GitHub, so we’re regenerating the URL for you.” Oops. Glad they were looking out for me.

On to some code, I guess. Let’s refactor an easy one we’ve already done. How about the one where we delete all the Netbox API tokens that we’re not actively using? We won’t change the logic; we’re going to just upgrade from print statements to Slack updates…and maybe advance a little toward being a “proper” Python developer.

I’m running Python 3.9.10 today. All this code is available on my Github repo for you to freely steal. I reiterate that I am not a developer and make no guarantees with this code. You should ask someone who knows what they’re doing to review any of this before you put it into production. I am also learning as I go, so I’ve noticed my own code is changing as times moves along. Don’t freak out if there’s some different structure or actual comments compared to the last time we looked at this code.

"""Deletes all API tokens from Netbox"""
import yaml
import requests
import pynetbox

def send_to_slack(message: str, slack_url: str) -> bool:
    """Send a message to Slack

    Args:
        message (str): The message text
        slack_url (str): The URL to post to

    Returns:
        bool: Whether or not the message was sent successfully
    """
    headers = {"text": message}
    post_result = requests.post(slack_url, json=headers, timeout=10)
    if post_result.status_code == 200:
        return True
    return False

ENV_FILE = "env.yml"
CREDS_FILE = "device_creds.yml"

def main():
    """Run this"""
    with open(ENV_FILE, encoding="UTF-8") as file:
        env_vars = yaml.safe_load(file)

    with open(CREDS_FILE, encoding="UTF-8") as file:
        creds = yaml.safe_load(file)

    nb_conn = pynetbox.api(url=env_vars['netbox_url'])

    my_token = nb_conn.create_token(env_vars['username'], env_vars['password'])

    all_tokens = nb_conn.users.tokens.all()

    send_to_slack(message="Looking for old tokens in Netbox.",
                  slack_url=creds['slack_url'])
    found_old_tokens = False

    for token in all_tokens:
        if ==
            send_to_slack(message="Don't delete your own token, silly person!",
                          slack_url=creds['slack_url'])
            continue
        send_to_slack(message=f"Deleting token {}",
                      slack_url=creds['slack_url'])
        token.delete()
        found_old_tokens = True

    my_token.delete()

    if not found_old_tokens:
        send_to_slack(message="Found no old tokens to delete.",
                      slack_url=creds['slack_url'])

if __name__ == "__main__":
    main()
Like I said, the basic function of the script is the same. We’ve only added some Slack functionality to replace the print statements, which are done through the send_to_slack function defined in line 8. There we set up the JSON body, send the post to the given URL, and return a boolean based on the status code we get back. Not too difficult here.

Some of the minor changes include importing the creds from YAML on line 35. The slack_url value from that dictionary will be sent to the function for posting. We’ve also added a tracking variable called found_old_tokens to see if we found something to delete. If we didn’t, we publish a Slack message that says we didn’t find any…just so we know the process finished and didn’t crash. I like closure.

I do need to mention that we did some restructuring of the script to make it more Pythonic. See line 64 where we did the whole __name__ thing, which makes us look fancy. This just says to run the function main() if this script is called from the command line. It’s not really important for functionality here, but it’s good practice for the future. Things like this will help us when we start taking all this code and putting it into a custom module later.

We also included some type hints in the send_to_slack function. What are these? I mean, they tell you what type of variable to use, but I’m not sure what they really do for us here. Maybe when we’ve got a fully-developed system for automatically maintaining our Netbox data we can see a benefit. I think I just watch too many YouTube videos on Python at this point.
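For what it’s worth, the practical payoff of type hints is static analysis: a checker like mypy reads the annotations and flags a mismatched argument before the script ever runs, while Python itself ignores the hints entirely at runtime. A tiny sketch (the function body here is made up purely for illustration):

```python
def send_to_slack(message: str, slack_url: str) -> bool:
    """Stand-in for the real function; only the signature matters here."""
    # The annotations promise str in, bool out. Nothing enforces that
    # at runtime, but a static checker holds every caller to it.
    return bool(message) and slack_url.startswith("https://")

# mypy would reject this call before the script ever runs:
#   send_to_slack(message=42, slack_url="https://example.invalid")
#   error: Argument "message" has incompatible type "int"; expected "str"
ok = send_to_slack(message="testing hints", slack_url="https://example.invalid/hook")
```

Running `mypy` over the file is all it takes; no changes to how you invoke the script.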

Logging to Slack is a lot better in my opinion. I like to let everyone know what’s going on. I also like to yell at them at 3am when they call me because they didn’t use the tools properly even though I gave them instructions. Most importantly, though, is the fact that it’s not email.

As an afterthought, here’s proof that this code actually does something. Not proof that it works optimally or that it even works well. Just works.

[Screenshot showing a Slack update for deleting tokens]

Send any air traffic scanner questions my way.


by jac at March 17, 2023 03:31 PM Blog (Ivan Pepelnjak)

Video: vPC Fabric Peering with EVPN Multihoming

After implementing MLAG functionality with EVPN and having a VXLAN-like fabric transport path between MLAG members, it becomes possible to get rid of the MLAG peer link.

Not surprisingly, most implementations of virtual MLAG peer link remain proprietary. Lukas Krattiger described the details of Cisco’s vPC Fabric Peering implementation in the EVPN Deep Dive webinar.

You need a Free Subscription to watch the video. To watch the whole webinar, buy a Standard or Expert Subscription.

March 17, 2023 07:06 AM

XKCD Comics

March 16, 2023 Blog (Ivan Pepelnjak)

Advantages of Using Generalized TTL Security Mechanism (GTSM) with EBGP

A few weeks ago I described why EBGP TCP packets have TTL set to one (unless you configured EBGP multihop). Although some people claim that (like NAT) it could be a security feature, it’s not a good one. Generalized TTL Security Mechanism (GTSM, described in RFC 5082) is much better.

Most BGP implementations set the TTL field in outgoing EBGP packets to one. That prevents a remote intruder who manages to hijack a host route to an adjacent EBGP peer from forming a BGP session, as the TCP replies get lost the moment they hit the first router in the path.
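The GTSM idea is the mirror image of TTL=1: the sender transmits EBGP packets with TTL 255 (the maximum), and the receiver drops anything whose arriving TTL falls below 255 minus the configured hop radius. Since every router in the path decrements TTL by one, a remote attacker can never deliver a packet that still carries a high enough TTL. A rough sketch of the receive-side check (a toy model of the RFC 5082 rule, not tied to any real BGP implementation):

```python
MAX_TTL = 255  # GTSM senders always transmit with the maximum TTL

def gtsm_accept(arriving_ttl: int, trusted_radius: int = 1) -> bool:
    """Accept a packet only if it could have originated within
    trusted_radius hops of us. A packet from a neighbor N hops away
    arrives with TTL 255 - (N - 1), so anything below the threshold
    must have traveled farther than the configured radius."""
    return arriving_ttl >= MAX_TTL - (trusted_radius - 1)

# Directly connected peer: packet arrives with TTL 255 -> accepted.
# Spoofed packet from 5 hops away: arrives with TTL 251 -> dropped.
```

Note how this inverts the failure mode: with TTL=1 the receiver trusts whatever arrives and relies on replies dying in transit, while GTSM rejects the forged packet outright.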

March 16, 2023 07:56 AM

March 15, 2023 Blog (Ivan Pepelnjak)

First Steps in IPv6 Deployments

Even though IPv6 could buy its own beer (in the US, let alone the rest of the world), networking engineers still struggle with its deployment. One of the first questions I got in the Design Clinic was:

We have been tasked to start IPv6 planning. Can we discuss (for enterprises like us who all of the sudden want IPv6) which design paths to take?

I did my best to answer this question and describe the basics of creating an IPv6 addressing plan. For even more details, watch the IPv6 webinars (most of them at least a few years old, but nothing changed in the IPv6 world in the meantime apart from the SRv6 madness).

March 15, 2023 07:48 AM

XKCD Comics

March 14, 2023

Packet Pushers

SmartNIC, DPU Revenue Forecast To Grow 30% In 2023

Data Processing Unit (DPU) and SmartNIC vendors such as NVIDIA, Intel, and AMD are making a lot of noise about the ability of their adapters to offload work from CPUs and to run networking, security, and storage applications directly on a NIC inside a server. But that noise hasn’t necessarily turned into sales—at least not […]

The post SmartNIC, DPU Revenue Forecast To Grow 30% In 2023 appeared first on Packet Pushers.

by Drew Conry-Murray at March 14, 2023 06:37 PM Blog (Ivan Pepelnjak)

Leaf-and-Spine Fabrics Between Theory and Reality

I’m always envious of how easy networking challenges seem when you’re solving them in PowerPoint, for example, when an innovation specialist explains how scalability works in leaf-and-spine fabrics in a LinkedIn comment:

One of the main benefits of a CLOS folded spine topology is the scale out spine where you can scale out the number of spine nodes increasing your leaf-spine n-way ECMP as well as minimizing the blast radius with the more spine nodes the more redundancy and resiliency.

Isn’t that wonderful? If you need more bandwidth, sprinkle the magic spine powder on your fabric, add water, and voila! Problem solved. Also, it looks like adding spine switches reduces the blast radius. Who would have known?

March 14, 2023 07:01 AM

March 13, 2023

Potaroo blog

Submarine Cable Resilience

How do you protect a submarine cable from interference? Do you use more armour plating? Do you lay the cable in a sea floor trench? Do you simply lay more cables? Or do you head off into radio-based systems?

March 13, 2023 11:20 PM Blog (Ivan Pepelnjak)

Test DHCP Relaying with netlab

After figuring out how DHCP relaying works, I decided to test it out in a lab. netlab has no DHCP configuration module (at the moment); the easiest way forward seemed to be custom configuration templates combined with a few extra attributes.

Lab Topology

This is how I set up the lab:

March 13, 2023 07:01 AM

XKCD Comics

March 12, 2023 Blog (Ivan Pepelnjak)

Worth Reading: Putting Large Language Models in Context

Another take on “what are large language models and what can we expect from them,” this time by Bruce Davie: Putting Large Language Models in Context:

My approach, at least for now, is to treat these LLM-based systems as very large, efficient collections of matchboxes–and keep working in my chosen field of networking.

March 12, 2023 07:48 AM

March 11, 2023 Blog (Ivan Pepelnjak)

Worth Reading: The War on Expertise

Jeff McLaughlin published an excellent blog post perfectly describing what we’ve been experiencing for decades: the war on expertise.

On one hand, the “business owners” force us to build complex stuff because they think they know better, on the other they blame people who know how to do it for the complex stuff that happens as the result of their requirements:

I am saying that we need to stop blaming complexity on those who manage to understand it.


March 11, 2023 07:42 AM

March 10, 2023

The Networking Nerd

The Dangers of Knowing Everything

By now I’m sure you’ve heard that the Internet is obsessed with ChatGPT. I’ve been watching from the sidelines as people find more and more uses for our current favorite large language model (LLM) toy. Why a toy and not a full-blown solution to all our ills? Because ChatGPT has one glaring flaw that I can see right now that belies its immaturity. ChatGPT knows everything. Or at least it thinks it does.

Unknown Unknowns

If I asked you the answer to a basic trivia question you could probably recall it quickly. Like “who was the first president of the United States?” These are answers we have memorized over the years to things we are expected to know. History, math, and even written communication have questions and answers like this. Even in an age of access to search engines we’re still expected to know basic things and have near-instant recall.

What if I asked you a trivia question you didn’t know the answer to? Like “what is the name of the metal cap at the end of a pencil?” You’d likely go look it up on a search engine or on some form of encyclopedia. You don’t know the answer so you’re going to find it out. That’s still a form of recall. Once you learn that it’s called a ferrule you’ll file it away in the same place as George Washington, 2+2, and the aglet as “things I just know”.

Now, what if I asked you a question that required you to think a little more than just recalling info? Such as “Who would have been the first president if George Washington refused the office?” Now we’re getting into murkier territory. Instead of instantly recalling information, you have to analyze what you know about the situation. Most people who aren’t history buffs might recall who Washington’s vice president was and answer with that. History buffs might apply more specialized knowledge about the matter and infer a different answer, such as Jefferson or even Samuel Adams. They’re adding more information to the puzzle to come up with a better answer.

Now, for completeness’ sake, what if I asked you “Who would have become the Grand Vizier of the Galactic Republic if Washington hadn’t been assassinated by the separatists?” You’d probably look at me like I was crazy and say you couldn’t answer a question like that because I made up most of that information or I’m trying to confuse you. You may not know exactly what I’m talking about, but you know, based on your knowledge of elementary school history, that there is no Galactic Republic and George Washington was definitely not assassinated. Hold on to this because we’ll come back to it later.

Spinning AI Yarns

How does this all apply to a LLM? The first thing to realize is that LLMs are not replacements for search engines. I’ve heard of many people asking ChatGPT basic trivia and recall type questions. That’s not what LLMs are best at. We have a multitude of ways to learn trivia and none of them need the power of a cloud-scale computing cluster interpreting inputs. Even asking that trivia question to a smart assistant from Apple or Amazon is a better way to learn.

So what does an LLM excel at doing? Nvidia will tell you that it is “a deep learning algorithm that can recognize, summarize, translate, predict and generate text and other content based on knowledge gained from massive datasets”. In essence it can take a huge amount of input, recognize certain aspects of it, and produce content based on the requirements. That’s why ChatGPT can “write” things in the style of something else. It knows what that style is supposed to look and sound like and can produce an output based on that. It analyzes the database and comes up with the results using predictive analysis to create grammatically correct output. Think of it like Advanced Predictive Autocorrect.

If you think I’m oversimplifying what LLMs like ChatGPT can bring to the table then I challenge you to ask it a question that doesn’t have an answer. If you really want to see it work some magic ask it something oddly specific about something that doesn’t exist, especially if that process involves steps or can be broken down into parts. I’d bet you get an answer at least as many times as you get something back that is an error message.

To me, the problem with ChatGPT is that the model is designed to produce an answer unless it has specifically been programmed not to do so. There are a variety of answers that the developers have overridden in the algorithm, usually something racially or politically sensitive. Otherwise ChatGPT is happy to spit out lots of stuff that looks and sounds correct. Case in point? This gem of a post from Joy Larkin of ZeroTier:

Short version: ChatGPT gave a user instructions for a product that didn’t exist and the customer was very frustrated when they couldn’t find the software to download on the ZeroTier site. The LLM just made up a convincing answer to a question that involved creating something that doesn’t exist. Just to satisfy the prompt.

Does that sound like a creative writing exercise to you? “Imagine what a bird would look like with elephant feet.” Or “picture a world where people only communicated with dance.” You’ve probably gone through these exercises before in school. You stretch your imagination to take specific inputs and produce outputs based on your knowledge. It’s like the above mention of applied history. You take inputs and produce a logical outcome based on facts and reality.

ChatGPT is immature enough not to realize that some things shouldn’t be answered. If you use a search engine to find the steps to configure a feature on a product, the search algorithm will return a page that has the steps listed. Are they correct? Maybe. It depends on how popular the result is. But the results will include a real product. If you search for nonexistent functionality or a software package that doesn’t exist, your search won’t have many results.

ChatGPT doesn’t have a search algorithm to rely on. It’s based on language. It’s designed to approximate writing when given a prompt. That means, aside from things it’s been programmed not to answer, it’s going to give you an answer. Is it correct? You won’t know. You’d have to take the output and send it to a search engine to determine if that even exists.

The danger here is that LLMs aren’t smart enough to realize they are creating fabricated answers. If someone asked me how to do something that I didn’t know I would preface my answer with “I’m not quite sure but this is how I think you would do it…” I’ve created a frame of reference that I’m not familiar with the specific scenario and that I’m drawing from inferred knowledge to complete the task. Or I could just answer “I don’t know” and be done with it. ChatGPT doesn’t understand “I don’t know” and will respond with answers that look right according to the model but may not be correct.

Tom’s Take

What’s funny is that ChatGPT has managed to create an approximation of another human behavior. For anyone that has ever worked in sales you know one of the maxims is “never tell the customer ‘no’”. In a way, ChatGPT is like a salesperson. No matter what you ask it the answer is always yes, even if it has to make something up to answer the question. Sci-fi fans know that in fiction we’ve built guardrails for robots to save our society from being harmed by their functions. AI, no matter how advanced, needs protections from approximating bad behaviors. It’s time for ChatGPT and future LLMs to learn that they don’t know everything.

by networkingnerd at March 10, 2023 05:21 PM

Blog (Ivan Pepelnjak)

Video: SD-WAN Backend Architecture

After describing the SD-WAN reference design, Pradosh Mohapatra focused on individual components of an SD-WAN solution, starting with the backend architecture.

You need at least a free subscription to watch this video and other videos in the SD-WAN Overview webinar.

March 10, 2023 06:48 AM

XKCD Comics

March 09, 2023

Packet Pushers

Xcitium’s Endpoint Virtual Jail Aims To Lock Up Mystery Malware

Xcitium is an Endpoint Detection and Response (EDR) vendor that sells client software that uses multiple methods to protect endpoints. Methods include anti-virus, a host firewall, a Host Intrusion Protection System (HIPS), and a technique it calls ZeroDwell Containment. The first three components are straightforward. The AV software relies on signatures to detect known malware. […]

The post Xcitium’s Endpoint Virtual Jail Aims To Lock Up Mystery Malware appeared first on Packet Pushers.

by Drew Conry-Murray at March 09, 2023 07:00 PM

Kubernetes Security And Networking 3: Helpful Tips For Securing Your Kubernetes Cluster – Video

Michael Levan reviews security essentials for protecting your Kubernetes infrastructure, including worker nodes. He discusses server hardening using CIS Benchmarks as a guide, running a scanner (using Kubescape as an example), and employing role-based access control (RBAC). You can subscribe to the Packet Pushers’ YouTube channel for more videos as they are published. It’s a […]

The post Kubernetes Security And Networking 3: Helpful Tips For Securing Your Kubernetes Cluster – Video appeared first on Packet Pushers.

by The Video Delivery at March 09, 2023 03:37 PM

Blog (Ivan Pepelnjak)

DHCP Relaying Details

Chinar Trivedi asked an interesting question about DHCP relaying in the VXLAN/EVPN world on Twitter, and my first thought was “that shouldn’t be hard,” but when I read the first answer that turned into “wait a minute, how exactly does DHCP relaying work?”

I’m positive there’s a tutorial out there somewhere, but I decided to go back to the sources of wisdom: the RFCs. It turned out to be a long walk down the IETF history lane.

March 09, 2023 07:54 AM

March 08, 2023 Blog (Ivan Pepelnjak)

New: Anycast Resource Page

I wrote two dozen blog posts describing IP anycast concepts, from first-hop anycast gateways to anycast between DNS servers and global anycast (as used by large web properties), but never organized them in any usable form.

That’s fixed: everything I ever wrote about anycast is nicely structured on the new Anycast Resources page.

March 08, 2023 06:42 AM

XKCD Comics

March 07, 2023

The Networking Nerd

Friction as a Network Security Concept

I had the recent opportunity to record a podcast with Curtis Preston about security, data protection, and networking. I loved being a guest and we talked about quite a bit in the episode about how networking operates and how to address ransomware issues when they arise. I wanted to talk a bit more about some concepts here to help flesh out my advice as we talked about it.

Compromise is Inevitable

If there’s one thing I could say that would make everything make sense it’s this: you will be compromised. It’s not a question of if. You will have your data stolen or encrypted at some point. The question is really more about how much gets taken or how effectively attackers are able to penetrate your defenses before they get caught.

Defenses are designed to keep people out. But they also need to be designed to contain damage. Think about a ship on the ocean. Those giant bulkheads aren’t just there for looks. They’re designed to act as compartments to seal off areas in case of catastrophic damage. The ship doesn’t assume that it’s never going to have a leak. Instead, the designers created it in such a way as to be sure that when it does you can contain the damage and keep the ship floating. Without those containment systems even the smallest problem can bring the whole ship down.

Likewise, you need to design your network to be able to contain areas that could be impacted. One giant flat network is a disaster waiting to happen. A network with a DMZ for public servers is a step in the right direction. However, you need to take it further than that. You need to isolate critical hosts. You need to put devices on separate networks if they have no need to directly talk to each other. You need to ensure management interfaces are in a separate, air-gapped network that has strict access controls. It may sound like a lot of work but the reality is that failure to provide isolation will lead to disaster. Just like a leak on the ocean.

The key here is that the controls you put in place create friction with your attackers. That’s the entire purpose of defense in depth. The harder it is for attackers to get through your defenses the more likely they are to give up earlier or trigger alarms designed to warn you when it happens. This kind of friction is what you want to see. However, it’s not the only kind of friction you face.

Failing Through Friction

Your enemy in this process isn’t nefarious actors. It’s not technology. Instead, it’s the bad kind of friction. Security is designed by its very nature to create friction with systems. Networks are designed to transmit data. Security controls are designed to prevent the transmission of data. This bad friction comes when these two aspects are interacting with each other. Did you open the right ports? Are the access control lists denying a protocol that should be working? Did you allow the right VLANs on the trunk port?

Friction between controls is maddening but it’s a solvable problem with time. The real source of costly friction comes when you add people into the mix. Systems don’t complain about access times. They don’t call you about error messages. And, worst of all, they don’t have the authority to make you compromise your security controls for the sake of ease-of-use.

Everyone in IT has been asked at some point to remove a control or piece of software for the sake of users. In organizations where the controls are strict or regulatory issues are at stake the requests are usually disregarded. However, when the executives are particularly insistent or the IT environment is more carefree you can find yourself putting in a shortcut to get the CEO’s laptop connected faster or allow their fancy new phone to connect without a captive portal. The results are often happy and have no impact. That is, until someone finds out they can get in through your compromised control and create a lot of additional friction.

How can you reduce friction? One way is to create more friction in the planning stages. Ask lots of questions about ports and protocols and access list requirements before something is implemented. Do your homework ahead of time instead of trying to figure it out on the fly. If you know that a software package needs to communicate to these four addresses on these eight ports then anything outside of that list should be suspect and be examined. Likewise, if someone can’t tell you what ports need to be opened for a package to work you should push back until they can give you that info. Better to spend time up front learning than spend more time later triaging.
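That “these four addresses on these eight ports” planning artifact can itself be made machine-checkable. As a hedged sketch (the addresses, ports, and function name below are all hypothetical, purely for illustration), the documented flows become a simple allow-list, and anything observed outside it is flagged for examination rather than silently permitted:

```python
# Hypothetical planning artifact: the flows an application is
# documented to need, gathered up front from the vendor/requestor.
ALLOWED_FLOWS = {
    ("10.1.20.5", 443),   # app server, HTTPS
    ("10.1.20.5", 8443),  # app server, management UI
    ("10.1.20.6", 5432),  # database backend
}

def flow_is_expected(dst_ip: str, dst_port: int) -> bool:
    """Anything outside the documented allow-list is suspect
    and should be examined, not quietly opened."""
    return (dst_ip, dst_port) in ALLOWED_FLOWS
```

The point isn’t the code; it’s that insisting on this list *before* implementation is the good kind of friction — the triage happens up front instead of during an outage.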

The other way to reduce friction in implementation is to shift the friction to policy. If the executives want you to compromise a control for the sake of their own use make them document it. Have them write it down that you have been directed to add a special configuration just for them. Keep that information stored in your DR plan and note it in your configuration repositories as well. Even a comment in the access list can help understand why you had to do something a certain way. Often the request to document the special changes will have the executives questioning the choice. More importantly, if something does go sideways you have evidence of why the change was made. And for executives that don’t like to look like fools this is a great way to have these kinds of one-off policy changes stopped quickly when something goes wrong and they get to answer questions from a reporter.

Tom’s Take

Friction is the real secret of security. When properly applied it prevents problems. When it’s present in too many forms it causes frustration and eventually leads to abandonment of controls or short circuits to get around them. The key isn’t to eliminate it entirely. Instead you need to apply it properly and make sure to educate about why it exists in the first place. Some friction is important, such as verifying IDs before entering a secure facility. The more that people know about the reasons behind your implementation the less likely they are to circumvent it. That’s how you keep the bad actors out and the users happy.

by networkingnerd at March 07, 2023 07:17 PM

Packet Pushers

Barriers To Kubernetes

If you’re a system administrator or Infrastructure Engineer that has: Managed upgrades for large-scale systems Managed high availability and horizontal scaling Deployed binaries on Linux or Windows VMs Deployed virtualization and bare-metal environments Kubernetes is going to be a major upgrade for you, how you deploy, and how you manage services. Kubernetes truly does make […]

The post Barriers To Kubernetes appeared first on Packet Pushers.

by Michael Levan at March 07, 2023 11:00 AM

Blog (Ivan Pepelnjak)

Dynamic MAC Learning: Hardware or CPU Activity?

A subscriber sent me a question along the lines of “does it matter that EVPN uses BGP to implement dynamic MAC learning whereas in traditional switching that’s done in hardware?” Before going into those details, I wanted to establish the baseline: is dynamic MAC learning really implemented in hardware?

Hardware-based switching solutions usually use a hash table to implement MAC address lookups. The above question should thus be rephrased as is it possible to update the MAC hash table in hardware without punting the packet to the CPU? One would expect high-end (expensive) hardware to be able to do it, while low-cost hardware would depend on the CPU. It turns out the reality is way more complex than that.
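Whether the table update happens in the forwarding ASIC or via a CPU punt, the learning logic itself is the same: record the source MAC against the ingress port of every received frame, and look up the destination MAC to pick an egress port (or flood on a miss). A toy sketch of that behavior (my own illustration — it deliberately ignores aging, VLANs, and hash-bucket collisions that real hardware has to deal with):

```python
from typing import Dict, Optional

class MacTable:
    """Toy MAC address table: learn source MACs, look up destinations.
    Real switches keep this in a hardware hash table with entry aging;
    this sketch models only the learn/lookup logic."""

    def __init__(self) -> None:
        self._table: Dict[str, int] = {}  # MAC address -> egress port

    def learn(self, src_mac: str, ingress_port: int) -> None:
        # Dynamic learning: (re)bind the source MAC to the port it
        # arrived on, covering the case of a host moving between ports.
        self._table[src_mac] = ingress_port

    def lookup(self, dst_mac: str) -> Optional[int]:
        # Known unicast -> forward out a single port;
        # None means "unknown unicast: flood to all other ports".
        return self._table.get(dst_mac)

# A frame from aa:bb:cc:00:00:01 arriving on port 1 teaches the
# switch where that MAC lives:
table = MacTable()
table.learn("aa:bb:cc:00:00:01", 1)
```

The hardware-vs-CPU question is purely about *who executes* the `learn()` step on the data path; the EVPN difference Ivan is getting at is that BGP moves that step out of the data path entirely.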

March 07, 2023 06:59 AM

March 06, 2023

Potaroo blog

An Economic Perspective on Internet Centrality

What sustains a digital monopoly in today's world? It's not the amassing of a huge workforce, or even having access to a large pool of capital. It's not even the use of proprietary technologies that are not accessible to others. So why isn't the Internet fulfilling its vision of profound and intense competitive pressure in every part of the digital supply chain? What is sustaining the domination of the digital world by a select group of behemoths? And, can we change this picture?

March 06, 2023 10:50 PM

Packet Pushers

Kubernetes Security And Networking 4: Helpful Tips To Secure The API Server – Video

In the previous video, Michael Levan walked through some security essentials for protecting worker nodes in a Kubernetes cluster. In this video he focuses on essential protections for the API server. He looks at security benchmarks from CIS, using Kubescape for security scanning, and how to integrate the two. Michael Levan hosts the “Kubernetes Unpacked” […]

The post Kubernetes Security And Networking 4: Helpful Tips To Secure The API Server – Video appeared first on Packet Pushers.

by The Video Delivery at March 06, 2023 09:53 PM

Blog (Ivan Pepelnjak)

netlab: Change Stub Networks into Loopbacks

One of the least-documented limitations of virtual networking labs is the number of network interfaces a virtual machine could have. vSphere supports up to 10 interfaces per VM, the default setting for vagrant-libvirt is eight, and I couldn’t find the exact numbers for KVM. Many vendors claim their KVM limit is around 25; I was able to bring up a Nexus 9300v device with 40 adapters.

Anyway, a dozen interfaces should be good enough if you’re building a proof-of-concept fabric, but it might get a bit tight if you want to emulate plenty of edge subnets.

March 06, 2023 06:38 AM

XKCD Comics

March 03, 2023

XKCD Comics

March 01, 2023

Packet Pushers

Ask JJX: What About the KeePass Vulnerability?

In the midst of LastPass’s repeated barrage of breaches, a pretty serious vulnerability was found in another common password manager — KeePass. This slid under most of our radars, including mine. Professor Cyber Naught of the Mastodons suggested I comment on the situation. I’m so glad he brought this up, because it highlights several critical […]

The post Ask JJX: What About the KeePass Vulnerability? appeared first on Packet Pushers.

by Jennifer Minella at March 01, 2023 03:13 AM

XKCD Comics

February 27, 2023

The Networking Nerd

Why Do YOU Have To Do It?

One of the things that I’ve seen as a common thread among people in the industry as of late is the subject of burnout. Sure, burnout is a common topic no matter what year we’re in but a lot more of what I’m starting to hear about is self-inflicted burnout. Taking on too many projects, doing more than one job, and even having too many things going on outside of your specific role are all contributors to burnout. How can we keep that from happening?

Atlas and His Burden

For me, one of the biggest reasons why I find myself swimming in frustration is because I am very quick to volunteer to do things. In part it’s because I want to make sure the job is done correctly. In another part it’s because I want to be seen as someone that is always willing to get things done. Add in a dash of people pleasing and you can see how this spirals out of control. I’m sure you’ve even heard that as a career advice at some point. I’ve even railed against it many times on this blog.

How can you overcome the impulse to want to volunteer to do everything? If you’re not in a more senior role it’s going to be hard to tell someone you can’t or won’t do something. As I learned last year from commenters you don’t always have that luxury. If you are in a senior role you also may find yourself quickly volunteering to ensure that the job is done properly. That’s when you need to ask an important question:

Why do I have to do this?

Check your ego at the door and make sure that your Aura of Superiority is suppressed. This isn’t about you being better than the job or task. This is about determining why you are the best person to do this job. Seems easy at first, right? Just explain why this is something you have to do. But when you dig into it things get a little less clear.

Are you the most skilled person at this task in the company? That’s a good reason for you to do it. But could you offer to show someone else how to do the thing instead? Especially if you’re the only one that can do it? Cross training ensures that others know what to do when time is critical. It’s also nice to be able to take a vacation without needing to check your email every ten minutes. Enabling others to do things means you’re not the only phone call every time it needs to be done.

Is this something you’re worried won’t be done correctly if you don’t do it? Why? Is it something very difficult to accomplish? If so, why not have a team work on it? Is this something that you already have an idea of how you want to do it? That’s a recipe for trouble. Because you’ll implement your ideas for the thing and then either get bored or distracted and forget all about it. That leads to others thinking you’ve dropped the ball. It could also lead to people passing you over when you have good ideas because they’re afraid you’re going to take the ball and drop it later. If you think that it won’t be done correctly without your input you should find a way to add your input but not make yourself responsible for the completion of the project.

Are you just taking on the task for the accolades of a job well done? Do you enjoy the feeling of being called out for a successful completion of something? That’s fairly standard. Do you enjoy being chastised when you fail? Does it bother you when you’re called out in front of the team for not delivering something? Again, standard behavior of a normal person. The problem is when the need for the former outweighs the aversion to the latter.

In this excellent Art of Network Engineering episode with Mike Bushong he recounts a story of a manager that pushed back against him when he complained that no one knew how busy he really was. Her response of “everyone just sees you not getting things done” really made him stop and realize that taking the entire world on your shoulders wouldn’t make anything better if you kept failing to deliver.

I could go on and on and belabor the point more but I think you understand why it’s important to ask why the task can’t be reassigned or shared. Rather than just refusing you’re trying to figure out if anyone else should be doing it instead of you. As someone with too many things to do it’s critical you’re able to get those done. Adding more to your plate won’t make anyone’s job any easier.

Tom’s Take

I feel that I will always struggle to keep from taking on too many things at once. It’s not quite a compulsion but it’s also difficult not to want to do something for someone or take on a task that really should be done by someone with more skill or more time. The key for me is to stop and ask myself the question in the title. If I’m not the best person to be doing the job or if there is someone else that I can show so I’m not the only one that knows what to do then I need to do that instead. Sharing knowledge and ensuring others can do the tasks means everyone is involved and you’re not overwhelmed. And that makes for a happier workplace all around.

by networkingnerd at February 27, 2023 08:21 PM

Packet Pushers

Who Are The Most Overpaid Tech CEOs?

Here are seven tech CEOs that showed up on a list of the 100 most overpaid chief executives. These IT execs are getting sky-high compensation. Their pay packages look even higher when compared to the median pay of their workers. For instance, the ratio between Intel CEO Pat Gelsinger’s compensation and the median pay of an Intel employee is 1,711 to 1.

The post Who Are The Most Overpaid Tech CEOs? appeared first on Packet Pushers.

by Drew Conry-Murray at February 27, 2023 10:45 AM