April 20, 2018

Internetwork Expert Blog

We’ve Added a New Serverless Computing Course to Our Video Library!

Tired of creating and maintaining server instances for every app, language, and framework you use? Want to focus on pure function code, instead of wasting time on server management? Learn how to run your functions as a service, in a DevOps-free environment, with our Introduction to Serverless Computing.

 


In This Course You’ll Learn:

  • What Serverless Computing is, and how it differs from conventional server hosting
  • How to save on operational costs by reducing DevOps, and only paying when your functions are active
  • Ways to easily run functions as microservices, which automatically scale as load increases
  • How to create serverless functions in AWS Lambda, Azure functions, Google Cloud Functions, and
    Algorithmia


About The Instructor:

Jon Peck is a full-stack developer, consultant, teacher, and startup enthusiast. With a Computer Science degree from Cornell University and two decades of industry experience, he now focuses on bringing scalable, discoverable, and secure machine-learning microservices to developers across a wide variety of platforms.

Speaker (conferences): DeveloperWeek, SeattleJS, Global AI Conf, AI Next, Nordic APIs, ODSC
Speaker (tech schools): Galvanize, CodeFellows, Metis, Epicodus, Alchemy
Organizer: Seattle Building Intelligent Applications Meetup
Educator: Cascadia College, Seattle C&W, independent instruction
Lead Developer: Empower Engine, Giftstarter, Mass General Hospital, Cornell University

Ready to Watch? All Access Pass members can view this course by logging in to their account. To purchase this course, visit ine.com.

by jdoss at April 20, 2018 06:44 PM

The Networking Nerd

It’s Time For Security Apprenticeships

Breaking into an industry isn’t easy. When you look at the amount of material that is necessary to learn IT skills, it can be daunting and overwhelming. Don’t let the for-profit trade school ads fool you: you can’t go from ditch digger to computer engineer in just a few months. It takes time and knowledge to get there.

However, there is one concept in non-technical job roles that feels very appropriate to how we do IT training, specifically for security. And that’s the apprenticeship.

Building For The Future

Apprenticeship is a standard for electricians and carpenters. It’s the way that we train new people to do the work of the existing workforce. It requires time and effort and a lot of training. But it also fixes several problems with the current trend of IT certification:

  1. You Can’t Get a Job Without Experience – Far too often we see people getting rejected for jobs at the entry level because they have no experience. But how are they supposed to get the experience without doing the job? IT roles paradoxically require you to be cheap enough to hire for nothing but expect you to do the job on day one. Apprenticeships fix this by showing that you’re willing to do the job and get trained at the same time. That way you can be properly trained on the job.
  2. You Need To Be Certified – Coupled tightly with the first problem is the certification trend. Yes, electricians and plumbers are certified to do the work. After 5 years of training. You don’t get to take a test after 3 months and earn Journeyman status. You have to earn it through training and work. It also ensures that every journeyman out there has the same basic level of work experience and training. Compare that with people getting a CCNA right out of college with no experience on devices or a desktop administrator position having an MCSE-level requirement to discourage people from applying.
  3. You Can’t Find a Mentor – This is a huge issue for entry-level positions. The people you work with are often so overtasked with work that they don’t have time to speak to you, let alone show you the ropes. By defining the role of the apprentice and the journeyman trainer, you can ensure that the mentoring relationship exists and is reinforced constantly through the training period. You don’t get through an apprenticeship in a vacuum.
  4. You Can’t Get Paid To Learn – This is another huge issue with IT positions. Employers don’t want to pay to train you. They want you to use the vast powers of the Internet to look up hacks on Stack Overflow and paste them into production equipment, rather than buying a book for you or letting you take a class. Some employers are better than others when it comes to highly skilled workers, but those employers are usually leveraging those skills to make more money, as in a VAR or MSP. Apprenticeships fix this problem by including a classroom component in the program: you spend time in a classroom every other week while you’re doing the work.

Securing the Future

The apprenticeship benefits above go double for security-related professions. Whereas there is plenty of material with which to train networking and storage admins, security is more of a game of cat-and-mouse. As new protections are developed, we must always figure out whether the old protections are still useful. We need to train people how to learn. We need to bring them up to speed on what works and what doesn’t, and on why googling for an old article about packet-filtering firewalls isn’t as relevant in 2018 as it was a decade ago.

More than that, we need to get security practitioners to a place where they can teach effectively and mentor people. We need to get away from the idea that rattling off a long list of recommended protections is the way to train people about security. We need to make sure we’re teaching fundamentals and proper procedures. And the only way that can happen is with someone who has the time and motivation to learn. Like an apprentice.

This would also have the added benefit of ensuring that the next generation of security practitioners learns the pitfalls before they make mistakes that compromise the integrity of the data they’re protecting. Making a mistake in security usually means someone finding out something they aren’t supposed to know, or data getting stolen and used for illicit means. By using apprenticeships as a way for people to grow, you can quickly bring people up to speed in a safe environment, without worrying that their decisions will lead to problems with no one there to check their work.


Tom’s Take

I got lucky in my IT career. I worked with two mentors that spent the time to train me like an apprentice. They showed me the ropes and made sure I learned things the right way. And I became successful because of their leadership. Today, I don’t see that mentality. I see people covering their own jobs or worried that they’re training a replacement instead of a valued team member. I worry that we’re turning out a generation of security, networking, and storage professionals that won’t have the benefit of knowing what to do beyond Google searches and won’t know how to teach the coming generation how to deal with the problems they face. It’s not too late to start by taking an apprentice under your wing today.

by networkingnerd at April 20, 2018 04:55 PM

ipSpace.net Blog (Ivan Pepelnjak)

OpenFabric with Russ White on Software Gone Wild

Continuing the series of data center routing protocol podcasts, we sat down with Russ White (of CCDE fame), author of another proposal: OpenFabric.

As always, we started with the “what’s wrong with what we have right now, like using BGP as a better IGP” question, resulting in “BGP is becoming the trash can of the Internet”.

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at April 20, 2018 06:36 AM

Ethan Banks on Technology

StayFocusd Extension For Chrome

During the last month or two, I’d gotten into a habit of trawling through Imgur, looking for memes I could spin into humorous tweets about networking. It became a game to see what tweets I could create that people would find funny.

That game was successful, in that I had many tweets that were liked and/or retweeted dozens or, in a few cases, hundreds of times. But there was a downside. I was spending a lot of time on Imgur seeking inspiration. I was also spending a lot of time composing tweets and checking reactions.

I Hurt Myself Today

This led to the familiar cycle of Internet addiction. I was hooked on Twitter…again. I’ve been through this with Twitter off and on for many years now. My use of Imgur was also obsessive, opening the app on my phone multiple times per day and scrolling, scrolling, scrolling while looking for new fodder.

Using social media in the context of addiction is subtly different from simply wasting time. Addiction, for me, means using social media when I didn’t plan to. There’s a compulsion that would drive me to fire up Tweetdeck and check out all of my carefully curated columns, review mentions, and review performance statistics.

The issue isn’t just the time used. It’s also the interruption of whatever it was that I was working on. Interruption takes me out of flow, reducing productivity for the day.

Engineered Synaptic Engagement

Lest you think I’m just a whiny bugger with a personal problem, the issue is far deeper. Social media is meant to hook you and compel you to click. It’s not merely a recreational activity that an unfortunate few lose control of like compulsive gamblers or alcoholics who can’t tame their vices. Andrew Sullivan wrote the following in New York Magazine.

“Do not flatter yourself in thinking that you have much control over which temptations you click on. Silicon Valley’s technologists and their ever-perfecting algorithms have discovered the form of bait that will have you jumping like a witless minnow. No information technology ever had this depth of knowledge of its consumers — or greater capacity to tweak their synapses to keep them engaged. And the engagement never ends.”

http://nymag.com/selectall/2016/09/andrew-sullivan-my-distraction-sickness-and-yours.html

Even more disturbing, I’m not entirely certain how addicted I was. I knew I was going through days of not getting nearly as much done as I thought I should have. I’d grind away for ten or more hours with seemingly little to show for it.

I knew I was dropping out of flow state when checking the socials, but it seemed like checking here and there shouldn’t have been so impactful. Again, Andrew Sullivan offers a clue.

“A small but detailed 2015 study of young adults found that participants were using their phones five hours a day, at 85 separate times. Most of these interactions were for less than 30 seconds, but they add up. Just as revealing: The users weren’t fully aware of how addicted they were. They thought they picked up their phones half as much as they actually did.”

http://nymag.com/selectall/2016/09/andrew-sullivan-my-distraction-sickness-and-yours.html

For me, using my phone was one part of the issue, but the primary screen I interact with is my Mac workstation. So, I had a couple of issues to resolve if I wanted to break the cycle of addiction one more time.

My phone is easy to resolve. I don’t run any social media apps for the most part anyway, so I just removed Imgur. That broke that habit.

Staying Focused With Stayfocusd

The workstation is more of a challenge, as I spend most of my time working in the Chrome browser. Hitting Tweetdeck is just too easy. To help with this, I’ve employed the Stayfocusd extension for Chrome.

Stayfocusd has a lot of features, but I have kept it simple so far. I went into Blocked Sites, and added linkedin.com and twitter.com. That handles LinkedIn, Twitter, and any subdomains that might be under them such as tweetdeck.twitter.com.

There’s another feature of Stayfocusd I really like, which is that the blocks don’t have to be absolute. Because I operate a media company, I do actually need to check the socials now and again. People communicate with us using Twitter and LinkedIn. But I’ve found I can do all the checking I need to do in two minutes.

I like the “max time” feature because I can do the minimal scan of Twitter mentions and LinkedIn notifications I need to without getting mired down in mindless scrolling or checking stats. The two minute limit keeps me honest while checking, and also curtails the number of times I kick off a social session.

But Does It Work?

What’s the end result of all this? Stayfocusd is a good addition to my social media defense arsenal. I’m back in my groove of getting a lot done in a sitting. Being able to focus more deeply and for longer periods of time. More flow state productivity.

I don’t think Stayfocusd is an absolute cure. That will only come when I delete my two remaining social media accounts. But for now, I’m pleased that Stayfocusd has assisted in breaking this latest cycle of addiction.

by Ethan Banks at April 20, 2018 02:21 AM

April 19, 2018

My Etherealmind

Why Enterprise IT Customers Are Stupid

There are many ways that buyers of Enterprise IT are stupid. Mostly it’s bad leadership and poor management that lead to poor decisions and processes like ITIL. Sometimes it’s pride preventing you from admitting failure, or the allure of a free steak lunch (putting one over your salary owner by paying for it with overpriced […]

by Greg Ferro at April 19, 2018 06:36 PM

ipSpace.net Blog (Ivan Pepelnjak)

Why Can’t We All Use Provider-Independent IPv6 Addresses?

Here’s another back-to-the-fundamentals question I received a while ago when discussing IPv6 multihoming challenges:

I was wondering why enterprise can’t have dedicated block of IPv6 address and ISPs route the traffic to it. Enterprise shall select the ISP's based on the routing and preferences configured?

Let’s try to analyze where the problem might be. First the no-brainers:

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at April 19, 2018 07:21 AM

April 18, 2018

Internetwork Expert Blog

Check Out Our New Certified Wireless Security Professional (CWSP) Course!

This course covers WLAN Security Basics, Wi-Fi Attack Vectors, 802.11 Security Design Considerations, and 802.11 Authentication.

 

Instructor: Steve Evans

Course Duration: 2hr 44min


What You’ll learn:

The first module discusses the need for Wi-Fi security and provides background on the standards and decision-making bodies; it also discusses 802.11 in the enterprise. In the second module, you will learn the typical modes of attack on Wi-Fi implementations and how different attack methods expose the vulnerabilities of a Wi-Fi network. Module 3 describes the need for performing a risk assessment and establishing a security policy, and also covers management and monitoring of the enterprise Wi-Fi network. The last module details the different 802.11 authentication methods in use, from password-based to enterprise-based, and the complexities of the more stringent methods.

Ready To Watch? Click here to view on our streaming site. Don’t have an All Access Pass? No problem, you can also purchase this course at ine.com.

by jdoss at April 18, 2018 03:56 PM

ipSpace.net Blog (Ivan Pepelnjak)

Pragmatic Data Center Fabrics

I always love to read the practical advice by Andrew Lerner. Here’s another gem that matches what Brad Hedlund, Dinesh Dutt and myself (plus numerous others) have been saying for ages:

One specific recommendation we make in the research is to “Build a rightsized physical infrastructure by using a leaf/spine design with fixed-form factor switches and 25/100G capable interfaces (that are reverse-compatible with 10G).”

There’s a slight gotcha in that advice: it trades the implicit complexity of chassis switches for the explicit complexity of fixed-form switches.

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at April 18, 2018 06:31 AM

April 17, 2018

ipSpace.net Blog (Ivan Pepelnjak)

Interview: Programmable Infrastructure Is Just a Tool

A while ago I did an interview about programmable infrastructure that got published as an article in mid-March. As you might expect, my main message was “technology will never save you unless you change your processes to adapt to its benefits.”

Hope you’ll enjoy it!

by Ivan Pepelnjak (noreply@blogger.com) at April 17, 2018 06:50 AM

April 16, 2018

ipSpace.net Blog (Ivan Pepelnjak)

Should I Take CCIE DC or ipSpace.net Data Center Online Course?

Got this question from a networking engineer who couldn’t decide whether to go for CCIE Data Center certification or attend my Building Next-Generation Data Center online course:

I am considering pursuing CCIE DC. I found your Next-Generation DC course very interesting. Now I am bit confused trying to decide whether to start with CCIE DC first and then do your course.

You might be in a similar position, so here’s what I told him.

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at April 16, 2018 11:10 AM

ipSpace.net Subscription Now Available with PayPal

Every second blue moon someone asks me whether they could buy ipSpace.net subscription with PayPal. So far, the answer has been no.

Recently we started testing whether we could use Digital River to solve a few interesting challenges we had in the past, and as they offer PayPal as a payment option, it seemed to be a perfect fit for a low-volume trial.

The only product that you can buy with PayPal during the trial is the standard subscription – just select PayPal as the payment method during the checkout process.

Finally: the first three subscribers using PayPal will get an extra 6 months of subscription.

by Ivan Pepelnjak (noreply@blogger.com) at April 16, 2018 06:06 AM

April 14, 2018

ipSpace.net Blog (Ivan Pepelnjak)

Worth Reading: The Death of Expertise

Bruno Wollman pointed me to an excellent article on the ignorance of expertise and confidence of the dumb. Here’s the TL&DR summary (but you should really read the whole thing):

  • The expert isn’t always right;
  • An expert is far more likely to be right than you are;
  • Experts come in many flavors – usually you need a combination of education and expertise;
  • In any discussion, you have a positive obligation to learn at least enough to make the conversation possible. University of Google doesn’t count;
  • While you’re entitled to have an opinion, having a strong opinion isn’t the same as knowing something.

Enjoy ;)

by Ivan Pepelnjak (noreply@blogger.com) at April 14, 2018 07:08 AM

April 13, 2018

Moving Packets

Mellanox, Ixia and Cumulus: Part 3

Last, but not least, in the technology triumvirate presenting a joint session at Networking Field Day 17 was Cumulus Networks. This post looks at the benefits of Cumulus Linux as a NOS on the Mellanox Spectrum Ethernet switch platform.

Cumulus/Mellanox/Ixia Logos

Cumulus Networks

I’ve not yet managed to deploy Cumulus Linux in anger, but it’s on a fairly short list of Network Operating Systems (NOS) which I would like to evaluate in earnest, because every time I hear about it, I conclude that it’s a great solution. In fact, I’m having difficulty typing this post because I have to stop frequently to wipe the drool from my face.

Cumulus Linux supports around 70 switches from 8 manufacturers at this time and, perhaps obviously, that includes the Mellanox Spectrum switches that were presented during this session. This is the beauty of disaggregation, of course: it’s possible to make a hardware selection, then select the software to run on it. Mellanox made a fairly strong case for why the Spectrum-based hardware is better than others, so now Cumulus has to argue why it would be the best NOS to run on the Mellanox hardware.

Cumulus Linux, as the name suggests, is based on Debian linux. Cumulus Networks then adds its deep knowledge of the ASICs and platforms it supports in order to write the linux drivers necessary to interface with the switch.

One of the arguments made against much commercial use of open source software is that the companies do not give back to the open source community while profiting from it. Cumulus Networks is proud to tell us that it feeds patches back upstream where possible. In the NFD17 presentation, Cumulus TME Pete Lumbis described how Cumulus wanted to add support for VRFs to linux, as they felt that the existing network namespaces were not an adequate solution. Cumulus went back to the community with ideas and code to implement VRFs, the proposal was accepted, and VRFs are now part of the linux kernel. Once linux supported VRFs natively, Cumulus implemented them on the switching platform. The end result – as with pretty much everything in Cumulus Linux – is that configuring a VRF or an interface, for example, on Cumulus Linux is done exactly the same way as it would be on any other linux device.
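
To make that concrete, here’s a minimal sketch using standard iproute2 commands (the VRF name, table number and interface below are invented for illustration); the same commands apply on Cumulus Linux and on a stock linux server:

    # Create a VRF device bound to kernel routing table 1010
    ip link add blue type vrf table 1010
    ip link set blue up
    # Move a switch port (or any interface) into the VRF
    ip link set swp1 master blue
    # Inspect the routes inside the VRF
    ip route show vrf blue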

Cumulus Networks was also a key player in the forking of Quagga to create Free Range Routing (FRR). Some parts of the community, including Cumulus Networks and other vendors, were unhappy that Quagga was not being developed and patches were not being reviewed and updated at the kind of speed necessary to keep Cumulus – a user of Quagga – competitive. In the end, unable to find other solutions to a claimed backlog of 3,000 patches for Quagga, the project was forked and Cumulus Linux now uses FRR instead.

It is of note that not everybody is, or wants to be, a linux admin, and for many network engineers the linux CLI is not comfortable. With that in mind, Cumulus created the Network Command Line Utility (NCLU). Linux admins can just edit the config files if desired, while network engineers are more likely to appreciate NCLU’s online help and tab completion of commands. Under the hood, NCLU is just a friendlier abstraction of the underlying configuration files, so both server and network admins can view and manage the configurations in whichever way makes most sense to them. Pete Lumbis notes that in some cases the NCLU command takes output which in linux is positively messy and turns it into something far clearer and more familiar; but the source for the information is the same regardless of which commands are used to view it.
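
As a rough sketch of the workflow (the interface and address are invented; NCLU stages changes and only writes the underlying linux config files on commit):

    # Stage a change, review what would be applied, then commit it
    net add interface swp1 ip address 10.1.1.1/31
    net pending
    net commit
    # Online help and tab completion are built in
    net help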

It’s also of note that since everything NCLU does is reflected within linux itself, management of Cumulus switches can be achieved using the tools that already exist to manage linux servers. That’s right; finally there’s a network product which Ansible understands right out of the box, because it’s really a linux server!
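
For instance, a plain Ansible ad-hoc command over SSH is enough, with no network-specific plugin required (the hostname and inventory file here are hypothetical):

    # Run an NCLU show command on a Cumulus switch using the stock command module
    ansible leaf01 -i hosts -m command -a "net show interface"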

Cumulus VX

Cumulus VX is a free VM of Cumulus Linux supporting VMware, VirtualBox and KVM. It can be downloaded and used for network testing and learning purposes, and with the assistance of a tool like Vagrant, the connectivity between multiple VMs can be automated so that full network architectures can be simulated using Cumulus VX. The demo on the video is entirely virtualized, based on this topology:

Cumulus Demo Topology

The configurations in Cumulus VX are identical to those on dedicated switching hardware, so once modeled and validated successfully, the configuration can simply be copied over to the target devices with a high degree of confidence that it will work. It’s worth noting that Cumulus VX is not optimized for throughput in any way (no DPDK or similar); it is intended purely as a validation platform, not a production platform.
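
Spinning up a single test instance takes just a few commands; a quick sketch, assuming the CumulusCommunity/cumulus-vx box published on Vagrant Cloud (verify the current box name before relying on it):

    # Fetch the box, boot it, and log in
    vagrant init CumulusCommunity/cumulus-vx
    vagrant up --provider virtualbox
    vagrant ssh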

NetQ

The feature which most excited me (it pushed my nerd buttons) was NetQ. Every switch is configured to run a NetQ agent which monitors Netlink messages as well as a few other things. The NetQ agent can thus see the operational status of its switch and store the status in a Redis in-memory distributed database, meaning that the status of every device is known on, well, every device, so it’s possible to see the current state of an entire fabric from a single device.

For example, the screenshot below is from the demo session and includes an example of tracing a mac (netq show mac), running a layer 2 trace (netq trace ... from leaf02) and checking current BGP session status (netq check bgp):

Cumulus NetQ Screenshot

NetQ can ‘perform’ a layer 2 or layer 3 traceroute from anywhere to anywhere else in the network, including identifying ECMP links along the way. What’s interesting about this is that no packets are sent; the traceroutes are calculated based on the known status of every node, so it’s all theoretical but based on real device status.
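
To give a flavor of the CLI, the three checks from the demo look roughly like this (the argument syntax is approximate and the MAC/VLAN values invented; the NetQ documentation has the exact forms):

    # Where has this MAC address been learned?
    netq show mac 44:38:39:00:00:5a vlan 10
    # Theoretical layer 2 trace computed from stored state -- no packets sent
    netq trace 44:38:39:00:00:5a vlan 10 from leaf02
    # Fabric-wide BGP session health
    netq check bgp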

Because the state data are held in a Redis database, if there’s not a netq command providing the desired output, users can choose to write a query as a select statement instead:

Cumulus NetQ SQL Screenshot

And one more thing: running a netq command means querying the current system state data, so if a system were to keep track of that state data with timestamps, it would be possible to ask for a netq command to be run on the system state at a specific time in the past. Enter the telemetry server (another VM), which does exactly that. To me this kind of thing is like gold when troubleshooting a report of a problem that occurred at a specific time in the past.
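
In other words, something along these lines (the time-modifier syntax is my assumption and should be treated as illustrative):

    # Re-run the BGP check against the state recorded 30 minutes ago,
    # answered from the telemetry server's timestamped data
    netq check bgp around 30m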

Cumulus has extended NetQ support to Docker as well, so it’s possible to see, for example, which containers are running on a given host, or where a container connects to the fabric, and so on. With the telemetry server running, the same queries can be used to show how containers are being spun up and shut down.

I know I can’t really do justice to Cumulus Linux and NetQ, but Cumulus Networks’ Pete Lumbis does a great job in this NFD17 session video:

Video: https://player.vimeo.com/video/253197258

Conclusions

I really like the way Cumulus Networks thinks, and its commitment to preserving the ‘linux’ way of doing things. Obviously, in this context Cumulus Linux is being presented as a good partner for Mellanox switches, but even if another hardware vendor is chosen, Cumulus always seems to me to be a pretty sweet product that’s accessible to administrators with either server experience or network experience. Two thumbs way up!

 

Disclosures

I was an invited delegate to Network Field Day 17 at which all the companies listed above presented. Sponsors pay for presentation time in front of the NFD delegates, and in turn that money funds the delegates’ transport, accommodation and nutritional needs while there. With that said, I always want to make it clear that I don’t get paid anything to be there (I took vacation), and I’m under no obligation to write so much as a single word about any of the presenting companies, let alone write anything nice about them. If I have written about a sponsoring company, you can rest assured that I’m saying what I want to say and I’m writing it because I want to, not because I have to.

You can read more here if you would like.

If you liked this post, please do click through to the source at Mellanox, Ixia and Cumulus: Part 3 and give me a share/like. Thank you!

by John Herbert at April 13, 2018 06:45 PM

The Networking Nerd

On Old Configs and Automation

I used to work with a guy that would configure servers for us and always include an extra SCSI card in the order. When I asked him about it one day, he told me, “I left it out once and it delayed the project. So now I just put them on every order.” Even after I explained that we didn’t need it over and over again, he assured me one day we might.

Later, when I started configuring networking gear, I would always set a telnet password for every VTY line going into the switch. One day, a junior network admin asked me why I configured all 15 instead of just the first 5 like they teach in the Cisco guides. I shrugged my shoulders and just said, “That’s how I’ve always done it.”
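
For anyone who hasn’t stared at an IOS config lately, the two habits differ only in the line range (a minimal sketch; the password is a placeholder):

    ! The guides typically secure only the first five lines...
    line vty 0 4
     password s3cr3t
     login
    ! ...while my habit covered every line on the box
    line vty 0 15
     password s3cr3t
     login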

The Old Ways

There’s no more dangerous phrase than “That’s the way it’s always been.”

Time and time again we find ourselves falling back on the old rule of thumb or an old working configuration that we’ve made work for us. It’s comfortable for the human mind to work from a point of reference toward new things. We find ourselves doing it all the time. Whether it’s basing a new configuration on something we’ve used before or trying to understand a new technology by comparing it to something we’ve worked on in the past.

But how many times have those old configurations caused us grief? How many times have we been troubleshooting a problem, only to find that we had configured something that shouldn’t have been configured the way that it was? Maybe it was an old method of doing hunt groups. Or perhaps it was a legacy QoS configuration that isn’t supported any more.

Part of our issue as networking professionals is that we want to concentrate on the important things. We want to implement solutions and ideas that work for our needs, not concentrate on the minutiae of configuration. Sure, the idea of configuring a switch from bare metal to working config is awesome the first time you do it. But the fifteenth time you have to configure one in a row is less awesome. That’s why copy-and-paste configurations are so popular with people that just want to get the job done.

New Hotness

This idea of using old configurations for new things takes even more importance when you start replacing the boring old configuration methods with new automation and API-driven configuration models. Yes, APIs make it a lot easier to configure a switch programmatically. And automation tools like Puppet and Ansible make it much faster to bring a switch online from nothing to running in the minimum amount of time.

However, even with this faster configuration method, are we still using old, outdated configurations to solve problems? Sure, I don’t have to worry about configuring VLANs on the switch one at a time. But if my configuration is still referencing VLANs that are no longer in the system, that makes it very difficult to keep the newer switches running optimally. And that’s just assuming the configuration is old and outdated. What if we’re still using deprecated commands?

APIs are great because they won’t support unsupported things. But if we don’t scrub the configuration now and then to remove these old lines and protocols, we’ll quickly find ourselves in a world of trouble, because those outdated and broken things will bring the API to a halt. Yes, the valid commands will still be entered correctly, but if those valid commands rely on something invalid to work properly, you’re going to find things broken very fast.
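
A periodic scrub doesn’t have to be fancy. Here’s a minimal sketch, assuming device configs are saved as flat files and you keep a hand-maintained list of deprecated commands (both file names are made up):

    # Flag any known-deprecated lines lurking in saved configs
    for cfg in configs/*.cfg; do
      echo "== $cfg =="
      grep -n -F -f deprecated-commands.txt "$cfg" || echo "   clean"
    done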

What makes this whole thing even more irritating is that most configurations need to be validated and approved before being implemented. Which makes the whole process even more difficult to manage. Part of the reason why old configs live for so long is that they need weeks or months of validation to be implemented effectively. When new platforms or configuration methods crop up it could delay new equipment installation. This sometimes leads to people installing new gear with “approved” configs that might not be the best fit in order to get that new equipment into service.


Tom’s Take

So, how do we fix all this? What’s the trick? Well, it’s really a combination of things. We need to make sure we audit configs regularly to keep the old stuff from continuing on past the expiration dates. We also need to continually resubmit new configurations to the approvals process. Just like disaster recovery documentation, configurations are living, breathing documents that should always be current and refreshed. The more current your configurations, the less likely you are to have old cruft keeping your systems running at subpar performance. And the less likely you are to have to worry about breaking new things like APIs and automation systems in the future.

by networkingnerd at April 13, 2018 04:01 PM

My Etherealmind

Is Networking Complex/Hard ?

It’s not complicated (natch). It’s distributed. And we don’t have the visibility to know. Distributed Systems What makes networking hard? A network is a distributed system where state must be shared between devices that are unreliably connected. It’s a fallacy that a network will ever be reliable or predictable. Skills Network technologies and their fundamentals […]

by Greg Ferro at April 13, 2018 02:43 PM

ipSpace.net Blog (Ivan Pepelnjak)

Video: Automatic Diagramming with PowerNSX

Here's a trick question: how often do your Visio diagrams match what's really implemented in your network?

Wouldn't it be great to be able to create or modify them on-the-fly based on what's really configured in the network? That's exactly what Anthony Burke demonstrated in the PowerNSX part of PowerShell for Networking Engineers webinar (source code).

You’ll need at least free ipSpace.net subscription to watch the video.

by Ivan Pepelnjak (noreply@blogger.com) at April 13, 2018 08:18 AM

April 12, 2018

My Etherealmind

Too Big To Fail is More Likely to Fail ?

People think that big companies are too big to fail and that’s why you should buy from big companies. Except that this is no longer true. To wit: HPE just divested all of its software assets. While HPE maintains a substantial interest in the new owner, I’m confident that HPE will walk away from those […]

by Greg Ferro at April 12, 2018 06:58 PM

ipSpace.net Blog (Ivan Pepelnjak)

EVPN Route Target Considerations in EBGP Environment

The proponents of the “let’s run EVPN over EBGP underlay” idea often ignore an interesting challenge: EVPN advocates the use of automatically-generated Route Targets, which might not work when every leaf switch uses a different AS number.

I explored this particular can of worms in the EVPN Route Target Considerations section of the Using BGP in a Data Center Leaf-and-Spine Fabric saga.

by Ivan Pepelnjak (noreply@blogger.com) at April 12, 2018 09:04 AM

April 11, 2018

Moving Packets

Mellanox, Ixia and Cumulus: Part 2

This post is part two of three in a series looking at the joint presentations made by Mellanox, Ixia and Cumulus at Networking Field Day 17, in February 2018. More specifically, this post looks at what part Ixia has to play in the deployment of an Ethernet switch fabric built using Mellanox switches and running Cumulus Linux as the Network Operating System (NOS).

Cumulus/Mellanox/Ixia Logos

Ixia

What confused me most about a presentation from Mellanox, Ixia and Cumulus about Ethernet fabrics was figuring out what role Ixia would play in the disaggregated model. Mellanox makes the switch hardware and Cumulus makes the switch software, so Ixia fits, well, where exactly?

IxNetwork

IxNetwork is billed as an end-to-end validation solution, which in many ways undersells what it’s all about. Rather than being just more traffic-generating test equipment, IxNetwork can emulate multiple switch and server devices so that a single piece of test hardware can be connected to what it believes is a large existing infrastructure, and that hardware’s behavior and resiliency can be validated. In the demo topology, IxNetwork connects to a physical Mellanox Spectrum switch running Cumulus Linux, emulating connected servers as well as an entire leaf/spine EVPN/VXLAN fabric, attached hosts and VTEPs, to which the physical switch can be connected.

Ixia Demo Topology

The emulated environment establishes EBGP connections to the Mellanox/Cumulus switch as it would in a real network. Additional clarity may be gained by looking at the same demo configuration in the Scenario tab of the IxNetwork UI:

Ixia GUI

Ixia’s IxNetwork allows one or more physical switches to be tested in the context of a fully emulated data center environment, validating packet loss, latency and similar statistics end-to-end, allowing captures to be taken at any point in the network, and so forth. IxNetwork can connect to interfaces ranging from 1Gbps to 400Gbps, so it’s definitely keeping pace with current hardware. This capability would be incredibly helpful if I were evaluating new hardware for my network, but for end users like me this seems like only an occasional need until a purchasing decision has been made.

There are two videos for Ixia; the first is an introduction to IxNetwork:

Video: https://player.vimeo.com/video/253197353

The second is the product demo itself:

Video: https://player.vimeo.com/video/253197308

Based on the presentations alone, I was trying to determine use cases for IxNetwork that would allow me to justify spending money on this test suite. Certainly, if I were a switch vendor, I might appreciate the ability to prove my hardware’s capabilities. Similarly, if I were evaluating different vendors’ hardware as a potential new switching platform for my network, I’d be very interested to have this kind of emulated test environment to hand, as it would allow me to test the hardware as part of a larger environment where it’s running protocols and switching packets the way it would in a real network (e.g. EBGP and VXLAN); after all, pushing a stream of L2 frames through a switch at high speed does little to confirm how the switch copes if it’s also having to read VXLAN headers and make routing decisions for each frame/packet. These are, however, very limited use cases, and potentially only transient in need, which would not encourage me to make the investment.

However, researching a little further, it seems that IxNetwork goes beyond what was presented, offering emulation of routed networks, for example, able to interface with physical equipment using a variety of IGPs (e.g. ISIS, OSPF, EIGRP and RIP), plus IBGP and EBGP (including FlowSpec). It can communicate using MPLS (supporting LDP and RSVP-TE) with L2VPN, L3VPN, Multicast VPN and more; it can emulate PIM-SM/SSM and IGMP/MLD querier/receiver for multicast; and it supports LACP and BFD. It’s possible therefore to emulate a large routed network, enterprise WAN or service provider WAN and see how connected equipment (or small sections of physical infrastructure) would behave and perform as part of the wider network, rather than testing components in isolation. IxNetwork seems to combine complex multi-protocol network emulation with competent traffic generation capabilities.

Conclusions

I appreciate seeing how IxNetwork can be used to validate the performance of the Mellanox/Cumulus switch combination, though this does not fill me with a need to invest in the Ixia product myself. However, I can see how service providers, for example, would be all over IxNetwork as a way to consistently test and validate things like Network Functions Virtualization (NFV), software-defined CEs and anything else which might need to be evaluated. I can imagine services being tested using IxNetwork purely as an emulated network in the service path to provide realistic testing and failure-scenario outcomes. The relationship of IxNetwork to the Mellanox/Cumulus switch seems to be limited to validating that this combination of hardware and software performs well, as expected, and there is for sure some value in that.

To be fair to Ixia, I don’t think IxNetwork is a good fit for what I currently do, so my gut reaction is not necessarily one of excitement. However, I can see how a product like this could be an everyday tool in some environments, and could help improve the success of product deployments. With that in mind, IxNetwork gets a sincere ‘would definitely evaluate’ if I were in the market for a tool of this type.

 

Disclosures

I was an invited delegate to Network Field Day 17 at which all the companies listed above presented. Sponsors pay for presentation time in front of the NFD delegates, and in turn that money funds the delegates’ transport, accommodation and nutritional needs while there. With that said, I always want to make it clear that I don’t get paid anything to be there (I took vacation), and I’m under no obligation to write so much as a single word about any of the presenting companies, let alone write anything nice about them. If I have written about a sponsoring company, you can rest assured that I’m saying what I want to say and I’m writing it because I want to, not because I have to.

You can read more here if you would like.

If you liked this post, please do click through to the source at Mellanox, Ixia and Cumulus: Part 2 and give me a share/like. Thank you!

by John Herbert at April 11, 2018 06:00 PM

My Etherealmind

Using Sales People for Tech Support is Expensive

First published in Human Infrastructure Magazine in Oct 2017. When something goes wrong with a product, your first stop is likely to be tech support. Those painfully expensive maintenance agreements that you pay for every year get you access to ‘world class’ support services. ORLY? Hopefully the problems occur after you bought and deployed the […]

by Greg Ferro at April 11, 2018 05:21 PM

Internetwork Expert Blog

We’ve Added a New Microsoft Administration Course to Our Video Library!

Considering Windows Server 2016? In this helpful course, get the details about Windows Server 2016’s basic functionality and the features that administrators use on an almost daily basis.

 


Why You Should Watch:

If you are interested in administering Windows Server 2016 and need to know the basics, this is where you start! This course covers the basic utilities you will use as a system administrator, how to get to them, and how they work.


What You’ll Learn:

This course covers installation methods, service packs, troubleshooting, basic features of Active Directory, data storage, remote services, network monitoring, reliability and availability, permissions, security, and virtualization.


About the Instructor:

Melissa Hallock has been in the IT field since 1996, when she first began working with hardware. While working on a Bachelor of Applied Science in Networking, she landed her first IT job, as a LAN Technician at one of Forbes’ top 100 growing companies, where she worked with all things Microsoft. Later she migrated to Linux and Mac operating systems. Having always worked in an education setting as a tech, she decided to start teaching, and began at the second-largest private college in Michigan. She quickly became the most sought-after instructor and decided to pursue a Master of Information. After she completed her master’s, she expanded her teaching to also cover several programming courses.

Melissa also holds several certifications, including A+, Network+, Cloud+, and the Windows 7 certification. She is now working on the Security+ certification, and will then get started on her MCSA.

What Are You Waiting For? Watch this video on our streaming site.

by jdoss at April 11, 2018 05:16 PM

SNOsoft Research Team

Retro: FACEBOOK – Anti-Social Networking (2008).

This is a retro post about a penetration test that we delivered to a client back in 2008. During the test we leveraged personal data found on Facebook to construct and execute a surgical attack against an energy company (critical infrastructure). The attack was a big success and enabled our team to take full control of the client’s network, domain, and critical systems.

Given the recent press about Facebook and its privacy issues, we thought it would be good to shed light on the risks that its users create for the companies and/or agencies that they work for. It is important to stress that the problem isn’t Facebook, but the way that people use and trust the platform. People have what could be described as an unreasonable expectation of privacy when it comes to social media, and that expectation directly increases risk. We hope that this article will help to raise awareness about the very real business risks surrounding this issue.

 

Full Writeup (Text extract from PDF): June 2008
FACEBOOK Anti-Social Networking:
“It is good to strike the serpent’s head with your enemy’s hand.”

THE FRIEND OF MY ENEMY IS MY FRIEND. (2008)

The Facebook Coworker search tool can be abused by skilled attackers in sophisticated attempts to compromise personal information and authentication credentials from your company’s employees. Josh Valentine and Kevin Finisterre of penetration testing company Netragard, Inc. (also known as Peter Hunter and Chris Duncan) were tasked with conducting a penetration test against a large utility company. Having exhausted most conventional exploitation methods, they decided to take an unconventional approach to cracking the company’s networks. In this case, they decided that a targeted attack against the company’s Facebook population would be the most fruitful investment of time.

Since Facebook usage requires that you actually sign up, Josh and Kevin had to research believable backgrounds for their alter egos, Peter and Chris. The target company had a fairly large presence in the US, with four offices in various places. Due to the size of the company, it was easy to cherry-pick bits and pieces of information from the hundreds of available profiles, and because many profiles can be browsed without any prior approval, gathering basic information was easy. Armed with new identities based on the details and demographics of the company’s Facebook population, it was time to make some new friends. After searching through the entries in the Coworker search tool, they began selectively attempting to befriend people. In some cases the attempts were completely random, and in others they tried to look for ‘friendly’ people. The logic was that once Peter and Chris had a few friends on their lists, they could just send out a few mass requests for more new friends. With at least four or five friends under their belt, the chances of having overlapping friends would increase.

“by the way… thanks for the hookup on the job. I really appreciate it man.”

Appearing as if they were ‘friends of friends’ made convincing people to accept the requests much easier. Facebook behavior such as the ‘Discover People You May Know’ sidebar also added the benefit of making people think they knew Peter and Chris. Blending in with legit accounts meant that the two fake accounts needed to seem like real people as much as possible. Josh and Kevin first came up with basic identities that were just enough to get a few friends. If they wanted to continue snaring new friends and not raise any suspicions with existing friends, they would need to be fairly active with the accounts. Things needed to get elaborate at this point, so Josh and Kevin combed the internet looking for random images as inspiration for character backgrounds. Having previously decided on their desired image and demographic, they settled on a set of pictures to represent themselves with. They came up with a few photos from the surrounding area and even made up a fake sister for Chris. All of this helped solidify the fact that they were real people in the eyes of any prospective friends. Eventually enough people had accepted the requests that Facebook began suggesting Chris and Peter as friends to many of the other employees of the target company.

Cherry-picking individual friends was obviously the way to get a good profile started, but Josh and Kevin were really after as many of the employees as possible, so a more bulk approach was needed: batch requests were the way to go. After they were comfortable that their profiles looked real enough, the mass targeting of company employees began. Simply searching the company Facebook network yielded 492 possible employee profiles. After a few people became their friends, the internal company structure became more familiar, which allowed the pair to make more educated queries for company employees. Due to the specific nature of the company’s industry, it was easy to search for specific job titles. Anyone could make a query in a particular city, search for a specific job title like “Landman” or “Geologist”, and have a reasonable level of accuracy when targeting employees.

At the time the Chris Duncan account was closed there were literally 208 confirmed company employees as friends. Out of the total number of accounts that were collected only 2 or 3 were non employees or former employees. The company culture allowed for a swift embracing of the two fictitious individuals. They just seemed to fit in. Given enough time it is reasonable to expect that many more accounts would have been collected at the same level of accuracy.

Facebook put some measures in place to stop people from harvesting information. For the first 50 or so friend requests that were sent, Facebook required a response to a captcha. Eventually, Facebook was satisfied that the team was not a pair of bots and allowed requests to occur in an unfettered manner. The team did run into what appeared to be per-hour and per-day limits on the number of requests that could be sent, but there was a sweet spot, and the team was able to maintain a nice flow of requests.

“Hi Chris, are you collecting REDACTED People? :)”

The diverse geography of the company and its embrace of internet technologies made the ruse seem comfortable. In many cases employees approached the team suspecting suspicious behavior, but they were quickly appeased with a few kind words and emoticons. The hometown appeal of the duo’s profiles seemed to help people drop their guard and usual inhibitions. With access to the personal details of several company employees at their fingertips, it was now time to sit back and reap the benefits. Once the pair had a significant employee base, intra-company relationships were outlined and common company culture was revealed. As an example, several employees noted and pointed out to Chris and Peter that they could not find either individual in the “REDACTED employee directory”. Small tidbits of information like this helped Kevin and Josh carefully craft other information that was later fed to the people they were interacting with. With a constant flow of batch requests going, there was a consistent and equally constant flow of new friends to case for information.

Over a seven day period of data collection there were as few as 8 newly accepted friends or as many as 63.

Days with more than 20 or so requests were not at all unusual for us.

Even after our testing was concluded the profiles continued to get new friend requests from REDACTED.

May 26 – 11
May 25 – 9
May 25 – 8
May 23 – 15
May 22 – 26
May 21 – 63
May 20 – 40

Every bit of information gleaned was considered when choosing the ultimate attack strategy. The general reactions from people also helped the team gauge what sort of approach to take when crafting the technique for the coup de grâce. Josh and Kevin had to go with something that was both believable and lethal at the same time. Having cased several individuals and machines on the company network it was time to actually attack those lucky new friends.

“ALL WARFARE IS BASED ON DECEPTION Hence, when able to attack, we must seem unable; when using our forces we must seem inactive; when we are near, we must make the enemy believe we are far away…”

Having spent several days prior examining all possible means of conventional exploitation, Kevin and Josh were ready to move on and actually begin taking advantage of all the things they had learned about the energy company’s network.

“Forage on the enemy, use the conquered foe to augment one’s own strength”

During their initial probes into the company networks, the duo came across a poorly configured server that provided a web-based interface to one of the company’s services. Having reverse engineered the operation of the server and subsequently compromised the back-end database that made the page run, they were able to manipulate the content of the website in a manner that allowed for the theft of company credentials in the near future. During information gathering it was common for employees to imply that they had access to some sort of company portal through which they could obtain information and perhaps access to various parts of the company.

“Supreme excellence consists in breaking the enemy’s resistance without fighting”

The final stages of the penetration test happened to fall on a holiday weekend. The entire staff was given the Friday before the holiday off, as well as the following Monday. Luckily for the team, this provided an ideal window of opportunity during which the help desk would be left undermanned. A well-orchestrated attack that appeared to come from the help desk would be difficult to ward off, and realistically unstoppable, if delivered during this timeframe.

“In all fighting the direct method may be used for joining battles, but indirect methods will be needed in order to secure victory”

Several hundred phishing emails were sent out to the unsuspecting Facebook friends; the mailer was perfectly modeled on an internal company site. The mailer implied that the user’s password may have been compromised and that they should attempt to log in and verify their settings. In addition to the mailer, the statuses of the two profiles were changed to include an enticing link to the phishing site. Initially, 12 employees were fooled by the phishing mailer. Due to a SNAFU at the anti-spam company Postini, another 50-odd employees were compromised: an engineer at Postini felt that the mailer looked important and decided to remove the messages from the blocked queue. Access to the various passwords allowed for a full compromise of the client’s infrastructure, including the mainframe, various financial applications, in-house databases and critical control systems.

Clever timing and a crafty phishing email were just as effective, if not more effective, than the initial hacking methods that were applied. Social engineering threats are real; educate your users and help make them aware of efforts to harvest your company info. Ensure that a company policy is established to help curb employee usage of social networking sites. Management staff should also consider searching popular sites for employees that are too frivolously giving out information about themselves and the company they work for. Be vigilant; don’t be another phishing statistic.

The post Retro: FACEBOOK – Anti-Social Networking (2008). appeared first on Netragard.

by Adriel Desautels at April 11, 2018 02:53 PM

ipSpace.net Blog (Ivan Pepelnjak)

New in IPv6: The Next Chapter in IPv6 Multihoming Saga

Remember the IPv6 elephant in the room – the inability to do small-site multihoming without NAT (well, NPT66)? IPv6 is old enough to buy its own beer, but the elephant is still hogging the room. Tons of ideas have been thrown around in IETF (mostly based on source address selection tricks), but none of that spaghetti stuck to the wall.

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at April 11, 2018 08:11 AM

Networking Now (Juniper Blog)

BitPaymer Ransomware Hides Behind Windows Alternate Data Streams

Threat name: BitPaymer Ransomware

IOC Hash (SHA256): 8943356b0288b9463e96d6d0f4f24db068ea47617299071e6124028a8160db9c

IOC Files:

  • Encrypted files have their extension changed to .locked
  • Files ending with readme_txt are created, containing the ransom notes

BitPaymer ransomware was first seen in mid-2017 and was known to infect hospitals and demand a huge ransom. Earlier versions of BitPaymer allegedly demanded a whopping 20 to 50 bitcoins, which would amount to approximately $100,000. This means that the ransomware was targeting organizations rather than individuals. Recently, we came across a new variant of this ransomware.

Fig: BitPaymer ransom note

 

BitPaymer uses a unique hiding mechanism that exploits alternate data streams (ADS), a feature of the NTFS file system, to hide itself from plain sight.

 

Earlier versions of BitPaymer hid their own files by adding themselves to blank files as an ADS. The latest version copies a clean Windows system executable to the application data folder and then adds a copy of itself as an ADS on that copy of the clean executable file. This can evade security tools that are not able to look into ADS. The file name of the copy of the clean executable is usually 8 characters ending with “~1”, e.g. “SOWI3D~1”. In this version of BitPaymer, the name of the ADS is “:bin”, while in earlier versions of the malware it was “:exe”. So the full name in this case is “SOWI3D~1:bin”, where bin is the copy of the malware hidden as an ADS. You can only see “SOWI3D~1” as the file name when using Windows Explorer or file-browsing tools like Far Manager.
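
The mechanics are easy to reproduce harmlessly on any NTFS volume with built-in Windows commands (the file and stream names below are invented):

    rem Write data into an alternate stream of an ordinary file
    echo hello > host.txt:bin
    rem A normal listing shows only host.txt, with no hint of the stream...
    dir host.txt
    rem ...but dir /R (Vista and later) reveals it
    dir /R host.txt
    rem The stream can be read back by naming it explicitly
    more < host.txt:bin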

 Fig: stream not visible in Windows Explorer

 

Tools like AlternateStreamView can be used to view the ADS; the tool is available from https://www.nirsoft.net/utils/alternate_data_streams.html.
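
If you want to see the ADS path syntax for yourself, here is a minimal Go sketch of the same hiding trick (Windows/NTFS only; the file and stream names are arbitrary examples, not the malware's). It writes a payload into an alternate stream of a harmless carrier file, which Explorer then shows as just the carrier file:

    // +build windows

    package main

    import (
        "fmt"
        "io/ioutil"
        "log"
    )

    func main() {
        // Create an innocuous carrier file; this is all Explorer will show.
        if err := ioutil.WriteFile("demo.txt", []byte("visible content"), 0644); err != nil {
            log.Fatal(err)
        }
        // The ":bin" suffix addresses an alternate data stream attached to
        // demo.txt, mirroring the "SOWI3D~1:bin" naming described above.
        if err := ioutil.WriteFile("demo.txt:bin", []byte("hidden payload"), 0644); err != nil {
            log.Fatal(err)
        }
        // The hidden stream is fully readable if you know (or enumerate) its name.
        hidden, err := ioutil.ReadFile("demo.txt:bin")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("recovered from the ADS: %s\n", hidden)
    }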

 Fig: AlternateStreamView tool

 

After adding itself as an ADS to the copy of the clean Windows system executable, the malware launches the copied executable.

 Fig: process created from ADS

 

Like other ransomware, BitPaymer also tries to delete backup files. Most ransomware is known to use only vssadmin to delete shadow copies (which Windows uses for backup purposes), but this variant also uses diskshadow.exe. The malware executes the following commands to delete backups:

 

"C:\WINDOWS\system32\vssadmin.exe C:\WINDOWS\system32\vssadmin.exe Delete Shadows /All /Quiet"

 

"C:\WINDOWS\system32\diskshadow.exe C:\WINDOWS\system32\diskshadow.exe /s C:\DOCUME~1\ADMINI~1\LOCALS~1\Temp\yP34.tmp".

 

 The ransomware also embeds a public key that is used for encryption purposes.

Fig: public key embedded in the ransomware

 

After encrypting the files, the malware drops a ransom note file for the victim.

The ransom note is slightly different in this version of BitPaymer. This version demands the ransom be paid within 24 hours, while the earlier one gave a period of 72 hours.

 

The malware encrypts the files and leaves a ransom note in the directory. The encrypted files usually end with ".ini.locked", and the ransom note usually has the same file name with the extension "ini.readme_txt".

 Fig: Files encrypted by BitPaymer

 

BitPaymer is believed to spread by brute-forcing Remote Desktop Protocol (RDP) credentials.

  

Detection

Both Juniper Sky ATP and JATP on-prem solutions detect this threat, as seen below.

 Fig: Juniper Sky ATP and JATP detection

  

Customers using Juniper Sky ATP and JATP solutions are protected from the threat.

by amohanta at April 11, 2018 06:16 AM


April 10, 2018

ipSpace.net Blog (Ivan Pepelnjak)

Couldn’t Resist: Cheat-Proofing Certifications

Stumbled upon this paragraph on Russ White’s blog:

I don’t really know how you write a certification that does not allow someone who has memorized the feature guide to do well. How do you test for protocol theory, and still have a broad enough set of test questions that they cannot be photographed and distributed?

As Russ succinctly explained, the problem is two-fold:

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at April 10, 2018 08:04 AM

Potaroo blog

Measuring ATR

One of the more pressing and persistent problems today is the treatment of fragmented packets. We are seeing a very large number of end-to-end paths that no longer support the transmission of fragmented IP datagrams. What can the DNS do to mitigate this issue?

April 10, 2018 02:00 AM

April 09, 2018

The Networking Nerd

Reclaiming 1.1.1.1 For The Internet

Hopefully by now you’ve seen the announcement that CloudFlare has opened a new DNS service at the address of 1.1.1.1. We covered a bit of it on this week’s episode of the Gestalt IT Rundown. Next to Gmail, it’s probably the best April Fool’s announcement I’ve seen. However, it would seem that the Internet isn’t quite ready for a DNS resolver service that’s easy to remember. And that’s thanks in part to the accumulation of bad address hygiene.

Not So Random Numbers

The address range of 1/8 is owned by APNIC. They’ve had it for many years now but have never announced it publicly. Nor have they ever made any assignments of addresses in that space to clients or customers. In a world where IPv4 space is at a premium, why would an RIR choose to lose 16 million addresses?

Edit: As pointed out by Dale Carder of ES.net in a comment below, APNIC has been assigning address space out of 1/8 since 2010. However, the most commonly leaked prefixes in that subnet, which are difficult to assign because of bogus announcements, come from 1.0.0.0/14.

As it turns out, 1/8 is a pretty bad address space for two reasons. 1.1.1.1 and 1.2.3.4. These two addresses are responsible for most of the inadvertent announcements in the entire 1/8 space. 1.2.3.4 is easy to figure out. It’s the most common example IP address given when talking about something. Don’t believe me? Google is your friend. Instead of using 192.0.2.0/24 like we should be using, we instead use the most common PIN, password, and luggage combination in the world. But, at least 1.2.3.4 makes sense.

Why is 1.1.1.1 so popular? Well, the first reason is thanks to Airespace wireless controllers. Airespace uses 1.1.1.1 as the default virtual interface address for just about everything. Here’s a good explanation from Andrew von Nagy. When Airespace was sold to Cisco, this became a very popular address for Cisco wireless networks. Except now that it’s in use as a DNS resolver there are issues with using it. The wireless experts I’ve talked to recommend changing that address to 192.0.2.1, since that address has been marked off for examples only and will never be globally routable.

The other place where 1.1.1.1 seems to be used quite frequently is in Cisco ASA failover interfaces. Cisco documentation recommended using 1.1.1.1 for the primary ASA failover and 1.1.1.2 as the secondary interface. The heartbeats between those two interfaces were active as long as the units were paired together. But, if they were active and reachable then any traffic destined for those globally routable addresses would be black holed. Now, ASAs should probably be using 192.0.2.1 and 192.0.2.2 instead. But beware that this will likely require downtime to reconfigure.

The 1.1.1.1 address confusion doesn’t stop there. Some systems like Nomadix use them as the default logout address. Vodafone used to use it as an image caching server. ISPs are blocking it upstream in some ACLs. Some security organizations even recommend dropping traffic to 1/8 as a bogon prevention measure. There’s every chance that 1.1.1.1 is going to be dropped by something in your network or along the transit path.

Planning Not To Fail

So, how are you going to proceed if you really, really want to use CloudFlare DNS? Well, the first step is to make sure that you don’t have 1.1.1.1 configured anywhere in your network. That means checking WLAN controllers, firewalls, and example configurations. Odds are good you’re running RFC1918 space. But you should try to ping 1.1.1.1 anyway. If you can ping it, then you should traceroute the address. If the traceroute leaves your local network, you probably have a good path.

Once you’ve determined that you’re capable of reaching 1.1.1.1, you need to test it first. Configure it on a test machine or VM and make sure it’s actually resolving addresses for you. Better safe than sorry. Once you know it’s really working like you want it to work, configure it on your internal DNS servers as a forwarder. You still want internal control of DNS thanks to things like Active Directory. But configuring it as a forwarder means you can take advantage of all the features CloudFlare is building into the system while still retaining anything you’ve done locally.
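
If you want to script that sanity check, here’s a minimal Go sketch (my own, not from CloudFlare or APNIC; example.com is just a test name) that bypasses your system resolver and asks 1.1.1.1 directly:

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Resolver that ignores the system configuration and asks 1.1.1.1 directly.
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
                d := net.Dialer{Timeout: 2 * time.Second}
                return d.DialContext(ctx, network, "1.1.1.1:53")
            },
        }

        ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
        defer cancel()

        addrs, err := r.LookupHost(ctx, "example.com")
        if err != nil {
            fmt.Println("no answer from 1.1.1.1 (blocked or black-holed on the path?):", err)
            return
        }
        fmt.Println("1.1.1.1 resolved example.com to:", addrs)
    }

If this times out while your regular resolver works fine, something on your network or along the transit path is still squatting on or filtering 1.1.1.1.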


Tom’s Take

I’m glad CloudFlare and APNIC are reclaiming 1.1.1.1 for some useful purpose. CloudFlare can take the traffic load of all the horribly misconfigured systems in the world. APNIC can use this setup to do some analytics work to find out exactly how screwed up things are. I wouldn’t be shocked to see something similar happen to 1.2.3.4 in the future if this bet pays off.

I’ve been using 1.1.1.1 since April 2nd and it works. It’s fast and hasn’t broken yet, which is the best that you can hope for from a DNS server. I’m sure I’ll play around with some of the advanced features as they come online but for now I’m just happy that one of the most recognizable IP addresses in the world is working for me.

by networkingnerd at April 09, 2018 07:45 PM

Moving Packets

Mellanox, Ixia and Cumulus: Part 1

When I saw that Mellanox was presenting at Networking Field Day 17, I was definitely curious. When I found out that I would in fact be watching a joint presentation by Mellanox, Cumulus Networks and Ixia, it is fair to consider my interest piqued. Why would these three companies present together?

Cumulus/Mellanox/Ixia Logos

It turns out that these three companies present quite a compelling story, both individually, as you would probably expect, and when used in combination. This post looks at the role of Mellanox Ethernet switches in an Ethernet fabric.

Mellanox

To me, Mellanox has been one of those ‘behind the scenes’ companies whose hardware is all over the place but whose name, in Ethernet circles at least, is less well known. Storage and compute engineers on the other hand are likely more familiar with the Mellanox name, especially in the context of Infiniband switches and network interface cards (NICs). In 2016 Mellanox acquired EZchip, allowing the development of some very capable Ethernet switches and an expansion of the company’s portfolio; to paraphrase Amit Katz (VP, WW Ethernet Switch), Mellanox connects PCI-Express interfaces together by building NICs, cables and switches.

At the Networking Field Day event in February 2018, Mellanox proposed that most of the layer 2 switching protocols offered today to solve load distribution and loop prevention issues are irrelevant and suggested that EVPN/VXLAN was able to do everything the data center needed without having to use protocols like TRILL, Spanning Tree, SPB and so forth. More specifically Mellanox offers a broad range of 10/25/40/50 and 100Gbps Ethernet switches based on their Spectrum ASIC which support VXLAN without requiring packet recirculation (i.e. VXLAN routing is accomplished in a single pass through the ASIC).

Mellanox Spectrum

I’m not going to repeat everything that’s in the video, because, well, you could just watch the video yourself, but here are some of the highlights I took away about the Mellanox switching products:

  • Support for multiple network operating systems installed via ONIE, including – of course – Cumulus Linux, but also Microsoft SONiC, and Linux with SAI and switchdev. Access to the switch in a disaggregated manner falls within Mellanox’s Open Ethernet initiative
  • Mellanox’s own NOS, MLNX-OS (now Mellanox Onyx) supports OpenFlow 1.0 and 1.3.
  • All switching features are available without an additional license
  • VXLAN routing is supported, both symmetric and asymmetric
  • No special license required for features like EVPN VXLAN
  • 10, 25, 50, 100Gbps support throughout the switch range. 40 Gbps on at least one model, but then, is anybody really looking at 40Gbps any more?

In case you do want the full story or simply want more detail, here’s the video of Mellanox presenting at Networking Field Day 17:

Video: https://player.vimeo.com/video/253514380

I felt that Mellanox made a pretty convincing case for considering their hardware when looking for a white box vendor, and when looking at some of the performance statistics for the Spectrum / Spectrum 2 ASICs in comparison to Broadcom’s offerings it becomes even more obvious that Mellanox should not be just an ‘also ran’:

Mellanox Spectrum vs Broadcom

For sure, this chart is produced by Mellanox so will undoubtedly focus on areas where the Spectrum thoroughly trounces the Tomahawk and Trident 3, but it’s effective, don’t you think? There is also a Tolly report commissioned by Mellanox which can be downloaded after registering, comparing Mellanox Spectrum and Broadcom Tomahawk based switches in the context of packet loss and low latency.

Closely related to this, the quote below is from a Mellanox guest post on Packet Pushers called ‘Rethinking Big Buffers’; I would be surprised if you don’t spit your coffee out halfway through reading it:

Mellanox Quote

This claim is a direct reference to data from the Tolly report. I do not have permission to reproduce content from the report here but if you register to download the report, the exact figures are shown in Table 1, on page 5. Equally curious are the results which show what traffic gets dropped when an output port is oversubscribed. While the Mellanox Spectrum ASIC shows lost frames spread equally across all the contributing input ports, the Broadcom Tomahawk results at times show a very strange and unequal distribution of packet loss, favoring the forwarding of frames from some input ports over others. This is probably not what we, as consumers of the technology, might have assumed would happen.

Conclusions

Mellanox made a pretty strong case to consider its switches when building a white box based Ethernet switching fabric. But what of Ixia and Cumulus? I will address these companies in subsequent posts.

 

Disclosures

I was an invited delegate to Network Field Day 17 at which all the companies listed above presented. Sponsors pay for presentation time in front of the NFD delegates, and in turn that money funds the delegates’ transport, accommodation and nutritional needs while there. With that said, I always want to make it clear that I don’t get paid anything to be there (I took vacation), and I’m under no obligation to write so much as a single word about any of the presenting companies, let alone write anything nice about them. If I have written about a sponsoring company, you can rest assured that I’m saying what I want to say and I’m writing it because I want to, not because I have to.

You can read more here if you would like.

If you liked this post, please do click through to the source at Mellanox, Ixia and Cumulus: Part 1 and give me a share/like. Thank you!

by John Herbert at April 09, 2018 01:47 PM

ipSpace.net Blog (Ivan Pepelnjak)

Container Security through Segregation

One of my readers sent me a container security question after reading the Application Container Security Guide from NIST:

We are considering segregating dev/test/prod environments with bare-metal hardware. I did not find anything in the standard concerning this. What should a financial institution do, in your opinion?

I am no security expert and know just enough about containers to be dangerous, but there’s a rule that usually works well: use common sense and identify similar scenarios that have already been solved.

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at April 09, 2018 06:44 AM


April 07, 2018

ipSpace.net Blog (Ivan Pepelnjak)

Worth Reading: Automation: Easy Button vs Sentient Voodoo Magic Button

I’m always telling network engineers attending my network automation workshops and online courses that there’s no magic bullet or 3-steps-to-success.

You cannot automate a process until you can describe it with enough details so that someone who has absolutely no clue what should be done can execute it.

David Gee published a long (and somewhat ranty) version of that statement. Enjoy!

by Ivan Pepelnjak (noreply@blogger.com) at April 07, 2018 07:51 AM

April 06, 2018

ipSpace.net Blog (Ivan Pepelnjak)

Video: Tools and Knobs to Use when Tweaking TCP Performance

In the second half of his Networks, Buffers and Drops webinar, JR Rivers focused on end systems: what tools could you use to measure end-to-end TCP throughput, or monitor performance of an individual socket or the whole TCP stack?

You’ll need at least a free ipSpace.net subscription to watch the video.

by Ivan Pepelnjak (noreply@blogger.com) at April 06, 2018 06:20 AM

Potaroo blog

Measuring the Root Zone KSK Trust

An analysis of DNS resolver data to attempt to estimate the impact of a roll of the Root Zone Key Signing Key (KSK).

April 06, 2018 02:00 AM

Stuffing the Camel into the Bikeshed

I’m sure that there are folk who believe that bodies like the IETF can exercise just the right level of restraint and process management to keep excessive levels of both camelling and bikeshedding out of the IETF and its Working Groups activities. Speaking personally, I just can’t see that happening.

April 06, 2018 02:00 AM


April 05, 2018

IPEngineer.net

Juniper JET & Golang

Network programmability and network automation go hand-in-hand (pun intended) and I’ve been waiting for an opportunity to play with the Juniper IDL (.proto) files to build a JET (Juniper Extension Toolkit) application. Thanks to Marcel Wiget’s efforts, the opening I’ve been waiting for came along!

So what is JET?

JET is a couple of things:

  • Ability to run Python, C, and C++ applications on-box, on both veriexec- and non-veriexec-enabled Junos
  • Ability to create an off-box application using GRPC and MQTT

JET allows you to program Junos outside of the normal NETCONF, CLI, SNMP, and ephemeral DB methods that we’re all fairly used to. The other thing is, it’s quick. Like really quick. With GRPC and MQTT, we can program a network element using mechanisms the software world is used to. I’ve been saying for a long time that our data is no longer our own, and JET allows us to bridge organisational worlds in multiple ways. Pretty cool.

So what did you do?

Not having a huge amount of time for this, I opted for off-box and took Marcel’s code as the base for how to use the APIs exposed via GRPC.

The application uses the “bgp_route_service” JET API via GRPC to program and delete BGP routes with multiple next hops.

If you’re curious about the code, you can check it out via the link below. The READMEs should take you from zero to hero and I would love to know how you got on!

https://github.com/DavidJohnGee/go-jet-demo-app
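
For a taste of the off-box plumbing, here is a minimal Go sketch that only opens the gRPC channel the JET APIs ride on; the address and port are placeholders, and actually programming BGP routes requires the client stubs generated from Juniper’s bgp_route_service .proto file (see the repo above for the full application):

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // 192.0.2.1:50051 is a placeholder; point this at your router's JET gRPC service.
        conn, err := grpc.DialContext(ctx, "192.0.2.1:50051", grpc.WithInsecure(), grpc.WithBlock())
        if err != nil {
            log.Fatalf("could not reach the JET gRPC endpoint: %v", err)
        }
        defer conn.Close()

        // From here, a client generated from bgp_route_service.proto would be
        // constructed on conn to program and delete BGP routes.
        log.Println("connected to the JET gRPC endpoint")
    }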

The post Juniper JET & Golang appeared first on ipengineer.net.

by David Gee at April 05, 2018 06:21 PM

My Etherealmind