January 19, 2018

The Networking Nerd

Can Routing Be Oversimplified?

I don’t know if you’ve had a chance to see this Reddit thread yet, but it’s a funny one:

We eliminated routing protocols from our network!

Short non-clickbait summary: We deployed SD-WAN and turned off OSPF. We now have a /16 route for the internal network and a default route to the Internet where a lot of our workloads were moved into the cloud.

Bravo for this networking team for simplifying their network to this point. All other considerations aside, does this kind of future really bode well for SD-WAN?

Now You See Me

As pointed out in the thread above, the network team didn’t really get rid of their dynamic routing protocols. The SD-WAN boxes that they put in place are still running BGP or some other kind of setup under the hood. It’s just invisible to the user. That’s nothing new. Six years ago, Ivan Pepelnjak found out Juniper QFabric was running BGP behind the scenes too.

Hiding the networking infrastructure from the end user is nothing new. It’s a trick that has been used for years to allow infrastructures to be tuned and configured in such a way as to deliver maximum performance without letting anyone tinker with the secret sauce under the hood. You’ve been using it for years whether you realize it or not. Have MPLS? Core BGP routing is “hidden” from you. SD-WAN? Routing protocols are running between those boxes. Moved a bunch of workloads to AWS/Azure/GCE? You can better believe there is some routing protocol running under that stack.
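
To make that concrete, here’s a minimal Python sketch (addresses invented) of the forwarding decision that’s left once the routing protocols are gone: a longest-prefix match over exactly two routes, a /16 for the internal network and a default for everything else.

  import ipaddress

  # The entire "routing table" after the simplification described above:
  # one /16 for everything internal, one default route to the Internet.
  ROUTES = [
      (ipaddress.ip_network("10.20.0.0/16"), "sdwan-overlay"),
      (ipaddress.ip_network("0.0.0.0/0"), "internet-default"),
  ]

  def next_hop(destination: str) -> str:
      """Longest-prefix match across the two remaining routes."""
      addr = ipaddress.ip_address(destination)
      matches = [(net, hop) for net, hop in ROUTES if addr in net]
      return max(matches, key=lambda m: m[0].prefixlen)[1]

  print(next_hop("10.20.3.7"))  # -> sdwan-overlay
  print(next_hop("8.8.8.8"))    # -> internet-default

The lookup really is that simple now; the point is that the complexity didn’t vanish, it just moved under the SD-WAN boxes.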

Making things complex for the sake of making them hard to work on is foolish. We’ve spent decades and millions of dollars trying to make things easy. If you don’t believe me, look at the Apple iPhone. That device is a marvel at hiding all the complexity underneath. But, it also makes it really hard to troubleshoot when things go wrong.

Building On Shoulders

SD-WAN is doing great things for networking. I can remember years ago the thought of turning up a multi-site IPSec VPN configuration was enough to give me hives, let alone trying to actually do it. Today, companies like Viptela, VeloCloud, and Silver Peak make it easy to do. They’re innovating on top of the stack instead of inside it.

So much discussion in the community happens around building pieces of the stack. We spend time and effort making a better message protocol for routing information exchange. Or we build a piece of the HTTP stack that should be used in a bigger platform. We geek out about technical pieces because that’s where our energy feels the most useful.

When someone collects those stack pieces and tries to make them “easy”, we shout that company down and say that they’re hiding complexity and making the administrators and engineers “forget” how to do the real work. We spend more time focusing on what’s hidden and not enough on what’s being accomplished with the pieces. If you are the person that developed the fuel injection system in a car, are you going to sit there and tell Ford and Chevrolet that bundling it into an automotive platform is wrong?

So, while the end goal of any project like the one undertaken above is simplification, or fewer problems thanks to less complex troubleshooting, it is not a silver bullet. Hiding complexity doesn’t make it magically go away. Removing all your routing protocols in favor of a /16 doesn’t mean your network runs any better. It means you’re going to have to spend more time figuring out what went wrong when something does break.

Ask yourself this question: Would you rather spend more time building out the network and understand every nook and cranny of it or would you rather learn it on the fly when you’re trying to figure out why something isn’t working the way that it should? The odds are very good that you’re going to put the same amount of time into the network either way. Do you want to front load that time? Or back load it?


Tom’s Take

The Reddit thread is funny because half the people are dumping on the poster for his decision and the rest are trying to understand the benefits. It was surely created in such a way as to get views, and that worked admirably. But I also think there’s an important lesson to learn there. Simplicity for the sake of being simple isn’t enough. You have to replace that simplicity with due diligence. Because the alternative is a lot more time spent doing things you don’t want to do when you really don’t want to be doing them.

by networkingnerd at January 19, 2018 03:49 PM

ipSpace.net Blog (Ivan Pepelnjak)

Packet Forwarding on Linux on Software Gone Wild

The Linux operating system is used as the foundation for numerous network operating systems, including Arista EOS and Cumulus Linux. It provides most of the networking constructs we’ve grown familiar with, including interfaces, VLANs, routing tables, VRFs and contexts, but they behave slightly differently from what we’re used to.

In Software Gone Wild Episode 86 Roopa Prabhu and David Ahern explained the fundamentals of packet forwarding on Linux, and the differences between Linux and more traditional network operating systems.
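
If you want to poke at those constructs yourself before listening, here’s a trivial stdlib-only Python snippet (our illustration, not something from the episode) that enumerates a Linux host’s interfaces:

  import socket

  # Enumerate this host's network interfaces (Unix; names will vary).
  for index, name in socket.if_nameindex():
      print(index, name)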

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at January 19, 2018 07:20 AM

Security to the Core | Arbor Networks Security

The ARC of Satori

Authors: Pete Arzamendi, Matt Bing, and Kirk Soluk. Satori, the heir-apparent to the infamous IOT malware Mirai, was discovered by researchers in December 2017. The word “satori” means “enlightenment” or “understanding” in Japanese, but the evolution of the Satori malware has brought anything but clarity. […]

by ASERT team at January 19, 2018 02:03 AM


January 18, 2018

My Etherealmind

ISP Column – January 2018

Geoff Huston reports the miserable state of Internet BGP

by Greg Ferro at January 18, 2018 03:42 PM

ipSpace.net Blog (Ivan Pepelnjak)

Webinars in 2017

2017 was one of the busiest years since I started the ipSpace.net project.

It started with an Ansible for Networking Engineers session covering advanced Ansible topics and network device configurations. Further sessions of that same webinar throughout 2017 added roles, includes, extending Ansible with dynamic inventory, custom modules and filters, and using NAPALM with Ansible.

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at January 18, 2018 08:42 AM

Networking Now (Juniper Blog)

Remote Code Execution Vulnerability on Huawei Devices

Every device that is directly accessible from the internet is under constant attack. By exposing a honeypot on the internet, you can peer into lots of interesting types of activities. A trained eye can identify known exploits fairly easily, and once in a while run into something new: an exploitation attempt using a zero-day vulnerability.

by mhahad at January 18, 2018 01:37 AM

January 17, 2018

ipSpace.net Blog (Ivan Pepelnjak)

Ansible, Chef, Puppet or Salt? Which One Should I Use?

One of the first things I did when I started my deep-dive into network automation topics was to figure out what tools people use to automate stuff and (on a pretty high level) what each of these tools does.

You often hear about Ansible, Chef and Puppet when talking about network automation tools, with Salt becoming more popular, and CFEngine being occasionally mentioned. However, most network automation engineers prefer Ansible. Here are a few reasons.

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at January 17, 2018 09:06 AM


January 16, 2018

Internetwork Expert Blog

Check Out Our Newest Google Cloud Course

For those of you that are looking to familiarize yourself with the Google Cloud Platform, you’re in luck! We have just released Google Cloud Platform: PaaS with App Engine. This course is now available to All Access Pass members through your members account, and to everyone else through purchase at ine.com.

This course is just one of a growing collection of Google classes offered by INE; we also plan on releasing a Google Data Storage course later this week. Until then, read on to learn about Joseph Holbrook’s latest addition to our Google video course library.


Why Study Google App Engine?
Google App Engine is an extremely useful tool; it is a fully managed platform that completely abstracts away infrastructure so you can focus only on code.

About the Course:
This course covers Google App Engine PaaS and more specifically the history, features and functions of Google App Engine. The instructor will explain the benefits of using Google App Engine with live examples and demos.

The course is 4 hours and 5 minutes long and is taught by Joseph Holbrook.

What You’ll Learn:
Students will dive into both deployment models of the App Engine and learn how to decide which model is right for your organization.

Whether you’re a developer or architect, this course will help you understand the basic capabilities and some of the useful advanced features of the Google App Engine.

About the Instructor:
Joe Holbrook has been in the IT field since 1993, when he was exposed to several HPUX systems on board a US Navy flagship. He migrated from the UNIX world to Storage Area Networking (SAN) and then on to Enterprise Virtualization and Cloud Architecture. Joseph has worked for numerous companies such as HDS, 3PAR Data, Brocade, Dimension Data, EMC, Northrop Grumman, ViON, Ibasis.net, Chematch.com, SAIC and Siemens Nixdorf. Joe currently works as a Subject Matter Expert specializing in Cloud/IT Security, focused on Data Storage infrastructure services and Data migrations to the Cloud.

Joseph holds Industry leading certifications from Amazon Web Services, Google Cloud Platform, Brocade, Hitachi Data Systems, EMC, VMWare, CompTIA, HP 3PAR ASE, Cloud Credential Council and other orgs. He is now working on the Google Cloud Platform for several organizations.

Joe is married with children and lives in Jacksonville, Florida. In his free time, Joseph enjoys traveling to South America, spending time with his 5 year old daughter and learning about cryptocurrencies.  Joe is also an avid hockey fan and takes pleasure in skiing whenever he can get out of Florida.

by jdoss at January 16, 2018 09:49 PM

Dyn Research (Was Renesys Blog)

China Activates Historic Himalayan Link To Nepal

On 10 January 2018, China Telecom activated a long-awaited terrestrial link to the landlocked country of Nepal.  The new fiber optic connection, which traverses the Himalayan mountain range, alters a significant aspect of Nepal’s exclusive dependency on India, shifting the balance of power (at least for international connectivity) in favor of Kathmandu.


Following a number of brief trials since mid-November, Nepal Telecom fully activated Internet transit from China Telecom at 08:28 UTC on 10 January 2018, as depicted below.

Background

In our 2015 coverage of the earthquake that devastated Nepal, I wrote:

Nepal, as well as Bhutan, are both South Asian landlocked countries wedged between India and China that are dependent on India for a number of services including telecommunications. As a result, each country has been courting Chinese engagement that would provide a redundant source of Internet connectivity.

In December 2016, executives Ou Yan of China Telecom Global (CTG) and Lochan Lal Amatya of Nepal Telecom (pictured below) signed an agreement to route IP service through a new terrestrial cable running between Jilong county in China and Rasuwa district in Nepal.


Last week, the fiber link to China finally came to life and established Nepal’s first redundant source of international transit.  An operational fiber optic circuit through China will provide Nepal several distinct benefits.

First, it provides resiliency if the links to India were ever to go down, whether due to earthquake, fiber cut, or any other catastrophic technical failure. Second, it provides Nepal with additional bandwidth, although it isn’t clear that lack of bandwidth has been limiting the country’s Internet development.  Finally, with a second source of international transit, Nepal is in a better position to negotiate terms of service and pricing than when it was entirely captive to India’s carriers.

Changes in Performance

Looking at the performance implications for Nepal Telecom, we can see that traffic from Far East locations will generally speed up along the Hong Kong to Nepal link, while connections from some Western European countries may experience a slowdown.

The graphic below plots latencies from our measurement servers in Taipei, Tokyo, Seoul, and Hong Kong to Nepal Telecom.  In each case, the latencies decreased when the new China Telecom service was activated on 10 January.


Surprisingly, latencies from Zurich increased (plot below).


In this example, since the latencies via Bharti Airtel increase at the time of the transit activation, we can surmise that this may be due to a change in the unobservable return path of these round-trip measurements.  The new return path is most likely egressing via China Telecom, since the forward path remains unchanged.

Nonetheless, a materially new traffic pattern through China has emerged for Nepal.

Conclusion

The upside of this development for Nepal is clear: cheaper and more resilient international Internet bandwidth.  But China has something to gain as well.

Infrastructure investment in countries along China’s traditional trade routes is central to its One Belt One Road foreign policy agenda.  By making investments in neighboring countries (like a fiber optic cable to Nepal), China hopes to reap benefits in trade as well as achieve political and military influence.

And while many in the United States are focused on the competition between the US and China for regional influence, China and India are locked in a battle for influence in South Asia.  China has made a significant move by connecting its Internet directly to Nepal.

While similarly situated Bhutan would also benefit from a direct Internet connection to its northern neighbor, recent rising tensions between the world’s two most populous nations over tiny Bhutan make this technical advancement unlikely for now.  Good thing Bhutan’s Gross National Happiness indicator doesn’t measure Internet resilience.  At least, not yet.

by Doug Madory at January 16, 2018 06:36 PM

Moving Packets

MP on Vertitech IT’s “Best IT Blogs 2018”

This is a quick post to say thanks to the folks at Vertitech IT for listing movingpackets.net among their Best IT Blogs for 2018 (“Must-Read Resources for CIOs, IT & Security Pros”). MP was on the Best IT Blogs of 2017 as well, and it’s an honor to be on the list for a second year.

Vertitech IT Best Blogs of 2018

Vertitech explain the creation of this list thus:

Information Technology.  Sometimes we get so focused on the bits and bytes side of the equation we forget about the information part.  When it comes right down to it, IT is all about using technology to inform, to communicate, to make the business of doing business easier and more understandable.

That’s why we compiled this list. Originally created last year with 50 top IT blogs, we’ve expanded this year’s update to include 70 leading resources for IT professionals, including blogs, discussion forums, niche industry publications, and the best resources for CIOs and CTOs.  VertitechIT’s top 70 IT blogs, forums, and resources were selected because they are among the most current, frequently updated, credible, and informative sources of information related to IT on the web today. From musings of industry leaders, to the veteran guys and gals in the trenches who chronicle their IT journeys, these 70 blogs and resources all have something important to say about the IT world of today, and tomorrow.

I’m also pleased to see (and to be in the company of) the NetworkingNerd (Tom Hollingsworth) and Chris Wahl‘s “WahlNetwork“, among others. Maybe it’s really a list of nerds with beards? Either way, Vertitech has put together a good list of technical resources, and the list is well worth a browse to find some additional great blog subscriptions!

If you liked this post, please do click through to the source at MP on Vertitech IT’s “Best IT Blogs 2018” and give me a share/like. Thank you!

by John Herbert at January 16, 2018 03:07 PM

ipSpace.net Blog (Ivan Pepelnjak)

Event-Driven Automation on Building Network Automation Solutions Online Course

Most engineers talking about network automation focus on configuration management: keeping track of configuration changes, generating device configurations from data models and templates, and deploying configuration changes.

There’s another extremely important aspect of network automation that’s oft forgotten: automatic response to internal or external events. You could wait for self-driving networks to see it implemented, or learn how to do it yourself.

In the March 20th live session of the Building Network Automation Solutions online course, David Gee will dive deeper into event-driven network automation. As he explains the challenge:

When it comes to running infrastructure and infrastructure services, a lot of the decision making is human based. Someone reads a ticket, someone decides what to do. Someone gets alerted to an event and that someone does something about it. This involvement causes friction in the smooth-running nature of automated processes. Fear not! Something can be done about it.

We all know the stories of ITIL and rigid process management and David will show you how event-driven automation could be made reality even with strict and rigid controls, resulting in an environment that reacts automatically to stimuli from your services and infrastructure. We will discuss what events are, when they're important, how to normalize them, and what we can do when we have identified an event positively. We will also discuss commercial vs open source options along with their pros and cons.

Finally, you will see a live demonstration of both syslog- and ICMP-powered event-driven automation in action. Links to usable code samples will be provided in the session so you can reproduce the demos in your own environment.
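
To give a flavor of what this looks like in code, here’s a toy Python sketch (our own illustration, not David’s demo; the Cisco-style syslog pattern and the remediation hook are assumptions) that listens for syslog messages over UDP and fires a handler whenever an interface-down event appears:

  import re
  import socketserver

  # Match a Cisco-style interface-down syslog message (assumed format).
  INTERFACE_DOWN = re.compile(r"%LINK-3-UPDOWN: Interface (\S+), changed state to down")

  def remediate(interface: str) -> None:
      # A real handler would open a ticket, run a playbook, page someone...
      print(f"event: {interface} went down, kicking off automated response")

  class SyslogHandler(socketserver.BaseRequestHandler):
      def handle(self):
          message = self.request[0].decode(errors="replace")
          match = INTERFACE_DOWN.search(message)
          if match:
              remediate(match.group(1))

  if __name__ == "__main__":
      # UDP/5140 so the script doesn't need root for the standard port 514.
      with socketserver.UDPServer(("0.0.0.0", 5140), SyslogHandler) as server:
          server.serve_forever()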

Interested? Register now!

by Ivan Pepelnjak (noreply@blogger.com) at January 16, 2018 07:14 AM

January 15, 2018

ipSpace.net Blog (Ivan Pepelnjak)

When Did IT Practitioners Lose Their Curiosity?

One of my readers sent me an interestingly sad story as a response to my importance of fundamentals rant. Here it is… enjoy ;)

2018-01-14: Updated with a different viewpoint

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at January 15, 2018 06:50 PM

My Etherealmind

Meet and Greet in Silicon Valley from 15 Jan – 3 Feb 2018

I’ve reduced the travel and conferences I’m attending in 2018 to focus on content –  writing, podcasting and video is my future. The negative impact is that I’ll miss meeting many people in the industry. So I am spending 3 weeks in San Jose 15 Jan – 3 Feb 2018 and hoping to meet as many […]

by Greg Ferro at January 15, 2018 05:54 PM

ipSpace.net Blog (Ivan Pepelnjak)

Meltdown and Its Networking Equivalents

One of my readers sent me this question:

Do you have any thoughts on this meltdown HPTI thing? How does a hardware issue/feature become a software vulnerability? Hasn't there always been an appropriate level of separation between kernel and user space?

There’s always been privilege-level separation between kernel and user space, but not address space separation - the kernel has been permanently mapped into the high-end addresses of user space (though not visible to user-space code on systems with decent virtual memory management hardware) since the days of OS/360, CP/M and VAX/VMS (RSX-11M was an exception since it ran on a 16-bit CPU architecture and its designers wanted to support programs up to 64K bytes in size).

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at January 15, 2018 07:11 AM


January 13, 2018

ipSpace.net Blog (Ivan Pepelnjak)

Worth Reading: Robust IPAM

Elisa Jasinska covered several IPAMs in her overview of open-source network automation tools, and we had Jeremy Stretch talking about NetBox in the Building Network Automation Solutions online course, but if you’re looking for a really robust easy-to-implement solution, check out this document from 1998 (deployment experience, including a large-scale one).

by Ivan Pepelnjak (noreply@blogger.com) at January 13, 2018 09:07 AM

January 12, 2018

The Networking Nerd

Chipping Away At Technical Debt

We’re surrounded by technical debt every day. We have a mountain of it sitting in distribution closets and a yard full of it out behind the data center. We make compromises for budget reasons, for technology reasons, and for political reasons. We tell ourselves every time that this is the last time we’re giving in and the next time it’s going to be different. Yet we find ourselves staring at the landscape of technical debt time and time again. But how can we start chipping away at it?

Time Is On Your Side

You may think you don’t have any time to work on the technical debt problem. This is especially true if you don’t have the time due to fixing problems caused by your technical debt. The hours get longer and the effort goes up exponentially to get simple things done. But it doesn’t have to be that way.

Every minute you spend trying to figure out where a link goes or how a server is connected to the rest of the pod is a minute that should have been spent documenting it somewhere. In a text document, in a picture, or even on the back of a napkin stuck to the faceplate of said server!

I once watched an engineer get paid an obscene amount of money to diagram a network. Because it had never been done. He didn’t modify anything. He didn’t change a setting. All he did was figure out what went where and what the interface addresses were. He put it in Visio and showed it to the network administrators. They were so thrilled they had it printed in poster size and framed! He didn’t do anything above and beyond showing them what they already had.

It’s easy to say that you’re going to document something. It’s hard to remember to do it. It’s even harder when you forget to do it and have to spend an hour remembering why you did it in the first place. So it’s better to take a minute to write in an interface description. Or perhaps a comment about why you did a thing the way you did it. Every one of those minutes chips away a little more technical debt.
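
In that spirit, here’s a throwaway Python sketch (IOS-style configuration assumed, sample invented) that flags the interfaces nobody bothered to describe:

  SAMPLE_CONFIG = """
  interface GigabitEthernet0/1
   description Uplink to core-sw1 Te1/0/1
   switchport mode trunk
  interface GigabitEthernet0/2
   switchport access vlan 10
  """

  def undocumented_interfaces(config: str) -> list:
      """Return interface names lacking a description line (IOS-style)."""
      missing, documented, current = [], set(), None
      for raw in config.splitlines():
          line = raw.strip()
          if line.startswith("interface "):
              current = line.split(None, 1)[1]
              missing.append(current)
          elif line.startswith("description") and current:
              documented.add(current)
      return [name for name in missing if name not in documented]

  print(undocumented_interfaces(SAMPLE_CONFIG))  # ['GigabitEthernet0/2']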

Ask yourself this question next time: If I spent less time solving documentation problems, what could I do to reduce debt overall?

Mad As Hell And Not Going To Design It Anymore

The other sources of technical debt are money and politics. And those usually come into play in the design phase. People jockeying for position or trying to save money for something else. I don’t mind honest budget saving measures. What I mind is trying to cut budgets for things like standing desks and quarterly bonuses. The perception of IT is that it is still a budget sink with no real purpose to support the business. And then the email server goes down. That’s when IT’s real value comes out.

When you design a network, you need to take into account many factors. What you shouldn’t take into account is someone dictating how things are going to be just because they like the way that something looks or sounds. When I worked at a VAR there were always discussions like this. Who should we use for the switching hardware? Which company has a great rebate this quarter? I have a buddy that works at Vendor X so we should throw him a bone this time.

As the networking professional, you need to stand up for your decisions. If you put the hard work and effort into making something happen you shouldn’t sit down and let it get destroyed by someone else’s bad idea. That’s how technical debt starts. And if you let it start this way you’ll never be able to get rid of it. Both because it will be outside of your control and because you’ll always identify someone else as the source of the problem.

So, how do you get mad and fix it before it starts? Know your project cold. Know every bolt and screw. Justify every decision. Point out the mistakes when they happen in front of people that don’t like to be embarrassed. In short, be a jerk. The kind of self-important, overly cerebral jerk that is smug because he knows he’s right. And when people know you’re right they’ll either take your word for things or only challenge you when they know they’re right.

That’s not to say that you should be smug and dismissive all the time. You’re going to be wrong. We all are. Well, everyone except my wife. But for the rest of us, you need to know that when you’re wrong you need to accept it and discuss the situation. No one is 100% infallible, but someone that can’t recognize when they are wrong can create a whole mountain range of technical debt before they’re stopped.


Tom’s Take

Technical Debt is just a convenient term for something we all face. We don’t like it, but it’s part of the job. Technical Debt compounds faster than any finance construct. And it’s stickier than student loans. But, we can chip away at it slowly with some care. Take a little time to stop the little problems before they happen. Take a minute to save an hour. And when you get into that next big meeting about a project and you see a huge tidal wave of technical debt headed your way, stand up and let it be known. Don’t be afraid to speak out. Just make sure you’re right and you won’t drown.

by networkingnerd at January 12, 2018 03:52 PM

My Etherealmind

Don’t Delete PCAP Files – Trim Them! – NETRESEC Blog

Nice tool for people who are crafting an artisanal logging system

by Greg Ferro at January 12, 2018 03:03 PM


January 11, 2018

Internetwork Expert Blog

We’ve Added a New AWS Certified Solutions Architect Course to Our Video Library!

Last week, we added an Amazon Web Services (AWS) Certified Solutions Architect – Associate Level course to our video library. The course is available for streaming via your All Access Pass members account and also available for purchase at ine.com.


Why Study AWS?
An AWS certification will put you in an elite group of cloud engineers, and could lead to better employment opportunities. Amazon Web Services are used by thousands of companies around the world; therefore, AWS certified engineers are highly valued by many employers. AWS certifications show a potential employer that you have the skills to design, deploy and manage scalable and secure systems on the AWS platform.

About the Course:
This course is designed to help you prepare for the AWS Solutions Architect – Associate exam. Even if you do not have a lot of IT experience, and have never used any Amazon Web Services, this course is the best starting point for AWS certifications.

This class is taught by Miles Karabas, and includes 5 hours and 43 minutes of in-depth lectures, plus numerous practice questions similar to those on the exam.

Areas Focused On:

  • Compute
  • Storage
  • Databases
  • Security, Identity & Compliance
  • Management Tools
  • Networking & Content Delivery
  • Messaging

Upon completion of the course, students will be able to:

  • Understand the basics of AWS services and how the user interacts with them
  • Understand how networking works in AWS
  • Build secure, reliable and cost effective applications on the AWS platform
  • Implement applications on the AWS platform
  • Identify appropriate use of AWS architectural best practices
  • Estimate costs and identify cost control mechanisms

About the Instructor:
Miles Karabas is a certified hands-on AWS, ORACLE and Postgresql systems & database engineer, with over 20 years of experience. He possesses 5 AWS certifications and has worked with all major flavors of Unix and Linux. Miles has extensive cross-industry and cross-functional training and experience in large multi-platform environments. He is especially interested in systems and database migration to Amazon cloud infrastructure. In the database area, his primary interest is database performance monitoring & tuning.

Miles has a wife, Suzana, and two wonderful daughters, Aleksandra and Sofia. He spends his free time working hard on his small goat & sheep farm on the outskirts of West Palm Beach in Florida.

by jdoss at January 11, 2018 04:12 PM

ipSpace.net Blog (Ivan Pepelnjak)

Upcoming ipSpace.net Events

2018 has barely started and we’re already crazily busy:

The last week of January is Cisco Live Europe week. I’ll be there as part of the Tech Field Day Extra event – drop by or send me an email if you’ll be in Barcelona during that week.

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at January 11, 2018 07:06 AM

January 10, 2018

ipSpace.net Blog (Ivan Pepelnjak)

BGP Route Selection: a Failure of Intent-Based Networking

It’s interesting how the same pundits who loudly complain about the complexities of BGP (and how it will be dead any time soon and replaced by an SDN miracle) also praise the beauties of intent-based networking… without realizing that the hated BGP route selection process represents one of the first failures of the intent-based approach to networking.

Let’s start with some definitions. There are two ways to get a job done by someone else:

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at January 10, 2018 02:26 PM

Fat Fingers Strike Again…

Level3 had a pretty bad bad-hair-day just a day before Pete Lumbis talked about Continuous Integration on the Building Network Automation Solutions online course (yes, it was a great lead-in for Pete).

According to messages circulating on mailing lists it was all caused by a fumbled configuration attempt. My wild guess: someone deleting the wrong route map, causing routes that should have been tagged with no-export to escape into the wider Internet.

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at January 10, 2018 08:43 AM


January 09, 2018

Networking Now (Juniper Blog)

Multicloud Ready Firewall Policies

Cloud deployments offer agility and elasticity for applications and can rapidly increase productivity. Enterprise IT teams need to deliver on security and risk mitigation goals whether applications run on-premises or in the cloud. This blog introduces Juniper’s enhanced metadata-based security policy model, which lets IT and security teams meet the agility expectations of multicloud deployments without compromising on application security.

by snimmagadda at January 09, 2018 10:38 PM


Meltdown & Spectre: Modern CPU vulnerabilities


Today, chatter has increased significantly about a set of related vulnerabilities that impact several modern CPUs that perform speculative instruction execution, including Intel and AMD chips. These vulnerabilities allow an attacker to gain access to kernel space memory or to another process’s memory, which in theory they should not have access to. In turn, this leads to potential leakage of sensitive information like passwords, encryption keys, etc. In virtualized environments, it is possible to cross the boundary from a virtual machine guest OS into another virtual machine’s address space, making data leakage in cloud environments even more problematic.


These vulnerabilities have been dubbed Meltdown and Spectre. The CVEs associated with them are:

  • CVE-2017-5753 hw: cpu: speculative execution bounds-check bypass
  • CVE-2017-5715 hw: cpu: speculative execution branch target injection
  • CVE-2017-5754 hw: cpu: speculative execution permission faults handling


There is no known exploit in the wild taking advantage of these vulnerabilities yet. But there has been a proof of concept posted by a PhD student from a university in Austria. There is little doubt that some sophisticated threat actors will attempt to take advantage of unpatched systems in the near future.


Operating systems vendors have been working on patches to mitigate these vulnerabilities. Some Linux updates are available for download. Windows updates have just been made available today. Amazon is planning system updates on January 4. Google has made updates available to its Cloud Platform and Chrome OS and has already updated Android and G-suite. MacOS has already deployed fixes.


It is speculated that the fixes will have a non-negligible performance impact that depends on the operating system, the nature of the fix and the workload of the system.


Exposure of Juniper’s products

Juniper SIRT has published an advisory at https://kb.juniper.net/JSA10842 with more information about the impact and available mitigations for Juniper products.


Mitigation

To mitigate this vulnerability, it is highly recommended to apply patches relevant to the operating systems you run as vendors make them available.

by mhahad at January 09, 2018 08:09 PM


January 08, 2018

Networking Now (Juniper Blog)

SRX platforms with Junos OS 15.1X49 complete FIPS 140-2 certification

NIST Cryptographic Module Validation Program certification is important to Federal Government agencies.


by bshelton at January 08, 2018 04:56 PM

ipSpace.net Blog (Ivan Pepelnjak)

New Design on www.ipSpace.net

One of my readers sent me a polite email a while ago saying “your site is becoming like $majorVendor’s web site – every corner looks completely different based on when you made it”.

The worst part is that he was right, so I spent the last two weeks as a website janitor, mopping up broken markup, fixing CSS cracks, polishing old texts…

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at January 08, 2018 09:32 AM

Potaroo blog

BGP in 2017

This is a report on the experience with the Internet's inter-domain routing system over the past year, looking in some detail at some metrics from the routing system that can show the essential shape and behaviour of the underlying interconnection fabric of the Internet.

January 08, 2018 01:00 AM


January 05, 2018


January 04, 2018

Networking Now (Juniper Blog)

Meltdown & Spectre: modern CPU vulnerabilities


Today, chatter has increased significantly about a set of related vulnerabilities that impact several modern CPUs that perform speculative instruction execution, including Intel and AMD chips. These vulnerabilities allow an attacker to gain access to kernel space memory or to another process’s memory, which in theory they should not have access to. In turn, this leads to potential leakage of sensitive information like passwords, encryption keys, etc. In virtualized environments, it is possible to cross the boundary from a virtual machine guest OS into another virtual machine’s address space, making data leakage in cloud environments even more problematic.


These vulnerabilities have been dubbed Meltdown and Spectre. The CVEs associated with them are:

  • CVE-2017-5753 hw: cpu: speculative execution bounds-check bypass
  • CVE-2017-5715 hw: cpu: speculative execution branch target injection
  • CVE-2017-5754 hw: cpu: speculative execution permission faults handling


There is no known exploit in the wild taking advantage of these vulnerabilities yet. But there has been a proof of concept posted by a PhD student from a university in Austria. There is little doubt that some sophisticated threat actors will attempt to take advantage of unpatched systems in the near future.


Operating systems vendors have been working on patches to mitigate these vulnerabilities. Some Linux updates are available for download. Windows updates are likely to come on patch Tuesday next week, January 9. Amazon is planning system updates on January 4. Google has made updates available to its Cloud Platform and Chrome OS and has already updated Android and G-suite. MacOS has already deployed fixes.


It is speculated that the fixes will have a non-negligible performance impact that depends on the operating system, the nature of the fix and the workload of the system.


Exposure of Juniper’s products


The Juniper SIRT is aware of the recent articles related to Intel’s processor security vulnerability and is actively investigating the impact on Juniper Networks products and services. This vulnerability affects all modern Intel processors used in various Juniper Networks products, including (but not limited to): Junos OS with Intel-based REs, the Junos Space Appliance, the NSMXpress/NSM3000/NSM4000 appliances and the CTP appliance.


To exploit this vulnerability on Junos OS based platforms, an attacker needs local authenticated privileged (admin) access and needs to bypass the veriexec check; as such, our initial CVSS base score assessment is 4.7 (CVSS:3.0/AV:L/AC:H/PR:H/UI:N/S:U/C:H/I:L/A:N). In order to mitigate this vulnerability, it is recommended to limit access to critical infrastructure networking equipment to only trusted administrators from trusted administrative networks or hosts.


Other Juniper products will provide patches as soon as they are made available by the Operating Systems vendors.


Mitigation

To mitigate this vulnerability, it is highly recommended to apply patches relevant to the operating systems you run as vendors make them available.

by mhahad at January 04, 2018 02:37 AM

January 03, 2018

My Etherealmind

It’s time for Network Field Day 17

My privilege to review what’s coming up on 24th Jan

by Greg Ferro at January 03, 2018 07:41 PM

Routing Freak

SDN with Big Data Analytics for an Intelligent Network

Software, cloud computing and IoT are rapidly transforming networks in a way, and at a rate, never seen before. With software-as-a-service (SaaS) models, enterprises are moving more and more of their critical applications and data to public and hybrid clouds. Enterprise traffic that never left the corporate network is now shifting to the Internet, reaching out to different data centers across the globe. Streaming video (Netflix, Youtube, Hulu, Amazon) accounts for an absurdly high percentage of traffic in the Internet, and content providers have built out vast content distribution networks (CDNs) that overlay the Internet backbone. Higher resolutions (HD and UHD) will increase the traffic further and, by some accounts, video will be over 80% of the total network traffic by 2020. More and more businesses are being created that reach their customers exclusively over the Internet (Spotify, Amazon, Safari, Zomato, etc). Real-time voice and video communications are moving to cloud-based delivery, and network operators are challenged to deliver these services without impacting user quality of experience. And if this wasn’t enough, with the advances being made in IoT, we have more devices than ever, lively communicating and chatting in real time over the Internet.

Security becomes a prime concern as more business critical applications migrate to the cloud. The number of DDoS attacks is only increasing, and IoT devices can be compromised by hackers to launch some very lively and innovative attacks. A large scale cyber attack in 2016 used a botnet consisting of a multitude of IoT devices such as printers, cameras, web cams, residential network gateways, and even baby monitors, causing a major outage that brought down a big chunk of the Internet.

All this traffic goes over service provider networks that were built and designed using devices, protocols and management software from the Jurassic age. The spectacular growth and variability of traffic that is experienced today was not anticipated when these networks were built. There is a dire need to cope with changing traffic patterns and to optimize the use of available network resources at all levels (IP, MPLS and Optical) — we’ll talk about the multi-layer SDN controller that optimizes the IP-Optical layers some other time.

Given these challenges, it’s imperative that service providers work towards gaining real-time visibility into network behavior and extracting the actionable insights needed to react immediately to network anomalies, changing traffic patterns, and security threats and alarms.

And this is where big data analytics, like a knight in shining armour, comes in.

Given the data rates that we are dealing with, and the rate at which traffic volumes and speeds are growing, deep packet inspection at line rate gets ruled out in most parts of the network. There is only so much that one can do with hardware’s brute force approach. Additionally, with most traffic being encrypted, DPI offers limited — no, zero — insight into what’s happening in the network.

What can help at the scale that networks run today is streaming telemetry combined with big data analytics. Instead of constantly polling the devices in the network and then reacting to what is learnt, the new age mantra is for these devices to periodically push the relevant statistics to the data collectors, which can analyse this data and act based on it. One can argue that streaming network telemetry may not even require an IETF standard in order to be useful. A standard format like JSON could be used, and it’s up to the collector to parse and interpret the incoming barrage of data. This allows network operators to quickly write dev-ops tools that they can use to closely monitor their network and services. This opens up room for hyper innovation, where new-age startups can quickly come up with products that smartly mine the data from the network and draw rich insights into what’s happening, which can help service providers run their networks smarter and hotter.
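
As a sketch of that push model (the collector address and stats schema are invented for illustration), a device-side loop doesn’t need to be much more than this:

  import json
  import random
  import socket
  import time

  # Push interface counters as JSON datagrams to a collector every few seconds.
  COLLECTOR = ("192.0.2.10", 9999)  # hypothetical collector address

  def sample_counters() -> dict:
      # Stand-in for reading real hardware counters.
      return {"interface": "xe-0/0/0",
              "in_octets": random.randrange(10**9),
              "timestamp": time.time()}

  def push_loop(interval: float = 5.0) -> None:
      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      while True:
          sock.sendto(json.dumps(sample_counters()).encode(), COLLECTOR)
          time.sleep(interval)

  if __name__ == "__main__":
      push_loop()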

Big data analytics entails ingesting, processing and storing exabytes worth of network data over a period of time that can be analysed later for actionable insights. With advances made in streaming analytics, this analysis can also happen in real time as the data comes piping hot from the network. New age scalable stream processors make it possible to fuse data streams to answer more sophisticated queries about the network in real-time.

By correlating data from sources beyond traditional routing and networking equipment (IX route-server views, DNS and CDN logs, firewall logs, billing and call detail records) it is possible for the analytics engine to identify patterns or behaviors that cannot be identified by merely sifting through the device logs (collected traditionally using SNMP, syslogs, netflow, sflow, IPFIX, etc). The ability to correlate telemetry data from the network with applications such as Netflix or Youtube, or SaaS applications such as an iOS upgrade, can provide insights that can never be found with traditional traffic engineering approaches.

I claim that we now have the smarts to avoid the famous iOS7 meltdown that happened when iOS7 was released. Let’s see how:

The analytics engine feeding the controller can identify and correlate iOS updates to a new spike — an anomaly — in network utilization inside an enterprise. The SDN controller can install more specific flows that will steer all iOS update traffic onto a different path in the network. This way the controller can automatically adjust the enterprise customer flows to either (i) provide an improved iOS update experience OR (ii) prevent other enterprise traffic from being affected by the iOS update tsunami. Advanced IP controllers (and those are being demo’ed to several service providers currently) can steer such traffic across multiple ASes as well.

We recently demo’ed a hierarchical SDN controller to a very big customer in Europe. The SDN controller was used to set up inter-domain IP/MPLS services, and it used telemetry feeds to determine the real-time link utilization of the inter-domain links. We used that information to place the inter-domain IP services across multiple ASes — the new services were placed on the least utilized inter-domain link at that instant. The services could be moved around as the link utilization changed. This is very different from how it’s done today, where the BW utilization is reserved and services are placed based on the hard reservations. IMO, the concept of hard reservations will become obsolete very soon. Why assume that a VPLS service on a link will take up 1Gbps, when the traffic that it “historically” sends never exceeds 100 Mbps?
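
The placement decision itself can be dead simple once the telemetry is in hand. A toy Python version (link names and utilization figures invented):

  # Place the new service on the least utilized inter-domain link right now.
  links = {
      "AS65001->AS65002 via link-a": 0.62,   # fraction of capacity in use
      "AS65001->AS65002 via link-b": 0.17,
      "AS65001->AS65003 via link-c": 0.41,
  }

  best = min(links, key=links.get)
  print(f"place new service on {best} ({links[best]:.0%} utilized)")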

The figure below shows the different sources feeding into a typical big data analytics cluster that feeds the output to the SDN controller.

Flow telemetry and network telemetry will help in monitoring the traffic flowing inside service provider networks. We could use this to gain a deep understanding of what a network looks like during normal operations and what it looks like when an anomaly is present in the network.

If one understands the “normal”, the abnormal can become apparent. What comprises abnormal may vary from network to network and from attack to attack. It could include large traffic spikes from a single source in the network, higher-than-typical traffic “bursts” from several or many devices in the network, or traffic types detected that are not normally sent from a known device type. Once the abnormal has been identified, the attacks can be controlled and eliminated.
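
A minimal illustration of “learn the normal, flag the abnormal” is a z-score check over per-interval traffic volumes (numbers invented; production systems use far richer baselines than this):

  from statistics import mean, stdev

  # Flag a sample that sits more than `threshold` standard deviations
  # away from the historical mean.
  def is_anomalous(history, sample, threshold=3.0):
      mu, sigma = mean(history), stdev(history)
      return sigma > 0 and abs(sample - mu) / sigma > threshold

  baseline = [980, 1010, 995, 1023, 1002, 990, 1015]  # Mbps, normal load
  print(is_anomalous(baseline, 1005))  # False: within normal variation
  print(is_anomalous(baseline, 4800))  # True: a spike worth investigating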

Network telemetry will also help in peering analytics to select the most cost-effective peering and transit connections based on current and historic traffic trends. Correlating this data with BGP feeds from route servers can help in visualizing how the traffic flows/shifts from one AS to the other.

Data collected from different sources is fed to a scalable publish/subscribe pipeline that feeds this to the big data analytics platform. Some of this can be fed to a real time streaming analytics platform for deriving rich real time insights from the network. This can then be fed to a machine learning cluster for predictive analytics.

The data is stored in a scalable data lake which can be optimized for complex, multi-dimensional queries and becomes the building block for the SDN controller to do something useful. This data can be coupled with the other data being learnt from different sources (syslog records, DNS and CDN logs, IX views, etc) and all of it can be processed and transformed into actionable intelligence. For example, this can help service providers understand the amount of Facebook, Netflix, Youtube and Amazon Prime Video traffic that’s flowing in their networks. It can help them construct a “heat map” of the most active sources and sinks. Combine this with anonymized subscriber demographics, and the big data analytics framework can provide high fidelity insights into how the subscribers, applications and the network are correlated.

This level of insight cannot be derived by merely observing the telemetry feeds alone since it is not straightforward to correlate flows with specific applications, services and subscriber end points. The ability to mine data from a panoply of sources (as shown on the left side of the figure above — DNS servers, repositories that can identify servers and end points by owner, geo-location, type and purpose) and being able to correlate them is what differentiates the new age intelligent networks from the ones that exist today.

This level of sophistication cannot be achieved without a solid big data analytics framework supporting the SDN controller. The limitless potential of what can be achieved will only unfold as more real deployments start happening in the next few years. We’re living in very interesting times, and I’m waiting with bated breath to see what the future holds and how the networking industry becomes “great again”!


by Manav Bhatia at January 03, 2018 04:15 PM


January 01, 2018

The Networking Nerd

2018 Is The Year Of Writing Everything

Welcome back to a year divisible by 2! 2018 is going to be a good year through the power of positive thinking. It’s going to be a fun year for everyone. And I’m going to do my best to have fun in 2018 as well.

Per my tradition, today is a day to look at what is going to be coming in 2018. I don’t make predictions, even if I take some shots at people that do. I also try not to look back too heavily on the things I’ve done over the past year. Google and blog searches are your friend there. Likely as not, you’ve read what I wrote this year and found one or two things useful, insightful, or amusing. What I want to do is set up what the next 52 weeks are going to look like for everyone that comes to this blog to find content.

Wearing Out The Keyboard

The past couple of years have shown me that the written word is starting to lose a bit of luster for content consumers. There’s been a big push to video. Friends like Keith Townsend, Robb Boardman, and Rowell Dionicio have started making more video content to capture people’s eyes and ears. I myself have even noticed that I spend more time listening to 10-minute video recaps of things as opposed to reading 500-600 words on the subject.

I believe that my strengths lie in my writing. I’m going to continue to produce content around that here for the foreseeable future. But that’s not to say that I can’t dabble. With the help of my media-minded co-worker Rich Stroffolino, we’ve created a weekly video discussion called the Gestalt IT Rundown. It’s mostly what you would expect from a weekly video series. Rich and I find 2-3 tech news articles and make fun of them while imparting some knowledge and perspective. We’ve been having a blast recording them and we’re going to be bringing you more of them every Wednesday in 2018.

That’s not to say that my only forays on GestaltIT.com are going to be video focused. I’m also going to be writing more there as well. I’ve had some articles go up there this year that talked about some of the briefings that I’ve been taking from networking companies. That is going to continue through 2018 as well. I’ve also had some great long form writing, including my favorite article from last year about vendors and VARs. I’m going to continue to post articles on Gestalt IT in 2018 as well as posting them here. I’m going to need to figure out the right mix of what to put up for my day job and what to put up here for my Batman job. If there’s a kind of article you prefer to see here please make sure to let me know so I can find a way to make more of them.

Lastly, in 2018 I’m going to try some new things as well with regard to my writing. I’m going to write more coverage of the events I attend. Things like Cisco Live, WLPC, and other industry events will have some discussions of what goes on there for people that can’t go or are looking for the kind of information that you can’t get from an event website. I’m also going to try to find a way to incorporate some of my more detailed pieces together as reports for consumption. I’m not sure how this is going to happen, but I figured if I put it down here as a goal then I would be required to make it happen next year.


Tom’s Take

I like writing. I like teaching and informing and telling the occasional bad joke about BGP. It’s all because I get regular readers that want to hear what I have to say that I continue to do what I do. Rather than worry about the lack of content and coverage that I’m seeing in the community, I’ve decided to do my part to put out more. It is a challenging exercise to come up with new ideas every week and put them out for the world to see. But as long as folks like you keep reading what I have to say I’ll keep writing it all down.

by networkingnerd at January 01, 2018 06:24 PM

Ethan Banks on Technology

All Of Ethan’s Podcasts And Articles For December 2017

Here’s a catalog of all the media I produced (or helped produce) in December 2017.

PACKET PUSHERS WEEKLY PODCAST

PRIORITY QUEUE PODCAST

  • Nothing new this month, but I have 3 shows in this series scheduled for 2018 already.

DATANAUTS PODCAST

  • Episode 113 – Working In High Performance Computing. Nick Buraglio is our special guest to talk through the kinds of compute problems and networking challenges HPC environments face.
  • Episode 114 – Unikernels And IncludeOS. Per Buer, CEO of IncludeOS, schools us on how unikernels differ from a traditional OS, their benefits and tradeoffs, use cases for the technology, and more. Per also gives an overview of the IncludeOS software.
  • Episode 115 – Secure Printing With HP (Sponsored). We discuss securing print infrastructure with Michael Howard, Chief Security Advisor, and Jason O’Keeffe, Print Security Advisor, at HP. These guys are the real security deal in touch with how the hacking community leverages printers as jump off points to launch attacks.

Briefings In Brief Podcast

PacketPushers.net

ETHANCBANKS.COM

NEWSLETTERS

PARTING THOUGHTS

  • In addition to writing and recording, December was about planning for 2018. I spent time with some excellent folks in the Packet Pushers community kicking around a new idea for a virtual mini-conference. I built a background planning document for the first event. Next I need to finalize the agenda for the first event.
  • We’ve kicked off a project to build a membership site with premium content and so on for Packet Pushers. That process has started with meetings and planning. Exciting to actually have this project kick off.
  • Packet Pushers shuts down for 2 weeks at the end of the year. Looking forward to vacation. Frankly, I’m tired. 2017 was a whirlwind of writing, recording, planning, and traveling. I’m ready to relax, work on my house, and maybe hike if the weather is kind.
  • My thanks to all of you that consume what we create over at Packet Pushers. We’ll see you next year.

by Ethan Banks at January 01, 2018 05:00 AM



December 28, 2017

Moving Packets

Twinax – Cheap, Cheerful and Annoyingly Chubby

What’s not to love about twinax? Formerly the exclusive domain of IBM systems, twinax has seen itself reborn in the last few years in the form of the Direct Attach Cable (DAC) used to connect systems at speeds of 10Gbps and 40Gbps (by way of bundling four twinax pairs in a single cable).

Twinax

Direct Attach Cables

Before diving into the pros and cons of DAC, it’s important to understand the different varieties that are available. A DAC is a cable which has SFP+ format connectors hard-wired on each end; plug each end into an SFP+ socket and, vendor support notwithstanding, the link should come up. A direct attach cable is frequently and erroneously referred to as a “DAC cable”, so if the words “PIN number” give you the jitters, working anywhere with DACs is likely to drive you to drink.

Passive Copper DAC (Twinax)

The most common kind of DAC is the passive DAC. The SFP+ connector on a passive DAC, give or take some electrical protection circuitry, is pretty much a direct connection from the copper in the twinax to the copper contacts which connect to the host device:

Passive Copper DAC

Sending a 10G signal over a single copper pair requires some quite clever processing to take place in both the send and receive functions, but a passive DAC does not contain the components necessary to do so. Consequently, in order to use a passive DAC, the host device has to be able to do all the processing and signal amplification itself, which may increase the cost of the interface on the host. However, since the SFP+ connector is largely devoid of complexity, passive DAC cables are – as we say in the UK – cheap as chips, costing far less than a regular short reach optical SFP+.

Passive DACs are only for use over short distances, typically recommended for use only up to five meters, which means they are used most often for server to top of rack (ToR) connectivity. Longer connection lengths are available, but require use of an Active DAC.

Active Copper DAC (Twinax)

Active copper DACs are available in slightly longer lengths, but to reach their roughly 10m maximum they have to perform the necessary signal processing and amplification themselves, so the switch does not need to have that capability and thus might be slightly cheaper to purchase, although it’s a trade-off because each active DAC will be more expensive.

Active Copper DAC

The improved embedded signal driving capabilities allow for the longer cable lengths, and the twinax copper cable itself means that active copper DACs are still relatively cheap.

Active Optical Cable (Fiber)

The new young thing in connectivity options is the active optical cable (AOC). In the same spirit as the copper DACs, these cables come with an SFP+ connector hard-wired on each end, but this time instead of twinax, the AOC uses fiber. Because fiber’s transmission characteristics are so much better than copper’s, an AOC using OM3 fiber can be as long as 300m.

Active Optical Cable

If you’re wondering what the difference is between purchasing an AOC and purchasing two SFP+ transceivers and a 300m fiber, well, you and I are in the same boat. I used Fiberstore to see roughly what it would cost to connect two Cisco devices, and I found the following:

Solution                     Item                            Price           URL
Active Optical Cable         AOC 10m                         $50             https://www.fs.com/products/30895.html
Regular Optical Connection   2 x 10G SR SFP+                 $32 ($16 each)  https://www.fs.com/products/11552.html
                             10m OM3 Multimode Duplex Fiber  $5.70           https://www.fs.com/products/41736.html

In this example, the AOC costs $50 and a non-AOC connection costs $37.70; the case for AOC is not immediately clear on a financial basis at least. It’s also worth noting that while in theory AOC supports up to a 300m connection (the same as a normal SFP+ transceiver with OM3 fiber), since the connectors are effectively part of the cable itself, it can only be used for a direct run from device to device; it’s not possible to connect using fiber patch bays.
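
For anyone who wants to sanity-check the arithmetic, here it is as a trivial Python snippet using the Fiberstore example prices above (your vendor’s list prices will differ):

  # Compare a 10m AOC against two SFP+ transceivers plus a 10m OM3 fiber.
  aoc_10m = 50.00
  sfp_plus = 16.00
  om3_10m = 5.70

  regular = 2 * sfp_plus + om3_10m
  print(f"AOC: ${aoc_10m:.2f} vs transceivers + fiber: ${regular:.2f}")
  # AOC: $50.00 vs transceivers + fiber: $37.70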

For the purposes of this post, I’m going to consider AOC as being broadly equivalent to buying SFPs and using OM3 fiber.

The Benefits of Copper DAC

It’s Cheap!

Without a doubt, using copper DAC means saving money. The Cisco switches and UCS servers I use all have support for passive copper DAC and based on a Fiberstore price of $24 for a 5m DAC, that puts the cost of a single connection at under half the cost of the 10m AOC and about 2/3 the price of the regular fiber and SFP+. Depending on how much your favorite vendor marks up their SFP+ transceivers, the saving may be even higher still.

It’s Quick!

Using a DAC means no transceivers to fumble and drop, and no need to fiddle around making sure that the LC connectors are pushed all the way home.

It’s Reliable!

An all-in-one cable means no dirt in the connectors, guaranteed cable/SFP+ compatibility (because both ends are manufactured by the same company), no physical connections to work loose, and no ‘rolled fibers’ meaning a failed initial connection.

Where It All Goes Wrong

Size Does Matter

Size is where, in my opinion, copper DAC goes wildly wrong, and passive copper DACs are the worst offender. The problem is that despite twinax being screened, it’s still very susceptible to electromagnetic interference (EMI) and signal degradation. As a consequence, it’s normal to find that the longer the DAC is, the thicker the cable is. A one meter DAC isn’t so bad, but by five meters, the cable is getting pretty chunky and increasingly inflexible. If an installation contains only a few of these cables, that’s fine, but in a high density compute environment, passive DACs are a nightmare.

Even within a single rack, it can be necessary to use cables of 1m, 2m and even 3m if using cable management, and if the ToR switch is in a neighboring rack, longer DACs get introduced and cable management rapidly becomes an issue. Active DACs are marginally better because they get away with using thinner cables for longer distances, but then the price is higher and it’s still bulkier than a pair of fibers.

Troubleshooting

Then there’s troubleshooting. The data center operations guys I speak to are unanimous in despising these dense DAC installations, because trying to get to the rear of a device through a heavy, dense curtain of passive DACs is extremely unpleasant. One data center I know has now pretty much banned DAC going forward because of how difficult it can be to trace cables and access devices when the rear of the rack is almost inaccessible, despite best efforts at cable management with those big, chunky DACs.

Airflow

I’ve not taken any measurements, but I have to imagine that such a density of cables behind a rack of servers will be an impediment to the cooling process at some level. Anybody who has built a PC knows that routing cables carefully can significantly improve the cooling capability of a computer (and boy, didn’t we just love it when SATA replaced PATA and we got to use nice thin cables instead of those great, flappy ribbons).

Conclusions?

Copper DAC can make sense if used in limited quantities, but for denser installations I vote for fiber every time. Using AOC is certainly a valid fiber option, but given the price comparison I performed above, I’m struggling to understand the benefits versus using regular SFP+ transceivers and duplex fibers. Feel free to clue me in if I’m missing something there. It’s a real shame, because the price of DAC, if nothing else, makes it incredibly attractive as a budget connectivity solution.

Bonus Update (January 2018)

After this post went live, Twitter user @neojima responded that “DAC usually = Direct Attach Copper, so “DAC cable” isn’t really irritatingly redundant. 🙂” Fair point, well made. Perhaps my criticism of “DAC Cable” is a bit unfair; it seems that both Direct Attach Copper and Direct Attach Cable are both in use out there, and I stand corrected. That said, since AOC ends in “cable” it might be easier to refer to them as DAC and AOC as if the C stands for cable in both, just for consistency. However, thanks to @neojima for the correction!

If you liked this post, please do click through to the source at Twinax – Cheap, Cheerful and Annoyingly Chubby and give me a share/like. Thank you!

by John Herbert at December 28, 2017 03:00 PM