October 23, 2016

Potaroo blog


NANOG held its 68th meeting in Dallas in October. Here's what I found memorable and/or noteworthy from this meeting.

October 23, 2016 07:00 AM

A Brief History of the IANA

October 2016 marks a milestone in the story of the Internet. At the start of the month the United States Government let its residual oversight arrangements with ICANN (the Internet Corporation for Assigned Names and Numbers) over the operation of the Internet Assigned Numbers Authority (IANA) lapse. No single government now has a unique relationship with the governance of the protocol elements of the Internet, and it is now in the hands of a community of interested parties in a so-called Multi-Stakeholder framework. This is a unique step for the Internet and not without its attendant risks. How did we get here?

October 23, 2016 06:00 AM

October 22, 2016

ipSpace.net Blog (Ivan Pepelnjak)

Why cybersecurity certifications suck

Robert Graham wrote a great blog post explaining why so many IT certifications suck.

TL&DR: because they are trivial pursuits instead of knowledge assessment tests… but do read the whole post and compare it to your recent certification experience.

by Ivan Pepelnjak (noreply@blogger.com) at October 22, 2016 09:56 AM

October 21, 2016

Security to the Core | Arbor Networks Security

On DNS and DDoS

The global DNS infrastructure provides the critical function of mapping seemingly random sets of numbers (IP addresses) to names that an Internet consumer may recognize (like www.myfavoritestore.com). To scale to a global level, the DNS system was designed as a multi-level reference network that would allow any user on the Internet […]

by ASERT team at October 21, 2016 06:53 PM

Potaroo blog


DNS OARC is the place to share research, experiences and data primarily concerned with the operation of the DNS in the Internet. Here are some highlights for me from the most recent meeting, held in October 2016 in Dallas.

October 21, 2016 12:00 PM

ipSpace.net Blog (Ivan Pepelnjak)

Basic Docker Networking

After explaining the basics of Linux containers, Dinesh Dutt moved on to the basics of Docker networking, starting with an in-depth explanation of how a container communicates with other containers on the same host, with containers residing on other hosts, and the outside world.

by Ivan Pepelnjak (noreply@blogger.com) at October 21, 2016 06:22 AM

Do Enterprises Need VRFs?

One of my readers sent me a long list of questions titled “Do enterprise customers REALLY need VRFs?”

The only answer I could give is “it depends” (it’s like asking “Do animals need wings?”), and here’s my attempt at building a decision tree:

You can use the decision tree to figure out whether you need VRFs in your data center or in your enterprise WAN.

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at October 21, 2016 03:43 AM

XKCD Comics

October 20, 2016

Honest Networker
ipSpace.net Blog (Ivan Pepelnjak)

Do You Use SSL between Load Balancers and Servers?

One of my readers sent me this question:

Using SSL over the Internet is a must when dealing with sensitive data. What about SSL between data center components (frontend load-balancers and backend web servers for example)? Does it make sense to you? Can the question be summarized as "do I trust my Datacenter network team"? Or is there more at stake?

In the ideal world in which you’d have a totally reliable transport infrastructure the answer would be “There’s no need for SSL across that infrastructure”.
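If you do decide to re-encrypt between the load balancer and the backends, the client side of that hop should verify certificates just like an Internet-facing connection would. Here's a minimal sketch using Python's standard `ssl` module; the internal-CA file name is a hypothetical placeholder:

```python
import ssl

# Build a client-side TLS context a load balancer (or any internal
# client) might use when re-encrypting traffic toward backend servers.
def backend_tls_context(ca_bundle=None):
    ctx = ssl.create_default_context()
    if ca_bundle:
        # Trust only the internal CA that signed the backend
        # certificates (e.g. "internal-ca.pem" - a placeholder name).
        ctx.load_verify_locations(cafile=ca_bundle)
    # Verify the backend's certificate and hostname, exactly as you
    # would for an external connection - no weaker.
    ctx.verify_mode = ssl.CERT_REQUIRED
    ctx.check_hostname = True
    return ctx

ctx = backend_tls_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

The point of the sketch: "internal" TLS only buys you something if certificate validation stays on; disabling verification between tiers reduces it to obfuscation.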

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at October 20, 2016 05:58 AM

October 19, 2016

The Networking Nerd

Facebook Wedge 100 – The Future of the Data Center?



Facebook is back in the news again. This time, it’s because of the release of their new Wedge 100 switch into the Open Compute Project (OCP). Wedge was already making headlines when Facebook announced it two years ago. A fast, open-source 40Gig Top-of-Rack (ToR) switch was huge. Now, Facebook is letting everyone in on the fun of a faster Wedge that has been deployed into production at Facebook data centers as well as being offered for sale through Edgecore Networks, which is itself a division of Accton. Accton has been leading the way in the whitebox switching market and Wedge 100 may be one of the ways it climbs to the top.

Holy Hardware!

Wedge 100 is pretty impressive from the spec sheet. They paid special attention to making sure the modules were expandable, especially for faster CPUs and special purpose devices down the road. That’s possible because Wedge is a highly specialized micro server already. Rather than rearchitecting the guts of the whole thing, Facebook kept the CPU and the monitoring stack and just put newer, faster modules on it to ramp to 32x100Gig connectivity.


As suspected, Facebook is using Broadcom Tomahawk as the base connectivity in their switch, which isn’t surprising. Tomahawk is the roadmap for all vendors to get to 100Gig. It also means that the downlink connectivity for these switches could conceivably work in 25/50Gig increments. However, given the enormous amount of east/west traffic that Facebook must generate, Facebook has created a server platform they call Yosemite that has 100Gig links as well. Given the probable backplane there, you can imagine the data that’s getting thrown around the data centers.

That’s not all. Omar Baldonado has said that they are looking at going to 400Gig connectivity soon. That’s the kind of mind-blowing speed that you see in places like Google and Facebook. Remember that this hardware is built for a specific purpose. They don’t just have elephant flows. They have flows the size of an elephant herd. That’s why they fret about the operating temperature of optics or the rack design they want to use (standard versus Open Racks). Because every little change matters a thousandfold at that scale.

Software For The People

The other exciting announcement from Facebook was on the software front. Of course, FBOSS has been updated to work with Wedge 100. I found it very interesting in the press release that much of the programming in FBOSS went into interoperability with Wedge 40 and with fixing the hardware side of things. This makes some sense when you realize that Facebook didn’t need to spend a lot of time making Wedge 40 interoperate with anything, since it was a wholesale replacement. But Wedge 100 would need to coexist with Wedge 40 as the rollout happens, so making everything play nice is a huge point on the checklist.

The other software announcement that got the community talking was support for third-party operating systems running on Wedge 100. The first one up was Open Network Linux from Big Switch Networks. ONL ran on the original Wedge 40 and now runs on the Wedge 100. This means that if you’re familiar with running BSN OSes on your devices, you can drop in a Wedge 100 in your spine or fabric and be ready to go.

The second exciting announcement about software comes from a new company, Apstra. Apstra announced their entry into OCP and their intent to get their Apstra Operating System (AOS) running on Wedge 100 by next year. That has a big potential impact for Apstra customers that want to deploy these switches down the road. I hope to hear more about this from Apstra during their presentation at Networking Field Day 13 next month.

Tom’s Take

Facebook is blazing a trail for fast ToR switches. They’ve got the technical chops to build what they need and release the designs to the rest of the world to be used for a variety of ideas. Granted, your data center looks nothing like Facebook. But the ideas they are pioneering are having an impact down the line. If Open Rack catches on you may see different ideas in data center standardization. If the Six Pack catches on as a new chassis concept, it’s going to change spines as well.

If you want to get your hands dirty with Wedge, build a new 100Gig pod and buy one from Edgecore. The downlinks can break out into 10Gig and 25Gig links for servers and knowing it can run ONL or Apstra AOS (eventually) gives you some familiar ground to start from. If it runs as fast as they say it does, it may be a better investment right now than waiting for Tomahawk II to come to your favorite vendor.



by networkingnerd at October 19, 2016 04:49 PM

XKCD Comics
Keeping It Classless

A New Automation Chapter Begins

Two years ago, while I worked as a network engineer/consultant, I felt strongly that the industry was ripe for change. In February 2015 I jumped feet-first into the world of network automation by going back to my roots in software development, combining those skills with the lessons I learned from 3 years of network engineering.

I’ve learned a ton in the last 2 years - not just at the day job but by actively participating in the automation and open source communities. I’ve co-authored a network automation book. I’ve released an open source project to facilitate automated and distributed testing of network infrastructure. I’ve spoken publicly about many of these concepts and more.

Despite all this, there’s a lot left to do, and I want to make sure I’m in the best place to help move the industry forward. My goal is and has always been to help the industry at large realize the benefits of automation, and break the preconception that automation is only useful for big web properties like Google and Facebook. Bringing these concepts down to Earth and providing very practical steps to achieve this goal is a huge passion of mine.

Automation isn’t just about running some scripts - it’s about autonomous software. It’s about creating a pipeline of actions that take place with minimal human input. It’s about maintaining high quality software. I wrote about this and more yesterday in my post on the “Principles of Automation”.


Later this month, I’m starting a new chapter in my career and joining the team at StackStorm.

In short, StackStorm (the project) is an event-driven automation platform. Use cases include auto-remediation, security responses, facilitated troubleshooting, and complex deployments.

StackStorm presented at the recent Network Field Day 12 and discussed not only the core platform, but some of the use cases that, while not specifically network-centric, are important to consider:

Video: https://www.youtube.com/embed/M_hacp2qd70

When I first saw StackStorm, I realized quickly that the project aligned well with the Principles of Automation I was rattling around in my head, especially the Rule of Autonomy, which dictates that automation should be driven by input from other software systems. StackStorm makes it easy to move beyond simple “scripts” and truly drive decisions based on events that take place elsewhere.
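To make the event-driven idea concrete, here is a toy sketch of the pattern in plain Python. This is illustrative only - it is not StackStorm's actual API - but it shows the shape of "events trigger rules, rules fire actions" that distinguishes this model from a human-launched script:

```python
# A miniature event -> rule -> action loop, in the spirit of
# event-driven automation platforms. Rules register interest in an
# event type; dispatch runs every matching action automatically.
rules = []

def rule(event_type):
    """Decorator that binds an action function to an event type."""
    def register(action):
        rules.append((event_type, action))
        return action
    return register

@rule("interface.down")
def remediate(event):
    # A hypothetical auto-remediation action.
    return f"bounce port {event['port']}"

def dispatch(event):
    # No human in the loop: the monitoring system emits the event,
    # and every rule matching its type fires.
    return [action(event) for etype, action in rules
            if etype == event["type"]]

print(dispatch({"type": "interface.down", "port": "eth0"}))
# -> ['bounce port eth0']
```

The input comes from another software system (a monitoring alert, a webhook), not from a person typing arguments - which is exactly the Rule of Autonomy described below.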

So, how does this change things in terms of my community involvement? Actually, I expect this to improve. Naturally, you’ll likely see me writing and talking about StackStorm and related technologies - not just because they’re my employer but because the project matches well with my automation ideals and principles. This does NOT mean that I will stop talking about other concepts and projects. One unique thing about automation is that it’s never one-size-fits-all… you’re always going to deal with multiple tools in a pipeline to get the job done. I am still very passionate about the people and process problems that aren’t tackled directly by technology solutions, and I plan to continue to grow my own experience in these areas and share it with you all.

I still very strongly believe that the first problems we should be solving in the networking industry, and in IT as a whole, are problems of culture and process. So, from that perspective, nothing has changed - but from this new team I feel like I’ll have the support and platform I need to really get these ideas out there.

Lastly, there are still openings on the team so if you’re passionate about automation, please consider applying.

By no means am I done yet - but I do want to take the opportunity to say Thank You to all who have been a part of my public journey for the past 5+ years. I couldn’t have had the learning experiences I’ve had without readers who were just as passionate about technology. My goal is only to increase my involvement in the community in the years to come, and I hope that what I contribute is helpful.

I attended NFD12 as a delegate as part of Tech Field Day, well before I started talking with the StackStorm team about employment opportunities. Events like these are sponsored by networking vendors who may cover a portion of our travel costs. In addition to a presentation (or more), vendors may give us a tasty unicorn burger, a warm sweater made from the presenter’s beard, or a similar tchotchke. The vendors sponsoring Tech Field Day events don’t ask for, nor are they promised, any kind of consideration in the writing of my blog posts… and as always, all opinions expressed here are entirely my own. (Full disclaimer here)

October 19, 2016 12:00 AM

October 18, 2016

Security to the Core | Arbor Networks Security

The Great DGA of Sphinx

This post takes a quick look at Sphinx’s domain generation algorithm (DGA). Sphinx, another Zeus-based banking trojan variant, has been around circa August 2015. The DGA domains are used as a backup mechanism for when the primary hardcoded command and control (C2) servers go down. It is currently unknown to us as to what version […]
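To illustrate the backup mechanism the excerpt describes, here is a toy DGA sketch. This is NOT Sphinx's actual algorithm - just an illustration of the general idea: a seed shared by the malware and its operator deterministically yields the same candidate domain list on both sides, so C2 can be re-established even if hardcoded servers are taken down:

```python
import hashlib

def toy_dga(seed, date, count=5, tld=".com"):
    """Generate a deterministic list of pseudo-random backup domains."""
    domains = []
    for i in range(count):
        data = f"{seed}-{date}-{i}".encode()
        digest = hashlib.md5(data).hexdigest()
        # Map the first 12 hex chars of the digest onto letters a-p.
        label = "".join(chr(ord("a") + int(c, 16) % 26)
                        for c in digest[:12])
        domains.append(label + tld)
    return domains

# Malware and operator compute the same list for the same seed/date.
print(toy_dga("demo-seed", "2016-10-18"))
```

Because the output depends only on the seed and date, defenders who recover the algorithm can pre-compute and sinkhole the domains - which is why reverse-engineering DGAs like Sphinx's matters.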

by Dennis Schwarz at October 18, 2016 03:55 PM

ipSpace.net Blog (Ivan Pepelnjak)

Save the date: Leaf-and-Spine Fabric Design Workshop in Zurich

Do you believe in vendor-supplied black box (regardless of whether you call it ACI or SDDC) or in building your own data center fabric using solid design principles?

It should be an easy choice if you believe a business should control its own destiny instead of being pulled around by vendor marketing (to paraphrase Russ White).

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at October 18, 2016 07:23 AM

Potaroo blog

IPv6 and the DNS

We often think of the Internet as the web, or even these days as just a set of apps. When we look at the progress with the transition to IPv6 we talk about how these apps are accessible using IPv6 and mark this as progress. But the Internet is more than just these services. There is a whole substructure of support and if we are thinking about an IPv6 Internet then everything needs to change. So here I want to look at perhaps the most critical of these hidden infrastructure elements - the Domain Name System. How are we going with using IPv6 in the DNS?

October 18, 2016 01:00 AM

Keeping It Classless

Principles of Automation

Automation is an increasingly interesting topic in pretty much every technology discipline these days. There’s lots of talk about tooling, practices, skill set evolution, and more - but little conversation about fundamentals. What little is published by those actually practicing automation, usually takes the form of source code or technical whitepapers. While these are obviously valuable, they don’t usually cover some of the fundamental basics that could prove useful to the reader who wishes to perform similar things in their own organization, but may have different technical requirements.

I write this post to cover what I’m calling the “Principles of Automation”. I have pondered this topic for a while and I believe I have three principles that cover just about any form of automation you may consider. These principles have nothing to do with technology disciplines, tools, or programming languages - they are fundamental principles that you can adopt regardless of the implementation.

I hope you enjoy.

It’s a bit of a long post, so TL;DR - automation isn’t magic. It isn’t only for the “elite”. Follow these guidelines and you can realize the same value regardless of your scale.


Lately I’ve been obsessed with a game called “Factorio”. In it, you play an engineer that’s crash-landed on a planet with little more than the clothes on your back, and some tools for gathering raw materials like iron or copper ore, coal, wood, etc. Your objective is to use these materials, and your systems know-how to construct more and more complicated systems that eventually construct a rocket ship to blast off from the planet.

Even the very first stages of this game end up being more complicated than they initially appear. Among your initial inventory is a drill that you can use to mine coal, a useful ingredient for anything that needs to burn fuel - but the drill itself actually requires that same fuel. So, the first thing you need to do is mine some coal by hand, to get the drill started.

We can also use some of the raw materials to manually kick-start some automation. With a second drill, we can start mining for raw iron ore. In order to do that we need to build a “burner inserter”, which moves the coal that the first drill gathered into the second drill:

Even this very early automation requires manual intervention, as it all requires coal to burn, and not everything has coal automatically delivered to it (yet).

Now, there are things you can do to improve your own efficiency, such as building/using better tools:

However, this is just one optimization out of a multitude. Our objectives will never be met if we only think about optimizing the manual process; we need to adopt a “big picture” systems mindset.

Eventually we have a reasonably good system in place for mining raw materials; we now need to move to the next level in the technology tree, and start smelting our raw iron ore into iron plates. As with other parts of our system, at first we start by manually placing raw iron ore and coal into a furnace. However, we soon realize that we can be much more efficient if we allow some burner inserters to take care of this for us:

With a little extra work we can automate coal delivery to this furnace as well:

There’s too much to Factorio to provide screenshots of every step - the number of technology layers you must go through in order to unlock fairly basic technology like solar power is astounding; not to mention being able to launch a fully functional rocket.

As you continue to automate processes, you continue to unlock higher and higher capabilities and technology; they all build on each other. Along the way you run into all kinds of issues. These issues could arise in trying to create new technology, or you could uncover a bottleneck that didn’t reveal itself until the system scaled to a certain point.

For instance, in the last few screenshots we started smelting some iron plates to use for things like pipes or circuit boards. Eventually, the demand for this very basic resource will outgrow the supply - so as you build production facilities, you have to consider how well they’ll scale as the demand increases. Here’s an example of an iron smelting “facility” that’s built to scale horizontally:

Scaling out one part of this system isn’t all you need to be aware of, however. The full end-to-end supply chain matters too.

As an example, a “green” science pack is one resource that’s used to perform research that unlocks technologies in Factorio. If you are running short on these, you may immediately think “Well, hey, I need to add more factories that produce green science packs!”. However, the bottleneck might not be the number of factories producing green science, but further back in the system.

Green science packs are made by combining a single inserter with a single transport belt panel - and in the screenshot above, while we have plenty of transport belt panels, we aren’t getting any inserters! This means we now have to analyze the part of our system that produces that part - which also might be suffering a shortage in its supply chain. Sometimes such shortages can be traced all the way down to the lowest level - running out of raw ore.

In summary, Factorio is a really cool game that you should definitely check out - but if you work around systems as part of your day job, I encourage you to pay close attention to the following sections, as I’d like to recap some of the systems design principles that I’ve illustrated above. I really do believe there are some valuable lessons to be learned here.

I refer to these as the Principles of Automation, and they are:

  • The Rule of Algorithmic Thinking
  • The Rule of Bottlenecks
  • The Rule of Autonomy

The Rule of Algorithmic Thinking

Repeat after me: “Everything is a system”.

Come to grips with this, because this is where automation ceases to be some magical concept only for the huge hyperscale companies like Facebook and Google. Everything you do, say, or smell is part of a system, whether you think it is or not; from the complicated systems that power your favorite social media site, all the way down to the water cycle:

By the way, just as humans are a part of the water cycle, humans are and always will be part of an automated system you construct.

In all areas of IT there is a lot of hand-waving; engineers claim to know a technology, but when things go wrong, and it’s necessary to go deeper, they don’t really know it that well. Another name for this could be “user manual” engineering - they know how it should work when things go well, but don’t actually know what makes it tick, which is useful when things start to break.

There are many tangible skills that you can acquire that an automation or software team will find attractive, such as language experience and automated testing. It’s important to know how to write idiomatic code. It’s important to understand what real quality looks like in software systems. However, these things are fairly easy to learn with a little bit of experience. What’s more difficult is understanding what it means to write a meaningful test, and not just check the box when a line of code is “covered”. That kind of skill set requires more experience, and a lot of passion (you have to want to write good tests).

Harder still is the ability to look at a system with a “big picture” perspective, while also being able to drill in to a specific part and optimize it… and most importantly, the wisdom to know when to do the latter. I like to refer to this skill as “Algorithmic Thinking”. Engineers with this skill are able to mentally deconstruct a system into its component parts without getting tunnel vision on any one of them - maintaining that systems perspective.

If you think Algorithms are some super-advanced topic that’s way over your head, they’re not. See one of my earlier posts for a demystification of this subject.

A great way to understand this skill is to imagine you’re in an interview, and the interviewer asks you to enumerate all of the steps needed to load a web page. Simple, right? It sure seems like it at first, but what’s really happening is that the interviewer is trying to understand how well you know (or want to know) all of the complex activities that take place in order to load a web page. Sure, the user types a URL into the address bar and hits enter - then the HTTP request magically takes place. Right? Well, how did the machine know what IP address was being represented by that domain name? That leads you to the DNS configuration. How did the machine know how to reach the DNS server address? That leads you to the routing table, which likely indicates the default gateway is used to reach the DNS server. How does the machine get the DNS traffic to the default gateway? In that case, ARP is used to identify the right MAC address to use as the destination for that first hop.
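The very first step in that chain - "how did the machine know what IP address was being represented by that domain name?" - is easy to make concrete. A minimal sketch using Python's standard `socket` module (resolving `localhost` so the example doesn't depend on an external resolver):

```python
import socket

def resolve(name, port=80):
    """Return the unique IP addresses the resolver produces for a name."""
    results = socket.getaddrinfo(name, port, proto=socket.IPPROTO_TCP)
    # Each result tuple is (family, type, proto, canonname, sockaddr);
    # the address itself is the first element of sockaddr.
    return sorted({sockaddr[0] for *_, sockaddr in results})

print(resolve("localhost"))  # e.g. ['127.0.0.1', '::1']
```

Even this one call hides further layers - the hosts file, the stub resolver, the recursive server, root/TLD/authoritative referrals - which is exactly the point: nothing is magic, only layers of abstraction.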

Those are just some of the high-level steps that take place before the request can even be sent. Algorithmic thinking recognizes that each part of a system, no matter how simple, has numerous subsystems that all perform their own tasks. It is the ability to understand that nothing is magic - only layers of abstraction. These days, this is understandably a tall order. As technology gets more and more advanced, so do the abstractions. It may seem impossible to be able to operate at both sides of the spectrum.

It’s true, no one can know everything. However, a skilled engineer will have the wisdom to dive behind the abstraction when appropriate. After all, the aforementioned “problem” seemed simple, but there are a multitude of things going on behind the scenes - any one of which could have prevented that page from loading. Being able to think algorithmically doesn’t mean you know everything, but it does mean that when a problem arises, it might be time to jump a little further down the rabbit hole.

Gaining experience with automation is all about demystification. Automation is not magic, and it’s not reserved only for Facebook and Google. It is the recognition that we are all part of a system, and if we don’t want to get paged at 3AM anymore, we may as well put software in place that allows us to remove ourselves from that part of the system. If we have the right mindset, we’ll know where to apply those kinds of solutions.

Most of us have close friends or family members that are completely non-technical. You know, the type that breaks computers just by looking at them. My suggestion to you is this: if you really want to learn a technology, figure out how to explain it to them. Until you can do that, you don’t really know it that well.

The Rule of Bottlenecks

Recently I was having a conversation with a web developer about automated testing. They argued that they wanted to use automated testing, but couldn’t, because each web application they deployed for customers was a snowflake custom build, and it was not feasible to do anything but manual testing (click this, type this). Upon further inspection, I discovered that the majority of their customer requirements were nearly identical. In this case, the real bottleneck wasn’t just that they weren’t doing automated testing; they weren’t even setting themselves up to be able to do it in the first place. In terms of systems design, the problem was much closer to the source - I don’t mean “source code”, but that the problem lay further up the chain of events that could lead to being able to do automated testing.

I hear the same old story in networking. “Our network can’t be automated or tested, we’re too unique. We have a special snowflake network”. This highlights an often overlooked part of network automation, and that is that the network design has to be solid. Network automation isn’t just about code - it’s about simple design too; the network has to be designed with automation in mind.

This is what DevOps is really about. Not automation or tooling, but communication. The ability to share feedback about design-related issues with the other parts of the technology discipline. Yes, this means you need to seek out and proactively talk to your developers. Developers, this means sitting down with your peers on the infrastructure side. Get over it and learn from each other.

Once you’ve learned to think Algorithmically, you start to look at your infrastructure like a graph - a series of nodes and edges. The nodes would be your servers, your switches, your access points, your operating systems. These nodes communicate with each other on a huge mesh of edges. When failures happen, they often cause a cascading effect, not unlike the cascading shortages I illustrated in Factorio where a shortage of green science packs doesn’t necessarily mean I need to spin up more green science machines. The bottleneck might not always be where you think it is; in order to fix the real problem, understanding how to locate the real bottleneck is a good skill to have.
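The "walk upstream to the real bottleneck" idea can be sketched in a few lines. This toy model (component names borrowed from the Factorio example above, stock levels invented for illustration) follows empty stock through the dependency graph rather than stopping at the first symptom:

```python
# Supply chain as a graph: each item maps to the items it consumes.
supply = {
    "green-science": ["inserter", "transport-belt"],
    "inserter": ["iron-plate", "gear"],
    "transport-belt": ["iron-plate", "gear"],
    "gear": ["iron-plate"],
    "iron-plate": ["iron-ore"],
    "iron-ore": [],
}

# Current stock: green science is short, but so is everything beneath it.
stock = {"green-science": 0, "transport-belt": 50, "gear": 40,
         "inserter": 0, "iron-plate": 0, "iron-ore": 0}

def find_bottleneck(item):
    """Recurse into the first exhausted dependency; the deepest
    shortage - not the visible symptom - is the real bottleneck."""
    for dep in supply[item]:
        if stock.get(dep, 0) == 0:
            return find_bottleneck(dep)
    return item

# The symptom is "no green science", but the cause is raw ore.
print(find_bottleneck("green-science"))  # -> iron-ore
```

Naively adding more green-science "factories" here would fix nothing; the trace shows the shortage originates at the ore, several layers upstream of where the pain is felt.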

The cause of a bottleneck could be bad design:

Or it could be improper/insufficient input (which could in turn be caused by a bad design elsewhere):

One part of good design is understanding the kind of scale you might have to deal with and reflecting it in your design. This doesn’t mean you have to build something that scales to trillions of nodes today, only that the system you put in place doesn’t prevent you from scaling organically in the near future.

As an example, when I built a new plant in Factorio to produce copper wiring, I didn’t build 20 factories, I started with 2 - but I allowed myself room for 20, in case I needed it in the future. In the same way, you can design with scale in mind without having to boil the ocean and actually build a solution that meets some crazy unrealistic demand on day one.

This blog post is already way too long to talk about proper design, especially considering that this post is fairly technology-agnostic. For now, suffice it to say that having a proper design is important, especially if you’re going into a new automation project. It’s okay to write some quick prototypes to figure some stuff out, but before you commit yourself to a design, do it on paper (or whiteboard) first. Understanding the steps there will save you a lot of headaches in the long run. Think about the system-to-be using an Algorithmic mindset, and walk through each of the steps in the system to ensure you understand each level.

As the system matures, it’s going to have bottlenecks. That bottleneck might be a human being that still holds power over a manual process you didn’t know existed. It might be an aging service that was written in the 80s. Just like in Factorio, something somewhere will be a bottleneck - the question is, do you know where it is, and is it worth addressing? It may not be. Everything is a tradeoff, and some bottlenecks are tolerable at certain points in the maturity of the system.

The Rule of Autonomy

I am very passionate about this section; here, we’re going to talk about the impact of automation on human beings.

Factorio is a game where you ascend the tech tree towards the ultimate goal of launching a rocket. As the game progresses, and you automate more and more of the system (which you have to do in order to complete the game in any reasonable time), you unlock more and more elaborate and complicated technologies, which then enable you to climb even higher. Building a solid foundation means you spend less time fussing with gears and armatures, and more time unlocking capabilities you simply didn’t have before.

In the “real” world, the idea that automation means human beings are removed from a system is patently false. If anything, automation actually creates more opportunities for human beings, because it enables new capabilities that weren’t possible before it existed. Anyone who tells you otherwise doesn’t have a ton of experience in automation. Automation is not a night/day difference - it is an iterative process. We didn’t start Factorio with a working factory - we started it with the clothes on our back.

This idea is well described by Jevons’ Paradox, which basically states that the more efficiently you produce a resource, the greater the demand for that resource grows.

Not only is automation highly incremental, it’s also imperfect at every layer. Everything in systems design is about tradeoffs. At the beginning of Factorio, we had to manually insert coal into many of the components; this was a worthy tradeoff due to the simple nature of the system. It wasn’t that big of a deal to do this part manually at that stage, because the system was an infant.

However, at some point, our factory needed to grow. We needed to allow the two parts to exchange resources directly instead of manually ferrying them between components.

The Rule of Autonomy is this: machines can communicate with other machines really well. Let them. Of course, automation is an iterative process, so you’ll undoubtedly start out by writing a few scripts and leveraging some APIs to do some task you previously had to do yourself, but don’t stop there. Always be asking yourself if you need to be in the direct path at all. Maybe you don’t really need to provide input to the script in order for it to do its work; maybe you can change that script to operate autonomously by getting that input from some other system in your infrastructure.

As an example, I once had a script that would automatically put together a Cisco MDS configuration based on some WWPNs I put into a spreadsheet. This script wasn’t useless, it saved me a lot of time, and helped ensure a consistent configuration between deployments. However, it still required my input, specifically for the WWPNs. I quickly decided it wouldn’t be that hard to extend this script to make API calls to Cisco UCS to get those WWPNs and automatically place them into the switch configuration. I was no longer required for that part of the system, it operated autonomously. Of course, I’d return to this software periodically to make improvements, but largely it was off my plate. I was able to focus on other things that I wanted to explore in greater depth.
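In that spirit, here’s a minimal sketch of what that workflow looked like once the human was removed from it. The host names, WWPNs, and the upstream URL are purely illustrative (this is not a real UCS API path) - the point is that the mapping now comes from another system instead of a spreadsheet:

```python
# Sketch: generate Cisco MDS device-alias configuration from a
# host -> WWPN mapping fetched from an upstream system.
# The fetch URL and JSON shape below are hypothetical.
import json
from urllib.request import urlopen

def render_device_aliases(wwpns):
    """Render MDS device-alias commands from a {host: wwpn} mapping."""
    lines = ["device-alias database"]
    for host, wwpn in sorted(wwpns.items()):
        lines.append(f"  device-alias name {host} pwwn {wwpn}")
    lines.append("device-alias commit")
    return "\n".join(lines)

def fetch_wwpns(url):
    """Pull the host -> WWPN mapping from some other system (hypothetical)."""
    with urlopen(url) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Previously this mapping came from a spreadsheet I filled in by hand;
    # replace the literal with fetch_wwpns(<your inventory endpoint>).
    wwpns = {"esx01-hba0": "20:00:00:25:b5:00:00:0f"}
    print(render_device_aliases(wwpns))
```

The rendering logic never changes - only where the input comes from, which is exactly the part worth automating away.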

The goal is to remove humans as functional components of a subsystem so they can make improvements to the system as a whole. Writing code is not magic - it is the machine manifestation of human logic. For many tasks, there is no need to have a human manually enumerate the steps required to perform a task; that human logic can be described in code and used to work on the human’s behalf. So when we talk about replacing humans in a particular part of a system, what we’re really talking about is reproducing the logic that they’d employ in order to perform a task as code that doesn’t get tired, burnt out, or narrowly focused. It works asynchronously to the human, and therefore will allow the human to then go make the same reproduction elsewhere, or make other improvements to the system as a whole. If you insist on staying “the cog” in a machine, you’ll quickly lose sight of the big picture.

This idea that “automation will take my job” is based on the incorrect assumption that once automation is in place, the work is over. Automation is not a monolithic “automate everything” movement. Like our efforts in Factorio, automation is designed to take a particular workflow in one very small part of the overall system and take it off of our plates, once we understand it well enough. Once that’s done, our attention is freed up to explore new capabilities we were simply unable to address while we were mired in the lower details of the system. We constantly remove ourselves as humans from higher and higher parts of the system.

Note that I said “parts” of the system. Remember: everything is a system, so it’s foolish to think that human beings can (or should) be entirely removed - you’re always going to need human input to the system as a whole. In technology there are just some things that require human input - like new policies or processes. Keeping that in mind, always be asking yourself “Do I really need human input at this specific part of the system?” Constantly challenge this idea.

Automation is not about removing human beings from a system. It’s about moving humans to a new part of the system, and about allowing automation to be driven by events that take place elsewhere in the system.


Note that I haven’t really talked about specific tools or languages in this post. It may seem strange - often when other automation junkies talk about how to get involved, they talk about learning to code, or learning Ansible or Puppet, etc. As I’ve mentioned earlier in this post (and as I’ve presented at conferences), this is all very meaningful - at some point the rubber needs to meet the road. However, when doing this yourself, hearing about someone else’s implementation details is not enough - you need some core fundamentals to aim for.

The best way to get involved with automation is to want it. I can’t make you want to invest in automation as a skill set, nor can your manager; only you can do that. I believe that if the motivation is there, you’ll figure out the right languages and tools for yourself. Instead, I like to focus on the fundamentals listed above - which are language and tool agnostic. These are core principles that I wish I had known about when I started on this journey - principles that don’t readily reveal themselves in a quick Stack Overflow search.

That said, my parting advice is:

  1. Get Motivated - think of a problem you actually care about. “Hello World” examples get old pretty fast. It’s really hard to build quality systems if you don’t care. Get some passion, or hire folks that have it. Take ownership of your system. Make the move to automation with strategic vision, and not a half-cocked effort.
  2. Experiment - learn the tools and languages that are most powerful for you. Automation is like cooking - you can’t just tie yourself to the recipe book. You have to learn the fundamentals and screw up a few times to really learn. Make mistakes, and devise automated tests that ensure you don’t make the same mistake twice.
  3. Collaborate - there are others out there that are going through this journey with you. Sign up for the networktocode slack channel (free) and participate in the community.

October 18, 2016 12:00 AM

October 17, 2016

Networking Now (Juniper Blog)

Automating Cyber Threat Intelligence with SkyATP: Part One

Each year, the economics of "fighting back" against Hacktivism, CyberCrime, and the occasional State-Sponsored attack become more and more untenable for the typical Enterprise. It's nearly impossible for the average Security Team to stay up to date with the latest emerging threats while also being tasked with their regular duties. Given the current economic climate, the luxury of having a dedicated team to perform Cyber Threat Intelligence (CTI) is generally out of reach for all but the largest of Enterprises. While automated identification, curation, and enforcement of CTI cannot truly replace human Security Analysts (yet), it has been shown to go a long way towards increasing the effectiveness and agility of your Security infrastructure. 

by cdods at October 17, 2016 05:25 PM

Putting a Dent in Cybercrime: From Industry to Individual

The idea of a lone hacker maliciously tapping away in a dark room is an antiquated one. The business of cybercrime is now a multibillion-dollar enterprise with highly organized entities looking to exploit vulnerabilities and scam businesses and consumers in our increasingly networked world. According to a Juniper commissioned report from the RAND Corporation:


The cyber black market has evolved from a varied landscape of discrete, ad hoc individuals into a network of highly organized groups, often connected with traditional crime groups (e.g., drug cartels, mafias, terrorist cells) and nation-states. It does not differ much from a traditional market or other typical criminal enterprises; participants communicate through various channels, place their orders, and get products.


Today, attackers are much more efficient in their efforts than ever before, driven by the ability to work with others in the criminal underground. I worry that, left unchecked, these organizations will only become more challenging to defend against.

by Kevin Walker at October 17, 2016 04:00 PM

XKCD Comics

October 15, 2016

ipSpace.net Blog (Ivan Pepelnjak)

One of the better explanations of SDN

Stumbled upon this via HighScalability:

Every time I feel like I'm "out of touch" with the hip new thing, I take a weekend to look into it. I tend to discover that the core principles are the same [...]; or you can tell they didn't learn from the previous solution and this new one misses the mark, but it'll be three years before anyone notices (because those with experience probably aren't touching it yet, and those without experience will discover the shortcomings in time.)

Yep, that explains the whole centralized control plane ruckus ;) Read also a similar musing by Ethan Banks.

by Ivan Pepelnjak (noreply@blogger.com) at October 15, 2016 05:16 PM

Fast Linux Packet Forwarding with Thomas Graf on Software Gone Wild

We did several podcasts describing how one could get stellar packet forwarding performance on x86 servers by reimplementing the whole forwarding stack outside of the kernel (Snabb Switch) or bypassing the Linux kernel and moving the packet processing into userspace (PF_Ring).

Now let’s see if it’s possible to improve the Linux kernel forwarding performance. Thomas Graf, one of the authors of Cilium, claims it can be done and explained the intricate details in Episode 64 of Software Gone Wild.

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at October 15, 2016 06:27 AM

October 14, 2016

XKCD Comics
Jason Edelman's Blog


There is a lot of buzz around network APIs such as NETCONF and RESTCONF. Here we’ll take a quick look at these APIs on Cisco IOS XE. On the surface, it seems Cisco IOS XE is the first network device platform that supports both NETCONF and RESTCONF driven from YANG models.

Technically, RESTCONF isn’t officially supported, and the CLI command to enable it is hidden - but more on that later.


When APIs are model driven, the model is the source of truth. If done right, all API documentation and configuration validation could occur using tooling built directly from the models. YANG is the leading data modeling language and as such, all API requests using RESTCONF/NETCONF are directly modeled from the YANG models IOS XE supports. For this post, we’ll just say the models can easily be represented as JSON k/v pairs or XML documents. We’ll cover YANG in more detail in a future post.


You can directly access the NETCONF server on IOS XE using the following SSH command (or the equivalent from an SSH client).

The NETCONF server runs as an SSH subsystem.

$ ssh -p 830 ntc@csr1kv -s netconf 

The full response from the IOS XE NETCONF server can be seen below.

When you get the response from the device, you need to respond with client capabilities, and then you can enter NETCONF request objects into that terminal session, communicating directly with the device using NETCONF — all without writing any code. This is a good way to ensure your XML objects are built properly before using them in any type of script.

So first, we can paste this object into the terminal:

<?xml version="1.0" encoding="UTF-8"?>
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
    <capabilities>
        <capability>urn:ietf:params:netconf:base:1.0</capability>
    </capabilities>
</hello>
]]>]]>

Now that the client and the server have exchanged capabilities, the client is able to send NETCONF request objects.

We are going to query the device for the IP configuration on the GigabitEthernet2 interface. We’ll do this by sending the following object to the device (still in the same terminal session from above).

This is not a typical interactive session, so don’t be alarmed if you aren’t getting feedback from the device before pasting in this object.

<?xml version="1.0"?>
<nc:rpc message-id="101" xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0">
    <nc:get>
        <nc:filter type="subtree">
            <native xmlns="http://cisco.com/ns/yang/ned/ios">
                <interface>
                    <GigabitEthernet>
                        <name>2</name>
                        <ip>
                            <address/>
                        </ip>
                    </GigabitEthernet>
                </interface>
            </native>
        </nc:filter>
    </nc:get>
</nc:rpc>
]]>]]>

This is the response we get back:

<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="101" xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0"><data><native xmlns="http://cisco.com/ns/yang/ned/ios"><interface><GigabitEthernet><name>2</name><ip><address><primary><address></address><mask></mask></primary></address></ip></GigabitEthernet></interface></native></data></rpc-reply>]]>]]>

And if we clean up the response:

<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="101">
    <data>
        <native xmlns="http://cisco.com/ns/yang/ned/ios">
            <interface>
                <GigabitEthernet>
                    <name>2</name>
                    <ip>
                        <address>
                            <primary>
                                <address></address>
                                <mask></mask>
                            </primary>
                        </address>
                    </ip>
                </GigabitEthernet>
            </interface>
        </native>
    </data>
</rpc-reply>

We can see the structured XML response that makes it extremely easy to programmatically get data out of devices (as well as configure them). Say goodbye to manual parsing forever.
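To show just how easy that is, here’s a short sketch using nothing but the Python standard library. The reply string is abbreviated from the rpc-reply above, with sample address values filled in purely for illustration (the device in this post returned them empty):

```python
# Pull the primary IP address and mask out of a NETCONF rpc-reply
# using only the standard library -- no manual string parsing.
import xml.etree.ElementTree as ET

NS = {"ios": "http://cisco.com/ns/yang/ned/ios"}

# Abbreviated from the rpc-reply shown above; address/mask values
# are hypothetical samples.
REPLY = """<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <data>
    <native xmlns="http://cisco.com/ns/yang/ned/ios">
      <interface><GigabitEthernet>
        <name>2</name>
        <ip><address><primary>
          <address>10.0.0.2</address><mask>255.255.255.0</mask>
        </primary></address></ip>
      </GigabitEthernet></interface>
    </native>
  </data>
</rpc-reply>"""

root = ET.fromstring(REPLY)
primary = root.find(".//ios:primary", NS)
ip = primary.find("ios:address", NS).text
mask = primary.find("ios:mask", NS).text
print(ip, mask)  # prints: 10.0.0.2 255.255.255.0
```

Three lines of lookups replace what used to be a pile of regular expressions against `show` output.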


The RESTCONF API on IOS XE is built from the same models NETCONF is using. You also have your choice of XML or JSON data encoding when using RESTCONF.

Here we’ll use JSON.

The following URL using an HTTP GET accomplishes the same thing as shown in the previous NETCONF GET operation.


The JSON response returned back to us is this:

{
  "ned:address": {
    "primary": {
      "address": "",
      "mask": ""
    }
  }
}

This maps nicely back into a Python dictionary that we can easily parse and work with.
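For example, a quick sketch of loading that reply (values left empty, exactly as in the response above) straight into a dictionary:

```python
# Load the RESTCONF JSON reply into a plain Python dictionary
# and address its fields directly -- no parsing code required.
import json

reply = """
{
  "ned:address": {
    "primary": {
      "address": "",
      "mask": ""
    }
  }
}
"""

data = json.loads(reply)
primary = data["ned:address"]["primary"]
print(sorted(primary))  # → ['address', 'mask']
```

From here, `primary["address"]` and `primary["mask"]` are ordinary dictionary lookups.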


  • RESTCONF and NETCONF are both model driven APIs on IOS XE
  • RESTCONF is NOT the same REST API that has been on the CSR1KV or IOS XE - it’s a brand new API
  • You’ll need 16.3.1 to test this - the testing for this post used the CSR1KV
  • The RESTCONF/NETCONF APIs support 100s of YANG models - all testing here was using the native Cisco IOS model. This is a personal favorite of mine as the full running configuration is modeled in JSON/XML.

RESTCONF, as good as it seems, is not yet officially supported by TAC and it’s actually hidden in the CLI. Why? Who knows? But if you’re interested in it, make sure Cisco is aware.

As you can see, there is no RESTCONF command:

% Unrecognized command

But, watch this:

csr1(config)#do show run | inc restconf

And it then works like a charm.


Check the Network to Code Labs.

NETCONF Server Capabilities


<?xml version="1.0" encoding="UTF-8"?>
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

October 14, 2016 12:00 AM

October 13, 2016

Networking Now (Juniper Blog)

Holistic Network Protection Paradigms: A Change in Cybersecurity Mindset from Network Security to Secure Networks

Traditional cybersecurity approaches involving perimeter-only protection are no longer enough to prevent data breaches and potential data exfiltration. Our cyber adversaries have grown in sophistication while needing very little training and only inexpensive equipment, and the standard attack anatomy is changing. Protection against state actors, lone wolf actors, and insider threats is becoming increasingly problematic. The evolution of threats has necessitated a change in the security mindset from a high-trust (trust what’s inside) to a zero-trust (trust nothing) posture. The traditional methods of high-trust security have created a kind of architected fragility that is inflexible and unable to adapt, quickly or at all, to protect against the constant barrage of cyber threats.

by DanielleDMZ at October 13, 2016 05:45 PM

ipSpace.net Blog (Ivan Pepelnjak)

Network Automation RFP Requirements

After finishing the network automation part of a recent SDN workshop I told the attendees “Vote with your wallet. If your current vendor doesn’t support the network automation functionality you need, move on.”

Not surprisingly, the next question was “And what shall we ask for?” Here’s a short list of ideas, please add yours in comments.

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at October 13, 2016 06:47 AM

October 12, 2016

PacketLife.net Blog

Legacy TLS cipher support in Firefox

After upgrading Firefox recently, I noticed that I could no longer access certain embedded devices via HTTPS. It seems that recent versions of Firefox and Chrome no longer support certain TLS ciphers due to recently discovered vulnerabilities. That's all well and good, except the error returned offers no recourse if you need to connect anyway.


Firefox returns the error SSL_ERROR_NO_CYPHER_OVERLAP with no option to temporarily allow connectivity. (Chrome reports a similar error named ERR_SSL_VERSION_OR_CIPHER_MISMATCH.) Presumably, this choice was made by the developers with the intention of forcing people to upgrade outdated devices. Unfortunately, in order to upgrade an out-of-date device, we typically must first be able to connect to it. I wasted a fair bit of time digging up a solution, so I figured I'd document the workaround here for when I inevitably run into this problem again a year from now and have forgotten what I did.
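Before digging into browser workarounds, it helps to confirm what the device actually still speaks. Here’s a rough probe in Python (the host below is a placeholder for your own device; note that a modern OpenSSL build may itself refuse the legacy protocol versions, so a `None` result isn’t conclusive proof the device lacks support):

```python
# Probe which TLS versions a device will negotiate, and with which cipher.
# Certificate verification is disabled on purpose: embedded devices almost
# always present self-signed certs, and we only care about the handshake.
import socket
import ssl

def probe(host, port=443, timeout=5):
    results = {}
    for name, version in [("TLSv1", ssl.TLSVersion.TLSv1),
                          ("TLSv1.2", ssl.TLSVersion.TLSv1_2)]:
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        # Pin both ends of the range so only this one version is offered.
        ctx.minimum_version = version
        ctx.maximum_version = version
        try:
            with socket.create_connection((host, port), timeout) as sock:
                with ctx.wrap_socket(sock) as tls:
                    results[name] = tls.cipher()[0]  # negotiated cipher name
        except (ssl.SSLError, OSError):
            results[name] = None  # handshake refused or host unreachable
    return results

if __name__ == "__main__":
    # Placeholder host -- point this at the embedded device in question.
    print(probe("device.example.net", 443, timeout=3))
```

If only the `TLSv1` probe succeeds, you know you’re dealing with exactly the situation described above.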

Continue reading · 2 comments

by Jeremy Stretch at October 12, 2016 05:45 PM

The Networking Nerd

Tomahawk II – Performance Over Programmability


Broadcom announced a new addition to their growing family of merchant silicon today. The new Broadcom Tomahawk II is a monster. It doubles the speed of its first-generation predecessor. It has 6.4 Tbps of aggregate throughput, divided up into 256 25Gbps ports that can be combined into 128 50Gbps or even 64 100Gbps ports. That’s fast no matter how you slice it.

Broadcom is aiming to push these switches into niches like High-Performance Computing (HPC) and massive data centers doing big data/analytics or video processing to start. The use cases for 25/50Gbps haven’t really changed. What Broadcom is delivering now is port density. I fully expect to see top-of-rack (ToR) switches running 25Gbps down to the servers with new add-in cards connected to 50Gbps uplinks that deliver them to the massive new Tomahawk II switches running in the spine or end-of-row (EoR) configuration for east-west traffic distribution.

Another curious fact about the Tomahawk II is the complete lack of 40Gbps support. Granted, that support was only paid lip service in the Tomahawk I. The real focus was on shifting to 25/50Gbps instead of the weird 10/40/100Gbps split we had in Trident II. I talked about this a couple of years ago and wasn’t very high on it back then, but I didn’t know the level of apathy people had for 40Gbps uplinks. The push to 25/50Gbps has only been held up so far by the lack of availability of new NICs for servers to enable faster speeds. Now that those are starting to be produced in volume, expect the 40Gbps uplink to become a relic of the past.

A Foot In The Door

Not everyone is entirely happy about the new Broadcom Tomahawk II. I received an email today with a quote from Martin Izzard of Barefoot Networks, discussing their new Tofino platform. He said in part:

Barefoot led the way in June with the introduction of Tofino, the world’s first fully programmable switches, which also happen to be the fastest switches ever built.

It’s true that Tofino is very fast. It was the first 6.4 Tbps switch on the market. I talked a bit about it a few months ago. But I think that Barefoot is a bit off on its assessment here and has a bit of an axe to grind.

Barefoot is pushing something special with Tofino. They are looking to create a super fast platform with programmability. Tofino is not quite an FPGA and it’s not an ASIC. It’s a switch stripped to its core and rebuilt with a language all its own, P4. That’s great if you’re a dev shop or a niche market that has to squeeze every ounce of performance out of a switch. In the world of cars, the best analogy would be looking at Tofino like a specialized sports car such as a Koenigsegg Agera. It’s very fast and very stylish, but it’s purpose built to do one thing – drive really fast on pavement and carry two passengers.

Broadcom doesn’t really care about development shops. They don’t worry about niche markets, because those users are not their customers. Their customers are Arista, Cisco, Brocade, Juniper and others. Broadcom really is the Intel of the switching world. Their platforms power vendor offerings. Buying a basic Tomahawk II isn’t something you’re going to be able to do. Broadcom will only sell these in huge lots to companies that are building something with them. To keep the car analogy, Tomahawk II is more like the old F-body cars produced by Chevrolet that later went on to become Camaros, Firebirds, and Trans Ams. Each of these cars was distinctive and had its fans, but the chassis was the same underneath the skin.

Broadcom wants everyone to buy their silicon and use it to power the next generation of switches. Barefoot wants a specialist kit that is faster than anything else on the market, provided you’re willing to put the time into learning P4 and stripping out all the bits they feel are unnecessary. Your use case determines your hardware. That hasn’t changed, nor is it likely to change any time soon.

Tom’s Take

The data center will be 25/50/100Gbps top to bottom when the next switch refresh happens. It could even be there sooner if you want to move to a pod-based architecture instead of more traditional designs. The odds are very good that you’re going to be running Tomahawk or Tomahawk II depending on which vendor you buy from. You’re probably only going to be running something special like Tofino or maybe even Cavium if you’ve got a specific workload or architecture that you need performance or programmability.

Don’t wait for the next round of hardware to come out before you have an upgrade plan. Write it now. Think about where you want to be in 4 years. Now double your requirements. Start investigating. Ask your vendor of choice what their plans are. If their plans stink, ask their competitor. Get quotes. Get ideas. Be ready for the meeting when it’s scheduled. Make sure you’re ready to work with your management to bury the hatchet, not end up with a hatchet job of a network.

by networkingnerd at October 12, 2016 04:59 PM

Dyn Research (Was Renesys Blog)

War-torn Syrian city gets new fiber link


The northern Syrian city of Aleppo is one of the key battlegrounds of that country’s ongoing civil war as well as the epicenter of the European refugee crisis.  The most appropriate United States response to events in Aleppo has become a major foreign policy question among the candidates in this year’s U.S. presidential election.  Experts are now predicting that forces loyal to President Bashar al-Assad, backed by the Russian military, will take control of rebel-held eastern Aleppo within weeks.  The image below (from Wikipedia) illustrates the current state (as of 9 October 2016) of the conflict in Aleppo, depicting rebel-held regions in green and those under government control in red.

Amidst all of this, the Syrian Communications and Technology Ministry announced this week that they had completed a new fiber optic line connecting the parts of Aleppo loyal to President Assad to the state telecom’s core network in Damascus, increasing available bandwidth for residents.  It had previously been connected by a high-capacity microwave link.

From a BGP routing standpoint, this development was reflected by the disappearance of AS24814 — we first reported the appearance of AS24814 serving Aleppo in 2013.  At 14:42 UTC (5:42pm local) on 10 October 2016, we saw the 14 prefixes announced by AS24814 shift over to Syrian Telecom’s AS29256.  Below is one of those 14 BGP routes changing origins, as seen in Dyn’s Internet Intelligence — Network product.


History of AS24814

In August of 2013 we jointly published a blog post with the Washington Post documenting the loss of Internet service in Syria’s most populous city, Aleppo.  Up until then, Internet service in Aleppo had been reliant on transit from Turk Telekom via a fiber optic cable traversing the rebel-held city of Saraqib to the west.  This is the same Saraqib that was the site of suspected chemical weapons attacks months earlier.  Fiber optic communications gear in Saraqib was disabled by rebels on 29 August 2013, resulting in the outage we reported, and service through Saraqib has never been restored.

In an effort to restore service to Aleppo, Syrian Telecom (formerly known as STE) created an emergency fiber circuit to reach Turk Telekom via Idlib, Syria.  The link was activated at 14:45 UTC on 8 October 2013 and it was then that AS24814 was first employed, designating this emergency communications link.  We described this development in a 2013 blog post here.

While AS24814 (i.e., Internet service to Aleppo and surrounding areas) suffered numerous outages (we reported most via Twitter), service was lost for an extended period of time beginning in March 2015 when rebels took Idlib as part of the 2015 Idlib offensive.  The fiber optic cables carrying service for AS24814 (Aleppo) were destroyed as a result of that offensive, resulting in a nearly 8-month outage.

AS24814 would only reappear on 8 November 2015 after Syria’s government military forces, aided by a new Russian bombing campaign, made the situation safe enough for government telecommunications engineers to reconnect Aleppo using a high capacity microwave link to the coastal city of Latakia, Syria.


Yesterday, that microwave link (along with the emergency Internet service designated by AS24814) was retired and replaced with a new fiber optic cable, providing substantially greater bandwidth and routed through Khanasir, a town located safely inside government-controlled territory.


Internet service in rebel-held eastern Aleppo will not benefit from the new fiber optic cable described in this blog.  The remaining residents in this part of the city rely on a mix of small VSAT satellite terminals and unsanctioned microwave links into Turkey.

Last year, we speculated that the involvement of the Russian military in the fighting around Aleppo had sufficiently tilted the conflict in favor of the Assad regime to create the space needed for Syrian Telecom technicians to restore Internet service to the embattled city.  The establishment of a new fiber optic cable servicing Aleppo marks further infrastructure consolidation in northern Syria by the government and, thanks to the Russian jets, may well presage the re-establishment of President Assad’s rule throughout the city.

The post War-torn Syrian city gets new fiber link appeared first on Dyn Research.

by Doug Madory at October 12, 2016 02:36 PM

Network Design and Architecture

Benefits of MPLS – Why MPLS is used ? – MPLS Advantages

Benefits of MPLS, Why MPLS is used on today networks and the Advantages of MPLS will be explained in this post. As an Encapsulation and VPN mechanism, MPLS brings many benefits to the IP networks. In this article most of them will be explained and design examples will be shared by referring more detailed articles on the website as […]

The post Benefits of MPLS – Why MPLS is used ? – MPLS Advantages appeared first on Cisco Network Design and Architecture | CCDE Bootcamp | orhanergun.net.

by Orhan Ergun at October 12, 2016 09:10 AM

ipSpace.net Blog (Ivan Pepelnjak)

Do I Need Redundant Firewalls?

One of my readers sent me this question:

I often see designs involving more than 2 DCs spread over different locations. I was actually wondering if it makes sense to build high availability inside the DC when there's redundancy in place between the DCs. For example, is there a good reason to put a cluster of firewalls in a DC when it is possible to quickly fail over to another available DC? A redundant cluster increases cost, licensing, and complexity.

Rule#1 of good engineering: Know Your Problem ;) In this particular case:

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at October 12, 2016 08:44 AM

Network Design and Architecture

BGP in MPLS Layer 3 VPN – BGP as a PE-CE Routing Protocol

BGP can be used as a PE-CE routing protocol in MPLS Layer 3 VPN. Also Service Providers run MP-BGP (Multiprotocol BGP) if they have MPLS Layer 3 VPN. In this article, MP-BGP will not be explained since it has been explained here earlier in detail. When BGP is used as a PE-CE routing protocol between […]

The post BGP in MPLS Layer 3 VPN – BGP as a PE-CE Routing Protocol appeared first on Cisco Network Design and Architecture | CCDE Bootcamp | orhanergun.net.

by Orhan Ergun at October 12, 2016 08:01 AM

XKCD Comics

October 11, 2016

Networking Now (Juniper Blog)

Can’t Fight the Cyber War Alone? Share Your Threat Intelligence!

In his keynote speech during the RSA conference of 2011, the former director of the NSA, Gen. Keith B. Alexander, made an interesting statement: “Securing our nation’s network is a team sport”. It is clearer now than ever that no one can fight the cyber war alone, and that community efforts to share cyber threat intelligence can benefit all participants, even in a competitive environment.


When it comes to modern cyber threats, the attackers seem to have the upper hand. Regardless of their motivation, their engagement with the target has many asymmetric characteristics that work to their benefit, creating the need for new defense concepts deployed in a seemingly never-ending arms race.

One of those new concepts is the sharing of real-time actionable cyber threat intelligence (CTI) - the exchange of a dynamic feed of threat- or attack-related objects utilized for enforcement or analysis at the receiving end. Sharing CTI between different organizations represents a collaborative effort to improve cyber defense posture by leveraging the capabilities, knowledge, and experience of the broader community. Such deployments may take different technological and structural forms, eventually reducing duplication of effort while enabling one organization’s detection to become another organization’s prevention.


In recent years, a growing number of sharing alliances have emerged: between individuals using social networks, within the same vertical market, across different sectors in the same geography, between commercial and government bodies, and even among countries. In many cases these sharing initiatives represent a shift in the organization’s legacy IT paradigm, and create a complex, multifaceted challenge spanning technology, law, organizational culture, privacy and more[1]. These challenges are bigger when the parties are direct competitors or have other conflicts of interest, as demonstrated in my research-in-progress conducted at the Blavatnik Interdisciplinary Cyber Research Center (ICRC). The research analyzes threat intelligence sharing between cybersecurity vendors, with the goal of creating visibility into, and understanding of, the ecosystem formed within this industry. Since the shared information is closely related to the core business of the firms, it clearly presents the challenge of combining collaboration with competition, known as coopetition.


Security vendors have already embraced CTI as a defense concept, providing their customers with a viable solution, but the disaggregation of the solution elements described in Figure 1 allows them to mutually use feeds from each other, or to provide their threat intelligence using another vendor as a sell-through channel. These three elements may belong to one or several vendors, and be deployed as a single product or multiple products, either on customer premises or in the cloud. The source point of the information flow is a threat intelligence feed, and the destination is a policy enforcement or decision point. In between, an optional element called a Threat Intelligence Platform (TIP) may act as an exchange point tying several sources and destinations together. Integration between all elements is based on either proprietary APIs or evolving standards such as STIX™, TAXII™, and CybOX™.
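As a toy illustration of that flow, here is a sketch of the middle element: normalizing a raw feed of indicator objects into a blocklist that an enforcement point could consume. The field names and thresholds are hypothetical, not actual STIX/TAXII structures:

```python
# Sketch of a threat-intelligence "platform" step: filter a raw feed of
# indicator objects down to high-confidence IP indicators for enforcement.
# Field names ("type", "value", "confidence") are illustrative only.
def normalize(feed, min_confidence=80):
    """Return a sorted, de-duplicated blocklist of IPv4 indicators."""
    return sorted({i["value"] for i in feed
                   if i["type"] == "ipv4"
                   and i["confidence"] >= min_confidence})

# A hypothetical feed mixing indicator types and confidence levels.
feed = [
    {"type": "ipv4", "value": "203.0.113.9", "confidence": 95},
    {"type": "domain", "value": "bad.example", "confidence": 99},
    {"type": "ipv4", "value": "198.51.100.7", "confidence": 40},
]

print(normalize(feed))  # → ['203.0.113.9']
```

One organization’s detections enter as feed objects; another organization’s prevention leaves as a blocklist - the exchange formats (STIX, TAXII) standardize what the dictionaries above hand-wave.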


Figure 1 – Disaggregated elements of threat intelligence sharing

The key findings of the research suggest that cooperating with competitors is a winning strategy, showing a correlation between a vendor’s market-related success indicators and its number of sharing relationships. Furthermore, the industry as a whole is a coopetition-fit environment divided into social-network communities, where successful companies attract new relationships at a higher rate, following the “rich-gets-richer” phenomenon. In addition, intelligence sharing can result in better security coverage, direct and indirect financial gains, and benefit to the greater good.


Given the possible advantages to companies, and the challenge of fighting the cyber war alone, many organizations are reconsidering their policies on sharing cyber-related information with outside parties, demonstrating that crowd wisdom is applicable in the cybersecurity domain. For more on the topic from both academic and industry perspectives, join my presentation “101 to Threat Intelligence Sharing” at the (ISC)² Security Congress EMEA in Dublin, 18-19 October 2016, or at the CSX 2016 Europe conference in London, 31 October-2 November 2016.


[1] Zrahia, A. (2014). A multidisciplinary analysis of cyber information sharing. Military and Strategic Affairs, 6(3), 59-77. E-ISSN 2307-8634. The Institute for National Security Studies (INSS), Tel-Aviv University.

by Aviram Zrahia at October 11, 2016 09:20 AM

ipSpace.net Blog (Ivan Pepelnjak)

Check Out the Designing Active-Active and Disaster Recovery Data Centers Webinar

The featured webinar in October 2016 is the Designing Active-Active and Disaster Recovery Data Centers webinar, and the featured videos include the discussion of disaster avoidance challenges and the caveats you might encounter with long-distance vMotion. All ipSpace.net subscribers can view these videos; if you're not one of them yet, start with the trial subscription.

As a trial subscriber you can also use this month's featured webinar discount to purchase the webinar.

by Ivan Pepelnjak (noreply@blogger.com) at October 11, 2016 07:40 AM

October 10, 2016

Network Design and Architecture

OSPF to IS-IS Migration

There are many reasons for migrating from OSPF to IS-IS, particularly in Service Provider networks. Some of these reasons are shared later in the case study. OSPF to IS-IS migration can be done in three ways. In this article I will share the 'ships in the night' approach, which relies on having both routing […]
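As a rough illustration of the idea (not the article's full procedure), a ships-in-the-night setup on a Cisco IOS router might look like the sketch below: IS-IS is enabled alongside OSPF on the same links, with its administrative distance temporarily raised so OSPF keeps steering traffic until the IS-IS topology is verified. Interface names, process IDs, and the NET address are hypothetical.

```
router ospf 1
 network 10.0.0.0 0.255.255.255 area 0
!
router isis CORE
 net 49.0001.0000.0000.0001.00
 is-type level-2-only
 distance 254              ! keep IS-IS routes less preferred during migration
!
interface GigabitEthernet0/0
 ip router isis CORE       ! IS-IS adjacency forms alongside OSPF ("ships in the night")
```

Once the IS-IS database matches expectations, the distance override is removed (or OSPF's distance raised) and OSPF is eventually decommissioned.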

The post OSPF to IS-IS Migration appeared first on Cisco Network Design and Architecture | CCDE Bootcamp | orhanergun.net.

by Orhan Ergun at October 10, 2016 04:20 PM

Networking Now (Juniper Blog)

Boards Need to “Get on Board” to Assess and Address Cybersecurity Risk

Far from a back room, IT-centric issue, cybersecurity is now front and center as organizations of all sizes work to define and execute strategies that mitigate risk and defend against and combat attacks. In order for companies to be as prepared as possible, a strong and effective cybersecurity strategy requires active board participation. Cybersecurity is no longer the sole responsibility of the Chief Information Officer or technically focused employees. Industry-leading companies understand this and are planning and executing accordingly.

by bworrall at October 10, 2016 04:00 PM

Network Design and Architecture

Design considerations for network mergers and acquisitions

Network mergers and acquisitions are processes that can occur in any type of business. As network designers, our job is to identify the business requirements of both existing networks and the merged network, and to find the best possible technical solutions for the business. There are many different areas that need to be analyzed carefully. Wrong business requirement […]

The post Design considerations for network mergers and acquisitions appeared first on Cisco Network Design and Architecture | CCDE Bootcamp | orhanergun.net.

by Orhan Ergun at October 10, 2016 10:06 AM

ipSpace.net Blog (Ivan Pepelnjak)

The Impact of ICMP Redirects

One of my readers sent me an interesting question after reading my ICMP Redirects blog post:

In Cisco IOS, when a packet is marked by IOS for ICMP redirect to a better gateway, that packet is being punted to the CPU, right?

It depends on the platform, but it’s going to hurt no matter what.
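Whatever the platform-specific punting behavior, the usual operational mitigation is simply not to generate redirects where they aren't needed. A hedged IOS-style sketch (interface name hypothetical):

```
interface GigabitEthernet0/1
 no ip redirects       ! stop generating ICMP redirects, so packets needing one
                       ! are no longer punted to the CPU on platforms that do so
```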

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at October 10, 2016 06:39 AM

October 09, 2016

Network Design and Architecture

OSPF Multi-Area Adjacency | OSPF Area Placement | RFC 5185

OSPFv2 by default sets up only one adjacency over a single link. But this can be an issue at times, and as a network designer you should understand the consequences and know the available solutions. Placing a link in the wrong OSPF area can create OSPF sub-optimal routing, especially in a hub-and-spoke topology. In […]
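As a hedged sketch of the RFC 5185 solution on Cisco IOS (interface and area numbers hypothetical), a second, point-to-point adjacency can be formed on the same link in an additional area:

```
interface GigabitEthernet0/0
 ip ospf 1 area 0            ! primary adjacency in the backbone
 ip ospf multi-area 1        ! additional point-to-point adjacency in area 1 (RFC 5185)
```

This lets the link carry intra-area paths for both areas, avoiding the sub-optimal routing caused by placing it in only one.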

The post OSPF Multi-Area Adjacency | OSPF Area Placement | RFC 5185 appeared first on Cisco Network Design and Architecture | CCDE Bootcamp | orhanergun.net.

by Orhan Ergun at October 09, 2016 05:44 PM

IS-IS Design considerations on MPLS backbone

Using IS-IS with MPLS requires some important design considerations. IS-IS, as a scalable link-state routing protocol, has been used in Service Provider networks for decades. In fact, eight of the nine largest Service Providers use the IS-IS routing protocol in their networks today. If LDP is […]
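One common consideration when LDP carries the labels is LDP-IGP synchronization: a link should not attract traffic before LDP is operational on it, or labeled traffic gets blackholed. A hedged IOS-style sketch (process name and timer hypothetical):

```
mpls ldp igp sync holddown 10000    ! optional: cap how long the IGP waits for LDP (ms)
!
router isis CORE
 mpls ldp sync                      ! advertise max metric on a link until LDP is up on it
```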

The post IS-IS Design considerations on MPLS backbone appeared first on Cisco Network Design and Architecture | CCDE Bootcamp | orhanergun.net.

by Orhan Ergun at October 09, 2016 05:04 PM

ipSpace.net Blog (Ivan Pepelnjak)

Using DNS Names in Firewall Rulesets

My friend Matthias Luft sent me an interesting tweet a while ago:


All I could say in 160 characters was “it depends”. Here’s a longer answer.
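Part of why "it depends": name-based rules are expanded to addresses at resolution time, and the expansion goes stale as DNS records change. A minimal Python sketch of that expansion step (my illustration, not the blog post's code; rule tuples and the helper name are hypothetical):

```python
import socket

def resolve_ruleset(name_rules):
    """Expand DNS-name-based rules into address-based rules at a point in time.
    The catch: the result is only as fresh as the last resolution, so it must
    be re-run (ideally honoring record TTLs) or the ruleset drifts from reality."""
    expanded = []
    for action, hostname, port in name_rules:
        infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
        addresses = sorted({info[4][0] for info in infos})  # unique A/AAAA answers
        for addr in addresses:
            expanded.append((action, addr, port))
    return expanded

rules = resolve_ruleset([("allow", "localhost", 443)])
```

A firewall that round-robins or geo-balances its DNS answers will return different addresses on each resolution, which is exactly where the "it depends" starts.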

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at October 09, 2016 12:29 PM

Networker's Online

TCP Protocol: Slow Start

In the last post we explained the basic idea of using sequence and acknowledgement numbers to track how many bytes were sent and received. We also encountered the term "slow start" and elaborated on how TCP uses this concept on the server to send a few segments of data to the receiver instead of sending the full …
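The growth pattern behind slow start can be sketched in a few lines of Python (an illustrative model, not the post's code; the function name and default ssthresh are hypothetical): the congestion window starts at a few segments and doubles every round-trip until it reaches the slow-start threshold.

```python
def slow_start_segments(initial_cwnd=1, ssthresh=32, rtts=6):
    """Model TCP slow start: the congestion window (in segments) starts small
    and doubles every round-trip until it hits ssthresh, after which
    congestion avoidance would grow it linearly (not modeled here)."""
    cwnd = initial_cwnd
    history = []
    for _ in range(rtts):
        history.append(cwnd)
        cwnd = min(cwnd * 2, ssthresh)  # exponential growth, capped at ssthresh
    return history
```

Despite the name, the growth is exponential per RTT (1, 2, 4, 8, …); "slow" only refers to starting from a small window rather than blasting the full receive window at once.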

The post TCP Protocol: Slow Start appeared first on Networkers-online.com.

by Mohamed Kamal at October 09, 2016 10:44 AM

October 08, 2016

The Networking Nerd

Thoughts on Theft


It’s been a busy week for me. In fact, it’s been a busy few weeks. I’ve had lots of time to enjoy NetApp Insight, Cloud Field Day, and Storage Field Day. I’ve also been doing my best to post interesting thoughts and ideas. Whether it’s taking on the CCIE program or keynote speakers, I feel like I owe a debt to the community and my readers to talk about topics that are important to them, or at least should be. Which is why I’m irritated right now about those ideas being stolen.

Beg, Borrow, and Steal

A large part of my current job is finding people that are writing great things and shining a spotlight on them. I like reading interesting ideas. And I like sharing those ideas with people. But when I share those ideas with people, I make absolutely sure that everyone knows where those ideas came from originally. And if I use those ideas for writing my own content, I take special care to point out where they came from and try to provide the context for the original statement in the first place.

What annoys me to no end is when people take ideas as their own and try to use them for their own ends. It’s not all that difficult. You can use weasel words like “sources” or “I heard once” or even “I read this article”. Those are usually good signs that content is going to be appropriated for some purpose. It’s also a sign that research isn’t being done or attributed properly. It’s lazy journalism at best.

What really grinds my gears is when my ideas are specifically taken and used elsewhere without attribution. Luckily, I haven’t had to deal with it much so far. I have a fairly liberal policy about sharing my work. I just want people to recognize the original author. But when my words end up in someone else’s mouth, that’s when the problems start.

Credit Where It Is Due

Taking ideas given freely without offering a clue as to where they come from is theft. Plain and simple. It takes the hard work that someone has put into thinking through an issue and wraps it up in a cloudy mess. Now, who is to say (beyond dates) who was the originator of the idea? It's just as easy to say that someone else came up with it. That's what makes tracing the origin of things so difficult. Proper attribution for ideas is important in a society where knowledge carries so much weight.

I don’t expect to make millions of dollars from my ideas. I have opinions. I have thoughts. Sometimes people agree with them. Just as often, people disagree. The point is not to be right or wrong or rich. The true point is to make sure that the thoughts and ideas of a person are placed where they belong when the threads are all unwound.

Honestly, I don’t even really want a ton of credit. It does me little good to have someone shouting from the rooftops that I was the first person to talk about something. Or that I was right when everyone else was wrong. But when the butcher’s bill comes due, I’d at least like to have my name attached to my thoughts.

Tom’s Take

I’ve luckily been able to have most of my appropriated content taken down. Some have used it as fuel for a link-bait scheme to get paid. Others have used it as a way to build a blog’s readership for some strange purpose. Thankfully, I’ve never run into anyone that was vocally taking credit for my writing and passing it off as their own. If you are a smart person and willing to write things down, do the best you can with what you have. You don’t need to take something else that someone has written and attempt to make it your own. That just tarnishes what you’re trying to do and makes all your writing suspect. Be the best you can be and no one will ever question who you are.

by networkingnerd at October 08, 2016 02:42 AM

October 07, 2016

Jason Edelman's Blog

Launching Network to Code On Demand Labs

I changed things up this week and wrote an article on LinkedIn about the launch of the Network to Code On Demand Labs platform.

It is a cloud-based service that allows you to launch any number of network topologies in minutes for simulations, labs, demos, and testing.

Check it out if you want 22 free hours of on-demand access to network devices of your choice!




October 07, 2016 12:00 AM