September 27, 2016

 Blog (Ivan Pepelnjak)

How Do I Get a Grasp of SDN and NFV?

One of my readers had problems getting the NFV big picture (and how it relates to SDN):

I find the topic area of SDN and NFV a bit overwhelming in terms of information, particularly the NFV bit.

NFV is a really simple concept (network services packaged in VM format), what makes it complex is all the infrastructure you need around it.

Read more ...

by Ivan Pepelnjak at September 27, 2016 06:57 AM

September 26, 2016

Ethan Banks on Technology

Auto-Adding Routes When Mac PPTP Connection Comes Up

Before you read this post, understand that PPTP is insecure. Don’t use PPTP to create a VPN to anything you care about. Really. Apple has even pulled PPTP support from macOS Sierra. Read all about PPTP’s Apple death here, and thanks to @scottm32768 for letting me know about it.


Skip to Solution #3.


When successfully making a PPTP connection to a remote VPN server with the built-in Mac OS X client, you find that you can’t connect to hosts on the other side of the VPN tunnel. You can still connect to the Internet and LAN hosts.

The root issue is that, by default, OS X has no reason to send traffic across the VPN tunnel. A reason must be provided.

Solution #1 – Setting Service Order

In System Preferences > Network, perform “Set Service Order” (the drop down gear icon), and move the PPTP connection to the top of the list.

This means that when the PPTP tunnel is up, traffic will flow through it before other network connections. This will gain you access to hosts on the other side of the VPN tunnel. It will also break everything else, unless the network on the other side of the PPTP tunnel can also service your Internet traffic. This is going to be a function of the VPN termination device as well as the firewall configuration at the remote site.

The issue here is that ALL traffic, even your Internet traffic, will be routed through the tunnel. Thus, Internet traffic on your system is tossed into the tunnel, pops out at the remote site, gets hairpinned right back out through the remote network’s firewall, hits the Internet server you were trying to reach, comes all the way back to the remote network, and finally gets popped back into the tunnel to you. Not all firewalls or VPN termination devices will be configured to support this hairpin routing.

If you choose this method, remember to set a DNS server in your PPTP connection profile that can be reached via the VPN tunnel. Something public like Google’s 8.8.8.8 and 8.8.4.4 might work. This is important because there’s a good chance your local DNS server will become unreachable as soon as the tunnel comes up, leaving you without name resolution. You might have connectivity, but without name resolution, it will feel like you don’t.

Solution #2 – Disabling Split Tunneling

By default, OS X will “split tunnel” when using the built-in PPTP client. That is, traffic will follow OS X’s routing table. Networks on the other side of the tunnel flow via the tunnel, assuming there are routes that send appropriate traffic that way. Other traffic, such as local LAN or Internet, flows via the wifi or Ethernet connection directly – no tunnel. Therefore, traffic is “split” between the tunnel and physical network interfaces. You can check OS X’s routing table via netstat -rn.

The catch here is that bringing up a PPTP tunnel doesn’t automatically add routes to OS X’s routing table, which is why your PPTP tunnel doesn’t seem to be working and you’re reading this article. There’s a tunnel, but nothing instructing OS X to forward any traffic across that tunnel. Therefore, you’re going to check a box that defeats split tunneling, forcing all traffic into the tunnel.

In System Preferences > Network, select the PPTP connection profile. Click the “Advanced…” button. Check “Send all traffic over VPN connection”. In this case, the service order doesn’t matter.

All the same caveats about hairpin routing and DNS as mentioned in solution #1 hold true.

Solution #3 (and my favorite) – /etc/ppp/ip-up

The script /etc/ppp/ip-up will automatically fire after a PPTP tunnel is brought up. This appears to be standard pppd behavior on *NIX systems, based on this documentation:

Once the PPP link is established, pppd looks for /etc/ppp/ip-up. If this script exists and is executable, the PPP daemon executes the script. This allows you to automate any special routing commands that may be necessary and any other actions that you want to occur every time the PPP link is activated.

This is definitely the behavior of OS X. When the PPTP tunnel comes up, the /etc/ppp/ip-up script fires. Therefore, you can use this script to add routes to the OS X routing table.

1. Create /etc/ppp/ip-up as sudo. If you aren’t a sudo-er on your Mac (i.e. not an admin equivalent), this is going to be an issue for you. You have to have root equivalent to edit this script. I use vi as my editor. Thus, sudo vi /etc/ppp/ip-up.

2. Let’s say there are two networks I care about on the other side of my PPTP tunnel: for example, 10.10.0.0/16 and 10.20.0.0/16. An /etc/ppp/ip-up script to add them to the routing table could look as follows.

#!/bin/sh
/sbin/route add -net 10.10.0.0/16 -interface $1
/sbin/route add -net 10.20.0.0/16 -interface $1

3. We’re using the explicit path “/sbin/” to be certain that the script can find the route command.

4. The $1 is a variable representing the name of the interface used by PPPd.

5. Make sure root is the owner of /etc/ppp/ip-up. It should be by default. sudo chown root /etc/ppp/ip-up

6. Make sure the script is executable. It will not be by default. sudo chmod 755 /etc/ppp/ip-up

The next time you bring up a PPTP tunnel, /etc/ppp/ip-up will run, adding those two routes to the OS X routing table. Don’t forget that you can validate that the script ran by looking at netstat -rn.

With the routes added to the routing table, OS X knows to send traffic for those networks across the tunnel.

This isn’t a perfect solution, as the script is a blunt hammer that doesn’t distinguish between tunnels. This particular script will add those routes to the OS X routing table, no matter what PPTP server you access. You’d need a smarter script to support multiple PPTP sites, which is beyond my scope here. Maybe in a future post.
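A smarter, multi-site version of the script could key off which VPN server the tunnel connected to. This is a hypothetical sketch of mine, not from the original post: pppd invokes ip-up with the arguments interface, tty, speed, local-IP, remote-IP, and ipparam, so "$5" is the remote end of the PPP link. All addresses below are illustrative placeholders; substitute your own.

```shell
#!/bin/sh
# Hypothetical multi-site /etc/ppp/ip-up sketch. pppd calls this script as:
#   ip-up <interface> <tty> <speed> <local-IP> <remote-IP> <ipparam>
# so "$5" identifies the remote tunnel endpoint and can select per-site routes.
routes_for_endpoint() {
  case "$1" in
    203.0.113.1) echo "10.10.0.0/16 10.20.0.0/16" ;;  # e.g. office VPN server
    203.0.113.2) echo "172.16.50.0/24" ;;             # e.g. lab VPN server
    *)           echo "" ;;                           # unknown server: add nothing
  esac
}

for net in $(routes_for_endpoint "$5"); do
  /sbin/route add -net "$net" -interface "$1"
done
```

Whether "$5" cleanly distinguishes your sites depends on how each VPN server assigns tunnel addresses, so verify with netstat -rn after connecting.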

by Ethan Banks at September 26, 2016 06:48 PM

Networking Now (Juniper Blog)

August 2016: A Look at Evasion Techniques

In this month's blog, we'll look at one example from a simple but evasive dropper family that we saw frequently in August, with a focus on techniques used to actively evade analysis and detection. This sample retains some of the author's debugging information, which helpfully labels the project as "ResourceDropper":



The executable code is obfuscated to deter static analysis, and the final payload is encrypted. The compiled code resolves many dependencies at runtime with GetProcAddress, which is often used to obscure the underlying functionality from static analysis.



The malware also checks for a number of indications that it is being analyzed in a sandbox or a debugger. It begins by checking directly whether a debugger is attached.



The sample then accesses the Process Environment Block to look for indirect indications of debugging or analysis, followed by specific checks for a number of popular anti-malware products (using obfuscated product names):



The malware exits quickly if any of its anti-analysis checks succeed. Otherwise, it drops and executes another malicious file, this one masquerading as a Java updater. The dropped/injected file copies itself and then deletes the original on launch (another strong indicator of maliciousness!), and it too checks for signs it is being analyzed. In its process memory, we find that it checks for a DLL associated with Sandboxie.


The malware goes on to look for Windows product IDs known to be associated with particular sandboxes or antimalware products.

Once past all of these precautions, the malware attempts to harvest user credentials (including FTP logins, as seen below) and opens a persistent backdoor. 


The evasion techniques seen here are not particularly sophisticated; we've seen malware check for all sorts of indicators that a human user is present (mouse movements, for example), check the system specifications to see if the machine looks genuine (is the hard drive big enough and are there enough processor cores?), or require specific command-line parameters to ensure it is not being analyzed separately from the script that should have downloaded it. The result is a cat-and-mouse routine between malware authors and the anti-malware industry as the malware authors attempt to delay detection for as long as possible to ensure maximum return on their efforts. In Sky ATP, we search for dozens of indicators that a sample is trying to identify an analysis system and evade detection, and we continue to add new features. In addition, our machine-learning engines are trained on a constantly updated set that includes the newest evasive malware, enabling us to detect evasive behavior as these techniques evolve.


Once again, thanks for reading! Check back soon for an upcoming post about return-oriented programming (ROP).

by AsherLangton at September 26, 2016 04:46 PM

 Blog (Ivan Pepelnjak)

How Many vMotion Events Can You Expect in a Data Center?

One of my friends sent me this question:

How many VM moves do you see in a medium and how many in a large data center environment per second and per minute? What would be a reasonable maximum?

Obviously the answer to the first part is it depends (please share your experience in the comments), so we’ll focus on the second one. It’s time for another Fermi estimate.

Read more ...

by Ivan Pepelnjak at September 26, 2016 06:55 AM

XKCD Comics

September 25, 2016

Ethan Banks on Technology

Managing Digital Racket

I read this article, long by today’s standards of fleeting attention. TL;DR: information bombardment left the author addicted, with negative effects on his life. And while he’s not done making changes in his life, he has broken the cycle.

I’ve faced challenges similar to his, and continue to hone my approach to managing digital racket. I know I’ve written about this before, but the art is evolving for me. Chronicling progress, however minor, is cathartic.

I mute nearly all notifications. This cuts down tremendously on mental intrusions, improving my focus and reducing FOMO. While you’d think turning off notifications would increase FOMO, you realize over time that you aren’t actually missing anything substantial. Once you believe this, the anxiety borne of FOMO fades away.

The only notifications I currently receive are as follows.

  1. Phone calls. I don’t get many, and most of them are directly related to my business.
  2. Direct messages from my immediate family.
  3. Direct messages from my three co-workers and a few close collaborators.

I have deleted most social media apps from my phone. I keep a few for the sake of convenience when abroad, but rarely access them. With notifications turned off, the temptation is practically nil. Twitter is my greatest temptation, and I therefore do not keep it on my phone at all except at conferences. Buffer allows me to queue tweets without having to interact with Twitter directly.

The most notable social media app that remains on my phone is Reddit. However, I don’t use Reddit for work, so it’s not a distraction during my working hours.

On my Mac, I use the multiple desktops feature. The main desktop is my working screen. Here, I have my Chrome browser, research documents, and terminal consoles. In one Chrome tab, I have my company’s Slack group, as it’s a critical part of my workflow along with Trello and Buffer. Wunderlist keeps me focused. Scrivener organizes my writing projects.

The secondary desktop contains social media and other distracting things. For instance, I have Safari running Tweetdeck and LinkedIn. I also have the Slack app with the myriad non-company groups I’m in running as a separate window.

To access the other desktop, I must deliberately perform a 4-finger swipe up, and then choose the other desktop with a point-and-click. I have disabled the “Swipe between full-screen apps” feature that allows for quick 4-finger swiping between desktops with my trackpad. This means that switching to the secondary desktop is a conscious choice that puts me in a different mindset. Am I willing to give into temptation and look at that other desktop? Or is it easier to actually stay in the zone and keep working? The swipe, point, and click gives me just enough time to avoid losing my productivity mojo.

Couldn’t I just, in a moment of weakness, open Tweetdeck on my primary, working desktop? Of course. But there’s something that chafes in my brain when I try it. After a couple of weeks of segregated desktops, looking at Twitter on the main desktop feels like an unwelcome intrusion.

I have regular screen moratoriums. Lately, this comes in the form of a weekly outdoor excursion. Assuming I’m not on a plane and weather permitting, I’m outdoors every Saturday, usually hiking a lot of miles in the mountains. I have a GPS watch I use as a tool. I have a phone with me for safety reasons. But for the last several weeks, I haven’t used my phone, even to take a picture. The phone stays in my pack.

While I can’t prove this, my feeling is that putting the screen away for the several hours I’m in the woods each week is important to my mental health. The complete screen disconnect somehow hits a reset button that allows me to function with a clearer brain the next week. Again, this is anecdotal. I can’t prove this yet. But I do know that for the last few weeks, thinking and producing has been easier for me.

by Ethan Banks at September 25, 2016 08:56 PM

September 23, 2016

 Blog (Ivan Pepelnjak)

Docker Networking: Introduction to Microservices and Containers

Dinesh Dutt started his excellent Docker Networking webinar with an introduction to the concepts of microservices and Linux containers. You won’t find any deep dives in this part of the webinar, but all you need to do to get the details you’re looking for is to fill in the registration form.

by Ivan Pepelnjak at September 23, 2016 07:23 AM


September 22, 2016 Blog (Ivan Pepelnjak)

Why Would I Use BGP and not OSPF between Servers and the Network?

While we were preparing for the Cumulus Networks’ Routing on Hosts webinar Dinesh Dutt sent me a message along these lines:

You categorically reject the use of OSPF, but we have a couple of customers using it quite happily. I'm sure you have good reasons and the reasons you list [in the presentation] are ones I agree with. OTOH, why not use totally stubby areas with the hosts being in such an area?

How about:

Read more ...

by Ivan Pepelnjak at September 22, 2016 11:59 AM

September 21, 2016

The Networking Nerd

Apple Watch Unlock, 802.11ac, and Time


One of the benefits of upgrading to MacOS 10.12 Sierra is the ability to unlock my Mac laptop with my Apple Watch. Yet I’m not able to do that. Why? Turns out, the answer involves some pretty cool tech.

Somebody’s Watching You

The tech specs list the 2013 MacBook and higher as the minimum model needed to enable Watch Unlock on your Mac. You also need a few other things, like Bluetooth enabled and a Watch running WatchOS 3. I checked my personal MacBook against the original specs and found everything in order. I installed Sierra and updated all my other devices and even enabled iCloud Two-Factor Authentication to be sure. Yet, when I checked the Security and Privacy section, I didn’t see the checkbox to enable Watch Unlock. What gives?

It turns out that Apple quietly modified the minimum specs during the Sierra beta period. Instead of early-2013 MacBooks being supported, support shifted to mid-2013 MacBooks. I checked the spec sheets, and mine is almost identical. The RAM, drive, and other features are the same. Why does Watch Unlock work on those Macs and not mine? The answer, it appears, is wireless.

Now AC The Light

The mid-2013 MacBook introduced Apple’s first 802.11ac wireless chipset. That was the major reason to upgrade over the earlier models. The Airport Extreme also supported 11ac starting in mid-2013 to increase speeds to more than 500Mbps transfer rates, or Wave 1 speeds.

While the majority of the communication that the Apple Watch uses with your phone and your MacBook is via Bluetooth, it’s not the only way it communicates. The Apple Watch has a built-in wireless radio as well. It’s a 2.4GHz b/g/n radio. Normally, the 11ac card on the MacBook can’t talk to the Watch directly because of the frequency mismatch. But the 11ac card in the 2013 MacBook enables a different protocol that is the basis for the unlocking feature.

802.11v has been used for a while as a fast roaming feature for mobile devices, though support for it was spotty before the wider adoption of 802.11ac Wave 1 access points. 802.11v allows client devices to exchange information about network topology. It also allows clients to measure network latency by timing the arrival of packets. That means a client can ping an access point or another client and get a precise timestamp of that packet’s arrival. This can be used for a variety of things, most commonly location services.

Time Is On Your Side

The 802.11v timestamp has been proposed for use in “time of flight” calculations as far back as 2008. Apple has decided to use Time of Flight as a security mechanism for the Watch Unlock feature. Rather than just assume that the Watch is in range because it’s communicating over Bluetooth, Apple wanted to increase the security of the Watch/Mac connection. When the Watch is detected within 3 meters of the Mac it’s connected to via Handoff, it’s in the right range to trigger an unlock. This is where the 11ac card works magic.

When the Watch sends a Bluetooth signal to trigger the unlock, the Mac sends an additional 802.11v request to the watch via wireless. This request is then timed for arrival. Since the Mac knows the watch has to be within 3 meters, the timestamp on the packet has a very tight tolerance for delay. If the delay is within the acceptable parameters, the Watch unlock request is approved and your Mac is unlocked. If there is more than the acceptable deviation, such as when used via a Bluetooth repeater or some other kind of nefarious mechanism, the unlock request will fail because the system realizes the Watch is outside the “safe” zone for unlocking the Mac.
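To put the 3-meter figure in perspective, here is a quick back-of-the-envelope sketch (my own, not Apple’s; the distance limit is from the post, the function names are mine) of the round-trip times involved:

```python
# Radio waves propagate at roughly the speed of light, so a 3-meter limit
# implies a round-trip propagation delay on the order of 20 nanoseconds.
C = 299_792_458  # speed of light, m/s

def max_rtt_ns(max_distance_m: float) -> float:
    """Round-trip propagation time, in nanoseconds, for a given range."""
    return 2 * max_distance_m / C * 1e9

def within_unlock_range(measured_rtt_ns: float, limit_m: float = 3.0) -> bool:
    # A relayed signal (e.g. via a Bluetooth repeater) adds delay, pushing
    # the measured RTT past the tolerance and causing the unlock to fail.
    return measured_rtt_ns <= max_rtt_ns(limit_m)

print(round(max_rtt_ns(3.0), 1))  # about 20.0 ns round trip for 3 m
```

Delays that small are far below what software timing can resolve, which is presumably why hardware support for the timestamp field in the Wi-Fi chipset matters, as the post goes on to explain.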

Why does the Mac require an 802.11ac card for 802.11v support? The simple answer is because the Broadcom BCM43xx card in the early 2013 MacBooks and before doesn’t support the 802.11v time stamp field (page 5). Without support for the timestamp field, the 802.11v Time of Flight packet won’t work. The newer Broadcom 802.11ac compliant BCM43xx card in the mid-2013 MacBooks does support the time stamp field, thus allowing the security measure to work.

Tom’s Take

All cool tech needs a minimum supported level. No one could have guessed 3-4 years ago that Apple would need support for 802.11v time stamp fields in their laptop Airport cards. So when they finally implemented it in mid-2013 with the 802.11ac refresh, they created a support boundary for a feature that was still in the early development stages. Am I disappointed that my Mac doesn’t support Watch Unlock? Yes. But I also understand why, now that I’ve done the research. Unforeseen consequences of adoption decisions really can reach far into the future. But the technology that Apple is building into their security platform is cool, whether or not it’s supported on my devices.

by networkingnerd at September 21, 2016 02:41 PM Blog (Ivan Pepelnjak)

This Is Why I’m Not Doing SD-WAN Webinars

One of my long-time regular readers sent me this question:

I was wondering if you have had any interest in putting together an SD-WAN overview/update similar to what you do with data center fabrics where you cover the different product offerings, differentiators, solution scorecard…

That would be a good idea. Unfortunately the SD-WAN vendors aren’t exactly helping.

Read more ...

by Ivan Pepelnjak at September 21, 2016 11:52 AM


September 20, 2016

Dyn Research (Was Renesys Blog)

BackConnect’s Suspicious BGP Hijacks


Earlier this month, security blogger Brian Krebs broke a story about an Israeli DDoS-for-hire service, vDOS, which had been hacked, revealing “tens of thousands of paying customers and their (DDoS) targets.”  Afterwards, Krebs noticed that vDOS itself was also a victim of a recent BGP hijack from a company called BackConnect, which claims to be the “world’s first and leading open source based DDoS and network security provider.”

Bryant Townsend, CEO of BackConnect, confirmed to Krebs that they had indeed conducted a BGP hijack on vDOS, but claimed that it was for “defensive purposes.”  In an email to the NANOG list, Townsend explained that in doing so they “were able to collect intelligence on the actors behind the botnet as well as identify the attack servers used by the booter service,” implying this was a one-time event.  Krebs then contacted Dyn for some assistance in researching what appeared to be a series of BGP hijacks conducted by BackConnect over the past year.  What emerges from this analysis is that the hijack against vDOS probably wasn’t the first time BackConnect used BGP hijacks in the course of its business.  And via the use of forged AS paths, BackConnect sometimes obscured their involvement in this activity. (Today’s blog post on BackConnect by Brian Krebs can be found here.)

Hijack of vDOS/Verdina

Let’s first take a look at BackConnect’s recent hijack of vDOS that brought this discussion to the fore.  According to our data, BackConnect (AS203959) began announcing (Verdina Ltd.) at 07:13:26 UTC on 7 September 2016.  Over half of our peers accepted a BGP route with BackConnect’s ASN as the origin until it stopped announcing the route at 7:49:15 UTC, less than 50 minutes later.  The propagation of this route is depicted in the visualization below, which shows the duration and reach of BackConnect’s route across the hundreds of BGP sources we employ for our analysis.
As stated by BackConnect’s new Twitter account, this action was performed for the purpose of intercepting Internet traffic destined for this address range.  Supposedly a botnet control node employed by vDOS was hosted in this IP space and by hijacking it, BackConnect could see who was connecting to it, or to put it in their CEO’s own words, “the actors behind the botnet.”

When viewing this BGP hijack in the context of our traceroutes, we can see the new path traffic took as a result of this action.  Prior to the hijack, traceroutes from our server hosted with Amazon Web Services (AWS) in Ashburn, VA would normally traverse Cogent from Washington DC and on to Sofia, Bulgaria where Verdina is located.  Below is one of those traceroutes the day prior to the hijack.

trace from AWS Ashburn, VA to at 08:19 Sep 06, 2016
1  *
2  *
3    RFC 6598 (carrier-grade NAT)                             0.589
4, Inc.                       Ashburn     US    1.421
5   Amazon Technologies Inc.               Ashburn     US    1.468
6    Amazon Technologies Inc.               Ashburn     US    1.491
7            Washington  US    2.009
8             Washington  US    2.554
9  Washington  US    1.878
10  Washington  US    3.104
11  New York    US    9.058
12  London      GB   77.922
13 Amsterdam   NL   83.648
14  Frankfurt   DE   93.298
15  Munich      DE  100.592
16  Vienna      AT  104.828
17  Sofia       BG  125.216
18 *

During the hijack, the traceroute instead went from Amazon to Comcast and on to Voxility (BackConnect’s transit provider of choice, covered by KrebsonSecurity in 2013) to Los Angeles.  Numerous traceroutes from around the world were redirected to Voxility in Los Angeles, as opposed to anything in Bulgaria during the hijack.

trace from AWS Ashburn, VA to at 07:41 Sep 07, 2016
1  *
2  *
3    RFC 6598 (carrier-grade NAT)                         0.478
4, Inc.                   Ashburn      US   0.668
5   Amazon Technologies Inc.           Ashburn      US   1.054
6  Amazon Technologies Inc.           Ashburn      US    0.75
7   Comcast Cable Communications, LL   Ashburn      US   1.436
8  Comcast Business Communications,   Ashburn      US    1.12
9           Ashburn      US   1.283
10           Los Angeles  US  62.358
11   Verdina Ltd.                                        62.633

As seen in Dyn’s Internet Intelligence, typical incoming transit for BackConnect has the following form, i.e., BackConnect typically has Voxility as the sole upstream provider for their prefixes.


In the remaining part of this blog post, we’ll take a look at some of the other interesting BGP routing activity involving BackConnect (AS203959) over the past year.

Forged AS Paths

On 20 February 2016, BackConnect hijacked a route ( belonging to a competing DDoS-mitigation provider, Staminus.  In March, Brian Krebs broke the news that Staminus had been hacked, revealing sensitive customer data.  This sequence of events has led some to believe that BackConnect may have been involved in the Staminus hack.  BackConnect CEO Bryant Townsend was formerly SVP of Business Development at Staminus.

Setting aside that discussion, here’s what the hijack looked like from a BGP perspective.  The prefix is a more-specific of, which is announced by Staminus (AS25761).
At 08:36:59 UTC, BackConnect began hijacking this address space using the following AS path:
... 3223 203959 53587 53587 53587 53587 134830 134830 134830 203959 203959

Then the AS path changed to the following with InAbate (AS134830) ostensibly as the origin (rightmost ASN):
... 3223 203959 32768 53587 53587 53587 53587 134830 134830 134830

Finally, BackConnect added AS25761 (Staminus) as the origin, taking the form:
... 3223 203959 1229 3257 25761

In every case, the routes passed through BackConnect’s ASN (AS203959) and onto another DDoS-mitigation provider, Voxility (AS3223).  In the third form of the AS path, we can be sure that the last three ASNs in the path were forged.  For one, GTT (AS3257) never had this route in its table at the time.  Also Staminus uses GTT and Telia (AS1299) for transit, so it appears that BackConnect attempted to make it look like the route had passed through these ASNs (AS1229 was likely a typo for Telia’s AS1299), or perhaps they were there to prevent those providers from carrying the route.  Regardless of BackConnect’s intentions, announcing a more-specific hijack with a forged AS path is itself a pretty suspicious act.
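As an aside on mechanics: a more-specific announcement diverts traffic even while the legitimate, shorter prefix is still being announced, because routers forward on the longest matching prefix. A minimal sketch of my own, with private-range prefixes and labels standing in for the real routes:

```python
import ipaddress

# Simplified routing table: a legitimate /20 plus a hijacked more-specific /24.
# Prefixes and origin labels are illustrative stand-ins, not the actual routes.
routes = [
    (ipaddress.ip_network("10.0.0.0/20"), "AS25761 (legitimate origin)"),
    (ipaddress.ip_network("10.0.1.0/24"), "AS203959 (hijacker)"),
]

def best_route(dst: ipaddress.IPv4Address) -> str:
    """Longest-prefix match: the most specific covering route wins."""
    matches = [(net, origin) for net, origin in routes if dst in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(best_route(ipaddress.ip_address("10.0.1.50")))  # the /24 hijack wins here
print(best_route(ipaddress.ip_address("10.0.5.50")))  # only the /20 covers this
```

This is why a hijacker announcing a more-specific needs no cooperation from the victim: every router that accepts the /24 prefers it automatically.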

On the following day, BackConnect announced, a more-specific hijack of originated by GHOSTnet GmbH (AS12586).  While OpenDNS’s automated BGPstream service spotted this one, it misidentified the actual hijacker.
During the orange spike in the above graph, the AS path initially had BackConnect as the origin:
… 3223 203959
Then at the very end, the AS path changed to the following.
… 3223 203959 1229 3257 25761

Again AS25761 belongs to Staminus and AS3257 (GTT) and AS1229 (same typo for Telia as above) are the upstreams of Staminus.  That portion of the AS path is clearly forged.  This hijack wasn’t conducted by Staminus, it was BackConnect (AS203959) posing as Staminus via a forged AS path.

BackConnect again used forged AS paths on 21 February 2016 when announcing (Tivoli Systems).  At the time, was not announced, although it is currently announced by Softlayer.  At 07:58:53 UTC, this route was announced with an origin of 4134 (China Telecom) but exclusively routed through BackConnect and Voxility.  Later it was changed to have an origin of Hurricane Electric (AS6939) and upstreams of AS42708 (Portlane) and AS36236 (Host Virtual), followed by China Telecom (again). It was again routed exclusively through BackConnect.

First AS path format:
... 3223 203959 4134
Second AS path format:
... 3223 203959 4134 42708 36236 6939
We have several peering sessions with Hurricane Electric (HE) and can report that this route was not in their routing table at this time, let alone with HE as the origin.  We also have peering in China that doesn’t support China Telecom as the origin either.  If China Telecom were really carrying this route, it would have been seen by other upstreams of China Telecom and not just BackConnect.  These facts support the conclusion that this AS path was also forged by BackConnect.
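The cross-checking described above can be caricatured in a few lines: given adjacencies known to exist from peering observations, flag adjacent AS pairs in a path that match no known link. This is a toy heuristic of mine with stand-in data, not Dyn’s actual method:

```python
# Known AS adjacencies (stand-in data; real analysis would draw on
# large-scale peering and routing-table observations).
known_links = {
    (3223, 203959),  # Voxility <-> BackConnect, observed transit
    (3257, 25761),   # GTT <-> Staminus
    (1299, 25761),   # Telia <-> Staminus
}

def suspicious_hops(as_path):
    """Return adjacent AS pairs in the path that match no known link."""
    pairs = zip(as_path, as_path[1:])
    return [(a, b) for a, b in pairs
            if a != b and (a, b) not in known_links and (b, a) not in known_links]

# The forged path from the article, with Telia's ASN mistyped as 1229:
print(suspicious_hops([3223, 203959, 1229, 3257, 25761]))
# Both hops around AS1229 match no known adjacency, so the tail looks forged.
```

Repeated ASNs (prepending) are ignored via the a != b guard; a path whose every adjacency is corroborated elsewhere is far harder to dismiss as forged.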

What was BackConnect’s purpose in hiding the origin of a route to unused address space?  As stated by Bryant Townsend in a post to the NANOG mailing list explaining their hijack of vDOS, “No, we do not plan to ever intentionally perform a non-authorized BGP hijack in the future”, implying this was a one-time event rather than a pattern of behavior.

On 16 April 2016, BackConnect began transiting a new route that was a hijack of routed address space.  The route ( was a more-specific hijack of originated by GHOSTnet GmbH (AS12586).  Was this another case of BackConnect (AS203959) forging the origins to obfuscate their involvement in this hijack?

The AS path first took the form
... 3223 203959 27176
and was then changed to
... 3223 203959 29073

So the origin appeared to be DataWagon (AS27176) and then Ecatel (AS29073), but the paths always traversed BackConnect (AS203959), and, of course, Voxility.
Looking at FQDNs resolving to this space, we observed the following very odd ones over the past year:

Other BackConnect hijacks of note

Soon after BackConnect’s ASN (AS203959) appeared in the routing table last fall, it conducted its first BGP hijack.  For about 20 minutes, it announced, a more-specific of announced by Falco Networks (AS31251).
One might wonder if this incident could be chalked up to a typical BGP-based DDoS mitigation, such as Akamai’s Prolexic service.  The short duration of the hijack and the lack of global propagation makes this less likely.  If a sizable portion of the Internet’s routers do not carry a route to a DDoS mitigation provider, then the attack traffic won’t end up at the provider’s traffic scrubbing centers, thus limiting the efficacy of any mitigation.  In addition, DDoS attacks often last more than 20 minutes.  Alternatively, short-lived announcements could be more effective with reconnaissance than with mitigation, as we reported back in 2013, when man-in-the-middle traffic redirection was first observed.   The routing is clear; the motivations of the actors are not.

On 4 December 2015, BackConnect hijacked address space belonging to Russian DDoS-mitigation provider DDoS-Guard.
A couple of weeks later, BackConnect began routing one of its own prefixes ( via DDoS-Guard.  Below is a visualization of the upstreams of BackConnect (AS203959) for this route.
On 7 January 2016, BackConnect hijacked (Sauce Labs) for about 5 minutes.  Since the normal origin (AS62537) didn’t stop announcing the route, there was no apparent coordination between the two parties, suggesting that this was an uninvited action.
On 19 February 2016, BackConnect briefly announced eight routes that were all more-specific hijacks of, a prefix normally announced by GHOSTnet (AS12586).

Many suspicious FQDNs were seen resolving to this address space in the past year, including …

On 17 April, BackConnect announced which was a more-specific of announced by Ecatel (AS29073).
Many suspicious FQDNs were seen resolving to this address space in the past year, including …

BackConnect’s Routing Leak

BGP routing leaks (especially peering leaks) occur in some form nearly every day.  But for a company that has confirmed that it has manipulated BGP routes in order to intercept traffic, what may otherwise be viewed as simply an innocent BGP leak involving BackConnect might now be viewed in a different light.  At 09:08:28 UTC on 28 May 2016, BackConnect leaked over 13,000 BGP routes from various peers to its transit provider Voxility. Below are visualizations depicting the duration and the degree of route propagation by BackConnect as an upstream for four of these routes.

As part of our continuous global Internet latency and path monitoring, many hundreds of traceroutes were redirected through Voxility and presumably BackConnect on their way to their various destinations.  Below is an example of a server in London with Level 3 transit, tracing out the path it took to Opal Telecom, also in London, the day before the leak.

trace from London to at 12:01 May 27, 2016
1  *
2    London  GB  0.315
3  *
4  *
5      London  GB  0.595
6 London  GB  0.673
7           London  GB  0.939
8           London  GB  5.222
9           London  GB  2.001
10           London  GB  1.856
11                           London  GB 25.141

Then during the leak, a traceroute with the same source and destination was redirected to Voxility PoP in Miami, Florida before being passed back to London to reach Opal Telecom.

trace from London to at 09:11 May 28, 2016
1  *
2 London     GB    0.303
3  London     GB    0.388
4                                          London     GB    0.652
5       Miami      US   98.385
6       Miami      US   98.173
7      Miami      US  104.161
8              Miami      US  106.383
9              Washington US  137.884
10 *
11 *
12        London     GB  142.411
13        London     GB  143.289
14        London     GB  143.391
15                        London     GB  166.841

It is safe to assume that a lot of Internet traffic passed through Voxility en route to BackConnect on 28 May 2016.
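The cost of the detour shows up directly in the final-hop round-trip times of the two traceroutes above. A quick comparison (RTT values in milliseconds, taken from the traces):

```python
# Final-hop RTTs from the traceroutes above (ms).
normal_rtt = 25.141    # London -> London, 27 May (day before the leak)
leaked_rtt = 166.841   # London -> Miami -> London, 28 May (during the leak)

# The Miami round trip added roughly 140 ms of latency to an
# otherwise intra-London path.
inflation_ms = round(leaked_rtt - normal_rtt, 3)
```

A sustained ~6x latency increase on an intra-city path is exactly the kind of signal that makes redirected traffic detectable in continuous path monitoring.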


What we can conclude from all of this is that BackConnect has a history of hijacking BGP routes and, while this is not uncommon for DDoS mitigation services, their hijacking pattern is unusual to say the least.  With limited propagation of routes, very short duration hijacks, deliberate attempts at obfuscation, and apparent lack of coordination with the impacted parties, these traffic interceptions seem completely unlike typical services between consenting parties.

As readers of this blog will certainly know, the DDoS threat is growing ever more “frequent, persistent, and complex.”  The more we can learn about the actors and their methods of operation, the better we can defend ourselves and the health and operation of the global Internet.  Regardless of BackConnect’s intentions, the larger question for the community is to what lengths are we willing to go in that struggle and under whose authority?



Appendix:  Additional BackConnect BGP routing activity

Author’s note: For the sake of completeness, we have added additional descriptions of routing activity involving BackConnect below.

3 December 2015:

In December, BackConnect announced, a more-specific hijack of, which is announced by AS48884 (VolumeDrive).  This hijack lasted a couple of days and eventually achieved global propagation.  Such a pattern is more consistent with BGP-based DDoS mitigation.
7 January 2016: &

On 7 January 2016, BackConnect briefly appeared upstream of Host4Geeks LLC (AS393960) for two of its routes.  If this was DDoS mitigation, it wouldn’t have been very effective considering its brevity, limited route propagation, and lack of coordination between the different parties announcing the routes.

March – July, 2016:

In the past year, has been the location of a hacker forum.  It was initially originated by DataWagon (AS27176) on 15 November 2015.  Soon after, it would appear that the origin had changed, when in reality DataWagon had simply falsified part of the AS path to make it appear as though the route was coming from elsewhere.

The new AS path was the following:

... 174 27176 6939 3257 4134 8655

If this path were to be believed, it would suggest that AS8655 (an unused ASN) was the origin, which supposedly sent this route exclusively to China Telecom (AS4134), who then sent it exclusively to GTT (AS3257), Hurricane Electric (AS6939) and on to DataWagon (AS27176).  This route would cycle through other origins with falsified AS paths over the following months until it sputtered out in July.
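Origin inference from an AS path is simply a matter of reading the rightmost ASN, which is exactly what makes a falsified path effective: appending an unused ASN such as AS8655 makes the route appear to originate elsewhere. A sketch of that inference:

```python
def origin_asn(as_path: str) -> int:
    """Return the apparent origin: the rightmost ASN in a space-separated AS path."""
    return int(as_path.split()[-1])

# The falsified path from the text: AS8655 looks like the origin,
# even though DataWagon (AS27176) actually injected the route.
falsified_path = "174 27176 6939 3257 4134 8655"
apparent_origin = origin_asn(falsified_path)  # 8655
```

Because BGP accepts the path as announced, any ASN the injector appends becomes the "origin" as far as passive observers are concerned; only out-of-band knowledge (such as AS8655 being unused) exposes the forgery.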

In March, DataWagon had returned to being the origin of and continued to do so until 21 July 2016 when it was withdrawn.  However, the route reappeared the following day, again originated by DataWagon (AS27176) (pictured below left), but this time with BackConnect as its exclusive upstream instead of DDoS mitigation provider ProTraf (AS63990) (pictured below right).

24 March 2016:

While the following hijack didn’t involve BackConnect, it did involve the DDoS mitigation provider (Voxility) exclusively used by BackConnect in a hijack of Staminus ( occurring on 24 March 2016.  The private ASN (AS65421) seen in many AS paths appeared just after Voxility’s ASN.  Since Staminus didn’t stop announcing this route during this incident, this doesn’t appear to be consistent with consensual DDoS mitigation.
23 April 2016:

OpenDNS’s BGPstream claimed a BackConnect hijack of on 23 April 2016.  Technically speaking, this was not a hijack as the victim’s prefix ( had ceased being routed at about 11:00 UTC the previous day (22 April). Having said that, appears to have been announced by IP squatter Bitcanal (AS197426) using a phony origin ASN, AS55830 (Indonesia Digital Media).  Suspicious routing from Bitcanal has been mentioned on multiple occasions in this blog.

20 May 2016:

On 20 May 2016, BackConnect announced which was a more-specific hijack of, announced by AS26615 (Tim Celular of Brazil).  BackConnect explained to Brian Krebs that this hijack was simply a typo as one of their routes differs by a single digit (

The post BackConnect’s Suspicious BGP Hijacks appeared first on Dyn Research.

by Doug Madory at September 20, 2016 04:18 PM

Ethan Banks on Technology

In Chicago on October 26? Come think about SD-WAN with me.

On October 26, 2016 at 5:30p, I’m speaking to a couple of Chicago-based MeetUp groups banding together to hear me discuss implementing SD-WAN. Sign up here. Or here.

The talk will be held at Cisco Systems Building – SkylineATS, 9501 Technology Blvd. 3rd Floor, Rosemont, IL.

This SD-WAN discussion is aimed at network engineers and other technologists who need to understand and recommend technology solutions for their organizations, as well as those who need to make the silly things vendors sell us actually work.

My goal is to make sure you’ve got plenty to think about as you explore SD-WAN. The talk will take away some of the, “You don’t know what you don’t know.”

I’ll cover the following.

  • An overview of what SD-WAN really is.
  • Integrating WAN optimization and SD-WAN.
  • Managing existing private WAN contracts.
  • Managing your own internal SLAs.
  • Relating SD-WAN to XaaS you might be using.
  • Considerations for multi-tenant environments.
  • Handling deep packet inspection requirements.
  • Leveraging TDM and other non-Ethernet circuits.
  • Bandwidth scaling.
  • WAN circuit design recommendations.
  • Integration with your existing routing domain.
  • A list of SD-WAN vendors & their products.

I hope to see you there.

by Ethan Banks at September 20, 2016 01:59 PM

ipSpace.net Blog (Ivan Pepelnjak)

Juniper Is Serious about OpenConfig and IETF YANG Data Models

When people started talking about OpenConfig YANG data models, my first thought (being a grumpy old XML/XSLT developer) was “that should be really easy to implement for someone with XML-based software and built-in XSLT support” (read: Junos with SLAX).

Here’s what my simplistic implementation would look like:

Read more ...

by Ivan Pepelnjak ( at September 20, 2016 10:36 AM

September 19, 2016

Ethan Banks on Technology

Presenting Technical Topics To Technical People

Fred writes, “I’ve got a conference coming up in December that I’ve been invited to speak at. This is something I’ve wanted to do for some time. However, having never done it, I’m looking for some tips on how to get started.”

Q: What’s the best way to find a topic that is new enough to be interesting, but relevant enough to be useful?

People go to conferences hoping, among other things, to gather information that they didn’t have before. What that is will vary by audience member. Designers, architects, and C-levels who are trying to stay ahead of the curve will want to know about the future — what tech is coming and the likely impact to their business and operations. Engineers and operators — the people down in the blood and guts of IT — will be more interested in hard skills.

By “hard,” I don’t mean difficult. I mean useful tools and techniques that they can bring back to their job with them and put to use.

  • When addressing an engineering audience, the most engaging talks will be technical ones that go into specifics. The catch here is that most talks are in the 30 to 60 minute range. Therefore, the speaker must balance technical specifics with getting through a useful amount of material. If that balance can be struck, there’s a good talk to be delivered.
  • Hardcore techies also like skills that can keep them ahead in their career. Skills related to techniques or products that are growing in demand will garner a lot of attention. For instance, networkers have been excited about programmatic network automation over the last couple of years.

Everyone likes topics that will bring value to their business. For instance, a talk that compares both the soft and hard costs of running a private vs. public vs. hybrid cloud will be a thought-provoking chat. Why? Quantifying such things is difficult, and a talk that breaks down costs of such complex architectures often puts the audience in a situation of, “I would not have considered that on my own.”

Understand the difference between media buzzwords and real-world usefulness. Buzzwords take on lives of their own in media. All of a sudden, everyone is talking about devops, serverless, microservices, and containers. Yes, those terms have a real meaning and are useful to certain organizations. But are they useful to your audience? Or just a trendy curiosity? Don’t chase hype in the hopes of having a well-attended session. Place delivery of value above all else.

Q: How do I prepare? I’m a horrible procrastinator.

Procrastination is the enemy of an effective presentation. The day of delivery is not the deadline. Rather, you need time to prepare your slides, learn your talk, edit the talk, and perfect your delivery. Time is not on your side. Therefore, start now. Only if you realize what’s truly ahead of you will you find the motivation to get started.

This doesn’t mean you’ll have a perfect presentation a few weeks before you head to the podium. If you are the fretful type, you might end up tweaking your deck until moments before you speak. But getting going means that you have a solid starting point. The plane ride should be a time for relaxation, managing the general stress of travel, and locating the nearest Auntie Anne’s or Jersey Mike’s during connections — not stressing out about slides.

Practically speaking, block out a few hours on your calendar. Sixty minutes here. Ninety minutes there. During those times, remain distraction free. Crank through version 1.0 of your presentation as quickly as possible. Don’t stop. Deep work. Get it all out there, even if it sucks. Version 1.0 might be a turd, but it’s the hardest one to push out. Once you’ve got it in front of you, you can get to work polishing.

Q: What are strategies that work well for presentation preparation and delivery?


First, get over imposter syndrome. While there’s no need to be an egomaniac, recognize that you were asked to speak for a reason. Stop with the “I wonder if they’ll like me” inner monologue and get on with it.

Now, onto the content itself.

1. Don’t boil the ocean. You will be tempted as a technical person to explain and justify everything. You can’t. You don’t have time. You must assume a certain baseline of knowledge for your audience.

2. Deliver the right content to the right audience in the right way. When proposing your talk, there was a working title and an abstract — a summary of what your talk will cover. Keep that in mind. Your presentation is an implied promise to deliver certain information. So deliver.

When deciding how to deliver your information, one approach is to think of it like a story. Your presentation has a beginning, middle and end. This perspective will help you with flow.

If your presentation is meant to be persuasive, then it has a main point — a thesis you want your audience to remember when they leave. All points must support that main thesis point, or they belong to another talk. Don’t assume technical talks are not persuasive. Tech talks very often are persuasive, or could be structured in such a way.

Finally, know your audience. Nerds have different buttons to push than C-levels. Structure your content to meet your audience where they are at, and then take them a little higher.

3. Do not start your presentation prep by opening PowerPoint or Keynote. Instead, write out your main points, text, or notes first using an editor of your choosing.

Your slides are not your talk. Rather, slides should have a minimum of information that act merely as a reference point or visual aid for the audience. If your presentation has detailed information, refer people to a URL where they can download a comprehensive companion document.

Remember — text walls suck. Your audience can read your slides or listen to you talk, but they can’t do both. Credit to Slide:ology.

Slides must be necessary. Diagrams must be necessary. Or skip them. You don’t need a lot of them. Most of the world’s public talks were given before screen projection and slides. YOU are the object of your live audience’s attention.

4. Give your talk and time yourself. You must know if you’re too fast or slow, have enough material or too much. Know which slides you can skip if you run short of time. If you’re an experienced speaker and know your own cadence well, you might be able to get away without this. Otherwise, plan on a couple of dry runs.

5. Know your equipment, both hardware and software. You should know how to deal with secondary monitors, and you should know exactly how your presentation software works in a dual-monitor setup.

For example, PowerPoint has a Presenter display + audience display that works with dual outputs. You’ll see a Presenter display on your screen with a timer, your notes, the current slide, and the upcoming slide. The projector screen viewed by the audience will have the actual slides.

6. Include the extras. If you send your slides to a handler who will stage them for you, make sure you include special fonts or other supporting templates, etc. Fonts matter greatly to the overall look and feel of your presentation. Some templates rely on specific fonts to render icons that will render as generic squares or odd characters if the font is missing. A missing font can result in a deck that’s ugly at best and unreadable at worst.

Alternatively, you might export your presentation to PDF or JPEG to ensure that your deck appears exactly how you intended. I have had handlers build decks on their own platform for me using the PDFs or JPEGs I sent to them. In a pinch, it can be done. Just ask.

7. Check out the venue before it’s your time to speak. Talk to the A/V staff ahead of time if you can. You want to know the stage, the screen or screens, and the size of the room. You should also sort out how to hook up your laptop and prove that it works with your connectors and setup. You want to know how you’ll be mic’ed. That could be simply you standing in front of a podium with an attached mic, or via a wireless lavalier mic. 

Be prepared to interface your laptop with anything. VGA, DVI, and HDMI are all common. If you want to use your own laptop, then it’s on you to be able to interface with whatever is at the venue. Have those cables and adapters ready, just in case.

Practice mic technique if you’re not used to being amplified. Hearing your own voice booming over the house sound system can be a little strange at first. If you can work with the mic and get comfortable with how you sound before you start speaking, that can take away some anxiety.

Realize that an empty room will sound loud and boomy compared to a room with fifty or a hundred people in it. From an acoustic standpoint, people are sound-absorbing meatbags. The more bodies in the room, the higher the contrast will be between your empty room practice and live presentation delivery.


1. Do not use “slide builds.” These are slides that use animations or transitions, and build over time as you click. These building features are rarely helpful to the audience, more often serving as distractions. Stick with static slides.

This is also helpful for exports of your deck. By eschewing slide builds, the live audience gets the same product that someone watching your presentation on or other slide archival site will get.

2. Wear something that makes you feel confident. Attire that makes you look your most attractive builds confidence in front of others. But before you pick your favorite Marvel t-shirt…

3. Wear something appropriate. Your clothes need to fit, and should match or exceed the “dressiness” of your average audience member. You are sending a message with your appearance. You might also be live streamed or archived on YouTube in HD. 1080p HD leaves nowhere to hide. So, try to care a little bit.

Most of you reading this will not have the level of notoriety that will give you a pass on your personal appearance. While I might listen to Steve Wozniak deliver a talk in his very finest underpants, there’s no chance I’ll listen to you in yours.

If you’d like more specificity, then I recommend the following.

  • For a west coast / SanFran / Silicon Valley crowd, dark wash jeans paired with a collared shirt works fine. But you can get away with just about any level of nerdy eccentricity that strikes you. I’ve seen multi-colored hair, tattoos, nerdy t-shirts, sockless, shoeless, and barefoot presenters.
  • For an east coast / NYC crowd, consider going upscale. A two piece suit without a tie would not be overkill. Young east coasters are dressing up these days, particularly those working in finance.
  • Las Vegas conferences are a melting pot. I’d go with your west coast vibe. Being sober with most of your body covered is likely to be adequate in this context. It sounds like a low bar to set, but I have sat through sessions where the presenter clearly believed in better presentations through chemicals.
  • Consider that lav mics clip to button-up shirts more easily than t-shirts.

You should also consider vendor logos. Wearing vendor-branded attire could be an implied endorsement. The same concern follows for laptop stickers if that laptop will be visible to your audience or to cameras. Sure, you might love Juniper. But do you want to be that person wearing a Junos hat while delivering a vendor-neutral presentation on layer three campus network design? Or wearing your employer’s shirt when you’re not representing your employer while giving your talk? Maybe you do. Maybe you don’t. It’s worth thinking about.

4. Be yourself. For instance, don’t try to be a comedian if you’re not one — very few are. Lame jokes fall flat and can make people feel awkward. Don’t get me wrong. Humor is fine! Be sarcastic, poke fun — those are good things. But don’t use your presentation as a chance to channel your inner stand-up comedian.

If you’ve never studied how stand-up comedians perform their craft, it’s with a lot of trial and error, as well as practice of fledgling material in front of live audiences. Unless you give this one talk so much that you practice delivering comedic lines to get your wording and timing just right, most punchlines are better left to the pros.

5. You might get introduced. You might not. You might be asked to deliver a “house” message. You might not. Just roll with it. Be a pro. Don’t let the little things throw you.

6. Choose whether to have Q&A during or after your presentation. It’s trendy to set up your talk as if you’re about to start a dialogue. “Let’s keep this interactive,” I’ve heard several presenters say as they open a session. I grasp, and even applaud, the spirit of that, but accepting questions during your presentation is a little bit dangerous. You must keep control of the room, or you’ll never get through your talk.

On the other hand, holding all questions until after you’re done can be dangerous. If you are a talker, you might go right to the end to get through your material. That leaves no time for Q&A in conference settings where folks have to scramble to get to the next thing on their schedules.

7. Repeat audience questions. If someone is asking questions and they are not mic’ed, you need to re-state the question for the audience before answering. This keeps the room together, which is absolutely critical especially as the session wears on. People are easily distracted by their screens, so you need to keep attention focused by making sure everyone knows exactly what question is being answered.

8. Be ready for the afterglow. After the talk, the microphone will turn off, and most folks will disperse. But a few people will want to chat with you. Be ready for this in several different ways.

Anticipate weird questions. Some questions might have had something to do with your talk, but maybe not. Don’t feel like you have to fake an answer right then and there. You don’t. Humbly offer your best opinion if you have one, but don’t be upset if you don’t. Just tell the person honestly that you’ve not been in their situation before.

Remember, you’re not there to give away free consulting. You want to be polite and helpful in the way that all non-sociopaths do, but you have no specific obligation to solve their problem. Even so, if the question is interesting and you’re available, you might be able to engage them as a consultant after the event. Which reminds me…

Have business cards handy. A few folks might want to follow up with you after the event. The easiest way to facilitate this is with a business card. You can get a box of more than you’re likely to ever need for $10 or so. Hand them the card, and they can get on their way to their next event while still being able to get a hold of you later on.

Be ready to say, “Thank you.” Some folks might just want to express their gratitude for your talk. Smile, nod, and thank them. If it gets weird after that, ask them where they work or what they do to defuse the awkwardness.

by Ethan Banks at September 19, 2016 07:17 PM

ipSpace.net Blog (Ivan Pepelnjak)

The Cost of Networking Has Not Declined

One of the common taglines parroted by SDN aficionados goes along the lines of “The cost to acquire and manage server and storage architectures has declined over time while networking stays stubbornly expensive.” (I took it straight from an anonymous blog comment).

Let’s see how well it matches reality.

Read more ...

by Ivan Pepelnjak ( at September 19, 2016 05:03 PM

Potaroo blog

DDOS Attackers - Who and Why?

Bruce Schneier's recent blog post, “Someone is Learning How to Take Down the Internet,” reported that the incidence of DDOS attacks is on the rise. The obvious question I have when reading these reports is "Who is behind these attacks, and why are they doing it?"

September 19, 2016 03:00 AM

XKCD Comics

September 17, 2016

ipSpace.net Blog (Ivan Pepelnjak)

Getting Started in the Mobile World

Got this challenge from one of my readers:

I've recently changed jobs and I am currently working for a telco. The problem is that I have no idea of what they are talking about when they mention SGSN, GGSN, Gi, Gn, etc... I only know routing and switching stuff :(.

Obviously he tried to search for information and failed.

Read more ...

by Ivan Pepelnjak ( at September 17, 2016 04:26 PM

September 16, 2016

The Networking Nerd

DevOps and the Infrastructure Dumpster Fire


We had a rousing discussion about DevOps at Cloud Field Day this week. The delegates talked about how DevOps was totally a thing and it was the way to go. Being the infrastructure guy, I had to take a bit of umbrage at their conclusions and go on a bit of a crusade myself to defend infrastructure from the predations of developers.

Stable, Boy

DevOps folks want to talk about continuous integration and continuous deployment (CI/CD) all the time. They want the freedom to make changes as needed to increase bandwidth, provision ports, and rearrange things to fit development timelines and such. It’s great that they have their thoughts and feelings about how responsive the network should be to their whims, but the truth of infrastructure today is that it’s on the verge of collapse every day of the week.

Networking is often a “best effort” type of configuration. We monkey around with something until it works, then roll it into production and hope it holds. As we keep building more patches on top of patches, or try to implement new features that require something to be disabled or bypassed, we create a house of cards that is only as strong as the first stiff wind. It’s far too easy to cause a network to fall over because of a change in a routing table, or a series of bad decisions that aren’t enough to cause chaos unless done together.

Jason Nash (@TheJasonNash) said that DevOps is great because it means communication. Developers are no longer throwing things over the wall for Operations to deal with. The problem is that the boulder they were historically throwing over in the form of monolithic patches that caused downtime was replaced by the storm of arrows blotting out the sun. Each individual change isn’t enough to cause disaster, but three hundred of them together can cause massive issues.


Networks are rarely stable. Yes, routing tables are mostly stable so long as no one starts withdrawing routes. Layer 2 networks are stable only up to a certain size. The more complexity you pile on networks, the more fragile they become. The network really is only one fat-fingered VLAN definition or VTP server mode foul-up away from coming down around our ears. That’s not a system that can support massive automation and orchestration. Why?

The Utility of Stupid Networking

The network is a source of pain not because of finicky hardware, but because of applications and their developers. When software is written, we have to make it work. If that means reconfiguring the network to suit the application, so be it. Networking pros have been dealing with crap like this for decades. Want proof?

  1. Applications can’t talk to multiple gateways at a time on layer 2 networks. So let’s create a protocol to make two gateways operate as one, with a fake MAC address to answer requests and ensure uptime. That’s how we got HSRP.
  2. Applications can’t survive having the IP address of the server changed. Instead of using so many other good ideas, we create vMotion to allow us to keep a server on the same layer 2 network and change the MAC <-> IP binding. vMotion, and the layer 2 DCI issues it has caused, have kept networking in the dark for the last eight years.
  3. Applications don’t need to be rewritten to work in the cloud. People want to port them as-is to save money. So cloud networking configurations are a nightmare, because we have to support protocols that shouldn’t even be used, for the sake of legacy application support.

This list could go on, but all these examples point to one truth: application developers have relied upon the network to solve their problems for years. So the network is unstable because it’s being extended beyond its use case. Newer applications, like Netflix and Facebook, thrive in the cloud because they were written from the ground up to avoid using layer 2 DCI, or even operating at layer 2 beyond the minimum amount necessary. They solve tricky problems like multi-host coordination and failover in the app, instead of relying on protocols from the golden age of networking to fix them quietly behind the scenes.

The network needs to evolve past being a science project for protocols that aim to fix stupid application programming decisions. Instead, the network needs to evolve with an eye toward stability and reduced functionality to get there. Take away the ability to even try those stupid tricks, and what you’re left with is a utility that is a resource for your developers. They can use it for transport without worrying about it crashing every day because of a bug in a protocol no one has used in five years, but that was still installed just in case someone turned on an old server accidentally.

Nowhere is this more apparent than cloud networking stacks like AWS or Microsoft Azure. There, the networking is as simplistic as possible. The burden for advanced functionality per group of users isn’t pushed into a realm where network admins need to risk outages to fix a busted application. Instead, the app developers can use the networking resources in a basic way to encourage them to think about failover and resiliency in a new way. It’s a brave new world!

Tom’s Take

I’ll admit that DevOps has potential. It gets the teams talking and helps analyze build processes and create more agile applications. But in order for DevOps to work the way it should, it’s going to need a stable platform to launch from. That means networking has to get its act together and remove the unnecessary things that can cause bad interactions. This was caused in part by application developers taking the easy road and pushing their problems onto the networking team of wizards. When those wizards push back and offer reduced capabilities in exchange for more uptime and fewer issues, you should start to see app developers coming around to work with the infrastructure teams to get things done. And that is the best way to avoid an embarrassing situation that involves fire.

by networkingnerd at September 16, 2016 11:35 PM


September 15, 2016

ipSpace.net Blog (Ivan Pepelnjak)

Whitebox Switching at LinkedIn with Russ White on Software Gone Wild

When LinkedIn announced their Project Falco I knew exactly what one of my future Software Gone Wild podcasts would be: a chat with Russ White (Mr. CCDE, now network architect @ LinkedIn).

It took us a long while (and then the summer break intervened) but I finally got it published: Episode 62 is waiting for you.

by Ivan Pepelnjak ( at September 15, 2016 06:09 AM


Here are the outlines of an interesting ExpertExpress discussion:

  • A global organization wanted to connect data centers across the globe with a new transport backbone.
  • All the traffic has to be encrypted.

Should they buy L2VPN and use MACsec on it or L3VPN and use GETVPN on it (considering they already have large DMVPN deployments in each region)?

Read more ...

by Ivan Pepelnjak ( at September 15, 2016 06:08 AM


September 14, 2016

Ethan Banks on Technology

Slack. Less Bad Than The Rest.

A topic I complain about with some regularity is my inability to keep up with incoming messages. I’m too busy creating something for someone else to consume to bother trying to keep up. That’s the way of things. If I successfully keep up with all the input, I never achieve useful output.

In this world of message misery, Slack is my friend. I find that Slack is better at managing input than most other forms of communication.

As Slack groups form (I’m in 8 now), it allows me to interact with people in a private or semi-private manner in a way that’s less intrusive than Google Hangouts or an iMessage chat room.

Slack groups are far better for me than e-mail. I have a passionate dislike for e-mail, although I’ve gotten better at managing it with process and tools. E-mail remains useful to me because it’s the lowest common denominator of communications. If nothing else works, then I can probably send the person an e-mail.

At the moment, Slack is the “least worst” way to manage communication for me.

  • I can mute as well as tune notifications. I often mute entire channels that do not require real-time interaction, set do-not-disturb times, and tailor mobile notifications differently from desktop ones. I find real-time notifications disruptive, so I tend to shut them all off, with a few exceptions for co-workers who likely need my attention immediately.
  • I can organize the messages. This is a function of how Slack works. There is a natural hierarchy of groups, public and private channels, private group chats, and one-to-one chats.
  • I can search the messages. Message search is absolutely critical for any message database that contains action items, and Slack’s has never failed me. Inbox search is also great in web-native Gmail, which I never use. Airmail, my current favorite IMAP client, does search reasonably well, but message search falls short in all the other IMAP clients I’ve tried.
  • I can set reminders. This simple feature is a valuable aid to not forget an action item.
  • I can integrate with other apps. Slack has an API, and there is a good bit of integration with other tools that makes Slack my one-stop shop for keeping up with what’s going on in my company. For instance, Trello activity can be reflected in a Slack channel.
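That last point is easy to try yourself. Here is a minimal sketch of pushing a message into a channel via a Slack incoming webhook (the webhook URL, channel name, and helper functions are placeholders of mine, not anything from this post):

```python
import json
import urllib.request

# Placeholder URL -- replace with your own Slack incoming webhook.
WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXXXXXX"

def build_payload(channel: str, text: str) -> bytes:
    """Build the JSON body Slack incoming webhooks expect."""
    return json.dumps({"channel": channel, "text": text}).encode("utf-8")

def post_to_slack(url: str, payload: bytes) -> int:
    """POST the payload to the webhook and return the HTTP status code."""
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status

payload = build_payload("#builds", "Trello card moved to Done")
# post_to_slack(WEBHOOK_URL, payload)  # uncomment with a real webhook URL
```

Wire something like that into a CI job or a Trello automation and the channel really does become the one place to watch.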

Therefore, Slack becomes chat with the benefit of e-mail search, and without the cryptic clumsiness of IRC. Since I deal with a company team as well as peers spread all over the world, Slack fits. IMO, it’s the best way to deal with a bad problem.

by Ethan Banks at September 14, 2016 10:05 PM

Blog (Ivan Pepelnjak)

Source Code Is Not Standards

One of the oft-repeated messages of the Software-Defined Pundits is “Standard bodies are broken, (open) source code is king”… and I’d guess that anyone who was too idealistic before being exposed to how the sausage is being made within IETF has no problems agreeing with them. However…

Read more ...

by Ivan Pepelnjak ( at September 14, 2016 05:55 AM

September 13, 2016

Networking Now (Juniper Blog)

Juniper Networks Continues to Deliver NIST FIPS 140-2 Certifications

Five More.

by bshelton at September 13, 2016 08:07 PM

Ethan Banks on Technology

Interview: Dr. Pat McCarthy Of The Giant Magellan Telescope

On the Citizens of Tech Podcast #43, we interviewed Dr. Patrick McCarthy of the Giant Magellan Telescope project, currently under construction in Chile.

The GMT is in a new class of “extremely large telescopes.” Featuring a custom glass formulation, seven asymmetric mirrors being polished in Arizona, and software that will correct in real-time for atmospheric distortion and physical alignment, the GMT will gather images too dim for us to have ever seen before.

Among the anticipated advances is the ability to see planets orbiting distant stars, allowing us to get that planet’s spectrographic signature. That data will help us find planets with the chemical signatures of life. We’ll also be able to look ever further back in time as we observe across light years, clarifying our understanding of the universe’s opening moments.

Pat was an outstanding spokesman for the GMT, clearly explaining the project’s worth to science, its construction challenges, and its relation to other extremely large telescope projects. He also helped us understand the pros and cons of terrestrial vs. space-based telescopes.

by Ethan Banks at September 13, 2016 04:00 PM

Blog (Ivan Pepelnjak)

How Do I Persuade My Management Automation Makes Sense?

Matt Oswalt made two great points while tweeting about my Automation Gone Wild blog post:

  • Automation should be a strategy. You need management buy-in;
  • You should have at least one person with strong software development experience in your automation team.

However, life is not always rosy, so @stupidengineer asked:

Read more ...

by Ivan Pepelnjak ( at September 13, 2016 06:45 AM

September 12, 2016

Honest Networker

Cooking up a new routing policy

by ohseuch4aeji4xar at September 12, 2016 10:35 PM

Networking Now (Juniper Blog)

The Latest in Disaster Recovery as a Service: Expedient Joins Forces with Juniper Networks vSRX to Keep Customers’ Networks Safe and Highly Available

We hear a lot about being “always on” and having “100 percent uptime.” While this is a reasonable expectation, it’s a difficult task to accomplish when an outage occurs due to a disaster or some other unavoidable circumstance. What to do in such a situation is a dilemma that keeps IT professionals on the edge of their seats and reaching for the latest technology that can keep their workloads backed up and secure in a time of need.

by smiles at September 12, 2016 09:36 PM

Blog (Ivan Pepelnjak)

Guest Appearance on Full Stack Journey Podcast

I love listening to Scott Lowe’s Full Stack Journey podcast, so I was totally delighted when he asked me to participate. The results: FSJ Episode#8. Enjoy!

by Ivan Pepelnjak ( at September 12, 2016 02:43 PM

XKCD Comics

September 09, 2016

Blog (Ivan Pepelnjak)

OSPF Areas and Summarization: Theory and Reality

While most readers, commenters, and Twitterati agreed with my take on the uselessness of OSPF areas and inter-area summarization in the 21st century, a few of them pointed out that theory and practice are not the same. Unfortunately, most of those counterexamples failed due to broken implementations or vendor “optimizations”.

Read more ...

by Ivan Pepelnjak ( at September 09, 2016 11:59 AM

XKCD Comics

September 08, 2016

Potaroo blog

Binding to an IPv6 Subnet

In the original framework of the IP architecture, hosts had network interfaces, and network interfaces had single IP addresses. These days, many operating systems allow additional addresses to be added to a network interface by enumerating each one. But can we bind a network interface to an entire subnet of IP addresses without having to enumerate every individual address?
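On Linux, one answer is the kernel’s “AnyIP” behavior: a local route covering the whole prefix makes the host accept traffic for every address in it. A configuration sketch, assuming a recent Linux kernel, root privileges, and the documentation prefix 2001:db8::/32 standing in for a real one:

```shell
# Accept traffic for every address in the /64 without enumerating them
ip -6 route add local 2001:db8:100::/64 dev lo

# A service bound to the wildcard address now answers on any of the
# 2^64 addresses in the prefix, e.g.:
#   nc -6 -l 8080 &
#   curl -g 'http://[2001:db8:100::42]:8080/'
```

Whether this counts as “binding an interface to a subnet” or merely routing the subnet to the host is, of course, part of the article’s question.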

September 08, 2016 09:00 PM

Brian Raaen

Moving to the dark side

Things have been crazy this past year. After 10 years at my old job, last March I took a new position. I am moving from working as an engineer on IOS and IOS-XR service provider networks to working in the healthcare industry at a regional health care provider. … Continue reading "Moving to the dark side"

by Brian Christopher Raaen at September 08, 2016 06:51 PM

The Networking Nerd

Cloud Apps And Pathways


Applications are king. Forget all the things you do to ensure proper routing in your data center. Forget the tweaks for OSPF sub-second failover or BGP optimal path selection. None of it matters to your users. If their login to Siebel or Salesforce or Netflix is slow today, you’ve failed. They are very vocal when it comes to telling you how much the network sucks today. How do we fix this?

Pathways Aren’t Perfect

The first problem is the cloud focus of applications. Once our packets leave our border routers, it’s a giant game of chance as to how things will work next. The routing protocol games that govern the Internet are tried and true and straight out of RFC 1771 (yes, RFC 4271 supersedes it). BGP is a great tool with general-purpose abilities. It’s becoming the choice for web-scale operators like LinkedIn and Facebook. But it’s problematic for Internet routing: it scales well but doesn’t have the ability to make rapid decisions.

The stability of BGP is also the reason why it doesn’t react well to changes. In the old days, links could go up and down quickly. BGP was designed to avoid issues with link flaps. But today’s links are less likely to flap and more likely to need traffic moved around because of congestion or other factors. The pace that applications need to move traffic flows means that they tend to fight BGP instead of being relieved that it’s not slinging their traffic across different links.

BGP can supply good path suggestions; that’s how Facebook uses it for global routing. But decisions need to be made on top of BGP much faster, which is why cloud providers don’t rely on it beyond basic connectivity. Load balancers and other devices make up for this as best they can, but they are also points of failure in the network and have scalability limitations. So what can we do? How can we make applications run better without replacing the entire routing infrastructure of the Internet?
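The deliberate slowness is baked into the best-path algorithm itself: a fixed list of ordered tie-breakers with no notion of current congestion. A toy sketch of the first few steps (illustrative only; real implementations evaluate many more attributes):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Path:
    """A candidate BGP path with a few of its attributes."""
    peer: str
    local_pref: int = 100
    as_path: List[int] = field(default_factory=list)
    med: int = 0

def best_path(paths: List[Path]) -> Path:
    # Higher local-pref wins, then shorter AS path, then lower MED.
    # Note: nothing in this comparison reflects current link load.
    return min(paths, key=lambda p: (-p.local_pref, len(p.as_path), p.med))

a = Path("peer-a", local_pref=200, as_path=[65001, 65002])
b = Path("peer-b", local_pref=100, as_path=[65001])
print(best_path([a, b]).peer)  # local-pref beats the shorter AS path
```

Every input to that decision is an administratively configured attribute, which is exactly the gap the overlays and load balancers discussed here try to paper over.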

GPS For Routing

One approach with some potential to fix the inefficiency of BGP and other basic routing protocols was highlighted during Networking Field Day 12 in the presentation from Teridion. They use agents at the endpoints to create more efficient paths between them, as founder Elad Rave explained in his presentation.

I like the idea of getting “traffic conditions” from endpoints to avoid congestion. For users of cloud applications, those conditions are largely unknown. Even multipath routing confuses tried-and-true troubleshooting like traceroute. What needs to happen is a way to collect the data for congestion and other inputs and make faster decisions that aren’t beholden to the underlying routing structure.

Overlay networking has tried to do this for a while now. Build something that can take more than basic input and make decisions on that data. But overlays have issues with scaling, especially past the boundary of the enterprise network. Teridion has potential to help influence routing decisions in networks outside your control. Sadly, even the fastest enterprise network in the world is only as fast as an overloaded link between two level 3 interconnects on the way to a cloud application.

Teridion has the right idea here. Alternate pathways need to be identified and utilized. But that data needs to be evaluated and updated regularly. Much like the issues with Waze dumping traffic into residential neighborhoods when major arteries get congested, traffic monitors could cause overloads on alternate links if shifts happen unexpectedly.
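That decision loop can be sketched in a few lines. Everything here is hypothetical (the path names, RTT samples, and the median heuristic are mine, not Teridion’s published algorithm):

```python
import statistics

def pick_path(rtt_samples: dict) -> str:
    """Pick the path whose median measured RTT (ms) is currently lowest."""
    return min(rtt_samples, key=lambda path: statistics.median(rtt_samples[path]))

# Hypothetical measurements collected by endpoint agents.
samples = {
    "direct":       [80, 95, 240, 90],    # occasional congestion spike
    "via-relay-eu": [110, 112, 108, 111],
    "via-relay-us": [70, 72, 300, 75],
}
print(pick_path(samples))
```

Using the median rather than the latest sample is one crude way to damp the Waze-style oscillation: a single congestion spike doesn’t immediately dump every flow onto the alternate link.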

The other reason why I like Teridion is because they are doing things without hardware boxes or the need to install software anywhere but the end host. Anyone working with cloud-based applications knows that the provider is very unlikely to provide anything outside of their standard offerings for you. And even if they manage, there is going to be a huge price tag. More often than not, that feature request will become a selling point for a new service in time that may be of marginal benefit until everyone starts using it. Then application performance goes down again. Since Teridion is optimizing communications between hosts it’s a win for everyone.

Tom’s Take

I think Teridion is on to something here. Crowdsourcing is the best way to gather information about traffic. Giving packets a better destination with shorter travel times means better application performance. Better performance means happier users. Happier users means more time spent solving other problems that have symptoms that aren’t “It’s slow” or “Your network sucks”. And that makes everyone happier. Even grumpy old network engineers.


Teridion was a presenter during Networking Field Day 12 in San Francisco, CA. As a participant in Networking Field Day 12, my travel and lodging expenses were covered by Tech Field Day for the duration of the event. Teridion did not ask for, nor were they promised, any kind of consideration in the writing of this post. My conclusions here represent my thoughts and opinions about them and are mine and mine alone.


by networkingnerd at September 08, 2016 02:29 PM

Blog (Ivan Pepelnjak)

OpenStack on VMware NSX on Software Gone Wild

Does it make sense to run OpenStack on top of VMware infrastructure? How well does NSX work as a Neutron plug-in? Marcos Hernandez answered these questions (and a lot of others) in the Episode 61 of Software Gone Wild (admittedly after a short marketing pitch in the first 10 minutes).

by Ivan Pepelnjak ( at September 08, 2016 11:55 AM

September 07, 2016

Honest Networker

Blog (Ivan Pepelnjak)

Running BGP between Virtual Machine and ToR Switch

One of my readers left this question on the blog post resurfacing the idea of running BGP between servers and ToR switches:

When using BGP on a VM for mobility, what is the best way to establish a peer relationship with a new TOR switch after a live migration? The VM won't inherently know the peer address or the ASN.

As always, the correct answer is it depends.
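One commonly deployed option (a sketch of the idea, not necessarily the answer behind the link) is BGP unnumbered: the VM peers on the interface itself, discovers the neighbor via its IPv6 link-local address, and accepts any neighbor ASN, so neither the ToR’s address nor its ASN needs to be known in advance. FRR-style configuration, with hypothetical interface and prefix:

```
router bgp 65101
 ! Peer on the interface: the neighbor address is discovered via
 ! IPv6 link-local (RFC 5549), so the session can re-form with
 ! whatever ToR the VM lands on after a live migration.
 neighbor eth0 interface remote-as external
 address-family ipv4 unicast
  network 192.0.2.10/32   ! advertise the VM's own address
```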

Read more ...

by Ivan Pepelnjak ( at September 07, 2016 10:55 AM

XKCD Comics

September 06, 2016

Ethan Banks on Technology

MacBook Battery Replacement Requires Admin Credentials?

Over the weekend, I investigated the possibility of Apple replacing the tired battery in my four-year-old rMBP13. Yes, they can do it. It’s $199 for that particular model. But they also require an admin-level username and password for the device. Here’s an excerpt from the chat session.

Apple support rep:

What is the Admin Name and password for your Mac?


Will not share. Definitely should not be required for a battery replacement.

Apple support rep:

It is required. When the Mac goes to the repair depot that is required. You can remove that information so there is just an automatic log in. And you can set it up again when you get it back. We do not ask for any information that is not required.


Okay, then we’re done here. Thanks very much for your help!

Automatic login, while an improvement from a certain point of view, isn’t a fix. No, you don’t have to know the user/pass to access the system, but you’re still on the system with admin-level credentials. Anyone with admin-equivalent credentials can, with a minimum of effort, get into whatever part of the file system they might like, make changes to the system, etc.

No one should give this level of credentials to anyone, let alone Apple over a chat session. Not even a properly-encrypted-with-a-valid-cert chat session that makes me believe I was, in fact, speaking to an official Apple representative.

Battery replacement in a compact laptop chassis such as a modern MacBook is an arduous affair, which is why I’m happy to pay someone else to do it. But the price of admin equivalency, even temporarily, is a price too high. Whatever the technical reasons might be for this current requirement, Apple should do better. I suggest a service mode that could be used to verify that the replacement battery installation was successful. No doubt it’s not that simple. Nothing ever is.

I’ll try a meatspace Apple store and see if there’s a way I can get the replacement done without having to hand over the admin credentials.

by Ethan Banks at September 06, 2016 02:40 PM

Blog (Ivan Pepelnjak)

Questions about Network Automation Workshop

Marcel Reuter sent me a few questions about my upcoming Network Automation workshop. You might find them interesting, so here they are:

We have a lab with virtual IOS-XE, IOS-XR and Junos (vMX) routers. I would like to learn how to provision the lab routers.

Covered in the workshop. I’m focusing on vIOS (which is pretty close to IOS Classic and IOS-XE) and Nexus OS because that’s what I can get up and running quickly in VIRL.
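For a flavor of what such lab provisioning can look like, here is a minimal Ansible sketch using the ios_config module (the inventory group name and the pushed lines are placeholders of mine, not workshop material):

```yaml
# provision.yml -- push a baseline snippet to vIOS lab routers
- hosts: lab_routers            # placeholder inventory group
  gather_facts: no
  connection: network_cli       # with ansible_network_os=ios set in inventory
  tasks:
    - name: Describe Loopback0
      ios_config:
        parents: interface Loopback0
        lines:
          - description provisioned by ansible
```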

Read more ...

by Ivan Pepelnjak ( at September 06, 2016 10:50 AM

My Etherealmind

Enterprise IT Doesn’t Care About The Price. Really.

Enterprise IT is willing to pay high prices and deliver large profit margins to suppliers.

The post Enterprise IT Doesn’t Care About The Price. Really. appeared first on EtherealMind.

by Greg Ferro at September 06, 2016 10:11 AM