May 19, 2015

The Networking Nerd

Open Choices In Networking


I had an interesting time at the spring meeting of the Open Networking User Group (@ONUG_) this past week. There were lots of discussions about networking, DevOps, and other assorted topics. One that caught me by surprise was some of the talk around openness. These tweets from Lisa Caywood (@RealLisaC) were especially telling:

After some discussion with other attendees, I think I’ve figured it out. People don’t want an open network. They want choice.

Flexible? Or Predictable?

Traditional networking marries software and hardware together. You want a Cisco switch? It runs IOS or NX-OS. Running Juniper? You can have any flavor of OS you want…as long as it’s Junos. That has been the accepted order of things for decades. Flexibility is traded for predictability. Traditional networking vendors give you many of the tools you need. If you need something different, you have to find the right mix of platform and software to get your goal accomplished. Mixing and matching is almost impossible.

This sounds an awful lot like the old IBM PC days, the same environment that gave rise to whitebox computers. We have a whitebox switching movement today for almost the same reasons: being able to run a different OS on cheaper hardware to the same end as the traditional integrated system. In return, you gain back the flexibility that you lost. There are some tradeoffs, however.

In theory, a whitebox switch is marginally harder to troubleshoot than a traditional platform. Which combination of OS and hardware are you running? How do those things interact to create bugs? Anyone who has ever tried to install USB device drivers on Windows knows that kind of pain. Getting everything to work right can be rough.

In practice, the support difference is negligible. Traditional vendors have a limited list of hardware, but the numerous versions of software (including engineering special code) interacting with those platforms can cause unforeseen consequences. Likewise, most third party switch OS vendors have a tight hardware compatibility list (HCL) to ensure that everything works well together.

People do like flexibility. Giving them options means they can build systems to their liking. But that’s only a part of the puzzle.

The Problem Is Choice

Many of the ONUG attendees I talked to liked the idea of whitebox switching. They weren’t entirely enamoured, however. When I pressed a bit deeper, a pattern started to emerge. It sounded an awful lot like this:

I don’t want to run Vendor X Linux on my switch. I want to run my Linux on a switch!

That complaint highlighted the real issue. Open networking proponents don't necessarily want open source networking that enhances the work of all. What they want is a flexible network that lets them run whatever they choose on their hardware.

The people that attend conferences like ONUG don’t like rigid choice options. Telling them they can run IOS or Junos is like picking the lesser of two evils. These people want to have a custom OS with the bare minimum needed to support a role in the network. They are used to solving problems outside the normal support chain. They chafe at the idea of being forced into a binary decision.

That goes back to Lisa’s tweets. People don’t want a totally open network running Quagga and other open source solutions. They want an open architecture that lets them rip and replace solutions based on who is cheaper that week or who upset them at the last account team meeting. They want the freedom to use their network as leverage to get better deals.

It’s a whole lot easier to get a better discount when you can legitimately threaten to have the incumbent thrown out and replaced relatively easily. Even if you have no intentions of doing so. Likewise, new advances in whitebox switching give you leverage to replace sections of the network and have feature parity with traditional vendors in all but a few corner cases. It seems to be yet another redefinition of open.


Tom’s Take

Maybe I’m just a cynic. I support development of software that makes the whole world better. My idea of open involves people working together to make everything better. It’s not about using strategies to make just my life easier. Enterprises are big consumers of open technologies with very little reciprocity outside of a few forward thinkers.

Maybe the problem is that we've overloaded open to mean so many other things that we have cognitive dissonance when we try to marry the various open ideas together? Open network architecture is easy as long as you stick to OSPF and other "standard" protocols. Perhaps the problem of choice is being shortsighted enough to make the wrong one.

 


by networkingnerd at May 19, 2015 04:07 PM

May 18, 2015

Peter's CCIE Musings and Rants

The REST of the story - Using REST APIs to program your UC and DC Cisco Applications

Hi Guys!

Hopefully you've had a chance to check out my blog post on using SOAP and AXL to program Cisco Communications Manager; if not, you should take a look some time! This post is all about using REST APIs. I will take you through everything you need to get started making REST API calls to your Cisco applications and services.

SOAP is just one method that web services use to expose and consume information from each other; the other is REST.

I'm not knowledgeable enough to give you all the pros and cons of SOAP vs REST, but what I can tell you is that while CUCM uses SOAP, most other UC applications from Cisco use REST, as do the data center products such as Cisco Nexus and Cisco UCS.

You can use the APIs exposed via REST to do all sorts of programming of your Cisco applications. You could:


  • Write an application or web service to automatically provision new UCS service profiles based on a customer's information
  • Create plugins for your own enterprise applications to interact with Unity Connection; maybe you could show users how many voicemails they have right in Salesforce
  • Use programming to automate tedious tasks such as assigning skills to agents in bulk on UCCX
  • Anything you can think of!

The exposed REST APIs give you the freedom to configure Cisco applications without having to use the CLI or GUI. For most Cisco products, anything you can do in the CLI or the GUI, you can do programmatically via the REST APIs.

REST is a near-universal method of accessing APIs; there are ways to consume REST APIs in almost every programming language.

Let's go over some of the very basics of REST. We won't really scratch the surface of all the elements involved, but we should cover enough to get you started.

As mentioned, REST uses HTTP and HTTPS as its underlying transport mechanism to allow you to make "calls" to an exposed web service. This is all done using the standard HTTP methods such as GET and POST; you could make REST calls simply using your web browser!

GET is an HTTP method where you request data from the web server and plan to do something with it; the PUT and POST methods are used to create or update data on the web server. (There are pros and cons to PUT vs POST, but for the purpose of this article, just know that some API calls will need you to use the PUT method and others will want the POST method.)

When your HTTP request completes, you will receive a status code in response. These status codes help you determine whether your call was successful. For example, Unity Connection returns a 204 code if your REST API call to "upload WAV prompt" succeeds; if there were errors in your upload for some reason, it is likely to respond with a 4XX error such as 400 Bad Request, or 403 Forbidden if you don't have the appropriate permissions.

It's helpful to know the status code classes that HTTP may send and what they are likely to mean (a short scripted example follows the list):


  • 1XX - Informational messages such as continue, processing, etc. You're not likely to receive very many of these
  • 2XX - Success codes, such as 200 OK and 202 Accepted; these mean your REST call was successful
  • 3XX - Redirection codes, such as 302 Moved Temporarily
  • 4XX - Client-side errors such as 404 Not Found; these are likely to mean you have some sort of mistake in your REST API call
  • 5XX - Server-side errors; the server had some sort of problem processing your request.
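
To make this concrete, here is a minimal sketch of checking a response code in Python (assuming the third-party requests library; the server address and credentials are placeholders rather than anything from a real deployment):

import requests

# Placeholder Unity Connection server and credentials -- substitute your own.
url = "https://10.1.1.1/vmrest/handlers/callhandlers"
resp = requests.get(url, auth=("apiuser", "apipassword"), verify=False)

if 200 <= resp.status_code < 300:
    print("Success:", resp.status_code)            # e.g. 200 OK or 204 No Content
elif 400 <= resp.status_code < 500:
    print("Client-side error:", resp.status_code)  # check the call and your permissions
elif resp.status_code >= 500:
    print("Server-side error:", resp.status_code)  # the server had a problem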


The final bit of theory we need to understand for REST is the HTTP concept of "content type".

Content-Type is used with HTTP to tell the server what kind of data we are sending it. For example, if we are uploading a WAV file, the content type might be audio/wav.

In REST, you're likely to see two content types very frequently:
application/json
and
application/xml

These two content types define how you format your REST calls. application/json means you will send your request in JSON (JavaScript Object Notation) format, which is considered "simpler" than XML and uses key: value pairs, like the below:


{
  "firstName": "John",
  "lastName": "Smith"
}


Expressed using the application/xml content type, the same message might look like this:

<user>
  <firstName>John</firstName>
  <lastName>Smith</lastName>
</user>

Both are perfectly valid ways of representing the same piece of information. It's up to you which format you prefer, and up to the web server providing the REST call as to which content types it accepts.
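
Since you may end up consuming both formats, it's worth noting that Python's standard library can parse either; here is a small illustrative sketch (the element names simply mirror the examples above):

import json
import xml.etree.ElementTree as ET

json_doc = '{"firstName": "John", "lastName": "Smith"}'
xml_doc = "<user><firstName>John</firstName><lastName>Smith</lastName></user>"

person = json.loads(json_doc)
print(person["firstName"], person["lastName"])                # John Smith

root = ET.fromstring(xml_doc)
print(root.findtext("firstName"), root.findtext("lastName"))  # John Smith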

Now, with this background information on REST APIs, let's take a look at Unity Connection for an example of using a REST API!

The Unity Connection API documentation is available here: http://docwiki.cisco.com/wiki/Cisco_Unity_Connection_APIs

On the call handler API call page, you can see a method for listing the call handlers below:

This kind of format is extremely common with REST, the URL is often used to indicate exactly what information you are after.

Go ahead and browse to your Unity Connection server using the above URL format, for example:
https://10.1.1.1/vmrest/handlers/callhandlers (where 10.1.1.1 is the IP address of your own Unity Connection server). This will make a regular GET request to that URL.

You should be prompted for a username and password. Your admin login will work fine, but obviously when you write your own custom applications you probably shouldn't use the admin account!

If you have done this correctly, you should get something like this showing in your browser:


Success! We have pulled down an XML-formatted document listing all our configured call handlers. This shows you the kind of information you could bring into your application using REST APIs.

This is quite useful, but what about when you want to make modifications or run more complicated queries via REST? Before you jump head first into using a programming language to implement your REST calls, the first thing you should do is download the Firefox (or Chrome) extension "RESTClient", which is available here: http://www.restclient.net/

This super useful tool allows us to make REST calls directly in our web browser and manipulate them far more effectively than we could using the URL alone.

A screenshot below shows the power of this tool:


When you first add the tool, you will want to go to the authentication menu, choose basic authentication, and pop in your username and password.

Next, to specify the content type, go to the headers menu and create a custom header. It probably makes sense to save this header so you can reuse it later.



First, let's make sure we understand the format of the URL we need for this request.

The URL is:
https://<connection-server>/vmrest/callhandlertemplates/<callhandlertemplate-objectid>/templatemenuentries/<touchtone-key>

To get the callhandlertemplate-objectid, let's make a new GET request using the following URL:

https://10.1.1.1/vmrest/callhandlertemplates (where 10.1.1.1 is the IP address of your own Unity Connection server)

Also, just for fun, let's use a new header called Accept to tell Unity Connection that we would like its response in JSON format rather than the usual XML. This can be done by going to Headers, clicking custom headers, and adding Name: Accept and Value: application/json.

Click "Send" to send the REST call. Don't forget your method should be GET, since in this case we are just expecting to receive a response. We also do not need to fill in the request body; you can leave that blank.

Your REST Client should look something like this:



You can see that you have a few different ways to display the response; this shows you how JSON is formatted.
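
If you would rather script this step than click through RESTClient, here is a rough Python equivalent (again a sketch assuming the requests library; the IP address and credentials are placeholders):

import requests

# Placeholder address and credentials -- use your own Unity Connection details.
resp = requests.get(
    "https://10.1.1.1/vmrest/callhandlertemplates",
    auth=("apiuser", "apipassword"),
    headers={"Accept": "application/json"},  # ask for JSON instead of the default XML
    verify=False,  # lab servers often have self-signed certificates
)
print(resp.status_code)
print(resp.json())  # the template list, including each template's ObjectId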

Now that we have the URI for the call handler template (in this case, vmrest/callhandlertemplates/1b0bb6d4-b2a1-458f-a73f-79f21d9c864b), we can use a PUT method to modify this call handler template.

Let's pretend that we want to change the caller input for the default call handler template so that pressing 1 performs a different action. The screenshot below shows the two available formats (application/xml and application/json) for making this change via REST.


On your RESTClient, change the method to PUT and set the URL in the format shown for the caller input API above (https://<connection-server>/vmrest/callhandlertemplates/<callhandlertemplate-objectid>/templatemenuentries/<touchtone-key>).

For me, the URL looks like this:

https://172.21.1.58/vmrest/callhandlertemplates/1b0bb6d4-b2a1-458f-a73f-79f21d9c864b/templatemenuentries/1

Be sure to add a Content-Type header to tell Unity Connection what format your request body will be in. I am going to use JSON just to get practice with it, but try both! Give the XML format a try too!

My request body has the following format:

{
  "Action": "7",
  "TransferNumber": "1000",
  "TransferType": 1,
  "TransferRing": "2"
}


Action 7 specifies the action taken when touchtone key 1 is pressed (remember, I specified the key would be 1 in the URL): transfer to an alternate contact number, which is 1000, with a transfer type of 1, waiting 2 rings. You can get a list of the other available actions from the caller input API documentation here:
http://docwiki.cisco.com/wiki/Cisco_Unity_Connection_Provisioning_Interface_%28CUPI%29_API_--_Updating_Caller_Input_Keys
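
For reference, the same PUT can also be scripted; here is a short Python sketch under the same assumptions as before (the requests library, placeholder credentials, and the ObjectId from my lab):

import requests

# The ObjectId below is from my lab; yours will differ.
url = ("https://172.21.1.58/vmrest/callhandlertemplates/"
       "1b0bb6d4-b2a1-458f-a73f-79f21d9c864b/templatemenuentries/1")
body = {
    "Action": "7",
    "TransferNumber": "1000",
    "TransferType": 1,
    "TransferRing": "2",
}
resp = requests.put(
    url,
    json=body,  # serializes the body and sets Content-Type: application/json
    auth=("apiuser", "apipassword"),  # placeholder credentials
    verify=False,
)
print(resp.status_code)  # expect a 2XX code on success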



This will now update the call handler template for me. My REST client looked something like this:




Now when I go to Unity Connection and check my call handler template's Caller Input settings, I can see that the change has gone through:



There you have it! We've used REST APIs to make changes to Unity Connection. In the next blog post we will use C# to do something genuinely useful that is quite a pain to do in the GUI.

What you have just learnt should serve you well even outside the Cisco product range: VMware has REST APIs available to use, as do Amazon, Instagram, Twitter... there are so many REST API services out there. http://www.programmableweb.com/category/all/apis?page=1&order=created&sort=desc provides a great directory of just some of these services. I am sure you can come up with great ideas on how to use third-party REST APIs to do something great!



by peter_revill (noreply@blogger.com) at May 18, 2015 03:17 PM

My Etherealmind

Where are we with SDN in May 2015?

There is so much to talk about in SDN that I couldn't finish any of the dozen or so blog posts I started in the last three weeks. Instead, I created a 15-minute Keynote presentation with the key points I wanted to blog about.

The post Where are we with SDN in May 2015? appeared first on EtherealMind.

by Greg Ferro at May 18, 2015 12:43 PM


May 16, 2015

PACKETattack

Introducing the Citizens of Tech Podcast

Citizens of Tech is not a show about gadgets and apps, at least not specifically. It's not a show about networking. It's not a constipated show about IT. Rather, it's a show for nerds who like science, gaming, books, contrarian thinking, entertainment, space exploration, transportation, energy, complex world problems, and anything else that's somehow technology-related. Sure, that might include gadgets, apps, IT, and so on, but we're trying to appeal to a certain kind of mind -- probably yours -- and not a certain kind of industry.

by Ethan Banks at May 16, 2015 09:48 PM

May 15, 2015

Loopback Mountain

More ADN (Awk Defined Networking)

Want to know how many IPv4 nodes are in each of your VLANs? Use ADN:

ssh myswitch 'sh arp | i Vlan' | awk '{print $NF}' | sort | uniq -c | sort -rn

     79 Vlan38
     65 Vlan42
     58 Vlan34
     22 Vlan36
     21 Vlan32
     20 Vlan40
      9 Vlan3
      7 Vlan8
      5 Vlan6
      5 Vlan204
      5 Vlan203
      5 Vlan2
      4 Vlan74
      3 Vlan82
      3 Vlan4

by noreply@blogger.com (Jay Swan) at May 15, 2015 08:52 PM

Internetwork Expert Blog

PPP CHAP Authentication Question

 

The following question was recently sent to me regarding PPP and CHAP:

 

At the moment I only have packet tracer to practice on, and have been trying to setup CHAP over PPP.

It seems that the “PPP CHAP username xxxx” and “PPP CHAP password xxxx” commands are missing in packet tracer.

I have it set similar to this video… (you can skip the first 1 min 50 secs)

https://www.youtube.com/watch?v=5ltNfaPz0nA

As he doesn’t use the missing commands, if that were to be done on live kit would it just use the hostname and magic number to create the hash?

 

Also, in bi-directional authentication, do both routers have to use the same password or can they be different as long as they match what they expect from the other router?

Thanks, Paul.

 

Here was my reply:

Hi Paul,

When using PPP CHAP keep in mind four fundamental things:

  1. The "magic number" that you see in PPP LCP messages has nothing to do with Authentication or CHAP.  It is simply PPP's way of trying to verify that it has a bi-directional link with a peer. When sending a PPP LCP message a random Magic Number is generated.  The idea is that you should NOT see your own Magic Number in LCP messages received from your PPP Peer.  If you DO see the same magic number that you transmitted, that means you are talking to yourself (your outgoing LCP CONFREQ message has been looped back to you).  This might happen if the Telco that is providing your circuit is doing some testing or something and has temporarily looped-back your circuit.
  2. At least one of the devices will be initiating the CHAP challenge.  In IOS this is enabled with the interface command, “ppp authentication chap”.  Technically it only has to be configured on one device (usually the ISP router that wishes to “challenge” the incoming caller) but with CHAP you can configure it on both sides if you wish to have bi-directional CHAP challenges.
  3. Both routers need a CHAP password, and you have a couple of options on how to do this.
  4. The "hash" that is generated in an outgoing PPP CHAP Response is created as a combination of three variables, and without knowing all three values the Hash Response cannot be generated (a short sketch of the computation follows this list):
  • A router’s Hostname
  • The configured PPP CHAP password
  • The PPP CHAP Challenge value
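
For the curious, RFC 1994 defines the Response value itself as an MD5 hash over the CHAP packet Identifier, the shared secret, and the Challenge; the router's hostname rides along in the Response packet's Name field so that the challenger knows which password to use when it repeats the computation. A toy Python sketch of the math (illustrative only, not anything from IOS):

import hashlib
import os

def chap_response(identifier: int, secret: str, challenge: bytes) -> bytes:
    # RFC 1994: Response = MD5(Identifier || secret || Challenge)
    return hashlib.md5(bytes([identifier]) + secret.encode() + challenge).digest()

challenge = os.urandom(16)                       # generated by the challenger (ISP)
response = chap_response(1, "cisco", challenge)  # computed by the caller (Customer)

# The challenger repeats the computation with the password it has stored for the
# received hostname and compares digests -- a match means authentication succeeds.
assert response == chap_response(1, "cisco", challenge)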

I do all of my lab testing on real hardware so I can’t speak to any “gotchas” that might be present in simulators like Packet Tracer.  But what I can tell you, is that on real routers the side that is receiving the CHAP challenge must be configured with an interface-level CHAP password.

The relevant configurations are below as an example.

ISP router that is initiating the CHAP Challenge for incoming callers:

username Customer password cisco
!
interface Serial1/3
 encapsulation ppp
 ppp authentication chap
 ip address x.x.x.x y.y.y.y
!

Customer router placing the outgoing PPP call to ISP:

hostname Customer
!
interface Serial1/3
 encapsulation ppp
 ppp chap password cisco
 ip address x.x.x.x y.y.y.y
!

If you have a situation where you expect that the Customer Router might be using this same interface to “call” multiple remote destinations, and use a different CHAP password for each remote location, then you could add the following:

 

Customer router placing the outgoing PPP call to ISP-1 (CHAP password = Bob) and ISP-2 (CHAP password = Sally):

hostname Customer
!
username ISP-1 password Bob
username ISP-2 password Sally
!
interface Serial1/3
 encapsulation ppp
 ppp chap password cisco
 ip address x.x.x.x y.y.y.y
!

Notice in the example above, the "username x password y" commands supersede the interface-level command, "ppp chap password x". But please note that the customer (calling) router always needs the "ppp chap password" command configured at the interface level.  A global "username x password y" in the customer router does not replace this command.  In this situation, if the Customer router placed a call to ISP-3 (for which there IS no "username/password" statement) it would fall back to using the password configured at the interface level.

Lastly, the “username x password y” command needs to be viewed differently depending on whether or not it is configured on the router that is RESPONDING to a Challenge…or is on the router that is GENERATING the Challenge:

  • When the command “username X password Y” is configured on the router that is responding to the CHAP Challenge (Customer router), the router’s local “hostname” and password in this command (along with the received Challenge) will be used in the Hash algorithm to generate the CHAP RESPONSE.

 

  • When the command “username X password Y” is configured on the router that is generating the CHAP Challenge (ISP Router), once the ISP router receives the CHAP Authentication Response (which includes the hostname of the Customer/calling router) it will match that received Hostname to a corresponding “username X password Y” statement. If one is found that matches, then the ISP router will perform its own CHAP hash of the username, password, and Challenge that it previously created to see if its own, locally-generated result matches the result that was received in the CHAP Response.

Finally, you asked: "Also, in bi-directional authentication, do both routers have to use the same password or can they be different as long as they match what they expect from the other router?"

Hopefully from my explanations above it is now clear that in the case of bi-directional authentication, the passwords do indeed have to be the same on both sides.

 

Hope that helps!

Keith

 


 

 

by Keith Bogart, CCIE #4923 at May 15, 2015 05:48 PM

Network Design and Architecture

Why and Where Ring topology is used ?

Ring topology is used mostly for economic reasons. It is a very common topology in the service provider access domain, and it is not so uncommon in Aggregation and Core ( Backbone ) networks as well. Long haul links are expensive, thus in order to provide last mile connectivity in the Service Provider access domain, nodes might… Read More »

The post Why and Where Ring topology is used ? appeared first on Network Design and Architecture.

by orhanergun at May 15, 2015 11:15 AM

Push and Pull Based Control Plane Mechanisms

Control plane packets are used to build a communication path between the networking devices. In some cases the control plane is used to advertise and learn the endpoints. Imagine a network which consists of these networking devices; in order to create a graph or tree among them for bridging or routing purposes, control plane protocols are used.… Read More »

The post Push and Pull Based Control Plane Mechanisms appeared first on Network Design and Architecture.

by orhanergun at May 15, 2015 10:10 AM


May 14, 2015

PACKETattack

ONUG Spring 2015 Live Blog – Day 2

More live blogging from ONUG Spring 2015 in NYC. Coverage of sessions related to open networking, SDN, use cases, customer experiences with emerging technology, etc.

by Ethan Banks at May 14, 2015 07:49 PM

ONUG Spring 2015 Live Blog – Day 1

My live blog from the ONUG 2015 Spring event in NYC.

by Ethan Banks at May 14, 2015 05:59 PM

Network Design and Architecture

Common Networking Protocols in LAN, WAN and Datacenter

Spanning Tree, Link Aggregation, VLAN and First Hop Redundancy protocols are used in Campus, Service Provider Access and Aggregation, and Datacenter environments. There are definitely other protocols which are common across the Places in the Network, but in order to keep this article short and meaningful I chose these four. I will… Read More »

The post Common Networking Protocols in LAN, WAN and Datacenter appeared first on Network Design and Architecture.

by orhanergun at May 14, 2015 01:28 PM

May 13, 2015

Potaroo blog

Diving into the DNS

The turning of the DNS from a distributed database query tool into a malicious weapon in the cyber warfare arena has had profound impacts on the thinking about the DNS. I remember hearing the rallying cry some years back: "Let's all work together to find all these open resolvers and shut them down!" These days I don't hear that any more. It seems that, like SPAM in email, we've quietly given up on eradication, and are now focusing on how to preserve service in a toxic world. I suppose that this is yet another clear case of markets in action – there is no money in eradication, but there is money in meeting a customer's requirement to allow their service to work under any circumstances. We've changed our self-perception from being the public DNS police to private mercenaries who work diligently to protect the interests of our paying customers. We are being paid to care about the victim, not to catch the attacker or even to prevent the attack.

May 13, 2015 09:00 AM


May 12, 2015

Network Design and Architecture

BGP PIC – Prefix Independent Convergence

BGP PIC ( Prefix Independent Convergence ) is a BGP Fast Reroute mechanism which can provide sub-second convergence even for the 500K Internet prefixes by taking advantage of IGP convergence. BGP PIC uses a hierarchical data plane, in contrast to the flat FIB design which is used by Cisco CEF and many legacy platforms. In a hierarchical… Read More »

The post BGP PIC – Prefix Independent Convergence appeared first on Network Design and Architecture.

by orhanergun at May 12, 2015 06:38 PM

Security to the Core | Arbor Networks Security

How to Become an Internet Supervillain in Three Easy Steps

One of the truisms of comic books and graphic novels is that nothing is immutable – both heroes and villains are rebooted, retconned, featured as radically (or subtly) different versions in alternate timelines, etc. The Marvel Cinematic Universe, which so far includes the Captain America, Thor, Hulk, Iron Man, and Avengers films, is a good example. DC are doing the same with The Flash and Green Arrow, and the latest cinematic incarnations of Batman and Superman are set to do battle with one another in a projected summer blockbuster movie next year.

And these new variants on old stories proliferate throughout the various versions of each character arc – variations on the same themes, but instantly recognizable to long-time fans and easily remembered by new ones. Tony Stark’s updated Iron Man origin story in the first Iron Man movie is one such example; the supervillain Mystique’s origin in the X-Men series of films (not part of the MCU) is another.

That isn’t to say that there’s no innovation taking place – Frank Miller’s The Dark Knight Returns radically migrated the general public perception of Batman away from the 1960s comedy paradigm popularized by the camp television series towards a much darker interpretation of Bruce Wayne’s tortured transformation into the Batman over the course of two (soon to be three) successive reboots of the cinematic portrayal of the classic superhero. Alan Moore’s Watchmen cleverly subverted the tried and true formulas of both superheroes and their supervillain nemeses, transforming one into the other in a paired set of character inversions which are amongst the strongest and most memorable in all forms of literature. With Marvels, Alex Ross and Kurt Busiek brought us back to the beginnings of the character arcs of many of the major Marvel superheroes – giving us a very different perspective on those beginnings – resulting in a familiar, yet greatly altered perception of their stories and significance. Ross and Mark Waid did the same for Superman, Wonder Woman, and Captain Marvel (along with several other nearly-forgotten characters) in DC’s seminal Kingdom Come series.

And then Mark Millar showed up, and subverted everything we thought we knew about the superhero/supervillain dichotomy in his ‘Millarworld’ milieu, as well as in more established Marvel and DC franchises. Millar made use of many of the same basic concepts mixed in with more extreme characters and circumstances, leading to outcomes both familiar in theme but wildly varying in details.

Depending upon your inclinations and sensibilities, the thematic and archetypal similarities between the story arcs of comic books and graphic novels and the state of security of many Internet-connected networks and properties may be either amusing, depressing, or strangely compelling. Or some combination of the three.

And as it turns out, it’s considerably easier to become a supervillain on the Internet than it ever has been in comics:

Step 1: Possess – or Invent – a Motive.

Whether it’s ideology, greed, online gaming disputes, or pure nihilism (e.g., ‘for the lulz’), for all practical purposes, there’s a near-infinitude of miscreants or potential miscreants on the Internet (latest user population estimate: 3 billion and counting) today, and many of them have a near-infinite set of axes to grind, either real or imaginary. No matter an organization’s industry, vertical, focus, market, services, or user population, somewhere out there, there’s someone who can somehow benefit from disrupting the availability of its Internet presence – it doesn’t matter who or why, it’s just enough to know that they’re out there, and they’re apparently a permanent feature of life on the Internet, reaching back into its very own Cold War-/ARPANET/IRC-driven origin story, seemingly destined to always be with us.

Step 2: Develop – or Acquire – the Means.

Whether a given archenemy is a network- and applications-savvy polymath or a clueless script kiddie barely able to click a mouse or maneuver across a touchscreen, there are superpowers out there waiting to be invented, used, or reused in the service of disruption. The real innovators (think Lex Luthor or Victor von Doom) are relatively rare; they develop new DDoS attack methodologies, sell them onwards or utilize them personally to accomplish their own individual goals (generally extortion, a diversion to mask online espionage of one form or another, or ideological in nature), and then those new methodologies inevitably make their way downstream into weaponized cloud-based DDoS ‘booter’ or ‘stresser’ tools, allowing the least technically-inclined aspiring Doctor Impossibles to make use of highly effective DDoS techniques such as link-saturating reflection/amplification attacks or more subtle TCP connection-oriented attack methodologies, all through an accessible (if not aesthetically pleasing) Web GUI interface. Push a few buttons, move a few sliders, pay up with a few (likely stolen) Bitcoins or credit cards, and a new Internet supervillain is born!

Step 3: Identify the Opportunity.

Unfortunately, the industry best current practices (BCPs) for maximizing the availability of network elements, servers, application stacks, services, et al., which have been developed and made publicly available and are continually evangelized by many participants in the global operational security community, including Arbor ASERT, are more honored in the breach than in the observance. As a result, even very well-understood, basic DDoS attack methodologies all too often succeed even against large, well-resourced organizations with Internet-facing properties which are crucial to their revenue streams, logistics, and brand reputation. This state of affairs works in favor of all levels of attackers, who often don't even bother to perform much (if any) reconnaissance before launching DDoS attacks against their intended targets.

The more effective Internet supervillains with the longest-running criminal careers are those who practice good tradecraft, who don’t risk gaining too much negative attention from various combinations of law enforcement agencies, and who know when to fade into the background until the next target of opportunity presents itself. And then there are those who adopt a flashy moniker, who’re extremely profligate with their attack campaigns, who threaten DDoS attacks of the greatest sophistication and largest attack traffic volumes – but who in reality are utilizing the same tried-and-true attack methodologies pioneered by the original innovators, slowly expanding their mastery of entry-level ‘booter’/‘stresser’ services while becoming giddy with their newfound, yet circumscribed, superpowers. For this category of Internet supervillains, small initial successes often boost their self-confidence to unjustified levels, and lead them into an overly profligate series of attacks against high-profile institutions which is almost certainly going to bring a lot of unwanted (from the attacker’s point of view) official scrutiny.

For the last year or so, an individual or organization calling itself DD4BC ('DDoS for Bitcoins') has been rapidly increasing both the frequency and the scope of its DDoS extortion attempts, shifting target demographics from low-level Bitcoin exchanges to online casinos and betting shops and, most recently, to prominent financial institutions across Europe, Asia, Australia, and New Zealand. DD4BC's modus operandi is generally to launch a relatively small 10gb/sec – 15gb/sec reflection/amplification DDoS attack against the chosen target, then email an extortion demand for between 15 and 100 Bitcoins (whatever they believe the target in question may be willing to pay) to an official contact address at the targeted organization. These extortion demands typically claim that DD4BC have 400gb/sec – 500gb/sec of DDoS attack capacity at their disposal, and give the targeted organization 48 hours to pay up, else they threaten to unleash overwhelming DDoS attacks against the target in the event of non-payment.

As of this writing, we’re unaware of any organization which has actually given in to DD4BC’s extortion demands, so we’re unsure of how lucrative DD4BC’s DDoS-driven extortion campaigns actually are for the perpetrator(s). What we have observed is that to date, DD4BC seem not to have generated any DDoS attacks in excess of a few tens of gb/sec – which, sadly, have been sufficient to at least initially disrupt the availability of many targeted organizations due to the all-too-commonplace lack of adequate preparations on the part of the defenders. However, the targets and their ISP and MSSP partners have generally moved quickly to successfully mitigate the DD4BC DDoS attacks, not least because DD4BC are simply making use of well-known DDoS attack methodologies such as ntp, SSDP, and WordPress XML-RPC reflection/amplification attacks, plus the occasional SYN-flood (one of the original DDoS attack methodologies in use on the then-nascent commercial Internet, first put to use in 1995). The WordPress reflection/amplification attack, first described in early 2014, seems to be the latest addition to their repertoire.

The ntp reflection/amplification attacks utilized by DD4BC have been seen on the public Internet for the last several years, achieving mainstream popularity in late 2013/early 2014, with Arbor publishing an analysis of the attacks and detailed mitigation instructions in mid-2014. SSDP ascended into popularity in mid-2014, and Arbor included descriptions of and effective mitigation techniques for this DDoS attack methodology in updates to our earlier publications on the general topic of reflection/amplification DDoS attacks.

In short, DD4BC appear to be utilizing commercial ‘booter’/’stresser’ services, and are slowly expanding their mastery of these entry-level attack-generation systems to launch attacks employing well-known methodologies with equally well-known mitigation techniques available through commercial solutions and services such as Arbor’s Peakflow SP/TMS, APS, and Arbor Cloud, as well as a variety of network infrastructure-based tools and techniques recommended by Arbor to network operators of all varieties.

The secret identities and motivations of aspiring Internet supervillains may be of prurient interest to both targets and bystanders, but those details aren't actually necessary for organizations with significant Internet-facing properties to successfully defend against the well-known and readily-mitigated DDoS attack methodologies utilized by lower-tier miscreants, as well as the increasingly sophisticated attacks launched by more skilled attackers.

During the most recent upsurge in DD4BC activity, we've worked with targeted organizations who had neither incorporated the relevant best current practices (BCPs) nor followed the mitigation recommendations made by Arbor and other participants in the global operational security community, and who therefore were initially affected by DD4BC's use of these well-known DDoS attack methodologies. However, it was relatively easy to bring them up to speed very quickly, with both on-premise and ISP/MSSP DDoS defense solutions, services, and techniques which effectively mitigated the attacks against these organizations.

Conversely, we also collaborated with organizations – both service providers of various stripes as well as enterprises in various verticals – who’d already incorporated Arbor’s recommended BCPs and detection/classification/traceback/mitigation techniques during the initial upsurge in ntp reflection/amplification attacks in early 2014, SSDP reflection/amplification attacks in mid-2014, as well as those we’d been assisting in mitigating DNS, SNMP, chargen, and other reflection/amplification attacks over many years. Because these organizations have kept up with the latest BCPs and recommended mitigation strategies and have done so for many years, they and their customers/users were almost completely unaffected by the standard reflection/amplification attacks launched against them by DD4BC, who soon decided to switch their focus to less prepared and capable targets.

“With great power comes great responsibility,” as Uncle Ben was retconned into remonstrating to a young Peter Parker in Ultimate Spider-Man #4. The good news is that practically every organization of note with an Internet presence either has in their possession the superpowers needed to defeat today’s DDoSing Internet supervillains, or can quickly call on allies who possess those powers in abundance. And just like in the comics, the fact that most of the bad guys simply utilize variations on well-established themes means that once an organization has implemented the relevant recommendations and BCPs, they’re well-prepared to deal with bad actors ranging from the likes of Paste Pot Pete and Negaduck to Galactus, Eater of Worlds or Darkseid – and all those in between.

References

1 Soluk, Kirk. (2014, February 14). NTP Attacks: Welcome to The Hockey Stick Era. http://www.arbornetworks.com/asert/2014/02/ntp-attacks-welcome-to-the-hockey-stick-era/

2 ASERT Threat Intelligence. (2014, March). ASERT Threat Intelligence Brief 2014-05 – Comprehensive Insight and Mitigation Strategies for NTP Reflection/Amplification Attacks. Available to Arbor Customers Upon Request.

3 Dobbins, Roland. (2014, October). Presentation – When the Sky is Falling: Network-Scale Mitigation of High-Volume Reflection/Amplification DDoS Attacks. https://app.box.com/s/r7an1moswtc7ce58f8gg

4 Dobbins, Roland. (2014, August). Webinar – When the Sky is Falling: Network-Scale Mitigation of High-Volume Reflection/Amplification DDoS Attacks. https://www.brighttalk.com/webcast/9053/122257

5 Dobbins, Roland. (2009 – Present). Public Folder of Best Current Practices (BCPs) tutorials and techniques for network operators. https://app.box.com/s/4h2l6f4m8is6jnwk28cg

by Roland Dobbins at May 12, 2015 05:02 PM

May 11, 2015

The Networking Nerd

The Light On The Fiber Mountain


Fabric switching systems have been a popular solution for many companies in the past few years. Juniper has QFabric and Brocade has VCS. For those not invested in fabrics, the trend has been to collapse the traditional three tier network model down into a spine-leaf architecture to optimize east-west traffic flows. One must wonder how much more optimized that solution can be. As it turns out, there is a bit more that can be coaxed out of it.

Shine A Light On Me

During Interop, I had a chance to speak with the folks over at Fiber Mountain (@FiberMountain) about what they’ve been up to in their solution space. I had heard about their revolutionary SDN offering for fiber. At first, I was a bit doubtful. SDN gets thrown around a lot on new technology as a way to sell it to people that buy buzzwords. I wondered how a fiber networking solution could even take advantage of software.

My chat with M. H. Raza started out with a prop. He showed me one of the new Multifiber Push On (MPO) connectors that represent the new wave of high-density fiber. Each cable, which is roughly the size and shape of a SATA cable, contains 12 or 24 fiber connections. These are very small and pre-configured in a standardized connector. This connector can plug into a server network card and provide several light paths to a server. This connector and the fibers it terminates are the building block for Fiber Mountain’s solution.

With so many fibers running to a server, Fiber Mountain can use their software intelligence to start doing interesting things. They can begin to build dedicated traffic lanes for applications and other traffic by isolating that traffic onto fibers already terminated on a server. The connectivity already exists on the server; Fiber Mountain just takes advantage of it. It feels very similar to the way we add additional gigabit network ports when we need to expand things like vKernel ports or dedicated traffic lanes for other data.

Quilting Circle

Where this solution starts looking more like a fabric is what happens when you put Fiber Mountain Optical Exchange devices in the middle. These switching devices act like aggregation ports in the “spine” of the network. They can aggregate fibers from top-of-rack switches or from individual servers. These exchanges tag each incoming fiber and add them to the Alpine Orchestration System (AOS), which keeps track of the connections just like the interconnections in a fabric.

Once AOS knows about all the connections in the system, you can use it to start building pathways between east-west traffic flows. You can ensure that traffic between a web server and a backend database has dedicated connectivity. You can add additional resources between systems that are currently engaged in heavy processing. You can also dedicate traffic lanes to backup jobs. You can do quite a bit from the AOS console.

Now you have a layer 1 switching fabric without any additional pieces in the middle. The exchanges function almost like a passthrough device. The brains of the system exist in AOS. Remember when Ivan Pepelnjak (@IOSHints) spent all his time pulling QFabric apart to find out what made it tick? The Fiber Mountain solution doesn't use BGP or MPLS or any other magic protocol sauce. It runs at layer 1. The light paths are programmed by AOS and the packets are switched across the dense fiber connections. It's almost elegant in its simplicity.

Future Illumination

The Fiber Mountain solution has some great promise. Today, most of the operations of the system require manual intervention. You must build out the light paths between servers based on educated guesses. You must manually add additional light paths when extra bandwidth is needed.

Where they can really improve the offering in the future is adding intelligence to AOS to make those decisions automatically, based on predefined thresholds and inputs. If the system can detect big "elephant" traffic flows and automatically provision more bandwidth, or isolate these high-volume packet generators, it will go a long way toward making things much easier on network admins. It would also be great to feed that "top talker" data into other systems to alert network admins when traffic flows get high and need additional resources.


Tom’s Take

I like the Fiber Mountain solution. They’ve built a layer 1 fabric that performs similarly to the ones from Juniper and Brocade. They are taking full advantage of the resources provided by the MPO fiber connectors. By adding a new network card to a server, you can test this system without impacting other traffic flows. Fiber Mountain even told me that they are looking at trial installations for customers to bring their technology in at lower costs as a project to show the value to decision makers.

Fiber Mountain has a great start on building a low latency fiber fabric with intelligence. I'll be keeping a close eye on where the technology goes in the future to see how it integrates into the entire network and brings SDN features we all need in our networks.

 


by networkingnerd at May 11, 2015 09:02 PM

Internetwork Expert Blog

OSPF Path Selection Challenge

Edit: Thanks for playing! You can find the official answer and explanation here.

I had an interesting question come across my desk today which involved a very common area of confusion in OSPF routing logic, and now I’m posing this question to you as a challenge!

The first person to answer correctly will get free attendance to our upcoming CCIE Routing & Switching Lab Cram Session, which runs the week of June 1st 2015, as well as a free copy of the class in download format after it is complete.  The question is as follows:

Given the below topology, where R4 mutually redistributes between EIGRP and OSPF, which path(s) will R1 choose to reach the network 5.5.5.5/32, and why?

Bonus Questions:

  • What will R2's path selection to 5.5.5.5/32 be, and why?
  • What will R3's path selection to 5.5.5.5/32 be, and why?
  • Assume R3's link to R1 is lost.  Does this affect R1's path selection to 5.5.5.5/32? If so, how?

Tomorrow I'll post topology and config files for CSR1000v, VIRL, GNS3, etc., so you can try this out yourself. But first, answer the question without seeing the result and see if your expected result matches the actual one!

 

Good luck everyone!

by Brian McGahan, CCIE #8593, CCDE #2013::13 at May 11, 2015 06:53 PM
