I first met Elisa Jasinska when she had one of the coolest job titles I ever saw: Senior Packet Herder. Her current job title is almost as cool: Senior Network Toolsmith @ Netflix – obviously an ideal guest for the Software Gone Wild podcast.
One of the confusing aspects of Internet operation is the difference between the types of providers and the types of peering. There are three primary types of peering, and three primary types of services service providers actually provide. The figure below illustrates the three different kinds of peering. One provider can agree to provide transit […]
Building a private cloud infrastructure tends to be a cumbersome process: even if you do it right, you often have to deal with four to six different components: orchestration system, hypervisors, servers, storage arrays, networking infrastructure, and network services appliances.
Last week in Chicago, at the annual SIGCOMM flagship research conference on networking, Arbor collaborators presented some exciting developments in the ongoing story of IPv6 rollout. This joint work (full paper here) between Arbor Networks, the University of Michigan, the International Computer Science Institute, Verisign Labs, and the University of Illinois highlighted how both the pace and nature of IPv6 adoption have made a pretty dramatic shift in just the last couple of years. This study is a thorough, well-researched, effective analysis and discussion of numerous published and previously unpublished measurements focused on the state of IPv6 deployment.
The study examined a decade of data reporting twelve measures drawn from ten global-scale Internet datasets, including several years of Arbor data that represents a third to a half of all interdomain traffic. This constitutes one of the longest and broadest published measurements of IPv6 adoption to date. Using this long and wide perspective, the University of Michigan, Arbor Networks, and their collaborators found that IPv6 adoption, relative to IPv4, varies by two orders of magnitude (100x!) depending on the measure one looks at and, because of this, care must really be taken when looking at individual measurements of IPv6. For example, examining only the fraction of IPv6 to IPv4 traffic, which is still just shy of 1%, is misleading, since virtually all other indicators show that IPv6 is much more ready for use and able to grow very quickly.
In the study, differences in IPv6 deployment across global regions were also apparent. This suggests that both the incentives and obstacles to adopt the new protocol vary in different parts of the world.
Most surprisingly, the team found that over the last three years the nature of IPv6 use, in terms of traffic, content, reliance on transition technology, and performance, has shifted dramatically from prior findings, showing a maturing of the protocol into production mode. For instance, Arbor data shows that the increase in IPv6 traffic relative to IPv4 over each of 2012 and 2013 has been phenomenal, growing more than 400% in each year — a more than quintupling. Arbor data also helped show that *how* people are using IPv6 has likewise evolved immensely, to the point where IPv6 is now largely used natively and mostly for content, neither of which was the case just three years ago.
Interestingly, this study offers a thought-provoking rationale for the high incidence of NNTP and rsync in the IPv6 application mix. Based on the data, the high volumes of NNTP and rsync are likely partially due to synchronization of NNTP and software distribution data between a relatively small number of IPv6-enabled servers that resided within the research and education communities. The significant increase of HTTP and HTTPS traffic in the IPv6 application mix could correlate with a much broader increase of IPv6-connected end-user computers accessing IPv6-enabled web servers.
These changes in adoption rate and the nature of IPv6 use come on the heels of several important IPv4 exhaustion milestones (such as the IANA address depletion event), which began in 2011. Thus, the team believes that this new phase of IPv6 rollout might have been spurred, in part, by a growing shortage of IPv4 addressing.
The study’s conclusions regarding the prevalence of untunneled native IPv6 traffic in today’s Internet are significant in that they imply a level of infrastructure readiness for IPv6. Transition technologies played an important “early adopter” role in the evolution of IPv6 technology and it now appears that IPv6 deployment has entered a stage where Internet infrastructures can support native IPv6 traffic.
In closing, the team noted that, together, IPv6's very fast recent growth and how its use has shifted signal a true quantum leap. Twenty years after it was standardized, it looks like IPv6 is finally becoming real.
For the full presentation shared at SIGCOMM, click here or on the image below to download.
Many thanks to Jakub Czyz, Scott Iekel-Johnson, Bill Cerveny and Roland Dobbins for assistance with this post!
The Moscone Center in San Francisco is a popular place for technical events. Apple’s World Wide Developer Conference (WWDC) is an annual user of the space. Cisco Live and VMworld also come back every few years to keep the location lively. This year, both conferences utilized Moscone to showcase tech advances and foster community discussion. Having attended both this year in San Francisco, I think I can finally state the following with certainty.
It’s time for tech conferences to stop using the Moscone Center.
Let’s face it. If your conference has more than 10,000 attendees, you have outgrown Moscone. WWDC works in Moscone because they cap the number of attendees at 5,000. VMworld 2014 has 22,000 attendees. Cisco Live 2014 had well over 20,000 as well. Cramming four times the number of delegates into a cramped Moscone Center does not foster the kind of environment you want at your flagship conference.
The main keynote hall in Moscone North is too small to hold the large number of audience members. In an age where every keynote address is streamed live, that shouldn’t be a problem. Except that people still want to be involved and close to the event. At both Cisco Live and VMworld, the keynote room filled up quickly and staff were directing the overflow to community spaces that were already packed too full. Being stuffed into a crowded room with no seating or table space is frustrating. But those are just the challenges of Moscone. There are others as well.
I Left My Wallet In San Francisco
San Francisco isn’t cheap. It is one of the most expensive places in the country to live. By holding your conference in downtown San Francisco, you are forcing your 20,000+ attendees into a crowded metropolitan area with expensive hotels. Every time I looked up a hotel room in the vicinity of VMworld or Cisco Live, I was unable to find anything for less than $300 per night. Contrast that with Interop or Cisco Live in Las Vegas, where sub-$100 rooms are available and $200 per night gets you into the hotel attached to the conference center.
Las Vegas is built for conferences. It has adequate inexpensive hotel options. It is designed to handle a large number of travelers arriving at once. While spread out geographically, it is easy to navigate. In fact, except for the lack of Uber, Las Vegas is easier to get around in than San Francisco. I never have a problem finding a restaurant in Vegas to take a large party. Bringing a group of 5 or 6 to a restaurant in San Francisco all but guarantees you won’t find a seat for hours.
The only real reason I can see for holding conferences at Moscone, aside from historical value, is the ease of getting materials and people into San Francisco. Cisco and VMware both are in Silicon Valley. Driving up to San Francisco is much easier than shipping the conference equipment to Las Vegas or Orlando. But ease-of-transport does not make it easy on your attendees. Add in the fact that the lower cost of setup is not reflected in additional services or reduced hotel rates and you can imagine that attendees have no real incentive to come to Moscone.
The Moscone Center is like the Cotton Bowl in Dallas. While both have a history of producing wonderful events, both have passed their prime. They are ill-suited for modern events. They are cramped and crowded. They are in unfavorable areas. For these reasons, it is quickly becoming more difficult to hold events there. But unlike the Cotton Bowl, which has almost 100 years of history, Moscone offers no real reason to stay. Apple will always be here. Every new iPhone, Mac, and iPad will be launched here. But those 5,000 attendees are comfortable in one section of Moscone. Subjecting your VMworld and Cisco Live users to these kinds of conditions is unacceptable.
It’s time for Cisco, VMware, and other large organizations to move away from Moscone. It’s time to recognize that Moscone is not big enough for an event that tries to stuff in every user it can. Instead, conferences should be located where it makes sense. Las Vegas, San Diego, and Orlando are conference towns. Let’s use them as they were meant to be used. Let’s stop the madness of trying to shoehorn 20,000 important attendees into the sardine can of the Moscone Center.
VMware announced the vCloud Hosted Services a while back and it was mostly known as vCheese for short. This week it was rebranded as "vCloud Air Network" and that is too much of a mouthful to keep saying as well. Don't these marketing people live in the real world? Let me share my suggestion…
A few days ago I had an interesting interview with Christoph Jaggi discussing the challenges, changes in mindsets and processes, and other “minor details” one must undertake to gain something from the SDDC concepts. The German version of the interview is published on Inside-IT.ch; you’ll find the English version below.
Nexus 1000V release 5.2(1)SV3(1.1) was published on August 22nd (I’m positive that has nothing to do with VMworld starting tomorrow) and I found this gem in the release notes:
Enabling BPDU guard causes the Cisco Nexus 1000V to detect these spurious BPDUs and shut down the virtual machine adapters (the origination BPDUs), thereby avoiding loops.
After a week of testing, I decided to move the main ipSpace.net web site (www.ipspace.net) as well as some of the resource servicing hostnames to CloudFlare CDN. Everything should work fine, but if you experience any problems with my web site, please let me know ASAP.
Collateral benefit: ipSpace.net is now fully accessible over IPv6 – register for the Enterprise IPv6 101 webinar if you think that doesn’t matter ;)
A while ago I explained why OpenFlow might be a wrong tool for some jobs, and why centralized control plane might not make sense, and quickly got misquoted as saying “controllers don’t scale”. Nothing could be further from the truth: properly designed controller-based architectures can reach enormous scale – Amazon VPC is the best possible example.
Recertification brings many new and important security capabilities…
In Part 1 we discussed how to turn off ISATAP on Windows hosts—which is a great idea. Turning off unnecessary components of your network simplifies everything. But ISATAP can be useful in certain scenarios. For instance, if you want to test an application on IPv6, you clearly don’t want to turn on IPv6 everywhere and […]
I've been reading the Cisco Application Centric Infrastructure Design Guide. Sometimes I see a product of genius and wondrous use of technology, other times I'm like 'did they do it the hard way or what?'
Here’s an interesting story illustrating the potential pitfalls of multi-DC deployments and the impact of data gravity on application performance.
Long long time ago on a cloudy planet far far away, a multinational organization decided to centralize their IT operations and move all workloads into a central private cloud.
In a world ruled by OpenFlow you’d expect the OpenFlow controller to know all the traffic; in more traditional networks we use technologies like NetFlow, sFlow or IPFIX to report the traffic statistics – but regardless of the underlying mechanism, you need a tool that will collect the statistics, aggregate them in a way that makes them usable to the network operators, report them, and potentially act on the deviations.
Securing cloud data centers is an ongoing challenge. Your adversaries—cyber criminals, nation state attackers, hacktivists—continue to develop sophisticated, invasive techniques, resulting in a continually evolving threat landscape.
Because clouds are dynamic in nature, with new applications and services being spun up or taken down and virtual workloads being moved, security for the cloud should be dynamic also. That raises the question: are traditional firewalls that are focused on layer 3 and 4 inspection sufficient in today’s threat environment? Also, next-gen firewalls are powerful, yet not designed to protect from the velocity and variety of new attacks being created every day. In today’s world, shouldn’t firewalls be able to take immediate action based on known or emerging intelligence?
With the shift to cloud architectures, traditional firewall administration has become burdensome and fraught with human error due to the sheer complexity of distributed security. What’s needed is an effective network security solution that fights cyber criminals head-on and can adapt to emerging threats without exerting excessive load on the enforcement point.
What other fears or concerns about securing the cloud data center keep you up at night?
Stay tuned to my blog for ideas on how to address these challenges.
A visual representation of the company and, to a lesser extent, product history of the load balancing/application delivery field. My usual F5 bias is present but it seems justified considering their long-held market leading position. I’ve been itching to post this for a while but simply couldn’t stop changing the formatting. I can’t say I’m […]
He's worked in the IT industry for over 15 years in a variety of roles, predominantly in data centre environments. Working with switches and routers pretty much from the start he now also has a thirst for application delivery, SDN, virtualisation and related products and technologies. He's published a number of F5 Networks related books and is a regular contributor at DevCentral.
I’m still getting questions about layer-2 data center interconnect; it seems this particular bad idea isn’t going away any time soon. In the face of that sad reality, let’s revisit what I wrote about layer-2 DCI over VXLAN.
VXLAN hasn’t changed much since the time I explained why it’s not the right technology for long-distance VLANs.
I often hear vendors and pundits proclaim that Enterprise is resisting change. In particular, they say that individuals in Enterprises can't see the change or won't discuss buying new technology. I see these objections as a failure of the current system and much less a failure of the people.
The post Blame the System For Resisting Change – Not The People appeared first on EtherealMind.
There’s a new term floating around that seems to be confusing people left and right. It’s something that’s been used to describe a methodology as well as thrown around in marketing. People are using it and don’t even really know what it means. And this isn’t the first time that’s happened. Let’s look at the word “open” and why it has become so confusing.
For those at home who are familiar with Linux, “open” wasn’t the first term to come to mind. “Free” is another word that has been used in the past with a multitude of loaded meanings. The original idea around “free” in relation to the Open Source movement is that the software is freely available. There are no restrictions on use and the source is always available. The source code for the Linux kernel can be searched and viewed at any time.
Free describes the fact that the Linux kernel is available for no cost. That’s great for people that want to try it out. It’s not so great for companies that want to try and build a business around it, yet Red Hat has managed to do just that. How can they sell something that doesn’t cost anything? It’s because they keep the notion of free sharing of code alive while charging people for support and special packages that interface with popular non-free software.
The dichotomy between software that is unencumbered and software that has no cost is so confusing that the movement created a phrase to describe it:
Free as in freedom, not free as in beer.
When you talk about freedom, you are unrestricted. You can use the software as the basis for anything. You can rewrite it to your heart’s content. That’s your right for free software. When you talk about free beer, you set the expectation that whatever you create will be available at no charge. Many popular Linux distributions are available at no cost. That’s like getting beer for nothing.
Open, But Not Open Open
The word “open” is starting to take on aspects of the “free” argument. Originally, the meaning of open came from the Open Source community. Open Source means that you can see everything about the project. You can modify anything. You can submit code and improve something. Look at the OpenDaylight project as an example. You can sign up, download the source for a given module, and start creating working code. That’s what Brent Salisbury (@NetworkStatic) and Matt Oswalt (@Mierdin) are doing to great effect. They are creating the network of the future and allowing the community to do the same.
But “open” is being redefined by vendors. Open for some means “you can work with our software via an API, but you can’t see how everything works”. This is much like the binary-only NVIDIA driver: the proprietary code is pre-compiled and available to download for free, but you can’t modify the source at all. While it works with open source software, it’s not open.
A conversation I had during Wireless Field Day 7 drove home the idea of this new “open” in relation to software defined networking. Vendors tout open systems to their customers. They standardize on northbound interfaces that talk to orchestration platforms and have API support for other systems to call them. But the southbound interface is proprietary. That means that only their controller can talk to the network hardware attached to it. Many of these systems have “open” in the name somewhere, as if to project the idea that they work with any component makeup.
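The open-northbound/proprietary-southbound asymmetry can be made concrete with a toy model. Everything here is invented for illustration (there is no real `VendorAController` API); it simply shows how a documented northbound call can still fail you the moment a non-vendor switch sits below the controller:

```python
class VendorAController:
    """Toy controller: documented ("open") northbound, opaque southbound."""

    def __init__(self):
        self.switches = []

    def add_flow(self, match, action):
        # "Open" northbound API: anyone can call this, it's documented.
        for sw in self.switches:
            self._program(sw, match, action)
        return len(self.switches)

    def _program(self, switch, match, action):
        # Proprietary southbound: only $VendorA hardware is understood.
        if switch["vendor"] != "VendorA":
            raise RuntimeError(f"unsupported switch: {switch['name']}")
        switch.setdefault("flows", []).append((match, action))


ctrl = VendorAController()
ctrl.switches.append({"vendor": "VendorA", "name": "sw1"})
ctrl.switches.append({"vendor": "VendorB", "name": "sw2"})

try:
    ctrl.add_flow("dst=10.0.0.1", "fwd:port1")
except RuntimeError as err:
    # The $VendorB switch gets no "full functionality" from this controller.
    print(err)
```

The northbound call looks perfectly interoperable right up until the southbound side rejects the other vendor's gear, which is exactly the awkward conversation above.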
This new “open” definition of having proprietary components with an API interface feels very disingenuous. It also makes for some very awkward conversations:
$VendorA: Our system is open!
ME: Since this is an open system, I can connect my $VendorB switch and get full functionality from your controller, right?
$VendorA: What exactly do you mean by “full”?
Using “open” to market these systems is wrong. Telling customers that you are “open” because your other equipment can program things through a narrow API is wrong. But we don’t have a word to describe this new idea of “open”. It’s not exactly closed. Perhaps we can call it something else. Maybe “ajar”. That might make the marketing people a bit upset. “Try our new AjarNetworking controller. As open as we wanted to make it without closing everything.”
“Open” will probably be dominated by marketing in the next couple of years. Vendors will try to tell you how well they interoperate with everyone. And I will always remember how open protocols like SIP are and how everyone uses that openness against them. If we can’t keep the definition of “open” clean, we need to find a new term.
Last week the global routing table (as seen from some perspectives) supposedly exceeded 512K routes, and weird things started to happen to some people that are using old platforms that by default support 512K IPv4 routes in the switching hardware.
I’m still wondering whether the BGP table size was the root cause of the observed outages. Cisco’s documentation (at least this document) is pretty sloppy when it comes to the fact that usually 1K = 1024, not 1000 – I’d expect the hard limit to be @ 524,288 routes … but then maybe Cisco’s hardware works with decimal arithmetic.
There are design tools and attributes we should consider for every design: LAN, WAN, and the data center are all areas where these common design tools and attributes apply. Many of the principles in this article series might fit not only the network part of the design but also compute, virtualization, and storage technologies […]
He has more than 10 years in IT, and has worked on many network design and deployment projects.
In addition, Orhan is a:
Blogger at Network Computing.
Blogger and podcaster at Packet Pushers.
Manager of Google CCDE Group.
On Twitter @OrhanErgunCCDE
The post Common Design Tools and Attributes for Everyone Part-1 appeared first on Packet Pushers Podcast and was written by Orhan Ergun.
What can you do if you have a small team of networking engineers responsible for four ever-growing data centers (with several hundred network devices in each of them)? There’s only one answer: you try to survive by automating as much as you can.
In the fourth episode of Software Gone Wild podcast David Barosso from Spotify explains how they use network automation to cope with the ever-growing installed base without increasing the size of the networking team.
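As a toy illustration of why automation is the only way a small team survives that kind of scale, here is a minimal sketch of template-driven configuration generation. The inventory, device names, and addressing are all made up, and a real deployment would use a proper templating engine and per-device data sources rather than bare string formatting:

```python
# One template, many devices: the core idea behind config generation.
TEMPLATE = """hostname {name}
interface Loopback0
 ip address {loopback} 255.255.255.255
"""

# Hypothetical inventory: four data centers, 100 leaf switches each.
inventory = [
    {"name": f"dc{dc}-leaf{n}", "loopback": f"10.{dc}.0.{n}"}
    for dc in range(1, 5)
    for n in range(1, 101)
]

configs = {dev["name"]: TEMPLATE.format(**dev) for dev in inventory}

print(len(configs))                           # 400 configs from one template
print(configs["dc1-leaf1"].splitlines()[0])   # hostname dc1-leaf1
```

Four hundred device configurations fall out of one template and one inventory list; keeping those by hand across several hundred devices per data center is exactly the problem the podcast episode discusses.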