She was kind enough to invite me as the keynote speaker (it’s pretty easy to guess what I’ll talk about) and will be around for the whole day to discuss data center design challenges, SDN, network programmability, or whatever else you’d like to talk about.
I hope to see you in real life in Bern (but you do have to register first).
A reader sent me this question:
My company will have 10GE dark fiber across our DCs with possibly OTV as the DCI. The VM team has also expressed interest in DC-to-DC vMotion (<4ms). Based on your blogs it looks like overall you don't recommend long-distance vMotion across DCI. Will the "Data Center trilogy" package be the right fit to help me better understand why?
Unfortunately, long-distance vMotion seems to be a persistent craze that peaks with a predictable period of approximately 12 months, and while it seems nothing can inoculate your peers against it, having technical arguments at hand might help.
Notes on the CheckPoint firewall clustering solution based on a review of the documentation in August 2014.
The post Tech Notes: CheckPoint Firewall Cluster XL in 2014 appeared first on EtherealMind.
My good friend Tiziano complained about the fact that BGP considers next hop unreachable if there’s an entry in the IP routing table even though the router cannot even ping the next hop.
That behavior is one of the fundamental aspects of IP networks: networks built with IP routing protocols rely on fate sharing between control and data planes instead of path liveness checks.
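To make the fate-sharing point concrete, here's a minimal Python sketch (function and variable names are mine, not from any real BGP implementation) of how a typical BGP stack validates a next hop: a longest-prefix-match lookup in the routing table, not a liveness probe.

```python
import ipaddress

def next_hop_usable(rib, next_hop):
    """BGP-style next-hop validation: the next hop counts as
    reachable if ANY routing-table entry covers it -- no ping,
    no probe, just a route lookup."""
    nh = ipaddress.ip_address(next_hop)
    return any(nh in ipaddress.ip_network(prefix) for prefix in rib)

# A RIB containing a route toward 10.0.0.0/24 ...
rib = ["10.0.0.0/24", "192.168.1.0/24"]

# ... makes 10.0.0.1 "reachable" even if the device at that address
# is powered off: the control plane shares fate with the routing
# table, not with the actual data path.
print(next_hop_usable(rib, "10.0.0.1"))    # True
print(next_hop_usable(rib, "172.16.0.1"))  # False
```

That's exactly why Tiziano's router happily kept the path: the routing table said the next hop was covered, and nobody ever asked whether it would answer a ping.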
“On Earth Day in 1990, New York City’s Transportation Commissioner decided to close 42nd Street, which as every New Yorker knows is always congested. “Many predicted it would be doomsday,” said the Commissioner, Lucius J. Riccio. “You didn’t need to be a rocket scientist or have a sophisticated computer queuing model to […]
He has more than 10 years of experience in IT and has worked on many network design and deployment projects.
In addition, Orhan is a:
Blogger at Network Computing.
Blogger and podcaster at Packet Pushers.
Manager of Google CCDE Group.
On Twitter @OrhanErgunCCDE
After a week of testing, I decided to move the main ipSpace.net web site (www.ipspace.net) as well as some of the resource servicing hostnames to CloudFlare CDN. Everything should work fine, but if you experience any problems with my web site, please let me know ASAP.
2014-08-27: Had to turn off CloudFlare (and thus IPv6). They don't seem to support HTTP range requests, which makes video startup times unacceptable. I'll have to move all video URLs (where HTTP range requests are expected from streaming clients) to a different host name, which will take time.
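For readers wondering why range requests matter for video startup: a player seeking into a file asks the server for just the byte range it needs instead of downloading from byte zero. Here's a minimal single-range sketch in Python (hypothetical helper, not CloudFlare's or anyone's actual code) of what an origin that honors the `Range` header does:

```python
import re

def serve_range(payload: bytes, range_header: str):
    """Minimal single-range handler: given 'bytes=START-END',
    return (status, body, content_range). A CDN that ignores
    the header returns 200 with the whole payload instead,
    forcing the player to download everything up front."""
    m = re.fullmatch(r"bytes=(\d+)-(\d*)", range_header)
    if not m:
        # No parseable range: fall back to a full 200 response.
        return 200, payload, None
    start = int(m.group(1))
    end = int(m.group(2)) if m.group(2) else len(payload) - 1
    body = payload[start:end + 1]
    content_range = f"bytes {start}-{start + len(body) - 1}/{len(payload)}"
    return 206, body, content_range

# A player seeking into a video asks only for the bytes it needs:
status, body, cr = serve_range(b"0123456789", "bytes=2-5")
print(status, body, cr)   # 206 b'2345' bytes 2-5/10
```

When the CDN strips or ignores that header, every seek degenerates into the 200-with-full-payload case, which is exactly the unacceptable startup time described above.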
Collateral benefit: ipSpace.net is now fully accessible over IPv6 – register for the Enterprise IPv6 101 webinar if you think that doesn’t matter ;)
I first met Elisa Jasinska when she had one of the coolest job titles I ever saw: Senior Packet Herder. Her current job title is almost as cool: Senior Network Toolsmith @ Netflix – obviously an ideal guest for the Software Gone Wild podcast.
One of the confusing aspects of Internet operation is the difference between the types of providers and the types of peering. There are three primary types of peering, and three primary types of services service providers actually provide. The figure below illustrates the three different kinds of peering. One provider can agree to provide transit […]
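The distinction between peering and transit matters because it constrains which routes a provider will advertise to which neighbor. One common way to model this is the "valley-free" export rule, sketched below in Python (an illustration with made-up names, not any router's actual policy language): routes learned from customers are advertised to everyone, while routes learned from peers or transit providers go to customers only.

```python
def exported_routes(learned_from, neighbor_type):
    """Valley-free export rule:
      - customer routes go to customers, peers, and providers
        (you're paid to carry them everywhere);
      - routes learned from a peer or a provider go to customers
        only (nobody gives away free transit)."""
    if learned_from == "customer":
        return True
    return neighbor_type == "customer"

# A provider advertises a customer's prefix to its peer ...
print(exported_routes("customer", "peer"))   # True
# ... but never re-advertises one peer's routes to another peer:
print(exported_routes("peer", "peer"))       # False
```

In other words, a settlement-free peer gives you reachability to its customer cone only; if you want the rest of the Internet, you buy transit.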
Building a private cloud infrastructure tends to be a cumbersome process: even if you do it right, you often have to deal with four to six different components: orchestration system, hypervisors, servers, storage arrays, networking infrastructure, and network services appliances.
Last week in Chicago, at the annual SIGCOMM flagship research conference on networking, Arbor collaborators presented some exciting developments in the ongoing story of IPv6 roll out. This joint work (full paper here) between Arbor Networks, the University of Michigan, the International Computer Science Institute, Verisign Labs, and the University of Illinois highlighted how both the pace and nature of IPv6 adoption have shifted dramatically in just the last couple of years. This study is a thorough, well-researched, effective analysis and discussion of numerous published and previously unpublished measurements focused on the state of IPv6 deployment.
The study examined a decade of data reporting twelve measures drawn from ten global-scale Internet datasets, including several years of Arbor data that represents a third to a half of all interdomain traffic. This constitutes one of the longest and broadest published measurements of IPv6 adoption to date. Using this long and wide perspective, the University of Michigan, Arbor Networks, and their collaborators found that IPv6 adoption, relative to IPv4, varies by two orders of magnitude (100x!) depending on the measure one looks at and, because of this, care must really be taken when looking at individual measurements of IPv6. For example, examining only the fraction of IPv6 to IPv4 traffic, which is still just shy of 1%, is misleading, since virtually all other indicators show that IPv6 is much more ready for use and able to grow very quickly.
In the study, differences in IPv6 deployment across global regions were also apparent. This suggests that both the incentives and obstacles to adopt the new protocol vary in different parts of the world.
Most surprisingly, the team found that over the last three years the nature of IPv6 use, in terms of traffic, content, reliance on transition technology, and performance, has shifted dramatically from prior findings, showing a maturing of the protocol into production mode. For instance, Arbor data shows that the increase in IPv6 traffic relative to IPv4 over each of 2012 and 2013 has been phenomenal, growing more than 400% in each year — a more than quintupling. Arbor data also helped show that *how* people are using IPv6 has likewise evolved immensely, to the point where IPv6 is now largely used natively and mostly for content, neither of which was the case just three years ago.
Interestingly, this study offers a thought-provoking rationale for the high incidence of NNTP and rsync in the IPv6 application mix. Based on the data, the high volume of NNTP and rsync traffic is likely due in part to synchronization of NNTP and software distribution data between a relatively small number of IPv6-enabled servers residing within the research and education communities. The significant increase of HTTP and HTTPS traffic in the IPv6 application mix could correlate with a much broader increase of IPv6-connected end-user computers accessing IPv6-enabled web servers.
These changes in adoption rate and the nature of IPv6 use come on the heels of several important IPv4 exhaustion milestones (such as the IANA address depletion event), which began in 2011. Thus, the team believes that this new phase of IPv6 rollout might have been spurred, in part, by a growing shortage of IPv4 addressing.
The study’s conclusions regarding the prevalence of untunneled native IPv6 traffic in today’s Internet are significant in that they imply a level of infrastructure readiness for IPv6. Transition technologies played an important “early adopter” role in the evolution of IPv6 technology and it now appears that IPv6 deployment has entered a stage where Internet infrastructures can support native IPv6 traffic.
In closing, the team noted that, together, IPv6's very fast recent growth and how its use has shifted signal a true quantum leap. Twenty years after it was standardized, it looks like IPv6 is finally becoming real.
For the full presentation shared at SIGCOMM, click here or on the image below to download.
Many thanks to Jakub Czyz, Scott Iekel-Johnson, Bill Cerveny and Roland Dobbins for assistance with this post!
The Moscone Center in San Francisco is a popular place for technical events. Apple’s World Wide Developer Conference (WWDC) is an annual user of the space. Cisco Live and VMworld also come back every few years to keep the location lively. This year, both conferences utilized Moscone to showcase tech advances and foster community discussion. Having attended both this year in San Francisco, I think I can finally state the following with certainty.
It’s time for tech conferences to stop using the Moscone Center.
Let’s face it. If your conference has more than 10,000 attendees, you have outgrown Moscone. WWDC works in Moscone because they cap the number of attendees at 5,000. VMworld 2014 has 22,000 attendees. Cisco Live 2014 had well over 20,000 as well. Cramming four times the number of delegates into a cramped Moscone Center does not foster the kind of environment you want at your flagship conference.
The main keynote hall in Moscone North is too small to hold the large number of audience members. In an age where every keynote address is streamed live, that shouldn’t be a problem. Except that people still want to be involved and close to the event. At both Cisco Live and VMworld, the keynote room filled up quickly and staff were directing the overflow to community spaces that were already packed too full. Being stuffed into a crowded room with no seating or table space is frustrating. But those are just the challenges of Moscone. There are others as well.
I Left My Wallet In San Francisco
San Francisco isn’t cheap. It is one of the most expensive places in the country to live. By holding your conference in downtown San Francisco, you are forcing your 20,000+ attendees into a crowded metropolitan area with expensive hotels. Every time I looked up a hotel room in the vicinity of VMworld or Cisco Live, I was unable to find anything for less than $300 per night. Contrast that with Interop or Cisco Live in Las Vegas, where sub-$100 rooms are available and $200 per night gets you into the hotel attached to the conference center.
Las Vegas is built for conferences. It has adequate inexpensive hotel options. It is designed to handle a large number of travelers arriving at once. While spread out geographically, it is easy to navigate. In fact, except for the lack of Uber, Las Vegas is easier to get around in than San Francisco. I never have a problem finding a restaurant in Vegas to take a large party. Bringing a group of 5 or 6 to a restaurant in San Francisco all but guarantees you won’t find a seat for hours.
The only real reason I can see for holding conferences at Moscone, aside from historical value, is the ease of getting materials and people into San Francisco. Cisco and VMware both are in Silicon Valley. Driving up to San Francisco is much easier than shipping the conference equipment to Las Vegas or Orlando. But ease-of-transport does not make it easy on your attendees. Add in the fact that the lower cost of setup is not reflected in additional services or reduced hotel rates and you can imagine that attendees have no real incentive to come to Moscone.
The Moscone Center is like the Cotton Bowl in Dallas. While both have a history of producing wonderful events, both have passed their prime. They are ill-suited for modern events. They are cramped and crowded. They are in unfavorable areas. It is quickly becoming more difficult to hold events for these reasons. But unlike the Cotton Bowl, which has almost 100 years of history, Moscone offers no real reason to stay. Apple will always be here. Every new iPhone, Mac, and iPad will be launched here. But those 5,000 attendees are comfortable in one section of Moscone. Subjecting your VMworld and Cisco Live users to these kinds of conditions is unacceptable.
It’s time for Cisco, VMware, and other large organizations to move away from Moscone. It’s time to recognize that Moscone is not big enough for an event that tries to stuff in every user it can. Instead, conferences should be located where it makes sense. Las Vegas, San Diego, and Orlando are conference towns. Let’s use them as they were meant to be used. Let’s stop the madness of trying to shoehorn 20,000 important attendees into the sardine can of the Moscone Center.
VMware announced the vCloud Hosted Services a while back and it was mostly known as vCheese for short. This week it was rebranded as "vCloud Air Network" and that is too much of a mouthful to keep saying as well. Don't these marketing people live in the real world? Let me share my suggestion...
A few days ago I had an interesting interview with Christoph Jaggi discussing the challenges, changes in mindsets and processes, and other “minor details” one must undertake to gain something from the SDDC concepts. The German version of the interview is published on Inside-IT.ch; you’ll find the English version below.
Nexus 1000V release 5.2(1)SV3(1.1) was published on August 22nd (I’m positive that has nothing to do with VMworld starting tomorrow) and I found this gem in the release notes:
Enabling BPDU guard causes the Cisco Nexus 1000V to detect these spurious BPDUs and shut down the virtual machine adapters (the origination BPDUs), thereby avoiding loops.
A while ago I explained why OpenFlow might be the wrong tool for some jobs, and why a centralized control plane might not make sense, and quickly got misquoted as saying “controllers don’t scale”. Nothing could be further from the truth: properly architected controller-based architectures can reach enormous scale, and Amazon VPC is the best possible example.