Distributed DoS mitigation is another one of the “we were doing SDN without knowing it” cases: remote-triggered black holes are used by most major ISPs, and BGP Flowspec has been available for years. Not surprisingly, people started using OpenFlow to implement the same concept (there’s even a proposal to integrate OpenFlow support into Bro IDS).
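To make the idea concrete, here is a minimal sketch of the same concept expressed as an OpenFlow rule, assuming a Ryu controller and OpenFlow 1.3 edge switches; the victim address 192.0.2.10 and the priority value are hypothetical placeholders, not taken from any of the products or proposals mentioned above.

```python
# Minimal sketch: install a drop rule for traffic destined to a victim host
# on every switch that connects to the controller (assumes Ryu + OpenFlow 1.3).
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

VICTIM_IP = '192.0.2.10'   # hypothetical attack target

class BlackholeApp(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def install_blackhole(self, ev):
        dp = ev.msg.datapath
        parser = dp.ofproto_parser
        # Match IPv4 packets addressed to the victim; an empty instruction
        # list means the switch silently discards them (the OpenFlow
        # equivalent of a remote-triggered black hole).
        match = parser.OFPMatch(eth_type=0x0800, ipv4_dst=VICTIM_IP)
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=[]))
```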
Note: My wife will be with me again this year, and she is trying to get a tour group going to look around the city while others are in sessions. If you want to be in on the tourist action, contact her via Twitter.
As per tradition (a new tradition, but a tradition nonetheless), here is my schedule for the week. Also as tradition, I’m bound to only do about 20% of what’s documented here. If you’ve ever been, you know what I mean. Here we go.
Saturday, May 17
13:00 - Arrive in SFO

Sunday, May 18
14:00 - Exam
16:00 or so - Tweetup

Monday, May 19
08:00 - BRKCRT-2001 - NX-OS, IOS, IOS-XR, Unique and Similar at the Same Time w/ Joseph Rinehart
10:00 - BRKCRT-2000 - HardCore IPv6 Routing - No Fear w/ Scott Morris, Donnie Moss
13:00 - BRKCCIE-3345 - The CCIE Candidate's Introduction to MPLS L3VPN Networks w/ Keith Barker
15:30 - GENKEY-2200 - Cisco Live Welcome Keynote w/ John Chambers
17:00 - GENREC-2000 - World of Solutions: Welcome Reception w/ everyone at the conference

Tuesday, May 20
08:00 - BRKRST-2509 - Mastering Data Center QoS w/ Lucien Avramov
10:00 - GENKEY-2300 - Cisco Live Technology Keynote: Infrastructure for The Agile Enterprise w/ Rob Lloyd
12:30 - BRKCCIE-3003 - DMVPN for Route & Switching CCIE Candidates w/ Johnny Bass
15:00 - BRKRST-2301 - Enterprise IPv6 Deployment w/ Tim Martin
17:00 - GENREC-2100 - World of Solutions: Reception w/ everyone at the conference

Wednesday, May 21
08:00 - BRKRST-3371 - Advances in BGP w/ Gunter Van de Velde
10:00 - GENKEY-2400 - Cisco Live Partner Keynote: The Internet of Everything Ecosystem – Bringing IT and OT Together with the Internet of Things w/ IoE bingo
12:00 - MTE-5872 - Meet the Engineer w/ David Prall
13:30 - BRKCRS-3036 - Advanced Enterprise Campus Design: Routed Access w/ Tyler Creek
16:00 - BRKDCT-3346 - Advanced - End-to-End QoS Implementation and Operation with Cisco Nexus w/ Lukas Krattiger
19:00 or so - Customer Appreciation Event!!!!!!!!!!!!!!!!!

Thursday, May 22
10:30 - GENKEY-2500 - Cisco Live Celebrity Keynote: Reinventing Education —The One World School House w/ Sal Khan
12:30 - BRKRST-2667 - How to write an IPv6 Addressing Plan w/ Veronika McKillop
14:30 - BRKNMS-1035 - Cisco Live Network and NOC: Panel Discussion moderated by Jimmy Ray Purser w/ Joe Clarke, Per Hagen, and Jimmy Ray Purser

Friday, May 23
Early early - Travel home
Direct any Alcatraz boat tour ticket questions to me.
See you guys in a month!
Is it the Internet Armageddon? NO! Thanks to an Emergency Signature Release, IDP saves the day!
I’ve been thinking about this question quite a bit over the last year, and interestingly, a debate over just this issue has recently erupted in the blogosphere (and elsewhere). Vidya Narayanan, who reignited the discussion with her blog “Why I Quit Writing Internet Standards”, calls for a “radical restructuring” of the IETF, IEEE and what […]
Mr. Meyer is also currently Director of the Advanced Network Technology Center at the University of Oregon, where he is a Senior Research Scientist in the Department of Computer Science. One of his major projects at the University of Oregon is routeviews (see www.routeviews.org).
Prior to joining Cisco, he served as Senior Scientist, Chief Technologist and Director of IP Technology Development at Sprint.
The post What is the value proposition of Standards in the age of Open Source? appeared first on Packet Pushers Podcast.
As many of you hopefully already know, the CCIE Routing & Switching certification blueprint is changing from version 4 to version 5 on June 3rd 2014. As this date quickly approaches, and as the last of the v4 lab seats are fully booked, it’s time to start planning your attack on the RSv5 blueprint.
While Cisco’s official blueprint for v5 is now more detailed than it has ever been in the past, it still lacks some details in certain areas, for example “Implement, optimize and troubleshoot filtering with any routing protocol.” Additionally, it would be difficult to use Cisco’s blueprint for a study plan as it stands in its current linear format. For example, “Layer 3 multicast” is listed before “Fundamental routing concepts”, which from a learning perspective doesn’t make sense, because you must understand unicast routing fully before you learn multicast routing. To help remedy this, we’ve re-ordered and expanded Cisco’s blueprint into INE’s RSv5 Expanded Blueprint, which you can find below after the jump.
Our CCIE RSv5 Expanded Blueprint is meant to be used as a checklist that you can use as you go through your preparation. This way when you’re finally ready to attempt the lab exam, you can be assured that you’ve at least heard of all the topics in the scope, regardless of how obscure some of them might be. Additionally note that some topics listed below might appear only on the written exam and not the lab exam, such as MPLS Layer 2 VPNs or RIPng, but are still included in our content and the outline below.
The below outline will continue to be updated, so check back periodically during your preparation to see changes, adds, and removes. Good luck in your studies!
INE’s CCIE RSv5 Expanded Blueprint
Dmitri Kalintsev published links to VMware NSX resources and VMware NSX for vSphere online documentation a while ago. It’s great to have product documentation and design guides (although you still can’t download NSX from the VMware download center), but the cynical half of my brain couldn’t help noticing all the promises made in the “Network Gateway Services” section of the resources page. Read more ...
No matter how hard the clouderati click the heels of their brogues together and repeat “public cloud is better”, the simple fact is that most companies have large amounts of IT infrastructure that works just fine and is profitable. To make matters worse, the cost of transformation exceeds the potential financial return while creating […]
During Interop 2014 I got involved in numerous interesting conversations revolving around SDN and new operations models (including the heretic idea of bundling appliances with application stacks and making developers responsible for network services).
During one of those discussions someone said “I think I get the ideas behind DevOps, but I don’t think we should configure our network devices with Puppet or Chef” to which I replied “Puppet or Chef are just tools, DevOps is a lifestyle.” Read more ...
A few days ago, millions of servers around the world were impacted by Heartbleed, a security vulnerability in OpenSSL. This was arguably one of the hottest topics on the Internet. Organizations scrambled to put a fix in place and update builds. At Juniper, several product teams worked around the clock to ensure that customers got updates with the highest priority. A short while ago, Junos Pulse Connect Secure (VPN) and Policy Secure (UAC) released patches that fix the vulnerability for Juniper’s mobility offering.
In a recent article, writer Adam Clark Estes shared that, “Over the next three years, the U.S. Army will be filling its brand new cyber warfare institute at West Point with the best and brightest hackers it can find.” This approach aligns well with the sentiments recently expressed by Nawaf Bitar at the RSA Conference in San Francisco in his keynote, “The Next World War Will Be Fought in Silicon Valley.”
It isn’t sufficient for nations to protect only their physical borders. They must protect more. They must protect critical data and infrastructures, including financial systems, wireless communications, electric power grids, oil and natural gas systems, and others, from cybercrime.
Here are three actions to consider as part of embarking on this important endeavor:
1. Evaluate and deploy state-of-the-art, best-of-breed security and intelligence systems to protect critical infrastructure, especially given the proliferation of “smart” gadgets and the rise of machine-to-machine communications among residents and citizens both within and outside the nation’s borders, all of which are vulnerable to cyber-attack.
2. Selectively hire white hat hackers who can seek out vulnerabilities in the network.
3. Recruit experienced IT security specialists who will oversee and manage the deployed security systems as well as take rapid action on detected vulnerabilities and remediate post-breach.
While taking cybercrime as seriously as physical warfare is a worthy initiative, building a comprehensive and strong plan and an “army” that will adeptly fight black hat hackers and “beat them at their own game” is no small feat. As part of building its security intelligence arsenal, federal and law enforcement agencies may want to consider Juniper’s intrusion deception approach that helps stop threats and attackers before they can do harm.
Before embarking on the battle against cybercrime, ensure you have the right plan, people and protection to maximize your chances of success against the enemy!
Frame Relay was how we taught multipoint networking to upcoming engineers, and it was recently dropped from the curriculum. Now multipoint is back in MPLS-TP.
The post Response: RFC 7167 – A Framework for Point-to-Multipoint appeared first on EtherealMind.
It's a constant and oft-repeated fallacy that software on x86 servers will never forward packets at speed. Here is Vyatta explaining why their software will be able to go past 100 million packets per second this year on standard COTS hardware.
The post Brocade Vyatta & Forwarding Performance on X86 Server appeared first on EtherealMind.
Vidya Narayanan, in a piece at Gigaom, said recently: So, why did I actually stop contributing to standards definitions? The primary one is the fact that while the pace at which standards are written hasn’t changed in many years, the pace at which the real world adopts software has become orders of magnitude faster. Standards, unfortunately, […]
Paul Unbehagen made an interesting claim when presenting the Avaya network built for the Sochi Olympics during a recent Tech Field Day event: “we didn’t need MPLS or BGP to implement L2- and L3VPN. It was all done with SPB and IS-IS.” Read more ...
Heartbleed captured quite a bit of news these past few days. A hole in the most secure of web services tends to make people a bit anxious. Racing to release patches and assess the damage consumed people for days. While I was a bit concerned about the possibilities of exposed private keys on any number of web servers, the overriding thought in my mind was instead about the speed at which we went from “totally secure” to “Swiss cheese security” almost overnight.
Jumping The Gun
As it turns out, the release of the information about Heartbleed was a bit sudden. The rumor is that the people that discovered the bug were racing to release the information as soon as the OpenSSL patch was ready because they were afraid that the information had already leaked out to the wider exploiting community. I was a bit surprised when I learned this little bit of information.
Exploit disclosure has gone through many phases in recent years. In days past, the procedure was to report the exploit to the vendor responsible. The vendor in question would create a patch for the issue and prepare their support organization to answer questions about the patch. The vendor would then release the patch with a report on the impact. Users would read the report and patch the systems.
Today, researchers that aren’t willing to wait for vendors to patch systems instead perform an end-run around the system. Rather than waiting to let the vendors release the patches on their cycle, they release the information about the exploit as soon as they can. The nice ones give the vendors a chance to fix things. The less savory folks want the press of discovering a new hole and project the news far and wide at every opportunity.
Shame Is The Game
Part of the reason to release exploit information quickly is to shame the vendor into quickly releasing patch information. Researchers taking this route are fed up with the slow quality assurance (QA) cycle inside vendor shops. Instead, they short circuit the system by putting the issue out in the open and driving the vendor to release a fix immediately.
While this approach does have its place among vendors that move at a glacial patching pace, one must wonder how much good is really being accomplished. Patching systems isn’t a quick fix. If you’ve ever been forced to turn out work under a crucial deadline while people were shouting at you, you know the kind of work that gets put out. Vendor patches must be tested against released code and there can be no issues that would cause existing functionality to break. Imagine the backlash if a fix to the SSL module caused the web service to fail on startup. The fix would be worse than the problem.
Rather than rushing to release news of an exploit, researchers need to understand the greater issues at play. Releasing an exploit for the sake of saving lives is understandable. Releasing it for the fame of being first isn’t as critical. Instead of trying to shame vendors into releasing a patch rapidly to plug some hole, why not work with them instead to identify the issue and push the patch through? Shaming vendors will only put pressure on them to release questionable code. It will also alienate the vendors from researchers doing things the right way.
Shaming is the rage now. We shame vendors, users, and even pets. Problems have taken a back seat to assigning blame. We try to force people to change or fix things by making a mockery of what they’ve messed up. It’s time to stop. Rather than pointing and laughing at those making the mistake, you should pick up a keyboard and help them fix it. Shaming doesn’t do anything other than upset people. Let’s put it to bed and make things better by working together instead of screaming “Ha ha!” when things go wrong.
One of my recent projects has been deploying an MPLS/VPN architecture across a pair of smallish datacenters comprised entirely of Juniper gear. While I'm no stranger to MPLS/VPN, I am still a bit green to Junos, so it was a good learning exercise. My previous articles covering MPLS/VPN on Cisco IOS have been fairly popular, so I figured it would be worthwhile to cover a similar implementation in the Juniper world.
For our datacenters, we decided to implement a simple spine and leaf topology with a pair of core routers functioning as IBGP route reflectors and a pair of layer three ToR switches in each server rack. The spine is comprised of four layer three switches which run only MPLS and OSPF; they do not participate in BGP.
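To give a flavor of what provisioning a customer VRF on one of those leaf switches might look like, here is a minimal Junos PyEZ sketch; the device name, interface, and RD/RT values are hypothetical placeholders, not the actual datacenter configuration.

```python
# Hedged sketch: stage and commit a customer VRF on a hypothetical leaf
# switch using Junos PyEZ. All values are placeholders, not production config.
from jnpr.junos import Device
from jnpr.junos.utils.config import Config

VRF_CONFIG = """
set routing-instances CUST-A instance-type vrf
set routing-instances CUST-A interface ge-0/0/1.100
set routing-instances CUST-A route-distinguisher 65000:100
set routing-instances CUST-A vrf-target target:65000:100
"""

dev = Device(host='leaf1.dc1.example.net', user='lab')
dev.open()
cu = Config(dev)
cu.load(VRF_CONFIG, format='set')            # load into the candidate config
cu.commit(comment='add CUST-A VRF on leaf1')
dev.close()
```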
This article assumes some basic familiarity with MPLS/VPN, so if you're new to the game, consider reading through these previous articles for some background before continuing:
Steve Marquess, who manages the business side of the OpenSSL Foundation, talks about the shabby state of corporate support for open source development. I want to call out this paragraph first (although many others are more interesting), about the courage and discipline it takes to publish your work in the face of fear of public […]
The post Response: Speeds and Feeds › Of Money, Responsibility, and Pride appeared first on EtherealMind.
Marc Eisenbarth, Alison Goodrich, Roland Dobbins, Curt Wilson
A very serious vulnerability present in OpenSSL 1.0.1 for two years has been disclosed (CVE-2014-0160). This “Heartbleed” vulnerability allows an attacker to reveal up to 64 KB of memory to a connected client or server. This buffer over-read can be exploited in rapid succession to exfiltrate larger sections of memory, potentially exposing private keys, usernames and passwords, cookies, session tokens, email, or any other data that resides in the affected memory region. This flaw does not affect versions of OpenSSL prior to 1.0.1. This is an extremely serious situation, which highlights the manual nature of the tasks required to secure critical Internet services such as basic encryption and privacy protection.
As the vulnerability has been present for over two years, many modern operating systems and applications have deployed vulnerable versions of OpenSSL. OpenSSL is the default cryptographic library for Apache and nginx Web server applications, which together account for an estimated two-thirds of all Web servers. OpenSSL is also used in a variety of operating systems, including BSD variants such as FreeBSD, and Linux distributions such as Ubuntu, CentOS, Fedora and more. Other networking gear such as load-balancers, reverse proxies, VPN concentrators, and various types of embedded devices are also potentially vulnerable if they rely on OpenSSL, which many do. Additionally, since the vulnerability’s disclosure, several high-profile sites such as Yahoo Mail, LastPass, and the main FBI site have reportedly leaked information. Others have discussed the impact on underground economy crime forums, which were reportedly vulnerable and were attacked.
A key lesson is that OpenSSL, which is a vital component of the confidentiality and integrity of uncounted systems, applications and sites across the Internet, is an underfunded, volunteer-run project, which is desperately in need of major sponsorship and attendant allocation of resources.
Anyone running OpenSSL on a server should upgrade to version 1.0.1g. If an immediate upgrade is not possible, re-compiling OpenSSL with the OPENSSL_NO_HEARTBEATS flag enabled will mitigate this vulnerability. For OpenSSL 1.0.2, the vulnerability will be fixed in 1.0.2-beta2. In terms of remediation, there’s a huge amount of work that must be done, not only for servers, but also for load-balancers, reverse proxies, VPN concentrators, various types of embedded devices, etc. Applications which were statically compiled against vulnerable versions of the underlying OpenSSL libraries must be re-compiled; private keys must be invalidated, re-generated, and re-issued; certificates must be invalidated, re-generated, and re-issued – and there are a whole host of problems and operational challenges associated with these vital procedures. Some systems may be difficult to patch, so network access control restrictions or the deployment of non-vulnerable proxies may be considered where possible to reduce the attack surface.
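As a first triage step, it can help to check which OpenSSL release a given host is actually linked against. The snippet below is a minimal sketch, not part of the advisory above: it only inspects the library the local Python runtime uses, and it cannot tell whether a build was already patched by recompiling with OPENSSL_NO_HEARTBEATS.

```python
# Minimal sketch: report the OpenSSL release the local Python runtime links
# against and flag the Heartbleed-affected 1.0.1 through 1.0.1f range.
# Note: this checks only one library instance; web servers, load-balancers
# and embedded devices must each be checked separately.
import re
import ssl

print(ssl.OPENSSL_VERSION)   # e.g. 'OpenSSL 1.0.1e 11 Feb 2013'
if re.search(r'1\.0\.1([a-f]?)\b', ssl.OPENSSL_VERSION):
    print('Linked OpenSSL is in the Heartbleed-affected range; upgrade to 1.0.1g')
```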
In most cases, exploitation of this vulnerability leaves no sign in server logs, making it difficult for organizations to know if they have been compromised. In addition, even after applying the OpenSSL patch, private keys, passwords, authentication credentials or any other data that was stored in heap memory used by OpenSSL may have already been compromised by attackers, potentially going as far back as two years. Of significant concern is the compromise of private key material, and one security organization reported that they were able to obtain this material during testing. Others reported difficulty in obtaining certificate material but were able to discover significant amounts of other sensitive data. Considering how easy it is for attackers to hammer this vulnerability over and over again in a very quick sequence, the amount of memory being disclosed can be quite substantial. Memory contents will vary depending on program state and controlling what is returned and what position in memory the contents are read from is much like a game of roulette.
Risk to Private Key Material
Security researchers, in a Twitter exchange starting on April 8, 2014, indicated that private keys have been extracted in testing scenarios, and other researchers suggest that attacking servers during, or just after, log rotation and restart scripts run could expose private key material. This allegation has not been tested by ASERT.
For further details, please see the Twitter thread at https://twitter.com/1njected/status/453781230593769472
Incident Response and Attack Tools
While there has been some call to avoid over-reaction, organizations should strongly consider revoking and reissuing certificates and private keys; otherwise, attackers can continue to use private keys they may have obtained to impersonate Websites and/or launch man-in-the-middle attacks. Users should change usernames and passwords as well, but should not enter login credentials on Websites with vulnerable OpenSSL deployments. To do so could invite attackers to compromise both the old and new credentials if they were exposed in memory.
Many tools have been made available to test for the vulnerability, and these same tools are available for attackers to use as well. It is also reasonable to expect that the password reuse problem will again cause additional suffering, because the same passwords shared across multiple systems extend the attack surface. A shared password that provides access to a sensitive system, or to an e-mail account used for password resets, can be all that an attacker needs to infiltrate an organization’s defenses along multiple fronts.
Multiple proof-of-concept exploits have already been published, and a Metasploit module has been published. Attackers of all shapes and sizes have already started using these tools or are developing their own to target vulnerable OpenSSL servers. There have been reports that scanning for vulnerable OpenSSL servers began before the disclosure of the bug was made public, although other reports suggest that these scans may not have been specifically targeting the Heartbleed vulnerability.
ATLAS Indicates Scanning Activity
ASERT has observed an increase in scanning activity on tcp/443 in our darknet monitoring infrastructure over the past several days, most notably from Chinese IP addresses (Figure 1, below). Two IP addresses (188.8.131.52 and 184.108.40.206) observed scanning tcp/443 have been blacklisted by Spamhaus for exploit activity. Scans from Chinese sources are predominantly coming from AS4134 (CHINANET-BACKBONE) and AS23724 (CHINANET-IDC-BJ-AP).
Figure 1: TCP/443 scans, Tuesday – Wednesday (April 8-9)
Attacks observed by ASERT had decreased by Thursday, as of this writing. China still accounted for the largest percentage of detected scan activity:
Figure 2: TCP/443 scans, Thursday (April 10)
Pravail Security Analytics Detection Capabilities
Arbor's Pravail Security Analytics system provides detection for this vulnerability using the following rules:
2018375 ET CURRENT_EVENTS TLS HeartBeat Request (Server Intiated)
2018376 ET CURRENT_EVENTS TLS HeartBeat Request (Client Intiated)
2018377 ET CURRENT_EVENTS Possible OpenSSL HeartBleed Large HeartBeat Response (Client Init Vuln Server)
2018378 ET CURRENT_EVENTS Possible OpenSSL HeartBleed Large HeartBeat Response (Server Init Vuln Client)
Examples of detection capabilities are reproduced below.
Analysis of Historical Packet Captures Using New Indicators
In the case of this and other highly emergent security threats, organizations may wish to consider implementing analysis capabilities on archived packet captures in order to detect the first signs of attack activity. Granular analysis using fresh indicators can help pinpoint where and when a targeted attack (or a commodity malware attack, for that matter) may have first entered the network, or when such attackers may have exfiltrated data using a technique that was not yet being detected on the wire at the time of the initial attack and infiltration. The capabilities of Pravail Security Analytics give organizations the means to accomplish such an analysis. A free account is available at https://www.pravail.com/ and rest assured that this site is using the latest non-vulnerable OpenSSL version.
Longer-Term Implications and Lessons Learned
Serious questions have been raised regarding the notification process surrounding this vulnerability. The operational community at large has voiced serious disapproval of the early notification of a single content delivery network (CDN) provider, while operating system vendors and distribution providers, not to mention the governmental and financial sectors, were left in the dark and discovered this issue only after it was publicly disclosed via a marketing-related weblog post by the CDN vendor in question. It has been suggested that the responsible disclosure best practices developed and broadly adopted by the industry over the last decade were in fact bypassed in this case, and concerns have been voiced regarding the propriety and integrity of the disclosure process in this instance.
Recent indications that a significant number of client applications may be utilizing vulnerable versions of OpenSSL as well have broad implications, given the propensity of non-specialist users to ignore software updates and to continue unheedingly running older versions of code.
Furthermore, only ~6% of TLS-enabled Websites (and an undetermined, but most probably even-smaller percentage of other types of systems) make use of Perfect Forward Secrecy (PFS). This configurable option ensures that if an issue of this nature arises, previously encrypted traffic retained in packet captures isn’t susceptible to retrospective cryptanalysis.
Without PFS, there are no automated safeguards that can ameliorate these issues once a vulnerability of this nature has been exposed. Many operators and users may not realize that if attackers captured encrypted traffic in the past from vulnerable services/applications which weren’t configured with PFS – i.e., the overwhelming majority of such systems – and have retained those captured packets, they now have the opportunity to use analysis tools to replay those packets and decrypt the Internet traffic contained in them. This means that, with access to previously captured packet dumps, attackers can potentially unearth users’ credentials, intellectual property, personal financial information, etc.
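For illustration only, here is a hedged sketch of what preferring forward-secret key exchange can look like on a TLS endpoint, using Python’s ssl module; the certificate and key paths are hypothetical placeholders, and the exact cipher string should be adapted to the OpenSSL version in use.

```python
# Hedged sketch: build a server-side TLS context that prefers ephemeral
# (ECDHE/DHE) key exchange, so past traffic captures cannot be decrypted
# retroactively even if the long-term private key later leaks.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
ctx.load_cert_chain('/etc/ssl/server.crt', '/etc/ssl/server.key')  # placeholder paths
# EECDH/EDH select ephemeral key-exchange suites; the exclusions drop
# anonymous, null, RC4 and MD5-based suites.
ctx.set_ciphers('EECDH+AESGCM:EDH+AESGCM:EECDH+AES:EDH+AES:!aNULL:!eNULL:!RC4:!MD5')
```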
The ability for an attacker to decrypt packet capture archives requires that the attacker has obtained the private keys used to encrypt that traffic. As recent research shows, this is not a theoretical vulnerability – private key material has been compromised in a lab environment and therefore we must assume that attackers have at least the same, if not more substantial capabilities.
The ‘Heartbleed’ vulnerability may well result in an underground market in ‘vintage’ packet captures – i.e., packet captures performed after the date this vulnerability was introduced into OpenSSL, and prior to some date in the future after which it is presumed that the most ‘interesting’ servers, services, applications, and devices have been remediated.
This incident has the potential to evolve into a massive 21st-Century, criminalized, Internet-wide version of the Venona Project, targeting the entire population of Internet users who had the ill fortune to unknowingly make use of encrypted applications or services running vulnerable versions of OpenSSL. This highlights the paradox of generalized cryptographic systems in use over the Internet today.
While the level of complexity required to correctly design and implement cryptosystems means that in most situations, developers should utilize well-known cryptographic utilities and libraries such as OpenSSL, the dangers of a cryptographic near-monoculture have been graphically demonstrated by the still-evolving Heartbleed saga. Further complicating the situation is the uncomfortable fact that enterprises, governments, and individuals have been reaping the benefits of the work of the volunteer OpenSSL development team without contributing the minimal amounts of time, effort, and resources to ensure that this vital pillar of integrity and confidentiality receives the necessary investment required to guarantee its continued refinement and validation.
This is an untenable situation, and it is clear that the current development model for OpenSSL is unsustainable in the modern era of widespread eavesdropping and rapid exploitation of vulnerabilities by malefactors of all stripes. Information on how to support the OpenSSL effort can be found here: https://www.openssl.org/support/
In some cases, attackers seeking exploitable hosts may scan and/or try to exploit this vulnerability so aggressively that they inadvertently DoS the very hosts they’re seeking to compromise. Organizations should be cognizant of this threat and ensure that the appropriate availability protections are in place so that their systems can be defended against both deliberate and inadvertent DDoS attacks.
It should also be noted that initial experimentation seems to indicate that it’s easiest for attackers to extract the private keys from vulnerable OpenSSL-enabled applications and services, using the least amount of exploit traffic, immediately after they have been started. Accordingly, organizations should be prepared to defend against DDoS attacks intended to cause state exhaustion and service unavailability for SSL-enabled servers, load-balancers, reverse proxies, VPN concentrators, etc. The purpose of such DDoS attacks would be to force targeted organizations to re-start these services in order to recover from the DDoS attacks, thus providing the attackers with a greater chance of capturing leaked private keys.
http://www.seacat.mobi/blog/heartbleed Note: This event may have been a false positive caused by ErrataSec’s masscan software (http://blog.erratasec.com/2014/04/no-we-werent-scanning-for-hearbleed.html)
I’ve always found sites which test IPv6 connectivity interesting. In 2005, I implemented the ipv6calc cgi software as part of a server-side include that reported which IPv4 or IPv6 address the visitor was using to visit the Web site. At that time, the number of IPv6-enabled visitors to the site per month averaged in single digits.
As mentioned in another posting (you can read it here), the “test-ipv6” software is available open-source. I’ve implemented a mirror of this site at http://test-ipv6.arbor.net and am hoping that there will be more than a handful of IPv6-enabled visitors per month. This Web site, based at Arbor’s engineering lab in Ann Arbor, Michigan, will provide the visitor with information about their IPv6 capabilities and readiness, with a score from 0 to 10.
There is a certain level of skill to creating an IPv6-capable network. There is even more skill to creating an IPv6-capable network correctly. To help confirm an IPv6-capable network has been configured correctly and that “upstream” IPv6 connectivity is correct, there are several Web sites which offer basic insights into the quality of IPv6 connectivity.
Such sites have been around in one form or another since at least 2000. The most famous early “test” Web site was perhaps “www.kame.net” – if the turtle (“kame” in Japanese) moved, the site was being reached via IPv6. The openly available “ipv6calc” software included a CGI that let one confirm not only which IP version one was using to reach the Web site, but also see information about the address.
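As a rough illustration of what such a CGI does (an illustrative stand-in, not the actual ipv6calc code), a few lines of Python are enough to report the visitor's IP version:

```python
#!/usr/bin/env python3
# Illustrative stand-in for an IP-version-reporting CGI: tell the visitor
# whether they reached the server over IPv4 or IPv6, and from which address.
import os
import ipaddress

addr = ipaddress.ip_address(os.environ.get('REMOTE_ADDR', '0.0.0.0'))
print('Content-Type: text/plain')
print()
print('You reached this server over IPv%d from %s' % (addr.version, addr))
```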
To verify basic IPv6 functionality, a good starting point is the Web site “http://test-ipv6.com”. This Web site provides an IPv6 readiness score from 0 to 10 and measures both client and network IPv6 readiness. In addition to testing for basic IPv6 capabilities, it reports on a sampling of IPv6-enabled destinations that can be reached, as well as IPv6 DNS and large packet support. The site is based on open-source software and source code can be found at https://github.com/falling-sky/source/wiki. There are also numerous mirrors around the world.
Another IPv6 test Web site is http://ipv6-test.com, not to be confused with http://test-ipv6.com mentioned above. This site offers Path MTU tests. http://ipv6-test.com is a bit less specific than http://test-ipv6.com/ about exactly which tests it performs and http://ipv6-test.com does not appear to provide source code or mirrors, as far as I can tell.
A site which offers more comprehensive testing of both IPv4 and IPv6 connectivity is the ICSI “Netalyzr.” This site was funded via a grant from the National Science Foundation and is operated by the International Computer Science Institute at the University of California, Berkeley. It performs a variety of IPv4 and IPv6 tests, including checks for open network ports, fragmentation functionality, path MTU discovery, general DNS functionality and DNSSEC. Netalyzr information is accessible via a Web browser or by running a Java application on the command line. On initial inspection, and after an unsuccessful attempt to reach the creators of the site, it appears source code is not available.
Certainly, these Web sites only provide a small amount of data in the quest to fully understand the network that one is connected to, and there are other diagnostic tools one should seek out to obtain a more comprehensive understanding of the characteristics, connectivity and performance of a network. However, these tools provide an excellent overview of basic IPv6 connectivity and network capabilities.
According to Jason Fesler, developer and maintainer of the test-ipv6.com Web site and the source code, the goal of test-ipv6 was not only to provide basic IP address information, but to help visitors identify certain failure conditions. Said Fesler, “Having no IPv6 is one thing; but having misconfigured IPv6 is a very different problem with very negative user experience issues. Today, the browsers mostly work around those user experience issues; which create a different headache for system administrators — hiding the problems.”
Most ISPs rolling out large-scale residential IPv6 (bringing the US to ~7%, Switzerland to ~10% and Belgium above 16%) agree it’s a no-brainer, but the rest of the world still hesitates.
To help the dubious majority cross the (perceived) shaky bridge across the gaping chasm between IPv4 and IPv6, a team of great engineers with decades of IPv6 operational experience (including networking gurus from Time Warner, Comcast and Yahoo, and the never-tiring IPv6 evangelist Jan Žorž) wrote an IPv6 Troubleshooting for Helpdesks document. Read more ...
Okay, so we all know some little slice or another of the Internet. But how do all these slices really fit together? How does each player in the system make money by getting your device to connect to someone else’s server to grab content (whether or not you just asked for it)? Let’s put it […]
Through a court-mandated decision, access to Twitter has officially been blocked across all of Turkey. Whether or not this was the right decision, it is evident that people are not happy about it at all. As you already may know, I am originally from Turkey but have been living elsewhere for many years now while […]
Stuff me but I don't know what open is anymore.
The post Rant: Is It Open ? I Don’t Know What Open Is Anymore appeared first on EtherealMind.
When I first heard about NFV, I thought it was just another steaming pile of hype designed to push the appliance vendors to offer their solutions in VM format. After all, we’re past the hard technical challenges: most appliances deserve to have an Intel Inside sticker, performance problems have been addressed (see Intel DPDK, 6WIND, PF_ring and Snabb Switch), so what’s stopping us from deploying NFV apart from stubborn vendors who want to sell hardware, not licenses? Read more ...
I always recommended EBGP-based designs for DMVPN networks due to the significant complexity of running IBGP without an underlying IGP. The neighbor next-hop-self all feature introduced in recent Cisco IOS releases has totally changed my perspective – it makes IBGP-over-DMVPN the best design option unless you want to use the DMVPN network as a backup for an MPLS/VPN network. Read more ...
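As a purely illustrative sketch (the hostname, AS number, neighbor address and credentials below are hypothetical, not configuration from the article), pushing that single knob to a hub router with netmiko could look like this:

```python
# Hedged sketch: apply "next-hop-self all" to an IBGP spoke neighbor on a
# hypothetical DMVPN hub / route reflector using netmiko.
from netmiko import ConnectHandler

hub = ConnectHandler(device_type='cisco_ios', host='dmvpn-hub.example.net',
                     username='lab', password='lab')
output = hub.send_config_set([
    'router bgp 65000',
    'address-family ipv4',
    # "all" also rewrites the next hop on routes the hub reflects between spokes
    'neighbor 10.0.0.2 next-hop-self all',
])
print(output)
hub.disconnect()
```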
After a few years of floating around a bit, I decided it was time for something new, something different. We put in our second Cisco voice system this week that I am administering, so I decided it was a good time to head down the voice track. I am going to start with the CCNA and move through the CCNP track, so it will be some time before I am planning on taking any sort of CCIE Collaboration written exam. Hopefully I can get to that point before I have to re-certify. I am going to follow Mark Snow’s plan at least until I get done with the CCNP.

I have all of INE’s voice material right now, so my plan is to make good use of that material. I would like to get some 2800s in the basement rack and get a rack going that matches the older INE blueprint. I have a VMware server running right now with CUCM on it, and have plenty of resources left to get all the servers on it. There is no way I am going to be able to convince the wife to let me build out a full collaboration rack, though. It will be a lot of rack rentals this time around.
In the end it just feels good to be taking on something new again…
Last month I had the opportunity to work with a company to perform an IPv6 pilot. There are a lot of elements to light up for an organization to use IPv6, most of them (but not all) being technical in nature. One of the mechanisms I used was ISATAP. In the past I have not […]
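For background on how ISATAP forms addresses (the prefix and IPv4 address below are documentation/example values, not details from the pilot described above), here is a small sketch of the address construction: a /64 prefix plus the interface identifier ::0:5efe:<IPv4 address>.

```python
# Sketch: build an ISATAP address from a /64 prefix and an IPv4 address.
# The interface identifier is 0000:5efe followed by the 32-bit IPv4 address.
import ipaddress

prefix = ipaddress.IPv6Network('2001:db8:100::/64')   # example prefix
ipv4 = ipaddress.IPv4Address('192.0.2.33')            # example tunnel endpoint

iid = 0x00005EFE00000000 | int(ipv4)                   # low 64 bits: 0:5efe:<ipv4>
isatap = ipaddress.IPv6Address(int(prefix.network_address) | iid)
print(isatap)                                          # 2001:db8:100::5efe:c000:221
```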
A bumper crop of ten links today since I’ve been distracted with Interop Las Vegas, where I was presenting sessions & meeting with vendors. Then I flew to New York to perform some analyst work with investment/fund manager types. The Ethernet Switching Landscape // Speaker Deck – The deck that Ethan Banks used at […]
I was asked to describe how Arista has been able to penetrate the networking switch market relatively quickly. Arista was founded in 2004 and, ten years later, has achieved a competitive position against all the major vendors in networking, and specifically against Cisco, which has a dominant market position. Most vendors develop product like an […]
The post The Difference Between Arista and Competitors (Factories not Babies) appeared first on EtherealMind.
In the first half hour of the Infrastructure for Private Clouds workshop at last week’s Interop Las Vegas I focused on business aspects of private cloud design: defining the customers, the services, and the level of self-service you’ll offer to your customers.
Nick Martin published a great summary of these topics @ SearchServerVirtualization; I couldn’t have done it better myself (they want to get your email address, but this article is definitely worth it). Read more ...
This article from the Association for Computing Machinery was written by no less than Paul Vixie. It is a detailed review of the basic facts of the Internet being smart at the edge and dumb in the middle. By design, the Internet core is stupid, and the edge is smart. This design decision has enabled […]
The post Response: Rate-limiting State and Internet Frailty – ACM appeared first on EtherealMind.