According to a report from the Bipartisan Policy Center (a Washington nonprofit group composed of leading energy security experts), our national power grid is most definitely vulnerable to cyber attacks. Juniper’s own Nawaf Bitar, senior VP and general manager of the Security Business Unit, echoed this point in his keynote at last week’s RSA conference in San Francisco.
Collection of useful, relevant or just fun places on the Internets for 7th March 2014 and a bit of commentary about what I’ve found interesting about them: The Definitive Guide to NSX Gateway Use Cases – J-Net Community – I missed this earlier but it’s a good summary on NSX use cases from the launch. […]
This press release from Huawei really grabbed my attention because it's an MPLS-enabled, finger-sized router built for carrier networks.
The post Response: Huawei Unveils the World’s Tiniest Atom Router appeared first on EtherealMind.
It’s always interesting to connect with enterprise security experts and IT peers alike. Last week was no exception at RSA 2014 when I was invited to join a panel discussion hosted by Trusted Computing Group and moderated by security expert Victor Wheatman.
If you’re like me and work in high tech, explaining what you do to your non-high-tech friends and family can be a daunting task. And if you happen to work in IT or, say, for a networking and security company, the task can sometimes feel like mission impossible.
For months, I’ve tried to explain to my kids what I do for a living. Not only am I a hard-to-define marketing professional in the security industry, but I’m also competing against their friends’ parents whose professions are of the less vague variety: police officer, firefighter, teacher. Try as I might to describe the important role I play in their ever-connected lives, I can’t seem to win against the Apple parents who arrive home with new iPads or iPhones. No, really. I live in Silicon Valley and we really do have those Apple parents in our schools!
So, you may be wondering, what changed on Monday, February 24?
The long and short of it: everything.
A common complaint about vendor certifications is that they don't prepare you for the real world. Well of course not. Neither did high school or college. And thinking about it, my university education wasn't much better.
The post Rant: Certification and Training Does Prepare You for the Real World appeared first on EtherealMind.
Dual 100G interfaces and 24 MILLION flow table entries for Open vSwitch? And flow setup rates to match.
The post Response: Netronome 100GE Cards Target SDN | EE Times appeared first on EtherealMind.
If you plan to attend the Troopers 2014 conference in two weeks, don’t forget to include my full-day SDN workshop on Tuesday in your agenda (the Troopers conference is sold out, but you can still register for the workshop). The topics of the workshop will include:
As someone who knows that Amazon AWS is a really expensive method of buying compute & storage, I was much amused by the following slides from a presentation by Terry Wise, Director of Worldwide Partner Ecosystem, during a recent conference. You might enjoy a good laugh too.
An increase in credit and debit card theft via Point of Sale (PoS) malware campaigns over the late 2013 holiday season has resulted in significant media attention and has likely emboldened threat actors as the success of past campaigns comes to light. Media attention has decreased since news of the Target breach and its associated fallout; however, threat actors targeting PoS systems are still engaged in active attacks.
Certain malware families, such as Dexter, Project Hook, Alina, ChewBacca, JackPoS and VSkimmer, have been written specifically to compromise Point of Sale machines. Other malware not designed specifically for PoS attacks, such as the ever-popular Citadel, has the capability to exfiltrate data from the target organization. In short, any system that holds credit/debit card data in clear text in memory or on disk, or sends clear-text card data over the network, is potentially at risk regardless of whether that machine is a PoS terminal or not.
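To illustrate how exposed clear-text card data is, here is a minimal, generic sketch (purely illustrative; this is not ASERT's tooling, and the dump file name is hypothetical) of the kind of scan a PoS memory scraper automates: walk a memory or disk dump looking for digit runs that pass the Luhn check.

```python
# Minimal sketch: scan a dump file for 16-digit runs that pass the Luhn check.
# (Real PANs range from 13 to 19 digits; 16 keeps the example short.)
import re
import sys

def luhn_ok(digits: str) -> bool:
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d = d * 2 - 9 if d * 2 > 9 else d * 2
        total += d
    return total % 10 == 0

data = open(sys.argv[1], "rb").read().decode("latin-1")   # e.g. pos_memory.dmp (hypothetical)
for m in re.finditer(r"(?<!\d)\d{16}(?!\d)", data):
    if luhn_ok(m.group()):
        print(f"possible clear-text PAN at offset {m.start()}")
```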
In addition to Alina, Chewbacca, JackPoS and other Point of Sale malware, ASERT continues to track the Dexter and Project Hook PoS campaigns we originally reported on in December of 2013. Indicators suggest that Dexter Revelation may have been in existence as early as April 2013. A new ASERT threat intelligence brief cited at the end of this post provides a significant amount of updated material about Dexter and Project Hook including:
This information should prove valuable for incident responders and those responsible for protecting cardholder data environments. Additionally, since many of the network and file indicators have not been previously released, these indicators may be useful for identifying environments that are already compromised. The brief also provides scripts for decoding dump files that may help incident responders determine the scope of a compromise.
The following map shows Dexter and Project Hook infections as of January 24, 2014:
Continued PoS campaign activity suggests that organizations still need to be vigilant. This new ASERT intelligence brief will help. The full document is available here.
*Author credits: Curt Wilson, Dave Loftus, and Dennis Schwarz
What could a small ISP do to limit failure domains? Metro Ethernet and MPLS Virtual Private LAN service are all the rage, and offer customers the promise of being able to connect all their branch offices together and use the same set of VLANs, with free Layer 2 connectivity between their sites. It's either: extend the failure domains, or lose out in selling the service, because the customer will buy from another ISP. Read more ...
In short, connectivity is now commodity and it is services that are hard. Understanding this point is key to understanding the SDN market. I take the view that SDN assumes that connectivity is a cheap, low cost and low value business function.
The post Understanding SDN: Connectivity Is Commodity, Services Are Valuable appeared first on EtherealMind.
Collection of useful, relevant or just fun places on the Internets for 4th March 2014 and a bit of commentary about what I’ve found interesting about them: Meetup HQ Blog • No doubt, this has been a tough weekend for… – Meetup.com talks about their handling of a DDoS shakedown with some very good points. […]
On the very same day that I published the CLI is Not the Problem post I stumbled upon an interesting discussion on the v6ops mailing list. It all started with a crazy idea to modify BGP to use 128-bit router IDs to help operators that think they can manually configure large IPv6-only networks without any centralized configuration/management authority that would assign 32-bit identifiers to their routers. Read more ...
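As a purely illustrative aside (my sketch, not anything proposed on v6ops): if the real problem is handing out 32-bit router IDs in an IPv6-only network, even a trivial script run by a central configuration authority can derive a stable dotted-quad identifier from each router's name, with no protocol changes required.

```python
# Minimal sketch: derive a stable 32-bit router ID from a router's hostname.
# Collisions are possible with a plain hash, so a real tool would also keep
# a registry of assigned IDs; this only illustrates the idea.
import zlib

def router_id_from_name(hostname: str) -> str:
    rid = zlib.crc32(hostname.encode("utf-8")) & 0xFFFFFFFF
    return ".".join(str((rid >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(router_id_from_name("core-rtr-1.example.net"))  # hypothetical router name
```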
Ever wonder how Juniper SecTeam decides what signatures to write?
For the signatures they do write, why do they "Recommend" some but not others?
And how do they determine the severity of those signatures, anyway?
Find all those questions answered and more...
When you mention Google+ to people, you tend to get a very pointed reaction. Outside of a select few influencers, I have yet to hear anyone say good things about it. This opinion isn’t helped by the recent moves by Google to make Google+ the backend authentication mechanism for their services. What’s Google’s aim here?
Google+ draws immediate comparisons to Facebook. Most people will tell you that Google+ is a poor implementation of the world’s most popular social media site. I would tend to agree for a few reasons. I find it hard to filter things in Google+. The lack of a real API means that I can’t interact with it via my preferred clients. I don’t want to log into a separate web interface simply to ingest endless streams of animated GIFs with the occasional nugget of information that was likely posted somewhere else in the first place.
It’s the Apps
One thing the Google of old was very good at doing was creating applications that people needed. GMail and Google Apps are things I use daily. Youtube gets visits all the time. I still use Google Maps to look things up when I’m away from my phone. Each of these apps represents a separate development train and a unique way of looking at things. They were more integrated than some of the attempts I’ve seen to tie applications together at other vendors. They were missing one thing as far as Google was concerned: you.
Google+ isn’t a social network. It’s a database. It’s an identity store that Google uses to nail down exactly who you are. Every +1 tells them something about you. However, that’s not good enough. Google can only prosper if they can refine their algorithms. Each discrete piece of information they gather needs to be augmented by more information. In order to do that, they need to increase their database. That means they need to drive adoption of their social network. But they can’t force people to use Google+, right?
That’s where the plan to integrate Google+ as the backend authentication system makes nefarious sense. They’ve already gotten you hooked on their apps. You already comment on Youtube or use Maps to figure out where the nearest Starbucks is. Google wants to know that. They want to figure out how to structure AdWords to show you more ads for local coffee shops, or categorize your comments on music videos to sell you Google Play music subscriptions. Above all else, they can use that information as a product to advertisers.
Build It Before They Come
It’s devilishly simple. It’s also going to be more effective than Facebook’s approach. Ask yourself this: when’s the last time you used Facebook Mail? Facebook started out with the lofty goal of gathering all the information that it could about people. Then they realized the same thing that Google did: You have to collect information on what people are using to get the whole picture. Facebook couldn’t introduce a new system, so they had to start making apps.
Except people generally look at those apps and push them to the side. Mail is a perfect example. Even when Facebook tried to force people to use it as their primary communication method, their users rebelled against the idea. Now, Facebook is being railroaded into using their data store as a backend authentication mechanism for third party sites. I know you’ve seen the “Log In With Facebook” buttons already. I’ve even written about it recently. You probably figured out this is going to be less successful for a singular reason: control.
Unlike Google+ and its integration with all Google apps, third parties that utilize Facebook logins can choose to restrict the information that is shared with Facebook. Given the climate of privacy in the world today, it stands to reason that people are going to start being very selective about the information that is shared with these kinds of data sinks. Thanks to the Facebook login API, a significant portion of the collected information never has to be shared back to Facebook. On the other hand, Google+ is just doing a simple backend authorization. Given that they’ve turned on Google+ identities for Youtube commenting without a second thought, it does make you wonder what other data they’re collecting without really thinking about it.
I don’t use Google+. I post things there via API hacks. I do it because Google as a search engine is too valuable to ignore. However, I don’t actively choose to use Google+ or any of the apps that are now integrated into it. I won’t comment on Youtube. I doubt I’ll use the Google Maps functions that are integrated into Google+. I don’t like having a half-baked social media network forced on me. I like it even less when it’s a ham-handed attempt to gather even more data on me to sell to someone willing to pay to market to me. Rather than trying to be the anti-Facebook, Google should stand up for the rights of their product…uh, I mean customers.
When Apple launched the new release of iOS last autumn, networking gurus realized the new iOS uses MP-TCP, a recent development that allows a single TCP socket (as presented to the higher layers of the application stack) to use multiple parallel TCP sessions. Does that mean we’re getting closer to fixing the TCP/IP stack?
TL&DR summary: Unfortunately not. Read more ...
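For a sense of what MPTCP looks like from the application side, here is a minimal sketch. The assumptions are mine, not the post's: a Linux kernel with MPTCP support (5.6 or later, with net.mptcp.enabled=1) and Python 3.10+ for the IPPROTO_MPTCP constant (262 on Linux); the endpoint is hypothetical. The application still sees a single stream socket; the kernel decides whether to spread it across multiple TCP subflows.

```python
# Minimal sketch: open an MPTCP socket on Linux. To the application it is
# still one TCP-like stream; subflow management happens in the kernel.
import socket

IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)   # 262 on Linux

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
s.connect(("example.com", 80))                           # hypothetical endpoint
s.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(s.recv(200))
s.close()
```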
100% utilisation means that you are a traffic jam in the workflow. Don't overcommit yourself.
The post 100 Percent Utilisation and Overcommitting Yourself appeared first on EtherealMind.
At the 2014 SC Awards, Juniper was selected as a Reader Trust award winner in the Best Cloud Computing Security Solution (Firefly Host), Best Network Access Control Solution (Unified Access Control), and Best Unified Threat Management Security Solution (SRX Series Services Gateway for the Branch) categories.
Another pretty down-to-earth OpenFlow use case: service insertion. “Slightly” easier than playing with VLANs or PBR (can you tell how tired I am based on the enormous length of this intro?).
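To make the idea concrete, here is a minimal sketch of OpenFlow-based service insertion, under assumptions of mine rather than anything from the post: a Ryu controller speaking OpenFlow 1.3, with the service appliance hanging off a hypothetical switch port. Traffic toward the protected subnet is matched and steered to that port instead of following the normal forwarding path.

```python
# Minimal sketch: steer traffic for 10.1.2.0/24 (hypothetical) out the port
# where a service appliance sits, instead of the normal forwarding path.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

SERVICE_PORT = 3  # hypothetical switch port facing the appliance

class ServiceInsertion(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        parser = dp.ofproto_parser
        # Match IPv4 traffic headed for the protected subnet...
        match = parser.OFPMatch(eth_type=0x0800,
                                ipv4_dst=("10.1.2.0", "255.255.255.0"))
        # ...and push it toward the service appliance.
        actions = [parser.OFPActionOutput(SERVICE_PORT)]
        inst = [parser.OFPInstructionActions(dp.ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))
```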
The first question I tried to answer (and probably failed to) in the SDN 101 webinar was: What exactly is SDN? Is it an architecture with a physically separate, centralized control plane, or is it more? Does a separate control plane make sense, or is it better to program distributed devices? Watch the video recorded during the live webinar session and tell me whether you agree with my answers.
The “Is CLI In My Way … or a Symptom of a Bigger Problem” post generated some interesting discussions on Twitter, including this one:
One would hope that we wouldn’t have to bring up this point in 2014 … but unfortunately some $vendors still don’t get it. Read more ...
We live in interesting times, as security concerns take center stage.
Juniper Networks' keynote, "The Next World War Will be Fought in Silicon Valley," also took center stage at the RSA conference on Tuesday morning.
Nawaf Bitar, SVP and GM of Juniper's Security Business Unit, drew upon emotion and history in his search for inspiration and answers to privacy and cyber threat concerns. Calling upon the security community, he hopes creativity and innovation will become the industry's form of protest.
Watch the keynote replay.
Renesys is headed to RightsCon next week, to bring some data to the discussions around evolving Internet localization, the economics of infrastructure diversification, and the role of the private sector in strengthening and stabilizing Internet service delivery worldwide.
When we talk to our customers (international service providers, multinational enterprises, cloud and content providers) about Internet instability, we hear things that suggest that the interests of digital citizens and global enterprises are (in this case, at least) strongly aligned. Outside of a few special interests, nobody profits from an unstable, unreliable Internet that’s subject to arbitrary political control, throttling, and shutdowns. Disruptions in Internet service make it more expensive for our customers to reliably deliver their global services to hundreds of millions of consumers worldwide.
How resilient the Internet actually is, of course, depends on where you look. Just within the last week, bloody crises in Syria, Ukraine, and Venezuela have all created conditions under which one might expect to see Internet outages emerge — and yet, the outcomes have been very different. Syria suffered another in its sequence of near-total national outages, while Venezuela’s outages were much more limited, consisting largely of slowdowns rather than outright cutoffs. Ukraine’s Internet managed to stay almost completely intact, delivering realtime video of the Euromaidan and other protest sites to viewers around the world without interruption for weeks on end.
To understand why some countries’ Internet is more fragile than others, it may help to revisit the Internet structural diversity model first described in our blog from November 2012, “Could it Happen in Your Country?” Today we’ll take a look at how the world’s Internet markets have grown and diversified, and see what we can learn from the evolution of that model over the last year, looking specifically at these three conflict regions.
Review of the model
Our model is based on direct observation of each nation’s contribution to the global Internet routing table, from which we build a census of the various kinds of providers within each country. We classified each of the Internet’s autonomous systems as “domestic” or “international,” based on a simple test of single-country customer concentration.
Because an Internet shutdown order must be applied across a range of frontier service providers, to a first approximation “more is better” when it comes to keeping the Internet connected within a country.
Over time, we predict that countries will continue to add providers at their international frontiers, becoming more diverse and more resistant to disconnection.
We divide the countries of the world into four tiers:
If you have only 1 or 2 companies at your international frontier, Renesys classifies your country as being at severe risk of Internet disconnection. Examples: Syria, Turkmenistan, Ethiopia, Uzbekistan, Myanmar, Yemen.
If you have fewer than 10 service providers at your international frontier, your country is probably exposed to some moderate to significant risk of Internet disconnection. Ten providers also seems to be the threshold below which one finds significant additional risks from infrastructure sharing: there may be a single cable, or a single physical-layer provider who actually owns most of the infrastructure on which the various providers offer their services. Examples: Oman, Rwanda, Pakistan, Armenia.
If you have at least 10 internationally-connected service providers, but no more than 40, your risk of disconnection is fairly low. Given a determined effort, it’s plausible that the Internet could be shut down over a period of days or weeks, but it would be hard to implement and even harder to maintain that state of blackout. Examples: Bahrain, India, Israel, Vietnam, Turkey, Mexico.
Finally, if you have more than 40 providers at your frontier, your country is likely to be extremely resistant to Internet disconnection. There are just too many paths into and out of the country, too many independent providers who would have to be coerced or damaged, to make a rapid countrywide shutdown plausible to execute. A government might significantly impair Internet connectivity by shutting down large providers, but there would still be a deep pool of persistent paths to the global Internet. Examples: United States, Russia, Canada, France, Germany.
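A minimal sketch of that four-tier classification, with the thresholds taken straight from the paragraphs above (the provider counts in the example are hypothetical, not Renesys measurements):

```python
# Minimal sketch: map a count of frontier (internationally connected)
# providers to the four resilience tiers described above.
def resilience_tier(frontier_providers: int) -> str:
    if frontier_providers <= 2:
        return "severe risk of disconnection"       # 1-2 providers
    if frontier_providers < 10:
        return "moderate to significant risk"       # fewer than 10
    if frontier_providers <= 40:
        return "fairly low risk"                    # 10-40
    return "extremely resistant to disconnection"   # more than 40

# Hypothetical counts, for illustration only.
for country, n in {"Syria": 2, "Pakistan": 7, "India": 25, "Germany": 120}.items():
    print(f"{country}: {resilience_tier(n)}")
```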
As we reported midway through 2013, the set of observed nationwide Internet outages did appear to correspond to countries we deemed to be at higher risk. There were also surprises, which revealed additional modes of single-point failure: Bangladesh (at the time, entirely reliant on a single submarine cable landing), and Lebanon (where Ogero’s retention of control over the newly-landed IMEWE cable greatly reduced the country’s existing provider diversity).
Since then, we’ve seen additional evidence of the protective effect of national provider diversity. As we tweet about nation-scale Internet outages, the “severe risk” and “moderate risk” countries continue to be overrepresented (for example, Cuba, Uzbekistan, Libya, Syria, Sierra Leone, and the Central African Republic, just within the last few months).
There is some hopeful news, although not all the movements were positive. Worldwide, the majority of countries (56% of the 234 we surveyed) were stable; that is, their frontier set changed by no more than a single provider up or down since November 2012. Of the remainder, advances outnumbered declines by more than 2:1, suggesting that the global Internet is, indeed, moving toward greater diversity.
Comparing Internet outcomes across conflict regions
In this light, we can compare the specific impacts of last week’s violence in Syria, Venezuela, and Ukraine on their national Internet stability.
While we have seen evidence of periodic Internet slowdowns, as well as the blocking of specific websites such as Twitter, and credible reports of regional access disruptions, we haven’t yet seen an Egyptian-style national blackout in Venezuela. The country’s financial problems may be contributing to their Internet difficulties. Incumbent CANTV has experienced loss of transit through international providers Telefonica and Verizon Business in recent months (see the transit shift plot), which may explain some of the emerging constraints on available international bandwidth.
We acknowledge the limitations of such a simple model in predicting complex events such as Internet shutdowns. Many factors can make countries more fragile than they appear on the surface (for example, shared physical infrastructure under the control of a central authority, or the physical limitations of a few shared fiber-optic connections to distant countries).
It’s also clear that hidden central points of failure can be introduced at entirely different “layers” of the networking stack, as demonstrated by China’s Great DNS Failure, which made large numbers of popular domains unreachable while leaving the routing infrastructure intact.
If there is a general lesson to be learned, we suspect that the best way to protect the Internet from throttling, terrorism, and disconnection is simply to build more of it — more autonomous systems, more physically diverse routes, more cable systems, more Internet exchange points, more peering, more interconnection in more buildings in more cities, every year. It’s only when a region is starved for diverse access (rural communities in the United States) or starved for diverse international connectivity (incumbent-dominated economies in emerging markets) or designed for centralized monitoring and control (China’s Great Firewall) that the Internet remains fragile and subject to broad-scale interruption.
Facebook’s purchase of WhatsApp is a timely economic reminder that the primary sources of online consumer growth for the next 10 years will be found outside the English-speaking world. Enterprises are only now coming to terms with the challenges of maintaining connectivity to that global population, across an Internet whose paths can have highly variable (indeed, often mysterious) performance. One at a time, each of the countries of the world is finding its way toward a level of physical- and logical-level connectivity that will make Internet kill switches a dim memory from the Internet’s troubled youth.
When describing Hyper-V Network Virtualization packet forwarding I briefly mentioned that the hypervisor switches create (an equivalent of) a host route for every VM they need to know about, prompting some readers to question the scalability of such an approach. As it turns out, layer-3 switches have done the same thing under the hood for years. Read more ...
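As a purely illustrative sketch (my example, not from the post), the reason per-VM host routes aren't exotic is that a /32 host route simply wins longest-prefix match over the covering subnet route, which is effectively what layer-3 switches have been doing for their directly attached hosts all along. The addresses and next hops below are hypothetical.

```python
# Minimal sketch: longest-prefix match where a /32 host route (one per VM)
# beats the covering subnet route.
import ipaddress

routes = {
    ipaddress.ip_network("10.1.1.0/24"): "subnet interface (VLAN 10)",
    ipaddress.ip_network("10.1.1.25/32"): "tunnel to host 192.0.2.7",  # per-VM host route
}

def lookup(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)   # longest prefix wins
    return routes[best]

print(lookup("10.1.1.25"))   # hits the /32 host route
print(lookup("10.1.1.99"))   # falls back to the /24
```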
We all know what Cisco Live is, right? Networkers? The Cisco users’ conference? If not, then educate yourself, friend. It takes place every year in different parts of the world. I try my best to go every year to the US event and am lucky to be able to go this year. It costs a bagillion dollars and a week of my time; why am I so excited about going? Easy answers in no particular order.
Excuses for not blogging for months, and questions coming my way.
Selecting shapes and connectors one-by-one in Visio can be tedious, especially when working with large or repetitive drawings. If you've been drawing for a while, you've probably gotten the hang of selecting just the right subset of shapes using the rectangular select tool, and employing the control key to add or remove any outliers as desired. This can be time-consuming though, especially when you want to pick out just a few connectors from a jumble of criss-crossing lines.
Here's a trick to try next time you find yourself excessively control-clicking: Identify each logical group of shapes or connectors that you'll likely want to tweak, and bundle them up into their own layer. You can then use Visio's "select by layer" option to grab them all at once later. Take the drawing below, for instance.
Don’t get me wrong, firewalls have come a long way over the years. They are great security enforcement points based on what they do know, but there is so much more that is possible.
Juniper is focused on building high IQ networks and that includes high IQ security. Instead of focusing just on point solution intelligence, we’ve built an extended security intelligence framework that connects to the firewall. This framework delivers highly actionable intelligence from multiple sources so the firewall can enforce unique intelligence-based policies.
This is one of a multi-part series on the Ethernet switching landscape I’m writing in preparation for a 2-hour presentation at Interop Las Vegas 2014 (hour 1 & hour 2). Part 1 of this written series appeared on NetworkComputing.com. Search for the rest of this series.
One often-quoted statistic in Ethernet switch specifications is latency. Latency figures are usually cited in microseconds or nanoseconds. With ever-decreasing latency numbers as new switch platforms emerge, what is a networker to make of latency? Is latency a key measurement to focus on when evaluating Ethernet switches? The answer to that question comes in two parts. First, there needs to be a good understanding of what latency is. Second, there needs to be a business use-case where latency matters. Once you understand what Ethernet switch latency is, you might well decide it’s not a big factor in your buying decision.
Generally speaking, latency measures the time it takes for a frame to enter and then exit a switch. In other words, latency is a measure of how long it takes for the switch to do its job – that of “switching” a frame in one port and out another. Latency is sometimes described as “port-to-port” latency. Think of it as the amount of time a frame spends inside the switch. That’s a rough description (and not completely accurate), so let’s look a little closer.
According to Gary Lee of Fulcrum Microsystems, latency can be measured in a number of different ways.
“There are several ways to measure latency through a switch: first-bit-in to last-bit-out (FILO), last-bit-in to first-bit-out (LIFO), first-bit-in to first-bit-out (FIFO) and last-bit-in to last-bit-out (LILO). In each case, latency is measured at the switch ingress and egress ports.”
I raise this point about the different measuring techniques because as you Google around looking for information on switch latency, you’ll run into this data. The various ways to measure switch latency might seem like a concern: how do you know that the vendors you are comparing are using the same method when citing their latency specifications? The reality is that practically all modern switch architectures are cut-through (at least when it comes to low-latency forwarding), meaning that the switch starts forwarding the frame before the entire frame has been received. For low-latency operation, cut-through switching is expected. The alternative to cut-through is store-and-forward. In that mode, a frame must be received in its entirety before being forwarded, which notably increases the time it takes for a frame to be switched and makes that time vary with frame length. The point is this: assuming cut-through forwarding, the only accurate way to measure latency is either LILO or FIFO. To quote Gary again,
“These methods [FIFO/LILO] are effectively the same and are the only way to properly measure the latency through a cut-through switch.”
Implicitly, then, I believe it’s safe to assume that when looking at latency numbers for cut-through switching operation, numbers from different vendors are directly comparable. If you can find a latency measurement reported by a vendor, it should indicate a FIFO/LILO measurement of a switch in cut-through mode.
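To make the four measurement conventions concrete, here is a minimal sketch using made-up timestamps (microseconds, chosen by me purely for illustration): a 1518-byte frame takes roughly 1.2 microseconds to serialize at 10 Gbps, and a cut-through switch can start transmitting before the frame has fully arrived.

```python
# Minimal sketch: the four latency definitions, from per-frame timestamps
# captured at the ingress and egress ports (values in microseconds).
def latencies(in_first, in_last, out_first, out_last):
    return {
        "FILO": out_last - in_first,    # first bit in  -> last bit out
        "LIFO": out_first - in_last,    # last bit in   -> first bit out
        "FIFO": out_first - in_first,   # first bit in  -> first bit out
        "LILO": out_last - in_last,     # last bit in   -> last bit out
    }

# Hypothetical cut-through example: 1518-byte frame, ~1.2 us serialization
# at 10 Gbps, switch starts transmitting 0.8 us after the first bit arrives.
print(latencies(in_first=0.0, in_last=1.2, out_first=0.8, out_last=2.0))
# FIFO and LILO both come out at 0.8 us (the "effectively the same" methods);
# FILO inflates to 2.0 us and LIFO even goes negative for a cut-through switch.
```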
Let’s take a look at latency measurements as reported by their vendors for a few Ethernet switches. Note that this is not an “apples-to-apples” comparison. These switches have different ASICs, may have different host-facing physical ports, and likely first came to market in different years. I have a point, though – bear with me.
These switches all have widely varying latency characteristics, which are market differentiators for them. But I believe that latency is only a differentiator to certain market segments. Unless you are building a network where nanoseconds count – such as high frequency trading – then the port-to-port latency of one Ethernet switch over another just isn’t going to make any appreciable difference in the performance of the applications running across your network.
Another consideration is that the Ethernet PHY (the physical Ethernet medium you use, as in the transceiver) will also impact latency. For example, 10GBASE-T (10GbE over twisted-pair copper) PHYs introduce noticeably higher latency than SFP+-based PHYs. Paul Kish in the Belden Right Signals blog opines,
“With simplified electronics, SFP+ also offers better latency—typically about 0.3 microseconds per link. 10GBASE-T latency is about 2.6 microseconds per link due to more complex encoding schemes within the equipment.”
But again, the question comes back to one of business needs and application. 10GBASE-T has a use-case in top-of-rack (ToR) or end-of-row (EoR) designs, considering 10GBASE-T server LAN-on-motherboard modules (LOMs) are coming to market. Should the higher latency introduced by the 10GBASE-T PHY put you off? Only if microseconds matter.
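Some back-of-the-envelope arithmetic, using the per-link figures quoted above; the hop count and the application latency budget are my own assumptions, just to put the microseconds in perspective:

```python
# Minimal sketch: how much the 10GBASE-T vs. SFP+ PHY difference adds up to.
sfp_plus_us, tengbaset_us = 0.3, 2.6       # per-link latency from the quote above
hops = 4                                   # hypothetical: server-ToR, ToR-spine, spine-ToR, ToR-server
extra_us = (tengbaset_us - sfp_plus_us) * hops
app_budget_us = 10_000                     # hypothetical 10 ms application transaction
print(f"{extra_us:.1f} us extra, {extra_us / app_budget_us:.3%} of the transaction")
# -> 9.2 us extra, 0.092% of the transaction
```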
Latency, low latency, ultra-low latency and the rest are relevant terms that indicate how long it takes a switch to move traffic through itself. However, beyond the physical medium and switch architecture, many other things can impact application latency. At the end of the day, it might be fun to argue about microseconds and nanoseconds in the context of your switches, but what difference will it make if you have bottlenecks in other places? Consider these elements of your IT engine that are far more likely to negatively impact your application performance than Ethernet switching latency.
These sorts of problems can introduce significant fractions of a second (or even whole seconds!) of latency into an application transaction. For most shops, these sorts of issues are where the performance battles need to be fought. Buying the lowest latency switch you can find just isn’t going to appreciably help, unless you’re one of the very few with that sort of application requirement. And you already know who you are.