In a culture of Internet toxicity, a question all productivity-minded people should ponder is why they are participating in social platforms. For example, Twitter has become a predominantly negative cesspool. Even the pun-loving techies I monitor from a distance seem to lean increasingly toward darkness and anger.
Few rainbows are to be found on Twitter these days. Maybe it’s just the dark mode UI talking, but I don’t go there to laugh anymore. I don’t go there to connect with friends. Instead, I put on my virtual armor, and read through comments and responses directed at me or my company. To be sure, much of what I see is fine. However, many comments are meant to start fires, even when couched in smiling niceties.
That’s the Internet for you. We’ve always had flame wars, trolls, and haters, all the way back to my dial-up days on Delphi forums and AOL. I know how to block and mute people, and of course that helps. Even so, I find that there’s something different in the tone these days. Folks are on crusades to bring others down. To shame. To burn in digital effigy.
When toxicity spills over into my timeline despite my best filtering efforts, my head goes to a bad place. I lose focus on work, finding myself instead brooding on the dark proclamations of the vocal bitter who wield hashtags as cudgels.
Why am I on this platform? Why am I participating in social media’s culture of toxicity, even passively?
I’ve had answers for this in years past, with the “good outweighing the bad” argument winning me back repeatedly. Lately? I’m not sure that holds up. Social media is largely a destructive force, designed to deliver advertisements, arguments, and addiction. Considering those selling points, no one would purchase the product.
What, exactly, is the point then? Others have asked themselves this question and have opted out. Some I know personally. Some I’ve read about. The common theme among these folks? None have reported regret for having walked out of the toxic mire that can depress the mind and rob productivity.
Yes, I still have a Twitter account, but I can no longer explain why, at least not in a way that stands up to intellectual rigor. The issue is becoming harder to ignore.
You’re probably familiar with Cisco DevNet. If not, DevNet is where Cisco has embraced outreach to the developer community building for software-defined networking (SDN). Though initially cautious about courting software developers, Cisco has grown into the role and really opened up to help networking professionals adapt to the new software normal in networking. But where is DevNet going from here?
DevNet wasn’t always the darling of Cisco’s offerings. I can remember sitting in on some of the first discussions around Cisco OnePK and thinking to myself, “This is never going to work.”
My hesitation with Cisco’s first attempts to focus on software platforms came from two places. The first was what I saw as Cisco trying to figure out how to extend the platforms to include some programmability. It was more about saying they could do software and less about making that software easy to use or program against. The second was the lack of a home for all of this software knowledge. Programmers and developers are a fickle lot, and you have to have a repository where they can get access to the pieces they need.
DevNet was the place that Cisco should have built from the start. It was a way to get people excited and involved in the process. But it wasn’t for everyone at first. If you didn’t speak developer, you were going to feel lost, even if you were completely fluent in networking and knew what you wanted to accomplish, just not how to get there. DevNet started off as the place to let the curious learn how to combine networking and programming.
DevNet really came into its own about three years ago. I use that timeline because that’s when I first heard that people wanted to spend more time at Cisco Live in the DevNet Zone learning programming and other techniques, and less time in traditional sessions. Considering the long history of Cisco Live, that’s an impressive feat.
More importantly, DevNet changed the conversation for professionals. Instead of just being for the curious, DevNet became a place where anyone could go and find the information they needed. It became a resource. Not just a playground. Instead of poking around and playing with things it became a place to go and figure things out. Or a place to learn more about a new technology that you wanted to implement, like automation. If the regular sessions at Cisco Live were what you had to learn, DevNet is where you wanted to go and learn.
Susie Wee (@SusieWee) deserves all the credit in the world here. She has seen what the developer community needs to thrive inside of Cisco and she’s delivered it. She’s the kind of ambassador that can go between the various Cisco business units (BUs) and foster the kind of attitudes that people need to have to succeed. It’s no longer about turf wars or fiefdoms. Instead, it’s about leveraging a common platform for developers and networkers alike to find a common ground to build from. But even that’s not enough to complete the vision.
During Cisco Live 2019, I talked quite a bit with Susie and her team. And one of the things that struck me from our conversations was not how DevNet was an open and amazing place. Or how they were adding sessions as fast as they could find instructors. It was that so many people weren’t taking advantage of it. That’s when I realized that DevNet needs to shift their focus. Instead of just providing a place for networking people to learn, they’re going to have to go on the offensive.
DevNet needs to enhance and increase their outreach programs. Being a static resource is fine when your audience is eager to learn and looking for answers. But those people have already flocked to the DevNet banner. For things to grow, DevNet needs to pull the laggards along. The people who think automation is just a fad. Or that SDN is in the Trough of Disillusionment from a pretty Gartner graphic. DevNet has momentum, and soon will have the certification program needed to help networking people show off their transformation to developers.
For DevNet to really succeed, they need to be grabbing people by the collar and dragging them to the new reality of networking. It’s not enough to give people a place to do research on nice-to-have projects. You’re going to have to get the people engaged and motivated. That means committing resources to entry-level outreach. Maybe even building a DevNet Academy similar to the Cisco Academy. But it has to happen. Because the people that aren’t already at DevNet aren’t going to get there on their own. They need a push (or a pull) to find out what they don’t know that they don’t know.
It’s a crazy idea to think that a network built to be completely decentralized and resilient can be so easily knocked offline in a matter of minutes. But that basically happened twice in the past couple of weeks. CloudFlare is a service provider that offers to sit in front of your website and provide all kinds of important services. They can prevent smaller sites from being knocked offline by an influx of traffic. They can provide security and DNS services for you. They’re quickly becoming an indispensable part of the way the Internet functions. So what happens when we all start to rely on one service too much?
The first outage on June 24, 2019 wasn’t the fault of CloudFlare. A small service provider in Pennsylvania decided to use a BGP Optimizer from Noction to do some route optimization inside their autonomous system (AS). That in and of itself shouldn’t have caused a problem. At least, not until someone leaked those routes to the greater internet.
It was a comedy of errors. The provider in question announced their more specific routes to an upstream customer, who in turn announced them to Verizon. After that all bets are off. Because those routes were more specific than the aggregates they became the preferred routes. And when the whole world beats a path to your door to get to the rest of the world, you see issues.
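The mechanics here can be sketched in a few lines: routers prefer the longest (most specific) matching prefix, so a leaked more-specific route wins everywhere it propagates, regardless of how legitimate the aggregate is. Here is a minimal longest-prefix-match sketch in Python; the prefixes and next-hop names are made up purely for illustration:

```python
import ipaddress

def best_route(dest, routes):
    """Return the next hop for the most specific prefix covering dest.

    Longest-prefix match: among all prefixes that contain dest, the one
    with the largest prefix length wins. This is why a leaked /24 beats
    the legitimate /12 aggregate everywhere the leak propagates.
    """
    addr = ipaddress.ip_address(dest)
    matches = [
        (ipaddress.ip_network(prefix), next_hop)
        for prefix, next_hop in routes.items()
        if addr in ipaddress.ip_network(prefix)
    ]
    if not matches:
        return None
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# Hypothetical routing table: an aggregate plus a leaked more-specific.
routes = {
    "104.16.0.0/12": "legit-transit",  # legitimate aggregate
    "104.16.32.0/24": "leaked-path",   # leaked more-specific route
}
print(best_route("104.16.32.10", routes))  # the /24 wins: leaked-path
```

The same selection logic, applied at every router that accepted the leaked routes, is what steered a large chunk of the Internet’s traffic down a path that couldn’t carry it.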
Those issues caused CloudFlare to go offline. And when CloudFlare goes offline everyone starts having issues. The sites they are the front end for go offline, even if the path to those sites is still valid. That’s because CloudFlare is acting as the front end for your site when you use their service. It’s great because it means that when someone knocks your system offline or hits you with a ton of traffic you’re safe, because CloudFlare can support a lot more bandwidth than you can, especially if you’re self-hosted. But if CloudFlare is out, you’re out of luck.
There was a pretty important lesson to be learned in all this and CloudFlare did an okay job of explaining some of those lessons. But the tone of their article was a bit standoffish and seemed to imply that the people whose responsibility it was to keep the Internet running should do a better job of keeping their house in order. For those of you playing along at home, you’ll realize that the irony overlords were immediately summoned to mete out justice to CloudFlare.
On July 2nd, CloudFlare went down again. This time, instead of seeing issues with routing packets or delays, users of the service were greeted with 502 Bad Gateway errors. Again, when CloudFlare is down your site is down even if you’re not offline. And then the speculation started. Was this another BGP hijack? Was CloudFlare being attacked? No one knew and most of the places you could go look were offline, including one of the biggest offline site detectors, which was a user of CloudFlare services.
CloudFlare eventually posted a blog owning up to the fact that it wasn’t an attack or a BGP issue, but instead was the result of a bad web application firewall (WAF) rule being deployed globally in one go. A single regular expression (regex) was responsible for spiking the CPU utilization of the entirety of the CloudFlare network. And when all your CPUs are cranking along at 100% utilization across the board, you are effectively offline.
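That failure mode is worth seeing concretely. The snippet below is not CloudFlare’s actual WAF rule; it’s a classic catastrophically backtracking pattern that shows how a single regex can eat a CPU when it fails to match:

```python
import re
import time

# Illustrative only, not CloudFlare's published rule. On input that
# fails to match, the nested quantifier (a+)+ forces the engine to try
# every way of splitting the run of 'a's between the inner and outer
# repetitions: time exponential in the input length.
pattern = re.compile(r"^(a+)+b$")

for n in (10, 16, 20):
    subject = "a" * n  # no trailing 'b', so every attempt must fail
    start = time.perf_counter()
    assert pattern.match(subject) is None
    print(f"n={n}: {time.perf_counter() - start:.4f}s")
```

Each time the input grows by one character, the work roughly doubles; push `n` high enough and a single match attempt pins a core indefinitely. Multiply that by every request hitting every edge node, and a global 100% CPU spike stops looking mysterious.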
In the post-mortem, CloudFlare had to eat a little crow and admit that their testing procedures for catching this particular issue were inadequate. To see the stance they took with Verizon and Noction just a week or so before, and then to see them admit that this one was all on them, was a bit humbling for sure. But, more importantly, it shows that you have to be vigilant in every part of your organization to ensure that something you deploy isn’t going to cause havoc on the other side. Especially if you’re the responsible party for a large percentage of traffic on the web.
I think CloudFlare is doing good work with their services. But I also think that too many people are relying on them to provide services that should be planned out and documented. It’s important to realize that no one service is going to provide all the things you need to stay resilient. You need to know how you’re keeping your site online and what your backup plan is when things go down.
And, if you’re running one of those services, you’d better be careful about running your mouth on the Internet.
Isaac Asimov used the term ‘cerebration session’: “It seems to me then that the purpose of cerebration sessions is not to think up new ideas but to educate the participants in facts and fact-combinations, in theories and vagrant thoughts.” (source) Most people would know this as brainstorming, but you can impress managers and […]
The week of 24th June 2019 was interesting. We had #ferrogate, which made a lot of network engineers very unhappy, and also an ongoing social media thread on code comments. For this discussion, I’m going with the title of “leaving comments in code-expressed artefacts”, because code represents more than writing software. I feel quite passionately about this, having been on the raw end of no code comments and also being guilty of leaving plenty of crappy and unhelpful comments too.
Let’s set a scene. You’ve had a long day and you’re buckled in for what can only be described as a mentally exhausting night. The system architecture is clearly formed in your head and you’re beginning to see issues ahead of time. You can’t quite justify any premature optimisation, but you know this current design has a ceiling. You also know there are system wide intricacies that are not obvious at the component level.
The norm in these scenarios is to insert context-based comments, which make perfect sense at 2am, but at 9am the next day, exhausted, you may be confused as to what on earth happened in the early hours. We’ve all been there.
There are multiple trains of thought for this subjective topic:
Simple blobs of code one would argue do not need line-by-line explanations, but what about a factory function? What if you write a scheduler or pipeline code that deals with creating and setting up parts of an intricate set of concurrent operations? Cognitive load is a real issue. Everyone’s ability to do this is different and it’s dangerous to either assume everyone can do it or put yourself in the picture as a hero.
Leaving links in your code to a design is a great idea, barring the obvious issue of not having a design. This might be a tipping point for your organisation. Moving fast and light is great, but without a design, how do you know where you’re going? If you’re laughing whilst reading this and asking the obvious question out loud: it’s true, many projects lack a formal design. I treat designs as living documents. They change with lessons learnt and can become highly valued artefacts. I can assure you that as the industry dances with microservice patterns, designs will become more important.
Meaningful comments are those that contain a set of mental steps to help build a cognitive state. A better approach would be to describe what Mars looks like, then leave enough telescope setup tips that a reader can see and recognise it for themselves. Meaningful comments can span multiple lines and can contain internal and external links.
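To make that concrete, here’s a small sketch of the kind of “why, not what” comment I mean, on a factory function. The rate-limit figure and the doc path are invented for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Poller:
    device: str
    interval_s: int

def make_poller(device: str, fast: bool = False) -> Poller:
    # WHY, not WHAT: sub-30s polling trips the vendor API's rate
    # limiter (hypothetical limit, for this sketch), so "fast" still
    # floors at 30s. System-wide polling budget lives in the living
    # design doc: docs/polling-design.md (illustrative path).
    interval = 30 if fast else 300
    return Poller(device=device, interval_s=interval)

print(make_poller("edge-rtr-1", fast=True))
```

The comment doesn’t restate the ternary; it records the constraint that shaped it and points at the design artefact, which is exactly the mental state a 9am reader needs rebuilt.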
As with everything in IT, this is subjective and leaving comments depends on many factors. My preference is to provide a tour guide manual or script for whatever it is you are building. If you build complex workflows in automation, this information still stands. Code can represent mechanical states, software imperatives and communication methodologies. The things we humans build are formed in a part of us that scientists themselves don’t fully understand. Your brain’s view is unique to you and anything you can do to help someone else view the world through your eyes, especially regarding complex systems, will be greatly received. It can be the difference between success and failure.
This post is a set of my views and as always, I welcome discussion and engagement. Thanks for reading.
In a post which now appears to have been deleted, Greg Ferro got right to the point in his article Response: Certifications Are Not A Big Deal. Stop Being a Princess About It. The majority of this (my) response was written while Greg’s post was still active, but I had to come back and inject more context after I spotted on June 30, 2019 that the post had become unavailable.
To save you digging in the WayBackMachine, the history to Greg’s post as I understand it is that Greg made some comments in Episode 238 of the Packet Pushers’ Network Break suggesting that vendor certifications were trivial. A listener evidently gave some strong feedback disagreeing with this, and so in Episode 239 of the Packet Pushers’ Network Break Greg responded to that feedback and reiterated his position about certification study, specifically framed around Cisco’s CCNP. Greg made some reasonable points: the certification programs from the vendors are not designed to teach fundamentals in the way that, say, a computer science degree might; the aim is really to make money for the vendor and reduce their tech support costs; as such, the vendor certification education programs are compromised; and for anybody with a CS degree, vendor certifications are fairly trivial. I don’t agree with everything Greg said, but it’s a good discussion point, at least.
It seems that user “Toothy McGrin”, however, did not appreciate Greg’s attitude regarding the triviality of certifications and minimal time spent on certification training versus “real” education. I can’t find Toothy’s original comment online, but Greg quoted that he said:
CCNA/CCNP may not be a big deal in the circles you travel in, but for a lot of employees and employers they are. They still require sacrificing a lot of time that could be spent with your family or friends or just R&R after a 10 hour work day, and the vast majority are self financed.
I don’t like the angle Greg took in responding to Toothy McGrin, but it does make me think about our motivations for getting a vendor certification, regardless of how easy or difficult they might be to achieve, and what the true educational value is of the certification.
Why do we get vendor certifications in the first place?
Many vendors oblige their resellers/integrators and channel partners to have a particular number and/or level of certified droids on staff. This makes sense up to a point; after all, it’s nice for customers to think that an integrator, for example, might design, sell, install and support equipment that they actually know something about. Naturally, having a bunch of certified nerds isn’t enough, so some vendors have an additional certification program specifically aimed at sales and pre-sales, to make sure that the most expensive appropriate devices are specified in the design.
To be fair to “Toothy McGrin”, it’s ok to whine about putting lots of effort into a cert if it’s one you are doing only because you are forced to do it if you want to keep your job.
“I’m a CCIE, so I know what I’m talking about.” Yeah, but no. As I’ve covered previously, a certification does not make your opinion any more valid than anybody else’s. In fact, to quote myself from that article:
It’s what we DO that determines our value.
It’s ok to be proud at having earned a difficult certification; in and of itself, that’s a worthy achievement. But don’t brag on it. Please don’t brag on it. I can pretty much guarantee that however good somebody thinks they are because of that certification, there are plenty of people out there without the same cert who can engineer them into a corner with one arm tied behind their back.
Note: Braggers don’t get to be Whiners.
Yes, it’s the mountain-climber’s rationale. And honestly, it’s ok to collect certifications like My Coke Rewards if that gives a sense of satisfaction. That said, if that’s the motivation for certification, I’m not sure that such a person would be entitled to whine about their decision to go after a string of letters that are only used to pollute their email signature.
Stop right there. Certifications do not make you a better engineer.
There’s something here that’s valid, but there’s a complicated relationship between the process of learning things and that of taking a certification exam. I believe there’s a saying about how reaching a goal is not as important as the journey taken to get there, and the end goal of achieving a certification makes the journey somewhat tricky because it makes you learn things along the way that aren’t necessarily helpful.
Let’s be clear; rote memorization is not learning. Rote memorization will, however, likely get you through most certification exams, because most certification exams tend to focus on testing the (short-term) retention of facts. Some people find memorization easy, and such people will likely find certification exams easy. Others don’t work like that, and for them, certification exams are likely to be a real struggle.
The typical multiple-guess certification exam format is not capable of testing a candidate’s underlying understanding of a technology, and even the “simulation” questions tend to revolve around the memorization of implementation steps. Put bluntly, multiple choice exams are not good preparation for designing and operating a network, which may be why certifications like the Cisco CCIE and Juniper JNCIE have proven to be relatively highly valued (argue about this if you wish), as there’s a hands-on element to the testing.
With that said, however, candidates do have some choice about how they learn the facts that will be on the certification tests; there are decisions to be made about which route to take on their journey to certification. The fastest way to certification is to buy a certification study guide book which is focused solely on teaching the things likely to be in the test and may even provide online simulators in which the candidate can practice exam scenarios (I’m assuming that the days of including a CD with a study guide are long gone).
The slower but more effective way is to find resources which teach the protocols and technologies. Practice what is learned on real (or virtual) devices. Build networks; lots of them. When a scenario doesn’t work, troubleshooting is a superb way to find out if you’ve understood the fundamentals behind that scenario. Discuss them with other people. Try teaching the topic to somebody else, as teaching is a great way to find out what you don’t really understand.
Well, it sure can’t hurt to have a certification on a resume. If nothing else, it may help a candidate sort higher in a pile of similar resumes. Of course, with no hands-on job experience, the resume could end up going right back down the stack, but as Greg rightly points out, lazy recruiting agents and lazy hiring managers can be a thing, and sometimes they do use certs as a way to whittle the field down before looking any further. There’s a reason that people will do anything to include the letters “CCIE” in their resume, even if they haven’t taken any concrete steps towards the cert itself.
“Toothy McGrin” makes an interesting comment about most people self-funding their certification paths. I don’t know whether Toothy (may I call you Toothy?) is correct that the majority self-fund, but I’m sure many people do. Ultimately though, it’s a means to an end, isn’t it? I mean, do people go through college and incur student loan debt just for fun? Ok, some people might, but most do it because they believe that having an Associate’s or Bachelor’s degree makes them more marketable and will help them earn more. In some cases that education is the bar for entry into their chosen profession. Ironically, in networking it’s rarely necessary to have a degree in telecommunications or similar; just having a degree at all seems to be the check mark for many jobs. Either way, it would be nice to think that the investment made (in both time and money) learning networking would end up being rewarded by way of a paycheck.
Ever heard the saying that nothing worth having comes easily?
“[CCNA/CCNP] still require sacrificing a lot of time that could be spent with your family or friends or just R&R after a 10 hour work day,”
Yes, certs do require sacrifice, and if they didn’t, everybody would have them and they’d be completely meaningless (perhaps Greg would say “even more meaningless”?). But go back to the first section of this post: why does one work to attain a certification? It’s a kind of rubber stamp (of whatever perceived value) of learning. If the aim was to learn stuff and improve skills, did you use your time wisely to actually learn and practice those skills, rather than learning how to pass the test? If so, then wasn’t that sacrifice worth it? You’ll be able to prove your value much more effectively than somebody who just studied for the exam. Are you now better at your job? Are you able to make better decisions because of what you learned? Does that in itself make you worth more and more promotable? That’s the payback. Ironically, of course, to attain the certification, you’ll still have to top up those fundamentals with a bunch of useless trivia to answer all of the questions on a certification exam, because that’s how those exams work.
As for the 10-hour work day, well, I don’t know what Toothy’s day job is; for all I know, they are working in an Amazon Fulfillment Center* to pay the bills while trying to develop their networking skills at home. I don’t think it changes the fact that any form of self-improvement takes sacrifice of some sort. Ask anybody who does body-building whether the time they spend in the gym takes them away from their family and friends, or whether they have to invest money in gym fees? I sympathize (empathize, in fact) with coming home from a long day at work, whatever that work might be, and then having to turn your brain on again in order to study. I can only suggest explaining to family and friends what you’re doing, and why, and hope that they understand and can be supportive of your goals for self-improvement.
Certifications alone mean very little, or at least they should do, but we know that some people use them as a filtering technique. Some people think that the more certifications you can cram into a resume, the better. Personally, if I see somebody with 20 certs, my gut reaction is that they are simply a good test taker and fact-memorizer, and there’s a better than average chance (based on my experience) that they’ve had relatively little hands on with the subjects of all those certifications. It’s pretty easy to weed out the test-takers at interview so long as – and this is really important – one doesn’t structure the interview just like a vendor test, asking the candidate for the kind of trivia they would memorize for a multiple choice exam. If a candidate does well at interview and demonstrates an understanding of the technologies and protocols, not just vendor exam trivia, the certs simply don’t matter.
Many companies use agencies to submit resumes for job listings, and in order to get to that interview in the first place, a resume has to somehow bubble up high enough that the agent is willing to put the candidate forward. Recruiters don’t always have the skills to speak to a candidate and judge which ones are any good (because they aren’t skilled in the technologies either) so a resume listing certifications looks like a better choice to send onward to the client. Some hiring managers (where there are direct resume submissions) also evaluate on a similar basis, as I said earlier.
And therein lies the rub. Certifications are easy to get for some, and hard to get for others, depending on whether you are a good test-taker, good at memorizing things, or good at learning fundamentals. That certification is relatively meaningless at interview but in order to get selected for interview it is necessary to have one or more certifications to raise the resume higher in the pile.
The good news is that most people are in the same boat; many of the people competing for the same jobs are also likely self-funding. The even better news is that there is more free training, documentation, guides and great information out there than ever before, and relatively easy availability of virtual devices on which to practice developing skills. That’s going to take time and sacrifice and most likely some funding which, as an investment in your career, is probably worth it.
The bottom line is that until we have a recruiting system which can filter candidates on something better than lists of certifications on resumes, certifications remain table stakes in order to get to interview. If we had a better way to rank candidates, it would probably help people who have thousands of hours of hands-on skills, probably gained through that personal sacrifice. Whether one takes an exam at the end of all that hard work would become almost irrelevant, but it wouldn’t remove the need to work at the technology in the first place.
Are vendor certification exams easy? No, but they’re easier for some people than for others. As the “viewers” of those certifications in others, we can bear that in mind.
Should vendor certification exams be dismissed as irrelevant? No; they do prove an ability to learn some information, much as a degree shows an ability to learn some information. As I’ve said about my own CCIE,
“A CCIE is a network engineer who knew how to answer the specific questions that were asked on the day they turned up for the written exam and the lab.”
The same applies to pretty much any other vendor cert (give or take the lab in most cases). How we choose to value that, well, that’s up to us.
Do vendor certifications require sacrifice? More than likely, yes. I think Toothy felt that Greg was mocking and dismissing the sacrifice that students of vendor certifications are making, but the more I have read Greg’s response and filtered out the hyperbole, the more I believe that Greg was reacting to the implication that making a sacrifice somehow entitles somebody to special consideration (or maybe a participation trophy), and it really doesn’t; most of us have been in pretty much the same situation.
I’ve lauded Greg in the past as a champion of sharing information and of developing the networking community, and I stand by that. On the other hand, Greg’s response to Toothy McGrin was unbecoming; it seemed personal, insulting, patronizing, dismissive and uncalled for. It was certainly not in the spirit of building community, and perhaps the post’s removal was inevitable.
However, I’m not a fan of revisionist blogging and I would rather have seen an apology attached to the post than to see its removal. (Of course, maybe there was something else behind the scenes that I’m not aware of.)
Updated July 1, 2019: Credit to Greg; he posted an apology today after being offline over the weekend.
It’s easy to “zing” people – and companies – on the internet. With blogging platforms and social media, we get to publish pithy, sassy diatribes about things, created entirely without any counter-argument, and boy do they feel good at the time. But as Tom Hanks (portraying Joe Fox) so accurately said in the movie You’ve Got Mail:
Someone upsets you and instead of smiling and moving on, you zing them. “Hello, it’s Mr Nasty.” […] But then, on the other hand, I must warn you that when you finally have the pleasure of saying the thing you mean to say at the moment you mean to say it, remorse inevitably follows.
The fantastic Mr Fox has a point, and I suspect we’ve all been there at one time or another.
Toothy McGrin, hang in there, keep studying and remember that vendor certification does not equate to your ability to do the job. To be successful, you need to learn, practice and troubleshoot the fundamentals, and then the vendor-specific implementations of those will come much more easily to you over time. Plus, you’ll be a way better engineer.
6/20/2019: Edited to add: For another great perspective on this, check out the post “Arguments Gone Wrong” from @ghostinthenet. Great point of view, and good additional reading in my opinion.
7/1/2019: Edited to include a link to Greg’s apology.
* When I hear “Amazon Fulfillment Center”, somehow I always think of the “IOI Loyalty Centers” in the movie version of Ready Player One.
If you liked this post, please do click through to the source at Response to “Certifications Are Not A Big Deal. Stop Being a Princess About It.” and give me a share/like. Thank you!
Juniper’s Mist acquisition is getting a dose of the SDN Campus and it’s coming up in a nasty rash. The symptoms are: an overlay network using L2TPv3 (aka MPLS for ordinary people) and a software controller badged as AI-driven microservice cloud architecture insight into the user experience. Actually, before we press on, this is the twaddle […]
I’ve been trawling through the Extreme Networks Announces Intent to Acquire Aerohive Networks – Investor Presentation – https://investor.extremenetworks.com/static-files/16c92f7a-212b-48ae-86bc-aa132251b1af – and I’ve picked out some highlights for those wondering about the key takeaways: Aerohive can be positioned to compete with Meraki, which is a good fit for existing customers, and Extreme gets a basis on which to build SD-WAN products that […]
I stumbled over this IOS configuration in a folder today. I don’t remember what or who it was for, but it looks like it’s a Mainframe Front End Processor to IPX configuration. The FEP was likely connected using the SDLC protocol on the serial interfaces, and IPX was operating on a 16 Megabit Token Ring interface using […]
I must admit that I was wrong. After almost six years, I was mistaken about who would end up buying Aerohive. You may recall that back in 2013 I made a prediction that Aerohive would end up being bought by Dell. I recall it frequently because quite a few people still point out that post and wonder whether it’s happened yet.
Alas, June 26, 2019 is the date when I was finally proven wrong, when Extreme Networks announced plans to purchase Aerohive for $4.45/share, which equates to around $272 million, to be adjusted for some cash on hand. Aerohive is the latest addition to the Extreme portfolio, which now includes pieces of Brocade, Avaya, Enterasys, and Motorola/Zebra.
Why did Extreme buy Aerohive? I know that several people in the industry told me they called this months ago, but that doesn’t explain the reasoning behind spending almost $300 million right before the end of the fiscal year. What was the draw that had Extreme buzzing about this particular company?
The most apparent answer is HiveManager. Why? Because it’s really the only thing unique to Aerohive that Extreme really didn’t have already. Aerohive’s APs aren’t custom built. Aerohive’s switching line was rebadged from an ODM in order to meet the requirements to be included in Gartner’s Wired and Wireless Magic Quadrant. So the real draw was the software. The cloud management platform that Aerohive has pushed as their crown jewel for a number of years.
I’ll admit that HiveManager is a very nice piece of management software. It’s easy to use and has a lot of power behind the scenes. It’s also capable of being tuned for very specific vertical requirements, such as education. You can set up self-service portals and Private Pre-Shared Keys (PPSKs) fairly easily for your users. You can also build a lot of policy around the pieces of your network, both hardware and users. That’s a place to start your journey.
Why? Because Extreme is all about Automation! I talked to their team a few weeks ago and the story was all about building automation platforms. Extreme wants to have systems that are highly integrated and capable of doing things to make life easier for administrators. That means having the control pieces in place. And I’m not sure if what Extreme had already was in the same league as HiveManager. But I doubt Extreme has put as much effort into their software yet as Aerohive had invested in theirs over the past 8 years.
For Extreme to really build out the edge network of the future, they need to have a cloud-based management system that has easy policy creation and can be extended to include not only wireless access points but wired switches and other data center automation. If you look at what is happening with intent-based networking from other networking companies, you know how important policy definition is to the schema of your network going forward. In order to get that policy engine up and running quickly to feed the automation engine, Extreme made the call to buy it.
More importantly than the software piece, to me at least, is the people. Sure, you can have a bunch of people hacking away at code for a lot of hours to build something great. You can even choose to buy that something great from someone else and just start modifying it to your needs. Extreme knew that adapting HiveManager to fulfill the needs of their platform wasn’t going to be a walk in the park. So bringing the Aerohive team on board makes the most sense to me.
But it’s also important to realize who had a big hand in making the call. Abby Strong (@WiFi_Princess) is the VP of Product Marketing at Extreme. Before that she held the same role at Aerohive in some fashion for a number of years. She drove Aerohive to where they were before moving over to Extreme to do something similar.
When you’re building a team, how do you do it? Do you run out and find random people that you think are the best for the job and hope they gel quickly? Do you just throw darts at a stack of resumes and hope random chance favors your bold strategy? Or do you look at existing teams that work well together and can pull off amazing feats of technical talent with the right motivation? I’d say the third option is the most successful, wouldn’t you?
It’s not unheard of in the wireless industry for an entire team to move back and forth between companies. There’s a hospitality team that’s moved back and forth between Ruckus, Aerohive, and Ubiquiti. There are other teams, like some working on 802.11u, that bounced around a couple of times before they found a home. Which makes me wonder if Extreme bought Aerohive for HiveManager and ended up with the development team as a bonus? Or if they decided to buy the development team and got the software for “free”?
We all knew Aerohive was putting itself on the market. You don’t shed sales staff and middle management unless you’re making yourself a very attractive target for acquisition. I still held out hope that maybe Dell would come through for me and make my five-year-old prediction prescient. Instead, the right company snapped up Aerohive for next to nothing and will start in earnest integrating HiveManager into their stack in the coming months. I don’t know what the future plans for further integration look like, but the wireless world is buzzing right now and that should make life extremely sweet for the Aerohive team.
I want to apologise for this post, which didn’t meet my own standards. I broke one of my personal golden rules for blogging, ‘no personal attacks’, when I made reference to a specific person who commented. The post is now deleted for that reason. This is in no way acceptable. I unreservedly apologise to the person involved […]
The post Response: Certifications Are Not A Big Deal. Stop Being a Princess About It. appeared first on EtherealMind.
I have the impression that I ‘know’ most of this but a refresher put it back to top of mind.
The post Video: Ergonomics Expert Explains How to Set Up Your Desk – WSJ appeared first on EtherealMind.
It’s high time for another summer break (I get closer and closer to burnout every year - either I’m working too hard or I’m getting older ;).
Of course we’ll do our best to reply to support (and sales ;) requests, but it might take us a bit longer than usual. I will publish an occasional ‘worth reading’ or ‘watch out’ blog post, but don’t expect anything deeply technical for the next two months.
In the meantime, try to get away from work (hint: automating stuff sometimes helps ;), turn off the Internet, and enjoy a few days in your favorite spot with your loved ones!
Daniel Teycheney attended the Spring 2019 Building Network Automation Solutions online course and sent me this feedback after completing it (and creating some interesting real-life solutions on the way):
I spent a bit of time the other day reflecting on how much I’ve learnt from the course in terms of technical skills, and the amount I’ve learned has been great. I literally had no idea about things like Git, Jinja2, CI testing, or reading YAML files, and had only briefly seen Ansible before.
I’m not an expert now, but I understand these things and have real practical experience on these subjects which has given me great confidence to push on and keep getting better. Read more ...
Another Cisco Live is in the books for me. I was a bit shocked to realize this was my 14th event in a row. I’ve been going to Cisco Live for half of the time it’s been around! This year it was back in San Diego, which has good and bad points. I’d like to discuss a few of them here and get the thoughts of the community.
Good: The Social Media Hub Has Been Freed! – After last year’s issues with the Social Media Hub being locked behind the World of Solutions, someone at Cisco woke up and realized that social people don’t keep the same hours as the show floor people. So, the Hub was located in a breezeway between the Sails Pavilion and the rest of the convention center. And it was great. People congregated. Couches were used. Discussions were had. And the community was able to come together again, not just during the hours when the show floor was open, but all day long. This picture of the big meeting on Thursday just solidifies in my mind why the Social Media Hub has to be in a common area:
You don’t get this kind of interaction anywhere else!
Good: Community Leaders Step Forward – Not gonna lie. I feel disconnected sometimes. My job at Tech Field Day takes me away from the action. I spend more time in special sessions than I do in the social media hub. For any other place that could spell disaster. But not for Cisco Live. When the community needs a leader, someone steps forward to fill the role. This year, I was happy to see my good friend Denise Fishburne filling that role. The session above was filled with people paying rapt attention to Fish’s stories and her bringing people into the community. She’s a master at this kind of interaction. I was even proud to sit on the edge and watch her work her craft.
Fish is the d’Artagnan of the group. She may be part of the Musketeers of Social Media but Fish is undoubtedly the leader. A community should hope to have a leader that is as passionate and involved as she is, especially given her prominent role in Cisco. I feel like she can be the director of what the people in the Social Media Hub need. And I’m happy to call her my friend.
Bad: Passes Still Suck – You don’t have to do the math to figure out that $700 is bigger than $200. And that $600/night is worse than $200/night. And yet, for some reason we find ourselves in San Diego, where the Gaslamp hotels are beyond insane, wondering what exactly we’re getting with our $700 event pass. Sessions? Nope. Lunch? Well, sort of. Access to the show floor? Only when it’s open for the random times during the week. Compelling content? That’s the most subjective piece of all. And yet Cisco is still trying to tell us that the idea of a $200 social-only pass doesn’t make sense.
Fine. I get it. Cisco wants to keep the budgets for Cisco Live high. They got the Foo Fighters after all, right? They also don’t have to worry about policing the snacks and food everywhere. Or at least not ordering the lowest line items on the menu. Which means less fussing about piddly things inside the convention center. And for the next two years it’s going to work out just great in Las Vegas. Because Vegas is affordable with the right setup. People are already booking rooms at the surrounding hotels. You can stay at the Luxor or the Excalibur for nothing. But if the pass situation is still $700 (or more) in a couple of years you’re going to see a lot of people dropping out. Because….
Bad: WTF?!? San Francisco?!? – I’ve covered this before. My distaste for Moscone is documented. I thought we were going to avoid it this time around. And yet, I found out we’re going back to SF in 2022. WHY?!?!?!? Moscone isn’t any bigger. We didn’t magically find seating for 10,000 extra people. More importantly, the hotel situation in San Fran is worse than ever before. You seriously can’t find a good room this year for VMworld. People are paying upwards of $500/night for a non-air-conditioned shoebox! And why would you do this to yourself, Cisco? Sure, it’s cheap. Your employees don’t need hotel rooms. You can truck everything up. But your cost savings are coming at the customer’s expense, because you would rather they pay through the nose than foot the bill yourself. And Moscone still won’t hold the whole conference. We’ll be spilled over into 8 different hotels, walking from who knows where to get to the slightly nicer shack of a convention center. I’m not saying that Cisco Live needs to be in Vegas every year. But it’s time for Cisco to start understanding that their conference needs a real convention center. And Moscone ain’t it.
Better: Going Back to Orlando – As you can see above, I’ve edited this post to include new information about Cisco Live 2022. I have been informed by multiple people, including internal Cisco folks, that Live 2022 is going to Orlando and not SF. My original discussion about Cisco Live in SF came from other sources with no hard confirmation. I believe now it was floated as a trial balloon to see how the community would respond. Which means all my statements above still stand regarding SF. Now it just means that there’s a different date attached to it.
Orlando is a better town for conventions than SF. It’s on-par with San Diego with the benefit that hotels are way cheaper for people because of the large amount of tourism. I think it’s time that Cisco did some serious soul searching to find a new venue that isn’t in California or Florida for Cisco Live. Because if all we’re going to do is bounce back and forth between San Diego and Orlando and Vegas over and over again, maybe it’s time to just move Cisco Live to Vegas and be done with the moving.
Cisco Live is something important to me. It has been for years, especially with the community that’s been created. There’s nothing like it anywhere else. Sure, there have been some questionable decisions and changes here and there. But the community survives because it rededicates itself every year to being about the people. I wasn’t kidding when I tweeted this:
Because the real heart of the community is each and every one of the people that get on a plane and make the choice time and again to be a part of something special. That kind of dedication makes us all better in every possible way.
HPE announced a marketing campaign built around the idea of Cloudless. I see this as a superb bit of trolling, as the cloudista faithful have been delightfully duped into talking about HPE and highlighting how narrow-minded they are. Most of them don’t even realise just how hard they are being rick-rolled here. It’s bloody […]
When I was still at university the fourth-generation programming languages were all the hype, prompting us to make jokes along the lines of “fifth generation will implement do what I don’t know how”. Read more ...
The company ‘Helium’ appears to be attempting to build a national Low Power WAN (LPWAN) carrier network by asking ordinary people to buy and operate network nodes for it. The hotspots may be purchased directly or bundled with 3rd-party IoT products, and become nodes in a proprietary LPWAN that mines tokens on a blockchain.
The post Helium – Venture Capital Con Job or Viable Business ? appeared first on EtherealMind.
If you’re very old (like me) you’ll likely remember the halcyon days when IP routing was not enabled by default on Cisco routers. Younger gamers may find this hard to believe, which makes it even stranger when I keep bumping into an apparently common misconception about how routers work. Let’s take a look at what I’m beefing about.
To put this in context for the younger gamers, it’s worth noting that at the time, a typical “enterprise” might be running IP, but was equally likely to run IPX, AppleTalk, DECnet or some other protocol which may – or may not – support routing. Yes, there was life before the Internet Protocol became ubiquitous. If you’re curious, the command to enable IP routing is, well:
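```
ip routing
```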
Guess how IPX routing was enabled:
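```
ipx routing
```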
DECnet Phase IV?
decnet [network-number] routing <decnet-address>
Ok, so the pattern isn’t entirely consistent, but it’s close enough. In one way things are much simpler now because routers tend to handle IP (and IPv6) and nothing else. On the other hand there are so many more IP-related features available, I think we should just be grateful that there’s only one underlying protocol to worry about.
Assuming that a router has IP routing enabled by default, here’s my gripe. Consider this simple network topology:

[Figure: “Totally High Quality Network Diagram”]
The image shows a router with two connected subnets, each of which is connected to a switch with a PC attached. The PCs each have an IP address on their respective networks, and a default gateway pointing to the router interface. I’ve used this diagram to ask a variety of simple interview questions over the last ten years or so, and as part of that I’ve asked a number of candidates to consider the scenario where PC-A cannot ping PC-B, and to describe troubleshooting steps that might be taken to determine the cause.
On a number of those occasions, a candidate has said they would check the routing table on R1. When asked to explain what they would be looking for, the candidate explains that perhaps the router didn’t have a route for one side or the other, so they’d check that it had routes. “What kind of routes?” you might ask (and I did). The candidates would then explain that there needed to be either static routing or dynamic routing on the router. Some are hesitant on the dynamic routing part, but all who go down this path explain the need for a static route to each of the attached subnets.
I really struggle to understand this. I have wondered whether it’s something inherited from the Linux world, where a netstat -rn or route shows the subnet seemingly pointing to an interface, e.g.:

$ netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   Iface
10.1.1.0        0.0.0.0         255.255.255.0   U       eth0   <---
0.0.0.0         10.1.1.1        0.0.0.0         UG      eth0
What’s interesting is that most candidates can also explain how Cisco’s administrative distance (AD) is used, and cite some common values, for example:
AD | Protocol
---+----------
 0 | Connected
 1 | Static
The candidates are typically clear that where multiple routes exist for a destination, the route with the lower AD will be selected. They’re also clear that an attached interface counts as “Connected”. The fact that a connected route would override the proposed static route doesn’t seem to register, or at least it doesn’t until the conflict is pointed out, at which point it’s as if their understanding of the whole world has just been turned upside down.
If this were something presented only by the very occasional candidate, I’d say it was just one of those things, but this misunderstanding has been offered up so many times over the years that I’ve begun to feel a little sorry for the candidates: clearly somebody out there is spreading misinformation, which they have unfortunately accepted in the absence of anything to contradict it.
Part of this problem, I suspect, is book learning. Ask a candidate to state AD values for a list of protocols, and it’s like looking the information up in a mental table, and the answer will be rattled off with confidence. Ask how AD works or what it does, and the candidate can give a textbook definition of administrative distance on Cisco routers. This is what I can probably best define as “book smarts.” We’ve all been there; we had to learn a product or protocol without the ability at that time to be hands on, so we’ve learned all about something in theory, but have never used it in practice.
I’ve been a Grumpy Old Man about this before, and if you go to that post, jump to the heading “Rote Memorization” to get my views on it. Does Feynman’s story sound familiar? This problem is in part due to the way many vendor tests are structured, favoring trivia over actual understanding, and I can’t really blame the candidates for memorizing in this manner when that’s what will let them pass the trivia test.
Nonetheless, for the sake of my sanity, please let’s be clear that a router – with IP routing enabled – will by default route packets between its connected interfaces without help from static or dynamic routes. Yes there are some exceptions I can think of (usually revolving around same-interface routing), but in this simple scenario, this is how it is.
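To make that concrete, here is roughly what R1’s routing table would show with nothing but interface addresses configured (output abbreviated, addresses invented for this sketch):

```
R1# show ip route
Codes: C - connected, S - static, ...
Gateway of last resort is not set

C    10.1.1.0/24 is directly connected, GigabitEthernet0/0
C    10.1.2.0/24 is directly connected, GigabitEthernet0/1
```

Both subnets show up as connected routes with no static or dynamic routing configured, and at AD 0 they would beat any static route a candidate might be tempted to add.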
I really have wondered if there’s a CCENT-type textbook doing the rounds out there which tells the students that they need static routes for connected subnets; it seems strange that so many candidates seem to have had the same bad hallucination about how routers work. Perhaps a disgruntled student created the Free Study Guide equivalent of Monty Python’s Hungarian Phrase Book?
As for the inability to apply what has been learned, it’s possible that this is a result of a lack of practical experience and an excess of test cramming. However, unlike when I was young, optimistic and trying to learn networking, virtualized network devices are now readily available for use, so there’s little excuse for up and coming network engineers not to get some time and experience at the command line.
Or will the next generation only know how to point and click? That’s probably a topic for another rant.
If you liked this post, please do click through to the source at Cranky Old Network Engineer Complains About The Youth Of Today and give me a share/like. Thank you!
Christoph Jaggi sent me this observation during one of our SD-WAN discussions:
The centralized controller is another shortcoming of SD-WAN that hasn’t been really addressed yet. In a global WAN it can and does happen that a region might be cut off due to a cut cable or an attack. Without connection to the central SD-WAN controller the part that is cut off cannot even communicate within itself as there is no control plane…
A controller (or management/provisioning) system is obviously the central point of failure in any network, but we have to go beyond that and ask a simple question: “What happens when the controller cluster fails and/or when nodes lose connectivity to the controller?”Read more ...
Originally published in Human Infrastructure Magazine, Issue 113, May 2019. I’m in New York, USA. I should be taking photos of myself in front of landmarks, having amazing food and laughing as I walk down the street. In reality, I’m sitting in front of a mirror that is screwed to the wall […]
SD-WAN is the best thing that could have happened to networking according to some industry “thought leaders” and $vendor marketers… but it seems there might be a tiny little gap between their rosy picture and reality.
This is what I got from someone blessed with hands-on SD-WAN experience:Read more ...
Effective information security requires gaining visibility into potential threats and preventing the spread of malicious activity when it occurs. In a multicloud environment, it is easy to lose visibility due to lack of control over the underlying infrastructure. Defending those same multicloud environments can be resource intensive, as maintaining consistent security policies across multiple infrastructures is complicated. Juniper Connected Security can help.
I’ve been developing yet more automation recently, and I’ve been hitting two major stumbling blocks that have had a negative impact on my ability to complete the tooling.
When APIs were first made available, the documentation from many vendors was simply incomplete; it seemed that the documentation team was always a release or two behind the people implementing the API. To fix that, a number of vendors have moved to a self-documenting API system along the lines of Swagger. The theory is that if you build an API endpoint, you’re automatically building the documentation for it at the same time, which is a super idea. This has improved endpoint coverage, but in some cases it has resulted in thorough documentation explaining what the endpoints are, with little to no documentation explaining why one would choose to use a particular endpoint.
As a result, with one API in particular I have been losing my mind trying to understand which endpoint I should use to accomplish a particular task, when no less than three of them appear to handle the same thing. I’m then left using trial and error to determine the correct path, and at the end of it I determine which one to use, but don’t really know why.
There are few better ways to waste an afternoon than to have an API endpoint which you call correctly per the documentation, but the call fails for some other reason. The worst one I’ve encountered recently is an HTTP REST API call which returns an HTTP 400 error (implying that the problem is in the request I sent) but with a JSON error message in the returned content saying there was an internal error on the back end. Surely an internal server error should be in the 5xx series? That particular error is caused by a bug which prevents that API call from working correctly when the device is deployed in a cluster. This is infuriating, and it took a long time to track down and confirm as a bug rather than an error on my part.
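The practical consequence is that client code can’t trust the status code alone. As a minimal sketch (the function name and body shape here are my own invention, not from any vendor SDK), a caller can inspect the JSON body for server-side error hints even when the status code claims a client error:

```python
# Hypothetical sketch: classify an API response using both the HTTP status
# code AND the JSON body, because some buggy endpoints return 400 with a
# body describing an internal (server-side) error.

def classify_api_error(status_code, body):
    """Return 'ok', 'client', or 'server' for a decoded JSON response."""
    if status_code < 400:
        return "ok"
    # Check the body for server-side error hints despite a 4xx status
    message = str(body.get("error", "")).lower()
    if status_code >= 500 or "internal" in message:
        return "server"
    return "client"

# The infuriating case from the text: 400 status, internal error in the body
print(classify_api_error(400, {"error": "Internal error on backend"}))  # server
```

This doesn’t fix the vendor’s bug, but it does stop an automation pipeline from blaming its own request when the back end is the real problem.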
Unfortunately this discovery also suggests that as part of the software validation process before code release, either the API is not being fully tested (it has incomplete test coverage) and/or the API is not being tested against devices which are clustered, which, for this device, I’d suggest represents the majority of implementations.
Worse, in some ways, than that are the endpoints which return a valid-looking result and a success code, but are not necessarily providing what was requested. I’ve learned the hard way that just because an API tells you that a request was successful, it’s still necessary during development to manually inspect the returned data to make sure that the API is behaving itself and providing what it claims.
For example, I am working with one API where a request for the FIB (Forwarding Information Base) returns lots of entries. However, closer inspection of those entries reveals that only the first of any ECMP next-hops is being returned; it’s not possible to see all calculated equal cost paths. As a second irritation, the entire FIB cannot be retrieved at once; after much trial and error I determined that it is necessary to page the result in blocks of around 30-35 entries, or the request eventually triggers an internal error and fails. Naturally, the documentation does not indicate that there is any kind of limit on how many FIB entries can be returned safely at one time, nor that there would be an issue with a larger FIB being returned.
Worse, retrieving the RIB (Routing Information Base) from the same API – which thankfully does include the ECMP routes I’m looking for – only returns the first ~60 entries, and ignores any pagination requests entirely, so it’s not possible to see anything but those 60 entries. Again, looking manually allowed me to confirm that although I had asked for entries 90-129, for example, I was still getting the first 60 RIB entries. If I had not looked carefully, I could have made some very bad decisions on the basis of those incomplete data.
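Given behavior like that, pagination has to be treated as hostile until proven otherwise. Here’s a defensive sketch under stated assumptions: fetch_page(offset, limit) is a hypothetical stand-in for whatever call retrieves one page of RIB/FIB entries, and an endpoint that ignores pagination will keep returning the same first page:

```python
# Defensive pagination: detect an endpoint that ignores the requested
# offset (keeps returning the same page) instead of silently accepting
# duplicated, incomplete data.

def fetch_all(fetch_page, limit=30):
    entries = []
    prev_page = None
    offset = 0
    while True:
        page = fetch_page(offset, limit)
        if not page:
            break                      # no more entries
        if page == prev_page:
            # Same data returned for a different offset: pagination ignored
            raise RuntimeError("endpoint ignored pagination request")
        entries.extend(page)
        if len(page) < limit:
            break                      # short page means we reached the end
        prev_page = page
        offset += limit
    return entries
```

Against a well-behaved endpoint this just concatenates the pages; against the RIB endpoint described above it raises instead of handing back the same 60 entries over and over, which is exactly the kind of manual sanity check the API should have made unnecessary.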
If the “show ip route” (or similar) command didn’t work properly in the CLI of a network device, customers would lose their minds, and I am pretty certain that a patched version of code would become available almost immediately. When the API doesn’t work, I get shrugs and promises of a fix in a future release at some unspecified time.
APIs have got big and unwieldy, and that’s partly our fault as users, because – reasonably enough – we want them to allow us to do everything the device can do. The APIs take a lot more effort on the part of the vendors to document and the end result seems to be that in some cases at least, the quality and value of that documentation has decreased even while coverage of endpoint availability and capabilities have increased. Making those APIs usable and understandable is key for developers but also in order to retain customers, because if I can’t figure out how to do something or I lose faith in the API’s reliability on one product, there’s a danger I’ll move to a different product.
As an industry we expect to be able to control everything via an API, and we are automating our business processes based on those APIs. Broken API endpoints mean broken business processes and that’s just not acceptable. I’m also getting a little tired of waiting for one bug to be fixed in a code release, then discovering another API bug and having to repeat the cycle, never quite finding a version of code that I can safely deploy and automate.
APIs have to be functional and reliable or they’re useless. APIs need to be thoroughly tested before shipping code to customers. Perhaps vendors can consider how they might be able to patch bugs in the API in a more agile fashion so that issues can perhaps be fixed without requiring a full code upgrade on a device, which has a high cost to the business. Unfortunately the API seems frequently to be tightly bound to the operating system rather than abstracted safely away from it, which means this will largely remain a dream rather than an actuality.
Most importantly, APIs need to be a first class citizen in the operation of every device, not a “table stakes” feature wedged uncomfortably and unreliably into legacy code.
If you liked this post, please do click through to the source at The Achilles Heel of the API and give me a share/like. Thank you!
My ‘do not use underscores in DNS’ war story: Back in the day when NetBIOS name services (NBNS) mattered more than DNS, people would put names on their machines so they could access the shared resources from the Windows finder. Developers and certain types of ‘security professionals’ who have opinions on underscores vs dashes […]
The post Why I Do Not Use Underscores in DNS, A ‘War’ Story. appeared first on EtherealMind.
One of my readers sent me this question:
How can I learn more about reading REST API information from network devices and storing the data into tables?
Long story short: it’s like learning how to drive (well) - you have to master multiple seemingly-unrelated tasks to get the job done.Read more ...
One of the first things I realized when I started my Azure journey was that the Azure orchestration system is incredibly slow. For example, it takes almost 40 seconds to display six routes from a per-VNIC routing table. Imagine trying to troubleshoot a problem and having to cope with a 30-second delay on every single SHOW command. Cisco IGS/R was faster than that.
If you’re old enough you might remember working with VT100 terminals (or an equivalent) connected to 300 baud modems… where typing too fast risked getting the output out-of-sync resulting in painful screen repaints (here’s an exercise for the youngsters: how long does it take to redraw an 80x24 character screen over a 300 bps connection?). That’s exactly how I felt using Azure CLI - the slow responses I was getting were severely hampering my productivity.Read more ...
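The youngsters’ exercise has a quick back-of-the-envelope answer, assuming classic async serial framing (1 start bit + 8 data bits + 1 stop bit = 10 bits per character) and no compression or escape sequences:

```python
# Redrawing an 80x24 character screen over a 300 bps modem link
chars = 80 * 24                      # 1920 characters on screen
bits_per_char = 10                   # start bit + 8 data bits + stop bit
seconds = chars * bits_per_char / 300
print(seconds)                       # 64.0 -- over a minute per full repaint
```

Over a minute for a single full repaint, which is why getting the output out-of-sync hurt so much.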
I wanted to put down some evidence on why Cisco is more than a networking company.
I consider this useful information for people who are planning their careers, particularly those who are investing in certification programs. It’s my view that Cisco has outgrown networking.
I always love to hear from networking engineers who managed to start their network automation journey. Here’s what one of them wrote after watching Ansible for Networking Engineers webinar (part of paid ipSpace.net subscription, also available as an online course).
This webinar helped me a lot in understanding Ansible and the benefits we can gain. It is a big area to grasp for a non-coder, and this webinar was exactly what I needed to get started (in a lab), including a lot of tips and tricks and how to think. It was more fun than I expected, so I started with Python just to get a better grasp of programming and Jinja.
In early 2019 we made the webinar even better with a series of live sessions covering new features added to recent Ansible releases, from core features (loops) to networking plugins and new declarative intent modules.