April 27, 2017

ipSpace.net Blog (Ivan Pepelnjak)

Worth Reading: Who Moved My Control Plane?

Jordan Martin published a nice summary of what I’ve been preaching for years: a centralized control plane doesn’t work (well), while controller-based network orchestration makes perfect sense.

While I totally agree with what he wrote, he got the hype angle wrong:

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at April 27, 2017 08:30 AM

Update: VMware NSX in Redundant L3-only Data Center Fabric

Short update for those that read the original blog post: it turns out that the answer to the question “Is it possible to run VMware NSX on redundantly-connected hosts in a pure L3 data center fabric?” is still NO.

VTEPs from different ESXi hosts can be in different subnets, but while a single ESXi host might have multiple VTEPs, the only supported way to use them is to put them in the same subnet. I removed the original blog post.

A huge thank you to everyone who pushed me with their comments and emails to find the correct answer.

by Ivan Pepelnjak (noreply@blogger.com) at April 27, 2017 06:39 AM

April 26, 2017

My Etherealmind

Response: Don’t believe the non-programming hype – Paul’s blog

Paul Gear has a great response to a recent Packet Pushers Weekly episode on programming/automation and this particular view that I agree with: Programming isn’t hype; programming is a fundamental IT skill.  If you don’t understand the basics of computer architecture (e.g. CPU instruction pointers, registers, RAM, stacks, cache, etc.) and how to create instructions […]

The post Response: Don’t believe the non-programming hype – Paul’s blog appeared first on EtherealMind.

by Greg Ferro at April 26, 2017 04:02 PM

ipSpace.net Blog (Ivan Pepelnjak)

Mini-RSA in Zurich, NSX, ACI, Automation…

I’ll be doing several on-site workshops in the next two months. Here’s a brief summary of where you could meet me in person.

A bit of manual geolocation first: if you’re from Europe, check out the first few entries; if you’re from the US, there’s important information for you at the bottom; and if you don’t want to travel to Europe or the US, there’s an online course starting in September ;)

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at April 26, 2017 08:06 AM

VMware NSX in Redundant L3-only Data Center Fabric

During the Networking in Private and Public Clouds webinar I got an interesting question: “Is it possible to run VMware NSX on redundantly-connected hosts in a pure L3 data center fabric?”

TL&DR: I thought the answer is still No, but after a very helpful discussion with Anthony Burke it seems that it changed to Yes (even through the NSX Design Guide never explicitly says Yes, it’s OK and here’s how you do it).

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at April 26, 2017 06:16 AM

April 25, 2017

My Etherealmind

9 Easy Ways to Break a Cisco Network

Everyday operation of a Cisco router is likely to cause failure.

The post 9 Easy Ways to Break a Cisco Network appeared first on EtherealMind.

by Greg Ferro at April 25, 2017 06:09 PM

April 24, 2017

Moving Packets

John’s Network Oops – As Seen On Reuters

In my response to The Network Collective’s group therapy session where the participants ‘fessed up to engineering sins, I promised to share my own personal nightmare story, as seen on Reuters. Grab a bag of popcorn, a mug of hot chocolate and your best ghost story flashlight, and I will share a tale which will haunt you for years to come. If you have small children in the room, this may be a good time to send them outside to play.

John Tells A Scary Story

At one point in my career I was a network engineer for a national mobile provider in the USA. The mobility market is a high-stakes environment, perhaps more so than most industry outsiders might expect. Users have surprisingly high expectations and are increasingly reliant on the availability of the network at all times of day or night.

High-Stakes Networking

Mobile networks are typically not just for consumers but are also used by a large number of private entities including fleet management companies, fire/burglar alarm systems, shipping companies and emergency services, so even a minor outage can potentially be a problem. These commercial organizations all had customized private connectivity to the mobile provider and thankfully we had a contractually-identified maintenance window available six days a week, during which all changes would have to happen. Nonetheless, even during a change window the attitude, rightly enough, was that if an interruption in service could be avoided, it should be. I refer to this as make before break engineering — a reference to electrical switches in which the new connection is made before the old connection is disconnected — and writing changes this way requires a different mindset from that found in a typical enterprise environment.

When the stakes are high, the stress is high, and with true gallows humor we would joke that you weren’t a fully-fledged member of the team until you had caused an outage which you could read about on Reuters. It was a somewhat ironic badge of honor. In many networking roles, losing connectivity for a few hours is just an annoyance. Think about it, though; have you ever heard or read a story in the news about a mobile provider having some kind of outage? The risk of damage to a provider’s reputation should not be underestimated: reports of outages have a direct impact on customers’ perception of the reliability and capabilities of each provider when they’re choosing their next mobile contract, and that means a direct impact on the bottom line.

My Reuters Moment

While I’m not proud of it, I do have the aforementioned badge of honor (and possibly the t-shirt as well). As background, I should explain that one of my roles at this particular mobile provider was to manage internet peering for the data centers. We had internal backhaul between the public-facing addresses for each site (so we would not have to transit the public internet when a service was not local), which meant we knew all our public routes internally; externally, we carefully filtered what we advertised to the Internet to ensure that traffic from outside the provider came to the right place.

Technical Error

The error I made was when updating a route-map on our edge internet routers at Data Center A. My intent had been to add a new sequence something like this:

route-map RM_OUTBOUND_TO_INTERNET permit 700
 match ip prefix-list PL_LocalRoutes

Simple, right? Unfortunately, at some point during the creation of my MOP (Method Of Procedure, or a change script), I had managed to mistype the name of the prefix-list, and my change instead read like this:

route-map RM_OUTBOUND_TO_INTERNET permit 700
 match ip prefix-list PL LocalRoutes

The MOP had been through reviews both within Engineering and with Operations, nobody had spotted the error, and so the change was scheduled for execution. At this point it is worth explaining that this company had strict separation of duties between Operations and Engineering: Engineering wrote the MOPs, but weren’t allowed to execute them; Operations executed the MOPs, but weren’t allowed to write them. My access to the routers as an Engineer was read-only. I’ve posted previously about writing a MOP so that it can be successfully executed by another person, and I recommend reading that post too. While it’s a pain to have to write changes out in such detail, the upside is that I didn’t actually have to be there at 4AM when the change was being executed. After all, how could I help?

Fast Forward to 11:30AM

Somewhere around 11:30AM the morning of my 4AM internet change at Data Center A, I received an email asking if I had heard about an outage in Data Center B, and wondering if I could help take a look because they couldn’t figure out what had happened. This was the first report I’d heard about it, so I asked for further details of what was happening. Data Center B, it seems, was mostly offline. Throughput was way down on the internet-facing firewalls, and users going through that site were reporting that they couldn’t access many services. I thought about this for a minute, issued one command on the edge router at Data Center A, and was able to confirm that the root cause was the change made on my behalf. I told them to roll the change back per my change script and the problem would disappear; within 10 minutes — by 11:45AM or so — service had been restored.

Root Cause

I learned something important that morning about Cisco IOS route-map configuration; did you know that you can match more than one prefix-list within the same match command? i.e. it’s valid to have:

match ip prefix-list PL1 PL2 PL3 PL4 PL5

This is handy to know, because it means that my typo:

match ip prefix-list PL LocalRoutes

…was not rejected as a syntax error by IOS. Instead, it was interpreted as a request to match a route in either of two prefix-lists, one called PL and one called LocalRoutes. In true IOS fashion, there was also no warning or error about the fact that the command was referencing two prefix-lists, neither of which existed.

Another helpful thing to understand is that when a prefix-list is non-existent, Cisco IOS treats it as a match-all clause. Thus, instead of only matching the list of networks in PL_LocalRoutes, my route-map statement now matched all routes, and that included our internal routes to the public ranges in other data centers.

The end result was that Data Center A was advertising routes which belonged to Data Center B, so traffic was going to the wrong place. While some of it was permitted to transit our internal network to Data Center B, the return path from B to the Internet didn’t include Data Center A, so there was an asymmetrical path through the firewalls which meant the sessions never established.
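
This class of typo is easy to catch before it ever reaches a router, because the broken reference is visible in the text of the change itself. Below is a minimal sketch (in Python, not something from the original change process) that checks a device configuration plus a proposed change for route-map references to prefix-lists that are never defined; the file name and naming conventions are hypothetical.

import re
import sys

def undefined_prefix_lists(config_text):
    """Return prefix-list names referenced by route-map match statements
    but never defined with an 'ip prefix-list' command."""
    defined = set(re.findall(r"^ip prefix-list (\S+)", config_text, re.MULTILINE))
    referenced = set()
    for refs in re.findall(r"^\s*match ip prefix-list (.+)$", config_text, re.MULTILINE):
        # IOS allows several space-separated prefix-lists on one match line,
        # which is exactly how 'PL LocalRoutes' became two separate references.
        referenced.update(refs.split())
    return sorted(referenced - defined)

if __name__ == "__main__":
    # Hypothetical usage: concatenate the device config and the proposed MOP
    # change into one file, then run: python lint_prefix_lists.py combined.txt
    with open(sys.argv[1]) as config_file:
        missing = undefined_prefix_lists(config_file.read())
    if missing:
        print("Referenced but undefined prefix-lists:", ", ".join(missing))
        sys.exit(1)
    print("All referenced prefix-lists are defined.")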

As Seen On Reuters

The outage had been running from 4:15AM until around 11:45AM, but it had only been noticed at around 7:15AM. Needless to say, this extended way beyond our maintenance window. Customers were complaining, and when I jumped on Google to see if there was any word about an outage affecting (roughly) a quarter of the American population, I was rewarded with a page and a half of news reports about it, and top of the list was Reuters. Level up! The Reuters badge, I found out, comes with a complimentary wave of nausea.

The Aftermath

The Command

What command did I issue to figure out what was going on?

show ip bgp neighbor a.b.c.d advertised-routes | inc Total

While I’m not quite at the level where I can fix radios by thinking, I was able to listen to the symptoms, think about what might cause them, realize that my change involved one of those potential causes (i.e. that I was advertising too many routes from Data Center A), and was able to validate my theory fairly easily. I knew how many routes were advertised before my change, and I knew how many routes I had intended to allow in addition, so when I checked how many routes were being advertised to one of our internet providers and saw a significantly larger number than expected, it was obvious what was wrong. I didn’t immediately know why it had happened, but I knew what had happened. Once I knew that it was the route-map change which had evidently not gone to plan, the space in the middle of the prefix-list name was an easy thing to discover.

Why Wasn’t The Problem Noticed Earlier?

Why was it 7AM before a problem was identified? The answer to this is both good and bad. During maintenance windows, the NOC were used to seeing anomalies in device performance and traffic flows as we made changes, so a culture had built up whereby anomalies would be ignored during the maintenance window, even if we had not advised that such anomalies should be expected. After the BGP change was made, traffic for Data Center B was coming in to Data Center A, and the internet-facing firewalls were blocking a huge number of sessions, and the idle session count was through the roof. CPU had doubled because two data centers’ traffic was hitting the firewall. In all cases, while these symptoms had been noted, they were ignored as the normal fluctuation during a big change.

With the benefit of hindsight, obviously the NOC would not have done this, but at that time, it’s what happened. Even at the end of the maintenance window at 6:15AM, the firewall statistics were clearly abnormal – but the NOC was changing shifts around then, and the message had been passed on by the outgoing shift that they were ignoring the firewall statistics due to maintenance activities, and consequently the next shift continued to ignore it for the next hour before somebody again questioned why the utilization and failed session statistics were still so high. This was an outage extender (i.e. something which wasn’t causal, but extended the outage beyond the point at which it could or should have been identified and fixed), because the issue had been in place for three hours already before anybody started looking at it, and we had already exited the agreed maintenance window.

Why Wasn’t I Called Earlier?

Perhaps understandably, when an outage occurred in Data Center B, Operations did not immediately consider changes made in Data Center A. Even when I was eventually contacted, it was to get help troubleshooting, not because my change was suspected of being the cause. This was a lesson learned; the data centers were inherently coupled when it came to public IP space and internet access, so it was important to always consider that coupling when an issue arose. Again, this doesn’t change the root cause of the problem, but it’s another extender. Once I was called, I identified the problem within five minutes and service was restored 10 minutes after that.

Surely You Tested After The Change?

We did test after the change. Data Center A — where we made the change — was working perfectly. We did not, however, test Data Center B. Why would we? The change was in Data Center A. Another lesson learned, and a good case study in considering the potential downstream impact of a change.

Hey Mr Hypocrite, Where’s Your Implementation Test Plan?

Where was my test plan? In the script, actually. Every change in the MOP was followed by a set of test steps to validate the correct implementation of the change. Before changing the route-map, the MOP gave the commands to test and note down the number of routes being sent to each internet peer. The MOP specified how many new routes should be advertised, and then post-change I had included specific checks on how many routes we were advertising to each internet BGP peer, and noted that the number should be [routes_before] + [added_routes].

When the Operations engineer checked their session logs, they admitted honestly that they had evidently not issued the commands specified in the MOP to validate the post-change routing. Once more this is an outage extender, because had the commands been issued and the route counts not matched what was specified in the MOP, the MOP directed the Operations engineer to stop and roll back from that point. Had the tests been carried out, the problem would have been identified at 4:20AM and rolled back by 4:25AM, limiting to 10 minutes an outage which eventually lasted nearly seven and a half hours.
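
A check like this is also easy to script so that it does not rely on anyone remembering to run it. Here is a rough sketch of that post-change validation using the Netmiko library; the router name, credentials, neighbor address, and route counts are placeholders rather than details from the original change, and it assumes the usual “Total number of prefixes” summary line at the end of the advertised-routes output.

from netmiko import ConnectHandler

# Placeholder values; a real MOP would carry the numbers recorded pre-change.
ROUTES_BEFORE = 250    # prefixes advertised to this peer before the change
ADDED_ROUTES = 12      # prefixes the change is supposed to add
PEER = "192.0.2.1"     # internet BGP neighbor (documentation address)

device = {
    "device_type": "cisco_ios",
    "host": "dca-edge-rtr1",      # hypothetical edge router at Data Center A
    "username": "ops",
    "password": "secret",
}

connection = ConnectHandler(**device)
output = connection.send_command(
    "show ip bgp neighbors {} advertised-routes | include Total".format(PEER)
)
connection.disconnect()

# Expect a line such as: "Total number of prefixes 262"
advertised = int(output.split()[-1])
expected = ROUTES_BEFORE + ADDED_ROUTES

if advertised != expected:
    print("STOP AND ROLL BACK: advertising {} prefixes, expected {}".format(advertised, expected))
else:
    print("OK: advertising {} prefixes, as expected".format(advertised))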

Whose Fault Is It Anyway?

It was my fault. I produced a MOP with a typo in it, so the root cause of the outage is all mine. However, were it not for an unfortunate storm of bad assumptions and incomplete process execution, the incident could have been identified and resolved well within the maintenance window, and somebody at Reuters could have had a quieter morning. Similarly, I would not have spent the next two days putting together a detailed Root Cause Analysis document for management and generally feeling like the worst engineer in the world.

Was I Fired?

No, I was not. I owned up to my typo, but with so many other elements contributing to the outage, it would have been very unfair if the company had singled me out. Instead, I worked with Operations to find ways to avoid this kind of issue in the future and create the necessary policy to support that goal.

Lessons to Learn

I noted a number of lessons learned on the way through, but as a brief summary:

  • Fix the outage first; point fingers later
  • Own up to your mistakes
  • Always question anomalies and see if the answers make sense
  • Always have a thorough test plan including the expected results
  • Always execute the test plan…
  • Consider downstream impacts and environments which may have a shared fate
  • Don’t do it again! Once is unfortunate; twice is just careless. Figure out what you need to do to ensure that you don’t repeat the same mistake.

I think that’s more than enough from me. If you have your own horror stories I’d love to hear them, and if you haven’t listened to The Network Collective, Episode 1, you should, because you’ll hear about some more bad days happening to other people and you can empathize or cackle with the schadenfreude, as is your preference.

Important Note

Some times, places, people and technical details about this incident have been changed to protect the guilty. And also to stop you finding it in Reuters’ archives…

If you liked this post, please do click through to the source at John’s Network Oops – As Seen On Reuters and give me a share/like. Thank you!

by John Herbert at April 24, 2017 03:52 PM

ipSpace.net Blog (Ivan Pepelnjak)

Figure Out What the Customer Really Needs

One of the toughest challenges you can face as a networking engineer is trying to understand what the customer really needs (as opposed to what they think they’re telling you they want).

For example, the server team comes to you saying “we need 5 VLANs between these 3 data centers”. What do you do?

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at April 24, 2017 09:51 AM

Security to the Core | Arbor Networks Security

Observed Spike in DDoS Attacks Targeting Hong Kong

Introduction: Each week ASERT produces a weekly threat intelligence bulletin for Arbor customers. In addition to providing insights into the week’s security news and reviewing ASERT’s threat research activities, we also summarize the week’s DDoS attack data as reported by over 330 global Internet Service […]

by Kirk Soluk at April 24, 2017 12:39 AM

April 22, 2017

Jason Edelman's Blog

Self Driving Cars and Network Automation

Last year at Interop, there was a great mini-conference dedicated to the DevOps for Networking community. In that session, I kicked off the day with a general view of where the industry was with respect to the intersection of DevOps and networking with a focus on network automation.

One of the analogies I made was comparing network automation to self-driving cars posing the question, “Are they real?”…“Are they real for us (the consumer)?”

Self-Driving Cars

No, they are not, but I continued to make the analogy. Is complete network automation real today? While the answer is yes, it’s not really a reality for most…yet.

So, what’s the connection between self-driving cars and network automation?

Start small and expand. Pick a problem, solve it, and integrate it.

Self-Driving Cars are Coming

While self-driving cars aren’t a reality for us to buy today, intelligent cars are– these are cars with high-value services and features enhancing the way we drive, our safety, and, much more generally, the way in which we consume the streets and infrastructure around us.

Intelligent Cars

These include automated features like self-parking, back-up cameras, automated beeping as you back up, automatic brakes, GPS, and computer systems that give you a plethora of visibility into the inner workings of the car (a complex system). So yes, you better believe it. The self-driving car is coming– one feature, chip, feedback loop, and computer program at a time.

Network Automation is Coming

All of the pieces are actually here already!

Achieving network automation is hard, very hard. But it’s actually not so hard if you break it down into achievable milestones. Maybe it’s something like the following:

  • Generate automated reports and documentation for Campus Access layer and expand networks from there. You don’t need to start with every network type.
  • Create proper configuration templates for each new device type or for each new service being deployed. Again, you don’t need to start with every device or network type.
  • Create a compliance check for credentials in one part of the network and gradually expand both the checks and the networks they run against (see the sketch after this list).
  • Standing up a new site? Look into zero touch provisioning.
  • Having a problem with bad switches in a stack or linecards in a chassis? Perfect problem to solve.
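
To make the compliance-check idea concrete, here is a minimal sketch using the Netmiko library that verifies a couple of required configuration lines on a short list of devices; the host names, credentials, and required lines are placeholders, not a recommendation of what to check.

from netmiko import ConnectHandler

# Hypothetical access-layer switches and the lines we expect in their configs.
DEVICES = ["campus-access-sw1", "campus-access-sw2"]
REQUIRED_LINES = [
    "ntp server 192.0.2.10",
    "no ip http server",
]

for host in DEVICES:
    connection = ConnectHandler(
        device_type="cisco_ios",
        host=host,
        username="automation",
        password="secret",    # placeholder; pull from a vault in practice
    )
    running_config = connection.send_command("show running-config")
    connection.disconnect()

    missing = [line for line in REQUIRED_LINES if line not in running_config]
    if missing:
        print("{}: NON-COMPLIANT, missing {}".format(host, missing))
    else:
        print("{}: compliant".format(host))

Start with one site and a handful of lines, then grow the device list and the checks as confidence builds.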

As use-cases like this are being solved week after week, you’ll have short-term wins proving the value of automation, but also be moving towards the bigger picture of deploying services, integrating into 3rd party platforms, creating relevant feedback loops, offering APIs to the business, and much more.

The biggest takeaway is to make sure you build a plan, know it’ll take time to achieve, and break it up into achievable milestones. It’ll be a win for everyone involved.

-Jason

@jedelman8

April 22, 2017 12:00 AM

April 21, 2017

The Networking Nerd

The Future Of SDN Is Up In The Air

The announcement this week that Riverbed is buying Xirrus was a huge sign that the user-facing edge of the network is the new battleground for SDN and SD-WAN adoption. Riverbed is coming off a number of recent acquisitions in the SDN space, including Ocedo just over a year ago. So, why then, would Riverbed chase down a wireless company when they’re so focused on the wiring behind the walls?

The New User Experience

When SDN was a pile of buzzwords attached to an idea that had just come out of Stanford, a lot of people were trying to figure out just what exactly SDN could offer them in terms of their network. Things like network slicing were the first big pieces to be put up before things like orchestration, programmability, and APIs were really brought to the fore. People were trying to figure out how to make this hot new thing work for them. Well, almost everyone.

Wireless professionals are a bit jaded when it comes to SDN. That’s because they’ve seen it already in the form of controller-based solutions. The idea that a central device can issue commands to remote access devices and control configurations easily? Airespace was doing that over a decade ago before they got bought by Cisco. Programmability is a moot point to people that can import thousands of access points into a device and automatically have new SSIDs being broadcast on them all in a matter of seconds. Even the new crop of “controllerless” wireless systems on the market still have a central control infrastructure that sends commands to the APs. Much like we’ve found in recent years with SDN, removing the control plane from the data plane path has significant advantages.

So, what would it take to excite wireless pros about SDN? Well, as it turns out, the issue comes down to the user side of the equation. Wireless networks work very well in today’s enterprise. They form the backbone of user connectivity. Companies like Aruba are experimenting with all-wireless offices. The concept is crazy at first glance. How will users communicate without phones? As it turns out, most of them have been using instant messengers and soft phone programs for years. Their communications infrastructure has changed significantly since I learned how to install phone systems years ago. But what hasn’t changed is the need to get these applications to play nicely with each other.

Application behavior and analysis is a huge selling point for SDN and, by extension, SD-WAN. Being able to classify application traffic running on a desktop and treat it differently based on criteria like voice traffic versus web browsing traffic is huge for network professionals. This means the complicated configurations of QoS back in the day can be abstracted out of the network devices and handled by more intelligent systems further up the stack. The hard work can be done where it should be done – by systems with unencumbered CPUs making intelligent decisions rather than by devices that are processing packets as quickly as possible. These decisions can only be made if the traffic is correctly marked and identified as close to the point of origin as possible. That’s where Riverbed and Xirrus come into play.

Extending Your Brains To Your Fingers

By purchasing a company like Xirrus, Riverbed can build on their plans for SDN and SD-WAN by incorporating their software technology into the wireless edge. By classifying the applications where they live, the wireless APs can provide the right information to the SDN processes to ensure traffic is dealt with properly as it flies through the network. With SD-WAN technologies, that can mean making sure web browsing traffic is sent through local internet links, while traffic meant for main sites, like communications or enterprise applications, is sent via encrypted tunnels and monitored for SLA performance.

Network professionals can utilize SDN and SD-WAN to make things run much more smoothly for remote users without the need to install cumbersome appliances at the edge to do the classification. Instead, the remote APs now become the devices needed to make this happen. It’s brilliant when you realize how much more effective it can be to deploy a larger number of connectivity devices that contain software for application analysis than it is to drop a huge server into a branch office where it’s not needed.

With the deployment of these remote devices, Riverbed can continue to build on the software side of technology by increasing the capabilities of these devices while not requiring new hardware every time a change comes out. You may need to upgrade your APs when a new technology shift happens in hardware, like when 802.11ax is finally released, but that shouldn’t happen for years. Instead, you can enjoy the benefits of using SDN and SD-WAN to accelerate your user’s applications.


Tom’s Take

Fortinet bought Meru. HPE bought Aruba. Now, Riverbed is buying Xirrus. The consolidation of the wireless market is about more than just finding a solution to augment your campus networking. It’s about building a platform that uses wireless networking as a delivery mechanism to provide additional value to users. The spectrum part of wireless is always going to be hard to do properly. Now, the additional benefit of turning those devices into SDN sensors is a huge value point for enterprise networking professionals as well. What better way to magically deploy SDN in your network than to flip a switch and have it everywhere all at once?


by networkingnerd at April 21, 2017 06:16 PM


April 20, 2017

Networking Now (Juniper Blog)

Securing Enterprise Hybrid Clouds with Industry-Leading High-Performance Next-Generation Firewalls

Juniper Networks SRX4000 line of next-generation firewalls sets a new benchmark for price and performance while enabling secure migration into hybrid clouds

 

As enterprises grow more dependent on cloud technologies, they need to begin adopting hybrid cloud architectures to provide greater flexibility and economic benefits. This, however, is easier said than done.

 

Migrating to a hybrid cloud model and deploying point firewalls presents its own set of challenges, including:

 

• Performance degradation at scale impacting security effectiveness

• Complex security management

• Weak connectivity between data centers

• Increased risk surface

 

Unfortunately, point and legacy firewalls are poorly suited for hybrid cloud environments, creating an immediate need for solutions that can provide:

 

  • Faster threat detection and blocking: Fully integrated, cloud-informed threat prevention (such as Juniper Networks Sky Advanced Threat Prevention) offers immediate, actionable intelligence; scalability; and integrated security services that keep you up to date and defend against the very latest threats.
  • Effective security everywhere: An architecture powered by Juniper’s Software-Defined Secure Network (SDSN) platform lets enterprises easily implement and efficiently operate their security infrastructure. An ecosystem that is continually learning about new threats enables faster enforcement and consistent security across your hybrid cloud environment, keeping costs down.
  • Flexible and scalable architecture: Building secure environments across private and public cloud data centers helps you keep your network running, delivering resiliency, high-performance NGFW functionality, complete application visibility and control, and effective threat defense.
  • Smarter control and visibility: Intuitive, scalable management tools and analytics provide actionable intelligence that empowers teams to do more with fewer resources, keeping operational costs down.
  • Industry-leading, high-performance NGFWs: Juniper’s efficient and effective physical and virtual SRX Series NGFWs optimize security, allowing you to easily implement defenses and operate them more efficiently without compromising performance.

 

An effective hybrid cloud solution, working along with high-performance physical and virtual next-generation firewalls deployed in private and public data centers, provides business resiliency, visibility and control, analytics, and automation—all of which help enterprises reduce business risk and focus on business critical problems.

 

A Software-Defined Secure Network builds threat detection, enforcement, and remediation into the very fabric of your network. Powered by Juniper’s high-performance NGFWs, along with smarter and faster application visibility and control, the Juniper hybrid cloud security architecture provides flexible, end-to-end security, allowing enterprises to protect their data within private and public data centers, campuses, or regional headquarters.

 

To learn more about Juniper Networks SRX Series NGFWs and how Juniper’s security solutions seamlessly extend across private and public cloud architectures without compromising performance and manageability, please download our Securing Enterprise Hybrid Clouds solution brief.

 

To learn more about SRX4000 Services Gateways please visit SRX4000 Services Gateways.

by abdis at April 20, 2017 06:39 PM

Router Jockey

PCAP t-shirts just in time for CLUS17

Hey guys, I just wanted to drop a quick note to let you know that I’ve relaunched my teespring shirt campaigns with enough time that you should get your orders before Cisco Live US 2017. I’ve got several types of clothing under each design, so make sure you look to see if I have what you’re looking for. This campaign is only open for 14 days – so get yours while you can!

As usual, send comments / suggestions / etc to @tonhe on twitter.

Thanks again, and I hope to see you at #CLUS17

Click below to enter my teespring storefront

The post PCAP t-shirts just in time for CLUS17 appeared first on Router Jockey.

by Tony Mattke at April 20, 2017 06:11 PM

Networking Now (Juniper Blog)

Nation States move from passive to active cyber defences


Look for evidence in the public domain that any government has admitted to targeting another government’s civilian or military digital infrastructure and you won’t find much, for obvious reasons. To date, almost all official rhetoric has been about defending citizens and infrastructure against foreign states, but that is changing. In 2017 I believe we will see more nations move the narrative from one of passive defence to a more active stance.

by lfisher at April 20, 2017 08:00 AM

April 19, 2017

Ethan Banks on Technology

RESPONSE: 3 Hidden Lessons Behind Top Podcasts to Help Yours Stand Out

Thoughts from the Content Marketing Institute for budding podcasters were shared here. Here’s my response to the points that stood out to me.

CMI’s big idea #1.

“At first, format trumps talent.” And then later…“Avoid the race to the bottom of simply booking the biggest guests in your niche and meandering through an unplanned episode. Instead, find your format.”

Response. To record an effective show people will listen to, you need a plan, agreed. However, the article cites an example of a 15 minute long episode carved into blocks of minutes and seconds.

Perhaps that’s what you need when working against an ultra-tight timeline. However, an outline that provides structure should be adequate. Overly structuring a podcast is burdensome and can serve to stifle interesting conversation. Freedom is one of the benefits of podcasting.

Podcasting is NOT a digital regurgitation of radio, although many try to shoehorn podcasts into a radio format, because the radio business is what they understand. However, podcast content is different. Distribution is different. Listener consumption is different. Monetization is different.

And perhaps most importantly, timelines are fluid. 15 minute long podcasts are being created under an artificial time constraint that begs the question…why?

On the other hand, having no format at all before hitting record is indeed bad news. Wandering, random conversations are wastes of the listeners’ time. Stay on point enough to maximize the amount of information you’re sharing or able to get your guest to share. Writing a solid outline ahead of time will get that done.

A great deal of my time each week is spent researching my guest (if there is one), reading about our topic(s), and constructing an outline with a story arc that will engage the listener. That’s my “format” such as it is, and it’s all you need. Don’t obsess about music, corny bits, falling precisely onto specific minute and second marks, etc. Just get the content right–that’s most of the battle.

CMI’s big idea #2.

“Time constraints are your strength (Spoiler alert: Nobody wants your 60-minute show).”

Response. This is flat-out wrong. The length of your podcast episode has everything to do with fair treatment of the material chosen for the episode and nothing to do with creating a bunch of abbreviated episodes to stuff the future download queue.

As a podcaster, you must know how to move the conversation along. There comes a point where you’ve talked about a topic enough, and it’s time to get to the next thing. On the other hand, many subjects offer tangents that are worth exploring. Podcasting is about right-sizing the time spent as the conversation progresses.

The limit of your podcast episode is not constrained by the clock. The limit is when there’s nothing else worth discussing. That is decidedly a balancing act, as no one has the attention span for your ocean-boiling, yak-shaving carrying on even if it’s interesting. There is some limit. But I can say with confidence that 60 minutes isn’t necessarily that limit.

Case in point: a Tim Ferriss Show episode is routinely over an hour, and not infrequently more than two. Tim’s show is perhaps an outlier, but it’s one of the most popular podcasts in the world for a reason. The content is just that compelling, despite the length of the shows.

A second case in point is the TED Radio Hour on NPR. This frustrating show is constrained by a very specific format, as it’s not only a podcast, it’s also a broadcast radio feature. Therefore, the content ends up as a mix of stretching out some segments longer than they need to go, while also rushing through certain guests that clearly had more to offer.

The TED Radio Hour is slickly produced and predictable, but the ultimate value for the listener is sacrificed on the altar of format, length being a chief limitation. Too bad. In TED Radio Hour’s heyday, which I believe is long past, they found some very interesting topics and people.

Final case in point comes from experiences with my own shows. Show duration is just not a problem.

  1. I know from interacting with hundreds of audience members over many years that 60 minutes is just fine. They have long commutes. They want to flex their brains while mowing the lawn. Etc. Time is something they are willing to expend on worthwhile content, and therefore, they want to hear complex topics treated fairly.
  2. On one show, my co-host and I experimented with locking down the show to 30 minutes. No one thanked us for this. In fact, the opposite was true. We had listeners tell us that the longer shows were better, and so we went back to the longer format.

If your show is decent and the subjects you choose demand it, someone will want your 60-minute show–just so long as you aren’t waffling on after you’ve run out of worthwhile things to say.

CMI’s big idea #3.

“Create recurring segments or content brands within the show.”

Response. I have no big disagreement with this point, but don’t obsess about it. When your show is new, it hasn’t yet found its voice. Give your show ten or more episodes to settle into a groove, then see what segments naturally occur.

Once you’ve picked them out, run with them, but don’t be a slave to them. You don’t have to have material for a recurrent segment every single show. Do it when you’ve got it, but don’t force it if you don’t.

For example, on Citizens of Tech, we have “Content I Like” and “Today I Learned” segments on pretty much every show. Eric and I are always finding interesting things we want to share for those bits. On the other hand, we also have “Privacy Watch” and “Deathwatch” segments, but we don’t run either of those two segments every show. There’s just not enough interesting content to fill those segments every episode.

Stop overthinking.

There is no one-size-fits-all to podcasting. What works for one audience won’t work for another. However, the opportunities are endless. Stop trying to find a magic formula that will gain you an audience. I don’t care how good your show is, an audience will take a long time to accumulate if you don’t have an existing audience to use as a launching pad for your new show. (And maybe even if you do.)

So…forget about all of that. Be creative. Be different. But be focused, delivering a consistent product that is, at the end of the day, yours. If it’s good, the audience will follow if you’re patient enough and perhaps get a few lucky breaks.

The podcasts I am the most interested in now tend to be rather “out there.” Weird stuff, with odd formats. I appreciate slick production values, but at the same time I’m sick of homogenized polish that renders podcasts sterile, canned, and phony.

Make something you want to listen to. Other people will want to listen to it, too.



Ethan Banks writes & podcasts about IT, new media, and personal tech.
about | subscribe | @ecbanks

by Ethan Banks at April 19, 2017 06:52 PM

My Etherealmind

IPv6 Extensions Are Already Dead

This wiki entry disguised as an RFC, RFC 7872, “Observations on the Dropping of Packets with IPv6 Extension Headers in the Real World”, highlights that IPv6 Extension Headers are effectively unusable, since internet providers are dropping IPv6 fragments and failing to support Extension Headers.  In IPv6, an extension header is any header that follows the initial 40 […]

The post IPv6 Extensions Are Already Dead appeared first on EtherealMind.

by Greg Ferro at April 19, 2017 03:27 PM

ipSpace.net Blog (Ivan Pepelnjak)

Amazing Discovery: Stability Matters

Here’s an interesting blog post (particularly as it’s coming from a well-known cloud evangelist): at the infrastructure level, stability matters more than agility or speed-of-deployment. Welcome to the real world ;)

by Ivan Pepelnjak (noreply@blogger.com) at April 19, 2017 06:11 AM

April 18, 2017

My Etherealmind

Cisco IOS-XR: the buggy XML API

Cisco still can't write reliable applications for its own IOS-XR operating system

The post Cisco IOS-XR: the buggy XML API appeared first on EtherealMind.

by Greg Ferro at April 18, 2017 06:39 PM

Moving Packets

Response: The Network Collective, Episode 1

The Network Collective

The end of March brought with it the first episode of a neat new project called The Network Collective, a video roundtable for networking engineers. The hosts and co-founders of this escapade are Jordan Martin (@BCJordo), Eyvonne Sharp (@SharpNetwork) and Phil Gervasi (@Network_Phil).

Top 10 Ways To Break Your Network

The Network Collective, Episode 1

Episode 1 brought three guests to the virtual table: Carl Fugate, Mike Zsiga and Jody Lemoine, the latter of whom (top right on the YouTube video) is actually blurry in real life, and this is not a video artifact. The topic for discussion was the Top 10 Ways To Break Your Network. Thankfully, the show didn’t actually provide tips on how to break your network — as if we need any help doing that — but instead looked at the shameful ways in which each participant had managed to cause network destruction in the past, and what lessons could be learned.

The fact that five of six experienced professionals are willing to own up to their blunders (one brought a colleague’s mistake to put up on the chopping block) actually signals one of the most important lessons that the episode highlighted, which is to be honest and own up to your mistakes. It is better for your career to do that than to pretend that you have no idea how an outage happened. Trust me; I have a very particular set of skills, skills I have acquired over a very long career. Skills that make me a nightmare for people who cause outages. If you admit your error up front, that’ll be the end of it. I will not throw you under the bus; I will not pursue you. But if you try to cover up your error, I will look for you, I will find you, and I will hang you out to dry. But I digress… The long and short of it is that if I waste my time tracking down the source of a problem which somebody knew all along but didn’t want to admit to, I’m going to be pretty steamed. As a consultant for 16 years, one of many mantras I learned to live by is this:

Bad News Doesn't Get Better With Age

It’s All About The Environment

With that said, I feel that there’s another important lesson here, and it’s for management rather than the engineers. As a manager, it behooves you to create an environment which encourages honesty instead of punishing it. I have worked in environments where the most important part of finding the root cause to an outage was appointing blame to an individual. Guess what? Nobody ever wanted to own up to doing anything because they were fearful for their jobs. If you’re currently thinking Well, of course, that’s obvious!, you’d be right, yet I’ve seen and heard about companies like this far too often. How does your company treat honest mistakes?

Confession Time

We all make mistakes. The reasons for those mistakes vary from carelessness and over-confidence through to ignorance, software bugs, unfamiliarity with an environment and sheer bad luck. However, they are mistakes and — other than in circumstances of exceptional disgruntlement — are not an intentional attempt to take down a network. I don’t want to sound like a greetings card, but every mistake is also an opportunity to learn, and this is where the metaphorical rubber meets the road. The aim of performing a root cause analysis (RCA) after an outage is not simply to determine what happened; it should also be to look at how that same mistake can be avoided in the future. Without the latter, there’s no point in performing the RCA in the first place, in my opinion.

Finding Root Cause

When looking for a root cause, I go beyond simply the action that caused the outage. I ask questions like:

  • Was there a process failure which allowed this to happen (or did somebody break a process which would have prevented the issue)?
  • How quickly was the issue discovered? Why did it take that long?
  • Were there extenders, i.e. did something happen (or not happen) which meant that the outage continued for longer than it needed to?
  • What testing was being done during/after the change, and did it catch the error? If not, why not? i.e. were there what we now know are holes in the test plan?

One of the comments during this episode was along the lines that an outage can occur, and then steps are taken to make sure that particular outage path can’t happen again, but it’s almost pointless because the next outage will inevitably be something else unexpected. The implication seemed to be that making changes to avoid a repeat incident was somehow pointless. I respectfully disagree. The first time a mistake happens, it’s a mistake. If the same mistake happens again because I didn’t take steps to prevent it, then it’s not a mistake any more, it’s a known, unresolved problem.

As a corollary to that, if an engineer makes the same mistake repeatedly, perhaps this career is not for them.

All Aboard The Blunder Bus

In response to this first episode, in a future post I will share one of my own epic blunders and analyze the lessons to be learned from what happened.

The Network Collective

The Network Collective looks like it should be an interesting project to follow, and I would recommend subscribing. I love hearing tales from the real world, and next week’s recording of Episode 2 (Choosing a Routing Protocol) features the ubiquitous Russ White. What’s not to love?

If you liked this post, please do click through to the source at Response: The Network Collective, Episode 1 and give me a share/like. Thank you!

by John Herbert at April 18, 2017 01:54 PM

ipSpace.net Blog (Ivan Pepelnjak)

Automate Everything: ipSpace.net Is Coming Back to US

After the last US-based ipSpace.net workshop a lot of people asked me about the next one. It took a long time, but here it is: I’m running an on-site automation workshop in Colorado in late May, together with several friends with outstanding hands-on experience.

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at April 18, 2017 11:09 AM

April 17, 2017

My Etherealmind

CPU Failures Hurt Intel’s Bottom Line

Unsurprisingly, the failure of the Intel Atom C2000 is costing money

The post CPU Failures Hurt Intel’s Bottom Line appeared first on EtherealMind.

by Greg Ferro at April 17, 2017 05:18 PM

Network Design and Architecture

April Online CCDE Class is going to start today

I am excited: today the 2017 CCDE April Online (Webex) class starts. In fact, there is only half an hour to go before we begin. Each day will run for 4 hours, and the class will take a minimum of 11 days. We will go through the theory, best practices, and the case studies for many […]

The post April Online CCDE Class is going to start today appeared first on Cisco Network Design and Architecture | CCDE Bootcamp | orhanergun.net.

by Orhan Ergun at April 17, 2017 03:23 PM

April 14, 2017

ipSpace.net Blog (Ivan Pepelnjak)

Programmable ASICs on Software Gone Wild

During Cisco Live Europe 2017 (which I attended thanks to the Tech Field Day crew kindly inviting me) I had a nice chat with Peter Jones, principal engineer @ Cisco Systems. We started with a totally tangential discussion on why startups fail, and quickly got back to flexible hardware and why one would want to have it in a switch.

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at April 14, 2017 06:24 AM

April 13, 2017

The Networking Nerd

Changing The Baby With The Bathwater In IT

If you’re sitting in a presentation about the “new IT”, there’s bound to be a guest speaker talking about their digital transformation or service provider shift in their organization. You can see this coming. It’s a polished speaker, usually a CIO or VP. They talk about how, with the help of the vendor on stage with them, they were able to rapidly transform their infrastructure into something modern while at the same time changing processes to accommodate faster IT response, more productive workers, and increased revenue, or to transform IT from a cost center to a profit center. The key components are simple:

  1. Buy new infrastructure from $vendor
  2. Transform all processes to be more agile, productive, and better.

Why do those things always happen in concert?

Spring Cleaning

Infrastructure grows old. That’s a fact of life. Outside of some very specialized hardware, no one is using the same desktop they had ten years ago. No enterprise is still running Windows 2000 server on an IBM NetFinity server. No one is still using 10Mbps Ethernet over Thinnet to connect their offices. Hardware marches on. So when we buy new things, we as technology professionals need to find a way to integrate them into our existing technology stack.

Processes, on the other hand, are very slow to change. I can remember dealing with process issues when I was an intern for IBM many, many years ago. The process we had for deploying a new workstation had many, many reboots involved. The deployment team worked out a new strategy to streamline deployments and make things run faster. We brought our plan to the head of deployments. From there, we had to:

  • Run tests to prove that it was faster
  • Verify that the process wasn’t compromised in any way
  • Type up new procedures in formal language to match the existing docs
  • Then submit them for ISO approval

And when all those conditions were met, we could finally start using our process. All in all, with aggressive testing, it still took two months.

Processes are things that are thought to be carved in stone, never to be modified or changed in any way for the rest of time. Unless the stones break or something major causes a process change. Usually, that major change is a whole truckload of new equipment showing up on the back dock attached to a consultant telling IT there is a better way (TM) to do things.

Ceteris Paribus

Ceteris Paribus is a Latin term that means “all else unchanged”. We use it when we talk about having multiple variables in an equation and the need to keep them constant to be able to measure changes appropriately.

The funny thing about all these transformations is that it’s hard to track what actually made improvements when you’re changing so many things at once. If the new hardware is three or four times faster than your old equipment, would it show that much improvement if you just used your old software and processes on it? How much faster could your workloads execute with new CPUs and memory management techniques? How about collapsing your virtual infrastructure onto fewer and fewer physical servers because of advances there? Running old processes on new hardware can give you a very good idea of how good the hardware is. Does it meet the criteria for selection that you wanted when it was purchased? Or, better still, does it seem like you’re not getting the performance you paid for?

Likewise, how are you able to know for sure that the organization and process changes you implemented actually did anything? If you’re implementing them on new hardware how can you capture the impact? There’s no rule that says that new processes can only be implemented on new shiny hardware. Take a look at what Walmart is doing with OpenStack. They most certainly aren’t rushing out to buy tons and tons of new servers just for OpenStack integration. Instead, they are taking streamlined processes and implementing them on existing infrastructure to see the benefits. Then it’s easy to measure and say how much hardware you need to expand instead of overbuying for the process changes you make.


Tom’s Take

So, why do these two changes always seem to track with each other? The optimist in me wants to believe that it’s people deciding to make positive changes all at once to pull their organization into the future. Since any installation is disruptive, it’s better to take the huge disruption and retrain for the massive benefits down the road. It’s a rosy picture indeed.

The pessimist in me wonders if all these massive changes aren’t somehow tied to the fact that they always come with massive new hardware purchases from vendors. I would hope there isn’t someone behind the scenes with the ear of the CIO pushing massive changes in organization and processes for the sake of numbers. I would also sincerely hope that the idea isn’t to make huge organizational disruptions for the sake of “reducing overhead” or “helping tell the world your story” or, worse yet, “making our product look good because you did such a great job with all these changes”.

The optimist in me is hoping for the best. But the pessimist in me wonders if reality is a bit less rosy.


by networkingnerd at April 13, 2017 03:33 PM

Router Jockey

PNDA provides scalable and reactive network analytics

During Networking Field Day 15, our friends from the Linux Foundation, including Lisa Caywood, briefed us on a recent “acquisition” from Cisco. PNDA (pronounced “Panda”) is an open source Platform for Network Data Analytics which aggregates data from multiple sources on a network, including real-time performance indicators, logs, network telemetry, and other useful metrics; in combination with Apache Spark, the data is then analyzed to find useful patterns. None of this should be confused with Cisco’s recent announcement of the Tetration analytics platform. Tetration is a data-center solution focused on a very particular space, whereas PNDA is a more horizontally focused platform that is cross-vendor and cross-dataset, and PNDA is in no way a fork of the Cisco Tetration product: the two evolved from completely separate code bases.

Because PNDA is an open source initiative, it can take advantage of many existing projects, such as Apache Spark, to build a robust analytics platform, which keeps it extremely flexible. While PNDA’s focus is solely on networking, other projects are using it as a jumping-off point to perform analytics on other data. Think of the project as the glue that combines independent projects into something whole that is greater than the sum of its parts. PNDA strives to deliver processed data to downstream applications, where it can then be evaluated. It does this using Apache Kafka and ZooKeeper to distribute high-velocity data. Kafka consumer applications can consume this data directly, or you can create your own toolchain of data processing applications and then dump the results into a Hadoop cluster or return them to the Kafka ecosystem.
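
To make the point about Kafka consumer applications concrete, here is a minimal consumer sketch using the kafka-python library; the broker address, topic name, and consumer group are hypothetical and not taken from any published PNDA configuration, and the payload encoding depends entirely on how the producers were set up.

from kafka import KafkaConsumer

# Hypothetical broker, topic, and group names for a PNDA-style deployment.
consumer = KafkaConsumer(
    "network.telemetry",
    bootstrap_servers=["pnda-kafka-01:9092"],
    group_id="example-analytics-app",
)

for message in consumer:
    # Downstream logic goes here: decode the payload, feed an alerting
    # pipeline, hand it to Spark, or write it out to a Hadoop cluster.
    print(message.topic, message.partition, message.offset, len(message.value))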

 

PNDA Open Source Ecosystem

Use Cases

PNDA’s key market space currently lies in a few specific use cases: digesting large amounts of data from a CMTS cable plant, or reading sensor data from an entire city’s network infrastructure devices. The GiLAN project enables real-time service assurance for ISPs using PNDA as an input for all syslog and SNMP traffic: the data is pulled into Logstash, fed to Kafka, and then processed using Moogsoft’s Incident.Moog data analysis and presentation software. The data can also be consumed by other applications, such as Ontology’s Real-Time Inventory platform, enabling the ISP’s NOC to respond to faults in the infrastructure in real time. With all of this data being cross-referenced and analyzed, trends can be spotted that even predict future service-impacting issues, so they can be fixed before they cause an outage.

 

CMTS Predictive Service Management

PNDA Data Assurance

One of the really impressive features of this open source software is its ability to provide end-to-end assurance that data is not only being accepted, but also processed and stored through every last bit of the ecosystem. The bits of green you see in this console obviously indicate that things are going well within the PNDA infrastructure. What really impresses me, though, is that PNDA constantly verifies the ecosystem by sending test data and checking that the result is indeed what the platform was expected to produce. This is another great feature, and one missing from the upstream products it leverages.

 

PNDA Console

 

Conclusion

All in all, the Linux Foundation has been working on some rather interesting initiatives, including DPDK, IoTivity, ONAP, Let’s Encrypt, Open vSwitch, OpenDaylight, OPNFV, and Prometheus! With PNDA joining the list, it’s easy to see that the open source movement is alive and well in the networking space. If any of my ramblings here interest you, please take the time to watch the presentation below from Networking Field Day 15 on PNDA.

Video of the PNDA presentation from Networking Field Day 15: https://player.vimeo.com/video/212795551

Tech Field Day Disclaimer

Tech Field Day is made possible by the sponsors, who foot the bill for the travel and living expenses of delegates such as myself. Sponsors should understand that their financing of Tech Field Day in no way buys them favorable coverage; delegates are only there to provide their honest and direct opinions of the solutions presented. For my full disclaimer, click here.

The post PNDA provides scalable and reactive network analytics appeared first on Router Jockey.

by Tony Mattke at April 13, 2017 02:30 PM

Networking Now (Juniper Blog)

Juniper Networks Security Issues & Predictions (for 2017)

 


The recent focus in cybersecurity has been on how to stay ahead of advanced attacks. Whilst this is important, 2016 proved that many organisations had missed fundamental security controls: ransomware seeping through email gateways, weak passwords in use on critical systems, users able to access data, files and systems across their internal networks, out-of-date security software, poor patch management controls, and low use of encryption, with data being stored in clear text. The list goes on and on. Why?

 

This series of articles will go into detail on network issues and predictions which we see on the horizon for the coming year. Please read on for a high-level overview of what you can look forward to.

 

If you enjoyed reading this blog and would like to read related security blogs please visit here

by lpitt at April 13, 2017 01:49 PM

How the Fast Evolution of Stealthy Malware Requires a Rethink of Security


Stealth, the art of remaining hidden, has been a force of nature since before the dawn of mankind. Long before we were standing upright on the savannah, nature had already figured out that one great way of staying alive was to remain silent, hidden out of sight, and with the wind in your face as you watch your prey. As in nature, the art of remaining hidden continues to evolve for the cybercriminal as well.

by lfisher at April 13, 2017 10:49 AM

ipSpace.net Blog (Ivan Pepelnjak)

Leaf-and-Spine Fabrics: Featured Webinar in April 2017

I recently finished editing the videos from the Leaf-and-Spine Designs update to the Leaf-and-Spine Fabrics webinar, so it wasn’t hard to select the featured webinar for April 2017. The featured videos now include BGP in the Data Center by Dinesh Dutt, SPB Deep Dive by Roger Lapuh, and VXLAN with EVPN control plane by Lukas Krattiger.

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at April 13, 2017 06:10 AM

April 12, 2017

Ethan Banks on Technology

Managing Your Time When You Have Too Many Things You Want To Do

A friend of mine asked me the following.

“How did you manage your time and schedule for 5 years with everything you wanted and needed to do?”

Here’s the context for that question. For about 5 years, I had a full-time job as a global network engineer for an e-learning company. Later, I transitioned to a similar role for a medical startup, again full-time. At the same time I was employed in those roles, I was blogging and podcasting as the Packet Pushers community grew.

As Packet Pushers ramped up, it turned into a second full-time job, a state I maintained until I was able to transition to working for myself exclusively.

Managing time and schedule.

Let’s get into the meat. How did I manage my schedule with ever so much to do?

First off, I had understanding employers that let me blog and podcast during traditional work hours, as long as it did not interfere with my regular work duties. I was always upfront about this. I never snuck off during the day to work on my side business. My boss always knew exactly what was up.

This translated roughly to flexible hours. There was also the understanding that I was always going to be available to jump in and get things handled that were time-sensitive, or react to a system-down event requiring my particular set of skills.

Focus. I would (and do) set my mind on one task at a time and see that through to completion, then work on the next thing with as little interruption as possible during each task.

That’s a bit idealistic, of course. For example, I tended to react to instant messages and HipChat quickly because people generally needed me right that moment if they were pinging me. On the other hand, I ignored e-mail for 24 or more hours. Urgent messages rarely came through e-mail. This strategy reduced interruptions, and allowed me to be highly productive when I stuck to the strategy.

This issue of focus and productivity is still true today, and I still struggle with it mightily. Inbox and Twitter are my primary distractions and can cause me to be busy without accomplishing the important things on my task list. I use a Mac app called Anti-Social to help break the habit of checking social media when the temptation to distract myself is too great.

My wife has always been supportive of my projects going all the way back to my very earliest Novell and Microsoft certs back in the 90s. I also do my best to keep up with her. She knows more about IT companies and industry acronyms than she should really be burdened with, because I try to make sure she understands what’s weighing down my mind–why I’m stressed or happy or sad or elated or tired.

I also try to prioritize spending time together that’s unfettered by all the stuff I have to do. More on spouses later, as it’s a critical topic for ambitious, career-minded people in IT. But in summary, I get to put on my schedule what I put on it when she’s supportive of those things.

I had a goal in mind. Initially, the goal was a bit of extra cash, as we always seemed to need a bit more. I’ve almost always had a second job consulting, running a small SaaS operation out of my basement before it was called SaaS, and later on writing articles and reports for media publications or public speaking at conferences. Those things all happened because of some skill I had acquired previously that I could later turn into extra dollars.

When the podcast took off, and Greg and I figured out that there was money there, the goal shifted from “extra cash” to “doing it full time.” We both wanted a break after 20+ years in the corporate world, and so we worked on our personal finances to get to a point of having de-risked working on Packet Pushers full-time.

Therefore, on those days where the day job just ended after a stressful 12 hours of planning for a major data center cutover, I’d still find a little motivation to work on Packet Pushers. To blog something. To reach out to a guest about show scheduling. To write a script. To edit a show.

The goal was to get to having one full-time job instead of two, and get some of my life back. That helped prioritize the calendar.

I worked from home as much as possible. Commuting ate 2+ hours a day, as I live in central NH where it’s relatively cheap, but over my career I tended to work in towns on the Massachusetts border where there were a lot more tech jobs for someone like me.

I ended up phasing in WFH about 4 days a week at the e-learning company with support from my boss. Then when I worked for the medical startup, I worked from home all the time, since they were located in Columbus, Ohio. I traveled to Columbus once a quarter or so, but generally lived in HipChat to communicate with the team. Lots of video discussions. This worked well.

Of course, this also won me back some time in the day. I used that time to be more productive than burning fuel up and down the highway just to get to a cubicle.

On the other hand, some days you don’t get it all done, and that’s okay. There is too much to do, and not enough time to complete everything.

Even with focus, organization, and planning, things don’t always go the way you want. You don’t get to write that post. You have to re-schedule that podcast recording. Or maybe the day job just really, really needs you to get them over some hump, or dig deeply into packet captures to help narrow down why the SIP transaction is acting so oddly.

In that case, on those days, it’s all about priorities. Know what they are, stick to them, and recognize that sometimes the lower priorities are going to be the ones that get neglected.

The hierarchy of priorities.

For me, I have always made decisions, or at least tried to make decisions, based on the following hierarchy of priorities.

Family, my wife especially. If she wasn’t 100% on board without reservation deep down inside, I would be very unlikely to take on whatever it was. I value her and our relationship above all else in my life. I realize it’s different for different couples, but the way I see it, if that relationship is compromised, the rest of my life is at risk. I view my marriage as the foundation upon which whatever personal accomplishments I can look back on were built. If you’ve followed my writing over the years, this is not a new idea you’ve heard from me.

I also made consideration for my kids. My commitment was, simply, to be around. To be predictable. To be a steady influence in their lives they could count on. To be approachable and reachable, which for a lot of pre-mobile phone and FaceTime years, meant being physically around. For this reason when they were younger, I turned down jobs where I would have been expected to travel a lot.

All of this to say that there are more important things in life than career objectives, at least for me. Even so, I know MANY people who are divorced, separated, or otherwise unhappy in their relationships because they gave everything to their IT career, leaving nothing for their family. It’s a choice we all have to make, and it’s hard to get completely right even when trying to.

Who’s paying me. If I’m getting paid by $dayjob, then clearly $dayjob demands my loyalty. Not $sidejob. I’ve given $dayjob an implied promise that they give me money, and in exchange, I give them a reasonable amount of work, for some definition of reasonable. Given a choice between making sure $dayjob is happy vs. $sidejob, $dayjob would win every time.

Cheating $dayjob to work on $sidejob isn’t ethical. See my first point about employers who are understanding and being upfront with them. This means that sometimes, you just don’t have the time to work on the things you want to work on. You have to keep up with your commitments, honoring where the money is coming from.

What do I have time left to do that’s important to me? When prioritizing $dayjob, that got first cut at my calendar, along with the ability to preempt any sort of $sidejob events I had scheduled. When there was time leftover, I would fill in with blogging, podcasting, and so on. Most weeks, there was enough time.

As a side note, I’ve struggled off and on with scheduling every hour of the day or not. At this time, I don’t schedule every hour. I find my workflow doesn’t map well to every hour being blocked in that manner. However, if I find that I have a 4 or 6 hour block that’s unscheduled, I might reserve it for myself. This gives me a block of contiguous time to make something–a blog post, a podcast, a book chapter, a presentation, etc. From there, the trick goes back to focus.

Personal time. I have found, especially as I’m getting older, that I need downtime. For me, that’s a mix of gaming, anime, movies, dramas, music, exercise, and backpacking in the mountains.

Needing downtime perhaps sounds like laziness or a cop-out, at least it does to me when I ponder it. However, I’ve learned the hard way that it’s not laziness at all. Rather, I have to have regular, even frequent, times where I am completely disconnected from responsibilities, or I’ll have minor breakdowns.

I recall a few times where I’ve had so many things to do that my brain sort of locked up–that’s the best way I can think to describe it. My hands were hovering above the keyboard and trackpad, shaking. I didn’t know what to do next. I couldn’t prioritize. I couldn’t handle all the inputs fed to me by my screens competing with my task list and the pressure of deadlines.

That’s a very odd feeling to be locked up and shaking like that, especially if you’re a person who is typically in complete control and has all the answers. Those locked up experiences have been wake-up moments for me, where I realized that I was broken inside, that I was the one who broke me, and that some changes needed to be made.

In retrospect, I should have moved “personal time” higher in the hierarchy. For many years, I prioritized myself last, and I believe that was a mistake. Counterintuitively, I think that, had I allowed myself a bit more personal time, I might have actually gotten MORE things accomplished. At least, that’s my current experience as I limit the number of hours I work each day, spending them highly focused, but then leaving the workspace after those hours are through.

Saying no is your superpower.

Which leads me to my last point. When you are a very busy person trying to do very many things, you have to learn when to say no. The more “many things” you do, the more opportunities that will come your way. It’s a snowball-to-avalanche effect, and you can’t outrun the avalanche.

At $dayjob, you raise your hand to lead some project. The project is successful. Now you’re in charge of the end product forever, so that gets added to what you do. The business remembers how extraordinary you were on that project, and asks you to head up the next project with the brand new, big-spending Super Important Customer, and somehow that’s a thing you own. And of course, you’ve been contributing to the wiki at work, and Sue, who you’ve never met but works in the Toronto office, read an entry you wrote, and wonders if you could spend a couple of hours explaining a few things.

In $sidejob, you got a good reaction to the article you submitted to Major Tech Publication, and now the editor wants you to maybe do a longer write up for this report they are publishing in a couple of months, and would you like to take that on because the money’s not bad and you wouldn’t have to do much research? And then a vendor notices that you’re an influencer, and wonders if you’d like to attend their conference, all expenses paid to lovely Las Vegas or San Francisco? And then a book publisher gets wind that you’re a strong technical writer, and wonders if you’d like to be the author of a book they’d like to publish?

And on and on it goes. The more you do, the more opportunities you have. If you aim to please, it’s hard to say no. If you’re greedy for attention, it’s hard to say no. If you can use the money, it’s hard to say no. The temptation is to say yes to everything, and so you do.

Because you say yes to everything in both $dayjob and $sidejob, you overcommit yourself to the point of frantic activity, trying to stay ahead of the avalanche behind you. You think you have balance, and that you have everything under control. You’ll write in the air over flyover country. You’ll work on the weekends. You’ll do a few sprints to knock out some of the projects. You’ll hold meetings while waiting to board the plane. You’ll take advantage of the time zone differential while you travel. You’ll skip breakfast today, and maybe lunch.

You’ll go insane. That’s what you’ll do.

Knowing what to say no to is your superpower, and the only way that you’ll get done the things that are truly important to you. Saying yes is very often filling your schedule with things that are important to other people, and NOT things that are important to YOU.

Every opportunity must be accepted or rejected in light of your goals and priorities. How does this opportunity meet them? If it doesn’t, politely declining is not only wise, it’s utterly necessary. Otherwise, the avalanche runs you down.

Beyond that, saying no is the only way that you’ll have enough holes in your schedule for personal time–that recuperative time that you need to stay fresh, creative, and able to focus when it’s time to work.

You are the most important part of this entire equation–the thing that must be preserved. Don’t lose sight of that. Martyrdom is not a winning strategy.



Ethan Banks writes & podcasts about IT, new media, and personal tech.
about | subscribe | @ecbanks

by Ethan Banks at April 12, 2017 04:55 PM

Networking Now (Juniper Blog)

Could Smart-City malware be spread via motorways and highways?


 

In recent years we have seen news reports of wildflowers and weeds being 'spread' by the wind-tunnel effect of cars on our motorways and highways. Is there a potential for malware to spread between smart cities in the same way?

 

If you enjoyed reading this blog and would like to read related security blogs please visit here

by lpitt at April 12, 2017 11:59 AM

The first connected car could be taken for ransom

 


 

In my last blog, I discussed how a supply chain attack could affect the business (and brand) of a global company. This week, we’re going to take this a level down and consider something else which I believe could be a threat: the intelligence that is being built into our cars.

 

If you enjoyed reading this blog and would like to read related security blogs please visit here

by lpitt at April 12, 2017 11:59 AM