November 14, 2019

Network Design and Architecture

DMVPN Point-to-Point GRE and mGRE

DMVPN spokes can use either point-to-point GRE tunnel interfaces or multipoint GRE (mGRE) tunnel interfaces. Recently, I received a question regarding DMVPN.

In fact, the reader asked me two questions: When is GRE used in network design? When is mGRE used in network design?

The answers to these questions are basics you must know if you are planning to design a DMVPN network.

As you may already know, DMVPN is a hub-and-spoke type of topology, and its most important benefit is that it provides excellent scalability by reducing the number of tunnel interfaces configured on the hub and spokes.

I mentioned the DMVPN phases in one of my articles. Because of that, I will not explain them here again. However, if you don’t understand the meaning of DMVPN phases, I would recommend that you peruse the article on DMVPN basics before reading this article.

A point-to-point GRE interface is used on the spokes only in Phase 1.

In all phases, the mGRE interface type is always used on the hubs.

In Phase 2 and Phase 3 DMVPN implementations, the spokes also use the mGRE (not multicast GRE, but multipoint GRE) interface type.

Compared to a point-to-point GRE interface, mGRE provides scalability; it reduces configuration complexity and therefore makes troubleshooting easier. The easier troubleshooting becomes, the higher the availability of the network.
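To make this concrete, here is a minimal configuration sketch (IP addresses, NHRP network ID, and interface names are illustrative, not taken from the article): the hub uses a single mGRE tunnel for all spokes, while a Phase 1 spoke uses a point-to-point GRE tunnel pointing at the hub.

  ! Hub - one mGRE tunnel interface serves every spoke (all DMVPN phases)
  interface Tunnel0
   ip address 10.0.0.1 255.255.255.0
   tunnel source GigabitEthernet0/0
   tunnel mode gre multipoint
   ip nhrp network-id 1
   ip nhrp map multicast dynamic

  ! Phase 1 spoke - point-to-point GRE with a static tunnel destination (the hub)
  interface Tunnel0
   ip address 10.0.0.2 255.255.255.0
   tunnel source GigabitEthernet0/0
   tunnel destination 192.0.2.1
   ip nhrp network-id 1
   ip nhrp nhs 10.0.0.1
   ip nhrp map 10.0.0.1 192.0.2.1

In Phase 2 or Phase 3, the spoke tunnel would instead be configured with tunnel mode gre multipoint and no static tunnel destination, so that spoke-to-spoke tunnels can be built dynamically.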

by Orhan Ergun at November 14, 2019 01:58 PM

2017 CCDE Exam Dates!

The 2017 CCDE exam dates have been announced. There are four CCDE exams every year; more precisely, there are four CCDE Practical/Lab exams every year. There is no such limitation for the CCDE Written exam.

You can take the CCDE Written exam at any time in any Pearson VUE center; it is not limited to four times a year.

The CCDE Practical exam is no longer offered only at Cisco offices; it is also delivered at Professional Pearson VUE locations. There are 275 of them, and unfortunately not every country has a PPC (Professional Pearson VUE Center).

If you are in the Middle East, India, or Turkey, then Greece and other European locations would be convenient choices.

I took and passed the exam in Greece, and Athens is one of the most beautiful cities, guys 🙂 I definitely recommend it.

Below are the 2017 CCDE Practical/Lab exam dates. I wish everyone good luck, and I definitely recommend my Self Paced CCDE Training or Instructor Led CCDE Training.

by Orhan Ergun at November 14, 2019 01:54 PM

Nothing Should Stop You!

As many of you know, I was born in Turkey. And unfortunately, the educational system of that country is very weak. And guess what: If you can’t afford to go to private school in Turkey, you may not be able to learn English in the government school.

However, if you are a very diligent student, you may learn the basics of writing or speaking English. I have decided not to allow my proofreader to edit this post. My reason is simple. I want you to notice that I am still struggling with English. But that’s okay. It’s a learning curve. So, nothing should stop you!

My aim of writing this post is to share some of my thoughts with you. And I know many people will read this and I hope it will inspire some of you.

I worked as a network operation center engineer, presales engineer and consultant while I was in Turkey. Fortunately, I joined and managed many design projects during that time. After that, I moved to other countries with the aim of sharing my knowledge with others and getting some money of course 🙂

At this point, you might be having this thought: “With your weak English, how did you manage to succeed?”

Many thanks to God. My hiring manager was a technical person. And he understood all my ideas. Besides, my interview was excellent from the technical point of view. And then, I knew that I could learn the language and it shouldn’t stop me from achieving my goals.

After some time, I decided to take the CCDE exam. But before I attended the exam, I heard that the CCDE exam is a very difficult test and that if you are not a native speaker, you might be in trouble. Because it is too long and reading intensive. But that is not completely true. Before I start providing my own consultancy services and trainings, I used to read hundreds of pages daily. I even took courses on fast reading so that the CCDE exam wouldn’t be an issue for me.

Although the exam is not easy, I passed it. Yes, I did. A boy who was born in a country where almost nobody could speak English passed one of the hardest and reading-intensive exams in the networking field.

But That’s not all.

I didn’t plan to teach CCDE training from the outset. There were reasons behind my decisions then. But somehow, I started teaching people about the CCDE exam. Then, many individuals were asking for my help as I was writing technical posts. Granted, there were many grammar errors in my posts, but they were worth reading. Some people shared the posts on social media and some linked them to their websites. And Today, I have about 40,000 followers both on social media and on my website

Was everything easy in the beginning? Of course, not. People complained that the documents could be much better. So, I spent a huge amount of time and created the documents. And thousands of people shared, bought, and liked them on social media and on my websites. Provided many feedbacks and I took them positively. I also sold books, videos, comparison tables, blog posts and so on. Needless to say, I am still working on some of them to help readers pass the CCDE exam with ease. Some people criticized me initially. They said my content sucks. But such negative remarks didn’t stop me. I took their criticism positively and improved the resources.

Was the document the only problem? No. When you reach large audience, many people will criticize you. Of course, some will like your work; others will castigate your effort. It’s normal. So, whether you receive good or bad comments from people, you should keep your head high. Criticism shouldn’t make you sad.

Also important is that don’t do anything hoping that people will like it. Rather, you should do it because you want to do it and because you love it. I love network design. And that’s why I love talking about it. Do I make mistakes? Of course, I do. I have committed many blunders in the past, both technical and non-technical mistakes.

In addition, I am never afraid to share my ideas and knowledge just because I can commit mistakes. That reminds me: A guy who is clever than I once said, “our experience is the sum of our mistakes.”

Many thanks to God, this way of thinking has made me a well-known and successful professional in my field, as the first and only Turkish CCDE, as one of the best CCDE instructors in the world, and as one of the best authors on computer network design.

Ah did I say I am driving my dream car now? 🙂 Yes, you can achieve this too.

Nothing should stop you. Just believe in your dreams and act on them and the sky will be your limit.

by Orhan Ergun at November 14, 2019 01:52 PM

Is Cisco CCDE Exam Vendor Neutral?

Is the Cisco CCDE exam really vendor neutral? Recently, one of my CCDE Bootcamp students asked me this question. He had heard that DMVPN might appear in the exam.

At the beginning of each of my CCDE classes, I introduce the topics that are most likely to be asked in the CCDE Practical exam. Cisco claims that the CCDE Practical exam is a vendor-neutral network design exam.

And I totally agree. In fact, not only DMVPN but also HSRP, GLBP, EIGRP, and GETVPN might appear in the exam, and you should know the details of these technologies from a design point of view.

All of these technologies are Cisco specific, so why is the exam still vendor neutral?

The reason is simple, but perhaps not obvious to those who don’t know the details of the exam.

These are very commonly deployed technologies. Almost everyone learned HSRP when they studied first hop redundancy protocols, right?

And can there be any decent network engineer who doesn’t know EIGRP?

If you consider yourself familiar with routing protocols, you have to know it.

But it is not only that they are commonly used technologies.

They are actually derived from well-known, standards-based protocols.

Let’s take a look at DMVPN.

DMVPN (Dynamic Multipoint VPN) uses two commonly known standard protocols: mGRE and NHRP. DMVPN is an architecture built by combining well-known, standard (RFC-based) technologies.

GETVPN is no different. GETVPN uses well-known technologies and a well-known design mindset: multipoint-to-multipoint shared encryption.

Both DMVPN and GETVPN are offered by many other networking vendors under different names.

Do I even need to say that EIGRP is already published as an RFC?

Now you know why the Cisco CCDE Practical exam is vendor neutral and, let’s be fair, it is the best network design exam in the industry.

by Orhan Ergun at November 14, 2019 01:50 PM

Mobile Broadband – Trending Technologies

Like most mobile broadband professionals, I am used to meeting the telco vendors such as Ericsson, Huawei, Cisco, Nokia, etc. It was a mind-shift for me personally when I started to meet Red Hat, Mirantis, and VMware as part of the NFV discussions, and I was really surprised that a company like Red Hat is a member of the European Telecommunications Standards Institute (ETSI), with a focus on mobile broadband evolution, participating in the Mobile Edge Computing (MEC) working group.

 

To gain a deeper understanding of SP networks, you can check my newly published “Service Provider Networks Design and Architecture Perspective” book.

It is obvious nowadays that the borders between different technology domains are fading: networks are shifting toward software-defined networks, with new abstraction layers realizing network convergence.

Since this post is the last one in the series, I chose to talk a little bit about some trending and future mobile broadband technologies, with the goal of giving an overview of the technology roadmap.

NFV (Network Functions Virtualization)

 

NFV offers a way to design, deploy, and manage network services by decoupling network functions from proprietary hardware, enabling them to run in a software environment.

The start was in October 2012, when a specification group named “Network Functions Virtualization”, with members from AT&T, BT, DT, China Mobile, NTT DOCOMO, and other Tier-1 operators, published the first NFV white paper at a conference in Darmstadt, Germany.

ETSI was selected to be the home of the NFV Industry Specification Group:

http://www.etsi.org/technologies-clusters/technologies/nfv

How about running a Huawei GGSN on Ericsson hardware? Or Juniper SRX software on Cisco UCS? Interesting, isn’t it?

 

IoT (Internet of Things)

 

From the mobile broadband perspective, there are many User Equipment (UE) categories. One of them is the normal consumer (the MBB user); the rest of the long list is mostly occupied by “things” that use the Internet service to contribute to or provide a specific service.

An example is an electric meter that reports usage directly to the electricity authority over an Internet connection provided by an embedded SIM card installed in the meter. There are, of course, many other basic and sophisticated use cases.

IoT platforms are evolving rapidly, and from the mobile broadband perspective a new framework of features (PSM, low-complexity UEs, extended timers, etc.) and technologies (NB-IoT, 5G, Mobile Edge Computing, network slicing, etc.) already exists or is on the roadmap to support this evolution.

 

MEC (Mobile Edge Computing)

 

MEC is an Industry Specification Group in ETSI (started in December 2014) whose goal is to provide an IT service environment and cloud-computing capabilities at the edge of the mobile network, within the RAN and in close proximity to mobile subscribers.

MEC in a nutshell

MEC pushes services to the edge with the goal of improving the user experience.
MEC will be an open platform exposing APIs to the network; it reuses NFV technology, and all techniques related to SDN and orchestration apply.

With MEC, an operator can easily deploy a specific gateway (an IoT gateway, for example) as close as possible to the access network to serve critical services that cannot tolerate delay (on the order of 1 millisecond!), such as smart cars and smart traffic lights.

5G

 

As the name indicates, this is the 5th generation of mobile networks, which is still under development and standardization. With the goal of a commercial launch by 2020 (the Tokyo 2020 Olympics) and a pilot launch by 2018 (the 2018 World Cup in Russia), 5G is foreseen as a use-case-driven technology that will provide a service framework for three main streams:

  1. Enhanced Mobile Broadband (eMBB) – evolution in user data rates (gigabit experience).
  2. Massive MTC (Machine Type Communications) – smart cities, smart homes, IoT.
  3. Ultra-Reliable and Low Latency Communications – mission-critical applications, smart cars, self-driving.

The target is to fulfill the requirements below:

  • Latency (E2E): 1 ms
  • Throughput: 10 Gbps per UE
  • Mobility at speeds reaching 500 km/h

A lot of use cases and sectors will be served by 5G, such as smart grids, smart vehicles, health care, industry and automation, and logistics and tracking.

I hope that this article, together with the previous four, has succeeded in shedding some light on the mobile broadband ecosystem and the corresponding technologies.

I will be happy to respond to your inquiries and comments.

by Orhan Ergun at November 14, 2019 12:20 PM

Mobile Broadband Ecosystem

Mobile broadband… You might have heard this term before, possibly in an ISP environment. The term has often been the name of a department within a mobile operator or a vendor organization. It appears in the profile descriptions of telecom professionals. In fact, it is everywhere when it comes to the ecosystem, or framework, that delivers Internet service over a mobile network.

 

To gain a deeper understanding of SP networks, you can check my newly published “Service Provider Networks Design and Architecture Perspective” book.

Let me quote the Wikipedia definition, followed by a small note:

Mobile broadband is the marketing term for wireless Internet access through a portable modem, mobile phone, USB wireless modem, tablet or other mobile devices.

The definition is true, but the note here is that you cannot rely solely on Google to understand the MBB-related technologies (EDGE, UMTS, 4G/LTE, etc.), because what you find there is mainly marketing articles and vendor-specific publications. That is fine, but as a lesson learned, one always needs to understand the technology concepts decoupled from vendor influence.

The good thing is that all the knowledge, principles, and service descriptions for mobile broadband are there in the standards, mainly 3GPP, which is freely accessible. So I would clearly say that the claim that it is hard to acquire MBB knowledge is debatable!

One just needs to know how to get the information: which 3GPP standard specifications? Which 3GPP release? Throughout this article, I am going to talk about the mobile broadband evolution and the related standardization specifications, which will enable the audience to see the big picture of MBB. The plan is that by the end of this five-article series, readers will be on the mobile broadband track.

The Mobile Systems in general are classified into generations (2G, 3G, 4G, 5G) and for every generation, there are a set of standards specifications that describe the related service descriptions, interfaces, protocols, Call flows, etc.

1G

1G was the first generation of analogue mobile systems, first launched by NTT Docomo in 1979 and followed by commercial deployments in the Nordics and the US in the early 1980s.

It was more like direct dialling rather than a network architecture controlled by the operator. FDMA (Frequency Division Multiple Access) was the technique used by the analogue technology to serve voice calls.

No data service was offered by 1G.

2G

GSM (Global System for Mobile Communication) was a standard developed by ETSI, European Telecommunications Standards Institute to realize the 2G digital Cellular Network.

The first commercial deployment was in Finland (Radiolinja) in July 1991. GSM had the target of delivering a digital circuit-switched network capable of providing voice services.

One can conclude that 2G GSM technology is a pure circuit-switched network delivering voice service, with no data service offered. This understanding will help us follow the evolution of the GPRS and EDGE technologies.

From the standardization perspective, the early GSM releases were called GSM Phase 1 & GSM Phase 2.

The logical architecture of 2G Network that was introduced by this release is shown below

In the figure, the yellow-highlighted network elements represent the RAN (Radio Access Network) domain, also called the BSS (Base Station Subsystem) in some contexts, while the green-highlighted elements represent the CN (Core Network).

  • BTS: Base Transceiver Station
  • BSC: Base Station Controller
  • MSC: Mobile Switching Center
  • GMSC: Gateway MSC
  • ISC: International Switching Center
  • HLR: Home Location Register
  • VLR: Visited Location Register
  • EIR: Equipment Identity Register

The diagram is simple, just to give an overview of the 2G architecture. The function of every network element will be illustrated in the next articles.

The catch here is that early 2G did not provide a framework for data service; the network served CS services only. All core interfaces were based on legacy SS7. The term circuit switched was dominant to the point that there were no IP interfaces, no data services, and no supporting handsets at that time.

Later releases are named GSM Phase 2+ (R96, R97, and R98).

Starting from GSM Phase 2+ (Release 96), there were some attempts to allow the network to deliver data services in addition to voice.

The R96 specs refer to 14.4 kb/s user data based on High Speed Circuit Switched Data, so the enhancements were still provided on the circuit-switched network.

Release 97 brought good news for data services: it introduced GPRS (General Packet Radio Service) on both the radio part and the network part, and thus the network architecture shifted to the architecture below, adding the PS Core Network (Packet Switched Core Network).

R97 Architecture – R97 Updated

  • SGSN: Serving GPRS Support Node
  • GGSN: Gateway GPRS Support Node

The two main network elements of the PS Core Network (SGSN and GGSN) were introduced in 3GPP Release 97. The protocol used between the SGSN and GGSN over the Gn/Gp interface is GTP (GPRS Tunneling Protocol, over UDP/IP); that was the first IP interface introduced at the time, together with the Gi interface, which is a transparent IP interface toward the Internet.

GPRS typically reached speeds of 40Kbps in the downlink and 14Kbps in the uplink.

At this stage, technically and theoretically we are still at 2G, but with the introduction of the data service, and to market the new service effectively, vendors and operators introduced a new marketing term: 2.5G.

So here is the catch: the marketing term 2.5G refers to the R97 introduction of GPRS over the GSM network.

Evolution continued in R98, and EDGE technology was introduced on the radio side, achieving a 384 kbps data rate (almost 3G), but it was not accompanied by any change in the core network.

Again, the evolution was labeled 2.75G by industry operators so that’s another marketing term referring to EDGE technology.

You might have seen the letter “E” for EDGE on your mobile phone while under 2G coverage.

3G

UMTS (Universal Mobile Telecommunications System) was introduced as part of 3GPP Release 99, together with EDGE enhancements. New interfaces were added, as well as new network elements such as the Node-B and RNC, which resemble the BTS and BSC in 2G.

The logical architecture of 3G Network introduced by R99 release is shown below

 

Figure: Mobile broadband architecture, R97 vs. R99

 

Theoretical rates were 2 Mbps for both uplink and downlink; however, actual rates at that time were 384 kbps.

The network became ready to welcome the “All-IP” interfaces introduced in the following release (Release 4). The concept of the All-IP network was introduced in 3GPP Release 4, where the legacy SS7 interfaces were standardized for SIGTRAN deployment (SS7 over IP).

3GPP Release 4 is a popular release for the CS Core Network: the split of control plane and user plane was achieved by introducing the Media Gateway to handle the user plane and dedicating the MSC to the control plane. There was no change from the R99 PS Core Network architecture.

In R5, the major enhancement was on the air interface, introducing HSDPA (High Speed Downlink Packet Access) with theoretical rates of 14.4 Mbps; this release also introduced IMS (the IP Multimedia Subsystem).

R6 added the HSUPA (High Speed Uplink Packet Access) evolution that achieves 5.76 Mbps on the Uplink.

Note : In Mobile Broadband, Downlink is the direction from Network to MS and Uplink is the direction from MS to Network.

R7 introduced evolved HSPA (HSPA+) with peak speeds of 42 Mbps. 3GPP Release 7 is a well-known release in the PS Core domain because it introduced a well-established and widely deployed architectural feature, Direct Tunnel, which gives the option of establishing the user plane directly between the RNC and the GGSN, bypassing the SGSN.

 

 

Just as EDGE was labeled 2.75G, the HSPA technology has been marketed under terms such as 3.5G and 3.75G.

You should see the “3G”, “H”, or “H+” signs on your mobile according to the technology deployed by your carrier.

LTE (A step towards 4G)

3GPP Release 8 is one of the main Evolutionary stages when the 3GPP community decided to use IP (Internet Protocol) as the key protocol to transport all services.

A new core network architecture was introduced as an evolution of the packet-switched core in GPRS/UMTS, named EPC (Evolved Packet Core), with the direction of not having a circuit-switched domain, in the sense that the new EPC would deliver both data and voice services.

In older releases the RAN needed to integrate with both the CS and PS networks, whereas in LTE the eNodeB integrates only with the EPC. The logical architecture of LTE/EPC is shown below.

 

Figure: LTE/EPC logical architecture

 

  • eNB: Evolved Node-B
  • MME: Mobility Management Entity
  • SGW: Serving Gateway
  • PGW: PDN Gateway
  • HSS: Home Subscriber Server
  • OCS: Online Charging System
  • PCRF: Policy and Charging Rules Function

That was the innovative LTE (Long Term Evolution), which most operators in the world started to adopt, with serious momentum towards 5G.

The diagram below shows the differences between the main packet core reference architectures (R6, R7, and R8).

The Scandinavian operator TeliaSonera deployed the first commercial LTE network in June 2009. Theoretical data rates were 300 Mbps, but actual rates never exceeded 100–150 Mbps during this pilot stage (2009–2010).

 

Here comes a well-known confusion: is LTE 4G or not? Does the evolutionary 3GPP R8 cover the 4G requirements?

 

The answer is always the same: look to marketing! LTE was introduced and marketed as 4G although, in reality, it is not. From the standards perspective, the 4G requirements are covered by LTE-Advanced, which is standardized in 3GPP R10; however, introducing the new ecosystem under the name “3G” would have been misleading.

These 4G requirements are defined by the ITU (International Telecommunication Union). Please have a look at the link below for more insight into the 4G (LTE-A) requirements.

http://www.etsi.org/deliver/etsi_tr/136900_136999/136913/08.00.01_60/

The confusion is still ongoing, with LTE-Advanced being marketed as 4.5G while it is 4G as per the standards evolution.

3GPP Release 8 provides a means to deliver voice services over the CS network via CS Fallback, as an alternative to Voice over LTE. That is justified because the Voice over LTE solution was not mature enough at the time R8 was released, and operators were not ready to migrate all voice services from the CS network to the LTE/EPC network.

Moving beyond R8, there is no big change in the core architecture. 3GPP R9 continued the enhancement of the LTE radio side, with some enhancements to CSFB and the femtocell.

3GPP R10 introduced the standardization of LTE-Advanced and is thought to be the standard 4G deployment.

3GPP R11 continued the enhancement of the radio physical layer and of MTC (Machine Type Communications), and introduced SaMOG, enabling the integration of trusted non-3GPP access (Wi-Fi) with the EPC.

3GPP R12, the currently “frozen” release, made enhancements to the physical radio layer, small cells, and MIMO, and introduced the device-to-device proximity service.

3GPP R13 and R14 are still in the “open” state; some topics related to 5G are already being discussed, while the expected date for the 3GPP 5G standardization release is June 2018.

There are a lot of details, bits and bytes, that I would like to discuss, but I believe this is sufficient for the first article in a five-article series. I hope it gave an overview of the mobile broadband ecosystem and the corresponding standardization releases. I am going to build on that and start publishing a weekly article from the list below.

  • Article (2) – The Core Network Architecture (2G & 3G)
  • Article (3) – Mobile Broadband Essential Terms & Concepts
  • Article (4) – Getting the Service in 3G (under the hood)
  • Article (5) – Introduction to LTE

Thanks and waiting for your insights in the comment box below.

by Orhan Ergun at November 14, 2019 12:12 PM

Common Networking Protocols in LAN, WAN and Datacenter

Spanning Tree, link aggregation, VLAN, and first hop redundancy protocols are used in campus, service provider access and aggregation, and data center environments. There are certainly other protocols that are common across these places in the network, but in order to keep this article short and meaningful, I chose these four.

 

I will describe spanning tree, link aggregation, 802.1Q VLANs, and first hop redundancy protocols at a high level, since I will explain them in detail later in separate articles.

For the more advanced layer 2 protocol information check this article.

 

Spanning Tree – IEEE 802.1d, 802.1w, 802.1s

 

Spanning tree is used to build a loop-free control path between Ethernet switches in campus, service provider, and data center environments. It prevents data plane loops by creating a tree!

Loop prevention is very critical for Ethernet since there is no TTL value or any other loop mitigation mechanism encoded in the Ethernet header.

Loop prevention is achieved by blocking the link which has a higher cost to the root switch in the topology.

802.1d, also known as the original or legacy spanning tree, has been extended by Cisco and named PVST+.

PVST+ works in a similar way to 802.1d, except that a link is blocked for each VLAN separately. The topology can be adjusted through root switch placement.

Thus PVST+ provides flexibility compared to 802.1d, since with 802.1d legacy spanning tree only one tree can be created and all VLANs are mapped to that tree.

Redundancy can only be provided as active/standby with 802.1d legacy spanning tree, but with PVST+, Cisco’s per-VLAN implementation of 802.1d, links can be used simultaneously through VLAN-based load balancing.

Even/odd VLAN root switch placement at the distribution layer is a very common deployment method and provides a deterministic topology, so troubleshooting can be easier. But if the number of VLANs is high, the configuration can be quite complex.

Rapid Spanning Tree is the IEEE work that adds several enhancements to the original 802.1d and is called 802.1w. Rapid Spanning Tree is commonly used for fast convergence, and the author strongly recommends it for campus networks.

MST, which is 802.1s, is the third implementation of spanning tree; its use case is data center and service provider access networks that have thousands of VLANs and require large-scale bridging support.

The downside of spanning tree is that it blocks links for any given VLAN, and root switch placement for odd/even VLANs is complex to manage. An alternative approach for campus networks is link aggregation or multi-chassis link aggregation, which is the subject of the next section.

There are other drawbacks of spanning tree, such as suboptimal traffic flow, lack of multipathing, and so on; these can be avoided by using other protocols such as SPB, TRILL, and FabricPath. You can find more information about large-scale bridging here.
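As a minimal sketch of the even/odd root placement mentioned above (VLAN numbers and priority values are illustrative), each distribution switch is made the primary root for half of the VLANs and the backup root for the other half:

  ! Distribution switch 1 - root for odd VLANs, backup root for even VLANs
  spanning-tree mode rapid-pvst
  spanning-tree vlan 1,3,5,7 priority 4096
  spanning-tree vlan 2,4,6,8 priority 8192

  ! Distribution switch 2 - the mirror image
  spanning-tree mode rapid-pvst
  spanning-tree vlan 2,4,6,8 priority 4096
  spanning-tree vlan 1,3,5,7 priority 8192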

 

Link Aggregation – IEEE 802.3ad

 

To overcome the drawback of spanning tree, which is blocked links, link aggregation can be used between switches. Link aggregation provides both load sharing and redundancy.

At least two links are placed in a bundle for load sharing and, depending on the vendor implementation, 4, 8, 16, or 32 links can be placed in a bundle. Link aggregation might be used between the access and aggregation/distribution layers and between the distribution and core layers, depending on capacity requirements.

Link aggregation works between two switches; if it is run across three or more switches, it is called multi-chassis link aggregation.

Aggregation/distribution layer devices synchronize state information over the inter-switch links. This synchronization can be achieved with ICCP (Inter-Chassis Communication Protocol) or with vendor-specific protocols.

VSS and vPC, two technologies familiar to network engineers with Cisco switch experience, use multi-chassis link aggregation to bundle two physical chassis into one logical device.
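As a small illustration (interface names and numbers are hypothetical), an IEEE 802.3ad/LACP bundle between two switches can look like this on each side:

  ! Two physical links bundled into one logical port channel using LACP
  interface range GigabitEthernet1/0/1 - 2
   channel-group 1 mode active
  !
  interface Port-channel1
   description Uplink bundle toward the distribution layer
   switchport mode trunk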

 

VLAN – Virtual Local Area Network / 802.1q

 

By adding another tag to the Ethernet frame, virtualization and segmentation can be achieved.

Similar to Cisco’s VSAN technology in the storage area network, IEEE 802.1Q adds a 4-byte header to the Ethernet frame and addresses segmentation and multi-tenancy in campus, service provider, and data center networks.

The VLAN concept is used to address either broadcast/multicast scope or security concerns. Broadcast, unknown unicast, and multicast packets, also known as BUM traffic, are flooded everywhere in the network.

By separating the network with VLANs, BUM traffic is flooded only within the particular VLAN. What happens in Vegas stays in Vegas!

Since the NICs of PCs and servers have to process every broadcast packet, limiting this traffic by dividing users into separate VLANs is a very common practice and is recommended by the author.

Another use case of VLANs is security. Inter-VLAN traffic can be passed through security devices such as firewalls and IDS/IPS, so unwanted communication or even zero-day attacks can be prevented.
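A minimal 802.1Q trunk sketch (the VLAN IDs and interface are illustrative; the encapsulation command is only needed on platforms that support more than one trunking encapsulation) carrying only the VLANs in use between two switches:

  interface GigabitEthernet1/0/24
   description Trunk toward the distribution switch
   switchport trunk encapsulation dot1q
   switchport mode trunk
   switchport trunk allowed vlan 10,20,30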

 

First Hop Redundancy Protocols – HSRP, VRRP, GLBP

 

If the requirement is to use the same VLAN on many access switches, then a layer 2 access design is used in the campus network.

Layer 2 access design will be explained later in a separate article. In the layer 2 access design, the distribution layer switches are used as the default gateway for the end hosts.

For high availability purposes, using at least two distribution switches is best practice.

The question is: which distribution layer switch will be the gateway of the hosts for a particular VLAN?

There are three approaches for the first hop gateway redundancy.

HSRP (Hot Standby Router Protocol), VRRP (Virtual Router Redundancy Protocol), and GLBP (Gateway Load Balancing Protocol).

HSRP is a Cisco proprietary protocol and works in an active/standby fashion. For a particular VLAN, only one of the switches will be active, and the other switch takes over if the active device fails.

VRRP is the IETF’s answer to the problem and works in a similar way to HSRP. A common deployment method for HSRP and VRRP is to use one switch as the default gateway for one set of VLANs and the other switch for the remaining VLANs.

Depending on the number of VLANs, this approach might be complex from the configuration point of view.

GLBP is also a Cisco proprietary default gateway redundancy protocol. GLBP provides redundancy for hosts within the same VLAN, and two switches actively forward traffic for hosts in the same VLAN.

If one switch fails, only roughly half of the hosts in that VLAN are affected. With HSRP and VRRP, if the active switch fails, all hosts in that VLAN are affected by the failure.

by Orhan Ergun at November 14, 2019 11:42 AM

Push and Pull Based Control Plane Mechanisms

Control plane packets are used to build a communication path between networking devices. In some cases, the control plane is also used to advertise and learn endpoints.

Imagine a network consisting of such devices: in order to create a graph or tree among them for bridging or routing purposes, control plane protocols are used.

As a network engineer, although I keep application requirements in mind during a network design, in general layer 4 and above is just boring to me.

Spanning tree, G.8032, RPR, TRILL, SPB, FabricPath, EAPS, and PBB-TE (PBT) are control plane protocols at layer 2. They are used to create a communication path, in general a tree. Some of them allow VLAN-based load balancing; some allow flow-based load balancing with ECMP (Equal Cost Multipath) or ECT (Equal Cost Tree).

But if you have read this far, you will notice that I have not yet mentioned reachability information. At layer 2, reachability means, for us, Ethernet MAC addresses (or Frame Relay PDUs, ATM cells, etc.), though all of the protocols above are used for the Ethernet control plane.

In general ( SPBM is different ), reachability information is learned through flooding and learning in the case of Ethernet. This type of learning mechanism is called Data Plane learning.

Since bridges and switches process the layer 2 frame and build their tables based on end-user packets (not on information advertised between the devices) while the frame visits all nodes in the same broadcast domain, every bridge learns all the MAC address information in the Ethernet case. (There is a conversational learning mode, though!)

Routing protocols work slightly different than layer 2 protocols.

OSPF, IS-IS, EIGRP, RIP, and BGP are the routing protocols I will mention here.

They are used to create a communication path as well. Between two endpoints there might be many nodes, and the protocols try to find an optimal path based on their best path selection algorithm.

This process is not different than layer 2 protocols communication path creation.

But routing protocols also advertise reachability information. For routing protocols, the reachability information is in general IP (Internet Protocol) prefixes (although BGP and IS-IS are used to carry layer 2 reachability as well).

Advertisement of reachability information between routing protocol neighbors is an example of a push-based control plane. If reachability information is learned from routing protocol packets, it is control plane learning.

In the case of Ethernet, you learn MAC address information from the end-user packets, not from routing protocol packets (LSAs, LSPs, and so on). Thus it is data plane learning and can be thought of as a subset of the pull-based mechanism.

If you are familiar with LISP (Locator/Identifier Separation Protocol), the EID space is registered to the mapping server by its locators. Ingress tunnel routers learn the reachability information (the destination EID space) from the mapping database. The mapping database keeps the EID (Endpoint Identifier) to RLOC (Routing Locator) mapping.

As you can see from the LISP example, reachability information is kept on a special node or nodes, and all the switches/routers pull the reachability information from that database when they need it.

With pull-based mechanisms, memory requirements on the networking devices can be reduced, but there will always be an initial delay to learn reachability information (the ITR asks the mapping database for the reachability, and the mapping system returns the RLOC to the ITR). You can think of this as similar to DNS, or to NHRP mapping and queries in DMVPN.

by Orhan Ergun at November 14, 2019 11:37 AM

Datacenter Design: Shortest Path Bridging 802.1aq

IEEE 802.1aq Shortest Path Bridging (SPB) uses IS-IS as an underlying control plane mechanism that allows all the links in the topology to be active.

In sum, it supports layer 2 multipath. SPB is used in the datacenter; however, it can also be used in the local area network. In this article, Figure-1 will be used to explain shortest path bridging operation.

 


 

Figure-1 – Leaf and Spine Topology

 

In Figure-1, both leaf and spine nodes run IS-IS to advertise the topological information to each other.

In SPB, IS-IS is used by the bridges to find the shortest path to each other, and it allows the topology to be calculated.

But unlike routing, large scale bridging uses only IS-IS link state protocol for the topological information, not for the reachability information.

This means that MAC addresses are not advertised within IS-IS.

Some vendor implementations can also use IS-IS to advertise MAC address information since they only need an additional TLV for this operation. Scalability of IS-IS for the MAC addresses advertisement is questionable for large scale deployment; thus, both BGP for MAC address distribution and IS-IS for physical topology creation might be a good option.

IS-IS is used on the underlying physical network to create a topology for layer 2. Furthermore, overlay multi-tenant networks still use the flood-and-learn mechanism, also known as data plane learning, to learn MAC address information.

There are two flavors of SPB (as depicted in Figure-2): SPBV and SPBM. SPBV stands for Shortest Path Bridging for VLAN; SPBM stands for Shortest Path Bridging for MAC.

The problem associated with SPBV is very similar to provider bridging. In other words, all the nodes learn the MAC addresses of the end hosts. In short, the scalability of core network is still a problem for SPBV.

 


 

Figure-2: SPB-V and SPB-M (Source: cisco.com)

 

You might be thinking that if you use PVST+ or Rapid PVST+, which build a separate topology for each VLAN, all the paths in the network will be used. Yes, this is correct, but there are two caveats.

One is that you need to carefully design which VLAN will be used on which link, since spanning tree will block one of the links; as a result, planning this brings management complexity compared to using a single tree for all VLANs, and it may increase troubleshooting time due to the complex configuration.

The other is that since the second link is standby, if the first link goes down, reconvergence takes time; in addition, application traffic running on the active link will be dropped during the convergence event. If multipathing is enabled, the secondary link is already active, and only the traffic running on the failed primary link has to be redirected to the second link; this operation is very fast since there is no convergence event at the protocol layer.

If multipathing is implemented in hardware, microsecond-level reconvergence can be achieved; if it is implemented in software, traffic can continue over the second link within milliseconds. But convergence at the protocol level itself can be extremely time consuming, especially in the case of spanning tree.

As I mentioned earlier, another version of SPB is SPBM, which is the shortest path bridging MAC in MAC solution. It is very similar to provider backbone bridging at the data plane (PBB encapsulation is used); nonetheless, the shortest path bridging does not use spanning tree as its control plane.

Instead of spanning tree, IS-IS is used to build the topology in shortest path bridging. Thus, SPB supports multipath bridging. Also, MAC addresses are hidden from the core of the network.

For data center leaf and spine architecture (as shown in Figure-1), spine switches do not keep state for the MAC addresses, and they do not know MAC addresses.

Thus, the overall scalability of the fabric can be much higher than with SPBV. Overall, SPBM provides much higher scalability for layer 2 networks than SPB’s other flavor, SPBV. All in all, both of them are far superior to spanning tree due to layer 2 multipath support, simple operation for complex topologies, and scalability.

To gain a deeper understanding of SP networks, you can check my newly published “Service Provider Networks Design and Architecture Perspective” book.

by Orhan Ergun at November 14, 2019 11:33 AM

HSRP, VRRP and GLBP Basics and Comparison

HSRP, VRRP and GLBP are the three commonly used first hop redundancy protocols in local area networks and the data center.

In this post, I will briefly describe them and highlight the major differences. I will also ask you a design question, so we can discuss it in the comment section below.

I am explaining this topic in deep detail in my Instructor Led CCDE and Self Paced CCDE course.

HSRP and GLBP are Cisco-specific protocols, but VRRP is an IETF standard. So if the business requirement states that more than one vendor will be used, then VRRP is the best choice to avoid any vendor interoperability issues.

For the default gateway functionality, HSRP and VRRP use one virtual IP address that corresponds to one virtual MAC address.

GLBP operates in a different way. Clients still use one virtual IP address, but more than one virtual MAC address is used: each default gateway switch has its own virtual MAC address but the same virtual IP address.

To illustrate this, let’s look at the below picture.

 

 

In the picture above, the clients use the same gateway MAC address since the first hop redundancy protocol is HSRP.

If GLBP were in use, we would see different gateway MAC addresses on the PCs.

HSRP works in an active/standby fashion; GLBP works in an active/active fashion.

Both nodes/gateways in any VLAN can pass traffic if GLBP is used. Bonus: this is called flow-based load balancing.

Flow-based load balancing is not possible with HSRP or VRRP. The most you can have with HSRP and VRRP is VLAN-based load balancing.

For one set of VLANs, one switch is used as the active node; for a different set of VLANs, the other switch is used as the active node.

For example, all the clients in VLANs 1–100 can use the left switch as their default gateway, while for VLANs 101–200 the right switch can be used as the default gateway. If you do this, all the physical links in the topology can still be utilized and neither switch stays idle, as sketched below.
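As a minimal sketch of that split (the VLAN numbers, addresses, group numbers, and priorities are illustrative), the left switch is given the higher HSRP priority for one VLAN and the right switch for the other:

  ! Left distribution switch - active gateway for VLAN 10, standby for VLAN 110
  interface Vlan10
   ip address 10.10.0.2 255.255.255.0
   standby 10 ip 10.10.0.1
   standby 10 priority 110
   standby 10 preempt
  !
  interface Vlan110
   ip address 10.110.0.2 255.255.255.0
   standby 110 ip 10.110.0.1
   standby 110 priority 90
   standby 110 preempt

The right switch mirrors the priorities, so each switch is active for half of the VLANs. With GLBP, a single group per VLAN (for example, glbp 10 ip 10.10.0.1) would instead answer host ARP requests with different virtual MAC addresses, so both switches forward traffic for the same VLAN.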

If you lose the active node in an HSRP/VRRP network, all hosts in a given VLAN are affected. But with GLBP, since both nodes are active and only half of the traffic passes through the failed node, only half of the clients in any given VLAN notice the failure.

This is an important network design criterion for first hop redundancy protocols.

 

Question :

 

Figure: HSRP/VRRP/GLBP topology for the questions below

 

 

Question 1 : What is the name of this topology ?

Question 2: Is HSRP or GLBP more suitable, and why?

Share your answers with your name and email since the correct answers will receive a surprise prize.

By the way please share this post on social media.

by Orhan Ergun at November 14, 2019 10:57 AM

Inter AS Option C – Design Considerations and Comparison

Inter-AS Option C is the most complex, least secure, and least commonly deployed inter-provider MPLS VPN option, but it is also the most scalable one.

I am explaining this topic in deep detail in my Instructor Led CCDE and Self Paced CCDE course.

In this post, I will explain how service providers can use Inter-AS Option C to provide customers with an end-to-end MPLS VPN service.

In the Inter AS Option B post, I explained that ASBR routers between the service providers do not keep a VRF table for the VPN customers.

As depicted in Figure 1 below, with Inter-AS Option B an MP-BGP VPNv4 session is set up between the service providers’ ASBR PEs.

 

 


 

Figure 1: Inter-AS Option B

 

With Inter-AS Option B, the ASBR routers, the provider edge devices between the service providers, maintain only the customers’ VPN prefixes in the BGP table.

In fact, I showed that a VPNv4 BGP session is set up between the ASBRs.

The high-level operational differences between Inter-AS Option C and Inter-AS Option B are twofold: in Option C the ASBRs do not hold a VRF table, and, unlike in Option B, they do not keep the customers’ VPN prefixes either.

ASBRs are the edge routers that connect the service providers to each other.

As depicted in Figure 2 below, with Inter-AS Option C the VPNv4 BGP session is set up between the route reflectors of the service providers.

 


 

Figure-2 Inter AS Option C

 

In fig.2, there are two service providers: service provider A and service provider B.

Service Provider A:

Service provider A has two customers: customer A and B

For scaling purposes, all the provider-edge routers run VPNv4 BGP session with VPNv4 route reflector.

Service provider B:

Service provider B has two customers: customer A and B

A and B are companies which require Inter AS MPLS VPN service.

Service provider B runs IS-IS internally ( It could run other IGP protocols as well for sure)

PE-CE routing protocols are enabled under the VRFs; thus the service provider has two separate VRF tables, one per customer.

For scaling purposes, all the provider-edge routers run VPNv4 BGP session with VPNv4 route reflector.

Inter AS Option C runs by enabling VPNv4 BGP session between the Route Reflectors of the Service Providers.

The provider edge routers, including the ASBR PEs, do not set up VPNv4 neighborships across the AS boundary; the inter-provider VPNv4 session exists only between the route reflectors.

For the route reflectors to set up this VPNv4 neighborship, they must be reachable from each other, since BGP runs over TCP.

For this reachability, the route reflector loopback interfaces are advertised in the global routing tables of the service providers.

As for AS Option C depicted in fig. 3, ASBR PEs of the service providers run IPv4 BGP session between them.

Over IPv4 BGP session, loopback interfaces of route reflectors and internal PEs are advertised. Next, I will explain why you need to advertise internal PE loopback addresses, not just route reflectors.

ASBR PEs know the loopback address of Route reflector through OSPF in the Service Provider A network, through IS-IS in the Service Provider B network.

Since the VPNv4 EBGP neighborship is set up between the VPNv4 RRs of the service providers, the next hop for the VPN routes would normally become the route reflector.

 

Traffic trombones through the RR unless next-hop-unchanged behaviour is configured on the route reflectors (on IOS, the neighbor next-hop-unchanged option under the VPNv4 address family). This is one of the important caveats of Inter-AS MPLS VPN Option C.

 

Once this feature is enabled, the route reflector does not change the next hop to itself even though the peering between the RRs is EBGP VPNv4. (Whenever there is an EBGP session between two BGP speakers, the next hop is normally changed, since they are regular EBGP neighbors rather than client and reflector; for IBGP, an RR does not change the next hop by default.)

In this case, because internal PEs continue to be a next hop for the customer prefixes, internal PE loopback is advertised on IPv4 BGP session between ASBRs.

For MPLS VPN to work end to end, a transport label must exist toward the VPN next hop; therefore MPLS labels are assigned to the IPv4 unicast routes exchanged between the ASBRs, which are the loopbacks of the RRs and of the internal PEs.
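As a hedged sketch of these two pieces (AS numbers and addresses are illustrative, IOS-style syntax assumed), the route reflector keeps the original PE next hop on the inter-provider VPNv4 session, and the ASBR advertises the loopbacks with labels:

  ! VPNv4 RR in SP A - EBGP VPNv4 to SP B's RR, next hop left unchanged
  router bgp 64500
   neighbor 198.51.100.2 remote-as 64501
   neighbor 198.51.100.2 ebgp-multihop 255
   neighbor 198.51.100.2 update-source Loopback0
   address-family vpnv4
    neighbor 198.51.100.2 activate
    neighbor 198.51.100.2 next-hop-unchanged
  !
  ! ASBR in SP A - labeled IPv4 unicast toward SP B's ASBR for the RR and PE loopbacks
  router bgp 64500
   neighbor 203.0.113.2 remote-as 64501
   address-family ipv4
    neighbor 203.0.113.2 activate
    neighbor 203.0.113.2 send-label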

 

Inter-AS Option C Design Objectives:

 

  • Inter-AS Option C is unique in that it requires internal addressing to be advertised between the service providers.
  • Those addresses are leaked into the providers’ global routing tables, something providers dislike.
  • It can still be used in a large-scale design when a single company operates multiple autonomous systems (rather than between separate service providers).
  • This is just one network design option. Please note that it is not the only option.
  • Inter-AS Option C is very difficult to implement because it requires meticulous redistribution at the edge of the networks, VPNv4 neighborships between the route reflectors, IPv4 BGP neighborships between the ASBRs, label advertisement, route-target agreement between the service providers, and so on.
  • It is insecure because internal addresses leak between the providers.

I know you must be wondering whether this Inter-AS design is common between service providers in real life.

In reality, I have helped with Inter-AS MPLS VPN designs for service providers twice, and both engagements concluded with Inter-AS Option A.

To gain a deeper understanding of SP networks, you can check my newly published “Service Provider Networks Design and Architecture Perspective” book.

Let me ask you these questions.

Is Inter AS MPLS VPN enabled on your network?

Are you planning to design Inter AS VPN?

Let’s discuss this in the comment section.

by Orhan Ergun at November 14, 2019 09:31 AM

Russ White – Orhan Ergun CCDE Practical Exam Scenario

I am glad to announce that Russ White and I have been preparing a CCDE Practical Exam ( Lab Exam ) Scenario. This is the most realistic scenario available anywhere. Why? Because it is not only prepared by a CCDE but also by one of the exam founders!

Disclosure: This scenario is not asked in the CCDE exam, but its structure and ideas are very similar to what would be found in the exam.

Russ White is one of the CCDE exam founders and the Author of Optimal Routing Design, Practical BGP, Advanced IP Network Design, and many other network design and architecture books. Russ and I have put much effort into preparing this scenario.

I will present this scenario for the first time in the July CCDE training class. (You can also see here the topics I will cover in the class.)

There are already more than 20 people in the class and multiple people will attend the CCDE Exam in August. I am sure this scenario will be an excellent resource for the CCDE candidates.

If you want to be a good network designer as well as a CCDE, it is not enough to understand the technologies alone, but you also need to understand how business intersects with those technologies.

You need to understand how business drives the network, and how the network can drive the business.

How can you analyse the network ? How can you prepare an architecture ? What are the attributes of good design? What are the most important tradeoffs? All of this will be discussed and more!

This is the ONLY CCDE design class offered online. Travel and time away from home are eliminated, saving you not only time but money!

UPDATE: THE JULY CLASS IS FULL!
REGISTER FOR YOUR SEAT AT THE OCTOBER CLASS NOW BEFORE IT SELLS OUT!

by Orhan Ergun at November 14, 2019 09:19 AM

MPLS Layer 3 VPN Deployment

In this post, I will explain MPLS Layer 3 VPN deployment through a case study. This deployment is mainly for a greenfield environment, where you deploy network nodes and protocols from scratch. This post does not cover migration from legacy transport mechanisms such as ATM and Frame Relay, as that is covered in a separate post on the website.

I am explaining this topic in deep detail in my Instructor Led CCDE and Self Paced CCDE course.

With MPLS, both Layer 2 and Layer 3 VPNs can be provided. The main difference between MPLS Layer 2 and Layer 3 VPN from a deployment point of view is that in MPLS Layer 3 VPN the customer has a routing neighborship with the service provider.

In MPLS Layer 2 VPN, the service provider does not set up a routing neighborship with the customer.

The topology below shows a basic MPLS network.

 


 

Figure – MPLS Network , Components and the Protocols

 

  • CE is the Customer Edge device and generally located at the customer location.
  • PE is the Provider Edge Device and located at the Service Provider POP location.
  • P is the Provider device and located inside the Service Provider POP location.

 

Case Study :

 

In the topology above, the customer is running EIGRP as its IGP, and the service provider’s infrastructure IGP is IS-IS. Which technologies and protocols should be enabled on the CE, P, and PE devices?

I will go through each checkbox in the picture above, and you will understand whether we should enable a particular technology or protocol on each device.

On the CE device

EIGRP: EIGRP should be enabled because, as indicated in the case study, the customer wants to use EIGRP as its IGP. EIGRP is activated on the PE-CE link.

IS-IS : On the CE device, IS-IS is not required.

MPLS : On the CE device, MPLS is not required based on these requirements. MPLS could be enabled if this customer receives Carrier Supporting Carrier Service.

MP-BGP : On the CE device, MP-BGP (Multi Protocol BGP) is not required.

Redistribution: If the customer used different routing protocols inside its network, it would need to do redistribution. That is not stated in the case study, so there is no need for redistribution. If redistribution were necessary, try to follow redistribution best practices.

VRF: If the customer is not doing layer 3 virtualization, there is no need for VRFs on the CE.

 

On the PE Device

 

The same set of protocols will be analyzed. In a real-life deployment, the service provider might use a different IGP than IS-IS and will most probably have a different PE-CE routing protocol per customer as well.

EIGRP : On the PE device EIGRP is enabled for this customer. It is used to receive customer prefixes from the CE device. CE device (customer) advertises its IP prefixes over EIGRP neighborship.

MPLS: MPLS is enabled on the PE device as well, not on the PE-CE link but toward the P (Provider) device.

IS-IS: IS-IS is enabled on the PE device toward the P device as well. I wrote an IS-IS routing protocol Frequently Asked Questions post that you may want to read. Reading my article on IS-IS design considerations on an MPLS backbone might also be useful.

MP-BGP: Multiprotocol BGP is enabled for the Layer 3 MPLS VPN on the PE devices. A VPN session is created between two PE devices or between a PE and the BGP route reflector.

Redistribution : On the PE device, for this customer, redistribution is performed as well. EIGRP prefixes are received from the CE devices and redistributed into BGP on the PE devices.

VRF: On the PE device, a separate VRF is created for each customer. Different customers’ prefixes are placed in different VRF tables. A configuration sketch tying these PE pieces together is shown below.
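Here is a hedged PE configuration sketch for this case study (the VRF name, RD/RT values, AS numbers, and addresses are illustrative): the EIGRP VRF instance faces the CE, and mutual redistribution ties it to MP-BGP toward the VPNv4 route reflector.

  vrf definition CUSTOMER-A
   rd 64500:10
   address-family ipv4
    route-target export 64500:10
    route-target import 64500:10
  !
  interface GigabitEthernet0/1
   description PE-CE link
   vrf forwarding CUSTOMER-A
   ip address 172.16.1.1 255.255.255.252
  !
  router eigrp 100
   address-family ipv4 vrf CUSTOMER-A autonomous-system 100
    network 172.16.1.0 0.0.0.3
    redistribute bgp 64500 metric 10000 100 255 1 1500
  !
  router bgp 64500
   neighbor 10.255.0.1 remote-as 64500
   neighbor 10.255.0.1 update-source Loopback0
   address-family vpnv4
    neighbor 10.255.0.1 activate
    neighbor 10.255.0.1 send-community extended
   address-family ipv4 vrf CUSTOMER-A
    redistribute eigrp 100

The P routers, by contrast, need only the infrastructure IGP and MPLS, as the next section explains.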

On the P device

EIGRP: EIGRP is the customer’s IGP in this deployment, so EIGRP is not enabled on the P device. If the service provider decided to use EIGRP as its infrastructure (also called internal) IGP, then EIGRP would be enabled on the P device as well, but EIGRP is not a common infrastructure IGP in service provider networks.

MPLS: MPLS runs on the P device. In fact, the only job of the P device in the service provider network is packet processing between the edge devices. Thus, the infrastructure IGP and MPLS are the only necessary protocols.

IS-IS : In this case study, IS-IS is the IGP protocol of the Service Provider. That’s why IS-IS is enabled on the P device.

MP-BGP: BGP does not run on the P devices in an MPLS-enabled service provider network. This concept is known as a BGP-free core.

Redistribution: There is only ever one IGP on the P devices, so redistribution is never needed.

VRF: There is only the one global routing table on the P devices, so VRFs are never needed either.

Be Aware

MP-BGP runs only between the PE devices. The P device’s role is to provide reachability between the PE devices. I wrote an article for Network Computing stating that intelligence belongs at the edge, not in the core; this is also known as the KISS principle in network design.

Any routing protocol can be used between the customer and the service provider in MPLS Layer 3 VPN. The most common PE-CE routing choices in real life are static routing and BGP. If you want to understand how OSPF works in this role, have a look at the OSPF as a PE-CE routing protocol post.

If the customer uses any routing protocol other than BGP, redistribution is performed on the PE devices. On the PE devices, the BGP next hop is automatically set to the PE itself, so there is no need to configure ‘next-hop-self’. I explained BGP next hop behavior in IP and MPLS networks earlier in a separate post.

A BGP VPN route reflector can be used to reduce the complexity of the BGP mesh and to provide scalability in the service provider network, and it can be placed in a central location such as a data center. VPN route reflector placement is much more flexible than IP route reflector placement, and routing loops are not much of an issue.

I will write a separate post on BGP IP and VPN route reflector design considerations, but I recommend you have a look at my Fate Sharing post to understand the possible problems of using IP and VPN BGP route reflectors on the same device, which is also called a Multi-Service Route Reflector.

To gain a solid understanding of SP networks, you can check my newly published "Service Provider Networks Design and Architecture Perspective" book.

by Orhan Ergun at November 14, 2019 09:04 AM

PS Core Network Concepts

Most educational documents related to the PS Core network start with call flows: the Attach call flow, PDP Context, Paging, and so on. That was basically my problem when I started working in PS Core. The call flows include a lot of messages, which in turn include a lot of parameters and Information Elements, so starting with the call flows without knowing at least the identifiers included in these messages is not the best approach to understanding PS Core principles.

This is why this article is all about the MBB (Mobile Broadband) terms that are commonly present in all call flows and in most MBB talks in general. Once you are comfortable with them, the call flows will be easy to interpret.

Here are some of them, for clarification.

International Mobile Subscriber Identity (IMSI)

IMSI is a unique identifier that is allocated to each MS in the GSM/UMTS system and stored on the SIM card, conforming to the ITU E.212 numbering standard. It is composed of the Mobile Country Code (MCC), the Mobile Network Code (MNC) and the Mobile Subscriber Identification Number (MSIN).

 

Temporary Mobile Subscriber Identity (TMSI)

In order to support the subscriber identity confidentiality service, the VLRs and SGSNs may allocate Temporary Mobile Subscriber Identities (TMSIs) to visiting mobile subscribers.

Below is an example of an MS providing its P-TMSI identity to the network:

 

 

International Mobile Station Equipment Identity (IMEI)

IMEI identifies the handset, not the user. It consists of the following elements:

– Type Allocation Code (TAC). Its length is 8 digits.

– Serial Number (SNR) is an individual serial number uniquely identifying each piece of equipment within the TAC. Its length is 6 digits.

– Spare digit: this digit shall be zero, when transmitted by the MS.

Try typing *#06# to know your IMEI!

 

Access Point Name (APN)

In PS Core, an Access Point Name (APN) is a reference to a GGSN; in general, it can be seen as a reference to the service.

According to the APN set on the handset and provisioned in the network, the user will get a specific IP address that is routable to a certain service.

 

 

 

This is from my phone, where the provisioned APNs are set and configured. Most probably, if I choose MMS, I will get an IP address routable to the MMS PDN.

Let's now see, from a high-level perspective, how a session is established (i.e., how the user gets Internet access) in the 3G domain.

ps core call flow

 

  1. The UE sends an Attach Request to the core network, including the MS identity.
  2. The SGSN initiates the authentication procedure (AKA) for the UE in coordination with the HLR.
  3. Once authenticated, the SGSN updates the location in the HLR and retrieves the user's subscription profile.
  4. If there are no restrictions, the SGSN accepts the Attach Request.
  5. At this stage, the UE is able to request services in the form of an "Activate PDP Context Request" message.
  6. The SGSN conveys the request to the GGSN via the GTP message "Create PDP Context Request", enriched with parameters from the subscriber's subscription profile.
  7. The GGSN validates the request and verifies that the UE has no restrictions with respect to charging and policies. It allocates an IP address for the UE and then responds with the GTP message "Create PDP Context Response".
  8. The SGSN replies to the UE with an "Activate PDP Context Accept" message carrying the IP address allocated by the GGSN. (Prior to this message, radio resources are set up by the SGSN and the RNC.)

At this stage, the UE can browse the Internet using the routable IP address allocated by the GGSN. The bearer is divided into a Radio Access Bearer (RAB) between the UE and the SGSN, and a GTP tunnel between the SGSN and the GGSN.

The GGSN performs tunneling for the downlink (DL) traffic towards the UE and de-tunneling for the uplink (UL) traffic towards the Internet.

With the above call flow, I close the 3G discussion; starting from the next article, I will discuss the 4G Evolved Packet Core setup.

I hope that was beneficial. See you in the next article.

by Orhan Ergun at November 14, 2019 08:51 AM

Evolved Packet Core – Welcome to Long Term Evolution!

As an end user, I always welcome the "4G" signal indicator on my mobile because, for me, it basically maps to better download speeds, good-quality VoIP calls (Skype, Hangouts, WhatsApp, etc.), better streaming, and HD video.

 

evolved packet core

 

This article is all about that "4G" indicator. I will discuss the Evolved Packet Core together with E-UTRAN (Evolved Universal Terrestrial Radio Access Network), the technologies that realize the 4G service offered to end users.

With data rates above 100 Mbps and latency of a few milliseconds, enabling the best video streaming and online gaming experience, one may think of 4G networks as a replacement for 2G/3G networks, which is valid in some cases. However, the decision to "dismantle" 2G/3G is still only on the operators' roadmaps.

Before we go through the LTE/EPC network setup, let's list three main definitions and abbreviations that are closely related to 4G.

LTE, Long Term Evolution: LTE is basically the framework for delivering high data rates to mobile and data terminals. It started with 3GPP Release 8 and was introduced commercially to markets under the term "4G", although the "4G" requirements are only fully covered by LTE-Advanced (3GPP Release 10).

E-UTRAN, Evolved Universal Terrestrial Radio Access Network: E-UTRAN is basically the radio access network part of the LTE system. It is represented by the eNodeB, which replaces the old 3G RAN nodes (RNC and NodeB).

EPC, Evolved Packet Core: Evolved from the 3G PS Core, the EPC is the core network for the LTE framework. It consists mainly of the MME (Mobility Management Entity), SGW (Serving Gateway), and PGW (PDN Gateway), replacing the old SGSN and GGSN network elements.

The LTE/EPC network is a flat, all-IP network. All interfaces run over IP; no SS7, SIGTRAN, or legacy interfaces exist.

Below is the high-level architecture of the basic LTE/EPC network setup with the corresponding interfaces. The E-UTRAN is highlighted in yellow while the EPC is highlighted in green.

 

EPC architecture

 

eNodeB: The only E-UTRAN network element, replacing the 3G NodeB and RNC. It provides all radio management functions.

MME, Mobility Management Entity: The main core signalling node, replacing the control-plane role of the SGSN from 2G/3G.

SGW, Serving Gateway: The core network element that terminates the user-plane tunnel from the eNB. It has functions mapped from both the SGSN and the GGSN in 2G/3G.

PGW, PDN Gateway: The gateway core network element in the EPC, replacing the GGSN in 2G/3G.

HSS, Home Subscriber Server: The permanent and central subscriber database.

One observation here is that LTE is a pure data network that doesn't integrate with the legacy CS network. One may notice that the eNB is not connected to the MSC/MGW, the main nodes in the Circuit Switched (CS) network that deliver the voice service in 2G/3G. (Check the previous articles.)

The LTE network has brought some attractive voice solutions such as VoLTE (Voice over LTE) and VoWiFi (Voice over WiFi), which are trending at this stage, with many operators keen to deliver voice services over LTE instead of using the legacy voice solutions over the CS network.

The report from the GSA (Global mobile Suppliers Association) brings some interesting figures about LTE adoption across the world.

Q1 2016 results showed that there are currently 1.292 billion LTE subscribers worldwide, which represents roughly 100% yearly growth compared to the 647 million subscribers recorded in Q1 2015!

By April 2016, 55 operators had launched HD Voice using VoLTE and 126 operators were investing in Voice over LTE, including demos and PoCs.

Around 4XX smartphone models have been declared to support VoLTE.

LTE is literally the long-term evolution that boosts mobile broadband capabilities, enabling the introduction of evolutionary services such as VoWiFi and the development of subsequent technologies such as 5G, NB-IoT, network slicing, cloud computing, and others.

I hope that was beneficial as an introduction to the LTE/EPC network.

See you in the next article.

by Orhan Ergun at November 14, 2019 08:41 AM

IS-IS Design considerations on MPLS backbone

Using IS-IS with MPLS requires some important design considerations. IS-IS, as a scalable link-state routing protocol, has been used in Service Provider networks for decades.

In fact, eight of the nine largest Service Providers use the IS-IS routing protocol in their networks as of today.

If LDP is used to set up the MPLS LSPs, some important IS-IS design considerations should be carefully understood.

As you might know, the IS-IS routing protocol uses levels for hierarchy.

Similar to other routing protocols, synchronization is one of the considerations. IGP-LDP synchronization is required when the MPLS LSP is set up with LDP; otherwise, traffic black holes can occur.
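On Cisco IOS, for example, IGP-LDP synchronization for IS-IS can be turned on with a sketch as small as the one below; treat it as an illustrative fragment rather than a complete configuration.

router isis CORE
 ! advertise links with maximum metric until the LDP session on them is up,
 ! so traffic is not attracted to a path that cannot yet switch labels
 mpls ldp sync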

One of the important IS-IS design considerations when it is used with MPLS is that the PE devices' loopback IP addresses are not sent into the IS-IS Level 1 domain in a multi-level IS-IS design. This problem doesn't happen in a flat IS-IS design, since you cannot summarize the prefixes in a flat/single-level IS-IS deployment.

In the IS-IS Level 1 domain, internal routers only receive the ATT (Attached) bit from the L1-L2 router. This bit is used to generate a default route.

Even if there is more than one L1-L2 router, still only a default route is derived inside the Level 1 subdomain.

Internal IS-IS Level 1 routers don't know any Level 1 or Level 2 information from outside their own area.

In order to have MPLS Layer 3 VPN, the PE devices should be able to reach each other, even if they are in different IS-IS areas.

Only if they can reach each other through specific routing information can the MPLS LDP LSP be set up end to end.

You might think that they can use the default route (via the ATT bit) and still reach the routers in other areas, but that is not enough for the LSP.

The reason is that, by default, an LSR assigns a label only to a prefix for which it has an exact match in its RIB. RFC 5283 (LDP Extension for Inter-Area Label Switched Paths) relaxes this: if the LSR doesn't have an exact match for a prefix P1, but P1 is a subset of a RIB entry P, then a label should still be assigned to P1.

Note that it is a label for the exact prefix P1 (and not for P) that is installed in the LFIB; the RIB remains unchanged.

RFC 5283 changes the default behaviour of LDP label assignment from "exact match" to the more flexible "longest match".

Route Leaking vs. RFC 5283 (LDP Extension for Inter-Area Label Switched Paths)

So, in an IS-IS network, which method should be preferred?

Although it depends on other criteria as well, it is good to have a future-proof network from the design point of view, so the RFC 5283 implementation should be selected. It allows future flexibility: when you want to summarize even the PE loopbacks, you can do it.

Also, with RFC 5283, route leaking can still be configured.
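If route leaking is chosen, or combined with RFC 5283, the classic Cisco IOS way of leaking the PE loopbacks from Level 2 into Level 1 looks roughly like the sketch below. The ACL number and the loopback range are hypothetical placeholders.

router isis CORE
 ! leak only the /32 PE loopbacks from Level 2 into Level 1
 redistribute isis ip level-2 into level-1 distribute-list 100
!
access-list 100 permit ip 10.0.0.0 0.0.0.255 255.255.255.255 0.0.0.0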

PE loopback reachability can also be achieved in one more way. If the PE loopbacks are carried in BGP with a label, which is called BGP + Label or BGP-LU (Labeled Unicast), then there is no need for route leaking or RFC 5283. This operation is explained in the Seamless MPLS article.
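A hypothetical BGP-LU fragment on a Cisco IOS device could be as simple as the lines below, where a PE loopback is advertised in BGP together with a label; the peer address, AS number and loopback are made up for illustration.

router bgp 65000
 neighbor 10.0.0.5 remote-as 65000
 neighbor 10.0.0.5 update-source Loopback0
 address-family ipv4
  network 10.0.0.1 mask 255.255.255.255
  ! send the IPv4 prefix together with an MPLS label (RFC 3107 BGP Labeled Unicast)
  neighbor 10.0.0.5 send-label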

For more information on this topic, please have a look at my network design course by clicking here. 

by Orhan Ergun at November 14, 2019 08:33 AM

What is the MPLS tunnel label and why is it used?

In networking, we often use different terms to define the same thing. MPLS tunnel label and transport label are just two of those.

Not only transport and tunnel label but also other terms are used to describe the same function that this label provides.

Let me first explain why and where the MPLS tunnel label is used.

 

what does pe ce mean

 

In the above figure, the traditional MPLS node roles are shown. CE, PE and P are the node types used in MPLS networks.

The tunnel label is only used to provide reachability between PE devices. You can think of the MPLS tunnel label as being similar to a GRE tunnel: with a point-to-point GRE tunnel, you create a connection between two nodes. With the MPLS tunnel label, exactly the same thing is done, but the tunnel endpoints are the PE (Provider Edge) devices.

In different resources you can find different terms describing this reachability. Instead of MPLS tunnel label, you may see: MPLS transport label, topmost label, outer label, or outermost label.

Many people don't realize that they all refer to the same thing: the label that provides reachability between two MPLS PE devices.
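To see the analogy, compare a hypothetical point-to-point GRE tunnel between two PE loopbacks with simply enabling LDP towards the core. In the MPLS case, the 'tunnel' to the remote PE is the transport label that LDP binds to the remote PE loopback; all addresses below are illustrative assumptions.

! GRE approach: an explicit point-to-point tunnel between the two PEs
interface Tunnel0
 ip address 172.16.0.1 255.255.255.252
 tunnel source Loopback0
 tunnel destination 10.0.0.2
!
! MPLS approach: enable LDP on the core-facing link; LDP then binds a
! transport (tunnel) label to the remote PE loopback 10.0.0.2/32
interface GigabitEthernet0/0
 mpls ip

On Cisco IOS, a command such as 'show mpls forwarding-table 10.0.0.2 255.255.255.255' would then show the label used to reach that remote PE loopback.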

There are many other terms you should know if you are in the networking field, and you can reach the most commonly used and important ones from here.

by Orhan Ergun at November 14, 2019 08:25 AM

ipSpace.net Blog (Ivan Pepelnjak)

Your First Public Cloud Deployment Should Be Small

I’ve seen successful public (infrastructure) cloud deployments… but also spectacular failures. The difference between the two usually comes down to whether the team deploying into a public cloud environment realizes they’re dealing with an unfamiliar environment and acts accordingly.

Please note that I’m not talking about organizations migrating their email to Office 365. While that counts as public cloud deployment when an industry analyst tries to paint a rosy picture of public cloud acceptance, I’m more interested in organizations using compute, storage, security and networking public cloud infrastructure.

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at November 14, 2019 07:17 AM

November 13, 2019

Network Design and Architecture

Introduction to Disaster Recovery

Businesses want to choose reliable equipment, components and technologies while designing a network. You may carefully deploy the most reliable equipment from your trusted vendor or the most mature technologies, but do not forget: eventually, every system fails!

Depending on where your datacenter is located, different disasters may happen. For the U.S., storms and tornadoes are not uncommon. I remember that just a couple of years ago, because of major flooding, Vodafone couldn't serve their customers in Turkey for at least a day.

Thus, resiliency is an important aspect of the design plan. In the simplest terms, resiliency means how fast you can recover from a failure.

Disaster recovery is the response and remediation process that a company follows after a planned or unplanned failure. Businesses often have a secondary datacenter used mostly for backup, although if the company has multiple datacenters, they can also be used as active/active.

If it is used as a backup, the secondary datacenter can take over in the case of a primary datacenter failure.

Recovery time will depend on business requirements. For mission-critical applications, the business may tolerate only very short, if not zero, downtime. The cost of the required equipment in the primary and backup datacenters, the skilled engineers who can manage the design, and the complexity of the design all change based on what the business expects from the disaster recovery service.

The amount of data loss a company can tolerate, also known as its recovery point objective (RPO), is a very important parameter. Recovery time can range from a couple of hours to days or even weeks, based on the company's applications. Highly critical applications require less downtime and less data loss; for that reason, a disaster avoidance solution might be a better option for businesses with many highly critical applications.

by Orhan Ergun at November 13, 2019 08:38 PM

4 people passed CCDE Lab with my CCDE training recently!

I realised just now that I didn’t share the names of the people who used my CCDE resources and got their CCDE numbers recently.

I know all of them, their capabilities and technical strengths, and I am happy to see that they are CCDEs now.

Congrats to Ken Young, Jaroslaw Dobkowski, Malcolm Booden, and Bryan Bartik.

Some of them used the Self-Paced CCDE Course, and some joined the Instructor-Led CCDE Training as well. I am honoured to hear good feedback from all of them and to share their feedback on the related pages of the website.

In 2017, including these four, around 10 people passed the CCDE Lab exam using my resources. And there was one cancelled exam in May 2017.

by Orhan Ergun at November 13, 2019 08:33 PM

CCIE vs. CCDE

CCIE vs. CCDE is probably one of the most frequently asked questions by networking experts.

To get more information on CCDE contents and syllabus, you can check my Instructor Led CCDE or Self Paced CCDE course webpages.

How many times have you asked yourself or discussed this topic with your friends? Many times, right?

"I have a CCIE in Routing & Switching and/or Service Provider; should I continue with design certifications such as the CCDE, or should I study for another expert-level certification, perhaps a virtualization certification?"

To illustrate my answer, let me give you an example.

Consider that you are going to build a greenfield network. (Usually, it is the same for brownfield as well.)

First, you need to understand the business: how many locations it has, where it is located, where the HQ or HQs, datacenters, and POP locations are, and so on.

After that, you try to understand how the business serves its customers.

It can be a retail, airport, stadium, or service provider network.

All these businesses have both similar and differing requirements.

For example, a stadium architecture requires ticketing systems, access control systems, and game streaming, all of which are connected to the network. So, you need to understand the business requirements, how the business generates its revenue, and how its systems interact with one another. Then, you provide the business with an architecture to support its requirements.

As an example, you may need to enable QoS or multicast for a particular application.

Architecture refers to the process of gathering, analyzing, and clarifying the business requirements.

Without Architecture, a Design Is Just a Guess

The designer needs to understand the business objectives and high-level functional specifications.

In the retail store example, store sales information may only be synchronized with a central location such as the datacenter for data analysis purposes, so the high availability requirements of the store may not have much priority.

Now, let me give an example that shows why understanding the design is important and why it requires different strategies.

A business has 1000 sites connected to two data centers. (Technically, we call this Hub and Spoke.)

It plans to open 1000 additional sites within 2 years.

The business wants to operate its own WAN network. While its data is highly confidential, the business carries only a small amount of data between the remote sites and the data centers.

The business can tolerate up to half an hour of downtime. Since the enterprise has many remote sites, it wants to reduce the cost of the devices in the remote offices.

Ideally, the enterprise wants to operate those sites using modest resources on its devices. And since there are many sites, it wants the most cost-effective WAN solution.

As you may have observed, I have not mentioned anything technical so far.

All these requirements can be received from the business leader, perhaps the CIO or CTO of the company.

Let me translate these business requirements into technical terms.

The company has many sites, so it needs a scalable design.
The availability requirements are not tight.
The business's network physically fits a Hub and Spoke (star) topology.

So far, an MPLS L3 VPN service from a provider seems suitable for these requirements. Let's continue.

However, the business wants to operate its own WAN network.

Now we have eliminated the MPLS L3VPN option. If you get an L3 VPN from the provider, you can have multipoint-to-multipoint capability; however, you lose some control. This is because you are transferring the SLA and the risks to the service provider, and you depend on their performance and their control.

After understanding the architecture and the business requirements, translating those requirements into a technical solution is the design.

You can come up with many valid design alternatives.

But you should always propose the simplest solution.

The business considers its data highly confidential, so we need to encrypt it.

Based on the business requirements, IPsec over DMVPN would be a valid design.

DMVPN can be set up over leased lines, virtual leased lines, the Internet, and so on.

Since the availability requirement is not tight and the business wants the most cost-effective design, IPsec over DMVPN over the Internet is suitable.

The equipment choice is important, but not necessarily from the design point of view; in the CCDE context, that task is generally a CCDA-level engineer's job.

If you are lucky, you can tell your boss that it is not your job.

Which routing protocol would you choose? More importantly, do not forget that they have two data centers.

The architecture captures the applications and systems the business needs, and the interactions those systems have with each other at the conceptual level.

The designer will translate those requirements into technical requirements. After that, the designer will find the best technologies to meet them.

The CCIE, as an implementation and operations role, will translate these technical requirements and technologies into a low-level configuration state.

The designer doesn't configure NHRP, IPsec crypto, routing protocols, redistribution, area assignments, and so on.

The CCIE does not necessarily need to know whether EIGRP or OSPF would be the better option for the business. However, the CCIE needs to know how links are assigned to OSPF areas, how EIGRP stub is configured, and so on.
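As a small illustration of that configuration-level knowledge, assigning links to an OSPF area and marking a spoke as an EIGRP stub might look like the sketch below; the process numbers and addressing are made-up examples.

router ospf 1
 ! place interfaces in the branch LAN range into area 1
 network 192.0.2.0 0.0.0.255 area 1
!
router eigrp 100
 network 198.51.100.0
 ! advertise only connected and summary routes from this spoke
 eigrp stub connected summary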

 

You can watch my YouTube video on the CCIE vs. CCDE discussion below. Don't forget to subscribe to the channel to follow all my updates.

 

Cisco CCIE vs CCDE (video): https://www.youtube.com/watch?v=6ZMIjM0EVQU

 

 

What would be your design for the above business requirements?

To gain a solid understanding of SP networks, you can check my newly published "Service Provider Networks Design and Perspective" book.

by Orhan Ergun at November 13, 2019 08:14 PM

ipSpace.net Blog (Ivan Pepelnjak)

Can We Really Use Millions of VXLAN Segments?

One of my readers sent me a question along these lines…

VXLAN Network Identifier is 24 bits long, giving us 16 million separate segments. However, we have to map VNI into VLANs on most switches. How can we scale up to 16 million segments when we have run out of VLAN IDs? Can we create a separate VTEP on the same switch?

VXLAN is just an encapsulation format and does not imply any particular switch architecture. What really matters in this particular case is the implementation of the MAC forwarding table in switching ASIC.

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at November 13, 2019 07:31 AM

XKCD Comics

November 12, 2019

Networking Now (Juniper Blog)

The Power of a Threat-Aware Network

Juniper Connected Security is more than just a marketing catchphrase or a nice metaphorical basket where all of Juniper Networks' information security products can be placed. It is an information security strategy, one focused on the importance of deep network visibility, multiple points of enforcement throughout the network and interconnectivity between both networking and information security products. The expansion of SecIntel throughout Juniper's portfolio is a real-world example of this strategy in action. Bringing threat intelligence to network infrastructure with SecIntel provides customers with a threat-aware network, enabling their network infrastructure to act against attacks and help safeguard users and applications.

by SamanthaMadrid at November 12, 2019 05:46 PM

ipSpace.net Blog (Ivan Pepelnjak)

Stretched VLANs and Failing Firewall Clusters

After publishing the Disaster Recovery Faking, Take Two blog post (you might want to read that one before proceeding) I was severely reprimanded by several people with ties to virtualization vendors for blaming virtualization consultants when it was obvious the firewall clusters stretched across two data centers caused the total data center meltdown.

Let’s chase that elephant out of the room first. When you drive too fast on an icy road and crash into a tree who do you blame?

  • The person who told you it’s perfectly OK to do so;
  • The tire manufacturer who advertised how safe their tires were?
  • The tires for failing to ignore the laws of physics;
  • Yourself for listening to bad advice

For whatever reason some people love to blame the tires ;)

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at November 12, 2019 07:14 AM

November 11, 2019

ipSpace.net Blog (Ivan Pepelnjak)

Stretched Layer-2 Subnets in Azure

Last Thursday morning I found this gem in my Twitter feed (courtesy of Stefan de Kooter)

Greg Cusanza in #BRK3192 just announced #Azure Extended Network, for stretching Layer 2 subnets into Azure!

As I know a little bit about how networking works within Azure, and I’ve seen something very similar a few times in the past, I was able to figure out what’s really going on behind the scenes in a few seconds… and got reminded of an old Russian joke I found somewhere on Quora:

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at November 11, 2019 07:45 AM

XKCD Comics

November 08, 2019

The Networking Nerd

The Future of Hidden Features

 

You may have noticed last week that Ubiquiti added a new “feature” to their devices in a firmware update. According to this YouTube video from @TomLawrenceTech, Ubiquiti built a new service that contacts a URL to “phone home” and check in with their servers. It got some heavy discussion going, especially on Reddit.

The consensus is that Ubiquiti screwed up here by not informing people they were adding the feature up front and also not allowing users to opt-out initially. The support people at Ubiquiti even posted a quick workaround of blocking the URL at a perimeter firewall to prevent the communications until they could patch in the option to opt-out. If this was an isolated incident I could see some manner of outcry about it, but the fact of the matter is that companies are adding these hidden features more and more every day.

The first issue comes from the fact that most release notes for apps these days are nothing aside from platitudes. “Hey, we fixed some bugs and stuff so turn on automatic updates so you get the best version of our stuff!” is somewhat common now when it comes to a list of changes. That has a lot to do with application developers doing unannounced A/B testing with people. It’s not uncommon to have two identical version numbers of an app running two wildly different code releases or having dissimilar UIs just because some bean counter wants to know how well Feature X polls with a certain demographic.

Slipstreaming Stuff

While that’s all well and good for consumer applications, the trend is starting to seep into enterprise software as well. While we still get an exhaustive list of things that have been fixed in a release, we’re getting more than we bargained for on occasion as well. Doing a scrub of code to ensure that it actually fixes the bugs that you have in your environment is important for reliability purposes. When the code you’re trying to publish is of less-than-stellar quality, you have to spend more time ensuring that the stated fixes aren’t going to cause issues during implementation.

But what about when the stated features aren’t the only things included? Could you imagine the nightmare of installing a piece of software to fix an issue only to find out that something that was hidden in the code, like a completely undocumented feature, caused some other issue elsewhere? And when I say “undocumented feature” I’m not using it as a euphemism for another bug. I’m talking about a service that got installed that no one knows about. Like a piece of an app that will be enabled at a later time for instance.

Remember Microsoft back in the early 90s? They were invincible. They had everything they wanted in the palm of their hand. And then their world exploded. They got sued by the US government in an anti-trust case. One of the things that came out of this lawsuit was a focus on undocumented features in Microsoft software. The government said that Microsoft was adding features to their application software that gave them performance advantages over other vendors. And only they knew about them.

Microsoft agreed to get rid of all undocumented features, including some of the Easter eggs that were hidden in their software. That irritated some people that enjoyed finding these fun little gifts. But in 2005, there was a great blog post that discussed why Microsoft has a strict “no Easter eggs” policy now. And reading it almost 15 years later makes it sound so prescient.

Security Sounds Simple, Right?

Undocumented features are a security risk. If there is code in your app that isn’t executing or hasn’t been completely checked against the interactions in your environment you’re adding a huge amount of risk. There can be interactions that someone doesn’t understand or that you have no way of seeing. And you can better believe that if the regulators of your industry find them before you do you’re going to have a lot of explaining to do.

That’s even if you notice it in the first place. How many times have you been told to whitelist a specific URL or IP stack to make something work? I can remember being told to whitelist 17.0.0.0/8 for Apple to make their push notification service work. That’s an awful lot of IP addresses to enable push notifications! And that’s a lot of data that could be corrupted or misused by a smart threat actor.

The solution we need to fix the problem is actually pretty simple. We need to push back against the idea that we can slip undocumented features into updates. Release notes need to list everything that comes in the software package. We need to tell our vendors and companies that we have to have a full listing of the software features. And regulatory bodies need to be ready to share the blame when someone breaks those rules instead of punishing the people that had no idea there was something in the code that was misbehaving.


Tom’s Take

I’m not a fan of finding things I wasn’t expecting. At one point my wife and I had the latest Facebook app updates and somehow her UI looked radically different than mine. But if it’s on my phone or my home computer I don’t really have much to complain about. Finding undocumented apps and features in my enterprise software is a huge issue though. Security is paramount and undocumented code is an entry point to disaster. We are the only ones that can stem the tide by pushing back against this practice now before it becomes commonplace in the future.

by networkingnerd at November 08, 2019 04:50 PM

My Etherealmind
ipSpace.net Blog (Ivan Pepelnjak)
XKCD Comics

November 07, 2019

My Etherealmind
ipSpace.net Blog (Ivan Pepelnjak)

Explore the Content Outline of Our Networking in Public Clouds Online Course

A few days ago we published the content outline for our Networking in Public Clouds online course.

We’ll start with the basics, explore the ways to automate cloud deployments (after all, you wouldn’t want to repeat the past mistakes and configure everything with a GUI, would you?), touch on compute and storage infrastructure, and the focus on the networking aspects of public cloud deployments including:

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at November 07, 2019 10:04 AM

November 06, 2019

ipSpace.net Blog (Ivan Pepelnjak)

VMware NSX-T and Geneve Q&A

A Network Artist left a lengthy comment on my Brief History of VMware NSX blog post. He raised a number of interesting topics, so I decided to write my replies as a separate blog post.

Using Geneve is an interesting choice to be made and while the approach has it’s own Pros and Cons, I would like to stick to VXLAN if I were to recommend to someone for few good reasons.

The main reason I see for NSX-T using Geneve instead of VXLAN is the need for additional header fields to carry metadata around, and to implement Network Services Header (NSH) for east-west service insertion.

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at November 06, 2019 07:28 AM

Potaroo blog

Notes from OARC 31

DNS OARC held its 31st meeting in Austin, Texas on 31 October to 1 November. Here are some of my highlights from two full days of DNS presentations at this workshop.

November 06, 2019 02:00 AM

XKCD Comics

November 05, 2019

My Etherealmind

Meet Me in San Jose Next Week 3/4/5 Nov.

I will be flying into San Jose early for TFD20. Most likely this is the only time I will be in the area over the next six to twelve months. 

The post Meet Me in San Jose Next Week 3/4/5 Nov. appeared first on EtherealMind.

by Greg Ferro at November 05, 2019 06:18 PM

Packet Pushers
Networking Now (Juniper Blog)

The Power of Automation for Data Protection

In previous blogs, we discussed the importance of a strong data-protection program across known and understood data, but once the program has been completed, what’s next? Too often, the answer is ‘nothing’ or ‘very little’, with the resource-strapped security team typically needing to move on to the next project, leaving little time for the ongoing review and improvements required to keep data-protection measures up to date.

 

Having a strong and known security posture is important, but it’s equally important to maintain and update that posture. The battle against malware is ongoing and bad actors don’t stay still; they’re constantly looking for new weak spots and opportunities to break through defenses and gain access to valuable data.

 

 

by lpitt at November 05, 2019 02:00 PM

ipSpace.net Blog (Ivan Pepelnjak)

Executing a Jinja2 Loop for a Subset of Elements

Imagine you want to create a Jinja2 report that includes only a select subset of elements of a data structure… and want to have header, footer, and element count in that report.

Traditionally we’d solve that challenge with an extra variable, but as Jinja2 variables don’t survive loop termination, the code to do that in Jinja2 gets exceedingly convoluted.

Fortunately, Jinja2 provides a better way: using a conditional expression to select the elements you want to iterate over.

by Ivan Pepelnjak (noreply@blogger.com) at November 05, 2019 06:07 AM

November 04, 2019

Networking Now (Juniper Blog)

Juniper Named a Champion in the Info-Tech Research Group SIEM Customer Experience Report

The results of the March 2019 Info-Tech Research Group Security Incident and Event Management (SIEM) Customer Experience Report are in and we’re proud to share that Juniper Networks was named a Champion. These results validate our Juniper Connected Security solution, along with our focus on simplicity, interconnectivity and automation.

by Trevor_Pott at November 04, 2019 04:18 PM

My Etherealmind

Are Successful CEOs Just Lucky?

The research shows that, yes, it's mostly luck.

The post Are Successful CEOs Just Lucky? appeared first on EtherealMind.

by Greg Ferro at November 04, 2019 10:49 AM

ipSpace.net Blog (Ivan Pepelnjak)

Maybe It's Time We Start Appreciating Standards

A friend of mine sent me a short message including…

There is a number of products that recently arrived or are coming to market using group encryption systems for IP networks, but are (understandably) not using IPsec.

… which triggered an old itch of mine caused by the “We don’t need no IETF standards, code is king” stupidity.

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at November 04, 2019 08:12 AM

Potaroo blog

DNS Wars

The 77th NANOG meeting was held in Austin, Texas at the end of October, and they invited Farsight’s Paul Vixie to deliver a keynote presentation. These are my thoughts in response to his presentation; they are my interpretation of Paul’s talk, with more than a few of my opinions thrown in for good measure!

November 04, 2019 05:00 AM

XKCD Comics

November 03, 2019

ipSpace.net Blog (Ivan Pepelnjak)

SD-WAN Vendor Landscape

In the Three Paths of Enterprise IT part of Business Aspects of Networking webinar I covered the traditional networking vendor landscape. Let’s try to do the same for SD-WAN.

It’s clear that we have two types of SD-WAN vendors:

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at November 03, 2019 11:10 AM

November 01, 2019

The Networking Nerd

In Defense of Support

We’re all in IT. We’ve done our time in the trenches. We’ve…seen things, as Roy Batty might say. Things you wouldn’t believe. But in the end we all know the pain of trying to get support for something that we’re working on. And we know how painful that whole process can be. Yet, how is it that support is universally “bad” in our eyes?

One Of Us

Before we launch into this discussion, I’ll give you a bit of background on me. I did inbound tech support for Gateway Computers for about six months at the start of my career. So I wasn’t supporting enterprises to start with but I’ve been about as far down in the trenches as you can go. And that taught me a lot about the landscape of support.

The first thing you have to realize is that most Tier 1 support people are, in fact, not IT nerds. They don’t have a degree in troubleshooting OSPF, nor are they signatories to the Fibre Channel standards. They are generally regular people. They get a week or two of training and off they go. In general, the people on the other end of the support phone number are better phone people than they are technical people. The trainers at my old job told me, “We can teach someone to be technical. But it’s hard to find someone who is pleasant on the phone.”

I don’t necessarily agree with that statement but it seems to be the way that most companies staff their support lines. Why? Ask yourself a quick question: How many times has Tier 1 support solved your issue? For most of us the answer is probably “never” or “extremely rarely”. That’s because we, as IT professionals, have spent a large amount of our time and energy doing the job of a Tier 1 troubleshooter already. We pull device logs and look at errors and Google every message we can find until we hit a roadblock. Then it’s time to call. However, you’re already well past what Tier 1 can offer.

Look at it from the perspective of the company offering the support. If the majority of people calling me are already past the point of where my Tier 1 people can help them, why should I invest in those people? If all they’re going to do is take information down and relay the call to Tier 2 support how much knowledge do they really need? Why hire a rock star for a level of support that will never solve a problem?

In fact, this is basically the job of Tier 1 support. If it’s not a known issue, like an outage or a massive bug that is plastered up everywhere, they’re going to collect the basic information and get ready to pass the call to the next tier of support. Yes, it sucks to have to confirm serial numbers and warranty status with someone that knows less about VLANs than you do. But if you were in the shoes of the Tier 2 technician would you want to waste 10 minutes of the call verifying all that info yourself? Or would you rather put your talent to use to get the problem solved?

Think about all the channel partners and certification benefits that let you bypass Tier 1 support. They’re all focused on the idea that you know enough about the product to get to the next level to help you troubleshoot. But they’re also implying that you know the customer’s device is in warranty and that you need to pull log files and other data before you open a ticket so there’s no wasted time. But you still need to take the time to get the information to the right people, right? Would you want to start troubleshooting a problem with no log files and only a very basic description of the issue? And consider that half the people out there just put “Doesn’t Work” or “Acting Weird” for the problem instead of something more specific.

Sight Beyond Sight

This whole mess of gathering info ahead of time is one of the reasons why I’m excited about the coming of network telemetry and network automation. That’s because you now have access to all the data you need to send along to support and you don’t need to remember to collect it. If you’ve ever logged into a switch and run the show tech-support command you probably recall with horror the console spam that was spit out for minutes upon minutes. Worse yet, you probably remember that you forgot to redirect that spam to a file on the system to capture it all to send along with your ticket request.

Commands like show tech-support are the kinds of things that network telemetry is going to solve. They’re all-in-one monsters that provide way too much data to Tier 2 support techs. If the data they need is in there somewhere it’s better to capture it all and spend hours poring through a text file than talk to the customer about the specific issue, right?

Now, imagine the alternative. A system that notices a ticket has been created and pulls the appropriate log files and telemetry. The system adds the important data to the ticket and gets it ready to send to the vendor. Now, instead of needing to provide all the data to the Tier 1 folks you can instead open a ticket at the right level to get someone working on the problem. Mean Time to Resolution goes down and everyone is happy. Yes, that does mean that there are going to be fewer Tier 1 techs taking calls from people that don’t have a way to collect the data ahead of time but that’s the issue with automating a task away from someone. If the truth be told they would be able to find a different job doing the same thing in a different industry. Or they would see the value of learning how to troubleshoot and move up the ladder to Tier 2.

However, having telemetry gathered automatically still isn’t enough for some. The future is removing the people from the chain completely. If the system can gather telemetry and open tickets with the vendor what’s stopping it from doing the same proactively? We already have systems designed to do things like mail hard disk drives to customers when the storage array is degraded and a drive needs to be replaced. Why can’t we extend things like that to other parts of IT? What if Tier 2 support called you for a change and said, “Hey, we noticed your routing protocol has a big convergence issue in the Wyoming office. We think we have a fix so let’s remote in and fix it together.” Imagine that!


Tom’s Take

The future isn’t about making jobs go away. It’s about making everything more efficient. I don’t want to see Tier 1 support people lose their jobs but I also know that most people consider their jobs pointless already. No one wants to deal with the info takers and call routers. So we need to find a way to get the right information to the people that need it with a minimum of effort. That’s how you make support work for the IT people again. And when that day comes and technology has caught up to the way that we use support, perhaps we won’t need to defend them anymore.

by networkingnerd at November 01, 2019 04:32 PM

Networking Now (Juniper Blog)

Bait and Tackle: What Can Be Done About Phishing?

Last month, Juniper Threat Labs released research on a new Trojan-delivered malware named 'Masad Stealer’. This malware targets a messaging application to steal user data, including Cryptocurrency wallets, credit card information, discord data and more. The developers sell this malware “off the shelf”, so we’re likely to see it crop up again and again, but this does not make it a common form of attack. 

by lpitt at November 01, 2019 09:00 AM

ipSpace.net Blog (Ivan Pepelnjak)

Why Are You Always so Negative?

During the last Tech Field Day Extra @ CLEUR, one of the fellow delegates asked me about my opinion on technology X (don’t remember the details, it was probably one of those over-hyped four-letter technologies). As usual, I started explaining the drawbacks, and he quickly stopped me with a totally unexpected question: “Why do you always tend to be so negative?”

That question has been haunting me for months… and here are a few potential answers I came up with.

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at November 01, 2019 08:17 AM