Hello everyone. I hope you are well. After this discount news, I think you will feel even better.
Check the Products page for all the discounted products. You have many payment options and as soon as the transaction finishes you will get the download link.
You can download all resources!
Black Friday discounts will end by Friday night.
If you are considering learning network design, or want to pursue the CCDE or CCDP certifications, I highly recommend checking the Products page regularly. If you subscribe to the email list, you will be notified first.
I’m very happy to be attending the first edition of Hewlett-Packard Enterprise (HPE) Discover in London next week. I say the first edition because this is the first major event held since the cleaving of HP Inc from Hewlett-Packard Enterprise. I’m hopeful for some great things to come from this.
One of the most exciting things for me is seeing what HPE is doing with its networking business. With the recent news about OpenSwitch, HPE is trying to shift the way of thinking about a switch operating system in a big way. To quote my friend Chris Young:
Vendors today spend a lot of effort re-writing 80% of their code and focus on innovating on the 20% that makes them different. Imagine how much further we’d be if that 80% required no effort at all?
OpenSwitch has some great ideas, like pulling everything from Open vSwitch as a central system database. I would love to see more companies use this model going forward. It makes a lot of sense and can provide significant benefits. Time will tell if other vendors recognize this and start using portions of OpenSwitch in their projects. But for now it’s interesting to see what is possible when someone takes a leap of open-sourced faith.
I’m also excited to hear from Aruba, a Hewlett-Packard Enterprise company, and see what new additions they’ve made to their portfolio. The interplay between Aruba and the new HPE Networking will be interesting to follow. I have seen more engagement and discussion coming from HPE Networking now that Aruba has begun integrating itself into the organization. It’s exciting to have conversations with people involved in the vendor space about what they’re working on. I hope this trend continues with HPE in all areas and not just networking.
HPE is sailing into some very interesting waters. Splitting off the consumer side of the brand does allow the smaller organization to focus on the important things that enterprises need. This isn’t a divestiture. It’s cell mitosis. The behemoth that was HP needed to divide to survive.
I said a couple of weeks ago:
To which it was quickly pointed out that HPE is doing just that. I agree that their effort is impressive. But this is the first time that HP has tried to cut itself to pieces. IBM has done it over and over again. I would amend my original statement to say that no company will be IBM again, including IBM. What you and I think of today as IBM isn’t what Tom Watson built. It’s the remnants of IBM Global Services with some cloud practice acquisitions. The server and PC businesses that made IBM a household name are gone now.
The lesson for HPE, as it tries to find its identity in the post-cleaving world, is to remember what people liked about HP in the enterprise space and focus on keeping that goodwill going. Create a nucleus that allows the brand to continue to build and innovate in new and exciting ways without letting people forget what made you great in the first place.
I’m excited to see what HPE has in store for this Discover. There are no doubt going to be lots of product launches and other kinds of things to pique my interest about the direction the company is headed. I’m impressed so far with the changes and the focus back to what matters. I hope the momentum continues to grow into 2016 and the folks behind the wheel of the HPE ship know how to steer into the clear water of success. Here’s hoping for clear skies and calm seas ahead for the good ship Hewlett-Packard Enterprise!
Just after midnight local time on 22 November, saboteurs, presumably allied with Ukrainian nationalists, set off explosives knocking out power lines to the Crimean peninsula. At 21:29 UTC on 21 November (00:29 local time on 22 November), we observed numerous Internet outages affecting providers in Crimea and causing significant degradation of Internet connectivity in the disputed region.
With Crimean Tatar activists and Ukrainian nationalists currently blocking repair crews from restoring power, Crimea may be looking at as much as a month without electricity as the Ukrainian winter sets in. Perhaps more importantly, the incident could serve as a flash point spurring greater conflict between Ukraine and Russia.
The impacts can be seen in the MRTG traffic volume plot from the Crimea Internet Exchange — the drop-offs are noted with red arrows and followed by intermittent periods of partial connectivity.
The degree of service degradation varied by provider. Crimea’s Minister of Internal Policy, Information and Communications, Dmitry Polonsky, said that Krymtelekom was the only ISP still operational because it did not rely on power from Ukrainian territory. However, in the graphic on the left, we can see a significant reduction in the rate of completing traceroutes into Krymtelekom, suggesting considerable initial impact from the loss of power. In the graphic on the right, KerchNET, located on the eastern coast of Crimea, appears severely degraded due to the power issues.
Dependence on Mainland Ukraine
Recall that, following Russia’s annexation of Crimea from Ukraine in March, Prime Minister Dmitry Medvedev ordered the immediate construction of a new submarine cable across the Kerch Strait, one that would connect mainland Russia to the peninsula. We spotted and reported on the activation of the Kerch Strait cable in July of last year.
As illustrated in the maps below, the Crimean peninsula depends critically on the Ukrainian mainland for infrastructure services: power, water, gas and Internet — that was until the Kerch Strait Cable was activated, giving Crimea a new path to reach the global Internet.
Russia has been working on an alternative route for Crimean electricity through Kerch, much as the Kerch Strait cable provides a redundant path for Internet service. But that power cable is not planned to be operational until 22 December, almost a month away. The image below shows a darkened Crimea as viewed from space. This may be the picture of Crimea for days to come.
DMVPN Routing Considerations
Routing over DMVPN is probably the most important decision you will make in the VPN design.
Which routing protocol is suitable for your environment? EIGRP over DMVPN, OSPF over DMVPN, or BGP over DMVPN?
Let me just share some brief information about the routing protocol over the overlay tunnels in this post.
For large-scale DMVPN deployments, the best routing protocol is BGP or EIGRP.
What counts as large depends on your links, their stability, the redundancy design, how many routes the spokes carry, which DMVPN phase is in use, and so on.
If you have 20,000 routes in total behind all your spokes, then unless you are doing an SLB (server load balancing) design for your hub, only BGP can support that number of routes.
As I stated earlier in this post, IS-IS cannot be used with DMVPN since it doesn’t run on top of IP. So forget about it!
OSPF can be used but has serious design limitations with DMVPN.
Since OSPF is a link-state protocol, its operation doesn’t match DMVPN’s NBMA nature.
The OSPF point-to-multipoint network type is not supported over Phase 2, because with the P2MP network type the hub changes the next hop of the spoke routes to itself.
But in Phase 2, as I stated earlier in the DMVPN article, the hub must preserve the next hop of the spokes so that spoke-to-spoke direct tunnels can be built.
If you use OSPF over Phase 2, the only remaining options are the broadcast or non-broadcast network types.
Since you need to specify each unicast neighbor manually for OSPF Non-Broadcast, you lose the ease of configuration benefit of DMVPN.
Phase 3 removes the point-to-multipoint network type limitation of OSPF, but the problem remains that OSPF requires all the nodes in one area to keep the same link-state database.
If you design Multi-area OSPF, where would you put the ABR to limit the topology information?
If you put non-backbone areas on the LAN segments of the spokes and the backbone area on the tunnels, then Area 0 would still contain 2,000 spokes.
So a failure on one spoke, a link failure for example, would cause all the spokes and the hub to run a full SPF.
With EIGRP and BGP this wouldn’t be a problem, since both allow summarization at each node in the network.
OSPF allows inter-area summarization only on the ABR and external prefix summarization on the ASBR.
Even RIP can scale much better than OSPF in DMVPN networks. Long live RIP!
NHRP is used in all Phases.
In Phase 1, spokes don’t use NHRP for next-hop resolution, but they do use NHRP to register their underlay-to-overlay address mapping with the hub.
On-demand tunnels are created to pass data plane traffic, not the control plane!
If you have more than one hub, routing protocol neighborships are created between the hubs as well.
Spoke to spoke dynamic on demand tunnels are removed when traffic ceases.
Spoke to DMVPN HUB tunnels are always up.
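The registration and resolution behavior described above can be sketched as a toy model. Everything below, the class name, the addresses, the method names, is illustrative only, not any real NHRP implementation:

```python
# Toy model of NHRP registration and resolution in DMVPN.
# All names and addresses here are illustrative assumptions.

class NhrpHub:
    """Hub (NHS): keeps the NHRP cache built from spoke registrations."""
    def __init__(self):
        self.cache = {}  # overlay (tunnel) IP -> underlay (NBMA) IP

    def register(self, tunnel_ip, nbma_ip):
        # In every phase, each spoke registers its mapping with the hub
        self.cache[tunnel_ip] = nbma_ip

    def resolve(self, tunnel_ip):
        # In Phase 2/3, a spoke queries the hub for a peer's NBMA address
        # so it can build a direct spoke-to-spoke tunnel
        return self.cache.get(tunnel_ip)

hub = NhrpHub()
hub.register("10.0.0.11", "203.0.113.11")   # spoke1 registers
hub.register("10.0.0.12", "198.51.100.12")  # spoke2 registers

# spoke1 wants a direct tunnel to spoke2's overlay address:
print(hub.resolve("10.0.0.12"))  # -> 198.51.100.12
```

The point of the sketch is simply that the hub is a lookup service: registrations populate the cache, resolutions read it, and the data plane traffic then bypasses the hub entirely.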
DMVPN is very common in Enterprise networks. It is used as either a primary or a backup path.
Even if you enable QoS over the overlay tunnels, if you run DMVPN over the Internet, don’t forget that the underlay transport (the Internet) is still best effort!
If multi tenancy is necessary, VRF lite can be used with DMVPN.
With VRF-lite, MPLS is not needed to create an individual VPN. But VRF-lite has scalability problems.
For large-scale multi-tenant deployments, 2547oDMVPN is an architecture in which MPLS Layer 3 VPN runs over DMVPN networks.
What about you?
Are you using DMVPN in your network?
Which Phase is enabled?
Which routing protocol do you use on your DMVPN tunnels?
Are you using encryption?
Is it your primary or backup path?
Let’s discuss your design in the comment box below so everyone can benefit from your knowledge.
First, you need to remember MPLS-Traffic engineering operation.
MPLS-traffic engineering requires four steps, as shown below, for its operation.
In the diagram shown above, when the packet travels through R2, the IGP chooses the top path as the shortest path. This is because the cost from R2 to R5 through R3 is smaller than the cost from R2 to R5 through R6.
As you must have observed, the R2-R6-R7-R4 path is not used during this operation.
With MPLS-traffic engineering, both the top and bottom path can be used.
The top path is a high-latency, high-throughput path; as a result, it can be used for bulk data traffic.
The bottom path, on the other hand, is a low-latency, low-throughput, and expensive path; thus, it can be used for latency-sensitive traffic such as voice and video.
To complete this operation, we need to create two MPLS-traffic engineering tunnels: one tunnel for data and the other for voice traffic. After doing that, with the CBTS (class-based tunnel selection) option of MPLS TE, we can identify voice traffic and place it into the voice LSP (TE tunnel), and identify data traffic and place it into the data LSP (TE tunnel).
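The classify-then-steer decision above can be sketched in a few lines. The tunnel names and the DSCP values chosen here are hypothetical, just to illustrate the idea of mapping a traffic class to a TE tunnel:

```python
# Toy sketch of class-based tunnel selection: classify by DSCP marking
# and steer the packet into one of two TE tunnels.
# Tunnel names and DSCP set are illustrative assumptions.

LATENCY_SENSITIVE_DSCP = {46, 34}  # e.g. EF (voice) and AF41 (video)

def select_tunnel(dscp):
    """Pick the TE tunnel (LSP) for a packet based on its DSCP value."""
    if dscp in LATENCY_SENSITIVE_DSCP:
        return "Tunnel-Voice"   # low-latency bottom path
    return "Tunnel-Data"        # high-throughput top path

print(select_tunnel(46))  # -> Tunnel-Voice
print(select_tunnel(0))   # -> Tunnel-Data
```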
How can we achieve the Traffic Engineering operation with Segment Routing?
I have explained Node/Prefix SID in one of the previous sections.
Now, you know that Node/Prefix SID is assigned to the loopback addresses of all segment router enabled devices, and SID is unique in the routing domain.
Also, there is another SID type flooded with IGP packet.
Adjacency Segment ID
The Adjacency SID is unique on the local router, but unlike the Node/Prefix SID it is not globally unique.
When segment routing is enabled on a device, the router automatically allocates an Adjacency Segment ID to each of its adjacencies.
In the topology shown above, R2 allocates an Adjacency SID for its interface towards R6.
Label 22001 is the Adjacency SID of R2 towards R6, and it is used for steering traffic away from the shortest path (perhaps you do not want to use only the shortest path).
Label 16005 is the Node/Prefix SID of R5.
If the packet is sent from R1 to R5 with two SIDs, 22001 and 16005, R1 will send the packet to R2; R2, which allocated 22001 for its local adjacency, will pop it and send the remaining packet towards R6 with 16005, which is the Node/Prefix SID of R5.
R6 will send the packet to R7 because it is the shortest path to R5.
Node/Prefix SID is used in the shortest path routing, and it has ECMP capability.
What’s more, Adjacency SID is used in explicit path routing.
NOTE: While Adjacency SID is used for Explicit Path Routing, Node/Prefix SID follows the shortest path.
I will provide more examples so that you can understand how to use node and Adjacency SID to provide an explicit path for the traffic flows.
Our aim is to send traffic between router A and router J; however, we do not want to use E-G link.
In this operation, we will use the A-C-E-F-H-J path.
To achieve our aim, we need to reach E. After that, we will divert the traffic to the E-F link. Next, F will transfer the traffic to J, which is the final destination.
Router A should put three labels/Segment IDs on the packet.
The first SID, 16001, is the Node/Prefix SID of router E; it carries the packet to router E.
The second SID is 16002, which is the Adjacency SID for the E-F interface. This SID is locally significant to router E; router C does not need to act on it.
The third SID is 16003, which is the Node/Prefix SID of Router J.
Router C receives the packet with three SID, pops the 16001, and sends the remaining two labels to router E.
Router E receives the packet with 16002 SID, which is the Adjacency SID towards router F. Thus, router E pops it, and sends the remaining packet to router F.
Router F receives the packet with SID 16003, which is the Node/Prefix SID of router J.
So, router F follows the shortest path, sending the packet to router H and swapping 16003 for 16003 (leaving the label unchanged).
If router J advertises the implicit-null label, router H performs PHP: it pops 16003 and sends the plain IP packet to router J.
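The per-hop processing in this walk-through can be mimicked with a small simulation. The SID values and router letters come from the example above, but the data structures and forwarding function are my own simplified sketch (no real label encoding, and PHP is approximated by popping at the penultimate hop):

```python
# Toy simulation of the A-C-E-F-H-J segment routing example.
# SIDs/routers follow the text; the forwarding logic is a simplification.

NODE_SID = {16001: "E", 16003: "J"}   # globally unique Node/Prefix SIDs
ADJ_SID  = {("E", 16002): "F"}        # E's local Adjacency SID toward F

# Next hop along the IGP shortest path toward each node-SID destination
SHORTEST = {
    ("A", "E"): "C", ("C", "E"): "E",
    ("F", "J"): "H", ("H", "J"): "J",
}

def forward(router, stack):
    """Return (next_router, remaining_stack) for one forwarding step."""
    top = stack[0]
    if (router, top) in ADJ_SID:
        # Adjacency SID: pop it and send out that specific interface
        return ADJ_SID[(router, top)], stack[1:]
    dest = NODE_SID[top]
    next_hop = SHORTEST[(router, dest)]
    if next_hop == dest:
        # Penultimate hop: pop the node SID (implicit-null / PHP)
        return next_hop, stack[1:]
    return next_hop, stack  # otherwise forward along shortest path

router, stack = "A", [16001, 16002, 16003]
path = [router]
while stack:
    router, stack = forward(router, stack)
    path.append(router)
print(path)  # -> ['A', 'C', 'E', 'F', 'H', 'J']
```

Note how the E-G link is never touched: the node SID 16001 rides the shortest path to E, the adjacency SID 16002 forces the E-F hop, and 16003 takes over from F.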
If we want to carry out this operation using MPLS-TE, we can create an explicit path by providing an ERO (Explicit Route Object).
Also read: Segment Routing Fundamentals
In the "least surprising security breach" category, Pearson VUE got hacked and your personal details have been taken.
The post Pearson Gets Owned, Cisco Certifications Database Taken appeared first on EtherealMind.
There have been several articles talking about the death of Fibre Channel. This isn’t one of them. However, it is an article about “peak Fibre Channel”. I think, as a technology, Fibre Channel is in the process of (if it hasn’t already) peaking.
There’s a lot of technology in IT that doesn’t simply die. Instead, it grows, peaks, then slowly (or perhaps very slowly) fades. Consider Unix/RISC. The Unix/RISC market right now is a caretaker platform. Very few new projects are built on Unix/RISC. Typically a new Unix server is purchased to replace an existing but no-longer-supported Unix server to run an older application that we can’t or won’t move onto a more modern platform. The Unix market has been shrinking for over a decade (2004 was probably the year of Peak Unix), yet the market is still a multi-billion dollar revenue market. It’s just a (slowly) shrinking one.
I think that is what is happening to Fibre Channel, and it may have already started. It will become (or already is) a caretaker platform. It will run the workloads of yesterday (or rather the workloads that were designed yesterday), while the workloads of today and tomorrow have a vastly different set of requirements, and where Fibre Channel doesn’t make as much sense.
Why Fibre Channel Doesn’t Make Sense in the Cloud World
There are a few trends in storage that are working against Fibre Channel:
Cloudy With A Chance of Obsolescence
The transition to cloud-style operations isn’t great for Fibre Channel. First, we have the public cloud providers: Amazon AWS, Microsoft Azure, Rackspace, Google, etc. They tend not to use much Fibre Channel (if any at all), relying instead on IP-based storage or other solutions. And whatever Fibre Channel they do consume still means far fewer ports purchased (HBAs, switches) as workloads migrate to public cloud instead of private data centers.
The Ephemeral Data Center
In enterprise datacenters, most operations are what I would call traditional virtualization. And that is dominated by VMware’s vSphere. However, vSphere isn’t a private cloud. According to NIST, to be a private cloud you need to be self service, multi-tenant, programmable, dynamic, and show usage. That ain’t vSphere.
For VMware’s vSphere, I believe Fibre Channel is hands down the best storage platform. vSphere likes very static block storage, and Fibre Channel is great at providing that. Everything is configured by IT staff; a few things are automated, but Fibre Channel configurations are still done mostly by hand.
Probably the biggest difference between traditional virtualization (i.e. VMware vSphere) and private cloud is the self-service aspect. Developers, DevOps engineers, and other consumers of IT resources spin up and spin down their own resources, which makes for a very, very dynamic environment.
Endpoints are far more ephemeral, as demonstrated here by Mr Mittens.
Where we used to deal with virtual machines as everlasting constructs (pets), we’re moving to a more ephemeral model (cattle). In Netflix’s infrastructure, the average lifespan of a virtual machine is 36 hours. And compared to virtual machines, containers (such as Docker containers) tend to live for even shorter periods of time. All of this means a very dynamic environment, and that requires self-service portals and automation.
And one thing we’re not used to in the Fibre Channel world is a dynamic environment.
Virtual machines will need to attach to block storage on the fly, or they’ll rely on other types of storage, such as container images, retrieved from an object store, and run on a local file system. For these reasons, Fibre Channel is not usually a consideration for Docker, OpenStack (though there is work on Fibre Channel integration), and very dynamic, ephemeral workloads.
Block storage isn’t growing, at least not at the pace that object storage is. Object storage is becoming the de-facto way to store the deluge of unstructured data being generated. Object storage consumption is growing at 25% per year according to IDC, while traditional RAID revenues appear to be contracting.
Making it RAIN
In order to handle the immense scale necessary, storage is moving from RAID to RAIN. RAID is of course Redundant Array of Inexpensive Disks, and RAIN is Redundant Array of Inexpensive Nodes. RAID-based storage typically relies on controllers and shelves. This is a scale-up style approach. RAIN is a scale-out approach.
For these huge-scale storage requirements, RAIN-style systems such as Hadoop’s HDFS, Ceph, Swift, and ScaleIO handle the exponential increase in storage requirements better than traditional scale-up storage arrays. These technologies primarily use IP/Ethernet for node-to-node and node-to-client communication, not Fibre Channel. Fibre Channel is great at many-to-one communication (many initiators to a few storage arrays) but not at many-to-many meshing.
Ethernet and Fibre Channel
It’s been widely regarded in many circles that Fibre Channel is a higher performance protocol than, say, iSCSI. That was probably true in the days of 1 Gigabit Ethernet, but these days there’s not much of a difference between IP storage and Fibre Channel in terms of latency and IOPS. Provided you don’t saturate the link (neither eliminates congestion issues when you oversaturate a link), they’re about the same, as shown in several tests such as this one from NetApp and VMware.
Fibre Channel currently tops out at 16 Gigabit per second. Ethernet runs at 10, 40, and 100 Gigabit, though most server connections are currently 10 Gigabit, with some storage arrays at 40 Gigabit. In 2016, Fibre Channel is coming out with 32 Gigabit HBAs and switches, and Ethernet is coming out with 25 Gigabit interfaces and switches. They both provide nearly identical throughput.
But isn’t 32 Gigabit Fibre Channel faster than 25 Gigabit Ethernet? Yes, but barely.
Do what now?
32 Gigabit Fibre Channel isn’t really 32 Gigabit Fibre Channel. It actually runs at about 28 Gigabits per second. This is a holdover from the 8b/10b encoding in 1/2/4/8 Gigabit FC, where every gigabit of speed brought 100 MB/s of throughput (instead of 125 MB/s as in 1 Gigabit Ethernet). When FC switched to 64b/66b encoding for 16 Gigabit FC, it kept the 100 MB/s per gigabit, and as such lowered the actual speed: 16 Gigabit FC is really 14 Gigabit FC, and 32 Gigabit FC is really 28 Gigabit FC. This concept is outlined in a screencast I did a while back.
As a result, 32 Gigabit Fibre Channel is only about 2% faster than 25 Gigabit Ethernet. 128 Gigabit Fibre Channel (12800 MB/s) is only 2% faster than 100 Gigabit Ethernet (12500 MB/s).
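The arithmetic behind these percentages is simple enough to check. This sketch just encodes the 100 MB/s-per-gigabit Fibre Channel convention and the 125 MB/s-per-gigabit Ethernet figure quoted above:

```python
# Back-of-the-envelope check of the FC vs Ethernet throughput claims.
# FC keeps 100 MB/s of payload per "marketing gigabit" regardless of
# whether the line uses 8b/10b or 64b/66b encoding.

def fc_throughput_mbps(marketing_gbit):
    return marketing_gbit * 100   # MB/s per FC "gigabit"

def eth_throughput_mbps(gbit):
    return gbit * 125             # MB/s per Ethernet gigabit

fc32  = fc_throughput_mbps(32)    # 3200 MB/s
eth25 = eth_throughput_mbps(25)   # 3125 MB/s
print(f"32G FC vs 25G Ethernet: {fc32 / eth25 - 1:.1%} faster")  # -> 2.4% faster

fc128  = fc_throughput_mbps(128)  # 12800 MB/s
eth100 = eth_throughput_mbps(100) # 12500 MB/s
print(f"128G FC vs 100G Ethernet: {fc128 / eth100 - 1:.1%} faster")
```

Both comparisons work out to the same 2.4% edge for Fibre Channel, which is why the article rounds it to "about 2%".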
Ethernet/IP Is More Flexible
In the world of bare metal server to storage array, and virtualization hosts to storage array, Fibre Channel had a lot of advantages over Ethernet/IP. These advantages included a fairly easy to learn distributed access control system, a purpose-built network designed exclusively to carry storage traffic, and a separately operated fabric. But those advantages are turning into disadvantages in a more dynamic and scaled-out environment.
In terms of scaling, Fibre Channel has limits on how big a fabric can get. Typically it’s around 50 switches and a couple thousand endpoints. The theoretical maximums are higher (based on the 24-bit FC_ID address space), but both Brocade and Cisco have practical limits that are much lower. For the current (or past) generations of workloads, this wasn’t a big deal: endpoints typically numbered in the dozens, or possibly hundreds for large-scale deployments. But in a large OpenStack environment, it’s not unusual to have tens of thousands of virtual machines, and if those virtual machines need access to block storage, Fibre Channel probably isn’t the best choice. It’s going to be iSCSI or NFS. Plus, you can run it all on a good Ethernet fabric, so why spend money on extra Fibre Channel switches? And IP/Ethernet fabrics scale far beyond Fibre Channel fabrics.
Another issue is that Fibre Channel doesn’t play well with others. There are only two vendors that make Fibre Channel switches today, Cisco and Brocade (if you have a Fibre Channel switch that says another vendor made it, such as IBM, it’s actually a re-badged Brocade). There are ways around it in some cases (NPIV), though you still can’t mesh two vendors’ fabrics reliably.
Pictured: Fibre Channel Interoperability Mode
And personally, one of my biggest pet peeves regarding Fibre Channel is the lack of ability to create a LAG to a host. There’s no way to bond several links together to a host. It’s all individual links, which requires special configurations to make a storage array with many interfaces utilize them all (essentially you zone certain hosts).
None of these are issues with Ethernet. Ethernet vendors (for the most part) play well with others. You can build an Ethernet Layer 2 or Layer 3 fabric with multiple vendors, there are plenty of vendors that make a variety of Ethernet switches, and you can easily create a LAG/MCLAG to a host.
My name is MCLAG and my flows be distributed by a deterministic hash of a header value or combination of header values.
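The caption’s point, that a LAG pins each flow to one member link via a deterministic hash, can be sketched like this. The member names and the CRC32 hash are illustrative choices of mine; real switches use vendor-specific hash functions and field selections:

```python
# Minimal sketch of hash-based flow distribution over a LAG/MCLAG.
# Member link names and the CRC32 hash are illustrative assumptions.
import zlib

MEMBERS = ["eth1", "eth2", "eth3", "eth4"]  # hypothetical bundle members

def pick_member(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    """Hash the flow's 5-tuple to deterministically pick one member link."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return MEMBERS[zlib.crc32(key) % len(MEMBERS)]

# A given flow always hashes to the same link (so packets never reorder),
# while different flows spread across the bundle:
a = pick_member("10.0.0.1", "10.0.0.9", 49152, 3260)  # an iSCSI flow
b = pick_member("10.0.0.1", "10.0.0.9", 49152, 3260)  # same 5-tuple
assert a == b  # deterministic per flow
```

This is exactly the kind of host-facing link bonding that Fibre Channel lacks and Ethernet takes for granted.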
What About FCoE?
FCoE will share the fate of Fibre Channel. It has the same scaling, multi-node communication, multi-vendor interoperability, and dynamism problems as native Fibre Channel. Multi-hop FCoE never really caught on, as it didn’t end up being less expensive than Fibre Channel, and it tended to complicate operations, not simplify them. Single-hop/End-host FCoE, like the type used in Cisco’s popular UCS server system, will continue to be used in environments where blades need Fibre Channel connectivity. But again, I think that need has peaked, or will peak shortly.
Fibre Channel isn’t going anywhere anytime soon, just like Unix servers can still be found in many datacenters. But I think we’ve just about hit the peak. The workload requirements have shifted. It’s my belief that for the current/older generation of workloads (bare metal, traditional/pet virtualization), Fibre Channel is the best platform. But as we transition to the next generation of platforms and applications, the needs have changed and they don’t align very well with Fibre Channel’s strengths.
It’s an IP world now. We’re just forwarding packets in it.
I am very proud to announce that Daniel Lardeux, Johnny Britt and Mohammad Haddad passed the CCDE Practical exam yesterday and joined the CCDE club; the CCDE is one of the most respected IT certifications.
Their CCDE numbers will arrive in a couple of days.
See the existing Global List of the CCDEs, their companies and numbers here. If you are not in the list, have changed your company or want to be on the list, contact me.
Daniel and Mohammad joined my July class and Johnny used the CCDE Practical preparation bundle.
I would like to stress that four people from my class or using my preparation resources attempted the November 2015 CCDE Practical exam, and three of them passed! A 75% success rate is not a small thing for this certification.
Their common feedback is that they didn’t just learn the CCDE-related topics, but real-life network design as well.
Below are Daniel’s thoughts about my class:
I attended the CCDE Class in April of 2015, and it was exactly what I needed.
Orhan took the time to break down the different technologies. Very useful, even for everyday work, and really helpful.
Time spent at the CCDE Class was also very incisive, showing me how to attack the exam.
Thank you, Orhan, with whom I have been in contact throughout this quest. He always made himself available to answer any questions I had.
He was instrumental in my learning and in helping me prepare for the CCDE, which is one of the most rigorous network design exams in the industry.
Thanks again Orhan.
Senior Network Consultant at Post Telecom PSF
If you would like to gain the CCDE certification and also learn computer network design from the best, my recommendation is to get my CCDE preparation bundle, which consists of 60+ hours of videos and a CCDE practical workbook, and to attend the next CCDE class, which will be held in January.
Please note that there are many critical extra study resources that I recommend in my CCDE study book; that’s why it takes time to finish them before joining the class.
My videos are recorded from the class sessions, so you won’t see my face or the students’ questions. That’s why joining the online class is important, in my opinion. Also, a video that will help you focus better will be available in a month (it will show my pretty face!).
The purpose of the recorded class videos in the CCDE Practical bundle is to give you an idea about real life design and CCDE exam.
In addition, you will find our talk with Russ White, who regularly joins as a guest in my class to help the students, in those videos.
You can register for the next CCDE class now. Early registration will give you a discount. I always limit the seats so that I can interact with my students better; that’s why I had to reject so many people in the past.
And I am serious: I am not saying this for marketing; that’s someone else’s job! We are designers only, and if you want to be one too, send an email to firstname.lastname@example.org.
If you’re studying for the CCIE Data center v1.0 exam, it’ll be available until July 2016, after which time the recently announced CCIE DC v2.0 exam will take its place.
CCIE DC v2.0 will no longer include:
The following topics have been added to the CCIE DC v2.0 exam:
New technologies now covered by the CCIE Data center exam include ACI, LISP, EVPN, and VXLAN. Hardware also takes a more prominent role and includes the Cisco Nexus 9300, Nexus 5600, Nexus 2300 Fabric Extender, UCS 4300 M-Series Servers and the APIC cluster.
See the table below for a comparison between CCIE Data center v1.0 and v2.0:
A new Evolving Technologies domain will be added to the CCIE data center v2.0 exam. The section focuses on three subdomains: the Internet of Things, Network Programmability and the Cloud.
The format of the CCIE data center v2.0 lab exam is significantly different from the format of previous versions.
As part of a 60-minute diagnostic module (not included in the v1.0 exam), you will be provided with network topology diagrams, email threads, console output and logs, from which you have to find the root cause of a given issue, without device access.
This is very similar to CCDE Lab exam format, in which you’re also provided with network topology diagrams, email threads and different types of business and technical information. In this case, however, rather than troubleshooting, your task is to find the optimal design.
In order to pass the exam you must pass both the Diagnostic and the Troubleshooting & Configuration modules. In addition, the sum of the scores for both modules must be higher than the minimum required combined score.
CCIE Data Center Written and Lab Exam Content Updates:
You still have plenty of time to prepare for the new version. And, until July 22, 2016, you also have the option of taking the older version of the CCIE data center exam. CCIE Data center v2.0 will be available from July 25, 2016.
These dates also apply to the CCIE data center written exam.
If you’re not sure whether you should start studying for the CCIE Data center or the CCDE exam and you’d like to join our discussion, please feel free to add your thoughts and questions in the comments section.