Latest Posts

Welcome to Extreme Networks

Since my last post, I’ve had quite a life change. I recently accepted a position with Extreme Networks as a Senior Systems Engineer and moved from Northwest Indiana to North Carolina. How did that happen after almost eight years at my previous role and fresh into an interim position? Well, my wife and I had been thinking about relocating for quite some time. We had visited numerous warmer states within the last couple of years and ended up revisiting Raleigh, North Carolina quite a few times. The only thing stopping us from making the jump was a job, of course. However, I didn’t want to apply for just any IT job, so I spent quite some time searching and applying to specific positions. One of those happened to be with Extreme Networks. At Purdue Northwest, we were a legacy Enterasys customer who transitioned to Extreme Networks after its acquisition of Enterasys. I’d become very familiar with Extreme Networks products and had showcased how we used them at PNW numerous times. I loved working with Extreme products and thought it would be even better working for Extreme Networks.

I’m now two months into my new position and enjoying every minute of it. We’re settling into the area and are also looking for a local church home. The weirdest transition is that the kids are now on a year-round school schedule, but we’re getting used to it. I work from home and visit our main office once a week. My home lab is growing fast, and getting to meet new and potential Extreme Networks customers is fun. Purdue Northwest was great; however, I felt very comfortable leaving the University in the hands of some great folks. I know my previous team will do a fantastic job.

Now I’m focused more than ever on my passion for networking. I’m learning a great deal and working with some of the brightest minds in the networking community. My future posts will start diving back down the technical track, but I’ll make sure to share the culture side of things as well. I’d like to thank God first and foremost, along with my wife and family, for all the support. I’d also like to thank everyone else who helped me get to where I am.


What did I get myself into?

New challenges tend to surprise you sometimes. I was pleasantly surprised when I was recently asked to serve as the Interim Assistant Director of Information Security Services for Purdue University Northwest. I currently manage a team of seven full-time individuals and two student workers who make up the networking, infrastructure, and telecom team. The group isn’t that big, but I’d just found a great rhythm managing across the considerable breadth of IT services my team supports. The security team consists of a security engineer, an analyst, and a student worker. I’d done work with InfoSec before, but I gave myself some time to think about the opportunity.

One thing that helped me make a decision was that I have a fabulous team. I’ve always modeled letting others grow and empowered individuals to take on leadership responsibilities without micromanagement. In the past, I served as interim supervisor for the server administration team while we went through a merger in the middle of an Outlook and AD migration, which was lots of work but very successful. The networking team had also served in a security operations capacity until a dedicated security department formed two years ago. I believe these factors played into why I was asked to serve as interim. It was a great honor and opportunity to be asked to help serve others, and I also have a passion for teaching, so I accepted the position.

Then human nature kicked in, and I started to ask what I had gotten myself into. Information security is no joke, and there is lots of work to do, but I know I’ve surrounded myself with supportive individuals who will help along the way. It’s been about three weeks thus far. I’ve received lots of positive feedback and have a long list of goals to accomplish. However, my primary objective is to promote teaming and collaboration across the division and the Information Security Services team. We have lots of smart individuals, so together I know that we can accomplish any task. I look forward to diving back into InfoSec and plan to share the journey.

Extending Layer 2

extended bridge

Layer 2 Bridging

If you make your way into the world of networking, you’re bound to come across a decision path on how you should handle network expansion. Should your default method always be to extend or stretch your layer 2 bridge domain? The root of the answer can be found when discussing the why. Let’s take a look at some of the use cases I’ve come across within enterprise network environments:

Device requirement: Device “A” needs to communicate with device “B,” and those two devices are “required” to live on the same layer 2 broadcast domain. I haven’t come across any new devices or applications that fall into this category, and it’s 2018. However, some enterprise organizations may still have legacy devices, or poorly manufactured devices/applications with no foreseeable updates, that do.

Customer demand: A customer you service in area “A” needs network services expanded to area “B,” and they want their equipment to stay on the same subnet. Cough, cough, point-of-sale systems. I believe modern POS systems can talk via IP across different subnets, but this use case still comes up.

Data center disaster recovery: Or should I say “specific” DR models. I say “specific” because not all DC DR needs to be built around an absolute layer 2 extension requirement. Short-sighted apps will include layer 2 extension as a requirement: someone insists that a VM pinned to a specific IP move from region “A” to region “B” with the IP staying the same. What!?! Let’s think of better ways to do this: DNS, automated IP provisioning? Still, this remains a possible use case.

Ease of use: If you’re uncomfortable with routing protocols, it may seem easier to span a VLAN across the core of the network: less IP provisioning, fewer ACLs, potentially fewer firewall rules, and less management of those dreaded IP routing protocols. However, this is something we are in control of, so it’s OK to take time to research and learn which routing protocol would work best for your environment. Don’t let a lack of information drive your operation.

I can confirm that extending hundreds of VLANs through your core, along with multiple instances of STP and a sprinkle of HSRP, is NOT scalable. You will run into issues at some point. Others would say, “but my superior wants things done yesterday.” That’s another topic worth blogging about in the future, but hang in there.

You get the point. There are better ways to accomplish the listed use cases, but I understand that sometimes you may not be able to work with vendor X, customer Y, or technician Z to remove the need for layer 2 extension. Maybe your options are limited, but you’re a rock-star network admin/engineer, so can we design around the “end user requirements”? If you must, you have quite a few options to extend layer 2 through the use of overlays. There will be some added complexity, but overlays may be worth considering instead of spanning layer 2 segments across the core.

Over What?

Ok, so what’s this overlay stuff? Say you designed your network with proper layer 2 segmentation along with a layer 3 routing protocol. Everything is working great: your layer 2 fault domains are isolated through routing, you don’t have STP running across your core, and you’re taking advantage of multipath layer 3 routing. All is wonderful in the world. Then you get a “hard” requirement, maybe one listed above, to extend layer 2. Do you go back and span a VLAN through your core? No, overlays to the rescue! Overlays have been around for quite some time; think GRE, pseudowires, etc. Some of the latest overlays you may have heard of are VXLAN and EVPN. Basically, you encapsulate frames from one segment of your network and forward them across an existing layer 3 network. The traffic is de-encapsulated at an endpoint, and voilà, you’ve extended layer 2 across your layer 3 network. I know, easier said than done. There are plenty of resources out there on how to set up overlay protocols, so I won’t go into the details.
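To make the encapsulation idea concrete, here’s a minimal sketch of building a VXLAN header in Python using only the standard library. The layout follows RFC 7348 (an 8-byte header with a flag bit marking the VNI as valid and a 24-bit VNI); the function name and values are my own illustration, not any vendor’s implementation.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348.

    The I flag (0x08 in the first byte) marks the VNI as valid;
    the 24-bit VNI sits above a final reserved byte.
    """
    flags = 0x08 << 24          # I flag set, remaining bits reserved
    return struct.pack("!II", flags, vni << 8)

# A full encapsulated packet would be: outer Ethernet/IP/UDP
# (destination port 4789) + this header + the original inner frame.
hdr = vxlan_header(100)
print(hdr.hex())  # 0800000000006400
```

The inner frame rides untouched inside the outer layer 3 packet, which is why the endpoints can believe they share a broadcast domain while your core stays purely routed.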

Alternative Thinking

Now let’s say you want to build your network from the ground up with extension services in mind. That would give you a robust layer 2 transport natively built into your network. Extreme Networks has something called Fabric Connect, a technology acquired through its purchase of Avaya’s networking business. Fabric Connect is designed around Shortest Path Bridging MAC (SPBM) as the forwarding plane and IS-IS as the control plane. You forward traffic not by IP routes but by a service identifier, or I-SID. You can create a layer 2 virtual service network (VSN) that’s more “circuit” based. The core of your network becomes a Fabric Connect mesh, and from an operational perspective, you configure services at the edge. You no longer have to confine devices to only certain parts of your network. Extreme Networks’ claim is that you get something like MPLS (however different) without the complexity.

Fabric Connect gets me thinking about the Locator/ID Separation Protocol, or LISP, which focuses on separating a device’s location (think IP address) from its identity (think IP address again). If you split location and identity apart, you can create two namespaces: in LISP, those are the endpoint identifier (EID) and the routing locator (RLOC). What you then build is a mapping architecture for forwarding decisions, similar to DNS mapping a name to an IP. In fact, Cisco Campus Fabric uses LISP and VXLAN to create another overlay solution that allows client mobility across a network.

The next time you have the opportunity to design or redesign a network, take time to study the why before you implement the how. And most importantly have fun!


Netcraftsmen on Layer 2 (older, but still relevant to challenges enterprises face today)
Locator/ID Separation Protocol (LISP)
Fabric Connect – Extreme Networks

Peering into CHI-NOG 08


chinog-08 Javier Solis badge
I finally made it out to a CHI-NOG event, hosted by the Chicago Network Operators Group. Experienced network engineers and architects put the group together to focus on all things networking. The yearly events concentrate on vendor-neutral topics and encourage network enthusiasts within the Chicagoland region to attend. This year’s gathering had more than a dozen sessions and a lineup of excellent guest speakers. If you’re ever in the area and love networking with technology and people, I highly recommend you go. I attended quite a few of the sessions, but I’ll start with one of my favorites.

Rethinking BGP in the Data Center

Presented by Russ White

BGP, the chosen EGP of the Internet, has taken quite a hold in large-scale data centers at companies such as Facebook, Microsoft, LinkedIn, and Google. You can do all kinds of clever traffic engineering with BGP, but should it be the chosen IGP for data centers? The companies mentioned above are now looking into, or already deploying, other technologies such as OpenR, OpenFabric, and Firepath as BGP replacements. Russ challenged BGP deployment complexity and talked about delay and jitter being some of the most significant hurdles in the hyperscale arena. Flooding also becomes an issue, along with autoconfiguration of devices.

I think it’s important not to overcomplicate existing protocols to make them fit what we want. We need to become better engineers and try something different. That’s where white box switching and new protocols such as draft-white-openfabric come into play. White box switching allows for the deployment of newly developed routing protocols that are more appropriate for what we wish to accomplish. Automation is also critical for successful manageability. Russ talked about having a router or switch that you never configure or CLI into, a little tough to swallow for us network operators.

Closing Thoughts

I couldn’t help but think about wireless controllers. When’s the last time you SSH’d into your wireless access points? We couldn’t imagine going back to individually configuring access points; what a nightmare! Centralized, automated management for our switches and routers makes complete sense. Are we ready for the transition? The thought of what will happen to our existing jobs always comes up. However, I say we can transition into solving other problems we never had time for. Overall, CHI-NOG was an awesome experience. I have lots more notes, so hopefully I can come up with more material you’ll enjoy reading.

Why I’m so fascinated with white box switching

All the hype

White box switching seems to be all the networking hype. For some in-depth research, check out this Packet Pushers podcast about AT&T making its move into white box switching. Cisco has also committed to offering a version of IOS-XR decoupled from Cisco hardware, enabling its NOS to run on OCP (Open Compute Project) compliant hardware, aka “white box” switches. Fascinating stuff, but what’s the big deal? Well, I’m going to try to make a comparison.

A Lego comparison

I’m a huge adult fan of Lego (AFOL). I remember dumping old tin popcorn bins full of Legos all over my bedroom floor as a child. I’m more organized today, but I can’t help tearing down and building new creations. Now imagine you have an advanced Lego Technic set put together: gears that move, hinges that open and close, wheels that turn, etc. Now imagine all those connecting pieces glued together. A nightmare for those AFOLs who want to rebuild something special.

white box switch lego

Picture that glued-together Lego set as a networking switch or router. Sure, you can plug and unplug a few items, configure features within the CLI, and even get some sweet stats via SNMP. However, your switch or router’s underlying code is static; you can’t change it. You’re at the mercy of the vendor’s nicely glued-together product. I’m not suggesting that’s necessarily a bad thing, but you get where I’m going. With white box switching, you finally get to be a bit more creative with your switch or router. You can unload the default network operating system and load up something completely different. You’ve just expanded your imagination beyond one vendor and their fixed code.

A modular future

Maybe we’ll start to see advanced hardware modularity for white box switching as well. Need more processing power? Upgrade your CPU. Need more space for your NOS apps or massively large routing tables? Go ahead and add more RAM. Are you a Cisco or Cumulus fan? Who cares; you choose which NOS to run. Now you’re building like an AFOL. The possibilities for customization and flexibility are endless.

Lego 16 port white box switch

Extreme Networks EXOS on Nutanix CE

EXOS in Nutanix CE

Now that I have my Nutanix CE lab set up, I wanted to get some of my virtual network operating systems installed in my home lab. One of the NOSes I’ve been running is Extreme Networks’ virtual EXOS. My last EXOS VM lived in VirtualBox and ESXi. Extreme Networks has a GitHub page here with all the information you need to get started running the VM within a VirtualBox or ESXi environment.

Issue and Solution with Nutanix

Following the EXOS installation guide using the downloadable ISO and mimicking the VMware/VirtualBox VM settings within Nutanix CE wouldn’t work. I kept hitting an issue where the disk wasn’t correctly detected while Continue reading »

Hyperconverged infrastructure, a look at Nutanix

Sorting Out HCI

Today’s hyperconverged infrastructure (HCI) vendors have some exciting product offerings. HCI ultimately provides scalable, flexible storage coupled with compute resources. Recently at work, our KPI metrics started to show that some of our SAN hardware was having trouble keeping up with production workloads. So we looked at a few HCI vendors: SimpliVity, Nutanix, Pivot3, and VMware vSAN. After our initial investigation, it became clear that we weren’t quite ready to step into HCI just yet. Our project scope explicitly called for storage performance; at that juncture, additional compute wasn’t necessary and wasn’t budgeted for the project. Since HCI solutions couple storage with compute costs, we ended up investing in an all-flash SAN solution.

However, I was very intrigued by the different HCI platforms. What interested me the most was the ability to scale storage using x86-based systems. During our Nutanix research, I came across their community edition. I decided to load Nutanix CE on my home virtualization server and give it a whirl. There are lots of other great sites with information on how to get the initial setup going, so I’ll focus on some of my specific findings during my home lab testing.

NutanixCE prism login

Nutanix CE Single Node Setup

I started out with a Continue reading »

Transitioning to Management

My previous IT roles have revolved around the administration of different technologies, specifically networking. However, I’ve always been willing to perform other job functions as needed. That’s led me to learn all types of new things, such as tower climbing, billing, phone support, inventory tracking, training, and the list goes on. At my current employer, I started as a network administrator, moved into a network supervisor position within three years, then was asked to serve as an interim supervisor for another area through a merger. I’m now the supervisor of networking and infrastructure.

Transitioning from a network administrator to a supervisor isn’t always a breeze. When you’ve spent lots of time administering systems, you become ingrained in build, Continue reading »

Best of Breed

I recently heard the term “best of breed” used in a discussion of network vendor selection. I was surprised by this answer because you don’t hear it too often. The more I thought about it: why not “best of breed” selection? My time as a network and infrastructure supervisor has taught me that a data center environment can be full of different compute and storage vendor products. Our SAN environment consists of Pure, Tegile, EMC, and even QNAP, and each product has its place. Pure serves the VDI environment, Tegile and EMC host production, and QNAP serves as a target for our Veeam backups. The team has also categorized and carved each platform into tiered offerings.

On the other hand, network vendor selection tends to be biased. Typically you’ll see one network vendor selected for the edge/access, distribution, and core, although you will find a different wireless vendor from time to time.

Many reasons exist

A compilation of the most popular:

  • We would like to interact with only one vendor for purchases and support.
  • ABC vendor only works well with a particular management tool.
  • I only know vendor ABC, and we don’t have time to learn something new.
  • Did you hear that vendor ABC had an issue with XYZ product? I don’t want those problems.
  • Everyone else uses vendor ABC.
  • No other vendor supports my VOIP feature set.
  • You can’t do XYZ well or at all with any other vendor product.

I will say that there are a few use cases that keep you tied Continue reading »

A simple start to project management

Being able to track your work efficiently is a very useful skill. For years I completed my work but rarely tracked it in a Project Management Professional (PMP) sort of way. Sure, I’ve done the weekly reports, sticky notes, Outlook tasks, and Outlook calendar block scheduling, which are all useful. However, simple project management skills create a consistent and straightforward approach to managing time, resources, and tasks. I’ve seen organizations take everything from an all-guns-blazing PM approach to a nothing-at-all approach. Sometimes you’ll see IT subject matter experts resist PM with “I’m too busy” or “it takes too much time,” but in reality, basic project management is not that difficult.

project management board

Here’s an example of how you can model the behavior and start to implement some basic PM skills. I recently had a team reach out to discuss a new project that would require infrastructure resources. The team pulled up a draft diagram, and we began our dialog. I started asking probing questions, and the diagram began to transform. Once I was comfortable that I understood what we were trying to accomplish, I shared my screen with the team and opened OneNote. I began typing each major task that needed to be completed, and here’s Continue reading »
