Being a Systems Engineer, or really a “Sales” Engineer, I speak with lots of IT folks about products that Extreme Networks sells. Rarely do I have conversations around hardware features such as feeds and speeds or physical components. Most competitive networking vendors have access to the same commodity ASICs. In the networking and infrastructure world, software is King or Queen. Software is becoming the driving factor for change and is creating some healthy competition in the networking industry.
More specifically, what I spend time talking about is how software solutions can solve technical issues or enable the rapid deployment of new solutions, regardless of whether they’re hosted on-prem or in the cloud. There, I said it. It doesn’t matter where it lives. What matters is how it lives.
If you look around, you’ll find Cisco DevNet. It’s a fantastic resource that teaches you about open APIs, gives you access to Cisco sandboxes, and has tons of technical resources that focus on, you guessed it, SOFTWARE. Juniper hosts NRE Labs focusing on the Network Reliability Engineering model, which provides learning content around open source automation tools. Vendors continue to add open APIs not only to their hardware but to their software solutions too. Extreme Networks recently donated StackStorm to the Linux Foundation for continued growth by the open-source SOFTWARE community.
SDN with one controller to rule them all is dead. However, people still want to be able to customize traffic flows. We want to glue different products together using customized workflows. We want to automate those pesky CLI commands and drill-down GUI mouse clicks. In my mind, knowing Linux and open-source tools is necessary to elevate your IT career.
I’m not saying networking folks need to transition into programmers, because we still need knowledgeable networking people with real-world experience. However, it wouldn’t hurt to step up your software game. I challenge you to take three of the most frequent CLI commands or GUI click-paths you run through every day and automate them. You may learn something new, have some fun doing it, and become a software King or Queen.
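If you want a concrete starting point, here’s a minimal, hypothetical Python sketch. The sample output, interface names, and column layout are all made up for illustration, and how you capture the output (Netmiko, paramiko, or copy/paste) is up to you. It turns one of those daily eyeball-the-CLI checks, spotting err-disabled ports in a “show interfaces status” capture, into something you can run on demand:

```python
# Hypothetical sample of "show interfaces status" output, captured however
# you prefer. The port names and columns here are invented for the example.
SAMPLE_OUTPUT = """\
Gi1/0/1   uplink     connected    10   a-full a-1000
Gi1/0/2   printer    notconnect   20     auto   auto
Gi1/0/3   cam-04     err-disabled 30     auto   auto
"""

def find_errdisabled(show_output: str) -> list:
    """Return interface names whose status column reads err-disabled."""
    ports = []
    for line in show_output.splitlines():
        fields = line.split()
        # Assumes the status is the third whitespace-separated column.
        if len(fields) >= 3 and fields[2] == "err-disabled":
            ports.append(fields[0])
    return ports

print(find_errdisabled(SAMPLE_OUTPUT))  # ['Gi1/0/3']
```

It’s not glamorous, but scripts like this are exactly the kind of small win that gets you comfortable automating bigger tasks.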
You can find some of my original Aerohive AP121 blog posts here. Since then, Aerohive has been acquired by Extreme Networks. So I ended up activating an AP230 to the Aerohive HiveManager cloud and was also curious whether I could resurrect my old AP121. Lo and behold, the AP121 was still supported in HiveManager. I was able to get the AP serial number registered to my new account and added to my HiveManager cloud portal.
The AP121 showed up after a reset by pushing in the physical reset button on the AP for at least 15 seconds while powered on. However, I couldn’t manage the AP121, because the software needed updating according to the devices pane. I tried pushing new firmware from HiveManager but had no success. After digging around the Aerohive support pages and emailing technical support, I found that the AP121 required software version 6.5r3 at a minimum.
To check the current software version on the AP121, SSH into the access point. The default username should be admin, and the password is aerohive. The software version shown was 6.1r6 release build1779, so I had to get to 6.5r3 and load it.
I preferred the SCP option, which requires an SCP server. I used my MacBook after I enabled Remote Login sharing in the macOS Sharing preferences.
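For reference, the upgrade from the AP’s CLI looked roughly like the following. This is a sketch from memory, not a verbatim transcript: the prompt, server IP, username, and image filename are placeholders, and you should double-check the exact save image syntax against the HiveOS CLI reference for your version:

```
AP121#show version
(confirms the running HiveOS version, 6.1r6 in my case)
AP121#save image scp://user@192.168.1.10:/images/AP121-6.5r3.img.S
AP121#reboot
```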
The AP showed up in HiveManager with the new version, 6.5r3. Then I was able to push 6.5r12 (the latest supported software) through the HiveManager cloud portal.
One of my favorite features I found off the bat within HiveManager was the time-lapse feature. Here’s an example of clients moving from my AP230 over to the AP121.
I’ve been waiting for a feature like this for years. You’ll be able to watch clients move across different APs in a highly dense wireless deployment. It’s something I looked for in the past when working in a large higher-ed environment to determine new AP placement. You could potentially identify sticky-client issues more easily through this type of visual representation. Quite frankly, every wireless vendor that provides floor maps with APs showing current client statistics should have a time-lapse feature. I’m glad it’s in Aerohive, I mean Extreme Networks.
You’re probably thinking: why are we still talking about NAC? In my opinion, NAC is one of the best ways to apply dynamic assignment of access control and gain visibility into where devices are connected to the network in real time, in an agentless fashion. By the way, we networking folks hate agents. We don’t want to be in charge of one more application, especially if it’s deployed on thousands of machines.
I’ve come across situations where you may want to run Extreme Networks NAC with other vendors’ hardware (wired or wireless). Sometimes you don’t have the luxury of replacing all of your networking gear. Don’t get me wrong; Extreme hardware works fantastically with its own NAC and NMS software, along with its fantastic policy capabilities. However, it may be somewhat shocking to hear that Extreme’s solutions also work quite well with 3rd party vendor hardware. Yes, even Cisco.
The 3rd party hardware will require support for MAC/802.1X authentication. If you want to dynamically assign different access roles to end systems, the device also needs to support receiving RADIUS attributes to take action. Some things you can do with Cisco hardware in Extreme Control are assigning dynamic VLANs, web redirect, and per-user ACLs. I’ll demonstrate how you can apply web redirect, dynamic VLANs, and per-user ACLs to Cisco devices. You’ll also be able to force reauthentication on a Cisco switch using Extreme Control within Extreme Management Center.
Working in Extreme Control
When you add a device to an Extreme Control engine (RADIUS server), you can assign custom attributes. The following vendor-specific attributes (VSAs) will be sent based on a match of a profile group or when a MAC/802.1X auth hits the default profile rule:
Make sure you send a VLAN ID back to the Cisco device: edit the policy profile within Extreme Control so that your profile has the following settings, along with the custom fields set for devices that will require web redirect.
The %CUSTOM2% RADIUS attribute matches cisco-avpair=url-redirect=https://192.168.10.92/static/index.jsp
The custom2 attribute tells the Cisco switch what the web redirect URL is when a user is assigned to the Quarantine policy profile.
The %CUSTOM3% RADIUS attribute matches cisco-avpair=url-redirect-acl=Quarantine
The custom3 attribute tells the Cisco switch that the ACL named Quarantine needs to be matched in order to apply the redirect URL.
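Putting those pieces together, the RADIUS Access-Accept for a quarantined client would carry attributes shaped roughly like this. The VLAN ID of 99 is a placeholder; the standard RFC 3580 tunnel attributes carry the VLAN, and the two cisco-avpair values are the ones shown above:

```
Tunnel-Type = VLAN (13)
Tunnel-Medium-Type = IEEE-802 (6)
Tunnel-Private-Group-ID = "99"
cisco-avpair = "url-redirect=https://192.168.10.92/static/index.jsp"
cisco-avpair = "url-redirect-acl=Quarantine"
```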
Note: If you send the VLAN VSA attributes with a blank value, the Cisco device doesn’t like that and will not apply the dynamic ACLs, the web redirect URL, or the web redirect ACL. At least that’s what I found during my internal testing with a Cisco C3850.
Also, make sure you select the appropriate reauthentication type (RFC 3576 – Cisco Wired) to force a reauthentication of a device within Extreme Control.
For Extreme Control to send per-user ACLs, you need to build a policy within Extreme Management Center Policy Manager. Policy Manager works by defining policies with roles consisting of L2/L3/L4 rules. If you add a Cisco device to a policy domain and enforce policy, Extreme Control will recognize that the policies, roles, and rules need to be converted to Cisco-based ACLs that can be dynamically sent to the Cisco switch on a MAC/802.1X auth.
Using the following Cisco CLI command, you can see the specific attributes that are received from Extreme Control once the Cisco switch is set up for RADIUS network access authentication.
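On a Catalyst like the C3850 used here, that verification command is along these lines. The interface name is a placeholder, and on newer IOS-XE releases the equivalent show access-session form also works:

```
c3850#show authentication sessions interface GigabitEthernet1/0/10 details
```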
If you need assistance with configuring the Cisco switch for RADIUS authentication to Extreme Control, head to the Extreme Networks GitHub scripts page here and download the Cisco IOS authentication script. The only ACLs you’ll have to create on the Cisco device are the ones that match the web redirect VSA. There’s no need to create additional ACLs, since you’ll be sending the dynamic ACLs from the policy conversion.
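As a sketch of what that locally defined redirect ACL might look like, assuming a standard HTTP/HTTPS captive-redirect setup: the NAC engine IP matches the redirect URL from earlier, and on Cisco redirect ACLs the permit entries define the traffic that gets redirected while deny entries bypass redirection.

```
ip access-list extended Quarantine
 deny   ip any host 192.168.10.92
 permit tcp any any eq www
 permit tcp any any eq 443
```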
As a bonus, Extreme Management Center NAC can also integrate with firewall vendors, MDM solutions, and anti-virus software suites to dynamically assign access control. Again, check out the Extreme GitHub integrations page here for some examples of Checkpoint, IBM QRadar, and FortiGate integration. You can even create your own custom workflow of activity based on a NAC event, such as opening a ticket when a device is dynamically quarantined based on a set of events. The sky’s the limit.
I recently attended a CWNA course taught by none other than Devin Akin, wireless guru and co-founder of CWNP. During the course, I was reminded of how attenuation can become your best friend when building high-density Wi-Fi networks.
During my time working for a WISP years ago, we had a particular site that started to run into problems. This site had a six-sector Motorola Canopy setup running on the unlicensed 5 GHz ISM band at the top of a hospital building. The site provided 360 degrees of coverage utilizing six 60-degree sector APs, which had worked well for quite some time. These APs utilized a proprietary TDMA radio technology and were also GPS-sync’d to allow for efficient channel reuse. However, the APs had the potential to hear other 5 GHz devices, ones not part of the GPS-sync’d Motorola Canopy deployment, from any direction. One day the cluster of APs started to pick up significant interference from other competitive WISP deployments in the area using the same 5 GHz band. Signal-to-noise ratio dropped, and so did CPE performance. We came up with a solution: take each individual sector AP off the tripod at the center of the six-story building and mount each of the 60-degree sectors (orange cylinders in the picture) below the top edge of the outer building walls. Here’s a simple illustration:
This new setup allowed the building to provide attenuation from other 5 GHz interference sources (blue cylinders in the picture). After a spectrum analysis was performed on each AP, we verified that interference dropped significantly due to the building attenuation. SNR increased, and so did CPE performance.
The Motorola Canopy hardware (now Cambium Networks) does not use 802.11 protocols; however, it uses the same unlicensed frequency band and follows the same principles of RF propagation. In high-density Wi-Fi deployments, attenuation can become your best friend, just like the hospital building became ours once we relocated the sector APs. Attenuation from walls, wall thickness, and the number of walls RF propagates through can help reduce co-channel interference between access points AND clients that are reusing the same channel space in high-density Wi-Fi deployments.
An easy way to identify CCI/CCC that we discussed during the CWNA course was to fire up your favorite wireless tool, like WiFi Explorer Pro. Grab a laptop with a radio of similar spec to your AP’s (e.g., if your AP is 3×3, use a 3×3 client) and stand right underneath your AP. Here’s what a scan in my house looks like right next to my AP, reading a -16 on channel 36.
Identify how many other APs your laptop can hear that are on the same channel as the AP you’re standing by. In the example above, you can see that I can hear another AP using a primary channel of 36 at a -81. This AP, along with other nearby clients, could have the potential to cause co-channel contention. What you see may not be exactly what the AP hears, as every radio has variations in receive sensitivity, but it will help identify possible contention or interference.
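If your scanner can export its results, this quick check is easy to script as well. Here’s a hypothetical Python sketch; the SSIDs and the third entry are made up, while the -16 and -81 channel 36 readings come from the scan described above. It lists co-channel neighbors loud enough to matter:

```python
# Hypothetical scan export: each entry is one BSS the laptop can hear.
# Only the -16/-81 channel 36 readings reflect the real scan; the rest
# is invented for illustration.
scan = [
    {"ssid": "MyHomeAP",   "channel": 36, "rssi": -16},
    {"ssid": "NeighborAP", "channel": 36, "rssi": -81},
    {"ssid": "OtherAP",    "channel": 44, "rssi": -70},
]

def cochannel_neighbors(scan, my_ssid, floor=-85):
    """Return SSIDs on our AP's channel, louder than `floor` dBm."""
    mine = next(ap for ap in scan if ap["ssid"] == my_ssid)
    return [ap["ssid"] for ap in scan
            if ap["ssid"] != my_ssid
            and ap["channel"] == mine["channel"]
            and ap["rssi"] >= floor]

print(cochannel_neighbors(scan, "MyHomeAP"))  # ['NeighborAP']
```

The -85 dBm floor is an arbitrary cutoff for this sketch; pick a threshold that matches how conservative you want to be about contention.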
We should no longer build Wi-Fi for maximum distance in enterprise environments like we did years and years ago; we should now build for capacity and efficiency. So make sure you take advantage of those walls and other building obstacles when designing your next high-capacity Wi-Fi network.
Take a look at some of the following references to familiarize yourself with co-channel interference/contention:
I came across a scenario where a user had two data centers in different locations connecting back to the same ISP via BGP. These two data centers would be advertising a unique /24 at each site. However, the user also wanted to advertise the other DC’s /24, but not in an active state, for failover. Since the user was connecting back to the same provider AS, I decided to test using the BGP MED (Multi-Exit Discriminator) attribute to determine which /24 would be the preferred route from the provider end. The route with the lowest MED value takes priority.
We’re using Extreme Networks Summit series switches, so I tested the configuration on EXOS 22.6 using my EXOS virtual lab. I made sure to apply a lower MED value to the /24 I wanted to prioritize at each primary site and a higher MED value to the backup /24 at the opposite site.
On Summit series switches, you start with a policy file that matches the network address used in the BGP network statement. You can then apply a MED value to that match. EXOS uses vi when creating these policy files. Here are the commands:
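Here’s a sketch of what that looks like, based on my lab. The prefix, neighbor IP, and policy name are placeholders, so double-check the syntax against the EXOS routing policy documentation for your release:

```
# edit policy bgp-med          (opens bgp-med.pol in vi)
entry set_med {
    if match all {
        nlri 192.168.100.0/24;
    }
    then {
        med set 100;
    }
}
# apply the policy outbound toward the provider neighbor
configure bgp neighbor 10.0.0.1 route-policy out bgp-med
```

Use a lower med value (say, 100) on the /24 you want preferred at its primary site, and a higher value (say, 200) on the backup copy advertised from the opposite site.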
In my lab, I was using two different AS numbers representing each DC connected back to the same AS. Therefore, I also had to use the “enable bgp always-compare-med” command on the simulated provider-AS EXOS virtual switch, as MED values are not compared between routes advertised from different autonomous systems by default.
Of course, your provider has to be willing to accept MED. If not, you could also try prepending your AS number to the AS_Path. This is another way to manipulate which route is less preferred. However, this method is not always supported, as some providers ignore duplicate ASes in the AS_Path. The change for AS_Path is simple: just replace med set 100; with as-path “2020”; in the policy file. This example uses AS path 2020 and should be applied to the BGP network statement that serves as the backup route at the opposite DC location.
As a systems engineer for Extreme Networks, I like to get as much hands-on lab gear as I can within a reasonable budget. I have quite a large lab setup at home, as you can see.
One of my goals was to build something a bit more portable and powerful enough to run ESXi with a few VMs. I also like things that don’t take up too much power sitting idle. My test lab configurations usually consist of different virtual network operating systems such as Extreme Networks EXOS, as well as Extreme virtual Wi-Fi controllers, Extreme Control VMs, and a host of other VMs.
Since my last post, I’ve had quite a life change of events occur. I recently accepted a position with Extreme Networks as a Senior Systems Engineer and moved out of Northwest Indiana to North Carolina. How did that happen after almost eight years at my previous role and fresh into an interim position? Well, my wife and I had been thinking about relocating for quite some time. We had visited numerous warmer states within the last couple of years and ended up revisiting Raleigh, North Carolina quite a few times. The only thing stopping us from making the jump was a job, of course. However, I didn’t want to apply for just any IT job, so I spent quite some time searching and applying to specific positions. One of those happened to be with Extreme Networks. At Purdue Northwest, we were a legacy Enterasys customer who transitioned into Extreme Networks after the acquisition of Enterasys. I’d become very familiar with Extreme Networks products and had experience showcasing how we used Extreme Networks at PNW numerous times. I loved working with Extreme products and thought it would be even better working for Extreme Networks.
I’m now two months into my new position, and I’m enjoying every minute of it. We’re settling into the area and are also looking for a local church home. The weirdest transition is that the kids are now on a year-round schooling schedule, but we’re getting used to it. I also work from home and work out of our main office once a week. My home lab is growing fast, and getting to meet new and potential Extreme Networks customers is fun. Purdue Northwest was great. However, I felt very comfortable leaving the University in the hands of some great folks. I know my previous team will do a fantastic job.
Now I’m focused more than ever on my passion for networking. I’m learning a great deal, and I’m also working with some of the brightest minds in the networking community. My future posts will start diving back down the technical track, but I’ll make sure to share the culture side of things as well. I’d like to thank God first and foremost, along with my wife and family, for all the support. I’d also like to thank everyone else who helped me get to where I am.
New challenges tend to surprise you sometimes. I was pleasantly surprised when I was recently asked to serve as the Interim Assistant Director of Information Security Services for Purdue University Northwest. I currently manage a team of seven full-time individuals and two student workers that make up the networking, infrastructure, and telecom team. The group isn’t that big, but I’d just found a great rhythm managing across the considerable breadth of IT services my team supports. The security team consists of a security engineer, analyst, and a student worker. I’d done work with InfoSec before, but I gave myself some time to think about the opportunity.
One thing that helped me make a decision was that I have a fabulous team. I’ve always set a model of allowing others to grow and empowered individuals to take on leadership responsibilities without micromanagement. In the past, I served as interim for the server administration team while we went through a merger in the middle of an Outlook and AD migration, which was a lot of work but very successful. The networking team had also served in a security operations capacity until a dedicated security department formed two years ago. I believe these items factored into why I was asked to serve as interim. It was a great honor and opportunity to be asked to help serve others, and I also have a passion for teaching, so I accepted the position.
Then human nature kicked in, and I started to ask what I had gotten myself into when I accepted the position. Information security is no joke, and there was lots of work to do, but I know that I’ve surrounded myself with supportive individuals who will help along the way. It’s been about three weeks thus far. I’ve received lots of positive feedback and have a long list of goals to accomplish. However, my primary objective is to promote teaming and collaboration across the division and the Information Security Services team. We have lots of smart individuals, so together I know that we can accomplish any task. I look forward to diving back into InfoSec and plan to share the journey.
If you make your way into the world of networking, you’re bound to come across a decision path on how you should handle network expansion. Should your default method always be to extend or stretch your layer 2 bridge domain? The root of the answer can be found when discussing the why. Let’s take a look at some of the use cases I’ve come across within enterprise network environments:
Device requirement: Device “A” needs to communicate with device “B,” and those two devices are “required” to live on the same layer 2 broadcast domain. I haven’t come across any new devices or applications that fall into that spectrum, and it’s 2018. However, some enterprise organizations may still have legacy devices, or poorly manufactured devices/applications with no foreseeable updates, that fall into this category.
Customer demand: A customer you service in area “A” needs network services expanded to area “B.” They want their equipment to stay on the same subnet. Cough, cough, point-of-sale systems. I believe that modern POS systems can talk via IP across different subnets, but this can also be a possible use case that still comes up.
Data center disaster recovery: Or should I say “specific” DR models. I say “specific” because not all DC DR needs to be developed with an absolute layer 2 extension requirement. Specific apps that are short-sighted will include layer 2 extension as a requirement. Someone insists that a VM pinned to a specific IP move from region “A” to region “B” and that the IP stay the same. What!?! Let’s think of better ways to do this: DNS, automated IP provisioning? However, this can still be a possible use case.
Ease of use: Sometimes, if you’re uncomfortable with routing protocols, it may seem easier to span a VLAN across the core of the network. Less IP provisioning, fewer ACLs, potentially fewer firewall rules, and less management of those dreaded IP routing protocols. However, this is something we are in control of, so it’s OK to take time to research and learn which routing protocol would work best for your environment. Don’t let a lack of information drive your operation.
I can confirm that extending hundreds of VLANs through your core, along with multiple instances of STP and a sprinkle of HSRP, is NOT scalable. You will run into issues at some point. Others would say, “but my superior wants things done yesterday.” That’s another topic, which may be worth blogging about in the future, but hang in there.
I finally made it out to a CHI-NOG event, the Chicago Network Operators Group. Experienced network engineers and architects put the group together to focus on all things network related. The yearly events concentrate on vendor-neutral topics and encourage other network enthusiasts within the Chicagoland region to attend. This year’s gathering had more than a dozen sessions and a lineup with some excellent guest speakers. If you’re ever in the area and love networking with technology and people, I highly recommend you go. I attended quite a few of the sessions, but I’ll start with one of my favorites.
BGP, the chosen EGP of the Internet, has taken quite a hold in large-scale data centers at companies such as Facebook, Microsoft, LinkedIn, and Google. You can do all kinds of clever traffic engineering using BGP, but should it be the chosen IGP for data centers? The companies mentioned above are now looking into, or are already deploying, other technologies such as OpenR, OpenFabric, and Firepath as a BGP replacement. Russ challenged BGP deployment complexity and talked about some of the most significant hurdles being delay and jitter within the hyperscale arena. Flooding also becomes an issue, along with autoconfiguration of devices.
I think it’s important not to try to overcomplicate existing protocols to make them fit what we want. We need to become better engineers and try something different. That’s where white-box switching and new protocols such as draft-white-openfabric come into play. White box allows for the deployment of newly developed routing protocols that are more appropriate for what we wish to accomplish. Automation is also critical for successful manageability. Russ talked about having a router or switch that you never have to configure or CLI into, which is a little tough to swallow for us network operators.
I couldn’t help but think about wireless controllers. When’s the last time you ssh’d into your wireless access points? We couldn’t imagine going back to individually configuring access points, what a nightmare! Centralized automated management for our switches and routers makes complete sense. Are we ready for the transition? The thought of what will happen to our existing jobs always comes up. However, I say we can then transition into working on solving other problems that we never had time to tackle. Overall, CHI-NOG was an awesome experience. I have lots more notes, so hopefully I can come up with more content that you’ll enjoy reading.