AI in your Network

- Posted in Latest Technology


There's a scene in "Mars Attacks" where Pierce Brosnan, playing the scientist, dissects one of the dead Martians. He pulls some red jelly from the brain and says "Curious." The scene captures one of the problems with what we're calling AI today: like the jelly, the components of an AI can do amazing things, but you really can't look at them and say why.

The term Artificial Intelligence has changed its meaning many times over the last 50 years. It currently refers to systems that can do feature correlation and extraction from training data sets--often very large ones. For example, training a system to recognize a face (like your phone does) is an exercise in presenting exemplar data to the system along with reinforcing feedback when a face is displayed. This is called supervised training. After seeing enough faces and getting the green light for each one, the system learns the correlation and can provide the green light on its own when a new face is presented.


The systems and theory for this kind of AI have been around since the 1960s--even the 1950s. A well-known example is called a multi-layer perceptron, or neural network. The original objective was to imitate the way neurons interconnect and to reinforce pathways in the presence of specific stimuli. The neural network would be made of two or often three layers--an input layer, a hidden layer and an output layer. Inputs to a given layer would add or subtract from one another in accordance with weights (multipliers). The weights would be "learned" during the training process. They were the jelly in the Martian's brain.
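To make the weights a little less mysterious, here is a minimal sketch (in Python with NumPy, not any particular vendor's implementation) of a small perceptron's forward pass. The layer sizes and random weights are illustrative assumptions; in a real system the weights would come from training.

```python
import numpy as np

def sigmoid(x):
    """Classic squashing activation."""
    return 1.0 / (1.0 + np.exp(-x))

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass of a small input -> hidden -> output network.

    W1 and W2 are the learned weights (the 'jelly'): inputs to each layer
    are combined according to them and squashed into the next layer.
    """
    hidden = sigmoid(x @ W1 + b1)       # input layer  -> hidden layer
    return sigmoid(hidden @ W2 + b2)    # hidden layer -> output layer

# Toy sizes: 4 input features, 3 hidden units, 1 output ("is this a face?" score).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)
print(mlp_forward(rng.normal(size=(1, 4)), W1, b1, W2, b2))
```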

The neural network is an analog model of how neurons might work, and some analog implementations have been created. On a digital computer, however, it is represented by matrix arithmetic. Matrix arithmetic can often be parallelized so that multiple operations occur at the same time. This makes it fast--very fast--given suitable hardware.

In the last ten years, supercomputers have been pieced together, not from liquid-cooled Crays or hypercubes, but from very small-featured computing devices like field-programmable gate arrays and graphics cards. In fact, Nvidia--the company that makes some of the best graphics hardware--also powers many of the world's leading supercomputers. That's because Nvidia graphics cards are hyper-parallel, and they can be programmed to do AI just as well as to perform pixel-based ray tracing. Nvidia is currently building what will be Britain's most powerful supercomputer.

In the network business, you see "AI" being applied to network analytics, intrusion detection and diagnostics. These systems are the product of supervised and unsupervised training, and they look to correlate events and to recognize distinctive patterns--such as a network intruder. Part of the reason vendors are pushing the cloud so vigorously is that, in addition to being the customer, you are part of the product; your network experiences contribute to the training sets for "AI" analytics. They need you to participate in the cloud, too.

Given copious compute power, the quest to make AIs more capable has meant making them deeper--adding more layers, more sophisticated back-propagation and weight adjustment. "Deep learning" makes the AI more powerful, but it also makes it subject to pitfalls that are endemic to higher-order curve-fitting. That is, when the input is similar to the training data, the results can be excellent. In the face of unfamiliar input, deep AI can be wildly unpredictable. "Curious," as Pierce Brosnan would say. And it’s coming to your network.
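As a closing aside, here is a minimal sketch of that curve-fitting pitfall, using ordinary polynomials as a stand-in for a deep model; the data and polynomial degrees are illustrative assumptions.

```python
import numpy as np

# Fit the same noisy training data with a modest and a very flexible model,
# then ask both about input just outside the training range.
rng = np.random.default_rng(1)
x_train = np.linspace(0.0, 1.0, 10)
y_train = np.sin(2 * np.pi * x_train) + 0.1 * rng.normal(size=x_train.size)

low = np.polyfit(x_train, y_train, 3)    # low-order fit: decent, well behaved
high = np.polyfit(x_train, y_train, 9)   # high-order fit: near-perfect on training data

x_new = np.array([1.05, 1.10, 1.20])     # unfamiliar input, just past the data
print("degree 3:", np.polyval(low, x_new))
print("degree 9:", np.polyval(high, x_new))   # typically swings wildly -- "Curious."
```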

-Kevin Dowd

Why is Data Security Information so Noisy?

- Posted in Network Security


We’re always hoping for an easy score. But network traffic is the manifestation of intents; the traffic is there because someone or something has a goal. It might be exchanging email, sharing data or hacking you. In almost all cases, determining the goal by looking at the traffic requires a priori knowledge or assumptions about what the traffic means; it requires information that isn’t found in the traffic itself.

The object of a Security Event Manager (SEM) or an IDS/IPS is to derive knowledge from traffic data, and then reduce it to a score. The less knowledge in the process, the less valuable the score will be, which is the reason that administrators have to investigate false positives from network intelligence devices. Heuristics and signatures are two approaches to drawing knowledge from data. Anomaly detection pulls patterns from data with little increase in knowledge.

Heuristics

Let’s say that you own a restaurant, and it has a security system. You get notified that the front door is open. What can you reasonably infer?


To make the simple observation that there might be a break-in in progress, we apply heuristics to data; we have derived knowledge. In the end, we can say that someone is probably breaking in.
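As a toy sketch of that reasoning (the observations, weights and threshold are assumptions for illustration, not a real SEM's logic), a heuristic weighs several pieces of knowledge into a single score:

```python
def break_in_score(door_open, after_hours, motion_inside, keyholder_on_site):
    """Toy heuristic: weigh independent observations into one score.

    The weights and the 'probably a break-in' threshold are illustrative
    assumptions, not values from any real SEM.
    """
    score = 0
    if door_open:
        score += 40
    if after_hours:
        score += 30
    if motion_inside:
        score += 20
    if keyholder_on_site:
        score -= 60   # mitigating knowledge pulls the score back down
    return score

# Door open after hours, motion inside, and nobody is supposed to be there:
print(break_in_score(True, True, True, False))   # 90 -> probably a break-in
```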

Signatures

Signatures simplify event recognition for known patterns. They condense multivariate input—essentially the meat of heuristics—into a decision that creates the score. For the restaurant, a signature that said:

“the door is open and it is after hours”

would provide the same result as heuristics. Heuristics are more flexible, but signatures are efficient to process; they’re quick. Of all the possible ways to apply knowledge to complex events, signature recognition probably has the most going for it. But if a signature isn’t sufficiently specific, it can generate noise. Too specific and it might not fire.
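Here is a minimal sketch of the same decision expressed as a signature; the rule format and the business hours are illustrative assumptions, not any vendor's syntax.

```python
from datetime import time

def door_after_hours(event):
    """Signature: fires only on the exact pattern 'door open AND after hours'.

    Cheap and quick to evaluate, but its specificity is fixed: too loose and
    it is noisy, too tight and it never fires.
    """
    closing, opening = time(22, 0), time(6, 0)   # assumed business hours
    after_hours = event["time"] >= closing or event["time"] <= opening
    return event["type"] == "door_open" and after_hours

print(door_after_hours({"type": "door_open", "time": time(2, 30)}))   # True
print(door_after_hours({"type": "door_open", "time": time(14, 0)}))   # False
```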

Anomalies

Anomaly detectors watch patterns in traffic to see if they look different from training data. The more complex the input, the more complex the model, and the less reliable its approximation will be when presented with a novel situation; higher-order curve-fitting is prone to false positives by its nature.
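Here is a deliberately simple sketch of the idea: model "normal" from training samples and flag departures. The metric, the samples and the threshold are assumptions for illustration; real detectors model far more complex input.

```python
import statistics

def train_baseline(samples):
    """Learn 'normal' from training traffic (here, bytes per minute)."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag anything more than `threshold` standard deviations from the mean."""
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev

baseline = train_baseline([980, 1020, 1010, 995, 1005, 990])
print(is_anomalous(1000, baseline))   # False: looks like the training data
print(is_anomalous(4000, baseline))   # True: novel -- but not necessarily malicious
```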


The Semantic Gap

Semantic derivation is the process of increasing knowledge about the event. Semantic reduction is how we produce the score. When we combine anomaly detection, signatures and heuristics and semantically reduce them, the uncertainty in each carries through. This suggests that the more semantic reduction takes place in a SEM, the noisier the results will be.

What does provide reliable results? Simple metrics such as checksums on files and recognition of unplanned reboots unambiguously tell you that something significant has happened, albeit late. A SEM will highlight activity you would have otherwise missed. But one can never eliminate the noise; there are semantic gaps between what is happening on the network, what the SEM understands, and the indication you receive on the back end; you’re much smarter than your network intelligence tools can ever be.

Reducing the Semantic Gap

Newer, AI-based anomaly detection systems (trained and untrained) improve the opportunities for nuanced event detection, and also for more false positives. Improvements come from coupling the output to intent, thereby reducing the semantic gap. The MITRE ATT&CK knowledge base provides a working framework for this approach. If one views the events that a SEM collects as the manifestations of intent, one can understand a breach for what it is. For instance:

  1. There is anomalous traffic from an internal computer
  2. The computer makes an outbound connection (command and control)
  3. The computer is probing the internal network; the outbound connection remains active

The numbered events, taken together, show a pattern of activity indicating that this machine has been compromised. Each one of them would be reason enough to wake the security guy in the middle of the night. Recognizing what they mean in combination reduces the semantic gap, and gives the security guy (or an IDS/IPS) a higher quality assessment of the circumstances.
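A sketch of that combination, with hypothetical tactic labels in the spirit of MITRE ATT&CK (the labels, events and logic are illustrative, not Vectra's or any SEM's actual model):

```python
# Map individual detections to coarse tactic labels and escalate when they
# form a chain on the same host.
OBSERVED = [
    {"host": "10.0.5.23", "tactic": "anomalous-traffic"},
    {"host": "10.0.5.23", "tactic": "command-and-control"},
    {"host": "10.0.5.23", "tactic": "internal-recon"},
]

COMPROMISE_CHAIN = {"anomalous-traffic", "command-and-control", "internal-recon"}

def assess(events):
    """Group events per host; a full chain is far stronger evidence than any one alert."""
    by_host = {}
    for e in events:
        by_host.setdefault(e["host"], set()).add(e["tactic"])
    for host, tactics in by_host.items():
        if COMPROMISE_CHAIN <= tactics:
            yield host, "likely compromised (full chain observed)"
        else:
            yield host, "individual alerts only"

for host, verdict in assess(OBSERVED):
    print(host, "->", verdict)
```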

Vectra

Vectra, a company out of California, provides a product that combines AI with higher-level intelligence to reduce the semantic gap. It rides on top of network taps, within cloud deployments and even inside Office365. Vectra’s Network Detection and Response platform, Cognito, has been demonstrating superior results in red team testing. Recognizing the importance of reducing the semantic gap, Atlantic is helping bring Vectra to its customers. Visit vectra.ai for more information on the products or contact Atlantic Computing (www.atlantic.com).

Copyright © 2021, Atlantic Computing Technology Corporation

Aruba AOS10

- Posted in Aruba Network

Aruba's previous enterprise operating environments, AOS 6.5 and AOS 8, were controller-based. Controller-based access points are the product of a time when APs were radio heads, capturing and producing wireless packets and ferrying them to a central controller. Little data processing was done at the access point--particularly in tunnel mode. Radio management, authentication and encryption were all performed centrally, at the controller.

Because of the increasing complexity of wireless networking protocols, the increasing speeds of wireless connections, and the increasing capability of access points, it is becoming advantageous to let the AP perform all of the processing and bridge traffic to the network at wire speed.

This leaves controllers with the more modest role of configuration and reporting. Configuration and reporting are less demanding than wireless network termination, and require much less bandwidth. Accordingly, it is possible to place the management portal anywhere, including out on the Internet.

Under Aruba AOS10, each access point is a controller. It gets its configuration from Aruba AOS10 Central. It acts in tandem with its neighboring access points to create a seamless wireless experience.

[Figure: the components of an Aruba AOS10 network]

Access points (and switches) communicate with Aruba Central for configuration and logging. Each AP bridges traffic directly onto the network--untagged, via VLANs, or both. Each AP communicates with its neighbors as far as several hops away. This enables roaming and the forwarding of firewall state.

ClearPass, when in use, provides advanced authentication and security services, role-based access, network awareness and UEBA. ClearPass Policy Manager communicates with the access points directly, implementing RADIUS-based user access and Aruba firewall policies.

Controllers are not required, but they can be included in AOS10 for users who wish to have tunneled SSIDs or tunneled-node 802.1x-based switch port access. The benefits of tunneled traffic are that data traverse the network fully encrypted and that tunnels make it possible to extend access to remote layer-2 networks.

Central on Prem(ises) duplicates the cloud-based AOS10 Central management capability onsite. It is offered particularly for those enterprises that, by choice or regulation, prefer to manage the network from within their own network.

-Kevin Dowd

What is OFDMA, and how will it affect your WiFi?

- Posted in WAN/LAN

The capabilities of infrastructure WiFi reliably precede the capabilities of the devices that use it, including laptops and phones. The previous major standard for WiFi, 802.11ac, included mechanics for Multi-User MIMO, or MU-MIMO. It provided a way to send data to two clients at once by adjusting power on multiple antennas. The signal that reached the first client would be canceled at the second, and vice versa; one transmission, two different interpretations.

An access point that can craft a MU-MIMO transmission is functionally a supercomputer. The transmission is the product of matrix calculations that factor in the gains and the constructive/destructive interference experienced at each client. MU-MIMO is uber-cool, except that there are still very few clients for it (five years later), and the opportunity to employ it comes along only once in a while.

Access points you would buy today are based on the next standard, 802.11ax, or WiFi 6. MU-MIMO is still part of the mix, but there is a much more interesting multi-user capability in the standard, called Orthogonal Frequency-Division Multiple Access (OFDMA). It works by sharing the sub-carriers in a transmission between multiple client devices.

What are sub-carriers? At WiFi’s higher modulation rates, transmitted data are conveyed in multiple, bonded streams transmitted at neighboring frequencies. These are reassembled on receipt. Sub-carriers are orthogonal, meaning that the transmission of one does not interfere with the transmission of another. Sub-carriers provide a way to slice WiFi bandwidth into resilient pieces of modest width. Narrower bands can be demodulated and bonded more easily than if the whole channel were taken all at once. Fatal interference within a sub-carrier doesn’t necessarily ruin the transmission.

In WiFi 6, the sub-carriers can be shared so that some are destined for this client; some are for that client. This means that in one transmission, an access point can talk to multiple clients. That would be significant enough, but the real performance benefit comes from the elimination of overhead.

To make a single transmission, a modern access point has to perform channel assessment (to see if the air is busy). It has to insert guard intervals (dead air) to allow for response turnaround. And an access point has to contend with overlapping transmissions, back-off and retries. The overhead time associated with acquiring the channel can be much greater than the data transmission window. This can make the air-time efficiency of a very fast access point carrying typical client data about 10%. That’s low! By combining the data for multiple clients on multiple sub-carriers, the efficiency can increase dramatically. The same channel acquisition time is shared among multiple users. The problem, as ever, is that there are few OFDMA-capable clients as of yet.
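Here is a back-of-the-envelope sketch of why sharing that overhead matters. The overhead and payload times are assumed, illustrative numbers, not measurements:

```python
# Back-of-the-envelope airtime efficiency with assumed, illustrative numbers:
# a fixed per-transmission cost for channel acquisition, preamble and turnaround,
# plus a small payload per client.
overhead_us = 200.0            # assumed channel-acquisition + framing overhead
payload_us_per_client = 25.0   # assumed airtime for one client's small payload

def efficiency(clients_per_transmission):
    payload = clients_per_transmission * payload_us_per_client
    return payload / (overhead_us + payload)

for n in (1, 4, 9):
    print(f"{n} client(s) per transmission: {efficiency(n):.0%} of airtime is payload")
# 1 client : ~11% -- roughly the "about 10%" figure above
# 4 clients: ~33%
# 9 clients: ~53%
```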

-Kevin Dowd

The Death of the Heat Map

- Posted in Network Solution


Fifteen years ago, when organizations were just beginning to experiment with 802.11 wireless networks, the WiFi heat map was considered a good, if splotchy, way to show where wireless would be available, and how good it might be. WiFi can travel pretty far under the right conditions. Back then, we built networks for coverage, not necessarily capacity. So any signal was a good signal.

[Figure: an example WiFi heat map]

A few years later, we began building WiFi networks for capacity. The object was, and is, to provide good connections to a community of users across the whole campus. For a good experience, a client needs to associate with an access point that is nearby. Generally, the closer the AP, the better the signal and thus the higher the negotiated data rate. In short: nearby AP good; far-away AP bad.

Consider this, though: if every client is going to be near an AP, then every AP is probably going to be near other APs. That means that there will be overlapped WiFi. Here is an example of what we find in the air at a typical busy campus. This data, in fact, is associated with the heat map above:

[Figure: scan results listing the access points and radios heard at the measurement location]

The list shows that from the location where the measurement was taken, the client could hear 28 unique access points and 43 radios! Moreover, many of them were on the same channels. There were, in fact, so many APs in this space that the amount of bandwidth available for users was being cannibalized by WiFi management frames. This was particularly true in the 2.4 GHz band, where slow beacons, transmitted by every radio every 100 ms, could consume the lion’s share of the channel in a kind of death by a thousand cuts. So, in this case: pretty heat map; bad WiFi.
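The beacon arithmetic is easy to sketch. The frame size, preamble time and beacon interval below are typical, assumed values, not measurements from this campus:

```python
# Rough (illustrative) beacon-overhead arithmetic for a crowded 2.4 GHz channel.
# Assumptions: beacons sent at the 1 Mbps basic rate, ~300 bytes of frame plus
# ~192 microseconds of preamble, one beacon per radio every 100 ms.
beacon_bytes = 300
preamble_us = 192
rate_mbps = 1.0
interval_ms = 100

airtime_us = preamble_us + beacon_bytes * 8 / rate_mbps   # airtime per beacon
per_radio_share = airtime_us / (interval_ms * 1000)       # fraction of the channel

for radios_on_channel in (3, 10, 20):
    share = radios_on_channel * per_radio_share
    print(f"{radios_on_channel:2d} co-channel radios: ~{share:.0%} of airtime spent on beacons")
# With a couple dozen radios sharing a 2.4 GHz channel, beacons alone can
# approach half the channel -- the death by a thousand cuts described above.
```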

Dense WiFi networking depends on lots of APs running at low power, near their clients. The WiFi infrastructure can play many roles in encouraging clients to choose AP associations wisely. And the current WiFi standard, 802.11ax, provides extra features for managing overlapped APs. WiFi has advanced to reach blazing speeds and high densities in the last fifteen years, but the heat map doesn’t tell you much more than it did back then.

-Kevin Dowd

BSS Coloring in WiFi 6

- Posted in WAN/LAN


If you’ve ever used a two-way radio or walkie-talkie, you’ve probably had the experience where the person you’re listening to gets “stepped on” by somebody else’s transmission. You may have also noticed times when the person you’re listening to “steps on” someone else’s transmission, overpowering it.

WiFi networks have long dealt with the same issues, avoiding the “stepped on” transmission by performing channel assessments and signaling intent to use a channel. This coordination and cooperation even happens between WiFi networks that otherwise have no connection to one another. If the barber shop runs an AP on channel 36 and the tire store also runs on channel 36, the two are going to share the channel. The tire store AP has its clients; the barber shop AP has its clients. Their access points and clients will each listen for transmissions on channel 36, yielding access if the power of any transmission is above a modest -82 dBm.

In each case, the AP and its clients form a Basic Service Set, or BSS. To make better use of shared channels, WiFi 6 (802.11ax) introduces the notion of BSS coloring. The ‘color’ is a small integer in the transmission preamble. For the sake of our discussion, let’s say that the integers actually correspond to colors: the BSS of the AP and all of the WiFi clients in the barber shop is blue; the tire store BSS is red. BSS color makes it immediately possible for all WiFi devices to tell whether a transmission is meant for the tire store or the barber shop.


BSS coloring facilitates “stepping on” another BSS’s traffic. If the AP in the tire store (red) wishes to transmit while a device in the barber shop (blue) is talking, it can decide to broadcast over the ongoing transmission, even if the power is as high as -62 dBm. The reason this will work is that each BSS and its clients are in proximity to one another, and the interference caused by the neighbors is dynamically judged to be low enough to permit the simultaneous transmission to succeed. One channel; two transmissions.
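A simplified sketch of that deferral decision (the thresholds are the ones quoted above; the rest of the 802.11ax spatial-reuse rules are omitted for clarity):

```python
def may_transmit(heard_rssi_dbm, heard_color, my_color,
                 intra_bss_threshold=-82, inter_bss_threshold=-62):
    """Simplified WiFi 6 spatial-reuse decision (illustrative, not the full 802.11ax rules).

    Same color: treat the channel as busy at the conservative -82 dBm level.
    Different color: defer only if the overheard transmission is strong (-62 dBm or more).
    """
    if heard_color == my_color:
        return heard_rssi_dbm < intra_bss_threshold
    return heard_rssi_dbm < inter_bss_threshold

# Tire store (red) overhears the barber shop (blue) at -70 dBm:
print(may_transmit(-70, heard_color="blue", my_color="red"))   # True: transmit over it
# Tire store overhears its own client at -70 dBm:
print(may_transmit(-70, heard_color="red", my_color="red"))    # False: defer
```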

The benefit of BSS Coloring is that we can build denser WiFi networks with more channel overlap. BSS Coloring is one of the powerful new capabilities in WiFi 6.

-Kevin Dowd

Fun with DFS

- Posted in WAN/LAN

We have a customer who complained that every day, just before noon, users would lose their WiFi. The customer was located on a flight path for a nearby airport and military installation. As it turned out, an interim wireless firmware release changed their 5 GHz channel plan to include some Dynamic Frequency Selection (DFS) channels. These DFS channels are shared with aviation and weather radar with the proviso that if an access point detects radar on the same 5 GHz channel it serves, then it must abandon the channel. So for this customer, every day at the same time, an overhead flight knocked out part of their WiFi!

Of the twenty-two 20 MHz channels available for 5 GHz WiFi in the United States, thirteen are DFS channels. Because DFS channels are subject to abandonment, WiFi equipment ships with DFS channels disabled. Most WiFi systems apportion the remaining nine non-DFS channels among access points with limited contention, but there are situations where the DFS channels can solve big problems.

For example, we have a scholastic customer with some lightly built dormitories of wood frame and gypsum. The dorms are loaded with APs. Additionally, the dorms are situated in an open space, surrounded by many scholastic buildings and green-space WiFi. Standing next to the dormitories, one can ‘hear’ forty radios. The air is busy! Adding more APs offers diminishing returns as the infrastructure competes with itself for channel access; the nine non-DFS 5 GHz channels are oversubscribed.

In another case, a customer had some new LED lighting installed over the summer. In the fall, they complained that WiFi was intermittent on the 5 GHz band. We took measurements. The new lighting appeared to be blowing raspberries all over the unlicensed 5 GHz spectrum; whatever its communication method, the LED lighting wasn’t speaking 802.11 protocols, so the WiFi infrastructure couldn’t coordinate with it. The result was a bad WiFi experience.

In both cases, what we did was turn up some DFS channels. In the second case, we also turned down non-DFS channels. Here’s the method for enabling DFS: enable a few channels at a time. Doing it in pairs makes sense. Choose channels that can be bonded for 40 MHz. Then, watch the logs for a day or so. Look for channel abandonment events. If you don’t see any, move on to the next batch of channels. In the end, you will have an expanded collection of 5 GHz channels for your area and more available WiFi bandwidth.
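Here's a rough sketch of that staged rollout in Python-flavored pseudocode. The channel pairs are examples only (check your regulatory domain), and enable_channels() and radar_events_logged() are hypothetical placeholders for whatever your wireless controller actually exposes:

```python
import time

# Example candidate DFS pairs that bond to 40 MHz -- check your regulatory
# domain and your controller's supported list before using any of these.
CANDIDATE_PAIRS = [(52, 56), (60, 64), (100, 104), (108, 112)]

def enable_channels(pair):
    """Hypothetical placeholder: add the pair to the controller's allowed-channel list."""
    print(f"enabling channels {pair} ...")

def radar_events_logged(pair):
    """Hypothetical placeholder: check controller/AP logs for radar-detect or
    channel-abandonment events on these channels."""
    return False

enabled = []
for pair in CANDIDATE_PAIRS:
    enable_channels(pair)
    time.sleep(24 * 60 * 60)          # watch the logs for a day or so
    if radar_events_logged(pair):
        print(f"radar seen on {pair}; backing it out")
    else:
        enabled.append(pair)          # no abandonment events: keep it, move on

print("expanded 5 GHz channel plan:", enabled)
```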

We’re cowboys here, by the way. In our office, all we use are DFS channels. So far, no deleterious effects!

-Kevin Dowd
