
The Scenario

Our network scenario is described in the following script: src/scenario_basic.py. Mininet exposes a Python API that lets users easily automate processes or develop their own modules at their convenience. For this and many other reasons, Mininet is a highly flexible and powerful network emulation tool that is widely used by the scientific community. A minimal sketch of what such a script looks like is shown after the note below.

  • For more information about the API, see its manual.
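The actual contents of src/scenario_basic.py are not reproduced here; the sketch below is only meant to give a feel for the API (the hosts, switch and controller address are illustrative placeholders, not the values used in our scenario):

#!/usr/bin/env python3
# Minimal Mininet sketch: two hosts behind a single OpenFlow switch that is
# managed by a remote Ryu controller. Illustrative only, NOT scenario_basic.py.

from mininet.net import Mininet
from mininet.node import OVSKernelSwitch, RemoteController
from mininet.cli import CLI
from mininet.log import setLogLevel

setLogLevel('info')

net = Mininet(switch=OVSKernelSwitch)

# The controller lives in the other VM: placeholder IP, Ryu's default port.
net.addController('c0', controller=RemoteController, ip='192.168.33.10', port=6633)

# Hosts and switches are created programmatically...
h1 = net.addHost('h1')
h2 = net.addHost('h2')
s1 = net.addSwitch('s1')

# ... and wired together with links.
net.addLink(h1, s1)
net.addLink(h2, s1)

net.start()
CLI(net)   # drop into the Mininet CLI we will use later on
net.stop()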

The image above presents the logical scenario we will be working with. As with many other areas of networking, this logical picture doesn't correspond to the real implementation we are using. Throughout the installation procedure we have always talked about 2 VMs, and if you read carefully you'll see that the VMs' names are controller and mininet. So it should come as no surprise that the controller and the network itself live in different machines!

The first question that may arise is how on Earth we can logically join these 2 together. When working with virtualized environments we generate a virtual LAN in which each VM can communicate with the others. Once we stop thinking about programs and abstract the idea of a "process", we find that we can easily identify the controller, which is just a Ryu app (nothing more than a python3 app), by the controller VM's IP address and the port number Ryu is listening on. We shouldn't forget that any process running on any host on the entire Internet can be identified by the host's IP address and the process's port number. Isn't it amazing?
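As a quick, hedged illustration of that "IP address + port" idea (the address below is a placeholder for the controller VM's IP, and 6633 is ryu-manager's default OpenFlow listening port), the following snippet run from the mininet VM simply checks that something is listening at the controller's end:

import socket

# Any networked process is reachable as the pair (host IP address, port).
# The controller is therefore "identified" by the controller VM's address
# and the port ryu-manager listens on (6633 by default).
CONTROLLER = ('192.168.33.10', 6633)  # placeholder IP for the controller VM

with socket.create_connection(CONTROLLER, timeout=2):
    print('Ryu is up and reachable at %s:%d' % CONTROLLER)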

Ok, the above sounds great but... why should we let the controller live in a separate machine when we could have everything in a single machine and call it a day? We have our reasons:

  • Facilitate teamwork, since the AI's logic will go directly into the controller's VM. This lets us increase both working groups' independence: one may work on the Mininet core and the data collection with Telegraf whilst the other looks into the DDoS attack detection logic and visualization using Grafana and InfluxDB.

  • Facilitate the storage of data into InfluxDB from Telegraf, as due to the internal workings of Mininet there may be conflicts in the communication of said data. Mininet's basic low-level operation is detailed below.

  • Having two different environments relying on distinct tools and implementing different functionalities lets us identify and debug problems much faster: we know right away which piece of software is causing trouble!

Running the scenario

Running the scenario requires being logged into both VMs, either manually or through Vagrant's SSH wrapper. First of all we're going to power up the controller; to do so we run the following from the controller VM. It's an application that performs basic forwarding, which is just what we need:

ryu-manager ryu.app.simple_switch_13

You might prefer to run the controller in the background as it doesn't provide really meaningful information. In order to do so we'll run:

ryu-manager ryu.app.simple_switch_13 > /dev/null 2>&1 &

Let's break this big boy down:

  • > /dev/null redirects the stdout file descriptor to the file located at /dev/null. This is a "special" file in Linux systems that behaves pretty much like a black hole: anything you write to it just "disappears" 😮. This way we get rid of all the bloat caused by the network startup.

  • 2>&1 makes the stderr file descriptor point to wherever the stdout file descriptor is currently pointing (/dev/null). Terminal emulators usually have both stdout and stderr "going into" the terminal itself, so we need to redirect both to be sure we won't see any output.

  • & makes the process run in the background so that you'll be given a new prompt as soon as you run the command.

If you want to move the controller app back into the foreground (so that you can kill it with CTRL + C, for instance) you can run fg, which brings the last process sent to the background back to the foreground.

Once the controller is up we are going to run the network itself; to do so, launch the aforementioned script from the test machine:

sudo python3 scenario_basic.py

Notice how we have opened the Mininet CLI from the test machine. We can perform many actions from this command line interface; the most useful ones are detailed below.
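As a small teaser, these are a few standard Mininet CLI commands you can already try (they are part of Mininet's stock CLI, not something specific to our scenario):

# List every node in the topology
mininet> nodes

# Show how the nodes are linked together
mininet> net

# Run any command on a given host, just like we will do with ping below
mininet> h1 ifconfig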

Is it working properly?

We should have our scenario working as intended by now. We can check our network connectivity by pinging the hosts, for example:

mininet> h1 ping h3

# We can also ping each other with the pingall command
mininet> pingall

As you can see in the image above, there is full connectivity in our scenario. You may have noticed how the first ping takes way longer than the others to get back to us; that is, its RTT (Round Trip Time) is abnormally high. This is due to the currently empty ARP tables AND to the fact that we don't yet have a flow defined to handle ICMP traffic:

  • An ARP resolution between the sender and the receiver of the ping takes place so that the sender learns the next hop's MAC address.

  • In addition, the ICMP message (ping request) is forwarded to the controller so that it can decide what to do with the packet, as the switches don't yet have a flow to handle this traffic type. When the controller receives the packet it will instantiate a set of rules on the switches so that the ICMP messages are routed from one host to the other (see how to inspect these rules right after this list).
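If you want to see the rules the controller has pushed, you can dump a switch's flow table straight from the Mininet CLI. The line below is a hedged example: s1 is assumed to be one of the scenario's switch names, and the -O flag is needed because simple_switch_13 installs OpenFlow 1.3 flows:

mininet> sh ovs-ofctl -O OpenFlow13 dump-flows s1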


As you can see, the controller's stdout (please see the appendix to learn more about file descriptors) shows the rules it has been instantiating for the packets it has processed. In the end, for the first packet we have to tolerate a delay due to ARP resolution and to flow lookup and instantiation within the controller. The good news is that the rest of the packets will already have the destination MAC and the rules will already be instantiated in the intermediate switches, so the subsequent delay will be minimal.
