Often, researchers and developers need to simulate various kinds of networks with software that would otherwise be hard to do with real devices. For example, some hardware can be hard to acquire, expensive to set up, or beyond the skills of the team to implement. When the underlying hardware isn't a concern but the essential functions it performs are, software can be a viable alternative.
NS-3 is a mature, open-source networking simulation library with contributions from the Lawrence Livermore National Laboratory, Google Summer of Code, and others. It has a high degree of capability to simulate various kinds of networks and user-end devices, and its Python-to-C++ bindings make it accessible for many developers.
In some cases, however, it is not enough to simulate a network. A simulation may need to test how data behaves in a simulated network (i.e., testing the integrity of User Datagram Protocol (UDP) traffic in a Wi-Fi network, how 5G data propagates across cell towers and user devices, etc.). NS-3 permits these kinds of simulations by piping data from tap interfaces (a feature of virtual network devices provided by the Linux kernel that pass ethernet frames to and from user space) into the running simulation.
This blog post presents a tutorial on how to transmit live data through an NS-3-simulated network with the added advantage of having the data-producing/data-receiving nodes be Docker containers. Finally, we use Docker Compose to automate complex setups and make repeatable simulations in seconds. Note: All the code for this project can be found in the GitHub repository linked at the end of this post.
Introduction to NS-3 Networking
NS-3 has a number of APIs (application programming interfaces) that let its simulations interact with the real world. One of these APIs is the TapBridge class, which is essentially a network bridge that allows network packets coming in from a process to become available to the NS-3 simulation environment. It does this by sending traffic arriving at a Linux tap device over to the NS-3 simulation. In the C++ code below, we can see how easy it is to use the TapBridge API:
// Create an ns-3 node
NodeContainer node;
node.Create(1);
// Create a channel that the node connects to
CsmaHelper csma;
NetDeviceContainer devices = csma.Install(node);
// Create an instance of a TapBridge
TapBridgeHelper tapBridge;
// Enable UseBridge mode, which has the user define the tap device it will
// connect to. There are more modes available that we won't discuss here.
tapBridge.SetAttribute("Mode", StringValue("UseBridge"));
// Here we define our tap device, which I named "mytap"
tapBridge.SetAttribute("DeviceName", StringValue("mytap"));
tapBridge.Install(node.Get(0));
The code above assumes that the user has created a named tap device (“mytap”) and that the TapBridge instance can connect to it.
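For reference, such a tap device can be created ahead of time on the host with iproute2 (the same pattern appears in the steps below):

ip tuntap add mytap mode tap
ip link set mytap promisc on
ip link set dev mytap up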
Since simulations commonly feature multiple users, we can envision each user as its own isolated node that produces and transmits data into the simulation. This scenario therefore fits well within the model of running multiple containers on the same host. A container is simply an isolated process with its dependencies separated from its surrounding environment, using special Linux kernel application programming interfaces (APIs) to accomplish this. The following diagram sketches out the setup I'd like to create for the first iteration of this tutorial:
Figure 1. Architecture of an NS-3 simulation with two containers passing real data through it.
Two containers are each running some kind of data-producing application. That data is broadcast through one of their network interfaces into the host running the NS-3 simulation using a bridge. This bridge glues together the container network with the tap device interfaces on the host by using veth (virtual ethernet) pairs. This configuration allows sending data to the listening node in the NS-3 simulation. This setup frees us from having to stand up multiple VMs or applications that share dependencies, and allows portability and maintainability when running NS-3 simulations across different machines.
The first iteration of this tutorial uses Linux Containers (LXC) to implement what is shown in the figure above, and closely follows what the NS-3 wiki already demonstrates, so I won't dwell too much on it.
LXC doesn't carry much overhead, making it relatively easy to understand, but it lacks a lot of the functionality you will find in container engines such as Docker or Podman. Let's quickly create the setup shown in the diagram above. To start, ensure NS-3 and LXC are installed on your system and that NS-3 is built.
1. Create the tap devices:
ip tuntap add tap-left mode tap
ip tuntap add tap-right mode tap
2. Bring up the taps in promiscuous mode (this mode tells the OS to listen to all network packets being sent, even those with a different destination MAC address):
ip link set tap-left promisc on
ip link set tap-right promisc on
3. Create the network bridges that will connect the containers to the tap devices:
ip link add name br-left type bridge
ip link add name br-right type bridge
ip link set dev br-left up
ip link set dev br-right up
4. Create the two containers that will ping each other:
lxc-create -n left -t download -f lxc-left.conf -- -d ubuntu -r focal -a amd64
lxc-create is the command to create containers but not to run them. We specify a name (-n) and a configuration file to use (-f), and use one of the pre-built templates (-t), similar to a Docker image. We specify the container to use the ubuntu distribution (-d), focal release (-r), in the amd64 architecture (-a). We run the same command for the “right” container.
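The contents of lxc-left.conf are not reproduced here; a minimal sketch, assuming the br-left bridge from step 3 and modern LXC configuration keys, might look like the following (lxc-right.conf would point at br-right):

# lxc-left.conf (illustrative): give the container a veth interface
# whose host end is attached to the br-left bridge
lxc.net.0.type = veth
lxc.net.0.link = br-left
lxc.net.0.flags = up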
5. Start the containers:
lxc-start left
lxc-start right
6. Attach to the containers and add an IP address to each (eth0 below is the container's default interface name):
(in a new shell)
lxc-attach left
#left> ip addr add 10.0.0.1/24 dev eth0
(in a new shell)
lxc-attach right
#right> ip addr add 10.0.0.2/24 dev eth0
Confirm that the IP addresses have been added using ip addr show.
7. Attach the tap devices to the previously made bridges (note: the containers will not be able to connect to each other until the simulation is started):
ip link set tap-left master br-left
ip link set tap-right master br-right
8. Start the NS-3 simulator with one of the example tap device programs that ship with NS-3:
./ns3 run ns-3/src/tap-bridge/examples/tap-csma-virtual-machine.cc
9. Attach to each container individually and ping the other container to confirm packets are flowing:
#lxc-left> ping 10.0.0.2
#lxc-right> ping 10.0.0.1
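If the pings do not go through, a quick sanity check is to list which interfaces are attached to each bridge and confirm that both a tap device and a container veth end show up:

ip link show master br-left
ip link show master br-right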
Connecting NS-3 to Docker
This bare-bones setup works well if you don't mind working with Linux containers and manual labor. However, most people don't use LXC directly, but instead use Docker or Podman. Developers often assume that the setup for Docker would be similar: create two Docker containers (left, right) with two Docker network bridges (br-left, br-right) connected to each other like so:
docker run -it --name left --network br-left ubuntu bash
docker run -it --name right --network br-right ubuntu bash
Then attach the tap devices to the network bridges' IDs (a network bridge ID can be retrieved by running ip link show):
ip link set tap-1 master br-***
ip link set tap-2 master br-***
This setup, unfortunately, does not work. Instead, we have to create a custom network namespace that acts on behalf of the container to connect to the host network interface. We can do this by connecting our custom network namespace to the container's ethernet network interface using a veth pair, then connecting our namespace to a tap device via a bridge.
1. To start, create custom bridges and tap devices as before. Then, allow the OS to forward ethernet frames to the newly created bridges:
sudo iptables -I FORWARD -m physdev --physdev-is-bridged -i br-left -p tcp -j ACCEPT
sudo iptables -I FORWARD -m physdev --physdev-is-bridged -i br-left -p arp -j ACCEPT
sudo iptables -I FORWARD -m physdev --physdev-is-bridged -i br-right -p tcp -j ACCEPT
sudo iptables -I FORWARD -m physdev --physdev-is-bridged -i br-right -p arp -j ACCEPT
2. Create the Docker containers and grab their process IDs (PIDs) for future use:
pid_left=$(docker inspect --format '{{ .State.Pid }}' left)
pid_right=$(docker inspect --format '{{ .State.Pid }}' right)
3. Create a new network namespace that will be symbolically linked to the first container (this sets us up to allow our changes to take effect on the container):
mkdir -p /var/run/netns
ln -s /proc/$pid_left/ns/net /var/run/netns/$pid_left
4. Create the veth pair to connect the container to the custom bridge:
ip link add internal-left type veth peer name external-left
ip link set internal-left master br-left
ip link set internal-left up
5. Assign an IP address and a MAC address:
ip link set external-left netns $pid_left
ip netns exec $pid_left ip link set dev external-left name eth0
ip netns exec $pid_left ip link set eth0 address 12:34:88:5D:61:BD
ip netns exec $pid_left ip link set eth0 up
ip netns exec $pid_left ip addr add 10.0.0.1/16 dev eth0
6. Repeat the same steps for the right container, bridge, and interfaces.
7. Head over to the containers and start them with a TTY console such as bash.
8. Finally, start the NS-3 simulation. Ping each container and watch those packets flow.
This setup works at Layer 2 of the OSI model, so it allows TCP, UDP, and HTTP traffic through. It is brittle, however: any time a container is stopped, its PID is thrown out, and the network namespace we made becomes useless. To reduce toil and make this process repeatable, it is better to use a script, such as the sketch below. Better yet, if there were a way to orchestrate multiple containers so that we can create an arbitrary number of them, with scripts that kick off these configurations and stop the running containers, we would have an incredibly useful and portable tool to run any kind of simulation using NS-3. We can take this process one step further using Docker Compose.
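A minimal sketch of such a script, assuming the left and right containers already exist (started, for example, with a hypothetical docker run -dit --name left --network none ubuntu bash) and that the bridges and taps were created as in step 1:

#!/usr/bin/env bash
# Sketch: wire one container into its bridge via a veth pair,
# repeating steps 2-5 above for each side. Run as root.
set -euo pipefail

wire_side() {
  local name=$1 bridge=$2 addr=$3
  local pid
  pid=$(docker inspect --format '{{ .State.Pid }}' "$name")

  # Expose the container's network namespace to `ip netns`
  mkdir -p /var/run/netns
  ln -sf "/proc/$pid/ns/net" "/var/run/netns/$pid"

  # veth pair: internal end enslaved to the bridge, external end
  # moved into the container and renamed eth0
  ip link add "internal-$name" type veth peer name "external-$name"
  ip link set "internal-$name" master "$bridge"
  ip link set "internal-$name" up
  ip link set "external-$name" netns "$pid"
  ip netns exec "$pid" ip link set dev "external-$name" name eth0
  ip netns exec "$pid" ip link set eth0 up
  ip netns exec "$pid" ip addr add "$addr" dev eth0
}

wire_side left br-left 10.0.0.1/16
wire_side right br-right 10.0.0.2/16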
Using Docker Compose to Automate Our Simulations
Let's take a step back and review our levels of abstraction. We have a simulation that is running a scenario with n number of containers, some sending and receiving messages and one that runs the simulation itself. One can imagine having more containers doing certain tasks, such as data collection and analysis. After the simulation ends, an output is produced, and all containers and interfaces are destroyed. The following schematic illustrates this approach:
Figure 2. Final Simulation Creation Flow
With this level of abstraction, we can think at a high level about what the needs of our simulation are. How many nodes do we want? What kind of network do we want to simulate? How will the data collection, logging, and processing take place? Defining these first and then going into the granular details later allows for easier conceptualization of the problem we are trying to solve, and also takes us to a level of thinking that gets closer to the problem itself.
To make this concrete, let's examine the following Docker Compose file in detail. It defines the simulation to be run as two devices (“left” and “right”) that communicate over a point-to-point connection.
For each user-end device (in this case, “left” and “right”) we define the OS it uses, the network mode it operates on, and an attribute to let us log into its shell once it is running.
“ns_3” uses a custom image that downloads, builds, and runs NS-3 along with the 5G-LENA package for simulating 5G networks. The image also copies a development file for NS-3 from the host environment into the container at the appropriate location, allowing NS-3 to build and link to it at runtime. To access kernel-level networking features, the NS-3 container is granted special permissions via “cap_add” to use tap device interfaces, and a network mode of “host” is used.
version: "3.8"
services:
  left:
    image: "ubuntu"
    container_name: left
    network_mode: "none"
    tty: true
    depends_on:
      - ns_3
  right:
    tty: true
    image: "ubuntu-net"
    container_name: right
    network_mode: "none"
    depends_on:
      - ns_3
      - left
  ns_3:
    image: "ns3-lena"
    container_name: ns-3
    network_mode: "host"
    volumes:
      - ${PWD}/src/tap-csma-scenario.cc:/usr/local/ns-allinone-3.37/ns-3.37/scratch/tap-csma-scenario.cc
    tty: true
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
The actual creation of Linux interfaces, attaching of bridges, etc. is done via a bash script, which runs this Docker Compose file in the process and thereafter runs the programs inside the nodes that pass data from one to another; a sketch of that flow follows. Once running, these containers can run any kind of data-producing/consuming applications while passing their traffic through a simulated NS-3 network.
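A minimal sketch of such an orchestration script, assuming the hypothetical wire_side helper from the earlier sketch is sourced and using simple pings as the data-producing application:

#!/usr/bin/env bash
# Sketch: bring up the Compose stack, wire the containers into the
# simulated network, generate some traffic, then tear everything down.
set -euo pipefail

docker compose up -d          # starts the left, right, and ns_3 containers

# recreate taps/bridges, then wire each container (see previous sketch)
wire_side left br-left 10.0.0.1/16
wire_side right br-right 10.0.0.2/16

# run a data-producing application inside each node
docker exec -d left ping -c 30 10.0.0.2
docker exec -d right ping -c 30 10.0.0.1

# once the scenario completes, destroy the containers and interfaces
docker compose down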
A New Approach to Automating NS-3 Simulations
I hope this tutorial gives you a new way to look at automating NS-3 simulations and shows how customizing some existing industry tools can yield new and highly useful programs.