On this page:
9.1 Xen VMs
9.1.1 Controlling CPU and Memory
9.1.2 Controlling Disk Space
9.1.3 Setting HVM Mode
9.2 Docker Containers
9.2.1 Basic Examples
9.2.2 Disk Images
9.2.3 External Images
9.2.4 Dockerfiles
9.2.5 Augmented Disk Images
9.2.6 Remote Access
9.2.7 Console
9.2.8 ENTRYPOINT and CMD
9.2.9 Shared Containers
9.2.10 Privileged Containers
9.2.11 Remote Blockstores
9.2.12 Temporary Block Storage
9.2.13 DockerContainer Member Variables

9 Virtual Machines and Containers

A CloudLab virtual node is a virtual machine or container running on top of a regular operating system. CloudLab virtual nodes are based on the Xen hypervisor or on Docker containers. Both types of virtualization allow groups of processes to be isolated from each other while running on the same physical machine. CloudLab virtual nodes provide isolation of the filesystem, process, network, and account namespaces. Thus, each virtual node has its own private filesystem, process hierarchy, network interfaces and IP addresses, and set of users and groups. This level of virtualization allows unmodified applications to run as though they were on a real machine. Virtual network interfaces support an arbitrary number of virtual network links. These links may be individually shaped according to user-specified link parameters, and may be multiplexed over physical links or used to connect to virtual nodes within a single physical node.

There are a few specific differences between virtual and physical nodes. First, CloudLab physical nodes have a routable, public IPv4 address allowing direct remote access (unless the CloudLab installation has been configured to use unroutable control network IP addresses, which is very rare). Virtual nodes, however, are assigned control network IP addresses on a private network (typically the 172.16/12 subnet) and are remotely accessible over ssh via DNAT (destination network-address translation) from a high-numbered port on the physical host's public control-network IP address. Depending on local configuration, it may be possible to request routable IP addresses for specific virtual nodes to enable direct remote access. Note that virtual nodes are always able to access the public Internet via SNAT (source network-address translation; nearly identical to masquerading).
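
Where the local configuration does permit routable addresses, a VM can request one in geni-lib by setting its routable_control_ip member variable. A minimal sketch:

import geni.portal as portal
import geni.rspec.pg as rspec

request = portal.context.makeRequestRSpec()
node = request.XenVM("node")
# Ask for a publicly routable control-network address for this VM;
# whether one is granted depends on site policy and availability.
node.routable_control_ip = True
portal.context.printRequestRSpec()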

Second, virtual nodes and their virtual network interfaces are connected by virtual links built atop physical links and physical interfaces. The virtualization of a physical device/link decreases the fidelity of the network emulation. Moreover, several virtual links may share the same physical links via multiplexing. Individual links are isolated at layer 2, but they are not isolated in terms of performance. If you request a specific bandwidth for a given set of links, our resource mapper will ensure that if multiple virtual links are mapped to a single physical link, the sum of the bandwidths of the virtual links will not exceed the capacity of the physical link (unless you also specify that this constraint can be ignored by setting the best_effort link parameter to True). For example, no more than ten 1Gbps virtual links can be mapped to a 10Gbps physical link.
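
To illustrate, here is a minimal sketch that relaxes the bandwidth guarantee on a multiplexed link by setting best_effort (geni-lib bandwidths are in Kbps):

import geni.portal as portal
import geni.rspec.pg as rspec

request = portal.context.makeRequestRSpec()
node1 = request.XenVM("node1")
node2 = request.XenVM("node2")

link = request.Link("link")
link.addInterface(node1.addInterface("if1"))
link.addInterface(node2.addInterface("if1"))
# Ask for 1 Gbps, but let the mapper oversubscribe the physical link
# rather than fail if guaranteed capacity is unavailable.
link.bandwidth = 1000000
link.best_effort = True

portal.context.printRequestRSpec()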

Finally, when you allocate virtual nodes, you can specify the amount of CPU and RAM (and, for Xen VMs, virtual disk space) each node will be allocated. CloudLab’s resource assigner will not oversubscribe these quantities.

9.1 Xen VMs

These examples show the basics of allocating Xen VMs: a single Xen VM node, two Xen VMs in a LAN, and a Xen VM with a custom disk size. The sections below discuss advanced Xen VM allocation features.

9.1.1 Controlling CPU and Memory

You can control the number of cores and the amount of memory allocated to each VM by setting the cores and ram instance variables of a XenVM object, as shown in the following example:

"""An example of constructing a profile with a single Xen VM. Instructions: Wait for the profile instance to start, and then log in to the VM via the ssh port specified below. (Note that in this case, you will need to access the VM through a high port on the physical host, since we have not requested a public IP address for the VM itself.) """ import geni.portal as portal import geni.rspec.pg as rspec # Create a Request object to start building the RSpec. request = portal.context.makeRequestRSpec() # Create a XenVM node = request.XenVM("node") # Request a specific number of VCPUs. node.cores = 4 # Request a specific amount of memory (in GB). node.ram = 4096 # Print the RSpec to the enclosing page. portal.context.printRequestRSpec() """An example of constructing a profile with a single Xen VM. Instructions: Wait for the profile instance to start, and then log in to the VM via the ssh port specified below. (Note that in this case, you will need to access the VM through a high port on the physical host, since we have not requested a public IP address for the VM itself.) """ import geni.portal as portal import geni.rspec.pg as rspec # Create a Request object to start building the RSpec. request = portal.context.makeRequestRSpec() # Create a XenVM node = request.XenVM("node") # Request a specific number of VCPUs. node.cores = 4 # Request a specific amount of memory (in GB). node.ram = 4096 # Print the RSpec to the enclosing page. portal.context.printRequestRSpec()

9.1.2 Controlling Disk Space

Each Xen VM is given enough disk space to hold the requested image. Most CloudLab images are built with a 16 GB root partition, typically with about 25% of the disk space used by the operating system. If the remaining space is not enough for your needs, you can request additional disk space by setting a XEN_EXTRAFS node attribute, as shown in the following example.

"""An example of constructing a profile with a single Xen VM with extra fs space. Instructions: Wait for the profile instance to start, and then log in to the VM via the ssh port specified below. (Note that in this case, you will need to access the VM through a high port on the physical host, since we have not requested a public IP address for the VM itself.) """ import geni.portal as portal import geni.rspec.pg as rspec # Import Emulab-specific extensions so we can set node attributes. import geni.rspec.emulab as emulab # Create a Request object to start building the RSpec. request = portal.context.makeRequestRSpec() # Create a XenVM node = request.XenVM("node") # Set the XEN_EXTRAFS to request 8GB of extra space in the 4th partition. node.Attribute('XEN_EXTRAFS','8') # Print the RSpec to the enclosing page. portal.context.printRequestRSpec() """An example of constructing a profile with a single Xen VM with extra fs space. Instructions: Wait for the profile instance to start, and then log in to the VM via the ssh port specified below. (Note that in this case, you will need to access the VM through a high port on the physical host, since we have not requested a public IP address for the VM itself.) """ import geni.portal as portal import geni.rspec.pg as rspec # Import Emulab-specific extensions so we can set node attributes. import geni.rspec.emulab as emulab # Create a Request object to start building the RSpec. request = portal.context.makeRequestRSpec() # Create a XenVM node = request.XenVM("node") # Set the XEN_EXTRAFS to request 8GB of extra space in the 4th partition. node.Attribute('XEN_EXTRAFS','8') # Print the RSpec to the enclosing page. portal.context.printRequestRSpec()

The value of this attribute is in GB. As with CloudLab physical nodes, the extra disk space will appear in the fourth partition of your VM's disk. You can turn this extra space into a usable file system by logging into your VM and running:

mynode> sudo mkdir /dirname
mynode> sudo /usr/local/etc/emulab/mkextrafs.pl /dirname

where /dirname is the directory on which you want your newly formatted file system to be mounted.

"""An example of constructing a profile with a single Xen VM. Instructions: Wait for the profile instance to start, and then log in to the VM via the ssh port specified below. (Note that in this case, you will need to access the VM through a high port on the physical host, since we have not requested a public IP address for the VM itself.) """ import geni.portal as portal import geni.rspec.pg as rspec # Create a Request object to start building the RSpec. request = portal.context.makeRequestRSpec() # Create a XenVM node = request.XenVM("node") # Request a specific number of VCPUs. node.cores = 4 # Request a specific amount of memory (in GB). node.ram = 4096 # Print the RSpec to the enclosing page. portal.context.printRequestRSpec() """An example of constructing a profile with a single Xen VM. Instructions: Wait for the profile instance to start, and then log in to the VM via the ssh port specified below. (Note that in this case, you will need to access the VM through a high port on the physical host, since we have not requested a public IP address for the VM itself.) """ import geni.portal as portal import geni.rspec.pg as rspec # Create a Request object to start building the RSpec. request = portal.context.makeRequestRSpec() # Create a XenVM node = request.XenVM("node") # Request a specific number of VCPUs. node.cores = 4 # Request a specific amount of memory (in GB). node.ram = 4096 # Print the RSpec to the enclosing page. portal.context.printRequestRSpec()

9.1.3 Setting HVM Mode

By default, all Xen VMs are paravirtualized. If you need hardware virtualization instead, you must set a XEN_FORCE_HVM node attribute, as shown in this example:

"""An example of constructing a profile with a single Xen VM in HVM mode. Instructions: Wait for the profile instance to start, and then log in to the VM via the ssh port specified below. (Note that in this case, you will need to access the VM through a high port on the physical host, since we have not requested a public IP address for the VM itself.) """ import geni.portal as portal import geni.rspec.pg as rspec # Import Emulab-specific extensions so we can set node attributes. import geni.rspec.emulab as emulab # Create a Request object to start building the RSpec. request = portal.context.makeRequestRSpec() # Create a XenVM node = request.XenVM("node") # Set the XEN_FORCE_HVM custom node attribute to 1 to enable HVM mode: node.Attribute('XEN_FORCE_HVM','1') # Print the RSpec to the enclosing page. portal.context.printRequestRSpec() """An example of constructing a profile with a single Xen VM in HVM mode. Instructions: Wait for the profile instance to start, and then log in to the VM via the ssh port specified below. (Note that in this case, you will need to access the VM through a high port on the physical host, since we have not requested a public IP address for the VM itself.) """ import geni.portal as portal import geni.rspec.pg as rspec # Import Emulab-specific extensions so we can set node attributes. import geni.rspec.emulab as emulab # Create a Request object to start building the RSpec. request = portal.context.makeRequestRSpec() # Create a XenVM node = request.XenVM("node") # Set the XEN_FORCE_HVM custom node attribute to 1 to enable HVM mode: node.Attribute('XEN_FORCE_HVM','1') # Print the RSpec to the enclosing page. portal.context.printRequestRSpec()

You can set this attribute only for dedicated-mode VMs. Shared VMs are available only in paravirtualized mode.

9.2 Docker Containers

CloudLab supports experiments that use Docker containers as virtual nodes. In this section, we first describe how to build simple profiles that create Docker containers, and then demonstrate more advanced features. The CloudLab-Docker container integration has been designed to enable easy image onboarding, and to allow users to continue to work naturally with the standard Docker API or CLI. However, because CloudLab is itself an orchestration engine, it does not support any of the Docker orchestration tools or platforms, such as Docker Swarm.

You can request a CloudLab Docker container in a geni-lib script like this:

import geni.portal as portal
import geni.rspec.pg as rspec
request = portal.context.makeRequestRSpec()
node = request.DockerContainer("node")

You can use the returned node object (a DockerContainer instance) similarly to other kinds of node objects, like RawPC or XenVM. However, Docker nodes have several custom member variables you can set to control their behavior and Docker-specific features. We demonstrate the usage of these member variables in the following subsections and summarize them at the end of this section.

9.2.1 Basic Examples

"""An example of constructing a profile with a single Docker container. Instructions: Wait for the profile instance to start, and then log in to the container via the ssh port specified below. By default, your container will run a standard Ubuntu image with the Emulab software preinstalled. """ import geni.portal as portal import geni.rspec.pg as rspec # Create a Request object to start building the RSpec. request = portal.context.makeRequestRSpec() # Create a Docker container. node = request.DockerContainer("node") # Request a container hosted on a shared container host; you will not # have access to the underlying physical host, and your container will # not be privileged. Note that if there are no shared hosts available, # your experiment will be assigned a physical machine to host your container. node.exclusive = True # Print the RSpec to the enclosing page. portal.context.printRequestRSpec() """An example of constructing a profile with a single Docker container. Instructions: Wait for the profile instance to start, and then log in to the container via the ssh port specified below. By default, your container will run a standard Ubuntu image with the Emulab software preinstalled. """ import geni.portal as portal import geni.rspec.pg as rspec # Create a Request object to start building the RSpec. request = portal.context.makeRequestRSpec() # Create a Docker container. node = request.DockerContainer("node") # Request a container hosted on a shared container host; you will not # have access to the underlying physical host, and your container will # not be privileged. Note that if there are no shared hosts available, # your experiment will be assigned a physical machine to host your container. node.exclusive = True # Print the RSpec to the enclosing page. portal.context.printRequestRSpec()

It is easy to extend this profile slightly to allocate ten containers in a LAN and to switch them to dedicated mode (note that in this case, the exclusive member variable is not specified, and it defaults to True):

"""An example of constructing a profile with ten Docker containers in a LAN. Instructions: Wait for the profile instance to start, and then log in to the container via the ssh port specified below. By default, your container will run a standard Ubuntu image with the Emulab software preinstalled. """ import geni.portal as portal import geni.rspec.pg as rspec # Create a Request object to start building the RSpec. request = portal.context.makeRequestRSpec() # Create a LAN to put containers into. lan = request.LAN("lan") # Create ten Docker containers. for i in range(0,10): node = request.DockerContainer("node-%d" % (i)) # Create an interface. iface = node.addInterface("if1") # Add the interface to the LAN. lan.addInterface(iface) # Print the RSpec to the enclosing page. portal.context.printRequestRSpec() """An example of constructing a profile with ten Docker containers in a LAN. Instructions: Wait for the profile instance to start, and then log in to the container via the ssh port specified below. By default, your container will run a standard Ubuntu image with the Emulab software preinstalled. """ import geni.portal as portal import geni.rspec.pg as rspec # Create a Request object to start building the RSpec. request = portal.context.makeRequestRSpec() # Create a LAN to put containers into. lan = request.LAN("lan") # Create ten Docker containers. for i in range(0,10): node = request.DockerContainer("node-%d" % (i)) # Create an interface. iface = node.addInterface("if1") # Add the interface to the LAN. lan.addInterface(iface) # Print the RSpec to the enclosing page. portal.context.printRequestRSpec()

Here is a more complex profile that creates 20 containers, binds 10 of them to a physical host machine of a particular type, and binds the other 10 to a second machine of the same type:

"""An example of constructing a profile with 20 Docker containers in a LAN, divided across two container hosts. Instructions: Wait for the profile instance to start, and then log in to the container via the ssh port specified below. By default, your container will run a standard Ubuntu image with the Emulab software preinstalled. """ import geni.portal as portal import geni.rspec.pg as rspec # Import the Emulab specific extensions. import geni.rspec.emulab as emulab # Create a Request object to start building the RSpec. request = portal.context.makeRequestRSpec() # Create a LAN to put containers into. lan = request.LAN("lan") # Create two container hosts, each with ten Docker containers. for j in range(0,2): # Create a container host. host = request.RawPC("host-%d" % (j)) # Select a specific hardware type for the container host. host.hardware_type = "d430" for i in range(0,10): # Create a container. node = request.DockerContainer("node-%d-%d" % (j,i)) # Create an interface. iface = node.addInterface("if1") # Add the interface to the LAN. lan.addInterface(iface) # Set this container to be instantiated on the host created in # the outer loop. node.InstantiateOn(host.client_id) # Print the RSpec to the enclosing page. portal.context.printRequestRSpec() """An example of constructing a profile with 20 Docker containers in a LAN, divided across two container hosts. Instructions: Wait for the profile instance to start, and then log in to the container via the ssh port specified below. By default, your container will run a standard Ubuntu image with the Emulab software preinstalled. """ import geni.portal as portal import geni.rspec.pg as rspec # Import the Emulab specific extensions. import geni.rspec.emulab as emulab # Create a Request object to start building the RSpec. request = portal.context.makeRequestRSpec() # Create a LAN to put containers into. lan = request.LAN("lan") # Create two container hosts, each with ten Docker containers. for j in range(0,2): # Create a container host. host = request.RawPC("host-%d" % (j)) # Select a specific hardware type for the container host. host.hardware_type = "d430" for i in range(0,10): # Create a container. node = request.DockerContainer("node-%d-%d" % (j,i)) # Create an interface. iface = node.addInterface("if1") # Add the interface to the LAN. lan.addInterface(iface) # Set this container to be instantiated on the host created in # the outer loop. node.InstantiateOn(host.client_id) # Print the RSpec to the enclosing page. portal.context.printRequestRSpec()

9.2.2 Disk Images

Docker containers use a different disk image format than CloudLab physical machines or Xen virtual machines, which means that you cannot use the same images on both a container and a raw PC. However, CloudLab supports native Docker images in several modes and workflows. CloudLab hosts a private Docker registry, and the standard CloudLab image-deployment and -capture mechanisms support capturing container disk images into it. CloudLab also supports the use of externally hosted, unmodified Docker images and Dockerfiles for image onboarding and dynamic image creation. Finally, since some CloudLab features require in-container support (e.g., user accounts, SSH pubkeys, syslog, scripted program execution), we also provide an optional automated process, called augmentation, through which an external image can be customized with the CloudLab software and dependencies.

CloudLab supports both augmented and unmodified Docker images, but some features require augmentation (i.e., that the CloudLab client-side software be installed and running in the container). Unmodified images support these CloudLab features: network links, link shaping, remote access, remote storage (e.g., remote block stores), and image capture. They do not support user accounts, SSH pubkeys, or scripted program execution.

CloudLab’s disk image naming and versioning scheme is slightly different from Docker’s content-addressable model. A CloudLab disk image is identified by a project and name tuple (typically encoded as a URN), or by a UUID, as well as a version number that starts at 0. Each time you capture a new version of an image, the image’s version number is incremented by one. CloudLab does not support the use of arbitrary alphanumeric tags to identify image versions, as Docker does.

Thus, when you capture a CloudLab disk image of a CloudLab Docker container and give it a name, the local CloudLab registry will contain an image (repository) of that name in the project (and group, nearly always the same as the project name) within which your experiment was created; the full image name is thus <project-name>/<group-name>/<image-name>. The tags within that repository correspond to the integer version numbers of the CloudLab disk image. For example, if you have created a CloudLab image named docker-my-research in project myproject, and you have created three versions (0, 1, 2) and want to pull the latest version (2) to your computer, you could run this command:

docker pull ops.emulab.net:5080/myproject/myproject/docker-my-research:2

You will be prompted for username and password; use your CloudLab credentials.
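
Going the other direction, a profile can request a specific version of a captured image from geni-lib. A sketch, assuming the image from the example above and the usual URN form with the version number appended after a colon:

import geni.portal as portal
import geni.rspec.pg as rspec

request = portal.context.makeRequestRSpec()
node = request.DockerContainer("node")
# Pin version 2 of the image; omitting ":2" selects the latest version.
node.disk_image = "urn:publicid:IDN+emulab.net+image+myproject//docker-my-research:2"
portal.context.printRequestRSpec()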

The following code fragment creates a Docker container that uses a standard CloudLab Docker disk image, docker-ubuntu16-std. This image is based on the ubuntu:16.04 Docker image, with the Emulab client software installed (meaning it is augmented) along with dependencies and other utilities.

"""An example of a Docker container running a standard, augmented system image.""" import geni.portal as portal import geni.rspec.pg as rspec request = portal.context.makeRequestRSpec() node = request.DockerContainer("node") node.disk_image = "urn:publicid:IDN+emulab.net+image+emulab-ops//docker-ubuntu16-std" portal.context.printRequestRSpec() """An example of a Docker container running a standard, augmented system image.""" import geni.portal as portal import geni.rspec.pg as rspec request = portal.context.makeRequestRSpec() node = request.DockerContainer("node") node.disk_image = "urn:publicid:IDN+emulab.net+image+emulab-ops//docker-ubuntu16-std" portal.context.printRequestRSpec()

9.2.3 External Images

CloudLab supports the use of publicly accessible Docker images in other registries. It does not currently support username/password access to images. By default, if you simply specify a repository and tag, as in the example below, CloudLab assumes the image is in the standard Docker registry; but you can instead specify a complete URL pointing to a different registry.

"""An example of a Docker container running an external, unmodified image.""" import geni.portal as portal import geni.rspec.pg as rspec request = portal.context.makeRequestRSpec() node = request.DockerContainer("node") node.docker_extimage = "ubuntu:16.04" portal.context.printRequestRSpec() """An example of a Docker container running an external, unmodified image.""" import geni.portal as portal import geni.rspec.pg as rspec request = portal.context.makeRequestRSpec() node = request.DockerContainer("node") node.docker_extimage = "ubuntu:16.04" portal.context.printRequestRSpec()

By default, CloudLab assumes that an external, non-augmented image does not run its own sshd to support remote login. Instead, it provides remote access by running an alternate sshd on the container host: each container is associated with a specific port (the port in the ssh URL shown on your experiment's page), and a successful login there executes a shell (by default /bin/sh) inside the container. See the section on remote access below for more detail.

9.2.4 Dockerfiles

You can also create images dynamically (at experiment runtime) by specifying a Dockerfile for each container. Note that if multiple containers hosted on the same physical machine reference the same Dockerfile, the image will be built only once. Here is a simple example that uses a Dockerfile that builds httpd from source:

"""An example of a Docker container running an external, unmodified image.""" import geni.portal as portal import geni.rspec.pg as rspec request = portal.context.makeRequestRSpec() node = request.DockerContainer("node") node.docker_dockerfile = "https://github.com/docker-library/httpd/raw/38842a5d4cdd44ff4888e8540c0da99009790d01/2.4/Dockerfile" portal.context.printRequestRSpec() """An example of a Docker container running an external, unmodified image.""" import geni.portal as portal import geni.rspec.pg as rspec request = portal.context.makeRequestRSpec() node = request.DockerContainer("node") node.docker_dockerfile = "https://github.com/docker-library/httpd/raw/38842a5d4cdd44ff4888e8540c0da99009790d01/2.4/Dockerfile" portal.context.printRequestRSpec()

You should not assume you have access to the image build environment (you do only if your containers are running in dedicated mode), so test your Dockerfile on your local machine first to ensure it works. Moreover, your Dockerfile will be given an empty context directory, so any file resources it requires must be downloaded during the image build, by instructions in the Dockerfile.
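
For example, because the context directory is empty, a COPY of local files will fail; resources must instead be fetched over the network during the build. A hypothetical Dockerfile sketch (the URL is a placeholder):

FROM ubuntu:16.04
# COPY ./myapp /opt/myapp    <-- would fail: the build context is empty.
# Download required resources during the build instead:
RUN apt-get update && apt-get install -y curl \
    && curl -fsSL -o /tmp/myapp.tar.gz https://example.com/myapp.tar.gz \
    && tar -xzf /tmp/myapp.tar.gz -C /opt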

9.2.5 Augmented Disk Images

The primary use case that Docker supports involves a large collection of containers, potentially spanning many machines, in which each container runs a single application (one or more processes launched by a single parent) that provides a specific service. In this model, container images may be tailored to support only their single service, unencumbered by extra software. Many Docker images do not include an init daemon to launch and monitor processes, or even a basic set of user-space tools and libraries.

CloudLab-based experimentation requires more than the ability to run a single service per node. Within an experiment, it is beneficial for each node to run basic OS services (e.g., syslogd and sshd) to support activities such as debugging, interactive exploration, logging, and more. It is beneficial for each node to run a suite of CloudLab-specific services to configure the node, conduct in-node monitoring, and automate experiment deployment and control (e.g., launch programs or dynamically emulate link failures). This environment requires a full-featured init daemon and a common, basic set of user-space software and libraries.

To enable experiments that use existing Docker images and seamlessly support the CloudLab environment, CloudLab supports automatic augmentation of those images. When requested, CloudLab pulls an existing image, builds both runit (a simple, minimal init daemon) and the CloudLab client toolchain against it in temporary containers, and creates a new image with the just-built runit and CloudLab binaries, scripts, and dependencies. Augmented images are necessary to fully support some CloudLab features (specifically, the creation of user accounts, installation of SSH pubkeys, and execution of startup scripts). Other CloudLab features (such as experiment network links and block storage) are configured outside the container at startup and do not require augmentation.

During augmentation, CloudLab modifies a Docker image to run an init system rather than a single application. It builds and packages runit in temporary containers with a build toolchain, installs it in the final augmented image, and sets it as that image's ENTRYPOINT. CloudLab creates runit services to emulate any existing ENTRYPOINT and/or CMD instructions from the original image. The CloudLab client-side software is also built and installed.
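
Augmentation is requested per container with the docker_tbaugmentation member variable, whose levels are listed in the table at the end of this section. A minimal sketch (the level chosen here is illustrative):

import geni.portal as portal
import geni.rspec.pg as rspec

request = portal.context.makeRequestRSpec()
node = request.DockerContainer("node")
node.docker_extimage = "ubuntu:16.04"
# Ask CloudLab to augment the external image with runit and the
# client-side software.
node.docker_tbaugmentation = "core"
portal.context.printRequestRSpec()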

9.2.6 Remote Access

CloudLab provides remote access via ssh to each node in an experiment. If your container runs an sshd inside, ssh access is straightforward: CloudLab allocates a high-numbered port on the container host machine and redirects (DNATs) that port to the private, unrouted control-network IP address assigned to the container. (CloudLab containers typically receive private addresses that are routed only on the CloudLab control network, to increase experiment scalability.) We refer to this style of ssh remote login as direct.

However, most unmodified, unaugmented Docker images do not run sshd. To trivially allow remote access to containers running such images, CloudLab runs a proxy sshd in the host context on the high-numbered ports assigned to each container, and turns successfully authenticated incoming connections into invocations of docker exec <container> /bin/sh. We refer to this style of ssh remote login as exec.

You can force the ssh style you prefer on a per-container basis; if you use the exec style, you can also choose a specific shell to execute inside the container. For instance:

"""An example of a Docker container running an external, unmodified image, and customizing its remote access.""" import geni.portal as portal import geni.rspec.pg as rspec request = portal.context.makeRequestRSpec() node = request.DockerContainer("node") node.docker_extimage = "ubuntu:16.04" node.docker_ssh_style = "exec" node.docker_exec_shell = "/bin/bash" portal.context.printRequestRSpec() """An example of a Docker container running an external, unmodified image, and customizing its remote access.""" import geni.portal as portal import geni.rspec.pg as rspec request = portal.context.makeRequestRSpec() node = request.DockerContainer("node") node.docker_extimage = "ubuntu:16.04" node.docker_ssh_style = "exec" node.docker_exec_shell = "/bin/bash" portal.context.printRequestRSpec()

9.2.7 Console

Although Docker containers do not have a serial console, unlike most CloudLab physical machines, they can provide a pty (pseudoterminal) for interaction with the main process running in the container. CloudLab attaches to these streams via the Docker daemon and proxies them into the CloudLab console mechanism. Thus, if you click the "Console" link for a container on your experiment status page, you will see output from the container and be able to send it input. (Each time you close and reopen the console for a container, you will see the entire console log kept by the Docker daemon, so there may be a flood of initial output if your container has been running and producing output for a long time.)

9.2.8 ENTRYPOINT and CMD

If you run an augmented Docker image in a container, recall that the augmentation process overrides the ENTRYPOINT and/or CMD instruction(s) the image contained, if any. Changing the ENTRYPOINT means that the augmented image will not run the command that was specified for the original image. Moreover, runit must run as root, but the image creator may have specified a different USER for processes that execute in the container. To fix these problems, CloudLab emulates the original (or runtime-specified) ENTRYPOINT as a runit service and handles several related Dockerfile settings as well: CMD, WORKDIR, ENV, and USER. The emulation preserves the semantics of these settings (see Docker's reference on the interaction between ENTRYPOINT and CMD), with the exception that the user-specified ENTRYPOINT or CMD is not executed as PID 1. Only the ENTRYPOINT and CMD processes run as USER; processes started from outside the container via docker exec run as root.

You can customize the ENTRYPOINT and CMD for each container you create by setting the docker_entrypoint and the docker_cmd member variables of a geni-lib DockerContainer object instance.
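
A minimal sketch (the command string is illustrative, and a simple string form for these values is assumed):

import geni.portal as portal
import geni.rspec.pg as rspec

request = portal.context.makeRequestRSpec()
node = request.DockerContainer("node")
node.docker_extimage = "ubuntu:16.04"
# Override the image's CMD with a simple long-running command;
# docker_entrypoint can be set the same way.
node.docker_cmd = "sleep 3600"
portal.context.printRequestRSpec()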

The runit service that performs this emulation is named dockerentrypoint. If this service exits, it is not automatically restarted, unlike most runit services. You can start it again by running sv start dockerentrypoint within a container. You can also browse the runit documentation for more examples of interacting with runit.

This emulation process is complicated, so if you suspect problems, the log files written in each container may help. The stdout and stderr output from the ENTRYPOINT and CMD emulation is logged to /var/log/entrypoint.log, and additional debugging information is logged to /var/log/entrypoint-debug.log.

9.2.9 Shared Containers

In CloudLab, Docker containers can be created in dedicated or shared mode. In dedicated mode, containers run on physical nodes that are reserved to a particular experiment, and you have root-level access to the underlying physical machine. In shared mode, containers run on physical machines that host containers from potentially many experiments, and users do not have access to the underlying physical machine.
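
The mode is selected with the exclusive member variable, as in this minimal sketch requesting dedicated mode:

import geni.portal as portal
import geni.rspec.pg as rspec

request = portal.context.makeRequestRSpec()
node = request.DockerContainer("node")
# True requests a dedicated container host; False requests a shared one.
node.exclusive = True
portal.context.printRequestRSpec()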

9.2.10 Privileged Containers

Docker allows containers to be privileged or unprivileged: a privileged container has administrative access to the underlying host. CloudLab allows you to spawn privileged containers, but only on dedicated container hosts. Moreover, you should do this only when absolutely necessary, because a compromised privileged container can effectively take over the physical host and access and control other containers. To make a container privileged, set the docker_privileged member variable of a geni-lib DockerContainer object instance to True.
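
A minimal sketch combining a dedicated host with a privileged container:

import geni.portal as portal
import geni.rspec.pg as rspec

request = portal.context.makeRequestRSpec()
node = request.DockerContainer("node")
# Privileged containers are allowed only on dedicated container hosts.
node.exclusive = True
node.docker_privileged = True
portal.context.printRequestRSpec()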

9.2.11 Remote Blockstores

You can mount CloudLab remote blockstores in Docker containers, provided they were formatted with a Linux-mountable filesystem. Here is an example:

"""An example of a Docker container that mounts a remote blockstore.""" import geni.portal as portal import geni.rspec.pg as rspec import geni.rspec.igext as ig request = portal.context.makeRequestRSpec() node = request.DockerContainer("node") # Create an interface to connect to the link from the container to the # blockstore host. myintf = # Create the blockstore host. bsnode = ig.RemoteBlockstore("bsnode","/mnt/blockstore") # Map your remote blockstore to the blockstore host bsnode.dataset = \ "urn:publicid:IDN+emulab.net:emulab-ops+ltdataset+johnsond-bs-foo" bsnode.readonly = False # Connect the blockstore host to the container. bslink = pg.Link("bslink") bslink.addInterface(node.addInterface("ifbs0")) bslink.addInterface(bsnode.interface) portal.context.printRequestRSpec() """An example of a Docker container that mounts a remote blockstore.""" import geni.portal as portal import geni.rspec.pg as rspec import geni.rspec.igext as ig request = portal.context.makeRequestRSpec() node = request.DockerContainer("node") # Create an interface to connect to the link from the container to the # blockstore host. myintf = # Create the blockstore host. bsnode = ig.RemoteBlockstore("bsnode","/mnt/blockstore") # Map your remote blockstore to the blockstore host bsnode.dataset = \ "urn:publicid:IDN+emulab.net:emulab-ops+ltdataset+johnsond-bs-foo" bsnode.readonly = False # Connect the blockstore host to the container. bslink = pg.Link("bslink") bslink.addInterface(node.addInterface("ifbs0")) bslink.addInterface(bsnode.interface) portal.context.printRequestRSpec()

CloudLab does not support mapping raw devices (in the case of CloudLab remote blockstores, virtual iSCSI-backed devices) into containers.

9.2.12 Temporary Block Storage

You can mount CloudLab temporary blockstores in Docker containers. Here is an example that places an 8 GB filesystem in a container at /mnt/tmp:

"""An example of a Docker container that mounts a remote blockstore.""" import geni.portal as portal import geni.rspec.pg as rspec import geni.rspec.igext as ig request = portal.context.makeRequestRSpec() node = request.DockerContainer("node") bs = node.Blockstore("temp-bs","/mnt/tmp") bs.size = "8GB" bs.placement "any" portal.context.printRequestRSpec() """An example of a Docker container that mounts a remote blockstore.""" import geni.portal as portal import geni.rspec.pg as rspec import geni.rspec.igext as ig request = portal.context.makeRequestRSpec() node = request.DockerContainer("node") bs = node.Blockstore("temp-bs","/mnt/tmp") bs.size = "8GB" bs.placement "any" portal.context.printRequestRSpec()

9.2.13 DockerContainer Member Variables

The following table summarizes the DockerContainer class member variables. You can find more detail either in the sections above, or by browsing the source code documentation.

cores
    The number (integer) of virtual CPUs this container should receive; approximated using Docker's CpuShares and CpuPeriod options.

ram
    The amount (integer) of memory this container should receive.

disk_image
    The disk image this node should run. See the section on Docker disk images above.

docker_ptype
    The physical node type on which to instantiate the container. Types are cluster-specific; see the hardware chapter.

docker_extimage
    An external Docker image (repo:tag) to load on the container; see the section on external Docker images.

docker_dockerfile
    A URL that points to a Dockerfile from which an image for this node will be created; see the section on Dockerfiles.

docker_tbaugmentation
    The requested testbed augmentation level: must be one of full, buildenv, core, basic, none. See the section on Docker image augmentation.

docker_tbaugmentation_update
    If the image has already been augmented, whether to update the augmentation (True) or not (False).

docker_ssh_style
    Specify what happens when you ssh to your node; must be either direct or exec. See the section on remote access.

docker_exec_shell
    The shell to run if the value of docker_ssh_style is exec; ignored otherwise.

docker_entrypoint
    Change/set the Docker ENTRYPOINT option for this container; see the section on Docker ENTRYPOINT and CMD handling.

docker_cmd
    Change/set the Docker CMD option for this container; see the section on Docker ENTRYPOINT and CMD handling.

docker_env
    Add or override environment variables in a container. The value of this attribute should be either a newline-separated list of variable assignments, or one or more variable assignments on a single line. In the former case, escaped newlines are not supported, unlike the Docker ENV instruction.

docker_privileged
    Set this member variable to True to make the container privileged. See the section on Docker privileged containers.