LXC on OpenSUSE Tumbleweed

C. R. Oldham
Feb 16, 2016


I’ve been enjoying OpenSUSE’s Tumbleweed distribution. It has all of the benefits of a rolling release like Arch without some of the instability. Unfortunately, my standby for lots of testing, LXC, doesn’t quite work out of the box: you can retrieve images with lxc-create -n name -t download, but they won’t start.

Extensive Googling did not reveal the specific reason for this, but I finally figured it out and decided to document it here.

SUSE has excellent support for libvirt, and libvirt has rapidly improving support for LXC. So we’ll install the libvirt suite alongside LXC. A huge advantage here is that we get a single bridge (br0) that works for both libvirt and LXC. One frustration point I’ve had with LXC on other platforms is that I’d often end up with an lxcbr0 alongside other bridges for other container/virtualization options.
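If the install below doesn’t leave you with a working br0, the bridge can also be defined by hand with a wicked ifcfg file. This is only a sketch; the eth0 name is an assumption, so substitute your actual adapter:

# /etc/sysconfig/network/ifcfg-br0 (eth0 assumed to be the primary adapter)
STARTMODE='auto'
BOOTPROTO='dhcp'
BRIDGE='yes'
BRIDGE_PORTS='eth0'
BRIDGE_STP='off'

Restart the network (systemctl restart network) afterwards, and set the enslaved adapter’s own ifcfg to BOOTPROTO='none' so it doesn’t also try DHCP.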

To install the tools you need, it’s quickest to start with YaST. Start YaST as root, select Virtualization in the left pane, then Install Hypervisor and Tools. In the next dialog, pick just KVM Tools and libvirt LXC daemon — that’s all you need.

┌───────────────────────────────────────────────────────────┐
│ │
│ ┌Choose Hypervisor(s) to install────────────────────────┐ │
│ │Server: Minimal system to get a running Hypervisor │ │
│ │Tools: Configure, manage and monitor virtual machines │ │
│ └───────────────────────────────────────────────────────┘ │
│ │
│ ┌Xen Hypervisor─────────────────────────────────────────┐ │
│ │[ ] Xen server [ ] Xen tools │ │
│ └───────────────────────────────────────────────────────┘ │
│ ┌KVM Hypervisor─────────────────────────────────────────┐ │
│ │[ ] KVM server [x] KVM tools │ │
│ └───────────────────────────────────────────────────────┘ │
│ ┌libvirt LXC containers─────────────────────────────────┐ │
│ │[x] libvirt LXC daemon │ │
│ └───────────────────────────────────────────────────────┘ │
│ │
│ [Accept] [Cancel] │
└───────────────────────────────────────────────────────────┘
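If you’d rather not click through YaST, roughly the same selection can be installed with zypper. The pattern and package names here are my assumption for current Tumbleweed, so verify them with zypper search before relying on them:

# zypper in -t pattern kvm_tools
# zypper in libvirt-daemon-lxc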

Then make sure lxc and the AppArmor packages are installed:

# zypper in lxc apparmor apparmor-utils apparmor-abstractions

Next, we need to make sure the AppArmor profile for LXC containers is loaded:

# apparmor_parser /etc/apparmor.d/lxc-containers 
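To double-check that the profile actually loaded, aa-status (it should be in the apparmor-utils package we just installed) lists everything the kernel currently knows about; expect to see lxc-container-default and friends in its output:

# aa-status | grep lxc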

If you look in /etc/lxc/default.conf, you’ll see that no network type is defined. Things will work better if we add a saner configuration there:

# Network configuration
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up

Now pull an image — let’s use Ubuntu 14.04:

lxc-create -B btrfs -n ubuntu -t download

Setting up the GPG keyring
Downloading the image index
<list of distros omitted>
Distribution: ubuntu
Release: trusty
Architecture: amd64

Using image from local cache
Unpacking the rootfs
---
You just created an Ubuntu container (release=trusty, arch=amd64, variant=default)

To enable sshd, run: apt-get install openssh-server

For security reason, container images ship without user accounts and without a root password.
Use lxc-attach or chroot directly into the rootfs to set a root password or create user accounts.
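A note on the -B btrfs flag above: it puts the container’s rootfs on a btrfs subvolume, which only works if /var/lib/lxc lives on a btrfs filesystem (the Tumbleweed default). If your setup differs, dropping the flag or using the plain directory backing store should still work:

lxc-create -B dir -n ubuntu -t download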

Let’s try to start and attach to it.

lxc-start -n ubuntu -F
lxc-start: utils.c: open_without_symlink: 1626 No such file or directory - Error examining fuse in /usr/lib64/lxc/rootfs/sys/fs/fuse/connections
lxc-start: lsm/apparmor.c: apparmor_process_label_set: 169 If you really want to start this container, set
lxc-start: lsm/apparmor.c: apparmor_process_label_set: 170 lxc.aa_allow_incomplete = 1
lxc-start: lsm/apparmor.c: apparmor_process_label_set: 171 in your container configuration file
lxc-start: sync.c: __sync_wait: 51 invalid sequence number 1. expected 4
lxc-start: start.c: __lxc_start: 1192 failed to spawn 'ubuntu'
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.

Ooh. Ouch. What is aa_allow_incomplete?

man 5 lxc.container.conf

[...]
lxc.aa_allow_incomplete
Apparmor profiles are pathname based. Therefore many file restrictions require mount restrictions to be effective against a determined attacker. However, these mount restrictions are not yet implemented in the upstream kernel. Without the mount restrictions, the apparmor profiles still protect against accidental damage.

If this flag is 0 (default), then the container will not be started if the kernel lacks the apparmor mount features, so that a regression after a kernel upgrade will be detected. To start the container under partial apparmor protection, set this flag to 1.
[...]

Well, I’m OK with that, since I use my containers basically for testing. You may not be, if you need more security inside your containers.
So let’s add that to /etc/lxc/default.conf and try again.
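For reference, with the network stanza from earlier plus this flag, /etc/lxc/default.conf now looks like this:

# Network configuration
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.aa_allow_incomplete = 1

One caveat: default.conf is only copied into a container’s config at creation time, so for the ubuntu container we already created you may also need to add the same line to its own config (/var/lib/lxc/ubuntu/config on a stock install), or just recreate the container.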

# lxc-start -n ubuntu -F

Ubuntu 14.04.3 LTS ubuntu console
ubuntu login: _

QED.
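Remember from the lxc-create output that the image ships with no root password, so you can’t actually log in at that prompt yet. From a second terminal on the host, lxc-attach will run a command inside the running container; setting a root password is the obvious first step:

# lxc-attach -n ubuntu -- passwd root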

Note that this setup attaches the machine’s primary Ethernet adapter to the bridge, and the adapters inside any containers you create to the same bridge. This means each container will get an IP address via DHCP on the same network as the host. Also, if you run VMware Workstation or Fusion, VMware will complain that a VM is placing a network adapter in promiscuous mode and will ask for administrator credentials.
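If you want to confirm which address the container leased, lxc-info on the host should report it (along with the container’s state and PID):

# lxc-info -n ubuntu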

EDIT: Regarding admin credentials when Fusion VMs put a network adapter into promiscuous mode: I had forgotten there is a checkbox for this in later Fusion versions (I’m on 8.1.0). Go to the Preferences dialog in Fusion, select the Network preference pane, and in the bottom left corner there is a checkbox to turn off the credentials requirement. Note that this does introduce the possibility that a malicious VM could monitor all network traffic to and from your host machine.

Resources:

https://www.berrange.com/posts/2011/09/27/getting-started-with-lxc-using-libvirt/ (a little dated)
http://blog.scottlowe.org/2013/11/27/linux-containers-via-lxc-and-libvirt/
https://libvirt.org/drvlxc.html
https://forums.opensuse.org/showthread.php/511258-Cannot-boot-LXC-in-leap-42-1
