Upgrading the Raspberry Pi 2 from Wheezy to Jessie

Well outside my usual area of expertise… I had a Raspberry Pi 2 running the Raspbian image of Debian Wheezy, but found that upgrading it to Jessie last night was more difficult than the usual painful experience of using “apt-get”.

Step 1

Do the usual: make sure your system is up to date.  Also install the https transport for apt if you don’t have it, since the new Collabora RasPi2 repository requires SSL.

# apt-get update && apt-get upgrade && apt-get dist-upgrade
# apt-get -y install apt-transport-https
Step 2

Update to the jessie repositories, and switch to the new Collabora repos, grabbing the new keyring:

# sed -i 's/wheezy/jessie/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
# echo 'deb https://repositories.collabora.co.uk/debian/ jessie rpi2' > /etc/apt/sources.list.d/collabora.list
# apt-get -y install collabora-obs-archive-keyring
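If you are nervous about that sed -i rewriting your sources files in place, you can rehearse the substitution on a scratch copy first – a throwaway sketch (the /tmp path and mirror URL here are just illustrative):

```shell
# Rehearse the wheezy->jessie substitution on a scratch copy; the real
# edit targets /etc/apt/sources.list and /etc/apt/sources.list.d/*.list.
mkdir -p /tmp/apt-rehearsal
printf 'deb http://mirrordirector.raspbian.org/raspbian/ wheezy main contrib non-free rpi\n' \
    > /tmp/apt-rehearsal/sources.list
sed 's/wheezy/jessie/g' /tmp/apt-rehearsal/sources.list
# deb http://mirrordirector.raspbian.org/raspbian/ jessie main contrib non-free rpi
```

Once the output looks right, re-run the same expression with -i against the real files.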
Step 3

Now we can upgrade:

# apt-get update && apt-get -y upgrade && apt-get -y dist-upgrade

Passing the “default yes” -y flag to apt-get is my preference, but may not be yours.  You may need to run that command multiple times, since package upgrades include interactive prompts and other oddities which sometimes fail part-way through – an experience which always reminds me of Ye Olde (1980s) Unix package managers.  OK, yes, I’m biased.
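Once the final dist-upgrade completes (and after a reboot), it’s worth confirming you really are on jessie.  A quick sanity check, assuming the standard Debian release files are present:

```shell
# On jessie, /etc/debian_version reports an 8.x release, and os-release
# (where available) names the codename in its PRETTY_NAME field.
cat /etc/debian_version 2>/dev/null
grep PRETTY_NAME /etc/os-release 2>/dev/null
```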

Adventures with Containerization #2: Fedora, httpd and virt-sandbox

In my previous post I used Docker to create a container running httpd.   With a Docker container, I could say I am half way to having a virtual machine: I get an isolated and self-contained OS installation in a filesystem which is separate from my host filesystem.  But the container itself is running natively on my host OS; we are not executing code under a virtual machine as happens using KVM or Xen.

In this post I want to compare Docker with another container tool: virt-sandbox, a set of tools created by Dan Walsh and Daniel Berrangé. With virt-sandbox we get an even lighter-weight container in which to run httpd.

Dan and Daniel have done great writeups (and presentations!) covering these tools in much more detail, e.g. here and here.  Here is a simple walkthrough, starting from a Fedora 20 install.

# yum install -q -y libvirt-sandbox httpd
# systemctl start libvirtd.service 
# virt-sandbox-service create -C -u httpd.service httpd-test2
Created sandbox container dir /var/lib/libvirt/filesystems/httpd-test2
Created unit file /etc/systemd/system/httpd-test2_sandbox.service
Created sandbox config /etc/libvirt-sandbox/services/httpd-test2/config/sandbox.cfg

I happen to already have a bridge set up on this machine, so passing the argument --network dhcp,source=default is sufficient to get networking up in the container.  Otherwise networking is more complicated to set up – the libvirt site has more details on configuring networking.

Using Docker, a set of loopback filesystems got created behind the scenes to store the container images – Alex Larsson explains how this was implemented.  With virt-sandbox, the container is created using files stored directly in the host filesystem.  So I have a directory which stores the container’s filesystem, and it looks like any chroot environment might:

# tree -A -L 2 /var/lib/libvirt/filesystems/httpd-test2
├── etc
│   ├── fstab
│   ├── hostname
│   ├── httpd
│   ├── machine-id
│   ├── rc.d
│   ├── sysconfig
│   └── systemd
├── home
├── root
├── usr
│   └── lib
└── var
    ├── cache
    ├── lib
    ├── log
    ├── run -> /run
    ├── spool
    ├── tmp
    └── www

17 directories, 3 files

I’m going to do my “Hello World” in Lua this time.   virt-sandbox-service has already created a systemd service in my host, so I can skip that step.

# cat > /var/lib/libvirt/filesystems/httpd-test2/var/www/html/hello.lua <<EOF
function handle(r)
    r.content_type = "text/plain"
    r:puts("Hello Lua World!\n")
    return apache2.OK
end
EOF
# systemctl start httpd-test2_sandbox.service
# virt-sandbox-service execute httpd-test2 dhclient
# virt-sandbox-service execute httpd-test2 ip addr show dev eth0 | grep 'inet '
    inet brd scope global dynamic eth0
# curl
Hello Lua World!

I discovered I had to run dhclient manually in the container to get this working, which seems like a bug.  Otherwise, it worked!

What’s interesting here is that the httpd running inside the container is the actual httpd installation from my Fedora host OS – overlaid onto that /var/lib/libvirt/filesystems/httpd-test2 chroot-like directory mentioned above.  So, if I install php in the host, will it show up automatically in the container without any additional configuration?

# systemctl stop httpd-test2_sandbox.service
# yum install -q -y php
# systemctl start httpd-test2_sandbox.service
# virt-sandbox-service execute httpd-test2 dhclient
# echo '<?php echo "<h1>Hello World</h1>\n"; ?>' > /var/lib/libvirt/filesystems/httpd-test2/var/www/html/hello.php
# curl
<?php echo "<h1>Hello World</h1>\n"; ?>

Annoyingly, not; PHP is not activated, and httpd served the source code.  This is because the container has a private copy of the /etc/httpd directory from my host, and that part of the filesystem doesn’t inherit any changes.  So activating PHP in the container takes a little more work:

# cp /etc/httpd/conf.modules.d/10-php.conf /var/lib/libvirt/filesystems/httpd-test2/etc/httpd/conf.modules.d/
# cp /etc/httpd/conf.d/php.conf /var/lib/libvirt/filesystems/httpd-test2/etc/httpd/conf.d/
# virt-sandbox-service execute httpd-test2 -- httpd -k restart
# curl
<h1>Hello World</h1>

Success!   Note that I only had to copy in the configuration, and the container has inherited the rest of the php package (the libphp5.so loadable module for Apache, etc) from the host.

In either this configuration or with Docker containers, to expose httpd containers to the world, there are two ways to start:

  1. Set up the containers to directly access a bridged Ethernet device.  Since each container requires its own IP address, this is only practical if you have as many spare IP addresses as you have containers.
  2. Configure the host as an HTTP proxy to the containers.

The httpd configuration in case (2) will look like any standard reverse proxy, except that you happen to be proxying to “backends” which are containers on the same physical machine; this can be as simple as a ProxyPass:

ProxyPass /myapp
ProxyPassReverse /myapp

… or as complicated as mod_rewrite RewriteRule using the [P] flag.
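Putting that together, here is a minimal sketch of the host-side configuration for case (2) – note that the backend address 172.17.0.2 is a made-up container IP and /myapp an arbitrary path, so adjust both to your setup:

```apache
# Host httpd acting as a reverse proxy to an httpd container
# (172.17.0.2 is a hypothetical container IP):
ProxyRequests Off
ProxyPass        /myapp http://172.17.0.2/myapp
ProxyPassReverse /myapp http://172.17.0.2/myapp
```

ProxyPassReverse rewrites Location headers in redirects from the backend so clients never see the container’s internal address.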

Adventures with Containerization: Fedora, Docker and httpd

I have finally got around to experimenting with Docker this week.   Thanks to the hard work of many wonderful hackers, since late last year we’ve had Docker packages working in Fedora (under the package name “docker-io”).  Matt Miller has also been pushing Fedora images to the Docker index.

How can we use Apache httpd with Docker in Fedora?  One simple use-case might be serving a mostly static web site which you want to isolate inside a Docker container.  This is what you might aim for if you have a server hosting a bunch of different web sites for unrelated customers, and you want isolation between those sites: if one httpd gets compromised, you don’t want them all to go down.

Most of the hard work is already done here.  We need two things:

  1. A Docker image which can launch httpd, serving our web site.
  2. A way to control that Docker container from the host side.

A Dockerfile is a script which describes how to create a docker image.  Here’s my Dockerfile:

# Clone from the Fedora 20 image
FROM fedora:20
# Install httpd
RUN yum install -y httpd
# Blank out the default welcome page
RUN echo > /etc/httpd/conf.d/welcome.conf
# Change the default docroot
RUN sed -i 's|^DocumentRoot.*|DocumentRoot "/srv"|' /etc/httpd/conf/httpd.conf
# Add in our custom httpd configuration.
ADD extra.conf /etc/httpd/conf.d/root.conf
# Start up httpd.
ENTRYPOINT /usr/sbin/httpd -DFOREGROUND

The first line means this image begins as a clone of the standard Fedora 20 installation.  We make only a few modifications: we install the httpd package, blank out the default welcome page, change the default docroot to /srv, and add in a custom configuration file.  The “ENTRYPOINT” line means that httpd is the process invoked by default when containers are created from this image.  The custom configuration file, “extra.conf”, relaxes httpd’s access control for the /srv directory:

<Directory /srv>
   Require all granted
</Directory>

Combining that with the change to the DocumentRoot, this image is set up to serve whatever is mounted at /srv.  Placing the two files described above in an empty directory, I create a new Docker image as follows – note the -t argument tags the image with the name “httpd-test1”:

# docker build -t httpd-test1 .
Uploading context 10.24 kB
Uploading context 
Step 1 : FROM fedora:20
 ---> 6572f78e5fa5
Step 2 : RUN yum install -y httpd
 ---> Using cache
 ---> 44b5498e707a
Step 3 : RUN echo > /etc/httpd/conf.d/welcome.conf
 ---> Running in c91f82b99bce
 ---> c5a629660e53
Step 4 : RUN sed -i 's|^DocumentRoot.*|DocumentRoot "/srv"|' /etc/httpd/conf/httpd.conf
 ---> Running in 67fe1b96bf51
 ---> 26ad2ac0ebf8
Step 5 : ADD extra.conf /etc/httpd/conf.d/root.conf
 ---> cfe43ce7417a
Step 6 : ENTRYPOINT /usr/sbin/httpd -DFOREGROUND
 ---> Running in 05a4a1154fb7
 ---> f7d8712877b8
Successfully built f7d8712877b8

The remaining piece of the puzzle is integration on the host side.   We could rely on standard Docker tools for this, but let’s do it properly.  I’m going to place my example site, imaginatively named “example.com”, in /srv/example.com, as follows:

# mkdir /srv/example.com
# echo '<h1>Hello,  World</h1>' > /srv/example.com/index.html

Now I can set up a systemd service file which launches a Docker container using my image, and launch the container:

# cat > /etc/systemd/system/example.com.service <<EOF
[Unit]
Description=example.com Container

[Service]
ExecStart=/usr/bin/docker run -v /srv/example.com:/srv httpd-test1
EOF

# systemctl daemon-reload

And that’s it!  The “docker run” command creates a container using the “httpd-test1” image built earlier, and mounts the host’s /srv/example.com as /srv in the container.  I can control this new service like any other systemd service:

# systemctl start example.com
# docker ps
CONTAINER ID        IMAGE                COMMAND                CREATED             STATUS                  PORTS               NAMES
91e770b4d61f        httpd-test1:latest   /bin/sh -c /usr/sbin   1 seconds ago       Up Less than a second                       kickass_franklin6   
# docker inspect 91e770b4d61f | grep IPAddr
        "IPAddress": "",
# curl
<h1>Hello,  World</h1>

Neat stuff.

Safer suexec: from setuid to Linux capabilities

Apache httpd’s “suexec” feature has always been relatively safe but still slightly annoying.  The /usr/sbin/suexec binary is installed setuid root, allowing a non-root user to execute it under root’s privileges.  The binary is implemented to very carefully ensure that this is only possible in very specific circumstances: when the “apache” user is trying to execute a CGI script owned by a user.  The documentation explains the twenty different steps necessary to make this process as secure as possible.

A recent Fedora initiative to get rid of setuid binaries prompted me to add support for using suexec with Linux capabilities instead.  A “setuid root” binary is all-powerful; it executes with the complete privileges of the superuser.  With capabilities, we can instead allow the binary to run with only the specific privileges it requires.

It turns out two separate features were required here.

1.  Making suexec itself use capabilities was actually very simple.  The two capability bits which suexec needs are CAP_SETGID and CAP_SETUID – these allow arbitrary use of setgid() and setuid() as required.  So I added a new configure option --enable-suexec-capabilities to httpd which switches “make install” to use:

   setcap 'cap_setuid,cap_setgid+pe' /path/to/suexec

instead of “chmod 4755”.  (You can read the setcap(8) man page for more information about that command.)
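To see concretely what is being taken away, compare the mode bits.  Here is a sketch using a scratch file (setcap itself needs root plus CAP_SETFCAP, so only the chmod half is shown live; the filename is made up):

```shell
# The traditional setuid-root install leaves an "s" in the owner
# execute position of the mode bits:
touch /tmp/fake-suexec
chmod 4755 /tmp/fake-suexec
stat -c '%A' /tmp/fake-suexec    # -rwsr-xr-x : the "s" is the setuid bit
rm -f /tmp/fake-suexec
# The capability-based install would instead be (as root):
#   chmod 0755 /path/to/suexec
#   setcap 'cap_setuid,cap_setgid+pe' /path/to/suexec
```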

That was committed in r1342065.

2.  But using only that change, the suexec binary then stops working.  Why?  Because suexec wants to write to its log file (in Fedora/RHEL at /var/log/suexec.log) every time it is executed, and fails if it cannot.  And of course that log file must be only writable by root.

We could work around that by giving suexec an extra capability bit, CAP_DAC_OVERRIDE, which will override the permissions checks when writing to the log file.  But that defeats the point of this exercise, which is to limit capabilities as much as possible.  There was another way – syslog!  We can write to syslog without needing superuser privileges.  So part two was to adapt suexec to use syslog.  This was added in a second commit, r1341905, using a separate configure option (since the feature is independently useful), --with-suexec-syslog.

The complete set of suexec configure options used in the current Fedora httpd.spec is as follows:

./configure ... \
        --enable-suexec --with-suexec \
        --enable-suexec-capabilities \
        --with-suexec-caller=apache \
        --with-suexec-docroot=/var/www \
        --without-suexec-logfile \
        --with-suexec-syslog \
        --with-suexec-bin=/usr/sbin/suexec \
        --with-suexec-uidmin=500 --with-suexec-gidmin=100

Bingo! All done. suexec is no longer setuid root. The new code is available in httpd trunk, patched into Fedora httpd, and I should really propose it for backport to 2.4.x.

httpd 2.4 on Red Hat Enterprise Linux 6

My team here at Red Hat maintains the web server stack in Fedora and RHEL.  One of the cool projects we’ve been working on recently is Software Collections.  With RHEL we’ve always suffered from the tension between offering a stable OS platform to users, and trying to support the latest-and-greatest open source software.  Software Collections is a great technology we’re using to address that tension.  Remi Collet has blogged about the PHP 5.4 software collection (now available in the 1.0 release of our product) over at his blog and at redhat.com.  Another team member, Jan Kaluza, has been working on a collection of httpd 2.4 for RHEL6 – something we keep hearing requests for in bugzilla.

To kick the wheels of Jan’s collection in a RHEL 6.4 VM, here’s what I did:

# curl -s http://repos.fedorapeople.org/repos/jkaluza/httpd24/epel-httpd24.repo > /etc/yum.repos.d/epel-httpd24.repo
# yum install httpd24-httpd
  httpd24-httpd.x86_64 0:2.4.6-5.el6                                                                                

Dependency Installed:
  httpd24-apr.x86_64 0:1.4.8-2.el6  httpd24-apr-util.x86_64 0:1.5.2-5.el6  httpd24-httpd-tools.x86_64 0:2.4.6-5.el6 
  httpd24-runtime.x86_64 0:1-6.el6 


This has dropped a complete installation of Apache httpd 2.4.6 into /opt/rh/httpd24 which can be used alongside the httpd 2.2.15 package supported in RHEL 6.4.

# rpm -ql httpd24-httpd | grep sbin

The httpd install is contained inside /opt/rh/httpd24 as far as possible, but we do “leak” into the normal RHEL filesystem in a couple of places – notably to offer an init script.   This makes firing up the newly installed 2.4 daemon in my VM as easy as any other service:

# service httpd24-httpd start
Starting httpd:                                            [  OK  ]
# curl -s http://localhost/ | grep 'Test Page for'
		<title>Test Page for the Apache HTTP Server on Red Hat Enterprise Linux</title>

That’s the httpd packagers’ equivalent of getting your program to print “Hello, World” – we’re successfully serving the familiar HTML “welcome page” over HTTP on port 80.

I wanted to check whether the SELinux labelling is being applied correctly in the httpd 2.4 collection.  Using some /usr/bin/semanage magic, it’s actually very simple for us to automatically apply SELinux policy inside software collections using an RPM %post script.  Here’s one way to check whether it’s working:

# ps Zf -C httpd
LABEL                             PID TTY      STAT   TIME COMMAND
unconfined_u:system_r:httpd_t:s0 1772 ?        Ss     0:00 /opt/rh/httpd24/root/usr/sbin/httpd
unconfined_u:system_r:httpd_t:s0 1774 ?        S      0:00  \_ /opt/rh/httpd24/root/usr/sbin/httpd
unconfined_u:system_r:httpd_t:s0 1775 ?        S      0:00  \_ /opt/rh/httpd24/root/usr/sbin/httpd
unconfined_u:system_r:httpd_t:s0 1776 ?        S      0:00  \_ /opt/rh/httpd24/root/usr/sbin/httpd
unconfined_u:system_r:httpd_t:s0 1777 ?        S      0:00  \_ /opt/rh/httpd24/root/usr/sbin/httpd
unconfined_u:system_r:httpd_t:s0 1778 ?        S      0:00  \_ /opt/rh/httpd24/root/usr/sbin/httpd

Success – those “httpd_t” labels tell me that the httpd processes are running in the correct domain.
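For the curious, the semanage magic can be done with a file-context equivalence rule, which tells SELinux to label files under the collection root as if they lived at the corresponding standard path.  A hedged sketch of what such an RPM %post scriptlet might contain – the actual scriptlet shipped in the package may well differ:

```spec
%post
# Hypothetical sketch: label the collection docroot as if it were /var/www,
# then relabel any files already present.
semanage fcontext -a -e /var/www /opt/rh/httpd24/root/var/www 2>/dev/null || :
restorecon -R /opt/rh/httpd24/root/var/www || :
```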

Finally, here’s a quick demo of one httpd 2.4 feature I really love – an embedded Lua interpreter in the form of mod_lua:

# cat > /opt/rh/httpd24/root/var/www/html/hello.lua <<EOF
function handle(r)
    r.content_type = "text/plain"
    r:puts("Hello Lua World!\n")
    return apache2.OK
end
EOF
# echo 'AddHandler lua-script .lua' > /opt/rh/httpd24/root/etc/httpd/conf.d/lua.conf
# service httpd24-httpd reload
Reloading httpd: 
# curl -s http://localhost/hello.lua
Hello Lua World!

Fun stuff for httpd geeks!

Regression Testing with Apache httpd

I wanted to jot down some notes on how to use the Apache httpd test suite, since people occasionally ask me how to test httpd.

The first step to using the test suite is to get a build of httpd with all the right modules enabled.  There are a handful of extra modules which are useful in the test suite, but are not usually built with httpd.  If you are building httpd fresh from a source tarball this is as simple as getting the configure options correct.  Here’s what I use:

$ ./configure \
      "--enable-mods-shared=all ssl proxy rewrite dav reallyall bucketeer cache \
       disk_cache case_filter case_filter_in echo"
The extra modules we’re after here are mod_case_filter, mod_case_filter_in, mod_bucketeer and mod_echo.  Those modules are virtually useless in a production server so you’ll never normally want to build them, but they provide some neat filters which are used in the test suite.  (With httpd 2.4.x it is only necessary to specify the “reallyall” argument to --enable-mods-shared, but the above line does the right thing with all vintages of 2.x.)

If you are trying to test a binary distribution of httpd (RPM, .deb etc) then you can grab the .c files for those modules from the Subversion repository and build them by hand.  For example:

$ wget http://svn.apache.org/repos/asf/httpd/httpd/branches/2.2.x/modules/echo/mod_echo.c
$ apxs -cia mod_echo.c

The extra modules must be built, installed and enabled in the httpd configuration, otherwise the Perl harness doesn’t know they exist.  If you want to test mod_ssl, you also need to make sure you have the right Perl modules to enable SSL support in LWP.  This means Crypt::SSLeay and with recent LWPs, LWP::Protocol::https.
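A quick way to check whether those Perl pieces are in place is to probe for each module – a harmless sketch which prints “ok” or “missing” for each of the module names mentioned above:

```shell
# Probe for the Perl modules LWP needs for https support:
for mod in Crypt::SSLeay LWP::Protocol::https; do
    if perl -M"$mod" -e1 2>/dev/null; then
        echo "$mod: ok"
    else
        echo "$mod: missing"
    fi
done
```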

Once you’re all set, you can grab the test suite from SVN and run it – note this must be done as a non-root user, since it’s a pain to get working as root:

$ svn co https://svn.apache.org/repos/asf/httpd/test/framework/trunk perl-harness
$ cd perl-harness
$ perl Makefile.PL -apxs /usr/bin/apxs
$ make
$ ./t/TEST

Use the appropriate apxs binary from the httpd install you wish to test.

The coverage of the test suite is not too bad; we have an ever-increasing number of regression tests for specific bugs and new features added during 2.2 and 2.4 development.  Adding new tests is easy, since the test suite uses Apache::Test – I’ll do another post sometime showing how easy it is to create regression tests even if (like me) you are not particularly comfortable writing Perl.