Coda File System

Coda Network Computer

From: Michael Callahan <>
Date: Sun, 28 Jun 1998 17:07:57 -0400 (EDT)
The Coda Network Computer: A Guide for Manufacturers

Peter and I decided to waste some time on a fun hack we've been
talking about: booting a Linux machine with root in /coda!

The idea is that one could have a machine that had an ext2 root
filesystem that contained just enough to start up networking, start
venus, and then chroot into /coda.  To be precise, all of this should
be done within a shell script running as init (pid 1); the last step
would be for this shell script to do
    exec chroot /coda/linuxroot /sbin/init

The result would be that you could put a massive venus cache on the
client, and so have the benefit of local-disk access to all your
system binaries, yet have the Coda benefit of having essentially no
local data, instead having all systems' complete configurations in /coda.

Ultimately, of course, the idea would be that we could clone root
directories within /coda, so that the process of setting up a new
machine would consist simply of loading this small venus-supporting
root filesystem, rebooting, and letting a hoard walk pull system
binaries into cache.

We're not going to be able to complete this project, but we've done
some crucial steps and hope that someone will take it over.

What we've done:

1) We did a minimal Red Hat 5.1 install into a small partition on one
   of our machines.  Then we copied this entire hierarchy into
   /coda/linuxroot.  This copy completes successfully, but it does
   strip out the device files, which are not currently supported by
   the Coda filesystem (see below!).  This was about 100MB of
   stuff--minimum install possible on Red Hat 5.1 without selecting
   packages individually; frankly, a bit large for our taste!  It does
   take a while to copy.
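We don't show the exact copy command we used, but a tar pipe along
these lines would do it (a sketch; the copy_root helper and the source
path are assumptions, adjust them for your installation):

```shell
# copy_root SRC DST: replicate a root hierarchy with a tar pipe,
# preserving permissions and symlinks.  Device nodes will fail to
# copy into Coda, which currently lacks mknod support (see step 3).
copy_root() {
    mkdir -p "$2"
    (cd "$1" && tar -cf - .) | (cd "$2" && tar -xpf -)
}

# e.g.  copy_root /mnt/minroot /coda/linuxroot
```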

2) The first fun thing to do once one has this is 

      chroot /coda/linuxroot /bin/bash 

   This first chroots into the /coda filesystem, and then execs the
   bin/bash found there (/coda/linuxroot/bin/bash).  You must do this
   as root.

   Meanwhile, if you run codacon in another window, you'll see the
   various libraries that bash uses (there are a lot!) loading into
   your minicache.

   Eventually, you will get a shell prompt.  You can now try typing ls,
   vi, perl, etc. -- all of which will be running entirely using files
   within the /coda/linuxroot root filesystem!  This all worked fine
   for us.

3) The next issue to tackle is how to make device files available to
   our chrooted environment.  Recall again that Coda does not
   currently allow you to do mknod within /coda.  Until Coda has this
   support, the solution will be to make /coda/linuxroot/dev a mount
   point for another, non-Coda filesystem that holds the device files.

   Our approach was to use a loopback block device with an ext2
   filesystem image.  To create the image, we did this in a shell that
   is not chrooted:

       dd if=/dev/zero of=/tmp/img bs=4k count=1000
       losetup /dev/loop0 /tmp/img
       mke2fs -i 1024 /dev/loop0
       mount /dev/loop0 /mnt/loop
       cd /dev
       find . | cpio -p -m -d --verbose /mnt/loop
       umount /mnt/loop

    Now we can try mounting this image within the Coda filesystem:

       mount /dev/loop0 /coda/linuxroot/dev

    This definitely doesn't work under 2.0 kernels; there, it returns
    "device busy".  This is due to an interaction between the Coda
    minicache and the mount system call.

    However, on 2.1, this command worked right away.  (Does it work on
    BSD?  We don't know.)  So for this project, you should definitely
    use a 2.1 kernel (we recommend Coda 4.6.0 (linux-coda-4.6.0.tgz)
    with Linux 2.1.107).

We thought this all looked very encouraging, and we think it would be
a lot of fun to go the rest of the way, but this is as far as we could
get in the very short amount of time we had to play with it.

Here's what we envisage as the next steps:

4) The next step is to tackle booting.  The target here is to have a
   system that can get to the point of starting to run
   /coda/linuxroot/sbin/init as process ID 1.  Achieving this will be
   a major milestone, even if the init process immediately gets stuck.

   What you're aiming for is a shell script that can run as /sbin/init
   in the "hard disk root" (the one that will become invisible after
   the real system gets going from /coda/linuxroot with its init).
   This script will need to get the system going to the point where
   venus can run, and then start the init from within
   /coda/linuxroot.  Essentially, this means:

   - fscking/remounting the hard disk / filesystem in read-write mode,
   - setting up networking, 
   - starting venus, 
   - waiting for venus to mount /coda, 
   - mounting /coda/linuxroot/dev, 
   - (maybe) mounting a local tmp partition on /coda/linuxroot/tmp,
   and finally
   - running   exec chroot /coda/linuxroot /sbin/init

   (Waiting for venus to mount /coda will probably require looping,
   polling /etc/mtab.)  
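   The steps above might be sketched like this (an assumption-laden
   sketch: the device names, addresses, and venus invocation all
   depend on the installation, and we haven't run this script):

```shell
#!/bin/sh
# Hypothetical hard-disk /sbin/init: bring the machine far enough up
# to start venus, then hand off to the init inside /coda/linuxroot.
# Device names, addresses, and the venus invocation are assumptions.

# Poll the mounts table until venus has mounted /coda.
wait_for_coda() {
    until grep -q ' /coda ' "${1:-/etc/mtab}"; do
        sleep 1
    done
}

boot() {
    fsck -a /dev/hda1                      # check the hard-disk root
    mount -n -o remount,rw /               # remount it read-write
    ifconfig eth0 10.0.0.2 up              # minimal networking (example address)
    route add default gw 10.0.0.1
    venus &                                # start the cache manager
    wait_for_coda                          # block until /coda shows up in /etc/mtab
    mount /dev/loop0 /coda/linuxroot/dev   # device files (step 3's loopback image)
    exec chroot /coda/linuxroot /sbin/init # hand off; init becomes pid 1's program
}

# Only boot when actually running as init (pid 1); otherwise the file
# can be sourced or run harmlessly, e.g. to test wait_for_coda.
if [ $$ -eq 1 ]; then boot; fi
```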

   To get started on this, we recommend setting up a small partition
   with a minimal install of Linux (we would re-use the install that
   we copied into /coda).  First, change this system so that it runs
   /bin/bash as its /sbin/init: i.e., it boots to a shell.  (Note that
   booting in single-user mode does a lot more already than this, so
   it's not equivalent.)
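   One way to make the test partition boot straight into a shell (a
   sketch; the make_shell_init helper and the mount point are our own
   invention) is to repoint /sbin/init at bash:

```shell
# make_shell_init ROOT: point ROOT/sbin/init at bash, keeping the
# original init around as init.real.  Run this against the test
# partition's root while it is mounted somewhere convenient.
make_shell_init() {
    mv "$1/sbin/init" "$1/sbin/init.real"
    ln -s /bin/bash "$1/sbin/init"
}

# e.g.  make_shell_init /mnt/minroot
```

   Alternatively, passing init=/bin/bash on the kernel command line at
   the LILO prompt achieves the same thing without modifying the disk.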

   Now you can analyze /etc/inittab and the rc scripts to figure out
   what are the key things that your script will need to do to make
   venus work.  We've given some of this list above, but we could have
   missed something.

   Using the boot-up shell, you can try setting things up manually,
   gradually copying commands into a shell script.  Bear in mind that
   you won't have update running, so it will be vital to use 'sync' to
   keep the disk up-to-date.

   Actually, the issue of what to do about update is quite important
   and needs investigation.  One argument is that the hard disk
   /sbin/init should start update so that, if venus crashes and the
   boot does not complete, you at least have the venus cache files
   synced to disk because update is, at least, already running.  On
   the other hand, the /coda init will, unless you change it, expect
   to run update itself, and this may fail if there is an update
   process already running.
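   One way to sidestep the clash (a sketch; the running helper is
   hypothetical, and a ps-and-grep check is rough) is for whichever
   init runs second to test for an existing update process first:

```shell
# running PROG: succeed if a process matching PROG appears in the
# process table.  A rough check -- it can be fooled by substring
# matches -- but enough for a boot script.
running() {
    ps ax 2>/dev/null | grep -v grep | grep -q "$1"
}

# The hard-disk init could then do:
#     running update || update &
```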

   For debugging, we strongly recommend creating a couple of virtual
   consoles, one with a bash running in the hard disk universe, one
   with codacon running.

5) Once you can get /coda/linuxroot/sbin/init to start, all that
   remains is to fix all problems that develop in running all programs
   that people run in Linux, under Coda!

   In particular, there are several issues we think may arise:

   - suid: Coda supports suid files, but this capability hasn't been
     tested or used for a long time.  Does it work?

   - named pipes and other non-file filesystem objects: various
     programs (lpd, syslog, X) use non-file objects that appear in the
     filesystem, like named pipes, to do IPC.  Coda currently does not
     support this.

   - Coda protection model: the Coda protection model is based on ACLs
     on directories and tokens for users.  It will require a bit of
     tinkering to permit root to have the correct access tokens to
     permit it to do what it will want to do without jeopardizing the
     security of the installation on the server.

   - hardlinks crossing directories: Coda does support multiple
     hardlinks to a file provided all the hardlinks are in a single
     directory.  It does not support hardlinks that cross
     directories.  Is this a problem?

   - locking: probably /var/mail should not be shared across multiple
     Coda clients right now, because Coda's locking support is at best
      limited to locking on one client.
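   The hardlink question, at least, is easy to check empirically.  A
   sketch (GNU find assumed; the find_crosslinks helper is our own)
   that lists multiply-linked files by inode, so links to the same
   file in different directories stand out:

```shell
# find_crosslinks DIR: list regular files under DIR that have more
# than one hard link, prefixed by inode number.  Lines sharing an
# inode but living in different directories are the problem cases.
find_crosslinks() {
    find "$1" -xdev -type f -links +1 -printf '%i %p\n' | sort -n
}

# e.g.  find_crosslinks /coda/linuxroot
```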

6) Finally, of course, the real dream is to have many clients share
   essentially the same root filesystems except, perhaps, for a small
   number of config files.  Doing this surely requires a lot of
   careful thought (which probably isn't very Coda-specific), but the
   specific advantage of doing this within Coda is that potentially
   you could support large clusters of workstations which would cache
   everything they need, providing local-disk performance with
   single-system manageability.  The holy grail!

We'd love to hear of any efforts to pursue this program and would be
very happy to provide feedback.

Michael Callahan <>
Peter Braam <>
Received on 1998-06-28 17:14:44