Coda File System

Re: ACLs, PAGs and PACs

From: Peter Braam <braam_at_carissimi.coda.cs.cmu.edu>
Date: Wed, 10 Dec 1997 18:27:30 -0500
It's nice to see so much discussion.

The Coda filesystem is rather more secure than AFS -- but not
everything is completely implemented.  There are two components:
- client/server security: we effectively have a non-encrypted Kerberos
model in RPC2
- kernel/cache manager security: important on multiuser Coda clients.


Kernel / Cache Manager (Venus)
==============================
First you do indeed need to establish a token.  We will soon base this
on the process group id by default; currently we use the uid, which is
indeed not good.
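
To make the bookkeeping concrete, here is a rough user-space sketch of
what such a token table might look like (not the actual Venus code;
names and fields are invented):

#include <stdio.h>
#include <string.h>
#include <time.h>

#define MAX_TOKENS 64

/* Hypothetical token entry in the cache manager, keyed by pgid. */
struct coda_token {
	int    pgid;		/* process group that owns the token */
	char   user[32];	/* identity the token authenticates  */
	time_t expires;
	int    valid;
};

static struct coda_token tokens[MAX_TOKENS];

/* Install or refresh the token for a process group. */
int set_token(int pgid, const char *user, time_t expires)
{
	int i, slot = -1;

	for (i = 0; i < MAX_TOKENS; i++) {
		if (tokens[i].valid && tokens[i].pgid == pgid)
			slot = i;		/* refresh existing entry */
		else if (!tokens[i].valid && slot < 0)
			slot = i;		/* remember a free slot   */
	}
	if (slot < 0)
		return -1;
	tokens[slot].pgid = pgid;
	strncpy(tokens[slot].user, user, sizeof(tokens[slot].user) - 1);
	tokens[slot].user[sizeof(tokens[slot].user) - 1] = '\0';
	tokens[slot].expires = expires;
	tokens[slot].valid = 1;
	return 0;
}

/* Which identity should a request from this pgid run as? */
const char *lookup_token(int pgid, time_t now)
{
	int i;

	for (i = 0; i < MAX_TOKENS; i++)
		if (tokens[i].valid && tokens[i].pgid == pgid &&
		    tokens[i].expires > now)
			return tokens[i].user;
	return NULL;			/* no token: anonymous access */
}

int main(void)
{
	time_t now = time(NULL);
	const char *who;

	set_token(1234, "braam", now + 3600);
	who = lookup_token(1234, now);
	printf("pgid 1234 runs as %s\n", who ? who : "anonymous");
	who = lookup_token(5678, now);
	printf("pgid 5678 runs as %s\n", who ? who : "anonymous");
	return 0;
}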

However, for Coda --> NFS translators and similar animals a different
mechanism is needed as well.  (Such things exist for NFS/AFS
translators.  Performance probably takes a big hit, since a Kerberized
nfsd would have to tell our cache manager about every change of the
tokens used for its operations; with a non-Kerberized nfsd you
effectively end up with the uid-based token scheme that already exists.)

Using forwardable Kerberos tokens, this should not be too complicated.
We have all the system call machinery in place to make it pretty
straightforward.

In the kernel you have to be very careful that the dcache (on Linux)
or our own namecache only allows kernel cache hits for requests coming
into the kernel from the correct pgid.  For this we will soon have
dcache operations that check more than the filename -- they also
verify that the credentials of the calling process are in the
cache.  Currently the Linux kernel does not check credentials
carefully on cache hits.
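
As a rough user-space model of the idea (not the real kernel code, all
names invented): a lookup only counts as a cache hit if the pgid that
filled the entry is the pgid asking now, otherwise we fall through to
an upcall to the cache manager:

#include <stdio.h>
#include <string.h>

#define NCACHE 128

/* Hypothetical name cache entry: the cached result is only valid
 * for the credentials (here: pgid) that originally filled it. */
struct ncache_entry {
	char name[64];
	int  ino;	/* pretend inode number          */
	int  pgid;	/* credentials behind the entry  */
	int  used;
};

static struct ncache_entry cache[NCACHE];

void ncache_fill(const char *name, int ino, int pgid)
{
	static int next;
	struct ncache_entry *e = &cache[next++ % NCACHE];

	strncpy(e->name, name, sizeof(e->name) - 1);
	e->name[sizeof(e->name) - 1] = '\0';
	e->ino = ino;
	e->pgid = pgid;
	e->used = 1;
}

/* Return the cached inode only if both name *and* credentials match;
 * otherwise force an upcall to the cache manager (-1 here). */
int ncache_lookup(const char *name, int pgid)
{
	int i;

	for (i = 0; i < NCACHE; i++)
		if (cache[i].used &&
		    strcmp(cache[i].name, name) == 0 &&
		    cache[i].pgid == pgid)
			return cache[i].ino;
	return -1;
}

int main(void)
{
	ncache_fill("secret.txt", 42, 1234);
	printf("pgid 1234: ino %d\n", ncache_lookup("secret.txt", 1234));
	printf("pgid 5678: ino %d (miss -> upcall)\n",
	       ncache_lookup("secret.txt", 5678));
	return 0;
}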

Our cache manager makes "downcalls" to notify the kernel of changes in
the token situation and of the corresponding cache invalidations.
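
Conceptually the downcall side is just a small message dispatch in the
kernel module; something along these lines (opcode names invented for
the example, this is not the real Venus/kernel interface):

#include <stdio.h>

/* Hypothetical downcall opcodes from Venus to the kernel module. */
enum downcall_op {
	DC_PURGE_USER,	/* tokens for a pgid changed or expired   */
	DC_ZAP_FILE,	/* one cached object became invalid       */
	DC_FLUSH_ALL	/* drop everything, e.g. on venus restart */
};

struct downcall {
	enum downcall_op op;
	int pgid;	/* for DC_PURGE_USER */
	int ino;	/* for DC_ZAP_FILE   */
};

/* Kernel-side handler: throw away whatever the message invalidates. */
void handle_downcall(const struct downcall *dc)
{
	switch (dc->op) {
	case DC_PURGE_USER:
		printf("drop all cache entries filled under pgid %d\n",
		       dc->pgid);
		break;
	case DC_ZAP_FILE:
		printf("drop cached name/attributes for inode %d\n", dc->ino);
		break;
	case DC_FLUSH_ALL:
		printf("drop the whole name and attribute cache\n");
		break;
	}
}

int main(void)
{
	struct downcall dc = { DC_PURGE_USER, 1234, 0 };

	handle_downcall(&dc);
	return 0;
}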

Venus - Vice (server) 
=====================

Ultimately the client is not responsible for enforcing security
(clients are insecure).  The server requires credentials and verifies
them against ACLs before allowing operations.  This is very much the
AFS model, from which Coda of course descended.
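
In case the AFS-style model is unfamiliar: each directory carries an
ACL mapping users and groups to rights bits, and the server refuses an
operation unless the authenticated identity holds the required rights.
A toy version of that check (illustrative only -- names and bit values
invented, this is not the Vice code):

#include <stdio.h>
#include <string.h>

/* AFS/Coda-style rights bits (lock/administer details omitted). */
#define RIGHT_READ    0x01
#define RIGHT_LOOKUP  0x02
#define RIGHT_INSERT  0x04
#define RIGHT_DELETE  0x08
#define RIGHT_WRITE   0x10
#define RIGHT_ADMIN   0x20

struct acl_entry {
	const char *who;	/* user or group name */
	int         rights;	/* OR of RIGHT_* bits */
};

/* Example ACL on one directory. */
static struct acl_entry dir_acl[] = {
	{ "braam",          RIGHT_READ | RIGHT_LOOKUP | RIGHT_INSERT |
	                    RIGHT_DELETE | RIGHT_WRITE | RIGHT_ADMIN },
	{ "System:AnyUser", RIGHT_READ | RIGHT_LOOKUP },
};

/* The server answers: may this authenticated user do this operation?
 * An unauthenticated connection only ever matches System:AnyUser. */
int check_access(const char *user, int wanted)
{
	int i, granted = 0;
	int n = sizeof(dir_acl) / sizeof(dir_acl[0]);

	for (i = 0; i < n; i++)
		if (strcmp(dir_acl[i].who, user) == 0 ||
		    strcmp(dir_acl[i].who, "System:AnyUser") == 0)
			granted |= dir_acl[i].rights;

	return (granted & wanted) == wanted;
}

int main(void)
{
	printf("braam write:    %s\n",
	       check_access("braam", RIGHT_WRITE) ? "ok" : "denied");
	printf("stranger write: %s\n",
	       check_access("stranger", RIGHT_WRITE) ? "ok" : "denied");
	printf("stranger read:  %s\n",
	       check_access("stranger", RIGHT_READ) ? "ok" : "denied");
	return 0;
}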

All traffic is "weakly" Kerberized.  Connections are authenticated and
encrypted, modulo the fact that we don't currently have strong
encryption.

There are a number of hairy issues with security on disconnected
clients. In particular we consider laptops to be "single user"
machines from a Coda perspective: only one user at a time can build up
a CML (Client Modification Log) and reintegrate it.  Allowing this for
more than one user simultaneously creates problems. Kistler's thesis
discusses this in detail.
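
In other words the CML is owned by whoever first mutates the volume
while disconnected, and anybody else is refused until that log has
been reintegrated.  Roughly (names invented, not the Venus data
structures):

#include <stdio.h>

/* Hypothetical per-volume client modification log (CML) ownership. */
struct cml {
	int owner_uid;	/* -1 while the log is empty */
	int nrecords;
};

/* Append a disconnected mutation; refuse a second simultaneous owner. */
int cml_append(struct cml *log, int uid, const char *what)
{
	if (log->nrecords == 0)
		log->owner_uid = uid;	/* first writer owns the log     */
	else if (log->owner_uid != uid)
		return -1;		/* would mix two users' updates  */

	log->nrecords++;
	printf("logged '%s' for uid %d (%d records)\n",
	       what, uid, log->nrecords);
	return 0;
}

int main(void)
{
	struct cml log = { -1, 0 };

	cml_append(&log, 100, "store foo");
	if (cml_append(&log, 200, "store bar") < 0)
		printf("uid 200 refused: volume already carries uid 100's CML\n");
	return 0;
}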

Brian Bartholomew writes:
 > FYI, NFS protections against black hats are weak to nonexistent.  All
 > you need to access an NFS server is an IP address, an IP port, and an
 > NFS directory handle.  All three may be sniffed off the wire between a
 > working server and client.  The portmap and mountd daemons need not be
 > running on the NFS server or client.  Since mountd isn't consulted the
 > contents of /etc/exports is irrelevant.  The nfsd or biod daemons need
 > not be running on clients that are Linux.  Furthermore, as far as I
 > can tell most of the NFS file permission semantics are enforced on the
 > client, not the server.
 > 
 > It would be nice if Coda was secure.
 > 
 > 
 > /*
 >  *  Usage: quietmount nfs.accessible.com 1.2.3.4 /mnt 2049 \
 >  *         10610a065b112c4118d21205e122f1122a113b0214c921d466e453f149a415b3
 >  *
 >  *  This Linux 1.3.x program allows you to mount an NFS filesystem given a
 >  *  root filesystem handle, without talking to mountd.  This bypasses the
 >  *  list of acceptable NFS client hosts set with exportfs.  It is not
 >  *  clear what permissions are checked on servers.  This program may allow
 >  *  access to all filesystems on a server, or just all filesystems that
 >  *  are doing NFS.  It may ignore a read-only exportfs flag or a mapping
 >  *  of root -> nobody.  Cross-testing is advised.
 >  *
 >  *  _Firewalls and Internet Security_, pp. 37-38 has this to say:
 >  *
 >  *  The *Network File System* (NFS) [Sun Microsystems, 1989, 1990],
 >  *  originally developed by Sun Microsystems, is now supported on most
 >  *  computers.  It is a vital component of most workstations and it is not
 >  *  likely to go away any time soon.
 >  *
 >  *     For robustness, NFS is based on RPC, UDP, and *stateless servers*.
 >  *  That is, to the NFS server -- the host that generally has the real disk
 >  *  storage -- each request stands alone; no context is retained.  Thus
 >  *  all operations must be authenticated individually.  This can pose some
 >  *  problems, as we shall see.
 >  *
 >  *     To make NFS access robust in the face of system reboots and network
 >  *  partitioning, NFS clients retain state: the servers do not.  The basic
 >  *  tool is the *file handle*, a unique string that identifies each file
 >  *  or directory on the disk.  All NFS requests are specified in terms of
 >  *  a file handle, an operation, and whatever parameters are necessary for
 >  *  that operation.  Requests that grant access to new files, such as
 >  *  *open*, return a new handle to the client process.  File handles are
 >  *  not interpreted by the client.  The server creates them with
 >  *  sufficient structure for its own needs; most file handles include a
 >  *  random component as well.
 >  *
 >  *     The initial handle for the root directory of a file system is
 >  *  obtained at mount time.  The server's mount daemon, an RPC-based
 >  *  service, checks the client's host name and requested file system
 >  *  against an administrator-supplied list, and verifies the mode of
 >  *  operation (read-only versus read/write).  If all is well, the file
 >  *  handle for the root directory of the file system is passed back to the
 >  *  client.
 >  *
 >  *  {BOMB} Note carefully the implications of this.  Any client who
 >  *  retains a root file handle has permanent access to that file system.
 >  *  While standard client software renegotiates access at each mount time,
 >  *  which is typically at reboot time, there is no enforceable requirement
 >  *  that it do so.  (Actually, the kernel could have its own access list.
 >  *  In the name of efficiency, this is not done by typical implementations
 >  *  [except BSDI 2.x -Ed].)  Thus, NFS's mount-based access controls are
 >  *  quite inadequate.  It is not possible to change access policies and
 >  *  lock out existing but now-untrusted clients, nor is there any way to
 >  *  guard against users who pass around root file handles.  (We know of
 >  *  someone who has a collection of them posted on his office wall
 >  *  [probably right next to a list of his passwords -Ed].)
 >  *
 >  *     File handles are normally assigned at file system creation time,
 >  *  via a pseudo-random number generator.  (Some older versions of NFS
 >  *  used an insufficiently random -- and hence guessable -- seed for this
 >  *  process.  Reports indicate that successful guessing attacks have
 >  *  indeed taken place [sample code? -Ed].)  New handles can only be
 >  *  written to an unmounted file system, using the *fsirand* command [if
 >  *  your Unix has it -Ed].  Prior to doing this, any clients that have the
 >  *  file system mounted should unmount it, lest they receive the dreaded
 >  *  "stale file handle" error.  It is this constraint -- coordinating the
 >  *  activities of the server and its myriad clients -- that makes it so
 >  *  difficult to revoke access.  NFS is too robust!
 >  *
 >  */
 > 
 > #define NFS
 > 
 > #include <sys/param.h>
 > #include <sys/mount.h>
 > #include <sys/socket.h>
 > #include <netinet/in.h>
 > #include <arpa/inet.h>
 > 
 > #include "/usr/src/linux/include/linux/nfs.h"
 > #include "/usr/src/linux/include/linux/nfs_mount.h"
 > 
 > #include <stdio.h>
 > #include <stdlib.h>
 > #include <string.h>
 > #include <memory.h>
 > #include <errno.h>
 > 
 > int
 > main (int argc, char *argv[])
 > {
 > 	struct nfs_mount_data nfsargs;
 > 
 > 	if (argc != 6) {
 > 		fprintf (stderr, "Usage: quietmount nfs.accessible.com 1.2.3.4 /mnt 2049 \\\n       10610a065b112c4118d21205e122f1122a113b0214c921d466e453f149a415b3\n");
 > 		exit (1);
 > 	}
 > 	if ((umount (argv[3]) < 0) && (errno != EINVAL)) {
 > 		perror ("umount");
 > 		exit (1);
 > 	}
 > 
 > 	memset (&nfsargs, 0, sizeof (nfsargs));
 > 	nfsargs.version = NFS_MOUNT_VERSION;
 > 	nfsargs.addr.sin_family = AF_INET;
 > 	nfsargs.addr.sin_port = htons (atoi (argv[4]));
 > 	nfsargs.addr.sin_addr.s_addr = inet_addr (argv[2]);
 > 
 > 	if ((nfsargs.fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP)) < 0) {
 > 		perror ("socket");
 > 		exit (1);
 > 	}
 > 	if ((bindresvport (nfsargs.fd, 0)) < 0) {
 > 		perror ("bindresvport");
 > 		exit (1);
 > 	}
 > 	if ((connect (nfsargs.fd, (struct sockaddr *)&nfsargs.addr, sizeof (nfsargs.addr))) < 0) {
 > 		perror ("connect");
 > 		exit (1);
 > 	}
 > 
 > 	/*
 > 	 *  Parse the 32-byte root file handle from its hex representation.
 > 	 *  "%2x" stores into an unsigned int, so read each byte into a
 > 	 *  temporary and copy it, rather than pointing sscanf at the char
 > 	 *  array directly.
 > 	 */
 > 	if (strlen (argv[5]) != 64) {
 > 		fprintf (stderr, "quietmount: file handle must be 64 hex digits\n");
 > 		exit (1);
 > 	}
 > 	{
 > 		unsigned int byte;
 > 		int i;
 > 
 > 		for (i = 0; i < 32; i++) {
 > 			if (sscanf (argv[5] + 2 * i, "%2x", &byte) != 1) {
 > 				fprintf (stderr, "quietmount: bad file handle\n");
 > 				exit (1);
 > 			}
 > 			nfsargs.root.data[i] = byte;
 > 		}
 > 	}
 > 
 > 	nfsargs.flags    = 0;
 > 	nfsargs.rsize    = 1024;
 > 	nfsargs.wsize    = 1024;
 > 	nfsargs.timeo    = 10;
 > 	nfsargs.retrans  = 10;
 > 	nfsargs.acregmin = 10;
 > 	nfsargs.acregmax = 10;
 > 	nfsargs.acdirmin = 10;
 > 	nfsargs.acdirmax = 10;
 > 	strcpy (nfsargs.hostname, argv[1]);
 > 
 > 	if (mount (argv[1], argv[3], "nfs", 0xC0ED0000 | nfsargs.flags , &nfsargs) < 0) {
 > 		perror ("mount");
 > 		exit (1);
 > 	}
 > 	exit (0);
 > }
Received on 1997-12-10 18:55:03