Coda File System

Re: replication and howto confusion

From: <jaharkes_at_cs.cmu.edu>
Date: Wed, 20 Jan 1999 12:00:20 -0500
Bruce wrote: 
> By insisting that all open files be cached (and so read and written) locally,
> the total size of all open files must be less than the size of the local cache.
> In practice this is not too critical a limitation.  At least a part of Coda's
> target is the realm of mobile, disconnected computing.  For mobile users this
> often means a single-user laptop which I would expect would open only a small
> number of small files at once.  But in the (nearly) orthogonal realm of
> large-scale reliable computing (redundancy, higher performance) this
> limitation is more of a problem.

Hi Bruce,

I am thinking in the direction of clients with very large caches, on the
order of 4-10GB, where essentially everything possible is hoarded. As a
result, most files are already in the cache, and when the `global' copy is
updated, Coda automatically fetches the new version, even before the user
actually requests it. Currently we block on many file accesses simply
because the local cache is so darn small and we have to refetch the file.
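
To make that concrete, the automatic refetch boils down to something like
the sketch below. This is only an illustration of the idea, not Venus code;
every name in it (hoard_entry, fetch_into_cache, the example paths) is made
up for this example:

  /* Sketch only -- not Venus's actual code; all names are hypothetical. */
  #include <stdio.h>
  #include <stddef.h>

  struct hoard_entry {
      const char *path;   /* hoarded pathname under /coda            */
      int         stale;  /* set when a callback break reports that  */
                          /* the server's copy has changed           */
  };

  /* Placeholder for the real work: fetch the new version from the
   * server and install it in the local cache. */
  static void fetch_into_cache(const char *path)
  {
      printf("refetching %s\n", path);
  }

  /* Walk the hoard database periodically (and after callback breaks)
   * and refetch anything that is no longer current, so the new version
   * is already local before the user ever opens the file. */
  static void hoard_walk(struct hoard_entry *db, size_t n)
  {
      for (size_t i = 0; i < n; i++) {
          if (db[i].stale) {
              fetch_into_cache(db[i].path);
              db[i].stale = 0;
          }
      }
  }

  int main(void)
  {
      struct hoard_entry db[] = {
          { "/coda/usr/jan/src/venus.cc", 1 },
          { "/coda/usr/jan/papers/hoard.tex", 0 },
      };
      hoard_walk(db, sizeof db / sizeof db[0]);
      return 0;
  }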

Combine this with write-back caching, and the wait until the data is 
written to the server is gone. The current `write-disconnected' operation 
already gives a good feel for the speed increase, especially over slow links.
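
To put a rough, made-up number on the slow-link case: pushing a 1MB file
through a 28.8kbps modem takes about

  8,000,000 bits / 28,800 bits/sec  ~=  280 seconds

so with write-through the close() can hang for close to five minutes, while
with write-back caching it returns as soon as the local copy is written and
the store trickles back to the server in the background.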

In the past months we've already tweaked some scalability things, so I am
running with 100MB local caches on my laptop and desktop. But to go beyond
that, some design and/or implementation issues have to be resolved. A 10GB
cache would hold on the order of 50K files, which would require on the order
of 1GB of rvm data. How do you fit that into VM on a 64MB laptop?
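
Just to spell out the arithmetic behind those numbers:

  10GB cache / 50K files  ~=  200KB average file size
   1GB rvm   / 50K files  ~=   20KB of recoverable metadata per cached object

And since rvm data is mapped into Venus's address space, that 1GB has to be
squeezed into the virtual memory of a 64MB machine, which is exactly the
problem.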

>     In fact, the current Coda implementation makes things worse than they
> need be.  A cached file can grow to consume all free space on the local
> cache. At this point the kernel could request that Venus (i.e. the cache
> manager) free up cache space if possible, presumably by removing cached
> but unopened (and unhoarded?) files.  At present the kernel code and
> kernel<->Venus upcall that would be required to do this do not exist.  Venus
> also must be ready to perform cache `garbage collection' when it tries to
> install a file into the cache during that file's first open.  I am not sure
> whether Venus does this at present (the Venus source code was less than
> scrutable on this point :-)).

Essentially this is already done: before a file is fetched, objects are
thrown out of the cache to obtain space. As for the growing file, the cache
usage is adjusted when the file is closed, but nothing is thrown out at that
point; the next fetch will then clean out objects. So I guess that is not
quite the perfect solution yet.
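
For what it's worth, the `throw things out before fetching' part amounts to
a loop along the following lines. Again this is only an illustration of the
idea, not the actual Venus code, and all of the types and names are invented
for the example:

  #include <stdint.h>
  #include <stddef.h>

  /* Hypothetical view of a cache entry, for illustration only. */
  struct cached_object {
      int64_t size;      /* bytes occupied in the cache container       */
      int     priority;  /* lower priority gets thrown out first        */
      int     pinned;    /* open or hoarded objects must not be evicted */
  };

  /* Free up cache space until 'needed' bytes fit.  Returns 0 on
   * success, -1 if everything left is pinned and no more room can be
   * made (e.g. the cache is full of open files). */
  int make_room(struct cached_object *objs, size_t n,
                int64_t *free_space, int64_t needed)
  {
      while (*free_space < needed) {
          struct cached_object *victim = NULL;

          /* Pick the lowest-priority object that is not in use. */
          for (size_t i = 0; i < n; i++) {
              if (objs[i].pinned || objs[i].size == 0)
                  continue;
              if (victim == NULL || objs[i].priority < victim->priority)
                  victim = &objs[i];
          }
          if (victim == NULL)
              return -1;

          *free_space += victim->size;   /* throw it out of the cache */
          victim->size = 0;
      }
      return 0;
  }

Fixing the growing-file case Bruce describes would mostly mean running
something like this from the write or close path as well, instead of only
right before a fetch.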

l8r,
	Jan
Received on 1999-01-20 12:01:12