Coda File System

Re: Cache Overflow

From: <hagopiar_at_vuser.vu.union.edu>
Date: Thu, 12 Aug 1999 13:42:40 -0400 (EDT)
The trouble is that it's a design decision that eliminates Coda from being
a viable network filesystem for a (potentially large) number of cases. As
it stands, all clients must have a cache directory as large as the largest
file they will use. This defeats what I see as a primary benefit of
networked storage, which is the centralization of storage space.

I imagine everyone here has had occasion to work with files of a few
hundred megabytes; with Coda you can't work with them remotely unless you
have the local disk space. I shudder to think what would happen if you
dialed up via PPP and tried to tail a 650MB log file.
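
To make that concrete: as I understand it, with whole-file caching the
fetch happens at open() time, before the first read(), so even a program
that only wants the last few kilobytes pays for the entire transfer. A
rough illustration in plain POSIX C (nothing here is Coda-specific):

#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file-on-a-coda-mount>\n", argv[0]);
        return 1;
    }

    /* On a Coda mount this open() does not return until venus has
     * fetched the whole file into the local cache -- for a 650MB log
     * over PPP that is the entire download, even though we only read
     * 4KB below. */
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");   /* e.g. the cache is too small for the file */
        return 1;
    }

    char buf[4096];

    /* Seek to the tail; the cost has already been paid at open(). */
    if (lseek(fd, -(off_t)sizeof(buf), SEEK_END) == (off_t)-1)
        lseek(fd, 0, SEEK_SET);       /* file smaller than one buffer */

    ssize_t n = read(fd, buf, sizeof(buf));
    if (n > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    close(fd);
    return 0;
}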

I guess I'm just surprised this is the case, given the derivation from
AFS, and considering that no other network file system that I know of
has similar requirements (of course, none has the other features of Coda).

I don't mean to complain - you designed and built it based on your
requirements, not mine - but do understand that this limitation eliminates
Coda as a possibility for us.

However... :-) would it be possible for Coda to cache/lock parts of a
file locally? Then, when the cache fills, it could attempt to purge parts
of the file back to the server?
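
Purely to sketch what I mean - this is made-up code, not anything Coda
does, and every name in it is hypothetical - something like a per-file
cache of fixed-size chunks, where dirty chunks are written back to the
server and evicted once local space runs out:

/* Hypothetical sketch only, not Coda code: a per-file cache of
 * fixed-size chunks.  When the cache is full, the least recently used
 * chunks are written back to the server (if dirty) and dropped. */
#include <stdlib.h>

#define CHUNK_SIZE (64 * 1024)

struct chunk {
    long          index;    /* chunk number within the file            */
    int           dirty;    /* modified locally, not yet on the server */
    char          data[CHUNK_SIZE];
    struct chunk *next;     /* LRU list, most recently used first      */
};

struct chunk_cache {
    struct chunk *lru;      /* head of the LRU list                    */
    int           nchunks;  /* chunks currently held                   */
    int           max_chunks;
    /* made-up "push one chunk back to the server" callback            */
    int         (*store_chunk)(long index, const char *data, size_t len);
};

/* Evict least-recently-used chunks until there is room for one more.
 * Dirty chunks are written back before being freed; if the server is
 * unreachable the eviction fails and the caller has to cope. */
int cache_make_room(struct chunk_cache *c)
{
    while (c->nchunks >= c->max_chunks) {
        struct chunk **pp = &c->lru;

        while (*pp && (*pp)->next)
            pp = &(*pp)->next;          /* walk to the LRU tail */
        if (!*pp)
            return -1;                  /* cache empty but still "full" */

        struct chunk *victim = *pp;
        if (victim->dirty &&
            c->store_chunk(victim->index, victim->data, CHUNK_SIZE) < 0)
            return -1;                  /* couldn't reach the server */

        *pp = NULL;
        free(victim);
        c->nchunks--;
    }
    return 0;
}

I realise that evicting dirty chunks needs the server to be reachable,
which presumably collides with the disconnected-operation goals, and that
may be exactly the problem.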
								-Rob H.

On Thu, 12 Aug 1999, Jan Harkes wrote:

> Date: Thu, 12 Aug 1999 10:34:02 -0400
> From: Jan Harkes <jaharkes_at_cs.cmu.edu>
> Reply-To: codalist_at_TELEMANN.coda.cs.cmu.edu
> To: codalist_at_TELEMANN.coda.cs.cmu.edu
> Subject: Re: Cache Overflow
> Resent-Date: Thu, 12 Aug 1999 10:34:04 -0400
> Resent-From: codalist_at_TELEMANN.coda.cs.cmu.edu
> 
> On Thu, Aug 12, 1999 at 12:40:21PM +0200, Mitja Sarp wrote:
> > On Wed, Aug 11, 1999 at 06:51:08PM -0400, Jan Harkes wrote:
> > 
> > > Coda uses `whole file caching'. So your cache needs to be (at least) as
> > > large as the 2GB file you are trying to work with. And, as you might
> > > have noticed, the cache limit is a little `soft': Coda only complains
> > > about an overflow once every 30 seconds. You also want the directories
> > > leading up to the file cached, so it is probably better to have a
> > > larger local cache size.
> > > 
> > 
> > How about this: if a file is found to be larger than the local cache,
> > skip the caching and logging part and have the file come 'streaming'
> > from the server instead?
> 
> That is impossible. Read the papers about the design decisions around
> which Coda is built. Most applications can successfully handle errors
> when making an open call, far fewer can handle failed reads/writes, and
> actually no application can roll back (or fix up) a file to a consistent
> state when a write has failed halfway through (due to disconnection).
> 
> How do you expect to handle disconnected/weakly-connected operation?
> Concurrent access, detection of write-write sharing, consistency, etc.
> 
> In a way, Coda works like a database. The open begins a transaction, and
> the close ends this transaction. Every transaction that modifies the
> filesystem is shipped to the server or logged. Failed transactions are,
> depending on the type of error, either transparently handled or blocked,
> showing up as a local-global conflict. This was done to ensure that a user
> will not lose his updates, as we would otherwise have to roll back to the
> state known at the fileservers. But the system always goes from a
> consistent state to a consistent state (at the granularity of a file).
> 
> Coda is NOT designed as an NFS client with an aggressive, persistent
> buffer cache on top of it. If you want that, go hack an NFS client and
> server. If you use locking to block concurrent write access, and modify
> the server to invalidate all blocks a client has cached when a file is
> updated (i.e. when the lock is released), you would actually do pretty
> well, as long as there is a network.
> 
> But I wouldn't recommend going off to Europe for 2 weeks with your
> laptop, working on all your files, re-organising your email folders and
> updating the webpages, and then coming back and reintegrating with the
> server without problems. BTW, I did this, and my desktop really was
> still delivering email into the email folders I was playing around with
> on my laptop.
> 
> Jan
> 
Received on 1999-08-12 13:44:13