Coda File System

Re: Files Bigger Than Cache Size

From: Jan Harkes <jaharkes_at_cs.cmu.edu>
Date: Thu, 31 Jan 2008 21:30:02 -0500
On Thu, Jan 31, 2008 at 08:46:12AM +0100, u+codalist-p4pg_at_chalmers.se wrote:
> When you ask Venus for a certain cache size, it guesses how many files
> and modification entries you might need (unless you specify explicitly)
> and allocates RVM accordingly.
> This may lead to allocation of a potentially huge RVM - which possibly fails.
> 
> A huge RVM, when it is populated, makes Venus memory footprint very large,
> affecting the performance of the client host.
> 
> So, if you will be accessing primarily big files, always specify
> explicit (low) file / cml numbers when setting up a client.

It has been on my todo list for a long time to decouple the number of
files cached from the cache size. A user would then always specify the
number of files they want cached, which in turn determines a specific
RVM size.

The disk space used by this cache would be an independent setting, and
could even be expressed as a negative limit, i.e. make sure we leave at
least XXX megabytes or gigabytes free on the disk. Venus could then
periodically check the venus.cache partition with df/statfs() and
remove file contents to make sure we stay below the limit.

Especially considering that even laptops now come with fairly large
disks, average file sizes seem to be growing about as fast as disk
space, and our scalability limit is not the amount of disk space we use
but the number of objects we keep track of.

Jan
Received on 2008-01-31 21:31:18