Coda File System

Maxima and other questions

From: Jim Page - <>
Date: Fri, 21 May 2004 07:55:49 -0400
Afternoon all

First - great product, thanks.

Please forgive the longish post. I have installed Coda on a small(ish)
cluster of 2 mail servers, using our own MTA, and I have been using the Coda
filesystem as a method of replicating the 'pending' and 'quarantine' mail
archives to provide a failover capability in the cluster. I have a few
questions regarding the right way to do this: I have read the Admin manual,
but I am not particularly convinced that it is up to date in certain areas.

I have built Slackware packages for all the components - anyone is welcome
to them, or instructions. They are not hard to build.

Here is a summary of the setup: 2x Dell 1750, dual Xeon, dual GB NICs, 2 GB
RAM. Slackware Linux 9.0 with kernel 2.6.3. Each server is configured as
both client and server. Mount points are the defaults suggested by the *-setup
scripts. I have a replicated volume working successfully (E0000104). The
files are stored on a single (/vicepa) partition. The messages are stored as
individual text files, some compressed, in a nested directory structure.
There can be a lot of them: a server handling a million messages per day can
often be holding 10M-20M files. The server I am testing with handles a lot
less - maybe 500,000 or so - but this could increase. The servers are in
different geographical locations (one in central London and the other on the
edge) with a 100 Mbit link.

1. I would like to know the maxima for the current build, i.e. the max number
of files per server, max files per partition - anything, in fact, that is a
hard limit which may bite us. I chose the '16M' files per server option in the
setup, but found a changelog note referring to a max of 512K files per
partition. I'm not sure how up to date any of this is. If the max is 512K per
partition, can I rebuild for more?

Clearly I am using coda in a slightly unusual way - there is no 'real'
client. I have some questions that follow on from that.

2. Authentication: all went relatively smoothly until I got to the first
authentication. The manual is obviously largely pre-realm - this confused
me. Added to that, there is a bug in clog which means that the -h
<authhost> switch is broken - any use of the -h switch results in a usage
message. In the end, creating a realm in /etc/coda/realms solved the problems
there. I would also have liked to know a bit more about the relationship
between system users and Coda users - i.e. that there is not much of one,
except that the username (and uid?) is used as a default by programs like
clog, cpasswd and au.
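For anyone hitting the same clog -h problem, here is roughly what my workaround looked like (the realm name and hostnames below are made-up placeholders - substitute your own):

```
# /etc/coda/realms - maps a realm name to the servers that hold it.
# Example entries only; use your own realm and server names.
myrealm.example.com    server1.example.com server2.example.com
```

With that in place, clog myuser@myrealm.example.com found the right auth server without needing -h at all.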

3. As the machines require the client to be running and authenticated at all
times, I have adopted the following strategy: I have created a script
containing basically 'echo <passwd> | clog' which is run at startup and in a
cron job once per hour (I know a token lasts 25 hours, but you never know).
This works fine on the non-SCM machine, but the SCM almost never boots up
authenticated. Running the auth script fixes it. Here is the order I do
things in:
- init coda server (auth2, rpc2, *mon etc.)
- authentication script
- init venus
Is this the right order? Is there likely to be a pause before auth2 is ready
to authenticate me? Is this the right way to do it?
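In case it helps anyone comment, the boot sequence I'm describing is roughly the following sketch (paths and the rc script names are illustrative, not the real Slackware ones; the retry loop is my own guess at coping with auth2 not being ready immediately):

```shell
#!/bin/sh
# Sketch of the boot-time ordering described above (names are examples).

/etc/rc.d/rc.codasrv start     # server side: auth2, rpc2, codasrv, *mon etc.

# auth2 may take a moment to start answering; retry clog a few times
# rather than authenticating exactly once and hoping.
for i in 1 2 3 4 5; do
    if echo "$CODA_PASSWD" | clog myuser; then
        break
    fi
    sleep 5
done

/etc/rc.d/rc.venus start       # client side last
```

Whether clog before venus is even the right order is exactly what I'm asking in question 3.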

4. Will there be any break in (read/write) service while auth2 is
re-authenticating me during the cron job? I take it that permissions and so
on are only evaluated when an actual disk operation is performed, so unless
the token is deleted during re-authentication I don't see why there should be
a break.

5. I see there are 'from file' and 'to file' switches in clog. Should I be
using these for auto-authentication? How?
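For context, what I imagine (guessing at the exact switch spellings from the usage output - please correct me) is something like:

```shell
# One interactive login that saves the token to a file...
clog -tofile /root/.codatoken myuser

# ...then non-interactive re-authentication from the cron job:
clog -fromfile /root/.codatoken myuser
```

If that is the intended use, it would avoid keeping the plaintext password in my 'echo <passwd> | clog' script.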

6. In my situation there is always a server running on the same box as the
client. I could really do without the client cache, as the client will
(hopefully) never be disconnected from the server. Is there any existing way
to shortcut this and avoid the extra disk writes to the client cache?

7. Why would I want to use cfsmount? Is there any advantage to using it
over linking to dirs in the standard /coda directory?
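To illustrate what I mean in 7, the two alternatives as I understand them (assuming 'cfs mkmount' is the command in question; realm, volume and paths below are examples):

```shell
# Option A: just symlink from where the MTA expects its spool
# into the default /coda tree.
ln -s /coda/myrealm.example.com/mailarchive /var/spool/quarantine

# Option B: create an explicit Coda mount point for the volume
# inside the realm's directory tree.
cfs mkmount /coda/myrealm.example.com/quarantine mailvol
```

Is option B buying me anything beyond what the symlink gives?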

That's all for now.

Thanks again

Received on 2004-05-21 08:30:50