Coda File System

Re: Another mail on the security of coda

From: Jan Harkes <>
Date: Thu, 29 Nov 2001 13:13:20 -0500
On Thu, Nov 29, 2001 at 09:16:34AM -0500, Greg Troxel wrote:
> THe other hard part about changing to GSSAPI is keeping track of the
> bindings between "security contexts" (really session keys, but opaque)
> and coda userids.  RPC2 traffic between a venus and a codasrv is on
> behalf of venus itself sometimes, and sometimes on behalf of various
> users.  This is perhaps not too hard, though, as rpc2 already takes a
> 'struct CEntry', which could store a 'const gss_ctx_id_t
> context_handle' instead/additionally.

Yes, GSSAPI would be good, and we would definitely need to keep the gss
context on a per-RPC2-connection basis. I don't know whether the gss
context carries compression/encryption state as well as user identity;
if it does, lost packets or retransmissions might cause problems.

The problem really boils down to memory usage. Clients can have many
open RPC2 connections; I believe they can max out at something like
"users x threads (20) x servers", where the hoard daemon and
'System:AnyUser' always count as users. So even a single-user system
that actively uses 6 servers can reach 3 x 20 x 6 = 360 outgoing RPC2
connections. Sharing context among all RPC2 connections for a given
(user, host) pair would therefore definitely be worthwhile, as we would
only need 18 contexts.

And of course each server will have clients x users x threads incoming
RPC2 connections, which would drop to clients x users when sharing
contexts on a (user, host) basis. That is still a lot more than on the
client side, but should be doable.
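The arithmetic above can be sketched as follows; the figures (20 worker
threads, 3 identities counting the hoard daemon and System:AnyUser, 6
servers) are the ones from the text, and the function names are only
illustrative:

```python
def client_connections(identities, threads, servers):
    # Worst case: every identity uses every worker thread to every server.
    return identities * threads * servers

def shared_contexts(identities, servers):
    # One GSS context per (user, host) pair instead of one per connection.
    return identities * servers

# Single-user client talking to 6 servers, as in the text.
print(client_connections(3, 20, 6))  # 360 outgoing RPC2 connections
print(shared_contexts(3, 6))         # 18 shared contexts
```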

Ideally RPC2 would be structured differently, with reliable, ordered
(datagram) delivery at the bottom of the stack. Possibilities here are
TCP/IP with message boundaries, SCTP, IL, or something like what we
already have (but cleaner) on top of UDP. Then both compression and
encryption could keep state and perform much better (e.g. use
cipher-block chaining and keep a larger compression history).
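The compression-history point can be illustrated with zlib: compressing
each datagram independently (as an unreliable transport forces you to)
loses the cross-message history that a single stateful compressor over a
reliable stream would keep. This is a toy sketch, not RPC2 code; the
message contents are made up:

```python
import zlib

# Five similar "RPC messages", as repeated traffic to a server often is.
messages = [b"ViceFetch /coda/usr/jaharkes/src/file%d" % i * 8
            for i in range(5)]

# Per-packet compression: every message starts with an empty history.
per_packet = sum(len(zlib.compress(m)) for m in messages)

# Stream compression: one compressor object keeps history across
# messages, so later messages can reference earlier ones.
comp = zlib.compressobj()
stream = sum(len(comp.compress(m)) for m in messages) + len(comp.flush())

print(per_packet, stream)  # the stream total is noticeably smaller
```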

Having one reliable stream per host would also let multiple RPC2 and
SFTP connections cooperate better. At the moment each one fights for its
own share of the bandwidth, which results in excessive queueing in the
network layers and unnecessary retransmissions or disconnections when
the available bandwidth suddenly drops to a fraction of the estimated
bandwidth on the outgoing link.

Also, container files don't necessarily have to be unpacked by the
server: a client could simply store a compressed/encrypted copy, which
is passed on unchanged to the next client, who does the
decryption/decompression. On secured links the clients already spend the
CPU cycles, but the server can be spared the work. So SFTP could just
skip the encoding layers for already-encrypted files.
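The pass-through idea can be sketched like this, with zlib standing in
for the client-side encoding layer; the function names and the use of
compression rather than real encryption are assumptions for the sake of
a small, runnable example:

```python
import zlib

def client_encode(plaintext: bytes) -> bytes:
    # The storing client does the encoding work (compression here,
    # encryption in the real scheme) before shipping the blob.
    return zlib.compress(plaintext)

def server_store_and_fetch(blob: bytes) -> bytes:
    # The server treats the container file as an opaque blob:
    # no unpacking, no re-encoding, no CPU spent on the contents.
    return blob

def client_decode(blob: bytes) -> bytes:
    # The fetching client undoes the encoding.
    return zlib.decompress(blob)

data = b"contents of a Coda container file\n" * 100
roundtrip = client_decode(server_store_and_fetch(client_encode(data)))
assert roundtrip == data
```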

Of course, rewriting RPC2 is another project by itself.

Received on 2001-11-29 13:13:34