Coda File System

RVMLIB_ASSERT and RVM data sizes.

From: Chris Shiels <chris_at_taglab.com>
Date: Thu, 06 Jun 2002 16:44:51 +0100
Hi.


We're consistently running into problems with our codasrv processes dying
with the following error message in /vice/srv/SrvErr:

kossoff# tail SrvErr
    RVMLIB_ASSERT: error in rvmlib_malloc

    Assertion failed: 0, file "/usr/src/redhat/BUILD/coda-5.3.19/coda-src/util/rvmlib.c", line 211
    EXITING! Bye!


We've tried different sizes of RVM log and data (implemented as files) but
sooner or later we always hit the 'RVMLIB_ASSERT: error in rvmlib_malloc'
problem.



We're using Red Hat 7.2 with a 2.4.18 kernel (from www.kernel.org) and
LVM 1.0.3.

We're using Coda 5.3.19 and this was downloaded as RPMs from
'http://www.coda.cs.cmu.edu/pub/coda/linux/i386/'.

We're using Coda with two hosts each acting as a server for replicated volumes,
and each server is also acting as a client to itself.

We're using files (not partitions) for our RVM log and data.

The /vicepa partition on both servers is 1GB in size.



We've tried values of 2M(log)/90M(data), 20M(log)/200M(data) and we're now
using 30M(log)/500M(data).



We've now reached what seem to be the maximum RVM log and data sizes available
on our platform, so we can neither predict when the assertion will next hit nor
resize to larger values, as none seem to be available.  Can you please help with
this?



Incidentally we don't think we're storing that much data - each volume
contains approx. 15000 files for a total size of approx. 180MB per volume.

With 2M(log)/90M(data) we'd see the 'RVMLIB_ASSERT: error in rvmlib_malloc'
error whilst trying to populate the first volume.

With 20M(log)/200M(data) we'd see the 'RVMLIB_ASSERT: error in rvmlib_malloc'
error whilst trying to populate the fourth volume.
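
For scale, here is our own rough arithmetic from the numbers above:

    90MB RVM data:   crash part-way through volume 1  (< ~15000 files, < ~180MB of file data)
    200MB RVM data:  crash part-way through volume 4  (~45000-60000 files, ~540-720MB of file data)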



Question 1:

I'm guessing the RVMLIB_ASSERT error is being caused by filling up all the
available space in the RVM log or data.  Is this correct?



Question 2:

How can I determine the current usage of the RVM log and data?

According to the announcement for 5.3.19 this is done by 'volutil printstats'
but I just can't see this information.

[ I've also tried summing the values from 'volutil rvmsize volumeid' for each
of our volumes, but the total doesn't seem to be anywhere near the RVM sizes
we've been specifying via vice-setup-rvm. ]
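
To be concrete, the summing attempt was roughly the following, with VOLID
standing in for each of our volume ids and the per-volume figures totalled
by hand:

    volutil printstats
    volutil rvmsize VOLID      # repeated once for each of our volumes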



Question 3:

We can see from the shell script vice-setup-rvm that you provide values
to rdsinit for a 1GB RVM data size; however, when we try to use these values
rdsinit fails with the following messages:

    release_segment unmap failed RVM_ENOT_MAPPED
    rds_zap_heap completed successfully.
    rvm_terminate succeeded.

Whilst rdsinit was running I could see the segment had been mapped at 0x50000000
by looking at /proc/pid/maps.  Additionally, strace -p pid indicated that the
call to munmap() succeeded with return value 0.
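
In case it helps, the way we watched this was roughly as follows, where PID
stands for the process id of the running rdsinit:

    grep 50000000 /proc/PID/maps       # the segment shows up mapped at 0x50000000
    strace -p PID -e trace=munmap      # the munmap() call is traced returning 0
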
What's going wrong here?



Question 4:

Are there other values that are permissible to rdsinit?  I've tried making
my own up(!) but I then get the error 'ERROR: rds_zap_heap RVM_EOFFSET'.
What values are permissible?
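
So far the only parameter sets we have found are the ones hard-wired in
vice-setup-rvm itself, which we dug out with something like the following
(the path is simply where the RPMs appear to have installed the script on
our machines):

    grep -n rdsinit /usr/sbin/vice-setup-rvm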



We really like Coda as it offers everything that we need for our project
but we've hit a brick wall with this.  Can you please advise?



Best Regards,
Chris Shiels.

Senior Systems Architect
Taglab Ltd.
Received on 2002-06-06 11:47:48