Coda File System

Re: SpoolVMLogRecord - no space left in volume

From: Ed Kuo <edkcc_at_yahoo.com>
Date: Tue, 19 Jun 2001 09:14:07 -0700 (PDT)
Hello

First, thanks for your suggestion.
I have found that the setlogparms command does not apply to the root
volume. I created another volume without replication; on that volume
setlogparms works, and the make process succeeded.
But I am still not able to do it on a replicated volume. For example, I
have created a volume in a replication group consisting of 2 servers:

[root_at_cluster1 /root]# volutil -h cluster1 getvolumelist
V_BindToServer: binding to host cluster1
P/vicepa Hcluster1 Ta098c0 F905f18
Wcoda.root.0 I1000001 H1 P/vicepa m0 M0 U3 W1000001
C3b24f85b D3b24f85b B0 A27
Wapproot.0 I1000002 H1 P/vicepa m0 M0 Uee4d0 W1000002
C3b24fda3 D3b24fda3 B0 A1a5f86
Wapproot2.0 I1000003 H1 P/vicepa m0 M0 U2 W1000003
C3b2f749f D3b2f749f B0 A0
GetVolumeList finished successfully
[root_at_cluster1 /root]# volutil -h cluster3 getvolumelist
V_BindToServer: binding to host cluster3
P/vicepa Hcluster3 Ta098c0 Fa0989c
Wcoda.root.1 I3000001 H3 P/vicepa m0 M0 U3 W3000001
C3b24f7ca D3b24f7ca B0 A26
Wapproot2.1 I3000002 H3 P/vicepa m0 M0 U2 W3000002
C3b2f740a D3b2f740a B0 A0
GetVolumeList finished successfully
[root_at_cluster1 /root]#


Here approot is not replicated, and approot2 is replicated in a group
consisting of cluster1 and cluster3. But volume approot2 has a different
ID on cluster1 and on cluster3; should I run setlogparms twice, with the
respective replica ID, against each of the two servers? Like this:

[root_at_cluster1 /coda]# volutil -h cluster1 setlogparms 1000003 reson 4 logsize 16384
V_BindToServer: binding to host cluster1
Set Log parameters
[root_at_cluster1 /coda]# volutil -h cluster3 setlogparms 3000002 reson 4 logsize 16384
V_BindToServer: binding to host cluster3
Set Log parameters
[root_at_cluster1 /coda]#
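
In other words, I am guessing the general recipe is one setlogparms per
replica, run against the server that hosts that replica, with that
replica's own volume ID. A small shell sketch of that idea follows; the
.0/.1 replica names and the awk parsing of the getvolumelist output are
only my guesses at how to look the IDs up automatically:

#!/bin/sh
# Sketch only: extend the resolution log of every replica of approot2.
# Assumes the replica stored on cluster1 is named approot2.0, the one on
# cluster3 is approot2.1, and that the I<number> field printed by
# getvolumelist is the replica ID that setlogparms expects.
for pair in cluster1:approot2.0 cluster3:approot2.1; do
    server=${pair%%:*}
    replica=${pair#*:}
    # Pick the I<id> field out of this server's volume list.
    volid=$(volutil -h $server getvolumelist |
            awk -v v="W$replica" '$1 == v { sub(/^I/, "", $2); print $2 }')
    echo "setting log parameters for $replica (id $volid) on $server"
    volutil -h $server setlogparms $volid reson 4 logsize 16384
done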


Actually, I ran those two commands and venus crashed; when I restarted
venus, the following messages were shown:

00:19:18 /usr/coda/LOG size is 12932608 bytes
00:19:19 /usr/coda/DATA size is 51725504 bytes
00:19:19 Loading RVM data
00:19:19 Last init was Tue Jun 12 01:18:29 2001
00:19:19 Last shutdown was dirty
00:19:20 starting VDB scan
00:19:20        6 volume replicas
00:19:20        3 replicated volumes
00:19:20        0 CML entries allocated
00:19:20        0 CML entries on free-list
00:19:20 starting FSDB scan (20833, 500000) (25, 75, 4)
00:19:20 fatal error -- fsobj::Recover: bogus VnodeType (0)
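
To get this client going again, I assume I have to throw away its cached
state and reinitialize venus, roughly as below, but I am not sure that is
the right fix; at least the log above shows 0 CML entries, so no pending
changes should be lost:

# My guess at a recovery, not a confirmed procedure: restart the client
# with a freshly initialized cache. venus -init reinitializes the client's
# RVM data (cached volumes, fsobjs, CML), which should also get rid of the
# object with the bogus VnodeType.
killall venus         # stop the crashed client (or use the init script)
venus -init &         # reinitialize the cache and start venus again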


So I still don't know how to run setlogparms on a replicated volume.
Please help, thanks.

Ed

--- Jan Harkes <jaharkes_at_cs.cmu.edu> wrote:
> On Wed, Jun 06, 2001 at 06:37:57AM -0700, Ed Kuo wrote:
> > 
> > Hello
> > 
> > I have encountered a problem similar to the "Big Server" mail on the
> > codadev mailing list.
> > 
> > After an error in making mozilla.....
> > ...
> > [chris_at_cluster1 /coda]# mkdir 12345
> > mkdir: cannot create directory `12345': No space left on device
> 
> Totally different error, but you're on the right track.
> 
> > [chris_at_cluster1 /coda]# volutil setlogparms 0x1000002 reson 4 logsize 16384
> > V_BindToServer: binding to host cluster1
> 
> Correct, the server ran out of resolution log entries. These are still
> used by singly replicated volumes. But they are not thrown out when the
> COP2 message is missing (second phase of the 2-phase commit), and there
> is never a reason for resolution, so they tend to hang around forever.
> 
> I've tried to add a hack to createvol_rep that disables resolution for
> newly created singly replicated volumes, basically by running 'volutil
> setlogparms <newvolume> reson 0'. However, it doesn't seem to work.
> 
> The volutil command might have failed because the server died when it
> ran out of reslog entries and wasn't restarted.
> 
> > [chris_at_cluster1 /coda]#
> > (There are about 4000 directories in total under the mozilla tree.)
> 
> Spread over a tree that is no problem; it is the 4000-7000 files in a
> single directory that Coda doesn't handle.
> 
> The low limit on the number of directory entries is only really a
> problem in a few cases (my maildir-format email directories, or Greg's
> RFC mirror). The current directory format isn't that useful for
> directories with many entries anyway. Coda uses a simple 128-bucket
> hash for directory lookups. With +/- 7000 entries, every hash chain
> has an average length of about 54 entries (7000 / 128), so IMHO lookup
> performance is already starting to become pretty bad around this point.
> 
> > My Coda configuration is:
> > cluster1: SCM, cluster3: non-SCM
> > Only the root volume coda.root is provided (/vice: 300M, /vicepa: 10G),
> > created by "createvol_rep coda.root E0000100 /vicepa" after the codasrvs
> > on both cluster1 and cluster3 had started up and /vice/db/servers and
> > /vice/db/VSGDB had been modified.
> 
> Hmm, if this is really a replicated volume, there must have been some
> network flakiness that kept the servers out of sync. The crashed server
> has over 4000 operations of which it didn't know whether they reached
> the second machine. And the clients apparently didn't detect any
> differences between the replicas, because that would have triggered
> resolution, which would have truncated the resolution logs.
> 
> > (Originally it was RvmLog/RvmData: 30M/315M, before being enlarged in
> > an attempt to solve the no-space-left error.)
> 
> The RVM log really doesn't have to be that large; our servers typically
> run with a log of between 2MB and 6MB. The log is only used to record
> on-going transactions, and the servers tend to apply logged
> modifications to the data segment pretty often.
> 
> > I wonder if there is anything not set up well. Any suggestions? Or is
> > it some limitation?
> > 
> > Help is greatly appreciated.
> 
> The first thing would be to extend the resolution log size like you were
> trying to do, using 'volutil setlogparms reson 4 logsize 16384'.
> 
> Then, on a client, run 'cfs cs ; cfs strong ; ls -lR /coda'. This should
> trigger resolution for the parts of the tree that are out of sync
> between the servers.
> 
> 
> On Wed, Jun 06, 2001 at 06:37:57AM -0700, Ed Kuo wrote:
> > Chris
> 
> Identity crisis? ;)
> 
> Jan
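
For my own notes, this is how I read the procedure Jan describes above,
applied to my replicated root volume coda.root. The replica IDs are the
ones from the getvolumelist output at the top of this mail; the rest is
only my interpretation of his instructions:

# 1. Extend the resolution log of each coda.root replica on the server
#    that hosts it (IDs taken from the getvolumelist output above).
volutil -h cluster1 setlogparms 1000001 reson 4 logsize 16384
volutil -h cluster3 setlogparms 3000001 reson 4 logsize 16384

# 2. Then, on a client, force the replicas back in sync.
cfs cs           # check which servers are reachable
cfs strong       # insist on strong connectivity
ls -lR /coda     # walking the tree should trigger resolution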

Received on 2001-06-19 12:14:10