Coda File System

rvmsizer and.... coda monitoring project ?

From: Lionix <>
Date: Fri, 19 Dec 2003 21:41:35 +0100
Hi all....

Well, what a great tool it seems to be... Thanks to Michael German!
Going to compile it very soon... :o)

But I feel a little desperate, as I developed something not so far 
from it a couple of weeks ago, to test the future content of my Coda 
server set: to make sure it would be possible according to Coda's 
limits, to identify the breaking points, and to estimate the amount of 
RVM I would have to set up. The result was quite interesting, as I 
discovered I need less than 100MB of RVM to store more than 35GB of data!

My tool is based on another technology: I must admit I don't write C 
as fast as Perl or PHP :-)!
And I find Perl is a "not so bad" language, as the speed / CPU-usage 
ratio is comfortable (it took 6 minutes to fully process our data, and 
the load average stayed really low).
It's a Perl script that starts with a big `ls -lR > list_tree`; ls was 
the lightest way I found, as "find" made too much disk-access noise 
for me. A look at the command's source would certainly explain why, 
but I did not take the time to investigate further.
The generated file is then processed to store all the information in a 
MySQL database (number of directories / subdirectories / number of 
files / RVM for the files / RVM for the directories, at each point of 
the subtree...).
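Not the actual tool (which is Perl feeding MySQL), but a minimal Python sketch of the same idea: walk an `ls -lR` dump, tally files and subdirectories per directory, and apply an assumed per-entry RVM cost. The RVM_PER_FILE and RVM_PER_DIR constants are placeholders for the estimation idea, not measured Coda figures.

```python
from collections import defaultdict

# Assumed per-entry RVM costs in bytes -- illustrative placeholders,
# not real Coda numbers.
RVM_PER_FILE = 100
RVM_PER_DIR = 200

def summarize(listing):
    """Tally files, subdirectories and a rough RVM estimate per
    directory from the text output of an `ls -lR` run.
    Symlinks and other entry types are simply ignored."""
    stats = defaultdict(lambda: {"files": 0, "dirs": 0, "rvm": 0})
    current = "."
    for line in listing.splitlines():
        if line.endswith(":"):          # "./some/dir:" header line
            current = line[:-1]
        elif line.startswith("d"):      # directory entry
            stats[current]["dirs"] += 1
            stats[current]["rvm"] += RVM_PER_DIR
        elif line.startswith("-"):      # plain file entry
            stats[current]["files"] += 1
            stats[current]["rvm"] += RVM_PER_FILE
    return dict(stats)

sample = """\
.:
total 8
drwxr-xr-x 2 u g 4096 Dec 19 21:41 sub
-rw-r--r-- 1 u g 1234 Dec 19 21:41 a.txt

./sub:
total 4
-rw-r--r-- 1 u g  999 Dec 19 21:41 b.txt
"""
print(summarize(sample))
```

The real script would insert each directory's row into the database instead of printing.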

I didn't introduce it as there are still some bugs (full-path 
construction is not always right: one character shifted, annoying 
trouble with spaces... and the top-level figures are wrong), and I 
don't like other people reading or testing my code until it is 
finished and well debugged. Moreover, I now have to adapt it for Coda: 
I think an `ls -lR` on the root volume would be a hard job for Venus, 
and not very smart anyway. It would be a better idea to get the volume 
mount points and do the same work for each volume!

However, I think it could become a nice tool for preventing trouble, 
such as hitting the limits on the number of files and directories 
(until those limits are lifted), by scanning the contents of each 
volume and directory daily; in a production layout that is really 
necessary...
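Such a daily scan could look like the following Python sketch. The entry threshold here is an illustrative assumption; the real Coda directory limit depends on the total size of the directory data, not on a fixed entry count.

```python
import os

# Hypothetical warning threshold -- an assumed placeholder, not the
# real Coda per-directory limit.
MAX_ENTRIES = 2000

def scan_for_hotspots(root, limit=MAX_ENTRIES):
    """Walk a tree and report directories whose entry count is at or
    above the limit, so they can be flagged before Coda refuses new
    entries."""
    hotspots = []
    for dirpath, dirnames, filenames in os.walk(root):
        n = len(dirnames) + len(filenames)
        if n >= limit:
            hotspots.append((dirpath, n))
    return hotspots
```

Run from cron once per volume mount point, this would feed the same database as the size estimator.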

I have recently developed some scripts to monitor Venus and codasrv 
(cfs commands run from cron, with all results stored in a database) 
that let me monitor the cluster and Venus via a web page. That's how I 
observed the behaviour of disconnecting from most volumes when one is 
heavily used. It is not really ready for a release, as there are still 
too many hard-coded things and it is perhaps too server-oriented; I 
should make a configuration file, clean up the code, handle the cases 
I did not foresee _nobody's perfect_, and add some automatic detection 
of the realm configuration (detect server names instead of storing 
them manually in the database, for example).
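A minimal sketch of that cron-and-database pattern, in Python with SQLite to keep it self-contained (the real scripts use cfs commands and MySQL). The cfs invocation in the comment is illustrative, not taken from the actual scripts:

```python
import sqlite3
import subprocess
import time

def record(db, name, cmd):
    """Run one monitoring command and store its raw output with a
    timestamp; the web page can later query this table for history."""
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    db.execute(
        "CREATE TABLE IF NOT EXISTS probe (ts REAL, name TEXT, output TEXT)"
    )
    db.execute("INSERT INTO probe VALUES (?, ?, ?)",
               (time.time(), name, out))
    db.commit()
    return out

# Called from cron, one line per probe, e.g. (illustrative):
# record(sqlite3.connect("monitor.db"), "listvol", ["cfs", "listvol", "/coda"])
```

Parsing the stored output into per-volume columns would be a second step; storing the raw text first keeps the collector simple.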

It seems I'm not the only one developing scripts or programs to 
"complete" Coda...
That pushed me to write sooner than I expected to; this idea had been 
making its way into my mind for some weeks.
The dev mailing list is very quiet... Is it time to give the dev list 
a new lease of life?
Perhaps we could coordinate our work on Coda and start a "parallel 
project" under Jan's experienced supervision, no?
After all, the user community could start complementing the developers 
at CMU; we are all writing code to simplify our work!
Start a new SourceForge project, or a new branch in the CVS at CMU?
Would I be alone?
What do you think about the idea?

FS-Realm (newbie?) Administrator
Hundreds of hours of work, but so powerful!
And still more to work on and code...
Received on 2003-12-19 15:47:22