Yikes, indeed that was the thing I thought this morning. I decided to move my old shares out, temporarily copy all the data (270GB) over to my workstation, and move the RAID-5 volume over to ZFS (backed by a Highpoint ATA454, yeah, cheap-ass stuff; if someone has an idea for an affordable real hardware RAID-5 controller that supports SATA300, please comment on the post).
The RAID-5 volume consists of 4x300GB disks, which would normally give 1.2TB; with the RAID-5 setup we get redundancy and around 900GB of usable storage.
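For what it's worth, the RAID-5 math behind those numbers is simple enough to sketch (plain arithmetic, nothing FreeBSD-specific):

```python
# RAID-5 keeps one disk's worth of capacity for parity, so usable
# space is (disks - 1) * disk size.
disks = 4
disk_size_gb = 300

raw_gb = disks * disk_size_gb           # 1200 GB raw
usable_gb = (disks - 1) * disk_size_gb  # 900 GB usable

print(raw_gb, usable_gb)  # 1200 900
```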
But OK, I am losing track: I copied the 270GB in around 6.5 hours, roughly 40GB per hour. The copy ran at around 13MB/s over the network (peaking at 30% of the gigabit link between the servers) and all went without a glitch.
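The averages work out as follows (a back-of-the-envelope sketch; it assumes 1GB = 1024MB for the MB/s figure, and the sustained 13MB/s against a theoretical ~125MB/s gigabit link):

```python
# Rough throughput numbers for the 270GB copy.
data_gb = 270
hours = 6.5

gb_per_hour = data_gb / hours                  # ~41.5 GB/hour
mb_per_sec = data_gb * 1024 / (hours * 3600)   # ~11.8 MB/s average

# A gigabit link tops out around 125 MB/s, so a sustained 13 MB/s is
# roughly 10% of the link; the 30% figure quoted was a peak.
link_mb_per_sec = 1000 / 8

print(round(gb_per_hour, 1), round(mb_per_sec, 1))  # 41.5 11.8
```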
Since I don't have a binary vendor driver at the moment, FreeBSD recognizes the RAID array as ar0, resulting in the following:
[remko@guardian /home]$ zpool status
  pool: data
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          ar0       ONLINE       0     0     0

errors: No known data errors
Currently the following is available:
[remko@guardian /home]$ zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
data   133G   741G    25K  /data
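As a rough sanity check on that zfs list output, USED plus AVAIL should land somewhere near the array's usable capacity (a sketch with hand-copied numbers; ZFS reserves space for metadata and reports in binary units, so the figures won't match exactly):

```python
# Numbers hand-copied from the zfs list output above.
used_g = 133
avail_g = 741

total_g = used_g + avail_g       # 874G as seen by ZFS
raid5_usable_gb = (4 - 1) * 300  # ~900GB from the underlying array

# The difference is ZFS overhead plus GB-vs-GiB rounding.
print(total_g, raid5_usable_gb)  # 874 900
```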
Of course there is somewhat more, but that's my private data and all, so no need to be verbose about that.
This configuration might not be as fancy as what others have (multi-TB arrays), but perhaps in the future I can arrange that SATA300 controller backed by four or more 500GB disks.
Currently the data is being copied back; around 200GB has already been copied since the setup, and nothing creepy has been found. Peaks are again around 13MB/s, which is quite good imo.
I am not very sure whether the attached data is going to help a bit, but there it is. Well, onward, to perhaps writing something for VuXML, a topic on which we still need (continuous) help!