Fileserver converted to use ZFS

Yikes, indeed that was the thing I thought about this morning. I decided to move out my old shares, temporarily copy all the data (270GB) over to my workstation, and move the RAID-5 volume over to ZFS (backed by a Highpoint ATA454, yeah, cheap-ass stuff; if someone has an idea for an affordable real hardware RAID-5 controller that supports SATA300, please comment on this post :-)).

The RAID-5 volume consists of 4x300GB disks, which would normally give 1.2TB of raw storage; with the RAID-5 setup we get redundancy and end up with around 900GB of usable storage.

But OK, I am losing track: I copied the 270GB in around 6.5 hours, roughly 40GB per hour; the copy ran at around 13MB/s over the network (peaking at 30% of the gigabit link between the machines) and all went without a glitch.
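For those wondering how such a copy is done: nothing fancy, something like the rsync invocation below does the job. This is only a sketch; the exact command isn't in the post, and the host name and paths here are made up.

# pull the old shares onto the workstation temporarily (hypothetical host/paths)
rsync -avP /oldraid/shares/ remko@workstation:/tmp-backup/shares/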

Since I don't have a binary-backed driver at the moment, FreeBSD recognizes the RAID array as ar0, resulting in the following:

[remko@guardian /home]$ zpool status
  pool: data
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          ar0       ONLINE       0     0     0

errors: No known data errors
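For reference, getting to this point boils down to a single zpool create on top of the ar0 device; just a sketch, the exact options I used aren't shown here:

# create a pool named 'data' backed by the whole ar0 array, then check it
zpool create data ar0
zpool status data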

Currently the following is available:
[remko@guardian /home]$ zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
data   133G   741G    25K  /data

Of course there is somewhat more, but that's my private data and all, so no need to be verbose about that.
This configuration might not be as fancy as what others have (multi-TB arrays ;-)), but perhaps the future will bring me that SATA300 controller backed by 4 or more 500GB disks :)
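Those extra private datasets are just ordinary ZFS filesystems; creating them is a matter of a few zfs commands, roughly like below (the dataset names are purely illustrative):

# carve the pool into a couple of filesystems (names are made up)
zfs create data/shares
zfs create data/private
# mountpoints default to /data/<name>, but can be changed per dataset
zfs set mountpoint=/export/shares data/shares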

Currently the data is being copied back; around 200GB has already been copied since the setup, no creepy things have been found, and peaks are again around 13MB/s, which is quite good imo.

I am not sure whether the data below is going to help anyone, but here are the ZFS ARC statistics:

kstat.zfs.misc.arcstats.hits: 416906
kstat.zfs.misc.arcstats.misses: 13678
kstat.zfs.misc.arcstats.demand_data_hits: 229801
kstat.zfs.misc.arcstats.demand_data_misses: 249
kstat.zfs.misc.arcstats.demand_metadata_hits: 173607
kstat.zfs.misc.arcstats.demand_metadata_misses: 11696
kstat.zfs.misc.arcstats.prefetch_data_hits: 12366
kstat.zfs.misc.arcstats.prefetch_data_misses: 0
kstat.zfs.misc.arcstats.prefetch_metadata_hits: 1132
kstat.zfs.misc.arcstats.prefetch_metadata_misses: 1733
kstat.zfs.misc.arcstats.mru_hits: 173761
kstat.zfs.misc.arcstats.mru_ghost_hits: 5799
kstat.zfs.misc.arcstats.mfu_hits: 229653
kstat.zfs.misc.arcstats.mfu_ghost_hits: 2088
kstat.zfs.misc.arcstats.deleted: 1635502
kstat.zfs.misc.arcstats.recycle_miss: 920453
kstat.zfs.misc.arcstats.mutex_miss: 21
kstat.zfs.misc.arcstats.evict_skip: 189460
kstat.zfs.misc.arcstats.hash_elements: 13992
kstat.zfs.misc.arcstats.hash_elements_max: 29668
kstat.zfs.misc.arcstats.hash_collisions: 336691
kstat.zfs.misc.arcstats.hash_chains: 2217
kstat.zfs.misc.arcstats.hash_chain_max: 6
kstat.zfs.misc.arcstats.p: 295031867
kstat.zfs.misc.arcstats.c: 295499774
kstat.zfs.misc.arcstats.c_min: 16777216
kstat.zfs.misc.arcstats.c_max: 314572800
kstat.zfs.misc.arcstats.size: 295506432
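These counters are plain FreeBSD sysctls, by the way; the whole set can be dumped in one go:

# dump all ZFS ARC statistics
sysctl kstat.zfs.misc.arcstats
# or fetch a single counter
sysctl kstat.zfs.misc.arcstats.hits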

So there it is. Well, onwards to perhaps writing something for VuXML, a topic for which we still need (continuous) help! :)

7 thoughts on “Fileserver converted to use ZFS”

  1. Hi Marius,

    OK, that could indeed be an option, but what about when the power goes poof? Did anyone test that? (I saw a lot of Pawel's work and presentations, but I cannot remember this topic.) If ZFS can recover from that, it would be a great addition and save a bucket of money (and speed up the move to the new 500GB disks, though my current system needs additional SATA ports before it can support that at all).

    Thanks for commenting!

  2. # zfs list
    NAME     USED  AVAIL  REFER  MOUNTPOINT
    r4x320   723G   154G   721G  /r4x320

    # zpool status
      pool: r4x320
     state: ONLINE
     scrub: none requested
    config:

            NAME        STATE     READ WRITE CKSUM
            r4x320      ONLINE       0     0     0
              raidz1    ONLINE       0     0     0
                ad0s1d  ONLINE       0     0     0
                ad1s1d  ONLINE       0     0     0
                ad4     ONLINE       0     0     0
                ad6     ONLINE       0     0     0

    errors: No known data errors

    atapci0: port 0xb000-0xb03f,0xb400-0xb40f,0xb800-0xb87f mem 0xfc024000-0xfc024fff,0xfc000000-0xfc01ffff irq 23 at device 4.0 on pci4
    atapci1: port 0x1f0-0x1f7,0x3f6,0x170-0x177,0x376,0xf000-0xf00f irq 18 at device 31.2 on pci0
    ad0: 305245MB at ata0-master SATA150
    ad1: 305245MB at ata0-slave SATA150
    ad4: 305245MB at ata2-master SATA150
    ad6: 305245MB at ata3-master SATA150

    This volume has 2.4 million files, has been running CURRENT since Pawel committed ZFS (some months now), and has survived numerous crashes (ZFS on i386 and the memory issues) and power failures with zero failures.
    I access it via NFS from my Mac, play large video files from it, and use it as my BitTorrent temp and storage.

    I hope that answers your question of "what about when the power goes poof?" ;)

  3. OK, that was interesting information from Joao. I only need something that supports at least 5 SATA ports and then I can build the ZFS RAID setup above with the additional space, but that will be something for 2008, at least ;-)
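    For the record, building a raidz setup like Joao's above is a single zpool create with a raidz vdev; just as a sketch (the pool name and disk names below are made up):

    # one raidz vdev over five disks (hypothetical device names)
    zpool create tank raidz ad4 ad6 ad8 ad10 ad12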
