Actual Server

Important Features

  • Use a lot of SAS drives (the current server has 12 2.5-inch 1 TB 7200 RPM drives)
  • Get about 2 GB of RAM for every TB of storage (32 GB in the current server)
  • RAID controllers slow ZFS down; use a Host Bus Adapter (HBA) instead (LSI SAS 9201-16 in the current server, supports up to 16 drives). A quick check that the HBA and drives are detected is shown below.
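
A quick sanity check (assuming FreeBSD, with the HBA driven by mps(4)) is to confirm that the controller exposes all of the drives and that the expected RAM is installed:

camcontrol devlist          # every disk the HBA presents to the system
sysctl hw.physmem           # installed RAM in bytes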

Pool Geometry, 4 TB, 492 IOPS

zpool status                                                                                            
  pool: tank                                                                                               
 state: ONLINE                                                                                             
  scan: none requested                                                                                     
config:

        NAME           STATE     READ WRITE CKSUM
        tank           ONLINE       0     0     0
          raidz2-0     ONLINE       0     0     0
            gpt/zfs0   ONLINE       0     0     0
            gpt/zfs1   ONLINE       0     0     0
            gpt/zfs2   ONLINE       0     0     0
            gpt/zfs3   ONLINE       0     0     0
            gpt/zfs4   ONLINE       0     0     0
            gpt/zfs5   ONLINE       0     0     0
            gpt/zfs6   ONLINE       0     0     0
            gpt/zfs7   ONLINE       0     0     0
            gpt/zfs8   ONLINE       0     0     0
            gpt/zfs9   ONLINE       0     0     0
            gpt/zfs10  ONLINE       0     0     0
            gpt/zfs11  ONLINE       0     0     0
        logs
          ada0p1       ONLINE       0     0     0
        cache
          ada0p2       ONLINE       0     0     0
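
For reference, a pool with the layout above could be created roughly as follows; this is a reconstruction from the status output (GPT labels gpt/zfs0-gpt/zfs11, SSD partitions ada0p1/ada0p2), not the exact command that was originally run:

zpool create tank raidz2 \
    gpt/zfs0 gpt/zfs1 gpt/zfs2 gpt/zfs3 gpt/zfs4 gpt/zfs5 \
    gpt/zfs6 gpt/zfs7 gpt/zfs8 gpt/zfs9 gpt/zfs10 gpt/zfs11 \
    log ada0p1 cache ada0p2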

IOPS

RAID 6/raidz2: 492 IOPS

Pool IO Stat

zpool iostat -v                                                                                         
                  capacity     operations    bandwidth                                                     
pool           alloc   free   read  write   read  write                                                    
-------------  -----  -----  -----  -----  -----  -----
tank           5.42T  5.45T     30    164  3.14M  2.74M
  raidz2       5.42T  5.45T     30    138  3.14M  1.68M
    gpt/zfs0       -      -     13     18   305K   279K
    gpt/zfs1       -      -     13     17   306K   277K
    gpt/zfs2       -      -     13     17   308K   279K
    gpt/zfs3       -      -     13     18   306K   279K
    gpt/zfs4       -      -     14     17   312K   277K
    gpt/zfs5       -      -     14     17   313K   279K
    gpt/zfs6       -      -     13     18   305K   279K
    gpt/zfs7       -      -     13     17   305K   277K
    gpt/zfs8       -      -     13     17   308K   279K
    gpt/zfs9       -      -     13     18   306K   279K
    gpt/zfs10      -      -     14     17   312K   277K
    gpt/zfs11      -      -     14     17   313K   279K
logs               -      -      -      -      -      -
  ada0p1       19.2M  19.9G      0     25      0  1.06M
cache              -      -      -      -      -      -
  ada0p2        835G  76.8G     16     15  1.36M  1.58M
-------------  -----  -----  -----  -----  -----  -----
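
Without an interval argument, zpool iostat shows averages accumulated since boot; to watch live activity, give it a pool name and an interval in seconds:

zpool iostat -v tank 5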

/etc/sysctl.conf

# $FreeBSD: releng/10.1/etc/sysctl.conf 112200 2003-03-13 18:43:50Z mux $
#
#  This file is read when going to multi-user and its contents piped thru
#  ``sysctl'' to adjust kernel values.  ``man 5 sysctl.conf'' for details.
#
 
# Uncomment this to prevent users from seeing information about processes that
# are being run under another UID.
#security.bsd.see_other_uids=0
hw.acpi.power_button_state=NONE
# allow prefetched data to be stored in the L2ARC
vfs.zfs.l2arc_noprefetch=0
# raise the L2ARC fill rate to 25 MB/s (50 MB/s while the cache is still warming)
vfs.zfs.l2arc_write_max=26214400
vfs.zfs.l2arc_write_boost=52428800
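
The same values can be applied to the running system without a reboot by passing them to sysctl(8) directly:

sysctl vfs.zfs.l2arc_noprefetch=0
sysctl vfs.zfs.l2arc_write_max=26214400
sysctl vfs.zfs.l2arc_write_boost=52428800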

SAN Snapshots and Replication

  • All primary-server virtual machines (VMs) are stored on a two-node (primary and secondary) storage area network (SAN). Each VM keeps its live data on the primary SAN node (zfs1.sebeka.k12.mn.us), which currently holds 15 daily snapshots. Every night the day's changes are sent to the secondary SAN node (zfs2.sebeka.k12.mn.us).
#output of "zfs list -t all" on zfs1.sebeka.k12.mn.us
NAME                    USED  AVAIL  REFER  MOUNTPOINT
zvol1                  2.92T  2.40T    32K  /zvol1
zvol1/vol1             2.92T  2.40T  2.70T  /zvol1/vol1
zvol1/vol1@2015-01-13  15.1G      -  2.62T  -
zvol1/vol1@2015-01-14  7.58G      -  2.62T  -
zvol1/vol1@2015-01-15  7.94G      -  2.62T  -
zvol1/vol1@2015-01-16  7.26G      -  2.62T  -
zvol1/vol1@2015-01-17  5.24G      -  2.62T  -
zvol1/vol1@2015-01-18  4.75G      -  2.69T  -
zvol1/vol1@2015-01-19  5.54G      -  2.69T  -
zvol1/vol1@2015-01-20  6.45G      -  2.69T  -
zvol1/vol1@2015-01-21  7.05G      -  2.69T  -
zvol1/vol1@2015-01-22  8.46G      -  2.69T  -
zvol1/vol1@2015-01-23  8.28G      -  2.69T  -
zvol1/vol1@2015-01-24  5.63G      -  2.69T  -
zvol1/vol1@2015-01-25  4.97G      -  2.70T  -
zvol1/vol1@2015-01-26  5.93G      -  2.70T  -
zvol1/vol1@2015-01-27  5.82G      -  2.70T  -
#output of "zfs list -t all" on zfs2.sebeka.k12.mn.us
NAME                   USED  AVAIL  REFER  MOUNTPOINT
tank                  2.92T  4.10T    19K  none
tank/root             4.68G  4.10T  4.45G  /
tank/root/tmp           19K  4.10T    19K  /tmp
tank/root/var          234M  4.10T   234M  /var
tank/vol1             2.91T  4.10T  2.70T  /vol1
tank/vol1@2015-01-13  15.1G      -  2.62T  -
tank/vol1@2015-01-14  7.58G      -  2.62T  -
tank/vol1@2015-01-15  7.94G      -  2.62T  -
tank/vol1@2015-01-16  7.26G      -  2.62T  -
tank/vol1@2015-01-17  5.24G      -  2.62T  -
tank/vol1@2015-01-18  4.74G      -  2.69T  -
tank/vol1@2015-01-19  5.54G      -  2.69T  -
tank/vol1@2015-01-20  6.45G      -  2.69T  -
tank/vol1@2015-01-21  7.05G      -  2.69T  -
tank/vol1@2015-01-22  8.46G      -  2.69T  -
tank/vol1@2015-01-23  8.28G      -  2.69T  -
tank/vol1@2015-01-24  5.63G      -  2.69T  -
tank/vol1@2015-01-25  4.96G      -  2.70T  -
tank/vol1@2015-01-26  5.93G      -  2.70T  -
tank/vol1@2015-01-27      0      -  2.70T  -
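
Incremental replication only works once the secondary node holds a full copy of the dataset. Assuming root ssh access from zfs1 to zfs2, the initial seed could be done roughly like this (a sketch; the receive flags are an assumption, chosen so that zvol1/vol1 lands as tank/vol1 on the secondary):

zfs send zvol1/vol1@2015-01-13 | ssh zfs2.sebeka.k12.mn.us zfs receive -d tank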

ZFS Rep Scripts

zfsrep.sh
#!/bin/sh
 
pool="tank/vol2"
destination="tank"
host="192.168.113.2"
 
today=`date +"%Y-%m-%d"`
yesterday=`date -v -1d +"%Y-%m-%d"`
 
# create today snapshot
snapshot_today="$pool@$today"
# look for a snapshot with this name
if zfs list -H -o name -t snapshot | sort | grep "$snapshot_today$" > /dev/null; then
        echo " snapshot, $snapshot_today, already exists"
        exit 1
else
        echo " taking todays snapshot, $snapshot_today"
        zfs snapshot -r $snapshot_today
fi
 
# look for yesterday snapshot
#snapshot_yesterday="$pool@$yesterday"
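# --- Sketch only: the incremental-send step is missing from the script as
# --- kept here.  One way it could continue, reusing the script's $pool,
# --- $destination, $host, $yesterday and $today variables and assuming root
# --- can ssh to $host and run "zfs receive" there:
snapshot_yesterday="$pool@$yesterday"
if zfs list -H -o name -t snapshot | sort | grep "$snapshot_yesterday$" > /dev/null; then
        echo " sending changes between $snapshot_yesterday and $snapshot_today to $host"
        zfs send -R -i "$snapshot_yesterday" "$snapshot_today" | ssh "$host" zfs receive -F -d "$destination"
else
        echo " yesterday's snapshot, $snapshot_yesterday, not found; seed the destination with a full send first"
        exit 1
fi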
clean_snaps.sh
#!/bin/sh
 
pool="zvol1/vol1"
 
# iterate i from 15 to 47
# any snapshot older than 2 weeks (i.e. 15 to 47 days old) is deleted
for i in $(seq 15 47)
do
        CLEANDATE=`date -v -${i}d +"%Y-%m-%d"`

        CLEAN_SNAP="${pool}@${CLEANDATE}"
        #echo $CLEAN_SNAP
        if zfs list -H -o name -t snapshot | sort | grep "$CLEAN_SNAP$" > /dev/null;
        then
                zfs destroy -r $CLEAN_SNAP
        fi
done
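
Both scripts are meant to run nightly; one way to schedule them is from root's crontab (the /root/scripts paths below are placeholders, not the actual install locations):

# replicate at 01:00, prune old snapshots at 03:00
0 1 * * * /root/scripts/zfsrep.sh
0 3 * * * /root/scripts/clean_snaps.sh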