  
====== Actual Server ======
{{:misc:aaaq65864-02_1_.pdf|Server Quote}}
===== Important Features =====
  * **Use a lot of SAS drives** (12 2.5" 1 TB 7200 RPM drives in the current server)
  * **Get about 2 GB of RAM for every TB** of storage (32 GB in the current server; see the sanity check after this list)
  * **RAID controllers slow ZFS down**; use a Host Bus Adapter instead (LSI SAS 9201-16 in the current server, supports up to 16 drives)
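A quick way to sanity-check the RAM-per-TB guideline on a running FreeBSD system (a sketch; it assumes the pool is named tank, as elsewhere on this page):
<file bash>
# Compare installed RAM against the "about 2 GB per TB of storage" rule.
# Assumes the pool name "tank"; adjust if yours differs.
pool_size=`zpool list -H -o size tank`    # e.g. 4.06T
ram_bytes=`sysctl -n hw.physmem`          # physical memory in bytes
ram_gb=$(( ram_bytes / 1073741824 ))
echo "pool size: $pool_size  installed RAM: ${ram_gb} GB"
</file>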
  
  
====== Pool Geom, 4TB, 492 IOPS ======
<file>
zpool status
        cache
          ada0p2      ONLINE       0     0     0
</file>
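The status output above is truncated; only the cache (L2ARC) device, ada0p2, is visible. A minimal sketch of creating a pool with mirrored data vdevs and an L2ARC cache partition, using hypothetical disk names rather than the recorded layout of tank:
<file bash>
# Sketch only: da0-da3 and the mirror layout are assumptions, not the
# actual geometry of tank.  ada0p2 is the partition used as the L2ARC
# cache device.
zpool create tank \
    mirror da0 da1 \
    mirror da2 da3 \
    cache ada0p2
zpool status tank    # confirm the layout
</file>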
  
====== Pool IO Stat ======
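The original iostat listing is not reproduced here. Per-device I/O statistics for the pool can be collected with zpool iostat; a minimal example, assuming the pool name tank:
<file bash>
# Per-vdev read/write operations and bandwidth, refreshed every 5 seconds
zpool iostat -v tank 5
</file>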
===== /etc/sysctl.conf =====
<file bash>
# $FreeBSD: releng/10.1/etc/sysctl.conf 112200 2003-03-13 18:43:50Z mux $
#
#  This file is read when going to multi-user and its contents piped thru
#  ``sysctl'' to adjust kernel values.  ``man 5 sysctl.conf'' for details.
#

# Uncomment this to prevent users from seeing information about processes that
# are being run under another UID.
#security.bsd.see_other_uids=0

# Do not power off or suspend when the power button is pressed
hw.acpi.power_button_state=NONE

# L2ARC tuning (sysctl.conf takes bare name=value lines, with no leading
# ``sysctl'' command word):
# 0 = also cache prefetched/streaming reads in L2ARC
vfs.zfs.l2arc_noprefetch=0
# raise the per-interval L2ARC fill limits (26214400 B = 25 MiB, 52428800 B = 50 MiB)
vfs.zfs.l2arc_write_max=26214400
vfs.zfs.l2arc_write_boost=52428800
</file>
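sysctl.conf is only read when the system goes multi-user; to apply or check these L2ARC values on a running box, the same names can be passed to sysctl(8) directly, for example:
<file bash>
# apply the tunables immediately (as root)
sysctl vfs.zfs.l2arc_noprefetch=0
sysctl vfs.zfs.l2arc_write_max=26214400
sysctl vfs.zfs.l2arc_write_boost=52428800
# read the current values back
sysctl vfs.zfs.l2arc_noprefetch vfs.zfs.l2arc_write_max vfs.zfs.l2arc_write_boost
</file>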
  
  
====== SAN Snapshots and Replication ======
  
  * All primary server virtual machines (VMs) are stored on a two-node (primary and secondary) storage area network (SAN).  Each VM stores its live data on the primary SAN node (zfs1.sebeka.k12.mn.us).  The primary SAN node currently keeps 15 daily snapshots.  Every night the day's changes are sent to the secondary SAN node (zfs2.sebeka.k12.mn.us); the replication scripts and a sketch of the send step follow below.
<file bash>
# output of "zfs list -t all" on zfs1.sebeka.k12.mn.us
NAME                    USED  AVAIL  REFER  MOUNTPOINT
tank/vol1@2015-01-26   5.93G      -  2.70T  -
tank/vol1@2015-01-27       0      -  2.70T  -
</file>
===== ZFS Rep Scripts =====
<file bash zfsrep.sh>
#!/bin/sh

# dataset to snapshot; $destination and $host are used by the incremental
# send to the secondary node (see the sketch after this script)
pool="tank/vol2"
destination="tank"
host="192.168.113.2"

# $type is unset here, so snapshot names are plain YYYY-MM-DD dates
today=`date +"$type%Y-%m-%d"`
yesterday=`date -v -1d +"$type%Y-%m-%d"`

# create today's snapshot
snapshot_today="$pool@$today"
# look for a snapshot with this name
if zfs list -H -o name -t snapshot | sort | grep "$snapshot_today$" > /dev/null; then
        echo " snapshot, $snapshot_today, already exists"
        exit 1
else
        echo " taking today's snapshot, $snapshot_today"
        zfs snapshot -r $snapshot_today
fi

# look for yesterday snapshot
#snapshot_yesterday="$pool@$yesterday"
</file>
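zfsrep.sh stops after taking the snapshot; the nightly transfer to the secondary node is not included on this page. Below is a minimal sketch of that missing incremental-send step, reusing the script's $pool, $host, $destination and $yesterday variables. It is an assumption, not the site's actual script, and it presumes SSH key authentication to the secondary node and that yesterday's snapshot exists on both sides.
<file bash>
# Hypothetical continuation of zfsrep.sh; not part of the original script.
# Send only the changes between yesterday's and today's snapshots to the
# secondary SAN node over SSH.
snapshot_yesterday="$pool@$yesterday"
if zfs list -H -o name -t snapshot | sort | grep "$snapshot_yesterday$" > /dev/null; then
        zfs send -R -i $snapshot_yesterday $snapshot_today | \
                ssh root@$host zfs receive -F -d $destination
else
        echo " yesterday's snapshot, $snapshot_yesterday, is missing; a full send is needed"
fi
</file>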
<file bash clean_snaps.sh>
#!/bin/sh
#`date -v -1d +"$type%Y-%m-%d"`

# dataset whose dated snapshots are pruned
pool="zvol1/vol1"

# iterate i from 15 to 47:
# keep the last 14 days of snapshots; anything 15-47 days old is destroyed
for i in `seq 15 47`
do
        CLEANDATE=`date -v -${i}d +"$type%Y-%m-%d"`

        CLEAN_SNAP="${pool}@${CLEANDATE}"
        #echo $CLEAN_SNAP
        if zfs list -H -o name -t snapshot | sort | grep "$CLEAN_SNAP$" > /dev/null;
        then
                zfs destroy -r $CLEAN_SNAP
        fi
done
</file>
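The page says the snapshot and cleanup steps run nightly but does not show how they are scheduled. A hypothetical root crontab on zfs1 (the times and script paths are assumptions):
<file bash>
# Hypothetical /etc/crontab entries; the actual schedule is not documented here.
# Field order: minute hour mday month wday who command
0   1   *   *   *   root   /root/scripts/zfsrep.sh
0   2   *   *   *   root   /root/scripts/clean_snaps.sh
</file>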