rathouse is for capsulv2

# OS stuff

buster-backports was enabled for later versions of qemu & libvirtd, since we need them for `virsh backup-begin`

# disk stuff

### serial numbers

```
frontplate number | SSD serial number
------------------|-------------------
0                 | 19431D801189
1                 | 22062712800355
2                 | PHYF209106Y93P8EGN
3                 | PHYF209300LX3P8EGN
```

use `smartctl -x /dev/sdb | grep Serial` to print a drive's serial number (a loop over all the disks is sketched at the bottom of this page).

### raid setup

#### setup

```
# when setting up the partitions, i used +3520G as the size and "raid" as the type
fdisk /dev/sda
fdisk /dev/sdd

# i left the chunk size at the default
# i also chose near over far because
# > mdadm cannot reshape arrays in far X layouts
mdadm --create --verbose --level=10 --metadata=1.2 --raid-devices=2 --layout=n2 /dev/md/tank /dev/sda1 /dev/sdd1

mkfs.ext4 /dev/md/tank
mount /dev/md/tank /tank
```

(making the array assemble at boot is sketched at the bottom of this page)

#### recovery

if a disk is pulled from a running system, mdadm assumes the worst & disconnects it from the RAID permanently.

> If the system had a way of knowing that the removal and restoration was intentional, it could automatically pick it up. But a software RAID has no such knowledge, so it assumes the worst and acts as if the disk or its connection has become unreliable for some reason, until told otherwise.

to re-attach a disconnected disk, do this (watching the rebuild afterwards is sketched at the bottom of this page):

```
mdadm --manage /dev/md0 --add /dev/
```

#### benchmarks

mdadm + ext4, measured with `disk-benchmark`:

```
hostname | write throughput | write iops | read throughput | read iops
------------------------------------------------------------------------
rathouse | 434MB/s          | 21k        | 1110MB/s        | 116k
alpine   | 278MB/s          | 18.2k      | 15.7GB/s (wat)  | 55.4k
```

note: redo these tests once the raid setup is complete

#### disk-benchmark

this is the script we use to benchmark disks. pls don't change any of the values tbh, because it throws off comparisons with earlier runs

```
# write throughput
fio --name=write_throughput --directory=. --numjobs=8 \
  --size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio \
  --direct=1 --verify=0 --bs=1M --iodepth=64 --rw=write \
  --group_reporting=1

# write iops
fio --name=write_iops --directory=. --size=10G \
  --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 \
  --verify=0 --bs=4K --iodepth=64 --rw=randwrite --group_reporting=1

# read throughput (sequential reads)
fio --name=read_throughput --directory=. --numjobs=8 \
  --size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio \
  --direct=1 --verify=0 --bs=1M --iodepth=64 --rw=read \
  --group_reporting=1

# read iops (random reads)
fio --name=read_iops --directory=. --size=10G \
  --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 \
  --verify=0 --bs=4K --iodepth=64 --rw=randread --group_reporting=1

# clean up the fio data files
rm write* read*
```
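#### watching a rebuild

a hedged sketch to go with the recovery section above: after `--add`, the array resyncs in the background, and these are the standard mdadm commands for watching it. the member device names below are examples, check `/proc/mdstat` for the real ones first.

```
# quick per-array state; a re-added disk shows up as rebuilding with a progress bar
cat /proc/mdstat

# fuller picture: the State line, rebuild %, and each member's role
mdadm --detail /dev/md/tank

# if a member has gone flaky, fail & remove it before re-adding
# (sdd1 here is an example device, not necessarily the bad one)
mdadm --manage /dev/md/tank --fail /dev/sdd1
mdadm --manage /dev/md/tank --remove /dev/sdd1
mdadm --manage /dev/md/tank --add /dev/sdd1
```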
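#### assembling at boot

a sketch for persisting the array across reboots on Debian (we're on buster, per the backports note above); this is standard practice but untested here. the fstab UUID is a placeholder, get the real one from `blkid /dev/md/tank`.

```
# record the array definition so the initramfs assembles it at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u

# mount it at boot too; the UUID below is a placeholder, use blkid for the real one
echo 'UUID=xxxxxxxx /tank ext4 defaults,nofail 0 2' >> /etc/fstab
```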
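#### printing all the serials

a small sketch to go with the serial numbers table above: loop over every whole disk instead of running smartctl by hand. assumes smartmontools is installed & you're root.

```
# map each /dev/sdX to its serial number
for d in /dev/sd?; do
  printf '%s: ' "$d"
  smartctl -x "$d" | grep Serial
done
```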