In short, the problems with NFS have worn me out, so we decided to spin up some kind of distributed storage system on a test stand. The first thing I tested was GlusterFS.
It comes up on Debian in 2-5 minutes =))
Before starting the volume I synced it, so that both peers had identical data,
then started the volume
and mounted it on the client machine. Then I started doing small-file reads and writes, and that's where the fail begins.
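For reference, that whole flow (probe the peer, create a replicated volume, start it, mount it) is just a handful of gluster commands. This is a sketch only; the hostnames and brick paths below are my assumptions, not the actual ones from this stand:

```shell
# Assumed two-node replica-2 setup; nas1/nas2 names and brick paths are illustrative
gluster peer probe nas2.storage            # join the second node to the pool
gluster volume create storage replica 2 \
    nas1.storage:/export/brick nas2.storage:/export/brick
gluster volume start storage               # bring the volume online
gluster volume info storage                # verify both bricks are listed

# on the client, via the native FUSE client:
mount -t glusterfs nas1.storage:/storage /storage
```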
Here's the test script:
#!/bin/bash
# write and read back 1000 small files
for i in {1..1000}; do
    #size=$((RANDOM%2+1))
    echo "Write"
    dd if=/dev/zero of=/storage/test/bigfile${i} count=10 bs=2k
    echo "read"
    dd if=/storage/test/bigfile${i} of=/dev/null count=10 bs=2k
done

And here's what I get:
Write
2+0 records in
2+0 records out
4096 bytes (4.1 kB) copied, 0.00204757 s, 2.0 MB/s
read
2+0 records in
2+0 records out
4096 bytes (4.1 kB) copied, 0.00157458 s, 2.6 MB/s
Write
4+0 records in
4+0 records out
8192 bytes (8.2 kB) copied, 0.00197661 s, 4.1 MB/s
read
4+0 records in
4+0 records out
8192 bytes (8.2 kB) copied, 0.00169478 s, 4.8 MB/s

This was with the volume mounted like this:
nas.storage:/storage /storage glusterfs defaults,_netdev 0 0

mount /storage
Let's try it another way:
mount -t nfs -o rw,rsize=8192,wsize=8192,proto=tcp,soft,intr,noatime,noauto,noacl,async,nodiratime 192.168.15.165:/storage /storage

Read speed is high, but write speed is some kind of horror =(
Write
2+0 records in
2+0 records out
4096 bytes (4.1 kB) copied, 0.136233 s, 30.1 kB/s
read
2+0 records in
2+0 records out
4096 bytes (4.1 kB) copied, 1.3131e-05 s, 312 MB/s
Write
3+0 records in
3+0 records out
6144 bytes (6.1 kB) copied, 0.143521 s, 42.8 kB/s
read
3+0 records in
3+0 records out
6144 bytes (6.1 kB) copied, 1.3899e-05 s, 442 MB/s
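To compare the two mounts numerically instead of eyeballing dd lines, a throwaway parser can aggregate total bytes over total elapsed time. A sketch; the helper name and regex are my own, not from the original test:

```python
# Quick-and-dirty aggregation of dd "copied" lines (hypothetical helper)
import re

def avg_speed(lines):
    """Return overall bytes/sec across all dd transfer-summary lines."""
    total_bytes = total_secs = 0.0
    for line in lines:
        # dd prints e.g. "4096 bytes (4.1 kB) copied, 0.136233 s, 30.1 kB/s"
        m = re.search(r"(\d+) bytes .*copied, ([\d.e+-]+) s", line)
        if m:
            total_bytes += int(m.group(1))
            total_secs += float(m.group(2))
    return total_bytes / total_secs if total_secs else 0.0

# The two NFS-mount writes above come out to roughly 36 kB/s overall
log = [
    "4096 bytes (4.1 kB) copied, 0.136233 s, 30.1 kB/s",
    "6144 bytes (6.1 kB) copied, 0.143521 s, 42.8 kB/s",
]
print(round(avg_speed(log)))
```

Per-line speeds can be misleading for many small files, since per-file overhead dominates; summing bytes and seconds gives a fairer aggregate.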
Ceph is next in line. Right now I'm waiting for another server to be hooked up, and then I'll test Ceph. Who has experience installing and running systems like these?
Here are the iozone results:
Record Size 4 KB
File size set to 4 KB
Command line used: ./iozone -l 2 -u 2 -r 4k -s 4k /storage/
Output is in Kbytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Min process = 2
Max process = 2
Throughput test with 2 processes
Each process writes a 4 Kbyte file in 4 Kbyte records

Children see throughput for 2 initial writers = 209712.23 KB/sec
Parent sees throughput for 2 initial writers = 98.28 KB/sec
Min throughput per process = 95430.45 KB/sec
Max throughput per process = 114281.78 KB/sec
Avg throughput per process = 104856.11 KB/sec
Min xfer = 4.00 KB

Children see throughput for 2 rewriters = 206101.09 KB/sec
Parent sees throughput for 2 rewriters = 109.33 KB/sec
Min throughput per process = 103050.55 KB/sec
Max throughput per process = 103050.55 KB/sec
Avg throughput per process = 103050.55 KB/sec
Min xfer = 4.00 KB

Children see throughput for 2 readers = 197831.25 KB/sec
Parent sees throughput for 2 readers = 93.25 KB/sec
Min throughput per process = 197831.25 KB/sec
Max throughput per process = 197831.25 KB/sec
Avg throughput per process = 98915.62 KB/sec
Min xfer = 4.00 KB

Children see throughput for 2 re-readers = 366269.22 KB/sec
Parent sees throughput for 2 re-readers = 89.50 KB/sec
Min throughput per process = 366269.22 KB/sec
Max throughput per process = 366269.22 KB/sec
Avg throughput per process = 183134.61 KB/sec
Min xfer = 4.00 KB

Children see throughput for 2 reverse readers = 102421.45 KB/sec
Parent sees throughput for 2 reverse readers = 151.60 KB/sec
Min throughput per process = 102421.45 KB/sec
Max throughput per process = 102421.45 KB/sec
Avg throughput per process = 51210.72 KB/sec
Min xfer = 4.00 KB

Children see throughput for 2 stride readers = 494861.56 KB/sec
Parent sees throughput for 2 stride readers = 351.51 KB/sec
Min throughput per process = 247430.78 KB/sec
Max throughput per process = 247430.78 KB/sec
Avg throughput per process = 247430.78 KB/sec
Min xfer = 4.00 KB

Children see throughput for 2 random readers = 191072.08 KB/sec
Parent sees throughput for 2 random readers = 160.73 KB/sec
Min throughput per process = 191072.08 KB/sec
Max throughput per process = 191072.08 KB/sec
Avg throughput per process = 95536.04 KB/sec
Min xfer = 4.00 KB

Children see throughput for 2 mixed workload = 105646.19 KB/sec
Parent sees throughput for 2 mixed workload = 58.51 KB/sec
Min throughput per process = 105646.19 KB/sec
Max throughput per process = 105646.19 KB/sec
Avg throughput per process = 52823.09 KB/sec
Min xfer = 4.00 KB

Children see throughput for 2 random writers = 117482.82 KB/sec
Parent sees throughput for 2 random writers = 51.33 KB/sec
Min throughput per process = 117482.82 KB/sec
Max throughput per process = 117482.82 KB/sec
Avg throughput per process = 58741.41 KB/sec
Min xfer = 4.00 KB

Children see throughput for 2 pwrite writers = 166431.23 KB/sec
Parent sees throughput for 2 pwrite writers = 51.03 KB/sec
Min throughput per process = 0.00 KB/sec
Max throughput per process = 166431.23 KB/sec
Avg throughput per process = 83215.62 KB/sec
Min xfer = 0.00 KB

Children see throughput for 2 pread readers = 200191.84 KB/sec
Parent sees throughput for 2 pread readers = 75.51 KB/sec
Min throughput per process = 200191.84 KB/sec
Max throughput per process = 200191.84 KB/sec
Avg throughput per process = 100095.92 KB/sec
Min xfer = 4.00 KB

Children see throughput for 2 fwriters = 280713.02 KB/sec
Parent sees throughput for 2 fwriters = 150.10 KB/sec
Min throughput per process = 114281.78 KB/sec
Max throughput per process = 166431.23 KB/sec
Avg throughput per process = 140356.51 KB/sec
Min xfer = 4.00 KB

Children see throughput for 2 freaders = 351151.31 KB/sec
Parent sees throughput for 2 freaders = 4291.50 KB/sec
Min throughput per process = 160079.23 KB/sec
Max throughput per process = 191072.08 KB/sec
Avg throughput per process = 175575.66 KB/sec
Min xfer = 4.00 KB

iozone test complete.
/storage was mounted like this:
nas.storage:/storage /storage glusterfs defaults,_netdev 0 0
Now the same test with:
mount -t nfs -o rw,rsize=8192,wsize=8192,proto=tcp,soft,intr,noatime,noauto,noacl,async,nodiratime 192.168.15.165:/storage /storage

The NFS server here is served by glusterd.
Record Size 4 KB
File size set to 4 KB
Command line used: ./iozone -l 2 -u 2 -r 4k -s 4k /storage/
Output is in Kbytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Min process = 2
Max process = 2
Throughput test with 2 processes
Each process writes a 4 Kbyte file in 4 Kbyte records

Children see throughput for 2 initial writers = 188920.50 KB/sec
Parent sees throughput for 2 initial writers = 56.48 KB/sec
Min throughput per process = 188920.50 KB/sec
Max throughput per process = 188920.50 KB/sec
Avg throughput per process = 94460.25 KB/sec
Min xfer = 4.00 KB

Children see throughput for 2 rewriters = 160079.23 KB/sec
Parent sees throughput for 2 rewriters = 54.93 KB/sec
Min throughput per process = 160079.23 KB/sec
Max throughput per process = 160079.23 KB/sec
Avg throughput per process = 80039.62 KB/sec
Min xfer = 4.00 KB

Children see throughput for 2 readers = 210225.80 KB/sec
Parent sees throughput for 2 readers = 72.11 KB/sec
Min throughput per process = 210225.80 KB/sec
Max throughput per process = 210225.80 KB/sec
Avg throughput per process = 105112.90 KB/sec
Min xfer = 4.00 KB

Children see throughput for 2 re-readers = 210225.80 KB/sec
Parent sees throughput for 2 re-readers = 281.29 KB/sec
Min throughput per process = 210225.80 KB/sec
Max throughput per process = 210225.80 KB/sec
Avg throughput per process = 105112.90 KB/sec
Min xfer = 4.00 KB

Children see throughput for 2 reverse readers = 143633.55 KB/sec
Parent sees throughput for 2 reverse readers = 194.59 KB/sec
Min throughput per process = 143633.55 KB/sec
Max throughput per process = 143633.55 KB/sec
Avg throughput per process = 71816.77 KB/sec
Min xfer = 4.00 KB

Children see throughput for 2 stride readers = 267128.91 KB/sec
Parent sees throughput for 2 stride readers = 196.70 KB/sec
Min throughput per process = 267128.91 KB/sec
Max throughput per process = 267128.91 KB/sec
Avg throughput per process = 133564.45 KB/sec
Min xfer = 4.00 KB

Children see throughput for 2 random readers = 200191.84 KB/sec
Parent sees throughput for 2 random readers = 159.47 KB/sec
Min throughput per process = 0.00 KB/sec
Max throughput per process = 200191.84 KB/sec
Avg throughput per process = 100095.92 KB/sec
Min xfer = 0.00 KB

Children see throughput for 2 mixed workload = 125384.91 KB/sec
Parent sees throughput for 2 mixed workload = 62.83 KB/sec
Min throughput per process = 125384.91 KB/sec
Max throughput per process = 125384.91 KB/sec
Avg throughput per process = 62692.46 KB/sec
Min xfer = 4.00 KB

Children see throughput for 2 random writers = 210225.80 KB/sec
Parent sees throughput for 2 random writers = 51.10 KB/sec
Min throughput per process = 210225.80 KB/sec
Max throughput per process = 210225.80 KB/sec
Avg throughput per process = 105112.90 KB/sec
Min xfer = 4.00 KB

Children see throughput for 2 pwrite writers = 166431.23 KB/sec
Parent sees throughput for 2 pwrite writers = 51.74 KB/sec
Min throughput per process = 0.00 KB/sec
Max throughput per process = 166431.23 KB/sec
Avg throughput per process = 83215.62 KB/sec
Min xfer = 0.00 KB

Children see throughput for 2 pread readers = 295679.92 KB/sec
Parent sees throughput for 2 pread readers = 148.73 KB/sec
Min throughput per process = 129248.69 KB/sec
Max throughput per process = 166431.23 KB/sec
Avg throughput per process = 147839.96 KB/sec
Min xfer = 4.00 KB

Children see throughput for 2 fwriters = 280151.83 KB/sec
Parent sees throughput for 2 fwriters = 144.09 KB/sec
Min throughput per process = 137737.53 KB/sec
Max throughput per process = 142414.30 KB/sec
Avg throughput per process = 140075.91 KB/sec
Min xfer = 4.00 KB

Children see throughput for 2 freaders = 239228.27 KB/sec
Parent sees throughput for 2 freaders = 6728.64 KB/sec
Min throughput per process = 117482.82 KB/sec
Max throughput per process = 121745.45 KB/sec
Avg throughput per process = 119614.13 KB/sec
Min xfer = 4.00 KB

iozone test complete.
You won't get away from glitches even on cluster filesystems, especially with piles of small files. There will be latency spikes and stalls. If it's really that critical, buy proper hardware.
Reads are the easy part. The real fun will come with writes, and not in testing but in production, when it starts choking on data.
All these homegrown solutions suffer from this, Gluster and DRBD alike.
Yeah, we use a disk shelf with 10k 500 GB SATA drives, connected to the NFS server through a Pecl card, I think.