Software RAID10 setup writes small files extremely slowly, is this normal?

I have a software RAID-10 setup on Linux 3.16.6-203.fc20.x86_64, with 1.2 metadata and the default chunk size (512K):

$ cat /proc/mdstat
md0 : active raid10 sdc1[4] sdb1[0] sdd1[2] sde1[3]
      3907023872 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      bitmap: 4/30 pages [16KB], 65536KB chunk

unused devices: <none>
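
If more detail on the layout is needed, the standard mdadm query reports the same geometry (output omitted here, since it just restates the mdstat summary):

$ sudo mdadm --detail /dev/md0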

The filesystem is ext4 on an LVM logical volume; the volume group uses the RAID-10 array as its physical volume.

$ df -k .
Filesystem                     1K-blocks      Used  Available Use% Mounted on
/dev/mapper/vg_raid10-lv_home 2015734504 810039552 1103278568  43% /home

with mount options:

$ mount | grep vg_raid10-lv_home
/dev/mapper/vg_raid10-lv_home on /home type ext4 (rw,relatime,seclabel,stripe=256)
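
The stripe=256 hint appears consistent with the array geometry, assuming ext4's default 4 KiB block size: a 4-disk near-2 RAID-10 stripes data across 2 mirror pairs, so the stripe width is 2 × 512 KiB / 4 KiB = 256 filesystem blocks.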

Everything seems fine. SMART reports all disks as healthy, with no reallocated, pending, or offline sectors. Large-block synchronous write throughput also looks reasonable:

$ dd if=/dev/zero of=tmp.bin bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 7.85743 s, 137 MB/s
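
Note that with bs=1G and count=1, oflag=dsync amounts to a single synchronous write, so this mostly measures streaming bandwidth rather than per-write flush latency. For comparison, one could also take the page cache out of the picture entirely with O_DIRECT (a suggested cross-check, not run above):

$ dd if=/dev/zero of=tmp.bin bs=1M count=1024 oflag=direct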

However, when writing small 100-byte chunks (EDIT: as pointed out in the answers, I was actually writing 512-byte chunks, not 100-byte ones) to the RAID array, writes are extremely slow (around 84 ms per synchronous write):

$ dd if=/dev/zero of=tmp.bin bs=512 count=1000 oflag=dsync
1000+0 records in
1000+0 records out
512000 bytes (512 kB) copied, 84.2859 s, 6.1 kB/s
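
That works out to 84.2859 s / 1000 writes ≈ 84 ms per 512-byte write. For scale, one rotation of a 7200 RPM disk takes about 8.3 ms, so if these are ordinary spinning drives (an assumption; the drive models aren't shown above), each synchronous write seems to be paying for several serialized device round-trips: the ext4 journal commit, the data write to both mirror copies, and md write-intent bitmap updates.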

Is this normal for the RAID-10 configuration that I have?
