Helpful tips to speed up a Linux software RAID rebuild

Tip #1: /proc/sys/dev/raid/{speed_limit_max,speed_limit_min} kernel variables

The /proc/sys/dev/raid/speed_limit_min is a config file that reflects the current "goal" rebuild speed for times when non-rebuild activity is occurring on an array. The speed is in kibibytes per second (1 kibibyte = 2^10 bytes = 1024 bytes), and is a per-device rate, not a per-array rate. The default is 1000.

The /proc/sys/dev/raid/speed_limit_max is a config file that reflects the current "goal" rebuild speed for times when no non-rebuild activity is occurring on an array. The default is 100,000.

To see current limits, enter:

sysctl dev.raid.speed_limit_min
sysctl dev.raid.speed_limit_max

NOTE: The following hacks are used when recovering a Linux software RAID and to increase the speed of RAID rebuilds. They are good for tweaking the rebuild process, but they may increase overall system load, CPU usage, and memory usage.

To increase speed, you can raise these values. Here are the ones that work great on my machine:

echo 100000 > /proc/sys/dev/raid/speed_limit_min
echo 200000 > /proc/sys/dev/raid/speed_limit_max

These set the min value to 100,000 KiB/sec and the max value to 200,000 KiB/sec. I have a quad-core machine with 16 GB of RAM, and these values seem to be about the best for me. Feel free to tinker with the values, and check which ones give you the best performance by running:

cat /proc/mdstat
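
To monitor the effect of a change continuously rather than re-running cat, watch refreshes the output every second:

watch -n1 cat /proc/mdstat

These echo commands do not survive a reboot. If you want the limits to persist, here is a minimal sketch using sysctl, assuming your distribution reads /etc/sysctl.conf (some use drop-in files under /etc/sysctl.d/ instead):

# Apply immediately (equivalent to the echo commands above)
sysctl -w dev.raid.speed_limit_min=100000
sysctl -w dev.raid.speed_limit_max=200000

# Persist the settings across reboots
cat >> /etc/sysctl.conf << 'EOF'
dev.raid.speed_limit_min = 100000
dev.raid.speed_limit_max = 200000
EOF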

Tip #2: Set read-ahead option

Set the readahead (in 512-byte sectors) per RAID device. The syntax is:
# blockdev --setra 65536 /dev/mdX

Where /dev/mdX is the device identifier for your software RAID device. Below is an example of the command I run to set the read-ahead to 32 MiB:

blockdev --setra 65536 /dev/md127
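
To verify the setting took effect, blockdev can read the value back; the arithmetic is 65536 sectors x 512 bytes = 32 MiB:

# Read back the current readahead (in 512-byte sectors)
blockdev --getra /dev/md127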

Tip #3: Set stripe_cache_size for RAID 5 or RAID 6

This is only available on RAID 5 and RAID 6, and it can boost sync performance by 3-6 times. It records the size (in pages per device) of the stripe cache, which is used for synchronising all write operations to the array and all read operations if the array is degraded. The default is 256. Valid values are 17 to 32768. Increasing this number can increase performance in some situations, at some cost in system memory. Note that setting this value too high can result in an "out of memory" condition for the system. Use the following formula:

memory_consumed = system_page_size * nr_disks * stripe_cache_size

To set stripe_cache_size to 16384 pages per device for /dev/md127 (with a 4 KiB page size, that is 64 MiB of locked memory per member disk), type:

echo 16384 > /sys/block/md127/md/stripe_cache_size

To set stripe_cache_size to 32768 pages per device (128 MiB per member disk with 4 KiB pages), type:

echo 32768 > /sys/block/md127/md/stripe_cache_size
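
Before committing to a large value, it is worth plugging your own numbers into the formula above; for example, with 4 KiB pages and a 4-disk array, a stripe_cache_size of 16384 consumes 4096 x 4 x 16384 bytes = 256 MiB. Here is a small sketch that computes this for a running array, assuming /dev/md127 (raid_disks and stripe_cache_size both live under the array's md sysfs directory):

# Estimate stripe-cache memory use for /dev/md127
PAGE=$(getconf PAGESIZE)                            # usually 4096 bytes
DISKS=$(cat /sys/block/md127/md/raid_disks)         # number of member devices
CACHE=$(cat /sys/block/md127/md/stripe_cache_size)  # entries per device
echo "$(( PAGE * DISKS * CACHE / 1024 / 1024 )) MiB"

Size this against your free RAM; as one commenter below points out, this memory is locked and unavailable to the rest of the system.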

Tip #4: Disable NCQ on all disks

The following will disable NCQ on /dev/sda through /dev/sde using a bash for loop (writing a queue depth of 1 turns off command queuing):

for i in sda sdb sdc sdd sde
do
  echo 1 > /sys/block/$i/device/queue_depth
done
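
To confirm the change took effect, print each disk's queue depth (a value of 1 means NCQ is effectively disabled). A quick check, assuming the same sda through sde device names:

grep . /sys/block/sd[abcde]/device/queue_depth

To re-enable NCQ after the rebuild, write the original depth (commonly 31) back to the same files.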

Tip #5: Bitmap Option

Bitmaps optimize rebuild time after a crash, or after removing and re-adding a device. Turn one on by typing the following command, making sure you change the device identifier to whatever your RAID device is:

mdadm --grow --bitmap=internal /dev/md127

Once the array is rebuilt or fully synced, disable bitmaps:

mdadm --grow --bitmap=none /dev/md127
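
You can confirm the bitmap is present (and later, that it is gone) before and after these commands. A quick check, assuming /dev/md127: mdadm --detail reports an "Intent Bitmap : Internal" line while one is enabled, and /proc/mdstat shows a "bitmap:" line for the array:

mdadm --detail /dev/md127 | grep -i bitmap
grep -A 3 md127 /proc/mdstat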

After running those commands, my speeds increased dramatically. I went from 13184 K/sec to about 95108 K/sec! Hope that helps you save some time on your RAID rebuilds.

Comments


  • Lokesh Bhandari says:

    Can this work for Ubuntu 14.04 64-bit OS?
    I am currently using mdadm 3.2.5

  • stripe_cache_size is the number of cache *entries*. Memory usage depends on page size and is multiplied by the number of drives in the array. So if you set it to 16384 on an 8-drive array, that will allocate 512 MB of RAM. This memory is locked and unavailable to the rest of the system.

    Such a setting may work well for benchmarks but is probably not an optimal use of RAM unless your typical usage is flat-out sequential writing spilling over the size of your disk cache for significant periods of time, in which case you may wish to reconsider whether you have sufficient drives in your array.
