Saturday, July 7, 2012

Proxmox VE v.2 - Software RAID

Installing the Proxmox Virtual Environment version 2 virtualization server using software RAID (not natively supported).

Tutorial: petercarrero.com

0. Overview:

  1. Install the required software
  2. Prepare the RAID devices
  3. Get /boot ready on /dev/md0
  4. Move the PVE LVM to /dev/md1

1. Install the required software

All you really need here is the mdadm package. I also install vim because I like it better than plain vi. On Proxmox VE 1.0 you also needed the initramfs-tools package, but that is already installed on VE 2.0. So, to get your system ready, type the following on the command line of your Proxmox setup:
apt-get update; apt-get install mdadm vim

The mdadm package will prompt you for information and all you need to do on that screen is press the ENTER key.
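
If you want to skip that prompt entirely (for example when scripting the install), one option on stock Debian, which Proxmox VE 2 is based on, is to use debconf's noninteractive frontend:
DEBIAN_FRONTEND=noninteractive apt-get -y install mdadm vim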

2. Prepare the RAID devices

Now that you have the right software installed, let's get your RAID devices ready. We will copy the partition table from /dev/sda to /dev/sdb, change the partitions on sdb to RAID members (type fd, Linux raid autodetect), create the RAID devices in degraded mode, and save the RAID configuration in /etc/mdadm/mdadm.conf so it persists across reboots. All of that is done with the following six lines:
sfdisk -d /dev/sda | sfdisk -f /dev/sdb
sfdisk -c /dev/sdb 1 fd
sfdisk -c /dev/sdb 2 fd
mdadm --create -l 1 -n 2 /dev/md0 missing /dev/sdb1
mdadm --create -l 1 -n 2 /dev/md1 missing /dev/sdb2
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
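
At this point both arrays should exist in degraded mode (one active member, one marked as missing). A quick sanity check:
cat /proc/mdstat

Each md line should show something like [2/1] [_U], meaning one of the two members is absent for now.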

3. Get /boot ready on /dev/md0

This is by far the step with the most commands, but we have quite a bit to do here… It may be possible to shorten some of it, but my attempts to simplify were not successful and the instructions below worked well for me. Anyway, let's begin by formatting /dev/md0 and populating it with the contents of /boot:
mkfs.ext3 /dev/md0
mkdir /mnt/md0
mount /dev/md0 /mnt/md0
cp -ax /boot/* /mnt/md0
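
Before moving on, it's worth double-checking that the copy is complete; a simple way is:
diff -r /boot /mnt/md0

No output means the two trees match.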


Now let's tell the system to use /dev/md0 after the Linux bootstrap (i.e., GRUB 2 will still boot from sda1 for now).
vim /etc/fstab
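
Here is roughly what we are aiming for, with the original UUID entry kept as a comment in case you need to revert (the placeholder stands for whatever UUID your installer wrote):
/dev/md0 /boot ext3 defaults 0 1
#UUID=<your UUID here> /boot ext3 defaults 0 1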

Replace the line reading UUID=<your UUID here> /boot ext3 defaults 0 1 with /dev/md0 /boot ext3 defaults 0 1. If you are unfamiliar with vim: use the arrow keys to move to that line, type yypi# to make a copy of the line and turn the copy into a comment, press the up arrow to return to the uncommented line, delete the UUID=... text and type /dev/md0 in its place. Once that is done, press the ESC key to leave insert mode, then type :wq followed by ENTER to save and quit. We are now done with vim! Back on the command line, reboot with the following command:
reboot


After the system reboots, verify that it is using /dev/md0 as the /boot mount point by typing the following:
mount|grep boot

You should get something like this:
/dev/md0 on /boot type ext3 (rw)

Now we tell GRUB to use that device during boot as well, with the next 10 commands:
echo '# customizations' >> /etc/default/grub
echo 'GRUB_DISABLE_LINUX_UUID=true' >> /etc/default/grub
echo 'GRUB_PRELOAD_MODULES="raid dmraid"' >> /etc/default/grub
echo raid1 >> /etc/modules
echo raid1 >> /etc/initramfs-tools/modules
grub-install /dev/sda
grub-install /dev/sdb
grub-install /dev/md0
update-grub
update-initramfs -u

You may not need all three grub-install commands, but for me it didn't work without the last one, and when reverting the process during one of my tests I ended up having to reissue grub-install /dev/sda.
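
To convince yourself that the GRUB changes took effect, you can look for the preloaded RAID modules in the generated configuration (the exact insmod lines can vary with the GRUB version):
grep -n 'insmod.*raid' /boot/grub/grub.cfg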


We are almost done with this step! All that remains is to make /dev/sda1 part of /dev/md0 and reboot with this configuration to make sure everything works as it should. We get that done with the following 2 commands:
sfdisk -c /dev/sda 1 fd
mdadm --add /dev/md0 /dev/sda1

This will start rebuilding /dev/md0, and since /boot is small it shouldn't take long. You can watch the progress with the following command:
watch -n 5 cat /proc/mdstat
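
While the mirror is syncing, /proc/mdstat shows a progress line along these lines (the numbers here are made up for illustration):
[=========>...........]  recovery = 48.2% (125952/261120) finish=0.1min speed=102400K/sec

Once the rebuild finishes, the md0 line will show [2/2] [UU].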

Once that is completed, reboot and we are done with this step!
reboot
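
After this reboot, a quick way to confirm that both halves of the /boot mirror are active:
mdadm --detail /dev/md0

The State line should say clean, with both /dev/sda1 and /dev/sdb1 listed as active sync.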

4. Move the PVE LVM to /dev/md1

This step is pretty much the same as in the old VE 1.0 procedure. It is simple, but it is the one that can take the longest to complete, depending on how big your data partition is and how fast your system is. What we need to do is vacate /dev/sda2 so we can join it to /dev/md1, which we do with the following commands (warning: pvmove can take a long time to complete, so run it on a local console or inside a screen session):
pvcreate /dev/md1
vgextend pve /dev/md1
pvmove /dev/sda2 /dev/md1
vgreduce pve /dev/sda2
pvremove /dev/sda2
sfdisk --change-id /dev/sda 2 fd
mdadm --add /dev/md1 /dev/sda2

The second RAID device will start rebuilding after the last mdadm command. This will take longer than the first one, and you can check its progress the same way as before:
watch -n 5 cat /proc/mdstat
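
Independently of the resync, you can confirm that the LVM side of the move worked with a quick look at the physical volumes:
pvs

/dev/md1 should now be the only PV in the pve volume group, and /dev/sda2 should be gone from the list.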

This time, however, it may be worth raising the limits (in KiB/s per device) at which the RAID subsystem is allowed to resync its devices. You do that with the following commands:
echo 800000 > /proc/sys/dev/raid/speed_limit_min
echo 1600000 > /proc/sys/dev/raid/speed_limit_max
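
These values are not persistent; they reset on reboot. The same knobs are also exposed through sysctl, so an equivalent form would be:
sysctl -w dev.raid.speed_limit_min=800000
sysctl -w dev.raid.speed_limit_max=1600000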


Hopefully this guide will save you some time and quite possibly a lot of headache and frustration! If you like it, or if you have a suggestion to improve it in any way, please leave me a comment below.
