Wednesday, July 11, 2012

Debian Squeeze + MiniDLNA

Installation of the MiniDLNA service on a server running Debian Squeeze (stable).

0. Prerequisites

1. Download the required software
2. Install dependencies and install MiniDLNA
3. Configure MiniDLNA
4. Start MiniDLNA

1. Download the software

Download minidlna_1.0.24_static.tar.gz and minidlna_1.0.24_src.tar.gz from the official site.
The static archive contains the binary and the configuration file; the source archive (src) is only needed to copy the init.d script.

1.1. Alternative: build MiniDLNA from source

Download the latest version from http://sourceforge.net/projects/minidlna/files/minidlna/, extract it and enter the directory.

1.1.a. Building on Ubuntu 10.04 LTS: KO, the packaged dependencies are too old

1.1.b. Building on Ubuntu 12.04 LTS: OK, it works!

# apt-get install autopoint
$ sh autogen.sh
Dependencies:
# apt-get install libavcodec-dev libavformat-dev libavutil-dev libjpeg-dev libsqlite3-dev libexif-dev libid3tag0-dev libogg-dev libvorbis-dev libflac-dev
$ ./configure
$ make
# make install

2. Install dependencies and install MiniDLNA

2.1. Install dependencies

The required dependencies are installed with:
# apt-get install libexif12 libjpeg62 libid3tag0 libflac8 libvorbisfile3 sqlite3 libavformat52 libuuid1 gcc

2.2. Install MiniDLNA

  • Unpack the static archive with:
# tar zxvf minidlna_1.0.24_static.tar.gz
  • Copy the binary:
# cp usr/sbin/minidlna /usr/sbin
  • Copy the configuration file:
# cp etc/minidlna.conf /etc/
  • Unpack the source archive with:
# tar zxvf minidlna_1.0.24_src.tar.gz
  • Copy the init script to init.d:
# cp minidlna-1.0.24/linux/minidlna.init.d.script /etc/init.d/minidlna
  • Set execute permissions:
# chmod 755 /etc/init.d/minidlna
  • Add the service to system startup:
# update-rc.d minidlna defaults

3. Configure MiniDLNA

The configuration file /etc/minidlna.conf controls how the service behaves and should be customized:
  • Location of the file database and of the log:
db_dir=/var/cache/minidlna
log_dir=/var/log
  • Automatic discovery of new files:
inotify=yes
  • Directories to share (different directories can be specified for different media types, as explained in the configuration file itself):
media_dir=/srv/media
  • Name of the server on the network:
friendly_name=DLNA Server
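Putting the options above together, a minimal /etc/minidlna.conf could look like this (a sketch: the port line shows the documented default, and all paths are examples to be adapted):

```
# /etc/minidlna.conf - minimal sketch, values are examples
port=8200                    # HTTP port of the media server (8200 is the default)
db_dir=/var/cache/minidlna   # where the file database is kept
log_dir=/var/log             # where minidlna.log is written
inotify=yes                  # pick up added and removed files automatically
media_dir=/srv/media         # share everything under this directory
#media_dir=V,/srv/video      # optional: limit a directory to one type (A/V/P)
friendly_name=DLNA Server    # name shown on DLNA clients
```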

4. Start MiniDLNA

After installation and configuration, the service can be started with:
# service minidlna start

The first start will take a while until the indexing of the existing files completes. You can follow progress with the top command, waiting until minidlna stops using the CPU intensively.

After indexing, the service can be reached from any DLNA-capable device.

5. Other options

  • To restart the service:
# service minidlna restart
  • To rebuild the database:
# service minidlna stop
# minidlna -R
# service minidlna start

Saturday, July 7, 2012

Proxmox VE v.2 - Software RAID

Installation of the Proxmox Virtual Environment version 2 virtualization server using software RAID (which is not natively supported).

Tutorial: petercarrero.com

0. Prerequisites:

  1. Install the required software
  2. Prepare the disks for RAID
  3. Put /boot on /dev/md0
  4. Move the PVE LVM to /dev/md1

1. Install the required software

All you really need here is the mdadm package. I also install vim because I like it better than plain vi. On Proxmox VE 1.0 you also needed the initramfs-tools package, but it is already installed on VE 2.0. So, to get your system ready, type the following on the command line of your Proxmox setup:
apt-get update; apt-get install mdadm vim

The mdadm package will prompt you for information and all you need to do on that screen is press the ENTER key.

2. Prepare the RAID devices

Now that you have the right software installed, let's get your RAID devices ready. We will copy the partition table from /dev/sda to /dev/sdb, change the partitions on sdb to type fd (Linux raid autodetect), create the RAID devices, and then save the RAID configuration in /etc/mdadm/mdadm.conf so it persists across reboots. The missing keyword creates each mirror in degraded mode, with only the sdb member, so that /dev/sda stays in use until we add its partitions later. All that is done with the following 6 lines:
sfdisk -d /dev/sda | sfdisk -f /dev/sdb
sfdisk -c /dev/sdb 1 fd
sfdisk -c /dev/sdb 2 fd
mdadm --create -l 1 -n 2 /dev/md0 missing /dev/sdb1
mdadm --create -l 1 -n 2 /dev/md1 missing /dev/sdb2
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
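After the last command, /etc/mdadm/mdadm.conf should end with two ARRAY definitions similar to these (the UUIDs are placeholders, and the metadata version depends on your mdadm defaults):

```
ARRAY /dev/md0 metadata=1.2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
ARRAY /dev/md1 metadata=1.2 UUID=yyyyyyyy:yyyyyyyy:yyyyyyyy:yyyyyyyy
```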

3. Get /boot ready on /dev/md0

This is by far the step with the most commands, but we've got quite a bit to do here… It may be possible to shorten some of these steps, but I wasn't successful in my attempts to simplify them, and the instructions below worked pretty well for me. Anyway, let's begin by formatting /dev/md0 and populating it with the contents of /boot.
mkfs.ext3 /dev/md0
mkdir /mnt/md0
mount /dev/md0 /mnt/md0
cp -ax /boot/* /mnt/md0


Now let's tell the system to mount /dev/md0 after the Linux bootstrap (i.e., grub2 will still use sda1 to boot for now).
vim /etc/fstab
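The goal of the edit is to have the /boot entry point at /dev/md0 instead of the filesystem UUID, so the relevant lines end up looking like this (the UUID is a placeholder):

```
#UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /boot ext3 defaults 0 1
/dev/md0 /boot ext3 defaults 0 1
```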

Replace the line that reads UUID=<your UUID here> /boot ext3 defaults 0 1 with /dev/md0 /boot ext3 defaults 0 1. If you are unfamiliar with vim: use the arrow keys to navigate to that line, hit yypi# to make a copy of the old line and turn the copy into a comment, use the up arrow to go back to the uncommented line, delete all the UUID=bla text and add /dev/md0 in its place. After that is done, hit the ESC key to leave insert mode, then type :wq followed by the ENTER key, which saves the file and quits. We are now done with vim! Once you are back on the command line, reboot with the following command:
reboot


After the system reboots, let's verify that it is using the md0 device as your /boot mount point by typing the following:
mount|grep boot

You should get something like this:
/dev/md0 on /boot type ext3 (rw)

Now we go on to tell grub to use that device during boot as well, with the next 10 commands:
echo '# customizations' >> /etc/default/grub
echo 'GRUB_DISABLE_LINUX_UUID=true' >> /etc/default/grub
echo 'GRUB_PRELOAD_MODULES="raid dmraid"' >> /etc/default/grub
echo raid1 >> /etc/modules
echo raid1 >> /etc/initramfs-tools/modules
grub-install /dev/sda
grub-install /dev/sdb
grub-install /dev/md0
update-grub
update-initramfs -u

Maybe you don't need all three grub-install commands, but for me, not having the last one there didn't work, and when reverting the process during one of my tests I ended up having to reissue the command grub-install /dev/sda.


We are almost done with this step! All that remains is to make /dev/sda1 part of /dev/md0 and reboot with this configuration to make sure everything is working as it should. We get that done with the following 2 commands:
sfdisk -c /dev/sda 1 fd
mdadm --add /dev/md0 /dev/sda1

This will get /dev/md0 rebuilding, and it shouldn't take long. You can watch the progress with the following command:
watch -n 5 cat /proc/mdstat
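While the sync runs, /proc/mdstat shows a progress bar for md0; output along these lines (a sketch: sizes, percentages and speeds will differ) means the rebuild is under way, and once it finishes the status line reads [2/2] [UU]:

```
Personalities : [raid1]
md0 : active raid1 sda1[2] sdb1[1]
      524224 blocks [2/1] [_U]
      [========>............]  recovery = 42.0% (220160/524224) finish=0.2min speed=24462K/sec
```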

Once that is completed, reboot and we are done with this step!
reboot

4. Move the PVE LVM to /dev/md1

Now the process is pretty much similar to the old one. This step is simple, but it is the one that could take the longest to complete, depending on how big your data partition is and how fast your system is. What we need to do is vacate /dev/sda2 so we can join it to /dev/md1, and we do this with the following commands (warning: the pvmove command can take a long time to complete, so run it on a tty or inside a screen session):
pvcreate /dev/md1
vgextend pve /dev/md1
pvmove /dev/sda2 /dev/md1
vgreduce pve /dev/sda2
pvremove /dev/sda2
sfdisk --change-id /dev/sda 2 fd
mdadm --add /dev/md1 /dev/sda2
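After the pvmove/vgreduce/pvremove sequence, the LVM reporting tools can confirm that the pve volume group now sits entirely on /dev/md1; a pvs listing would look something like this (a sketch: sizes and attribute flags are examples):

```
# pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/md1   pve  lvm2 a-   465.26g 4.00g
```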

You will get your second RAID rebuilding after the last mdadm command. This one will take longer than the first, and you can check its progress the same way as before, with:
watch -n 5 cat /proc/mdstat

However, this time it may be good to raise the limits at which the RAID subsystem can read and write to its devices (the values are in KB/s). You do that with the following commands:
echo 800000 > /proc/sys/dev/raid/speed_limit_min
echo 1600000 > /proc/sys/dev/raid/speed_limit_max


Hopefully this guide will save you some time, and quite possibly a lot of headache and frustration! If you like it, or if you have a suggestion to improve it in any way, please leave me a comment below.