Converting an OpenVZ container to an LXC container

Properly migrating an old OpenVZ container to a new LXC container can be a bit tricky, but I finally found a proper way with minimal downtime.

In my examples the container ID is 108.

It starts with a vzdump of the current container:

vzdump 108 --remove 1 --mode snapshot --compress lzo --storage backup09 --node clust-10 --bwlimit 0 --size 8192

--remove 1 allows removing the previous dump, if there is any
--mode snapshot makes a live dump of the running container (no downtime)
--compress lzo is a light but efficient compression method
--bwlimit 0 is important if you can handle speeds over 80000 KB/s, which is the default bandwidth limit
--size 8192 fixes input/output errors on certain LVM storages with low free disk space

The above command results in a dump file, which you can use later in the pct restore command:

pct restore 108 $(ls -1r /mnt/pve/backup09/dump/vzdump-openvz*108* | head -n1) --storage ssd --onboot 1 --net1 name=eth0,bridge=vmbr0 --net2 name=eth2,bridge=vmbr2

This command restores the back-up into a new LXC container on the new node. It extracts the most recent dump found for that container ID to the storage called ‘ssd’ (in our case the CEPH filesystem). It also adds some network devices and sets the container to start at boot.

You can run the above command during daytime; because some containers take hours to pack and unpack, it can be handy to get this done ahead of time.

After that you can mount the new container with:

pct mount 108

Now you can access the files directly on this path:

/var/lib/lxc/108/rootfs/home

This makes it possible to update certain files, to make sure you start the container with the latest data.
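
As a trivial example (the file choice here is just an illustration, not part of the original migration), you could edit a config file straight inside the mounted rootfs before the first start:

vi /var/lib/lxc/108/rootfs/etc/resolv.conf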

So next we are going to stop the old container:

vzctl stop 108

After that we need to rsync the data from the old node to the new node, like:

rsync --stats -avz /var/lib/vz/private/108/home/ root@10.0.0.61:/var/lib/lxc/108/rootfs/home
rsync --stats -avz /var/lib/vz/private/108/var/lib/mysql/ root@10.0.0.61:/var/lib/lxc/108/rootfs/var/lib/mysql
rsync --stats -avz /var/lib/vz/private/108/var/log/ root@10.0.0.61:/var/lib/lxc/108/rootfs/var/log
rsync --stats -avz /var/lib/vz/private/108/etc/ root@10.0.0.61:/var/lib/lxc/108/rootfs/etc

You can see that 10.0.0.61 is the IP of the new node. It will rsync all the data in just a couple of minutes, of course depending on the age of the dump.
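
If you want to double check what would be transferred before you take the old container down, rsync can do a dry run first (same paths and flags as above, with -n added so nothing is actually copied):

rsync --stats -avzn /var/lib/vz/private/108/home/ root@10.0.0.61:/var/lib/lxc/108/rootfs/home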

After adding the network interfaces (if not done already by the restore command), you can unmount and start the new LXC container on the new node:

pct unmount 108
pct start 108

If you are lucky everything starts and works like a charm 🙂
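
If it does not start, you can check the status with pct and, if needed, start the container in the foreground with extra logging; the lxc-start invocation below is an assumption based on how Proxmox wraps LXC, so adjust it to your version:

pct status 108
lxc-start -n 108 -F -l DEBUG -o /tmp/lxc-108.log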

DirectAdmin: Cannot find the script

I was having some problems on different servers when I tried to reload a service.

It was displaying “Cannot find the script” for all services.

This probably started when we upgraded to CB 2.0….

Anyhow, you can fix this by editing /usr/local/directadmin/conf/directadmin.conf and changing/adding the following setting: systemd=1

Reload DirectAdmin afterwards and you should be able to restart services from DirectAdmin again.
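
A quick way to apply this from the shell (just a sketch; it only appends the setting when it is not there yet, so check the file if you already have systemd=0 in it):

grep -q '^systemd=' /usr/local/directadmin/conf/directadmin.conf || echo 'systemd=1' >> /usr/local/directadmin/conf/directadmin.conf
service directadmin restart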


Install SpamFighter and BlockCracking with Exim and DKIM

This is what I use to set up my DirectAdmin boxes with Exim, SpamAssassin, SpamFighter, BlockCracking and DKIM.

cd /usr/local/directadmin/custombuild 

./build set clamav yes
./build set exim yes
./build set eximconf yes
./build set eximconf_release 4.4
./build set blockcracking yes
./build set easy_spam_fighter yes
./build set spamassassin yes
./build set sa_update daily

./build update
./build exim
./build clamav
./build spamassassin
./build exim_conf

wget -O /etc/exim.dkim.conf http://files.directadmin.com/services/exim.dkim.conf

service exim restart

After that you can use the file /etc/virtual/domainips to set the outgoing interface IP for Exim:

*:1.2.3.4
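
The * entry acts as the catch-all. Assuming per-domain entries follow the same pattern as the wildcard above (one entry per line), you could also give a single domain its own outgoing IP, for example:

shop.example.com:5.6.7.8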

Fix domains list in DirectAdmin

I don’t know why, but on one of my servers the domains.list file gets erased every few weeks. After that, the domains are missing from httpd.conf and stop working.

Until I figure out what causes it, this is a little piece of code to fix the list and rewrite the httpd configs:

for d in `ls /home/user/domains`; do echo $d >> /usr/local/directadmin/data/users/user/domains.list; done
/usr/local/directadmin/custombuild/build rewrite_confs
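
Note that the loop above appends blindly, so running it twice gives you duplicates. A slightly safer variant (same paths, 'user' is still the account name) only adds domains that are missing:

for d in /home/user/domains/*; do
  d=$(basename "$d")
  grep -qx "$d" /usr/local/directadmin/data/users/user/domains.list || echo "$d" >> /usr/local/directadmin/data/users/user/domains.list
done
/usr/local/directadmin/custombuild/build rewrite_confs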

Reset per e-mail account usage limit in DirectAdmin

Since DirectAdmin works with ‘per e-mail account send limits’, some things can get pretty annoying.

In this case the customer sent 1000 e-mails (not SPAM) and wants to send e-mail number 1001. But changing the limit to 2000 results in:

Please set a limit between 1 and 1000

Despite the “Zero is unlimited” hint, entering 0 gives:

You cannot set an unlimited send limit

So I ended up searching through the files and removing the usage by hand:

rm -f /etc/virtual/domain.com/usage/info

Now the usage is removed and the customer is able to send another 1000 e-mails. The underlying limit problem is not solved, but for now it's done.
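
If you first want to see which usage counters exist before deleting anything (same path layout as the example above; file names may differ per DirectAdmin version), a quick listing helps:

find /etc/virtual/*/usage/ -type f -ls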

Bash script to make MySQL back-ups

This simple but effective script makes a back-up of the target SQL database and deletes back-ups older than 90 days. I keep the folder synchronized with other systems, so the customer can reach his database back-ups, with 90 days of retention, himself.

#!/bin/bash

cd /backups

user="mysql_user"
passwd="password"
host="localhost"
db_name="mysql_database"

backup_path="/backups"
date=$(date +"%d-%b-%Y")

umask 177

# dump the database
mysqldump --user="$user" --password="$passwd" --host="$host" "$db_name" > "$backup_path/$db_name-$date.sql"

# zip contents
zip "$db_name-$date.zip" "$db_name-$date.sql"

# remove back-ups older than 90 days
find "$backup_path" -type f -mtime +90 -exec rm {} \;

echo done


Of course, instead of setting the username/passwd variables in the script itself you can read the DirectAdmin credentials with:

source /usr/local/directadmin/conf/mysql.conf
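
That file defines the DirectAdmin admin credentials (da_admin) as $user and $passwd, so inside the script above the hard-coded values can go and the dump line becomes something like:

source /usr/local/directadmin/conf/mysql.conf
mysqldump --user="$user" --password="$passwd" --host="$host" "$db_name" > "$backup_path/$db_name-$date.sql"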

Bash script to migrate Magento to DirectAdmin via rsync

Migrating Magento is pretty easy. Most of the time you only need to sync the files and export/import the database. But when you want to automate the process so you can test it over and over, a bash script makes things easier.

I often use this bash script (or similar) to migrate a Magento shop from one server to another:

#!/bin/bash

echo dumping sql
ssh root@10.0.0.1 -p 7685 "mysqldump -u magento_user -ppassword magento_db > /home/magento.sql"

echo rsync files
/usr/bin/rsync --exclude 'app/etc/local.xml' --exclude 'media/.htaccess' --exclude 'app/etc/config.xml' -a --delete -e "ssh -p 7685" root@10.0.0.1:/home/domain.com/. /home/user/domains/domain.com/public_html/.

echo rsync db
/usr/bin/rsync -a --delete -e "ssh -p 7685" root@10.0.0.1:/home/magento.sql /home/user/domains/domain.com/

echo importing db
source /usr/local/directadmin/conf/mysql.conf
mysql -u $user -p$passwd new_magentodb < /home/user/domains/domain.com/magento.sql

echo restoring permissions
chown -R user:user /home/user/domains/domain.com/public_html
chown user:user /home/user/domains/domain.com/magento.sql

echo done

This script is executed on the new server, which runs DirectAdmin. The old server (10.0.0.1) is plain CentOS 6.5. I added this server's public RSA key to the authorized_keys on the old server, so it can execute commands there as root without entering a password.

The SSH port on the old server is 7685, which is why it appears in the ssh and rsync commands.

I exclude local.xml because I had to change the MySQL credentials on the new server. The media/.htaccess file gave problems on the new DirectAdmin server as well, so I had to change it and exclude it from future rsyncs.
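
If you need to check which credentials the shop is actually using on the new server, the relevant block lives in app/etc/local.xml (path based on the rsync target above; the number of context lines is a guess):

grep -A6 '<connection>' /home/user/domains/domain.com/public_html/app/etc/local.xml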

After running this script the Magento webshop is migrated from the old server to the new DirectAdmin server and works perfectly!

 

WordPress update error: download failed

Sometimes WordPress can’t update itself or the plug-ins. This often happens after the installation has been moved or the username has been changed.

Downloading update from https://downloads.wordpress.org/release/wordpress-4.3.1-no-content.zip…

Download failed.: Destination directory for file streaming does not exist or is not writable.

Installation Failed

The solution is to set WP_TEMP_DIR in wp-config.php. On a DirectAdmin server the full path looks like this:

define('WP_TEMP_DIR', '/home/username/domains/example.com/public_html/wp-content/uploads');

If the folder exists, and is writable for your user, WordPress will be able to update itself again.
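
If the directory is missing after the move, creating it and handing it back to the right user is usually enough (username and example.com are the same placeholders as above):

mkdir -p /home/username/domains/example.com/public_html/wp-content/uploads
chown -R username:username /home/username/domains/example.com/public_html/wp-content/uploads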

DirectAdmin SQL back-ups via commandline

This is how to make an SQL-only back-up via the command line in DirectAdmin. The first line puts the command in the task.queue; the second line runs the task queue and shows the output. The d400 means debug level 400.

echo "action=backup&append%5Fto%5Fpath=nothing&database%5Fdata%5Faware=yes&email%5Fdata%5Faware=yes&local%5Fpath=%2Fhome%2Fadmin%2Fmysql%5Fbackups&option%30=database&option%31=database%5Fdata&owner=admin&type=admin&value=multiple&what=select&when=now&where=local&who=all" >> /usr/local/directadmin/data/task.queue
/usr/local/directadmin/dataskq d400

When the back-up is completed you will receive an e-mail via the normal DirectAdmin message system.

The back-ups will be stored in /home/admin/mysql_backups.
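
Because the dataskq already runs from cron every minute on a DirectAdmin server, you only need to queue the task to schedule this. A sketch of a nightly set-up (script name and cron file are my own examples; the separate script avoids escaping the % characters, which are special in crontab lines):

cat > /usr/local/bin/queue_sql_backup.sh <<'EOF'
#!/bin/bash
# append the same back-up task as above to the DirectAdmin task queue
echo "action=backup&append%5Fto%5Fpath=nothing&database%5Fdata%5Faware=yes&email%5Fdata%5Faware=yes&local%5Fpath=%2Fhome%2Fadmin%2Fmysql%5Fbackups&option%30=database&option%31=database%5Fdata&owner=admin&type=admin&value=multiple&what=select&when=now&where=local&who=all" >> /usr/local/directadmin/data/task.queue
EOF
chmod +x /usr/local/bin/queue_sql_backup.sh
echo '0 3 * * * root /usr/local/bin/queue_sql_backup.sh' > /etc/cron.d/queue_sql_backup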