
Backing up a running virtual machine on Ubuntu 15.10

published Apr 08, 2016 02:45   by admin ( last modified Dec 31, 2016 09:42 )

 

Update 2016-05-09:

Folding back a snapshot in KVM works in Ubuntu 16.10! The whole chain as described below now works. I have not checked whether my 16.10 is running on the hand-installed debs or whether they have been superseded by 16.10 versions.

Update 2016-04-13:

It seems that the last step, folding back a snapshot in KVM, does not work in Ubuntu 15.10, and indeed has a lot of problems on other systems too. Folding back with:

virsh blockcommit guestName vda --active --pivot --wait

fails with:

error: internal error: unable to execute QEMU command 'block-commit': Device 'drive-virtio-disk0' is busy: block device is in use by block job: commit

It works if you stop the guest, but that rather defeats the purpose, since then you are no longer backing up a running machine. One could try Red Hat instead and see if patches have landed there more quickly, since the Qemu/KVM people seem to be aware of the problems.

Summary 2016-04-08: KVM can be used and can do this (but the last step still does not work, see above). However, the configuration and packages in Ubuntu 15.10 are buggy, so you have to replace some things and disable others:

 

  • Install KVM and disable AppArmor.
  • Install by hand updated versions of libvirt-bin (libvirt-bin_1.2.16-2ubuntu11.15.10.4_amd64.deb) and libvirt0 (libvirt0_1.2.16-2ubuntu11.15.10.4_amd64.deb).
  • Install the qemu guest agent in the guest.
  • Open up a communications channel between host and guest by creating a channel device called org.qemu.guest_agent.0 in the configuration of the guest (see the sketch just below).
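
For reference, a minimal sketch of that setup. The package names are the standard Ubuntu ones; guestName is a placeholder for your own guest, and the XML goes inside the <devices> element of the guest definition:

# On the host: install KVM and the management tools
sudo apt-get install qemu-kvm libvirt-bin virt-manager

# Inside the guest: install the guest agent
sudo apt-get install qemu-guest-agent

# In the guest definition (virsh edit guestName), add inside <devices>:
# <channel type='unix'>
#   <target type='virtio' name='org.qemu.guest_agent.0'/>
# </channel>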
 
Read on for more info on how to do this (you can skip to the KVM heading if you don't want the background).

Backing up a running virtual machine on Ubuntu 15.10 turned out to be not so simple. A lot of the information that Google returns is outdated, giving recommendations pertaining to 8-year-old Ubuntu systems or claiming that KVM cannot run GUI systems.

Virtualbox

I started out with VirtualBox, which has something called snapshots. As far as I understand, when you take a snapshot, the main vdi file becomes read-only and all changes go into a new snapshot file. Several snapshots can be made; they can reference each other and branch, and be folded back into the main vdi file and into each other. I followed this guide:

Arne's Blog: Backup of running VirtualBox machines:

#!/bin/bash
VBOXMANAGE="/usr/bin/VBoxManage -q"

if [ $# != 1 ]
then
    echo "Usage: $0 VBoxName"
    exit
fi

echo "Renaming old snapshot..."
$VBOXMANAGE snapshot "$1" edit previous --name deleteme
echo "Renaming current snapshot..."
$VBOXMANAGE snapshot "$1" edit current --name previous
echo "Taking new snapshot..."
$VBOXMANAGE snapshot "$1" take current
echo "Deleting old snapshot..."
$VBOXMANAGE snapshot "$1" delete deleteme
 

But I ended up with errors and inconsistent snapshots that were corrupt according to VirtualBox. Which is probably my fault somehow. Digging deeper I found a blog post referencing the same script, OSC — Backing up Virtual Machines in VirtualBox, whose author writes that he has gone back to shutting down the virtual machines and then backing them up:

Sadly, for me personally, live VirtualBox snapshots haven’t been a terribly robust backup strategy. I’ve unfortunately seen several snapshots fail. Or, worse, had VirtualBox crash while a snapshot was taking place.

Btrfs

So, next solution: why not put the virtual machine on a snapshotting file system that is guaranteed to copy the vdi file in one fell swoop? This may not be enough, in the sense that there might be state stored in other files or in RAM, but it is worth a shot. One option could be btrfs, which I have been experimenting with and which has a very smooth snapshotting mechanism: you can snapshot any subvolume on a btrfs volume, and make cheap copy-on-write copies of individual files. But, timely enough, this was posted on Reddit's /r/linuxadmin:

BTRFS for VM images? : linuxadmin

which thread says things like:

Yes. I do not recommend this. I spent more time maintaining the one client I had running VMs on a BTRFS store than I did the fifty or so others I had running VMs on ZFS stores, for roughly a year. The replication is unreliable, the performance is incredibly hit-or-miss

The poster lauds ZFS though. ZFS is well supported in Ubuntu 15.10, but ZFS snapshotting is not as simple as btrfs's. Just using LVM should also be an option.
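
For comparison, here is roughly what a point-in-time copy looks like with each of these. This is just a sketch; the pool, volume group and path names (tank/vms, vg0, /srv/vms) are made up for the example:

# btrfs: snapshot a subvolume (read-only), or reflink-copy a single file
sudo btrfs subvolume snapshot -r /srv/vms /srv/vms-snap
cp --reflink=always /srv/vms/vm.vdi /srv/vm-backup.vdi

# ZFS: snapshot the dataset holding the images
sudo zfs snapshot tank/vms@nightly

# LVM: a snapshot logical volume with 10G of copy-on-write space
sudo lvcreate -s -L 10G -n vms-snap /dev/vg0/vms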

VM options

So, what about changing the virtual machine from Virtualbox to something else? Three options are:

  • VMware
  • Xen
  • KVM

VMware costs money for the version that does snapshotting. I did, however, vaguely remember that Linode recently switched from one VM technology to another. A Google search turned up that they switched from Xen to KVM.

KVM

OK, KVM it is. What is available snapshot-wise there, then?

Well, firstly there is a way of freezing the state of a VM, backing it up and then unfreezing it again (Backup of running KVM qcow2 VPS - Server Fault). Secondly, it seems to have the same ability to do snapshots with mutable snapshot files as VirtualBox, but there is more documentation giving examples of what I think I want.
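
The freeze-and-copy idea can be sketched like this (my own sketch, not the Server Fault recipe verbatim; guestName and the paths are placeholders):

# Pause the guest so the disk image stops changing
virsh suspend guestName
# Copy the image while the guest is frozen
cp /var/lib/libvirt/images/guestName.qcow2 /backup/guestName.qcow2
# Let the guest continue
virsh resume guestName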

This is the best I have found so far: [libvirt-users] Backup a VM (using live external snapshot and blockcommit):

# Variables should be self-explanatory:
# - VM_DIR is the directory where the VMs are stored
# - VM_NAME is the name of the VM; its qcow2 file is called VM_NAME.qcow2
# - SNAP_FILEPATH is the full path (including name) where the backup
#   should be created

# Create snapshot
virsh snapshot-create-as --domain "$VM_NAME" snap \
    --diskspec vda,file="$VM_DIR/$VM_NAME-snap.qcow2" \
    --disk-only --atomic --no-metadata --quiesce

# Copy frozen backing file
cp "$VM_DIR/$VM_NAME.qcow2" "$SNAP_FILEPATH"

# Blockcommit snapshot back into backing file
virsh blockcommit "$VM_NAME" vda --active --pivot

# Remove snapshot file
rm "$VM_DIR/$VM_NAME-snap.qcow2"

This seems to be essentially the same as KVM – Live backups with qcow2 | Gonzalo Marcote | Open source, open mind.

Some more info here too: Live-disk-backup-with-active-blockcommit - Libvirt Wiki

The above script will only work if the updated libvirt packages below are installed, the qemu guest agent is installed in the guest, and the guest agent channel described in the summary above is configured.

The packages for amd64 are (to the best of my understanding and what I used): libvirt-bin_1.2.16-2ubuntu11.15.10.4_amd64.deb and libvirt0_1.2.16-2ubuntu11.15.10.4_amd64.deb.

Install with:

sudo dpkg -i libvirt*

(assuming you only have those two files in the directory)

Unfortunately, without tuning or disabling AppArmor there is currently no way to flatten the snapshot back into the image, according to the last comment currently at Bug #1517539 “Libvirt KVM can not create snapshot (with qemu-gue...” : Bugs : libvirt package : Ubuntu:

"Strictly speaking, the virsh command that prompted this ticket has been fixed. So, I can now successfully create a live snapshot using the proposed packages. Unfortunately, I have no way to flatten the snapshot back into the base image now."

However, after disabling AppArmor it does seem to work.
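
For reference, the standard Ubuntu way to disable a single AppArmor profile, here applied to libvirtd. This is less drastic than turning AppArmor off entirely, although whether the libvirtd profile is the one getting in the way is my assumption:

sudo ln -s /etc/apparmor.d/usr.sbin.libvirtd /etc/apparmor.d/disable/
sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.libvirtd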

Moving from VirtualBox to KVM

This was disarmingly simple. As long as you follow the up-to-date guides and use the virt-manager GUI, you can actually use the vdi file directly in KVM. However, then you will not get the snapshotting abilities. For those you need the qcow2 disk format, which you can convert to directly from vdi like this:

qemu-img convert -f vdi -O qcow2 vm.vdi vm.qcow2
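
After converting, it's worth sanity-checking the result before booting it:

qemu-img info vm.qcow2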

 


Snapshotting running Virtualbox instances...

published Apr 04, 2016 11:55   by admin ( last modified Apr 05, 2016 10:58 )

...seems very difficult. I tried the script listed here:

Arne's Blog: Backup of running VirtualBox machines

but I ended up with corrupt stuff, including the entire virtual machine. VBoxManage says it cannot delete the deleteme copy, and then it all goes downhill from there.

I am probably doing something wrong, but I am now looking at KVM or Xen to see if they can do snapshots. This is a lot trickier than I thought. Maybe I will need to settle for stopping the VirtualBox VM, backing it up and then starting it again, or somehow backing up its entire state from within itself. But that does not feel right.

I'm also looking into using a snapshotting file system such as btrfs for snapshotting VM state, but btrfs has a lot of problems in this context according to this thread:

BTRFS for VM images? : linuxadmin


Rotate a video with ffmpeg

published Mar 20, 2016 01:40   by admin ( last modified Mar 20, 2016 01:46 )

Tested (with .m4v files). Works. 90 rotates the video 90 degrees clockwise.

 

ffmpeg -i input.mp4 -c copy -metadata:s:v:0 rotate=90 output.mp4


Read more: Link - Can I set rotation field for a video stream with FFmpeg? - Stack Overflow
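
Note that this only sets a rotation flag in the metadata; the frames are untouched, and it is up to the player to honour the flag. If you need the pixels actually rotated, re-encoding with the transpose filter is one way (transpose=1 means 90 degrees clockwise):

ffmpeg -i input.mp4 -vf "transpose=1" output.mp4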


Should we say that religious extremists are people who suffer from superstition?

published Mar 18, 2016 09:25   by admin ( last modified Mar 18, 2016 09:27 )

The expression "religious extremist" is a little odd. A religious extremist is, after all, not more religious than an ordinarily religious person. Just crazier.

In ancient Rome (before Christianity) there was a word for people who believed they could manipulate the gods for their own purposes, or believed themselves to be one. They were called superstitious. They were not called more religious.

A religion, especially one based on religious texts, must be interpreted in light of the time the texts were written in, and also interpreted for the time we live in now. Not doing so is not more religious, it is just crazy.


Using 16GB memory modules in 8GB slots

published Mar 16, 2016 01:23   by admin ( last modified Mar 16, 2016 01:23 )

This often seems to be possible with I'M Intelligent Memory | Beyond Limits, a company that stacks memory units beyond the specifications. A compatibility list is here: compatibilitylist.pdf, and you can buy them here in Europe: Memphis Electronic AG @ Suchergebnis auf Amazon.de (a bit pricey I'd say; it may be cheaper to buy a motherboard with more memory slots).


sshuttle - a dead simple VPN solution

published Mar 13, 2016 10:45   by admin ( last modified Mar 23, 2016 07:46 )

There are a number of open source solutions for VPN, such as OpenVPN, SoftEther and strongSwan. They all take a bit of learning to set up. No, actually, for strongSwan and SoftEther there is a massive amount of learning, and OpenVPN is not trailing that far behind. If you're like me, that is.

And then there is sshuttle, a Python program that uses SSH to make a tunnel to the server. The server does not need to have sshuttle installed: the sshuttle client connects and runs the needed stuff on the server side, in a similar way to e.g. Ansible. I just tested it and it seems to work fine!

Forward all traffic except DNS:

sshuttle -r username@sshserver 0.0.0.0/0

Also forward DNS queries:

sshuttle --dns -r username@sshserver 0/0

I installed it (you can use apt-get, yum or pip) and then just ran it from terminal. Done. It works!
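
Installation really is that short; any of these should work, depending on your distro (double-check the package name for yours):

sudo apt-get install sshuttle    # Debian/Ubuntu
sudo yum install sshuttle        # RHEL/CentOS/Fedora
sudo pip install sshuttle        # anywhere with pip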

I can read through sshuttle's code: it's 3283 lines of Python code (I haven't yet, I might add).

According to the SoftEther site, OpenVPN has 91,000 lines of C code and SoftEther has 378,000 lines of C and C++ code.

Now granted, they do much more. I haven't tested it much yet, but sshuttle looks promising. I wonder a bit about throughput though; gotta check that. And there is no Android client.

sshuttle/sshuttle: Transparent proxy server that works as a poor man's VPN. Forwards over ssh. Doesn't require admin. Works with Linux and MacOS. Supports DNS tunneling.

There is also tinc, by the way, which seems quite interesting in other ways. Untested by me.


How fast are different buses and components in a computer?

published Mar 13, 2016 01:40   by admin ( last modified Apr 05, 2016 01:43 )

Wikipedia has a good page:

List of device bit rates

Observations: 10 Gbit/s to 200 Gbit/s seems to be the limit for transfer speed, depending on whether you are using copper, optics or RAM access over a short distance. Video cards go higher; not sure if that could be useful somehow outside the realm of graphics.

Update 2016-04-05: There is a standard called Thunderbolt that is able to do 10 Gbit/s, 20 Gbit/s, and in Thunderbolt 3, 40 Gbit/s. Thunderbolt seems to be mostly used to drive digital displays, and is mainly serial.

The fastest available SSDs often seem to max out at 5 Gbit/s: Best SSD 2016 - 119 Charts - UserBenchmark


zram and zswap for making your Linux computer faster

published Mar 12, 2016 10:40   by admin ( last modified Mar 14, 2016 03:01 )

Updated and clarified 2016-03-14

Zram and zswap are RAM compressors: you get more space in RAM in exchange for using a bit more CPU. This can make your computer handle bigger tasks without slowing to a crawl or behaving erratically.

Zram is supposed to make low-memory devices use their memory more efficiently. It is apparently used a lot by e.g. TV manufacturers for their embedded Linuxes. Zram seems to work by taking RAM away from the computer and setting it aside as a swap-backed RAM disk that uses lzo compression. So it's basically a RAM memory compressor.

Zram does not need a swap partition on a drive, and in fact it may behave non-optimally if there is one: first zram would fill up, and any pages after that would be swapped to disk, which, if we assume a LIFO usage pattern, would put the pages most in demand on the slow disk.

Zswap, on the other hand, is meant to improve swapping on low-memory systems with rotational hard disks or similar slow swap partitions. It also compresses memory pages and stores them in RAM, but it works together with the swap partitions present on the system and tries to keep, in compressed form in RAM, the pages most likely to be swapped in again soon.

Zswap should probably do well on a system of any size that is strapped for RAM and has a slow swap disk. Now trying it on an 8GB system.

This is what it looks like after having run for a while on an 8GB system that hasn't been restarted since the install of zram. It should have kicked in by now, and it seems like it has shrunk the RAM use quite drastically, with the swap just showing old pages from before zram kicked in. Or I am misunderstanding something about its use. I think zram is its own swap, but it does not show up when doing "swapon -s".
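
For anyone wanting to reproduce the setup, this is roughly how I'd install and inspect it (zram-config is, as I understand it, the Ubuntu package that sets this up; the zswap paths are the standard kernel ones):

# zram on Ubuntu: the zram-config package sets up compressed in-RAM swap
sudo apt-get install zram-config

# List active swap areas
swapon -s
cat /proc/swaps

# zswap instead: add zswap.enabled=1 to the kernel command line, then check:
cat /sys/module/zswap/parameters/enabled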

Increased Performance In Linux With zRam (Virtual Swap Compressed in RAM) ~ Web Upd8: Ubuntu / Linux blog

kernel - zram vs zswap vs zcache Ultimate guide: when to use which one - Ask Ubuntu

However, zswap has a chequered history of being available or not depending on the kernel build, and is regarded as a bit unexplored:

kernel - How can I enable zswap? - Ask Ubuntu

Zswap - ArchWiki

Does 'Zswap' Really Improve Responsiveness ? (Ubuntu 13.10)

kernel - How to verify zswap running? - Ask Ubuntu

You could also go and borrow RAM from another machine with https://oss.oracle.com/projects/tmem/dist/files/RAMster/ramster-howto.txt, which seems to be a part of the Zcache stuff. However, you are unlikely to get better bandwidth than from a local SSD, methinks.


The challenges for Europe - and their solutions

published Mar 11, 2016 12:10   by admin ( last modified Mar 11, 2016 12:13 )

Europe now faces two huge challenges:

  • One is the deteriorating security situation in our near abroad: chaos in the Middle East and a militarily powerful, unpredictable and aggressive Russia
  • The other is a fast-changing labour landscape that one is trying to paper over with quantitative easing, negative interest rates and currency wars, enriching the wrong people

 

Here are the solutions:

  • Europe must be much more assertive in its common foreign policy. Europe must match the U.S. in force projection capabilities, at least in Europe and its near abroad. Increase military spending to 3% of GDP and use it for defence against the east and peacekeeping in the south and south-east
  • North Africa and the Middle East should be stabilised with free trade agreements and other agreements of cooperation. This may include programmes reeking of "neo-colonisation". So be it. If Europe stays out, other forces move in, and a lot of people get killed, as we can see.
  • Quantitative easing, negative interest rates and currency wars must be stopped, and the full force of technology development and globalisation must hit the labour markets
  • A citizen's wage or negative income tax must be implemented to soften the blow to the people, especially for those hit hardest by the winds of change
  • Public services in the fields of policing and mental health must be expanded massively, due to the deterioration already caused by the turmoil, and for the future.

 


ArchLinux' page on btrfs

published Mar 11, 2016 06:09   by admin ( last modified Mar 11, 2016 06:09 )

Link - Maximizing the performance of your Linux machine

published Mar 11, 2016 05:54   by admin ( last modified Mar 11, 2016 05:54 )

Hama BTH-150 voltage and manual

published Mar 08, 2016 11:05   by admin ( last modified Mar 10, 2016 12:13 )

Hama BTH-150 is an old Bluetooth headset that I had lying around. How do you charge it, then?

Straight from the United States' FCC archives:

Microsoft Word - CS8035 User Manual _for approval_.doc - pdf.php

But that doc doesn't mention the voltage.

A further search showed that at least one version of it runs on 6 volts:

Microsoft Word - Class II permissive change letter for FCC.doc - pdf.php

And for my headset it seems to be positive voltage at the tip. Now charging.

Bummer, it only holds charge for a couple of minutes! Ah, well.


A good explanation for what Docker is

published Mar 07, 2016 08:46   by admin ( last modified Mar 07, 2016 08:46 )

Can be found, I think, in the answer to a question in the Docker FAQ:

What does Docker technology add to just plain LXC?

 


How to create a read-only view of files to back up with e.g. rsync

published Mar 06, 2016 12:20   by admin ( last modified Sep 05, 2016 11:24 )

On Linux, it is possible with bindfs to create a read-only view of a part of the file system, so that a separate backup user (e.g. named "backup_user") can read the files for backup purposes, but not alter them.

If you use rsync to copy over the files, you need to configure rsync directory permissions (--chmod=Do+w); see the second part of this article for info on this. If you use rsync you may also restrict the shell of user backup_user so that it can only run rsync commands.

How to create a read-only view of a part of your file system

The machine getting backed up is referred to as the workstation henceforth.

Make a mount point that backup_user should read from

sudo mkdir /mnt/files-to-backup

Install bindfs

sudo apt-get install bindfs

Edit /etc/fstab to contain a line similar to this example. In the example, "/home/auser/files" is the part of the hard disk you want to back up, and "/mnt/files-to-backup" is a read-only view on the workstation of "/home/auser/files" that is only readable by the user "backup_user":

/home/auser/files /mnt/files-to-backup fuse.bindfs perms=0000:u=rD,force-user=backup_user,force-group=nogroup 0 0

Restart the workstation or just mount it with e.g.

sudo mount -a
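
Alternatively, to try it out without touching fstab, the same mount can be done directly on the command line; this should be equivalent to the fstab line above:

sudo bindfs -o perms=0000:u=rD,force-user=backup_user,force-group=nogroup /home/auser/files /mnt/files-to-backup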

How to configure rsync to pull over the files to the server

Limit backup_user to only be able to use rsync commands

sudo apt-get install rssh

Edit /etc/rssh.conf to allow rsync
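
In rssh.conf this is a one-line change; as far as I recall, the stock file ships with the protocol directives commented out:

# in /etc/rssh.conf, uncomment (or add):
allowrsync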

Set the shell of "backup_user" to rssh:

sudo chsh -s /usr/bin/rssh backup_user

Schedule the server to pull rsync transfers from the workstation

On the server:

Let's rsync from the server.

Put this in the crontab with "crontab -e" (select "nano" if it asks you) to run it every hour, at 42 minutes past the hour:

42 *  *   *   *    /usr/bin/rsync -r --delete --relative --progress  --chmod=Do+w backup_user@workstation.ip.address:/mnt/files-to-backup/ /mnt/volume1/synced_files/

A little about the switches used for rsync:

-r
Recursive transfer

--delete
Delete files at the target that no longer exist at the source. This is if you want a mirror.

--relative
Makes it easier to use exclude filters

--progress
Not strictly necessary, but good for debugging the whole command line

--chmod=Do+w
Do+w tells rsync to add write permissions to directories. We are reading from a read-only image of the file system, and if we kept those permissions for directories, rsync could not place anything inside the top-level directories.

Here is an example of an exclude file, which you could have on the crontab command line like so:

--exclude-from=/path/to/excludes.txt

Example contents of excludes.txt:

- /mnt/files-to-backup/.npm
- /mnt/files-to-backup/.mozilla
- /mnt/files-to-backup/Downloads
- /mnt/files-to-backup/.wine
- /mnt/files-to-backup/.cache
- node_modules/
- .git/
- .svn/

 

 


Peer to peer in-browser file sharing

published Feb 29, 2016 11:45   by admin ( last modified Feb 29, 2016 11:47 )

Somebody posted about such a service on Reddit and user "meekstadt" replied:

User "gfody" added:

All untested by me at this point in time.


When does cron.hourly and cron.daily run?

published Feb 29, 2016 08:48   by admin ( last modified Feb 29, 2016 08:48 )

On one of my machines it seems to be at 17 minutes past the hour, and at 06:25 in the morning, respectively. No idea if this is representative though.
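
On Debian and Ubuntu those times are defined in /etc/crontab, so you can check directly; the stock file looks roughly like this:

cat /etc/crontab
# typically contains lines like:
# 17 *  * * *  root  cd / && run-parts --report /etc/cron.hourly
# 25 6  * * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )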


Backup/Time machine tools for Linux & btrfs

published Feb 29, 2016 05:25   by admin ( last modified Mar 04, 2016 04:56 )

A good simple guide to btrfs

published Feb 25, 2016 11:21   by admin ( last modified Feb 25, 2016 11:21 )

The myth of the artist

published Feb 22, 2016 01:20   by admin ( last modified Feb 22, 2016 01:31 )

On big posters in the metro you can now see pictures of artists, with texts about how fantastic they are. But I think you miss a lot by only looking at the artist.

An advertisement for an exhibition at Nationalmuseum; the bold text reads:
"Entrepreneur. Genius. Avant-gardist. Norm-breaker. Visionary. Traveller."

More interesting in many ways are the culture and society that gave the artist the opportunity to flourish. Artistic geniuses have surely been born all over the place, but many never got the chance to show what they are capable of.

It is no coincidence that Rome, Florence, Flanders, Paris, London, Vienna and California have been centres at different points in time. Artistic expression is to a great extent an expression of the prosperity and the tolerance that existed, or exists, in those places.

Here is an interesting video about where notable people have moved through the ages, when they could:

 

The migration into California towards the end is massive.


Does a de-duplicating backup program make sense in the age of ZFS et al?

published Feb 19, 2016 06:40   by admin ( last modified Oct 01, 2016 11:15 )

Disclaimer: I haven't tested a de-duplicating FS yet. However, I have tested Obnam, a de-duplicating backup system, and it did not work that well (could be operator error, but still). I could also test Attic and Borg, other de-duplicating backup systems.

But in a way I'd be happy to just use Rsync. The upside with Obnam, Attic and Borg is that they de-duplicate your data, which is great if you have the same files on several computers.

But come to think of it, there are de-duplicating file systems such as ZFS and btrfs. Why not use Rsync, make new copies galore on the backup server, and have the file system take care of the de-duplication? My guess is that the code in at least ZFS, and maybe also btrfs (I do not know much about it), is better than the code in the above-mentioned de-duplicating backup systems; not for lack of trying on the backup systems' part, but simply because they have had fewer resources.
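
The idea would look something like this; the pool and dataset names (tank/backups) and the paths are made up for the example:

# On the backup server: let ZFS de-duplicate identical blocks
sudo zfs create tank/backups
sudo zfs set dedup=on tank/backups

# Then back up each machine with plain rsync into its own directory
rsync -a --delete workstation1:/home/ /tank/backups/workstation1/
rsync -a --delete workstation2:/home/ /tank/backups/workstation2/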

Update 2016-02-22: At least for ZFS, you seem to need about 5GB of extra RAM or SSD for each TB on the volume: ZFS Deduplication: To Dedupe or not to Dedupe...

That does not work for the servers I have in mind. btrfs seems less mature than ZFS, so I will not use it. Conclusion: it is probably better to just have enough disk space and "eat" the cost of having several copies of data.