Discussion:
[PVE-User] can't make zfs-zsync working
Jean-Laurent Ivars
2015-09-24 10:44:29 UTC
Permalink
Dear co-users,

I just installed a two-node Proxmox cluster with the latest 3.4 ISO, and since I have a subscription I applied all the updates. The pve-zsync function is really great and I would really like to get it working.

If someone has already managed to make it work, could you please tell me how? I followed all the instructions on this page:
https://pve.proxmox.com/wiki/PVE-zsync

Not working for me :(

First, a bit more about my configuration: I did a full ZFS install and created some ZFS datasets, but I could not add them to the storage configuration with the GUI, so I had to add them directly to storage.cfg:

dir: local
path /var/lib/vz
content images,iso,vztmpl,backup,rootdir
maxfiles 3

zfspool: Disks
pool rpool/disks
content images
sparse

zfspool: BKP_24H
pool rpool/BKP_24H
content images
sparse

zfspool: rpool
pool rpool
content images
sparse

Here is what my ZFS layout looks like:

NAME USED AVAIL REFER MOUNTPOINT
rpool 199G 6,83T 96K /rpool
rpool/BKP_24H 96K 6,83T 96K /rpool/BKP_24H
rpool/ROOT 73,1G 6,83T 96K /rpool/ROOT
rpool/ROOT/pve-1 73,1G 6,83T 73,1G /
rpool/disks 92,5G 6,83T 96K /rpool/disks
rpool/disks/vm-100-disk-1 2,40G 6,83T 2,40G -
rpool/disks/vm-106-disk-1 963M 6,83T 963M -
rpool/disks/vm-107-disk-1 3,61G 6,83T 3,61G -
rpool/disks/vm-108-disk-1 9,29G 6,83T 9,29G -
rpool/disks/vm-110-disk-1 62,9G 6,83T 62,9G -
rpool/disks/vm-204-disk-1 13,4G 6,83T 13,4G -
rpool/swap 33,0G 6,86T 64K -

and, for the example, the configuration of my test machine (106.conf):

balloon: 256
bootdisk: virtio0
cores: 1
ide0: none,media=cdrom
memory: 1024
name: Deb-Test
net0: virtio=52:D5:C1:5C:3F:61,bridge=vmbr1
ostype: l26
scsihw: virtio-scsi-pci
sockets: 1
virtio0: Disks:vm-106-disk-1,cache=writeback,size=5G

So now, here is the result when I try the command given in the wiki:

COMMAND:
zfs list -r -t snapshot -Ho name, -S creation rpool/vm-106-disk-1
GET ERROR:
cannot open 'rpool/vm-106-disk-1': dataset does not exist

I understand the command expects the VM disk to be at the root of the pool, so I moved the disk and also tried sending directly to the pool on the other side, but with no more luck:

send from @ to rpool/vm-106-disk-***@rep_default_2015-09-24_12:30:49 estimated size is 1,26G
total estimated size is 1,26G
TIME SENT SNAPSHOT
warning: cannot send 'rpool/vm-106-disk-***@rep_default_2015-09-24_12:30:49': Relais brisé (pipe)
COMMAND:
zfs send -v rpool/vm-106-disk-***@rep_default_2015-09-24_12:30:49 | zfs recv ouragan:rpool/vm-106-disk-***@rep_default_2015-09-24_12:30:49
GET ERROR:
cannot open 'ouragan:rpool/vm-106-disk-1': dataset does not exist
cannot receive new filesystem stream: dataset does not exist
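A side note on the receiver-side error: judging by the command as logged, `zfs recv` is a strictly local command, so a target like `ouragan:rpool/...` is parsed as a (nonexistent) local dataset name. A remote receive has to pipe the stream over ssh so that `zfs receive` runs on the remote host with a plain dataset name; roughly (a sketch, using the dataset names from the transcript above):

```shell
# send locally, carry the stream over ssh, receive remotely;
# zfs itself has no notion of a host: prefix in dataset names
zfs send rpool/vm-106-disk-1@rep_default_2015-09-24_12:30:49 | \
    ssh ouragan zfs receive rpool/vm-106-disk-1@rep_default_2015-09-24_12:30:49
```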


Now it seems to work on the sender side, but on the receiver side I get the error "dataset does not exist". It is supposed to be created, isn't it?

I am completely new to ZFS, so surely I'm doing something wrong. For example, I don't understand the difference between a volume and a dataset; I searched a lot on the web but nothing helped me understand it clearly, and I suspect this could be the problem.
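A side note on that distinction: both are "datasets" in ZFS terms. A filesystem dataset is mounted as a directory tree, while a volume (zvol) is a raw block device, which is what Proxmox uses for KVM disks. A minimal sketch (the names are only examples, and the commands assume an existing pool called rpool):

```shell
zfs create rpool/mydata          # filesystem dataset: gets a mountpoint, holds files
zfs create -V 5G rpool/myvol     # volume (zvol): a 5G block device,
                                 # exposed as /dev/zvol/rpool/myvol
zfs list -t volume               # volumes show MOUNTPOINT "-", like the
                                 # vm-*-disk-* entries above
```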

Is there a way to tell the command that the disk is not at the root of the pool?

Thanks very much if someone can help me.
P.S. I'm posting on the forum too (not sure which is the best place to ask).

Best regards,


Jean-Laurent Ivars
Responsable Technique | Technical Manager
22, rue Robert - 13007 Marseille
Tel: 09 84 56 64 30 - Mobile: 06.52.60.86.47
Linkedin <http://fr.linkedin.com/in/jlivars/> | Viadeo <http://www.viadeo.com/fr/profile/jean-laurent.ivars> | www.ipgenius.fr <https://www.ipgenius.fr/>
Jean-Laurent Ivars
2015-09-25 15:27:49 UTC
Permalink
Hi everyone,

Nobody answered my previous mail; maybe a lack of inspiration...

I am continuing my investigation (never giving up). If I run the command according to the wiki:

***@cyclone ~ # pve-zsync sync --source 106 --dest ouragan:rpool/BKP_24H --verbose
COMMAND:
zfs list -r -t snapshot -Ho name, -S creation rpool/vm-106-disk-1
GET ERROR:
cannot open 'rpool/vm-106-disk-1': dataset does not exist

I assume this is because my VM disk is not at the root of rpool... so I tried specifying the disk I want to sync:

***@cyclone ~ # pve-zsync sync --source rpool/disks/vm-106-disk-1 --dest ouragan:rpool/BKP_24H --verbose
send from @ to rpool/disks/vm-106-disk-***@rep_default_2015-09-25_16:55:51 estimated size is 1,26G
total estimated size is 1,26G
TIME SENT SNAPSHOT
warning: cannot send 'rpool/disks/vm-106-disk-***@rep_default_2015-09-25_16:55:51': Relais brisé (pipe)
COMMAND:
zfs send -v rpool/disks/vm-106-disk-***@rep_default_2015-09-25_16:55:51 | zfs recv ouragan:rpool/BKP_24H/vm-106-disk-***@rep_default_2015-09-25_16:55:51
GET ERROR:
cannot open 'ouragan:rpool/BKP_24H/vm-106-disk-1': dataset does not exist
cannot receive new filesystem stream: dataset does not exist

Always the same error from the remote side: dataset does not exist.

However, if I create a snapshot and send it myself, it seems to work:

***@cyclone ~ # zfs send rpool/disks/vm-106-disk-***@25-09-2015_16h58m14s | ssh -p 2223 ouragan zfs receive rpool/BKP_24H/vm-106-disk-1
***@cyclone ~ #

No error... and I can see it on the other side (I even tried to boot from it, and it works):

***@ouragan ~ # zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 3,39T 3,63T 96K /rpool
rpool/BKP_24H 964M 3,63T 96K /rpool/BKP_24H
rpool/BKP_24H/vm-106-disk-1 963M 3,63T 963M -
rpool/ROOT 2,37T 3,63T 96K /rpool/ROOT

***@ouragan ~ # zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
rpool/BKP_24H/vm-106-disk-***@25-09-2015_16h58m14s 0 - 963M -

but I can't do it for a second snapshot:

***@cyclone ~ # zfs send rpool/disks/vm-106-disk-***@25-09-2015_17h03m07s | ssh -p 2223 ouragan zfs receive rpool/BKP_24H/vm-106-disk-1
cannot receive new filesystem stream: destination 'rpool/BKP_24H/vm-106-disk-1' exists
must specify -F to overwrite it
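The reason: a full `zfs send` stream can only create a new dataset; once the target exists, later snapshots must be sent as incremental streams. A sketch with the same names as above (the snapshot names are placeholders), assuming the first snapshot still exists on both sides:

```shell
# initial full send creates the target dataset on the remote side
zfs send rpool/disks/vm-106-disk-1@snap1 | \
    ssh -p 2223 ouragan zfs receive rpool/BKP_24H/vm-106-disk-1
# later snapshots go as incrementals (-i <from> <to>); this is the part
# pve-zsync is meant to automate
zfs send -i @snap1 rpool/disks/vm-106-disk-1@snap2 | \
    ssh -p 2223 ouragan zfs receive rpool/BKP_24H/vm-106-disk-1
```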

I could try to figure out how to send multiple snapshots and end up with a script of my own, but it seems a little silly not to use the tool already provided...

FYI, I changed the default SSH port for security reasons, but in case that was the problem I tried going back to the standard one; it changed nothing.

I noticed there is a difference between the command used by the pve-zsync script and the command that works (at least for the first snapshot):
pve-zsync :
zfs send -v rpool/disks/vm-106-disk-***@rep_default_2015-09-25_16:55:51 | zfs recv ouragan:rpool/BKP_24H/vm-106-disk-***@rep_default_2015-09-25_16:55:51
my command :
zfs send rpool/disks/vm-106-disk-***@25-09-2015_16h58m14s | ssh -p 2223 ouragan zfs receive rpool/BKP_24H/vm-106-disk-1

I tried to have a look at the pve-zsync script, since it seems to be a Perl script, but it didn't help me (I only know bash).

If someone comes by with an idea that helps me make progress, I will be very grateful.

Can someone please answer me (just saying hello would be great)? I'm not even sure my mails reach the mailing list (I don't write often).

Best regards,


Jean-Laurent Ivars
Responsable Technique | Technical Manager
22, rue Robert - 13007 Marseille
Tel: 09 84 56 64 30 - Mobile: 06.52.60.86.47
Linkedin <http://fr.linkedin.com/in/jlivars/> | Viadeo <http://www.viadeo.com/fr/profile/jean-laurent.ivars> | www.ipgenius.fr <https://www.ipgenius.fr/>
Wolfgang Bumiller
2015-09-28 06:14:42 UTC
Permalink
I just checked the source - apparently we currently only allow ip(v4) addresses
there, no hostnames (this still needs changing...).
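Until that changes, a workaround is to resolve the hostname yourself and pass the IPv4 address as the destination. A sketch, using `ouragan` as the example hostname:

```shell
# pick the host's first IPv4 address, then hand it to pve-zsync
ipdist=$(getent ahostsv4 ouragan | awk 'NR==1 {print $1}')
pve-zsync sync --source 106 --dest "$ipdist":rpool/BKP_24H --verbose
```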
Michael Rasmussen
2015-09-28 06:18:39 UTC
Permalink
On Mon, 28 Sep 2015 08:14:42 +0200 (CEST)
Post by Wolfgang Bumiller
I just checked the source - apparently we currently only allow ip(v4) addresses
there, no hostnames (this still needs changing...).
And IPv6?
--
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael <at> rasmussen <dot> cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir <at> datanom <dot> net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir <at> miras <dot> org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--------------------------------------------------------------
/usr/games/fortune -es says:
Nature makes boys and girls lovely to look upon so they can be
tolerated until they acquire some sense.
-- William Phelps
Jean-Laurent Ivars
2015-09-28 08:37:56 UTC
Permalink
I had no answer, so I thought the mailing list wasn't working for me. I made a change in the wiki so that other people don't struggle like I did (it is really surprising that the script can't work with a hostname).


If someone is interested, I made a little script. What it does is keep both hosts in sync, create a new log file every day to record all the syncs, and send you an email containing the log file if something bad happens. In fact it's a loop, and the goal is to always have the most recent copies of the VM disks on both sides. Almost as interesting as DRBD, but without the split-brain complications :)
In my case, for example, I have approximately 15 KVM VMs which are not under much load, and the script needs about 1 minute per loop; during busy periods maybe 2 or 3 minutes, surely less than 5... It's all new, so I have no experience with it yet; if someone uses it I would be very happy to hear how it works for them.

It's made to work almost "out of the box" in a full-ZFS Proxmox installation in a two-host cluster only; if your configuration is different you will have to adapt it...

You just have to verify that the following packages are installed: pve-zsync and screen. You also have to put your email address in the variable monmail at the beginning of the script.

The comments in the script explain each step; hope you will understand it :)

#!/bin/bash

monmail="***@mydomain.com"

gosync() {
    ## Start the main loop
    while true; do
        ## Create the log file (making sure the log directory exists first)
        if [ ! -d "/var/log/syncro" ]; then
            mkdir -p /var/log/syncro
        fi
        logfic="/var/log/syncro/syncro-$(date '+%d-%m-%Y').log"
        ## Detect which host we are on and which one is the remote host
        loc=$(hostname)
        dist=$(ls /etc/pve/nodes/ | grep -v "$loc")
        ## Collect the IDs of the VMs using ZFS, local ones then remote ones
        vmloc=$(grep rpool /etc/pve/nodes/"$loc"/qemu-server/*.conf | cut -d / -f 7 | cut -d . -f 1)
        vmdist=$(grep rpool /etc/pve/nodes/"$dist"/qemu-server/*.conf | cut -d / -f 7 | cut -d . -f 1)
        ## Get the IP address of the remote host
        ipdist=$(ping -c 1 "$dist" | gawk -F'[()]' '/PING/{print $2}')
        ## Check that the cluster nodes directory is present
        if [ ! -d "/etc/pve/nodes/" ]; then
            echo "Problem with the cluster at $(date '+%d-%m-%Y_%Hh%Mm%Ss')" >> "$logfic"
            ## Remember that a mail was already sent for this log file, then send it
            ## (the guard works even if /tmp/mail.tmp does not exist yet)
            if [ "$logfic" != "$(cat /tmp/mail.tmp 2>/dev/null)" ]; then
                echo "$logfic" > /tmp/mail.tmp
                mail -s "ZFS sync problem" "$monmail" < "$logfic"
            fi
        fi

        echo "syncing VMs from $loc to $dist" >> "$logfic"
        for n in $vmloc; do
            if test -f "/tmp/stopsync.req"; then
                rm /tmp/stopsync.req
                touch /tmp/stopsync.ok
                exit 0
            else
                echo "sync of VM $n started at $(date '+%d-%m-%Y_%Hh%Mm%Ss')" >> "$logfic"
                pve-zsync sync --source "$n" --dest "$ipdist":rpool/lastsync --maxsnap 1 --verbose >> "$logfic"
                if test $? -eq 0; then
                    echo "sync of VM $n finished at $(date '+%d-%m-%Y_%Hh%Mm%Ss')" >> "$logfic"
                else
                    ## Remember that a mail was already sent for this log file, then send it
                    if [ "$logfic" != "$(cat /tmp/mail.tmp 2>/dev/null)" ]; then
                        echo "$logfic" > /tmp/mail.tmp
                        mail -s "ZFS sync problem" "$monmail" < "$logfic"
                    fi
                fi
            fi
        done

        echo "syncing VMs from $dist to $loc" >> "$logfic"
        for n in $vmdist; do
            if test -f "/tmp/stopsync.req"; then
                rm /tmp/stopsync.req
                touch /tmp/stopsync.ok
                exit 0
            else
                echo "sync of VM $n started at $(date '+%d-%m-%Y_%Hh%Mm%Ss')" >> "$logfic"
                pve-zsync sync --source "$ipdist":"$n" --dest rpool/lastsync --maxsnap 1 --verbose >> "$logfic"
                if test $? -eq 0; then
                    echo "sync of VM $n finished at $(date '+%d-%m-%Y_%Hh%Mm%Ss')" >> "$logfic"
                else
                    ## Remember that a mail was already sent for this log file, then send it
                    if [ "$logfic" != "$(cat /tmp/mail.tmp 2>/dev/null)" ]; then
                        echo "$logfic" > /tmp/mail.tmp
                        mail -s "ZFS sync problem" "$monmail" < "$logfic"
                    fi
                fi
            fi
        done
    done
}

stop() {
    touch /tmp/stopsync.req
    ## Start a new loop to wait until the running sync is finished
    while true; do
        if test -f "/tmp/stopsync.ok"; then
            echo "Sync stopped: OK"
            ## And stop the script itself
            rm /tmp/stopsync.ok
            kill $$
            exit 0
        else
            echo "Stopping..."
            echo "waiting for the running synchronization to finish; this can take a while..."
            sleep 3
        fi
    done
}

case "$1" in
    gosync)
        gosync
        ;;
    start)
        screen -d -m -S syncro-zfs bash -c '/root/scripts/syncro-zfs gosync'
        echo "Synchronization started: OK"
        echo "type 'screen -r syncro-zfs' to see the standard output."
        ;;
    stop)
        stop
        ;;
    *)
        echo "Usage: $0 {start|stop}" >&2
        exit 1
        ;;
esac
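A usage sketch, assuming the script is saved as /root/scripts/syncro-zfs (the path the start action expects) on both nodes:

```shell
chmod +x /root/scripts/syncro-zfs
/root/scripts/syncro-zfs start    # launch the sync loop in a detached screen
screen -r syncro-zfs              # attach to watch the output (Ctrl-a d to detach)
/root/scripts/syncro-zfs stop     # let the current sync finish, then stop the loop
```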


I hope you will like it; please let me know.

Best regards,




Jean-Laurent Ivars
Responsable Technique | Technical Manager
22, rue Robert - 13007 Marseille
Tel: 09 84 56 64 30 - Mobile: 06.52.60.86.47
Linkedin <http://fr.linkedin.com/in/jlivars/> | Viadeo <http://www.viadeo.com/fr/profile/jean-laurent.ivars> | www.ipgenius.fr <https://www.ipgenius.fr/>
_______________________________________________
pve-user mailing list
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user