Discussion:
[PVE-User] pve-zsync error in path
Tonči Stipičević
2018-06-23 20:57:59 UTC
Permalink
Hello to all,
unfortunately I'm experiencing this "ERROR: in path" issue and cannot
figure out why. My scenario is:
My hosts form a 3-node cluster. I want to use pve-zsync in order to do
periodic fast incremental backups, which is not possible to achieve
with classical vzdump.
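(For reference, my understanding is that "pve-zsync create" only registers the job, and the periodic runs then come from a cron entry it writes to /etc/cron.d/pve-zsync, roughly like the line below; the 15-minute interval and exact option order are my assumption, not copied from a live system:)

*/15 * * * * root pve-zsync sync --source 70020 --dest 10.20.28.4:bckpool/backup --name bck1 --maxsnap 7 --method ssh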

1. This is the source VM config:
bootdisk: virtio0
cores: 2
ide2: rn314:iso/ubuntu-16.04.3-server-amd64.iso,media=cdrom
memory: 1024
name: u1604server-zfs
net0: virtio=1A:5D:FC:7C:FC:98,bridge=vmbr1
numa: 0
ostype: l26
scsihw: virtio-scsi-pci
smbios1: uuid=ba277e4f-9bc4-455b-bf9c-a8c96bb32391
sockets: 1
vga: qxl
virtio0: zfs1:vm-70020-disk-1,size=7G
/etc/pve/qemu-server/70020.conf

2. This is the destination zpool (on server 10.20.28.4):
***@pvesuma03:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
bckpool 444K 449G 96K /bckpool
bckpool/backup 96K 449G 96K /bckpool/backup
***@pvesuma03:~#



3. This is the command I run on the "source" node:
***@pvesuma01:~# pve-zsync create -source 70020 -dest
10.20.28.4:bckpool/backup --verbose --maxsnap 7 --name bck1
ERROR: in path
***@pvesuma01:~#
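For what it's worth, the manual equivalent of a first (full) sync should be something like the commands below, assuming the PVE storage "zfs1" is a local ZFS pool of the same name and that root has SSH access to the target (both assumptions on my part):

# take a snapshot of the zvol, then send it to the backup dataset
zfs snapshot zfs1/vm-70020-disk-1@manualtest
zfs send zfs1/vm-70020-disk-1@manualtest | ssh root@10.20.28.4 zfs receive bckpool/backup/vm-70020-disk-1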

The latest PVE is running:

proxmox-ve: 5.2-2 (running kernel: 4.15.17-3-pve)
pve-manager: 5.2-2 (running version: 5.2-2/b1d1c7f4)
pve-kernel-4.15: 5.2-3
pve-kernel-4.13: 5.1-45
pve-kernel-4.15.17-3-pve: 4.15.17-13
pve-kernel-4.15.17-2-pve: 4.15.17-10
pve-kernel-4.15.17-1-pve: 4.15.17-9
pve-kernel-4.15.15-1-pve: 4.15.15-6
pve-kernel-4.13.16-3-pve: 4.13.16-49
pve-kernel-4.13.16-2-pve: 4.13.16-48
pve-kernel-4.13.16-1-pve: 4.13.16-46
pve-kernel-4.13.13-6-pve: 4.13.13-42
pve-kernel-4.13.13-5-pve: 4.13.13-38
pve-kernel-4.13.13-4-pve: 4.13.13-35
pve-kernel-4.13.13-2-pve: 4.13.13-33
pve-kernel-4.13.13-1-pve: 4.13.13-31
pve-kernel-4.13.8-3-pve: 4.13.8-30
pve-kernel-4.13.8-2-pve: 4.13.8-28
pve-kernel-4.13.4-1-pve: 4.13.4-26
pve-kernel-4.10.17-4-pve: 4.10.17-24
pve-kernel-4.10.17-3-pve: 4.10.17-23
pve-kernel-4.10.17-2-pve: 4.10.17-20
pve-kernel-4.10.17-1-pve: 4.10.17-18
pve-kernel-4.10.15-1-pve: 4.10.15-15
pve-kernel-4.4.67-1-pve: 4.4.67-92
pve-kernel-4.4.62-1-pve: 4.4.62-88
pve-kernel-4.4.59-1-pve: 4.4.59-87
pve-kernel-4.4.49-1-pve: 4.4.49-86
pve-kernel-4.4.44-1-pve: 4.4.44-84
pve-kernel-4.4.40-1-pve: 4.4.40-82
pve-kernel-4.4.35-2-pve: 4.4.35-79
pve-kernel-4.4.35-1-pve: 4.4.35-77
pve-kernel-4.4.24-1-pve: 4.4.24-72
pve-kernel-4.4.21-1-pve: 4.4.21-71
pve-kernel-4.4.19-1-pve: 4.4.19-66
pve-kernel-4.4.16-1-pve: 4.4.16-64
pve-kernel-4.4.15-1-pve: 4.4.15-60
pve-kernel-4.4.13-2-pve: 4.4.13-58
pve-kernel-4.4.13-1-pve: 4.4.13-56
pve-kernel-4.4.10-1-pve: 4.4.10-54
pve-kernel-4.2.6-1-pve: 4.2.6-36
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-4
libpve-common-perl: 5.0-33
libpve-guest-common-perl: 2.0-16
libpve-http-server-perl: 2.0-9
libpve-storage-perl: 5.0-23
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-3
lxcfs: 3.0.0-1
novnc-pve: 1.0.0-1
openvswitch-switch: 2.7.0-2
proxmox-widget-toolkit: 1.0-19
pve-cluster: 5.0-27
pve-container: 2.0-23
pve-docs: 5.2-4
pve-firewall: 3.0-12
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-6
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-5
pve-zsync: 1.6-16
qemu-server: 5.0-28
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.9-pve1~bpo9


I have no idea so far what could cause this error.

Thank you very much in advance for your help.

BR
Tonci
Christian Meiring
2018-06-23 23:04:37 UTC
Permalink
Hello Tonči,

Did you use -dest instead of --dest, or is this just a typo in your mail?
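If so, the double-dash form would be (untested here, with your values kept as-is):

pve-zsync create --source 70020 --dest 10.20.28.4:bckpool/backup --verbose --maxsnap 7 --name bck1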

Christian
Post by Tonči Stipičević
***@pvesuma01:~# pve-zsync create -source 70020 -dest
10.20.28.4:bckpool/backup --verbose --maxsnap 7 --name bck1
ERROR: in path
Tonči Stipičević
2018-11-27 07:44:46 UTC
Permalink
Hi to all,

I've just upgraded my lab 3-node HA cluster from 5.2-10 to 5.2-12 and
the cluster went down. No node sees the others.

Is there any way to troubleshoot this ?


************


Nov 27 08:42:09 pvesuma01 pvesr[32648]: trying to acquire cfs lock
'file-replication_cfg' ...
Nov 27 08:42:10 pvesuma01 pvesr[32648]: error with cfs lock
'file-replication_cfg': no quorum!
Nov 27 08:42:10 pvesuma01 systemd[1]: pvesr.service: Main process
exited, code=exited, status=13/n/a
Nov 27 08:42:10 pvesuma01 systemd[1]: Failed to start Proxmox VE
replication runner.
Nov 27 08:42:10 pvesuma01 systemd[1]: pvesr.service: Unit entered failed
state.

************


Thank you very much in advance and

BR

Tonci

Dmitry Petuhov
2018-11-27 07:47:34 UTC
Permalink
Check that corosync is running on all nodes.

And check the cluster status with:
pvecm status
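If corosync does not run somewhere, something like this on each node usually shows why (standard systemd tooling, nothing PVE-specific):

systemctl status corosync    # is the daemon running at all?
journalctl -b -u corosync    # startup errors since last boot
pvecm status                 # membership and quorum as PVE sees it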
Tonči Stipičević
2018-11-27 09:52:59 UTC
Permalink
No, corosync is not working on two of the nodes:

Job for corosync.service failed because a timeout was exceeded.
See "systemctl status corosync.service" and "journalctl -xe" for details.

TASK ERROR: command 'systemctl start corosync' failed: exit code 1

Nov 27 10:47:01 pvesuma03 pvesr[16526]: trying to acquire cfs lock
'file-replication_cfg' ...
Nov 27 10:47:02 pvesuma03 pmxcfs[2598]: [quorum] crit: quorum_initialize
failed: 2
Nov 27 10:47:02 pvesuma03 pmxcfs[2598]: [confdb] crit: cmap_initialize
failed: 2
Nov 27 10:47:02 pvesuma03 pmxcfs[2598]: [dcdb] crit: cpg_initialize
failed: 2
Nov 27 10:47:02 pvesuma03 pmxcfs[2598]: [status] crit: cpg_initialize
failed: 2
Nov 27 10:47:02 pvesuma03 pvesr[16526]: trying to acquire cfs lock
'file-replication_cfg' ...
Nov 27 10:47:03 pvesuma03 pvesr[16526]: trying to acquire cfs lock
'file-replication_cfg' ...
Nov 27 10:47:04 pvesuma03 pvesr[16526]: trying to acquire cfs lock
'file-replication_cfg' ...
Nov 27 10:47:05 pvesuma03 pvesr[16526]: trying to acquire cfs lock
'file-replication_cfg' ...
Nov 27 10:47:06 pvesuma03 pvesr[16526]: trying to acquire cfs lock
'file-replication_cfg' ...
Nov 27 10:47:07 pvesuma03 pvesr[16526]: trying to acquire cfs lock
'file-replication_cfg' ...
Nov 27 10:47:08 pvesuma03 pmxcfs[2598]: [quorum] crit: quorum_initialize
failed: 2
Nov 27 10:47:08 pvesuma03 pmxcfs[2598]: [confdb] crit: cmap_initialize
failed: 2
Nov 27 10:47:08 pvesuma03 pmxcfs[2598]: [dcdb] crit: cpg_initialize
failed: 2
Nov 27 10:47:08 pvesuma03 pmxcfs[2598]: [status] crit: cpg_initialize
failed: 2
Nov 27 10:47:08 pvesuma03 pvesr[16526]: trying to acquire cfs lock
'file-replication_cfg' ...
Nov 27 10:47:09 pvesuma03 pvesr[16526]: trying to acquire cfs lock
'file-replication_cfg' ...
Nov 27 10:47:10 pvesuma03 pvesr[16526]: error with cfs lock
'file-replication_cfg': no quorum!
Nov 27 10:47:10 pvesuma03 systemd[1]: pvesr.service: Main process
exited, code=exited, status=13/n/a
Nov 27 10:47:10 pvesuma03 systemd[1]: Failed to start Proxmox VE
replication runner.
Tonči Stipičević
2018-11-27 10:41:27 UTC
Permalink
Thanks for the help.

This thread solved everything:

https://forum.proxmox.com/threads/after-upgrade-to-5-2-11-corosync-does-not-come-up.49075/
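(For anyone finding this later: after applying the fix from that thread, generic sanity checks would be the following; these are mine, not taken from the thread itself:)

pvecm status                 # should report "Quorate: Yes" with all 3 nodes
systemctl status corosync pve-cluster
journalctl -b -u pvesr       # the replication runner should succeed again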
