Discussion:
[PVE-User] Unable to join cluster
Eric Germann
2018-07-27 18:45:17 UTC
I have two new Proxmox boxes in a virgin cluster. No VMs yet; the only thing set up on them is networking.

I created a cluster on the first one successfully.

However, when I try to join the second node to the cluster, I get the following error:

Starting worker failed: unable to parse worker upid 'UPID:pve02:00001876:00026B6B:5B5B6751:clusterjoin:2001:470:e2fc:10::160:***@pam:' (500)

The nodes are all IPv4 and IPv6 enabled. The cluster IP as shown in the config is IPv6. Is this the issue?

If I put the v6 address in brackets, same error.

If I substitute the IPv4 address, I get the following error:

Etablishing API connection with host '172.28.10.160'
TASK ERROR: 401 401 authentication failure

Thoughts? They had been up for less than an hour when this occurred.

Thanks in advance

EKg
Woods, Ken A (DNR)
2018-07-27 18:58:12 UTC
Eric Germann
2018-07-27 19:18:43 UTC
What's the output of "pvecm status"?
***@pve01:~# pvwcm status
-bash: pvwcm: command not found
***@pve01:~# pvecm status
Quorum information
------------------
Date: Fri Jul 27 19:09:38 2018
Quorum provider: corosync_votequorum
Nodes: 1
Node ID: 0x00000001
Ring ID: 1/56
Quorate: Yes

Votequorum information
----------------------
Expected votes: 1
Highest expected: 1
Total votes: 1
Quorum: 1
Flags: Quorate

Membership information
----------------------
Nodeid Votes Name
0x00000001 1 2001:470:e2fc:10::160 (local)
Same version on both nodes?
Yes
Time synchronized?
Yes. NTP
How are you adding the second node? Try "pvecm add XXX.XXX.XXX.XXX"
Both through the GUI and from the command line. The oddity is that the v6 and v4 errors are different.
Is multicast enabled? Does omping work and not drop packets?
I need to look into that. omping returns no messages.

It's on a dumb switch. I'll reconfigure onto a managed switch where I can enable multicast.
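For reference, a multicast check along the lines the Proxmox documentation suggests might look like this (a sketch; pve01/pve02 are the hostnames from this thread, and the command must be started on all nodes at roughly the same time):

```shell
# Run concurrently on every cluster node; each node sends multicast
# probes for ~10 minutes (600 probes, 1s interval) and reports any
# packet loss seen from the other nodes.
omping -c 600 -i 1 -q pve01 pve02
```

Sustained loss here usually points at the switch filtering multicast (e.g. IGMP snooping without a querier), which matches the "dumb switch" suspicion above.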

EKG
(Also, can somebody fix the typo of the missing "s" in "Etablishing API connection with host XXXXX")
kw
_______________________________________________
pve-user mailing list
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
Thomas Lamprecht
2018-07-30 06:52:45 UTC
(note: re-sent, as I forgot to hit reply-all, so the list wasn't included)
Post by Woods, Ken A (DNR)
I have two new Proxmox boxes in a virgin cluster. No VM’s, etc. The
only thing setup on them is networking.
Post by Woods, Ken A (DNR)
I created a cluster on the first one successfully.
Starting worker failed: unable to parse worker upid
'UPID:pve02:00001876:00026B6B:5B5B6751:clusterjoin:2001:470:e2fc:10::160:***@pam:'
(500)
Post by Woods, Ken A (DNR)
The nodes are all IPv4 and IPv6 enabled. The cluster IP as show in
the config is IPv6. Is this the issue?
Yes, this could indeed be a bug where our UPID parser cannot handle the
encoded IPv6 address...
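The failure mode can be sketched as follows. This is a minimal illustration assuming a naive colon-split; the field layout is inferred from the error message in this thread, not taken from the PVE source, and `parse_upid` plus the `root@pam` user are placeholders:

```python
def parse_upid(upid: str) -> dict:
    # Assumed layout: UPID:node:pid:pstart:starttime:type:id:user@realm:
    # (trailing colon yields an empty final field, hence 9 parts).
    parts = upid.split(":")
    if len(parts) != 9 or parts[0] != "UPID":
        raise ValueError(f"unable to parse worker upid '{upid}'")
    _, node, pid, pstart, starttime, wtype, wid, user, _ = parts
    return {"node": node, "pid": int(pid, 16), "type": wtype,
            "id": wid, "user": user}

# An IPv4 id fits the colon-delimited layout...
ok = "UPID:pve02:00001876:00026B6B:5B5B6751:clusterjoin:172.28.10.160:root@pam:"
parse_upid(ok)

# ...but an IPv6 id injects extra colons, so the split produces
# too many fields and parsing fails, as in the reported error.
bad = "UPID:pve02:00001876:00026B6B:5B5B6751:clusterjoin:2001:470:e2fc:10::160:root@pam:"
try:
    parse_upid(bad)
except ValueError as e:
    print(e)
```

The real parser uses fixed field patterns rather than a plain split, but the underlying ambiguity is the same: colons both delimit UPID fields and appear inside an IPv6 worker id.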
Post by Woods, Ken A (DNR)
If I put the v6 address in brackets, same error.
If I substitute the IPv4 address, I get the following error
Etablishing API connection with host '172.28.10.160'
TASK ERROR: 401 401 authentication failure
Huh, that's a bit weird. Are you sure you have the correct credentials?
Post by Woods, Ken A (DNR)
Thoughts? They haven’t been up more than 1 hr when this occurs.
For now you may want to use the 'pvecm add' CLI command with --use_ssh
as a parameter; with this you should be able to work around the issue
while we take a look at it.
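Run on the joining node, that workaround might look like this (a sketch; 172.28.10.160 is the first node's IPv4 address mentioned earlier in the thread):

```shell
# On pve02, join the cluster via SSH instead of the API,
# sidestepping the UPID/API path that failed above.
pvecm add 172.28.10.160 --use_ssh
```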

cheers,
Thomas
