2018-09-26 15:26:07 UTC
Hi all,
I am going to make some changes to our proxmox networking, and I'd like
some fresh eyes to take a look at my plans... :-)
We have three pve hosts; the ceph network is a meshed 10G setup,
directly connecting the pve hosts to each other. Client access is on a
'regular' 1G NIC.
Sample /etc/network/interfaces (server pve10) for current config:
# client access
auto vmbr0
iface vmbr0 inet static
    address 192.168.89.10
    netmask 255.255.255.0
    gateway 192.168.89.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

# to pve2/ceph
auto eth2
iface eth2 inet static
    address 10.10.89.1
    netmask 255.255.255.0
    mtu 9000
    up route add -net 10.10.89.2 netmask 255.255.255.255 dev eth2
    down route del -net 10.10.89.2 netmask 255.255.255.255 dev eth2

# to pve3/ceph
auto eth3
iface eth3 inet static
    address 10.10.89.1
    netmask 255.255.255.0
    mtu 9000
    up route add -net 10.10.89.3 netmask 255.255.255.255 dev eth3
    down route del -net 10.10.89.3 netmask 255.255.255.255 dev eth3
See https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server for
more info on the meshed network. It works very nicely.
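(For context: the important bit in the mesh config is the pair of /32
host routes that the up/down lines create. Roughly, "ip route" on pve10
then shows something like the two lines below in addition to the usual
connected /24 routes, so the /32s decide which direct link each peer is
reached over:)

10.10.89.2 dev eth2 scope link
10.10.89.3 dev eth3 scope link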
Now we want to change the networking from the above to dual 10G LACP
bonds per server, going to our HP ProCurve chassis.
So, in order to change as little as possible, I would like to keep the
ceph config the same, meaning: retain all IPs/config, and use something
like this:
auto bond0
iface bond0 inet manual
    slaves eth2 eth3
    bond_miimon 100
    bond_mode 802.3ad
    bond_xmit_hash_policy layer3+4

# client access
allow-hotplug vmbr0
auto vmbr0
iface vmbr0 inet static
    address 192.168.89.10
    netmask 255.255.255.0
    gateway 192.168.89.1
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0

# ip for access to other cephs
iface vmbr0 inet static
    address 10.10.89.1
    netmask 255.255.255.0
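(I'm not 100% sure two 'iface vmbr0 inet static' stanzas for the same
bridge will be accepted by our ifupdown version; if not, I assume the
fallback would be to add the second address from the main stanza
instead, something like:)

iface vmbr0 inet static
    address 192.168.89.10
    netmask 255.255.255.0
    gateway 192.168.89.1
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
    post-up ip addr add 10.10.89.1/24 dev vmbr0
    pre-down ip addr del 10.10.89.1/24 dev vmbr0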
Then cable eth2 / eth3 to the lacp ports on the HP procurve.
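(On the switch side, assuming the two server ports are for example A1
and A2, I expect the ProCurve config to be along the lines of:)

trunk A1-A2 trk1 lacp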
I am assuming that this would make all traffic (ceph and VMs) flow over
the same two 10G LACP links, and that neither ceph nor the VMs would
notice any difference. I'm also assuming that no other config changes
would be required at all.
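(After the switchover I would verify with something like:)

cat /proc/net/bonding/bond0   # both slaves up, 802.3ad negotiated
ip -br addr show vmbr0        # should list both 192.168.89.10/24 and 10.10.89.1/24
ip route get 10.10.89.2       # should go out dev vmbr0 now, no more per-NIC host routes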
So, any errors in the above reasoning? I realise we cannot have jumbo
frames in this setup, but I don't think I mind. I also realise that we
currently have ceph and VM traffic separated, and in the new situation
we no longer would, but on the ceph mailing list this seems accepted
(perhaps even recommended) nowadays for small networks like ours.
So... feedback to all of the above please... :-)
Thanks!
MJ