Discussion: [PVE-User] dual port cross over cable bonding
Adam Weremczuk
2018-10-05 13:40:47 UTC
Hello,

I have 2 servers running Proxmox 5.2.

I've connected Ethernet ports between them with a pair of crossover cables.

When only one port on each server and a single cable are used, connectivity
looks fine with the following config:

-------------------------------------------------------

auto ens1f0
iface ens1f0 inet static
    address 192.168.200.1(2)
    netmask 255.255.255.252

-------------------------------------------------------
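The "(2)" above just means the second server uses the other address in the
/30; its config is the mirror image, i.e. something like:

auto ens1f0
iface ens1f0 inet static
    address 192.168.200.2
    netmask 255.255.255.252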

Then I connected 2 cables and attempted link aggregation:

-------------------------------------------------------

iface ens1f0 inet manual

iface ens1f1 inet manual

auto bond1
iface bond1 inet static
    slaves ens1f0 ens1f1
    address 192.168.200.1
    netmask 255.255.255.252
    bond_miimon 100
#    bond_mode 802.3ad
    bond_mode balance-rr
    bond_xmit_hash_policy layer3+4

-------------------------------------------------------
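To confirm that both ports actually got enslaved after bringing the bond up,
the kernel's bonding status file can be checked, e.g.:

# should list ens1f0 and ens1f1 as slaves, each with "MII Status: up"
cat /proc/net/bonding/bond1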

I tried both the 802.3ad and balance-rr modes.

AFAIK only these two provide link aggregation.
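The mode and hash policy actually in effect can be read back from sysfs,
which helps catch a config line that silently didn't apply, e.g.:

cat /sys/class/net/bond1/bonding/mode
cat /sys/class/net/bond1/bonding/xmit_hash_policy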

Ethtool appears to be happy:

ethtool bond1
Settings for bond1:
    Supported ports: [ ]
    Supported link modes:   Not reported
    Supported pause frame use: No
    Supports auto-negotiation: No
    Advertised link modes:  Not reported
    Advertised pause frame use: No
    Advertised auto-negotiation: No
    Speed: 2000Mb/s
    Duplex: Full
    Port: Other
    PHYAD: 0
    Transceiver: internal
    Auto-negotiation: off
    Link detected: yes

Unfortunately, in either mode pinging across the link fails with
"Destination Host Unreachable".

Pinging each server's own interface works OK.

The same configuration works fine against managed switch ports (LACP/LAG).

So my question is: why is this not working, and is it possible at all?

Regards,
Adam
Josh Knight
2018-10-05 14:21:40 UTC
One thing to check is that your routing table shows the network as going
out of your bond interface instead of the old physical interface:

`ip route show`

It should show something like 192.168.200.0/30 dev bond1.
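If a leftover route via the old physical interface is still present,
something along these lines should clear it (sketch; adjust to what
"ip route show" actually reports):

ip route show
# remove a stale route that still points at the physical port
ip route del 192.168.200.0/30 dev ens1f0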

Another thing you can do is use tcpdump on the receiving end to see if
you're getting ARP/ICMP messages.
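For example, something like this on the receiving server should show whether
ARP requests even make it across, and the sender's neighbour table will show
FAILED/INCOMPLETE entries if they don't:

# on the receiver (listening on bond1 also works)
tcpdump -eni ens1f0 'arp or icmp'

# on the sender
ip neigh show dev bond1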

Without using the bond interface (so, your original example), did you test
both of the physical interfaces, or just one? If you only tested ens1f0 and
not ens1f1, perhaps the request or response is going over ens1f1 and
something is wrong there. Just a thought.
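A rough way to test the second link on its own (with the bond config not
active, and .1/.2 swapped between the two servers) would be:

ip link set ens1f1 up
ip addr add 192.168.200.1/30 dev ens1f1   # use 192.168.200.2/30 on the other server
ping -c 3 192.168.200.2
ip addr del 192.168.200.1/30 dev ens1f1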

Josh
Post by Adam Weremczuk
The same configuration works fine against managed switch ports (LACP/LAG).
So my question is: why is this not working, and is it possible at all?
Adam Weremczuk
2018-10-05 15:01:15 UTC
My config was OK after all, and it's working like a charm now that both
servers have been rebooted.
For some reason "systemctl restart networking" wasn't enough.
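For anyone hitting the same thing, a possible middle ground between
restarting networking and a full reboot might be tearing the bond down and
reloading the bonding driver by hand (untested sketch; only safe if no other
bond is in use on the host):

ifdown bond1
modprobe -r bonding
modprobe bonding
ifup bond1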
Mark Schouten
2018-10-05 15:09:52 UTC
Post by Adam Weremczuk
My config was ok and it's working like a charm after both servers have
been rebooted.
For some reason "systemctl restart networking" wasn't enough.
This is not working anymore, unfortunately...
--
Mark Schouten | Tuxis Internet Engineering
KvK: 61527076 | http://www.tuxis.nl/
T: 0318 200208 | ***@tuxis.nl
Adam Weremczuk
2018-10-05 15:41:25 UTC
The bond operates in 2 gigabit mode:

ethtool bond1
(...)
    Speed: 2000Mb/s
    Duplex: Full
(...)

My working config (will play with it a bit more):

auto bond1
iface bond1 inet static
    slaves ens1f0 ens1f1
    address 192.168.200.1
    netmask 255.255.255.252
    bond_miimon 100
    bond_mode balance-rr
    bond_xmit_hash_policy layer3+4

I've done a simple performance test: copied a file over ssh (which adds its
own overhead).

I got 90-100 MB/s over a single link and 150-160 MB/s over the dual bond.

So it's definitely working and making a substantial difference.
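For a cleaner number without the ssh cipher overhead, something like iperf3
could be used instead (assuming it's installed on both servers):

# on server 1
iperf3 -s

# on server 2
iperf3 -c 192.168.200.1 -t 30

With balance-rr even a single TCP stream is spread over both links, so the
result can exceed a single link's 1 Gbit/s, though packet reordering may
limit how close it gets to 2 Gbit/s.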

Unplugging one cable doesn't break the transfer; the speed temporarily
decreases but then goes up again after the cable is reinserted.
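The failover can also be watched while pulling the cable, e.g.:

# the unplugged slave should flip to "MII Status: down" and back to "up" when reconnected
watch -n1 cat /proc/net/bonding/bond1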