I don't know your topology; I'm assuming you're going from nodeA ->
switch -> nodeB? Make sure that the entire path is using round-robin (RR).
You can verify this with the interface counters on the various hops;
if a single hop is not balancing correctly, it will limit the throughput.
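For example (a minimal sketch; bond0/eno1/eno2 are placeholder names, adjust to
your setup), you can compare per-slave counters on each node while a transfer runs:

cat /proc/net/bonding/bond0   # bond mode, active slaves, per-slave stats
ip -s link show eno1          # per-NIC packet/byte counters
ip -s link show eno2          # in balance-rr both should grow at a similar rate

On the switch, checking the per-port counters of the trunk members tells you the
same thing for that hop.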
Post by Gilberto Nunes
So I tried balance-rr with a LAG on the switch and I still get only 1 Gbps:
pve-ceph02:~# iperf3 -c 10.10.10.100
Connecting to host 10.10.10.100, port 5201
[ 4] local 10.10.10.110 port 52674 connected to 10.10.10.100 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   116 MBytes   974 Mbits/sec   32    670 KBytes
[  4]   1.00-2.00   sec   112 MBytes   941 Mbits/sec    3    597 KBytes
[  4]   2.00-3.00   sec   112 MBytes   941 Mbits/sec    3    509 KBytes
[  4]   3.00-4.00   sec   112 MBytes   941 Mbits/sec    0    660 KBytes
[  4]   4.00-5.00   sec   112 MBytes   941 Mbits/sec    6    585 KBytes
[  4]   5.00-6.00   sec   112 MBytes   941 Mbits/sec    0    720 KBytes
[  4]   6.00-7.00   sec   112 MBytes   942 Mbits/sec    3    650 KBytes
[  4]   7.00-8.00   sec   112 MBytes   941 Mbits/sec    4    570 KBytes
[  4]   8.00-9.00   sec   112 MBytes   941 Mbits/sec    0    708 KBytes
[  4]   9.00-10.00  sec   112 MBytes   941 Mbits/sec    8    635 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1.10 GBytes   945 Mbits/sec   59            sender
[  4]   0.00-10.00  sec  1.10 GBytes   942 Mbits/sec                 receiver
iperf Done.
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
Post by Josh Knight
Depending on your topology/configuration, you could try to use bond-rr mode
in Linux instead of 802.3ad.
Bond-rr mode is the only mode that will put packets for the same mac/ip/port
tuple across multiple interfaces. This will work well for UDP, but TCP may
suffer performance issues because packets can end up out of order and trigger
TCP retransmits. There are some examples on this page; you may need to do
some testing before deploying it to ensure it does what you want.
https://wiki.linuxfoundation.org/networking/bonding#bonding-driver-options
As others have stated, you can adjust the hashing, but a single flow
(mac/ip/port combination) will still end up limited to 1Gbps without using
round-robin mode.
Post by mj
Hi,
Yes, it is our understanding that if the hardware (switch) supports it,
"bond-xmit-hash-policy layer3+4" gives you the best spread.
But it will still give you 4 'lanes' of 1Gbps each. Ceph will connect using
different ports, IPs, etc., and each connection should use a different
lane, so altogether you should see a network throughput that
(theoretically) could be as high as 4Gbps.
That is how we understand it.
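As an illustration only (bond0, the slave names and the address are placeholders),
the matching LACP bond with layer3+4 hashing on the Proxmox side would look
roughly like this:

auto bond0
iface bond0 inet static
    address 10.10.10.110/24
    bond-slaves eno1 eno2 eno3 eno4
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4

The switch ports then need to be members of an LACP trunk, as in the Procurve
output below.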
Procurve chassis(config)# show trunk
Load Balancing Method: L3-based (default)

 Port | Name                             Type     | Group  Type
 ---- + -------------------------------- -------- + ------ --------
 D1   | Link to prn004 - 1               10GbE-T  | Trk1   LACP
 D2   | Link to prn004 - 2               10GbE-T  | Trk1   LACP
 D3   | Link to prn005 - 1               10GbE-T  | Trk2   LACP
 D4   | Link to prn005 - 2               10GbE-T  | Trk2   LACP
Procurve chassis(config)# trunk-load-balance L4
So the load balancing is now based on Layer 4 instead of L3.
Besides these details, I think what you are doing should work nicely.
MJ
If using standard 802.3ad (LACP) you will always get only the
performance of a single link between one host and another.
Using "bond-xmit-hash-policy layer3+4" might get you a better
performance but is not standard LACP.
Post by Gilberto Nunes
So what bond mode am I supposed to use in order to get more speed? I mean,
how do I join the NICs to get 4Gbps? I will use Ceph!
I know I should use 10GbE, but I don't have it right now.
Thanks
Post by Gilberto Nunes
Isn't 802.3ad supposed to aggregate the speed of all available NICs??
Post by Dietmar Maurer
No, not really. One connection is limited to 1Gbps. If you start more
parallel connections you can gain more speed.
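You can see this effect with parallel iperf3 streams (same target as the test
above); with layer3+4 hashing each stream gets its own TCP source port and can
hash onto a different slave:

iperf3 -c 10.10.10.100 -P 4

There is no guarantee that all streams land on different links, but the
aggregate can exceed the 1Gbps of a single flow.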
_______________________________________________
pve-user mailing list
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user