Discussion:
[PVE-User] about Intel 82576 nic multiple queue issue.
lyt_yudi
2014-09-20 08:01:47 UTC
Hi, all,

My server uses Intel 82576 NICs with Proxmox VE 3.3 (the latest release).

There is only one queue per NIC in PVE 3.3:
# cat /proc/interrupts |grep eth2
64: 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 IR-PCI-MSI-edge eth2
65: 168492 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 IR-PCI-MSI-edge eth2-TxRx-0
# cat /proc/interrupts |grep eth3
67: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 IR-PCI-MSI-edge eth3
68: 1830 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 IR-PCI-MSI-edge eth3-TxRx-0

But in CentOS 6.5 there are 8 queues per NIC:
# cat /proc/interrupts |grep eth2
60: 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 IR-PCI-MSI-edge eth2
61: 37862 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 IR-PCI-MSI-edge eth2-TxRx-0
62: 47671 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 IR-PCI-MSI-edge eth2-TxRx-1
63: 31684 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 IR-PCI-MSI-edge eth2-TxRx-2
64: 39792 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 IR-PCI-MSI-edge eth2-TxRx-3
65: 27115 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 IR-PCI-MSI-edge eth2-TxRx-4
66: 33468 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 IR-PCI-MSI-edge eth2-TxRx-5
67: 33197 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 IR-PCI-MSI-edge eth2-TxRx-6
68: 37695 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 IR-PCI-MSI-edge eth2-TxRx-7

# lspci | grep Intel
05:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
05:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)

What am I doing wrong? Or doesn't PVE support this type of NIC?

# lspci -vvv
…...
05:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
Subsystem: Intel Corporation Gigabit ET Quad Port Server Adapter
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 64 bytes
Interrupt: pin A routed to IRQ 47
Region 0: Memory at dbf40000 (32-bit, non-prefetchable) [size=128K]
Region 1: Memory at dc000000 (32-bit, non-prefetchable) [size=4M]
Region 2: I/O ports at ecc0 [size=32]
Region 3: Memory at dbf38000 (32-bit, non-prefetchable) [size=16K]
Capabilities: [40] Power Management version 3
Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=1 PME-
Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
Address: 0000000000000000 Data: 0000
Masking: 00000000 Pending: 00000000
Capabilities: [70] MSI-X: Enable+ Count=10 Masked-
Vector table: BAR=3 offset=00000000
PBA: BAR=3 offset=00002000
Capabilities: [a0] Express (v2) Endpoint, MSI 00
DevCap: MaxPayload 512 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us
ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+
DevCtl: Report errors: Correctable- Non-Fatal+ Fatal+ Unsupported+
RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-
MaxPayload 256 bytes, MaxReadReq 512 bytes
DevSta: CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr+ TransPend-
LnkCap: Port #2, Speed 2.5GT/s, Width x4, ASPM L0s L1, Latency L0 <4us, L1 <64us
ClockPM- Surprise- LLActRep- BwNot-
LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 2.5GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
DevCap2: Completion Timeout: Range ABCD, TimeoutDis+
DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis-
LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-, Selectable De-emphasis: -6dB
Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
Compliance De-emphasis: -6dB
LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
Capabilities: [100 v1] Advanced Error Reporting
UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt+ UnxCmplt+ RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
UESvrt: DLP+ SDES- TLP+ FCP+ CmpltTO+ CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
CEMsk: RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ NonFatalErr+
AERCap: First Error Pointer: 14, GenCap- CGenEn- ChkCap- ChkEn-
Capabilities: [140 v1] Device Serial Number 00-1b-21-ff-ff-7c-41-08
Capabilities: [150 v1] Alternative Routing-ID Interpretation (ARI)
ARICap: MFVC- ACS-, Next Function: 1
ARICtl: MFVC- ACS-, Function Group: 0
Capabilities: [160 v1] Single Root I/O Virtualization (SR-IOV)
IOVCap: Migration-, Interrupt Message Number: 000
IOVCtl: Enable- Migration- Interrupt- MSE- ARIHierarchy-
IOVSta: Migration-
Initial VFs: 8, Total VFs: 8, Number of VFs: 8, Function Dependency Link: 00
VF offset: 384, stride: 2, Device ID: 10ca
Supported Page Size: 00000553, System Page Size: 00000001
Region 0: Memory at 00000000dbc00000 (64-bit, non-prefetchable)
Region 3: Memory at 00000000dbc20000 (64-bit, non-prefetchable)
VF Migration: offset: 00000000, BIR: 0
Kernel driver in use: igb

…...

# pveversion -v
proxmox-ve-2.6.32: 3.2-136 (running kernel: 2.6.32-32-pve)
pve-manager: 3.3-1 (running version: 3.3-1/a06c9f73)
pve-kernel-2.6.32-32-pve: 2.6.32-136
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-34
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-5
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
lyt_yudi
2014-09-20 08:04:04 UTC
Post by lyt_yudi
My server uses Intel 82576 NICs with Proxmox VE 3.3 (the latest release).
With pve-kernel-3.10.0-4-pve I have the same issue.
lyt_yudi
2014-09-22 01:25:02 UTC
Hi, Dietmar, Aderumier,

Can you help me, or give me some ideas?
What am I doing wrong? Or doesn't PVE support this type of NIC?
Dhaussy Alexandre
2014-09-22 17:31:25 UTC
Hello,

I also have some trouble with dropped/underrun packets... maybe this is related.
My card is an Intel 82580 Gigabit.

I didn't check for queues...

***@proxmoxt2:~# grep eth6 /proc/interrupts
102: 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 IR-PCI-MSI-edge eth6
103: 1447352150 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 IR-PCI-MSI-edge eth6-TxRx-0

I have only one combined queue for TX/RX... is that normal?

***@proxmoxt2:~# ethtool -l eth6
Channel parameters for eth6:
Pre-set maximums:
RX: 16
TX: 16
Other: 1
Combined: 16
Current hardware settings:
RX: 0
TX: 0
Other: 1
Combined: 1

I checked on another server with igb and a CentOS 6 kernel... and I have 8 combined channels when looking at /proc/interrupts.
But I can't verify it with ethtool (Operation not supported).

Maybe you could try to set the channel settings with ethtool -L?
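
For example, something along these lines (eth6 here is just the interface from above; I haven't tested this on the 82576, so treat it as a sketch rather than a verified fix):

# ethtool -L eth6 combined 8
# ethtool -l eth6
# grep eth6 /proc/interrupts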

Regards,
Alexandre.

On 22/09/2014 03:25, lyt_yudi wrote:

Hi, Dietmar, Aderumier,

Can you help me, or give me some ideas?

On September 20, 2014, at 4:01 PM, lyt_yudi <***@icloud.com<mailto:***@icloud.com>> wrote:

What am I doing wrong? Or doesn't PVE support this type of NIC?




Dhaussy Alexandre
2014-09-22 17:44:48 UTC
And beware of link down/ups when changing the channel numbers... :(

Alexandre.

Post by Dhaussy Alexandre
Maybe you could try to set the channel settings with ethtool -L?
lyt_yudi
2014-09-23 05:02:05 UTC
Hi, Dhaussy,

Thanks very much for your reply!
Post by Dhaussy Alexandre
And beware of link down/ups when changing the channel numbers... :(
Yes, but when I change it, it hangs.

# ethtool -l eth2
Channel parameters for eth2:
Pre-set maximums:
RX: 16
TX: 16
Other: 1
Combined: 16
Current hardware settings:
RX: 0
TX: 0
Other: 1
Combined: 1

# ethtool -L eth2 combined 8
Then after a while, the following messages appeared constantly, and the link went down!

igb 0000:06:00.0 Detected Tx Unit Hang
Tx Queue <6>
TDH <c>
TDT <c>
next_to_use <c>
next_to_clean <0>
buffer_info[next_to_clean]
time_stamp <100181dd9>
next_to_watch <ffff88082a63d000>
jiffies <100182671>
desc.status <f0000>
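
If the link stays down afterwards, one thing that might bring the interfaces back (only a guess on my side, not verified on this card, and it briefly takes down every igb port) is reloading the driver module:

# modprobe -r igb && modprobe igb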
lyt_yudi
2015-05-07 02:41:50 UTC
Hi, all,

The Intel Ethernet drivers have had a lot of updates:

igb - 5.2.18 (current: 5.2.15)
i40e - 1.2.38 (current: 1.2.37)
ixgbe - 4.0.3 (current: 3.23.2)
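
The driver version that is actually loaded can be checked with one of these (eth2 is just my interface name):

# ethtool -i eth2
# modinfo igb | grep ^version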

Can the latest igb driver solve the Intel 82576 multi-queue issue?

Currently I use pve-kernel-3.10.0-3-pve

and do this:
# cat /etc/modprobe.d/igb.conf
options igb IntMode=3,3,3,3,3,3 RSS=8,8,8,8,8,8

# update-initramfs -u

It works fine.
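
To confirm, the queues can be checked the same way as before; with RSS=8 each port should again show eth2-TxRx-0 through eth2-TxRx-7:

# cat /proc/interrupts | grep eth2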

But with other PVE kernels it doesn't work!

Has anybody tried it?

Thanks a lot!
Post by lyt_yudi
# ethtool -L eth2 combined 8
Then after a while, the following messages appeared constantly, and the link went down!
lyt_yudi
2015-05-07 02:45:42 UTC
Sorry, maybe this solves it!

Changelog for igb-5.2.17

* Fix build on newer kernels
* Fix transmit hangs on 82576
Post by lyt_yudi
Can the latest igb driver solve the Intel 82576 multi-queue issue?