Discussion:
[PVE-User] OVS Internal ports all in state unknown
Brian Sidebotham
2018-09-14 14:42:18 UTC
Hi Guys,

We are using Open vSwitch networking: we have a physical 1G management
network and two 10G physical links bonded. The physical interfaces show a
state of UP when doing "ip a".

However, for the OVS bond, bridge and internal ports we get a state of
"UNKNOWN". Is this expected?

Everything else is essentially working OK: the GUI marks the bond, bridge
and internal ports as active and traffic flows as expected, but I don't
know why the state of these interfaces is not UP.

An example of an internal port OVS configuration in /etc/network/interfaces
(as set up by the GUI):

allow-vmbr1 vlan233
iface vlan233 inet static
    address 10.1.33.24
    netmask 255.255.255.0
    ovs_type OVSIntPort
    ovs_bridge vmbr1
    ovs_options tag=233

and ip a output:

14: vlan233: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state *UNKNOWN* group default qlen 1000
    link/ether e2:53:9f:28:cb:2b brd ff:ff:ff:ff:ff:ff
    inet 10.1.33.24/24 brd 10.1.33.255 scope global vlan233
       valid_lft forever preferred_lft forever
    inet6 fe80::e053:9fff:fe28:cb2b/64 scope link
       valid_lft forever preferred_lft forever
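
(For completeness, the bridge and bond stanzas this internal port hangs off look roughly like the sketch below; the member NIC names eno1/eno2 and the bond_mode are placeholders rather than our real values:)

auto vmbr1
allow-ovs vmbr1
iface vmbr1 inet manual
    ovs_type OVSBridge
    ovs_ports bond0 vlan233

allow-vmbr1 bond0
iface bond0 inet manual
    ovs_type OVSBond
    ovs_bridge vmbr1
    ovs_bonds eno1 eno2
    ovs_options bond_mode=balance-slb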

The version we're running is detailed below. We rolled back the kernel as
we were having stability problems with 4.15.8 on our hardware (HP ProLiant
Gen8).

root@ :/etc/network# pveversion -v
proxmox-ve: 5.2-2 (running kernel: 4.13.16-2-pve)
pve-manager: 5.2-7 (running version: 5.2-7/8d88e66a)
pve-kernel-4.15: 5.2-5
pve-kernel-4.15.18-2-pve: 4.15.18-20
pve-kernel-4.13.16-2-pve: 4.13.16-48
pve-kernel-4.13.13-2-pve: 4.13.13-33
ceph: 12.2.7-pve1
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-38
libpve-guest-common-perl: 2.0-17
libpve-http-server-perl: 2.0-10
libpve-storage-perl: 5.0-24
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.2+pve1-1
lxcfs: 3.0.0-1
novnc-pve: 1.0.0-2
openvswitch-switch: 2.7.0-3
proxmox-widget-toolkit: 1.0-19
pve-cluster: 5.0-29
pve-container: 2.0-25
pve-docs: 5.2-8
pve-firewall: 3.0-13
pve-firmware: 2.0-5
pve-ha-manager: 2.0-5
pve-i18n: 1.0-6
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.2-1
pve-xtermjs: 1.0-5
qemu-server: 5.0-32
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.9-pve1~bpo9

-----------
Brian Sidebotham

Wanless Systems Limited
e: ***@wanless.systems
m:+44 7739 359 883
o: +44 330 223 3595
<https://10.0.30.30:5001/webclient/#/call?phone=3302233595>

The information in this email is confidential and solely for the use of the
intended recipient(s). If you receive this email in error, please notify
the sender and delete the email from your system immediately. In such
circumstances, you must not make any use of the email or its contents.

Views expressed by an individual in this email do not necessarily reflect
the views of Wanless Systems Limited.

Computer viruses may be transmitted by email. Wanless Systems Limited
accepts no liability for any damage caused by any virus transmitted by this
email. E-mail transmission cannot be guaranteed to be secure or error-free.
It is possible that information may be intercepted, corrupted, lost,
destroyed, arrive late or incomplete, or contain viruses. The sender does
not accept liability for any errors or omissions in the contents of this
message, which arise as a result of e-mail transmission.

Please note that all calls are recorded for monitoring and quality purposes.

Wanless Systems Limited.
Registered office: Wanless Systems Limited, Bracknell, Berkshire, RG12 0UN.
Registered in England.
Registered number: 6901359
<https://10.0.30.30:5001/webclient/#/call?phone=6901359>.
dORSY
2018-09-14 15:26:13 UTC
This is not Proxmox related. Those are not physical interfaces, so there is no link up/down state to report on them.
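
A quick way to see the distinction from the shell (a sketch; vlan233 is the port name from the original mail, adjust to your own interfaces):

# Brief per-interface view: physical NICs report UP/DOWN,
# OVS internal ports and bridges typically show UNKNOWN
ip -br link show

# The state behind that column comes from sysfs
cat /sys/class/net/vlan233/operstate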



Josh Knight
2018-09-14 18:06:41 UTC
This looks to be expected. The operational state is provided by the
kernel/driver for the interface, and for these virtual interfaces it's just
not being reported, probably because they can't actually go down. This is
common and not specific to Proxmox; the openvswitch and tun drivers are
evidently not reporting an operational state for these devices.

I believe you can use `ovs-vsctl list Interface` or a similar command if
you need to get the admin_state or link_state fields for the virtual
interfaces.
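
For instance (a sketch; vlan233 is the internal port from the original post):

# Admin/link state for every interface in the OVS database
ovs-vsctl --columns=name,admin_state,link_state list Interface

# Or a single column for one port
ovs-vsctl get Interface vlan233 link_state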

E.g., from my Proxmox host I also see the same behavior:

***@host:~# ip link show eno1
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master
ovs-system state UP mode DEFAULT group default qlen 1000
***@host:~# cat /sys/class/net/eno1/operstate
up
***@host:~# ethtool -i eno1 | grep driver
driver: tg3

***@host:~# ip link show tap107i2
155: tap107i2: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc
pfifo_fast master ovs-system state UNKNOWN mode DEFAULT group default qlen
1000
***@host:~# cat /sys/class/net/tap107i2/operstate
unknown
***@host:~# ethtool -i tap107i2 | grep driver
driver: tun

***@host:~# ip link show bond0
23: bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
UNKNOWN mode DEFAULT group default qlen 1000
***@host:~# cat /sys/class/net/bond0/operstate
unknown
***@host:~# ethtool -i bond0 | grep driver
driver: openvswitch
Brian Sidebotham
2018-09-17 14:46:40 UTC
Thanks for the responses to an essentially off-topic post.

Best Regards,

-----------
Brian Sidebotham