action #125282


[qe-core] propose a test plan for DPDK + OpenVSwitch at a functional level

Added by zluo about 1 year ago. Updated 11 months ago.

Status:
Resolved
Priority:
Normal
Assignee:
Category:
New test
Target version:
-
Start date:
Due date:
% Done:

0%

Estimated time:
Difficulty:

Description

How to set up and test OVS and DPDK at a functional level; at minimum it should include the following:

  • provide a proper server: x86_64 with 16 GB RAM, no fewer than 8 CPU cores, at least one NIC, VT-d supported and enabled in the BIOS
  • install ovs, dpdk and dpdk-tools on Tumbleweed or SLES 15 SP5 (OVS 3.1.0 or higher is required)
  • enable and configure hugepages as needed
  • load the required kernel modules and bind the NIC to the vfio-pci or uio_pci_generic driver
  • set up OVS
  • validate that OVS is working:
    • systemctl status ovsdb-server ovs-vswitchd
    • for example 'ovs-vsctl get Open_vSwitch . dpdk_initialized'
    • dpdk-hugepages.py --show and check the logs under /var/log

An extended test plan for basic DPDK functions can be explored later.
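The steps above can be sketched as a shell session (a sketch only; the exact package names, the hugepage count, and the PCI address 0000:02:00.1 are assumptions for illustration):

```shell
# Sketch only: package names, hugepage count and PCI address are assumptions.
# Install OVS, DPDK and the DPDK tools (exact package names may differ):
zypper -n install openvswitch dpdk dpdk-tools

# Reserve 2 MB hugepages (1024 pages = 2 GB) for the current boot:
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
grep Huge /proc/meminfo

# Load a userspace I/O driver and bind a spare NIC (PCI address assumed):
modprobe vfio-pci                 # or: modprobe uio_pci_generic
dpdk-devbind.py --bind=vfio-pci 0000:02:00.1
dpdk-devbind.py --status

# Start OVS and enable DPDK support:
systemctl start ovsdb-server ovs-vswitchd
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
systemctl restart ovs-vswitchd

# Validate:
systemctl status ovsdb-server ovs-vswitchd
ovs-vsctl get Open_vSwitch . dpdk_initialized
dpdk-hugepages.py --show
```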

Actions #1

Updated by zluo about 1 year ago

  • Description updated (diff)
Actions #2

Updated by zluo about 1 year ago

  • Description updated (diff)
Actions #3

Updated by szarate about 1 year ago

Zaoliang, do you already have the server?

Actions #4

Updated by zluo about 1 year ago

szarate wrote:

Zaoliang, do you already have the server?

At the moment I use an orthos machine, but of course it would be better to use our own machine.

Actions #6

Updated by zluo about 1 year ago

  • Status changed from New to In Progress

I finally managed to set up SLES 15 SP5 on our QA machine: ix64ph1079.qa.suse.de

We need a new build of openvswitch before we can start testing.

At the moment we have the following:

ix64ph1079:~ # zypper info openvswitch
Loading repository data...
Reading installed packages...


Information for package openvswitch:
------------------------------------
Repository     : sle-module-server-applications
Name           : openvswitch
Version        : 2.14.2-150400.24.3.1
Arch           : x86_64
Vendor         : SUSE LLC <https://www.suse.com/>
Support Level  : Level 3
Installed Size : 1.2 MiB
Installed      : No
Status         : not installed
Source package : openvswitch-2.14.2-150400.24.3.1.src
Upstream URL   : http://openvswitch.org/
Summary        : A multilayer virtual network switch

Actions #7

Updated by zluo about 1 year ago

I installed the latest build 81.1 on ix64ph1079.qa.suse.de

ix64ph1079:~ # zypper info openvswitch

Information for package openvswitch:
------------------------------------
Repository     : SLE-Module-Server-Applications15-SP5-Pool
Name           : openvswitch
Version        : 2.14.2-150400.24.3.1
Arch           : x86_64
Vendor         : SUSE LLC <https://www.suse.com/>
Support Level  : Level 3
Installed Size : 1.2 MiB
Installed      : No
Status         : not installed
Source package : openvswitch-2.14.2-150400.24.3.1.src
Upstream URL   : http://openvswitch.org/
Summary        : A multilayer virtual network switch
Description    : 
    Open vSwitch is a multilayer virtual network Ethernet switch. It
    enables network automation through programmatic extension, and
    supports standard management interfaces and protocols (e.g. NetFlow,
    sFlow, RSPAN, ERSPAN, CLI, LACP, 802.1ag). In addition, it supports
    distribution across multiple physical servers similar to VMware’s
    vNetwork distributed vswitch or Cisco’s Nexus 1000V.
---

this build of openvswitch still does not meet the requirement.

Actions #8

Updated by zluo about 1 year ago

  • Status changed from In Progress to Feedback

the proposal is clear and we need the correct build for SLES 15 SP5; setting this to Feedback for now.

Actions #10

Updated by zluo about 1 year ago

I just installed SLES 15 SP5 build 93.1 on ix64ph1079.qa.suse.de,

but we still have the old package:

Information for package openvswitch:

Repository : SLE-Module-Server-Applications15-SP5-Pool
Name : openvswitch
Version : 2.14.2-150400.24.3.1
Arch : x86_64
Vendor : SUSE LLC https://www.suse.com/
Support Level : Level 3
Installed Size : 1.2 MiB
Installed : No
Status : not installed
Source package : openvswitch-2.14.2-150400.24.3.1.src
Upstream URL : http://openvswitch.org/
Summary : A multilayer virtual network switch
Description :

Actions #11

Updated by jstehlik about 1 year ago

According to http://xcdchk.suse.de/results/SLE-15-SP5-Full-Test/93.1
the dpdk rpms are there. Is a change of the openvswitch version really needed? I may lack some context, but it seems to me there is nothing holding us back from configuring openvswitch to use the dpdk library and testing it.

Actions #12

Updated by zluo about 1 year ago

jstehlik wrote:

According to http://xcdchk.suse.de/results/SLE-15-SP5-Full-Test/93.1
the dpdk rpms are there. Is a change of the openvswitch version really needed? I may lack some context, but it seems to me there is nothing holding us back from configuring openvswitch to use the dpdk library and testing it.

Please see https://progress.opensuse.org/issues/124949#note-6

I talked with Duraisankar about this quite a long time ago. He confirmed that we need updates of dpdk and ovs together, otherwise testing doesn't make sense. At the moment this requirement is not met.
dpdk22 and OVS 3.1.0 together should work.

Actions #13

Updated by zluo about 1 year ago

now I have installed SLES 15 SP5 build 93.2, which provides the correct packages: dpdk22 and openvswitch.

ix64ph1079:~ # systemctl status ovs-vswitchd


● ovs-vswitchd.service - Open vSwitch Forwarding Unit
     Loaded: loaded (/usr/lib/systemd/system/ovs-vswitchd.service; static)
     Active: active (running) since Wed 2023-04-19 11:15:29 CEST; 8s ago
    Process: 7899 ExecStart=/usr/share/openvswitch/scripts/ovs-ctl --no-ovsdb-server --no-monitor --system-id=random ${OVS_USER_OPT} start $OPTIONS (code=exited, status=0/SUCCESS)
   Main PID: 7938 (ovs-vswitchd)
      Tasks: 1
     CGroup: /system.slice/ovs-vswitchd.service
             └─ 7938 ovs-vswitchd unix:/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --mlockall --user openvswitch:openvswitch --no-chdir --log-file=/var/log/openvswitch/ovs-vswitchd.log --pidfile=/run/openvswitch/ovs->

Apr 19 11:15:29 ix64ph1079 systemd[1]: Starting Open vSwitch Forwarding Unit...
Apr 19 11:15:29 ix64ph1079 ovs-ctl[7927]: Inserting openvswitch module.
Apr 19 11:15:29 ix64ph1079 ovs-ctl[7899]: Starting ovs-vswitchd.
Apr 19 11:15:29 ix64ph1079 ovs-vsctl[7945]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=ix64ph1079.qa.suse.de
Apr 19 11:15:29 ix64ph1079 ovs-ctl[7899]: Enabling remote OVSDB managers.
Apr 19 11:15:29 ix64ph1079 systemd[1]: Started Open vSwitch Forwarding Unit.

  • setting for dpdk: ok
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
  • ovs-vswitchd and ovsdb-server

I needed to create /run/openvswitch/db.sock manually because it was not available.


ix64ph1079:~ # ovs-ctl --no-ovsdb-server --db-sock="$DB_SOCK" status
ovsdb-server is not running
ovs-vswitchd is running with pid 8440
ix64ph1079:~ # ovs-ctl --no-ovsdb-server --db-sock="$DB_SOCK" stop
Exiting ovs-vswitchd (8440).
ovs-vswitchd: could not initiate process monitoring
ix64ph1079:~ # systemctl start ovsdb-server
ix64ph1079:~ # systemctl status ovsdb-server
● ovsdb-server.service - Open vSwitch Database Unit
     Loaded: loaded (/usr/lib/systemd/system/ovsdb-server.service; static)
     Active: active (running) since Wed 2023-04-19 11:39:19 CEST; 5s ago
    Process: 8489 ExecStartPre=/usr/bin/rm -f /run/openvswitch.useropts (code=exited, status=0/SUCCESS)
    Process: 8490 ExecStartPre=/usr/bin/chown ${OVS_USER_ID} /run/openvswitch /var/log/openvswitch (code=exited, status=0/SUCCESS)
    Process: 8491 ExecStartPre=/bin/sh -c /usr/bin/echo "OVS_USER_ID=${OVS_USER_ID}" > /run/openvswitch.useropts (code=exited, status=0/SUCCESS)
    Process: 8493 ExecStartPre=/bin/sh -c if [ "$${OVS_USER_ID/:*/}" != "root" ]; then /usr/bin/echo "OVS_USER_OPT=--ovs-user=${OVS_USER_ID}" >> /run/openvswitch.useropts; fi (code=exited, status=0/SUCCESS)
    Process: 8495 ExecStart=/usr/share/openvswitch/scripts/ovs-ctl --no-ovs-vswitchd --no-monitor --system-id=random ${OVS_USER_OPT} start $OPTIONS (code=exited, status=0/SUCCESS)
   Main PID: 8542 (ovsdb-server)
      Tasks: 1
     CGroup: /system.slice/ovsdb-server.service
             └─ 8542 ovsdb-server /var/lib/openvswitch/conf.db -vconsole:emer -vsyslog:err -vfile:info --remote=punix:/run/openvswitch/db.sock --private-key=db:Open_vSwitch,SSL,private_key --certificate=db:Open_vSwitch,SSL,certificate -->

Apr 19 11:39:19 ix64ph1079 systemd[1]: Starting Open vSwitch Database Unit...
Apr 19 11:39:19 ix64ph1079 ovs-ctl[8495]: Starting ovsdb-server.
Apr 19 11:39:19 ix64ph1079 ovs-vsctl[8543]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.3.1
Apr 19 11:39:19 ix64ph1079 ovs-vsctl[8548]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.1.0 "external-ids:system-id=\"c41778c1-2464-46a8-ba9c-29573436470a\"" "external-ids:rundir=\"/run/openvswitch>
Apr 19 11:39:19 ix64ph1079 ovs-ctl[8495]: Configuring Open vSwitch system IDs.
Apr 19 11:39:19 ix64ph1079 ovs-ctl[8495]: Enabling remote OVSDB managers.
Apr 19 11:39:19 ix64ph1079 ovs-vsctl[8554]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=ix64ph1079.qa.suse.de
Apr 19 11:39:19 ix64ph1079 systemd[1]: Started Open vSwitch Database Unit.

ix64ph1079:~ # ovs-ctl --no-ovsdb-server --db-sock="$DB_SOCK" start

ix64ph1079:~ # ovs-ctl --no-ovsdb-server --db-sock="$DB_SOCK" status
ovsdb-server is running with pid 8542
ovs-vswitchd is running with pid 8582
ix64ph1079:~ # ovs-ctl --no-ovsdb-server --db-sock="$DB_SOCK" start
ovs-vswitchd is already running.
Enabling remote OVSDB managers.
  • dpdk test:
ix64ph1079:~ # /usr/bin/dpdk-testpmd
EAL: Detected CPU lcores: 8
EAL: Detected NUMA nodes: 2
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: VFIO support initialized
rte_pdump_init(): cannot allocate pdump statistics
testpmd: No probed ethernet devices
EAL: Error - exiting with code: 1
  Cause: rte_zmalloc(32 struct rte_port) failed

because I cannot find dpdk-devbind.py to run:

dpdk-devbind.py --bind=vfio-pci eth0
dpdk-devbind.py --status

this is why I got: testpmd: No probed ethernet devices
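For reference, the $DB_SOCK variable in the ovs-ctl calls above is conventionally set to the OVSDB socket path (an assumption based on the standard OVS setup scripts; the path matches the one appearing in the logs above):

```shell
# $DB_SOCK as conventionally used with ovs-ctl (path taken from the logs above):
export DB_SOCK=/run/openvswitch/db.sock
/usr/share/openvswitch/scripts/ovs-ctl --no-ovsdb-server --db-sock="$DB_SOCK" start
/usr/share/openvswitch/scripts/ovs-ctl --no-ovsdb-server --db-sock="$DB_SOCK" status
```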

Actions #14

Updated by zluo about 1 year ago

well, I needed to install dpdk22-tools separately; it is not installed automatically.

/usr/bin/dpdk-devbind.py --bind=vfio-pci eth0
Warning: routing table indicates that interface 0000:02:00.0 is active. Not modifying

If I bring eth0 down and try to bind it to DPDK, I lose the device and cannot get it back; in this case I have to reboot the machine.
This is NOT working for me.
Without binding a network device for DPDK, we cannot use dpdk22 and openvswitch3.
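A workflow that avoids the reboot might look like this (a sketch; eth1, the igb driver, and the PCI address are assumptions, and rebinding to the original kernel driver with dpdk-devbind.py should return the device to the kernel):

```shell
# Sketch only: eth1, the igb driver and the PCI address are assumptions.
# Use a spare port, never the one carrying the routing table.
ip link set eth1 down
dpdk-devbind.py --bind=uio_pci_generic 0000:02:00.1
dpdk-devbind.py --status

# To hand the port back to the kernel without rebooting:
dpdk-devbind.py --bind=igb 0000:02:00.1
ip link set eth1 up
```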

Actions #15

Updated by zluo about 1 year ago

Duraisankar tried the following:
modprobe uio_pci_generic
after this did not work (modprobe vfio enable_unsafe_noiommu_mode=1)
He thinks:

Just comment out the second line in the file "/run/openvswitch.useropts", which has the username "openvswitch:openvswitch".
Let me check the exact reason for the vfio UIO failure. But my assumption is that NOIOMMU should solve the problem.

then dpdk-devbind.py -b uio_pci_generic 0000:02:00.1
/usr/bin/dpdk-testpmd looks better:
ix64ph1079:~ # /usr/bin/dpdk-testpmd
EAL: Detected CPU lcores: 8
EAL: Detected NUMA nodes: 2
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No free 2048 kB hugepages reported on node 0
EAL: No free 2048 kB hugepages reported on node 1

Logs: /var/log/openvswitch/ovs-vswitchd.log

2023-04-19T11:20:08.491Z|00001|vlog|INFO|opened log file /var/log/openvswitch/ovs-vswitchd.log
2023-04-19T11:20:08.508Z|00002|ovs_numa|INFO|Discovered 4 CPU cores on NUMA node 0
2023-04-19T11:20:08.508Z|00003|ovs_numa|INFO|Discovered 4 CPU cores on NUMA node 1
2023-04-19T11:20:08.508Z|00004|ovs_numa|INFO|Discovered 2 NUMA nodes and 8 CPU cores
2023-04-19T11:20:08.508Z|00005|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
2023-04-19T11:20:08.508Z|00006|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
2023-04-19T11:20:08.511Z|00007|dpdk|INFO|Using DPDK 22.11.1
2023-04-19T11:20:08.511Z|00008|dpdk|INFO|DPDK Enabled - initializing...
2023-04-19T11:20:08.511Z|00009|dpdk|INFO|EAL ARGS: ovs-vswitchd --in-memory -l 0.
2023-04-19T11:20:08.530Z|00010|dpdk|INFO|EAL: Detected CPU lcores: 8
2023-04-19T11:20:08.530Z|00011|dpdk|INFO|EAL: Detected NUMA nodes: 2
2023-04-19T11:20:08.531Z|00012|dpdk|INFO|EAL: Detected shared linkage of DPDK
2023-04-19T11:20:10.100Z|00013|dpdk|INFO|EAL: rte_mem_virt2phy(): cannot open /proc/self/pagemap: Permission denied
2023-04-19T11:20:10.100Z|00014|dpdk|INFO|EAL: Selected IOVA mode 'VA'
2023-04-19T11:20:10.101Z|00015|dpdk|WARN|EAL: No free 2048 kB hugepages reported on node 0
2023-04-19T11:20:10.101Z|00016|dpdk|WARN|EAL: No free 2048 kB hugepages reported on node 1
[... the same sequence of EAL/reconnect messages repeats at 11:30, 11:40 and 11:50 ...]
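The "No free 2048 kB hugepages" warnings above indicate that no hugepages were reserved. A per-NUMA-node reservation might look like this (a sketch; the page counts are assumptions):

```shell
# Reserve 2 MB hugepages on each of the two NUMA nodes (counts assumed):
echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
dpdk-hugepages.py --show
```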

  • Status ovsdb-server ovs-vswitchd
ix64ph1079:~ #  systemctl status ovsdb-server ovs-vswitchd
● ovsdb-server.service - Open vSwitch Database Unit
     Loaded: loaded (/usr/lib/systemd/system/ovsdb-server.service; static)
     Active: active (running) since Wed 2023-04-19 13:19:59 CEST; 39min ago
    Process: 5759 ExecStartPre=/usr/bin/rm -f /run/openvswitch.useropts (code=e>
    Process: 5760 ExecStartPre=/usr/bin/chown ${OVS_USER_ID} /run/openvswitch />
    Process: 5761 ExecStartPre=/bin/sh -c /usr/bin/echo "OVS_USER_ID=${OVS_USER>
    Process: 5763 ExecStartPre=/bin/sh -c if [ "$${OVS_USER_ID/:*/}" != "root" >
    Process: 5765 ExecStart=/usr/share/openvswitch/scripts/ovs-ctl --no-ovs-vsw>
   Main PID: 5812 (ovsdb-server)
      Tasks: 1
     CGroup: /system.slice/ovsdb-server.service
             └─ 5812 ovsdb-server /var/lib/openvswitch/conf.db -vconsole:emer ->

Apr 19 13:19:59 ix64ph1079 systemd[1]: Starting Open vSwitch Database Unit...
Apr 19 13:19:59 ix64ph1079 ovs-ctl[5765]: Starting ovsdb-server.
Apr 19 13:19:59 ix64ph1079 ovs-vsctl[5813]: ovs|00001|vsctl|INFO|Called as ovs->
Apr 19 13:19:59 ix64ph1079 ovs-vsctl[5818]: ovs|00001|vsctl|INFO|Called as ovs->
Apr 19 13:19:59 ix64ph1079 ovs-ctl[5765]: Configuring Open vSwitch system IDs.
Apr 19 13:19:59 ix64ph1079 ovs-vsctl[5824]: ovs|00001|vsctl|INFO|Called as ovs->
Apr 19 13:19:59 ix64ph1079 ovs-ctl[5765]: Enabling remote OVSDB managers.
Apr 19 13:19:59 ix64ph1079 systemd[1]: Started Open vSwitch Database Unit.

● ovs-vswitchd.service - Open vSwitch Forwarding Unit
Actions #16

Updated by zluo about 1 year ago

so the major issues are:

  • the vfio kernel module cannot be used to bind a network device to DPDK
  • an active network device cannot be bound, even with uio_pci_generic
ix64ph1079:~ # dpdk-devbind.py -b uio_pci_generic 0000:02:00.0
Warning: routing table indicates that interface 0000:02:00.0 is active. Not modifying
  • only a network device that is not configured and not in use can be bound, which doesn't make much sense; after binding, the network device disappears. This also shows clearly in the logs:

cat /var/log/openvswitch/ovs-vswitchd.log

2023-04-19T12:00:12.110Z|00013|dpdk|INFO|EAL: rte_mem_virt2phy(): cannot open /proc/self/pagemap: Permission denied
2023-04-19T12:00:12.110Z|00014|dpdk|ERR|EAL: Cannot use IOVA as 'PA' since physical addresses are not available
2023-04-19T12:00:12.110Z|00015|dpdk|EMER|Unable to initialize DPDK: Invalid argument
2023-04-19T12:00:12.516Z|00002|daemon_unix|ERR|fork child died before signaling startup (killed (Aborted), core dumped)
2023-04-19T12:00:12.516Z|00003|daemon_unix|EMER|could not detach from foreground session
2023-04-19T12:00:12.749Z|00001|vlog|INFO|opened log file /var/log/openvswitch/ovs-vswitchd.log
2023-04-19T12:00:12.756Z|00002|ovs_numa|INFO|Discovered 4 CPU cores on NUMA node 0
2023-04-19T12:00:12.756Z|00003|ovs_numa|INFO|Discovered 4 CPU cores on NUMA node 1
2023-04-19T12:00:12.756Z|00004|ovs_numa|INFO|Discovered 2 NUMA nodes and 8 CPU cores
2023-04-19T12:00:12.757Z|00005|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
2023-04-19T12:00:12.757Z|00006|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
2023-04-19T12:00:12.759Z|00007|dpdk|INFO|Using DPDK 22.11.1
2023-04-19T12:00:12.759Z|00008|dpdk|INFO|DPDK Enabled - initializing...
2023-04-19T12:00:12.759Z|00009|dpdk|INFO|EAL ARGS: ovs-vswitchd --in-memory -l 0.
2023-04-19T12:00:12.779Z|00010|dpdk|INFO|EAL: Detected CPU lcores: 8
2023-04-19T12:00:12.779Z|00011|dpdk|INFO|EAL: Detected NUMA nodes: 2
2023-04-19T12:00:12.779Z|00012|dpdk|INFO|EAL: Detected shared linkage of DPDK
2023-04-19T12:00:12.861Z|00013|dpdk|INFO|EAL: rte_mem_virt2phy(): cannot open /proc/self/pagemap: Permission denied
2023-04-19T12:00:12.861Z|00014|dpdk|ERR|EAL: Cannot use IOVA as 'PA' since physical addresses are not available
2023-04-19T12:00:12.861Z|00015|dpdk|EMER|Unable to initialize DPDK: Invalid argument
2023-04-19T12:00:13.272Z|00002|daemon_unix|ERR|fork child died before signaling startup (killed (Aborted), core dumped)
2023-04-19T12:00:13.272Z|00003|daemon_unix|EMER|could not detach from foreground session
[... the same sequence of EAL errors and daemon restarts repeats twice more ...]
Actions #17

Updated by szarate about 1 year ago

  • Project changed from 46 to openQA Tests
  • Category changed from New test to New test
Actions #18

Updated by dpitchumani about 1 year ago

  1. Active devices cannot be given to DPDK, as "active" means the device is in use by the kernel. So once we dedicate an interface, it disappears from the kernel because it has been bound to the DPDK drivers; this can be verified with the command "dpdk-devbind.py -s".

  2. We need to probe vfio-pci with the NOIOMMU parameter in order to use vfio-pci:
    modprobe vfio enable_unsafe_noiommu_mode=1
    or
    echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode

  3. If we intend to use the uio_pci_generic driver, then the user "openvswitch:openvswitch" will not work (we need to comment out this line in "/run/openvswitch.useropts" to make it work).

I will confirm shortly how to solve point number 2.
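The fixes described in points 2 and 3 might look like this on the shell (a sketch; the sed pattern, the PCI address, and the need to restart ovs-vswitchd afterwards are assumptions):

```shell
# Sketch of points 2 and 3 above (paths and PCI address are assumptions):

# 2. enable no-IOMMU mode for vfio, then bind with vfio-pci:
modprobe vfio enable_unsafe_noiommu_mode=1
modprobe vfio-pci
# or, if vfio is already loaded:
echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
dpdk-devbind.py -b vfio-pci 0000:02:00.1

# 3. for uio_pci_generic, comment out the OVS user line and restart:
sed -i '/openvswitch:openvswitch/s/^/#/' /run/openvswitch.useropts
systemctl restart ovs-vswitchd
```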

Actions #19

Updated by zluo about 1 year ago

now I tried modprobe vfio enable_unsafe_noiommu_mode=1 again (/sys/module/vfio/parameters is not available; do we need to create the directory?)

ix64ph1079:~ # dpdk-devbind.py -s

Network devices using kernel driver
===================================
0000:02:00.0 'I350 Gigabit Network Connection 1521' if=eth0 drv=igb unused=vfio-pci *Active*

Other Network devices
=====================
0000:02:00.1 'I350 Gigabit Network Connection 1521' unused=igb,vfio-pci

ovsdb-server and ovs-vswitchd look okay, however.

ix64ph1079:~ # /usr/share/openvswitch/scripts/ovs-ctl status
ovsdb-server is running with pid 3299
ovs-vswitchd is running with pid 3852

logs output

2023-04-20T13:24:57.043Z|00001|vlog|INFO|opened log file /var/log/openvswitch/ovs-vswitchd.log
2023-04-20T13:24:57.049Z|00002|ovs_numa|INFO|Discovered 4 CPU cores on NUMA node 0
2023-04-20T13:24:57.050Z|00003|ovs_numa|INFO|Discovered 4 CPU cores on NUMA node 1
2023-04-20T13:24:57.050Z|00004|ovs_numa|INFO|Discovered 2 NUMA nodes and 8 CPU cores
2023-04-20T13:24:57.050Z|00005|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
2023-04-20T13:24:57.050Z|00006|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
2023-04-20T13:24:57.050Z|00007|bridge|ERR|another ovs-vswitchd process is running, disabling this process (pid 3740) until it goes away
2023-04-20T13:29:57.216Z|00008|fatal_signal|WARN|terminating with signal 15 (Terminated)
2023-04-20T13:29:57.542Z|00001|vlog|INFO|opened log file /var/log/openvswitch/ovs-vswitchd.log
2023-04-20T13:29:57.548Z|00002|ovs_numa|INFO|Discovered 4 CPU cores on NUMA node 0
2023-04-20T13:29:57.548Z|00003|ovs_numa|INFO|Discovered 4 CPU cores on NUMA node 1
2023-04-20T13:29:57.548Z|00004|ovs_numa|INFO|Discovered 2 NUMA nodes and 8 CPU cores
2023-04-20T13:29:57.548Z|00005|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
2023-04-20T13:29:57.548Z|00006|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
2023-04-20T13:29:57.548Z|00007|bridge|ERR|another ovs-vswitchd process is running, disabling this process (pid 3852) until it goes away
2023-04-20T13:31:24.038Z|00008|memory|INFO|21168 kB peak resident set size after 86.5 seconds
2023-04-20T13:31:24.038Z|00009|memory|INFO|idl-cells-Open_vSwitch:153
2023-04-20T13:31:24.038Z|00010|bridge|ERR|Dropped 2 log messages in last 87 seconds (most recently, 87 seconds ago) due to excessive rate
2023-04-20T13:31:24.038Z|00011|bridge|ERR|another ovs-vswitchd process is running, disabling this process (pid 3852) until it goes away
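The `dpdk-devbind.py -s` output shown above can also be filtered programmatically to find a device that is not marked *Active*, i.e. a candidate safe to bind to DPDK (a sketch operating on the captured sample; the PCI addresses come from the output above):

```shell
# Filter a captured `dpdk-devbind.py -s` listing (sample from above) for a
# device that is NOT marked *Active*.
status="0000:02:00.0 'I350 Gigabit Network Connection 1521' if=eth0 drv=igb unused=vfio-pci *Active*
0000:02:00.1 'I350 Gigabit Network Connection 1521' unused=igb,vfio-pci"

# Drop active lines, keep the first column (the PCI address) of the first hit:
candidate=$(printf '%s\n' "$status" | grep -v '[*]Active[*]' | awk '{print $1}' | head -n1)
echo "$candidate"
```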

Actions #20

Updated by zluo about 1 year ago

and I think I found a bug report related to my issue:

Unable to bind dpdk devices with vfio-pci driver
https://bugzilla.suse.com/show_bug.cgi?id=1205702

Actions #22

Updated by zluo 11 months ago

  • Status changed from Feedback to Resolved

The test module dpdk has already been created.

Actions
