action #151273 » OVS_DPDK_single_VM.txt

szarate, 2024-01-29 09:08

 
Topology 2,3 :
==============

(DPDK driver/kernel driver in the VM) VM <-> Host1 <-> NIC <-> Host2
Steps to test OVS-DPDK with one VM :
====================================

1. Configure hugepages and probe the vfio-pci driver,

#sysctl -w vm.nr_hugepages=2048 <total hugepage memory allocated is 2048 * 2 MB (per-hugepage size) = 4 GB>
#mount -t hugetlbfs none /dev/hugepages
#modprobe vfio-pci
#dpdk-devbind -b vfio-pci ${PCI_ID}
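Optionally, verify the allocation and the binding (standard /proc interface and the dpdk-devbind status flag):
#grep HugePages_ /proc/meminfo
#dpdk-devbind -s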
2. Create a new OVS bridge named br_test which will have a DPDK port and a vhost-user port,
#ovs-vsctl --if-exists del-br br_test
#ovs-vsctl add-br br_test -- set bridge br_test datapath_type=netdev
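Optionally confirm that the bridge uses the userspace datapath:
#ovs-vsctl show
#ovs-vsctl get bridge br_test datapath_type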
3. Add physical port dpdk0 to bridge br_test.
- Add the DPDK port to transmit/receive traffic from the NIC
- We use 4 RX queues as an example; make sure the NIC supports 4 RX queues
#ovs-vsctl add-port br_test dpdk0 -- set interface dpdk0 type=dpdk options:dpdk-devargs=${PCI_ID} options:dpdk-lsc-interrupt=true options:n_rxq=4 options:n_txq_desc=2048
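Once DPDK is initialized (step 5) and the port is up, the RX queue to PMD thread assignment can be inspected with:
#ovs-appctl dpif-netdev/pmd-rxq-show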
4. Create a vhost-user client port and attach it to the OVS bridge using the below command,
- Configure the vhost-user port to transmit and receive packets from the VM

#ovs-vsctl add-port br_test vhost-client-1 -- set Interface vhost-client-1 type=dpdkvhostuserclient options:vhost-server-path=/run/openvswitch/vhu-client-1
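Until QEMU (which acts as the vhost-user server for this client port) is started in step 6, the port stays disconnected; its state can be watched with:
#ovs-vsctl get Interface vhost-client-1 error
#ovs-vsctl get Interface vhost-client-1 status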

5. Set OVS switch config with the below details,

#ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true other_config:dpdk-extra=" -w 0000:17:00.1,support-multi-driver=1 -w 0000:17:00.0,support-multi-driver=1" other_config:dpdk-hugepage-dir="/dev/hugepages" other_config:dpdk-lcore-mask="0x2" other_config:dpdk-socket-mem="1024,0" other_config:pmd-cpu-mask="0x5"
pmd-cpu-mask - mentions the cores on which the PMD threads run (value should be based on the available CPU cores; here 0x5 is binary 0101, so PMD threads run on cores 0 and 2)
dpdk-lcore-mask - specifies the cores on which the non-PMD DPDK threads are spawned (here 0x2 pins them to core 1)
dpdk-socket-mem - memory to be pre-allocated from the hugepages, per NUMA socket (not mandatory)
Note :
dpdk-init=true, dpdk-hugepage-dir="/dev/hugepages" and dpdk-extra=" -w ${PCI_ID#1},support-multi-driver=1 -w ${PCI_ID#2},support-multi-driver=1" are sufficient for testing; the rest are not mandatory
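These options are read when ovs-vswitchd starts, so restart the daemon after setting them (e.g. systemctl restart openvswitch on systemd-based distributions; the exact service name depends on the distribution). Recent OVS releases then report the DPDK state:
#ovs-vsctl get Open_vSwitch . dpdk_initialized
#ovs-vsctl get Open_vSwitch . dpdk_version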
6. Start the VM with the below QEMU command,

qemu-system-x86_64 -enable-kvm -cpu host -m 2G -smp 4 \
  -drive file=${VM_IMAGE1} \
  -object memory-backend-file,id=mem,size=2G,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem -mem-prealloc \
  -chardev socket,id=char0,path=/run/openvswitch/vhu-client-1,server \
  -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce=on \
  -device virtio-net-pci,netdev=mynet1,mac=DE:AD:BE:EF:0A:01,mq=on,vectors=10 \
  -net user,hostfwd=tcp::10021-:22 -net nic,macaddr=BA:AD:CA:FE:0A:01 \
  -serial mon:stdio
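Since the virtio-net device is created with mq=on (4 queue pairs, matching vectors=10 = 2*4+2), the guest kernel driver may need the extra queues enabled explicitly; assuming the interface shows up as eth0 in the VM:
#ethtool -L eth0 combined 4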

7. Configure the IPs in the VM in the same subnet as Host 2 so that the VM can bypass the host and communicate directly with Host 2. Ensure that firewalls do not block the traffic from the VM.
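For example, assuming Host 2 sits in 192.168.100.0/24 and the virtio interface appears as eth0 in the VM (both values are placeholders):
#ip addr add 192.168.100.10/24 dev eth0
#ip link set eth0 up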

8. Now, to test the traffic from Host 2 inside the VM,

a) Using the VM virtio_net driver,
- Check whether the ping or iperf traffic can be seen in the VM as if the VM and Host 2 were in the same subnet
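- A minimal check with the placeholder addresses from step 7 (iperf3 shown; plain iperf works the same way):
  In the VM:
#iperf3 -s
  On Host 2:
#ping -c 4 192.168.100.10
#iperf3 -c 192.168.100.10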
b) Using the DPDK driver in the guest VM,
- Configure the hugepages and bind the NIC to the vfio-pci driver in the VM,
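A sketch of the in-guest setup, mirroring step 1; ${VM_PCI_ID} is a placeholder for the virtio NIC's PCI address inside the guest, and the no-IOMMU module option is assumed to be needed because the QEMU command above does not define a vIOMMU:
#sysctl -w vm.nr_hugepages=512
#mount -t hugetlbfs none /dev/hugepages
#modprobe vfio enable_unsafe_noiommu_mode=1
#modprobe vfio-pci
#dpdk-devbind -b vfio-pci ${VM_PCI_ID}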
- Start testpmd in receive-only mode so that it captures the packets; note that the --nb-cores forwarding cores plus one main core must fit into the coremask, so -c 0x7 (3 cores) supports --nb-cores=2, one per RX/TX queue,
#./dpdk-testpmd -c 0x7 -n 4 --socket-mem 1024 -- --burst=64 -i --rxq=2 --txq=2 --nb-cores=2
testpmd> set fwd rxonly
testpmd> start
- Send some traffic from Host 2 and then check the port stats with the below command at the dpdk-testpmd CLI prompt,
testpmd> show port stats all
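Note that testpmd in rxonly mode does not answer ARP, so Host 2 may need a static neighbor entry pointing at the VM's MAC from the QEMU command (addresses are the step-7 placeholders):
#arp -s 192.168.100.10 DE:AD:BE:EF:0A:01
#ping -f -c 1000 192.168.100.10
The RX-packets counters in the stats output should then increase.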




