
Does pktgen-dpdk have a feature to control PPS rate? #286

Open
winnie022 opened this issue Oct 2, 2024 · 17 comments

@winnie022

Hello,

I have two servers running pktgen-dpdk. One server sends packets and the other receives them.

Does pktgen-dpdk have a feature to control PPS rate?
For example, I want to send 10 MPPS for 10 seconds and then 20 MPPS for the next 10 seconds.
There is a rate option, but I am not sure how to achieve this.

Thanks,

@winnie022 winnie022 changed the title from "Control PPS rate" to "Does pktgen-dpdk have a feature to control PPS rate?" on Oct 2, 2024
@KeithWiles
Collaborator

You can set the rate, but you can't set a duration. If you write a Lua script, you can control the duration by starting/stopping the traffic via the commands; see the sketch below. If you do not want to write a script, you have to type the commands by hand.
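
For example, a minimal Lua sketch of the start/stop approach (my sketch, not one of the bundled scripts; pktgen.set, pktgen.start, pktgen.stop, and pktgen.delay are the calls the scripts in the scripts directory use, and the rate values are placeholders). It would be run with pktgen's -f option:

    -- rate_steps.lua: two traffic steps of 10 seconds each on port 0
    local port = "0"

    pktgen.set(port, "rate", 10)    -- rate is a percent of wire rate, not PPS
    pktgen.start(port)
    pktgen.delay(10 * 1000)         -- run for 10 seconds (delay is in milliseconds)

    pktgen.set(port, "rate", 20)    -- the rate can be changed while traffic runs
    pktgen.delay(10 * 1000)         -- run for another 10 seconds

    pktgen.stop(port)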

@winnie022
Author

Hello, @KeithWiles

Thank you for the response.
If the rate option can achieve a controlled PPS, it is OK without a duration, but how do I guarantee a certain PPS (e.g., 10 MPPS)?
Is there any reference script for controlling the duration by starting/stopping the traffic?

@KeithWiles
Collaborator

The rate is a percent of the port's max wire rate, and it interacts with the packet size, so you have to play with the rate value to get the MPPS you want. If the NIC max rate is 100 Gbit/s with 64-byte frames, you will need to set the rate to some value to get the correct MPPS. At 100 Gbit/s with 64-byte frames the max is about 148.8 MPPS, so the rate would be less than 10% if I did the math right.
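
As a worked example of that math (my own sketch, assuming the usual 20 bytes of per-frame wire overhead: 7-byte preamble, 1-byte start-of-frame delimiter, and 12-byte inter-frame gap):

    -- rate_for_mpps.lua: estimate the rate percent needed for a target MPPS
    local function rate_percent(target_mpps, frame_bytes, link_gbps)
        local wire_bits = (frame_bytes + 20) * 8        -- bits per frame on the wire
        local max_mpps  = link_gbps * 1000 / wire_bits  -- (Gbit/s * 1e9) / bits / 1e6
        return 100 * target_mpps / max_mpps, max_mpps
    end

    print(rate_percent(10, 64, 100))    --> ~6.72 (percent), ~148.8 (max MPPS)

So 10 MPPS of 64-byte frames on a 100G link needs a rate of roughly 6.7%.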

If you look in the scripts directory you can find some Lua scripts, including an RFC2544 test, that control traffic this way.

@winnie022
Author

Hello, @KeithWiles

Thank you for your prompt response. It is really helpful.
I will take a look at the scripts.

I have another observation.

When I use two TX queues, e.g. ./pktgen -l 0-33 -- -m [1:2-3].[0] --txd=1024 --rxd=1024 -T on the TX node, the TX node sends twice as many packets to the RX node as with one TX queue.
However, on the RX node, even increasing the RX queues (./pktgen -l 0-33 -- -m [2-3:1].[0] --txd=1024 --rxd=1024 -T -v) does not capture all the sent packets. It receives the same number of packets as when I use one TX queue and one RX queue.

When I replace the RX node with dpdk-testpmd, it receives a much higher packet count, though there are some drops (~7M packets are dropped). I am not sure why they are not the same.

Are there any missing flags or options? How do I scale up the RX side?

I just generate two flows:

    range 0 src port start 0
    range 0 src port min 0
    range 0 src port max 1
    range 0 src port inc 1
    range 0 dst port start 0
    range 0 dst port min 0
    range 0 dst port max 1
    range 0 dst port inc 1

Do you have any ideas?

@KeithWiles
Collaborator

Please tell me the DPDK and Pktgen versions you are using; I only test the latest DPDK and Pktgen versions, from DPDK.org and github.com/pktgen/Pktgen-DPDK. Also, what NIC is being used?

When sending on two threads, the packets are pulled from the same pool of packets, so 2x is normal for two TX queues (and 3x for three). I would not mess with the txd/rxd options; the NIC driver normally has good values already. Also, -l 0-33 is using 34 cores, but the command line only needs 4 cores (0,1,2,3).

To get the RX side to work with more than one queue, the NIC uses RSS to distribute the packets across all queues. This means the 5-tuples of the received packets need to differ for the NIC hardware to spread them across all of the RX queues.
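
As a hypothetical example in the same range-command style shown above, widening the source IP and port ranges on the sender gives RSS more 5-tuple entropy to hash on (the addresses and counts here are placeholders):

    range 0 src ip start 10.0.1.11
    range 0 src ip min 10.0.1.11
    range 0 src ip max 10.0.1.26
    range 0 src ip inc 0.0.0.1
    range 0 src port start 0
    range 0 src port min 0
    range 0 src port max 255
    range 0 src port inc 1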

@winnie022
Author

@KeithWiles
I am sorry. Here are the details.

Environment:

Pktgen version: 23.10.0
DPDK version: 23.11.0
OS distribution: Ubuntu 24.04.1 LTS
Arch: x86-64
Kernel version: 6.8.0-1014-gcp
NIC: gve

Right. I can change it to -l 0-4.

On the RX node, the packets are evenly distributed (I can check it with page stats), but half of the packets are dropped.
On the TX node:


 Rate/s        ipackets        opackets         ibytes MB         obytes MB          errors
  Q  0:               0        36169728                 0              2170               0
  Q  1:               0        36293888                 0              2177               0

On the RX node:

 Rate/s        ipackets        opackets         ibytes MB         obytes MB          errors
  Q  0:        15866816               0               952                 0               0
  Q  1:        15445632               0               926                 0               0

If I use dpdk-testpmd instead of pktgen-dpdk on the RX side, it shows:

Port statistics ====================================
  ######################## NIC statistics for port 0  ########################
  RX-packets: 994240252  RX-missed: 0          RX-bytes:  59654415100
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:     66308573          Rx-bps:  31828115080
  Tx-pps:            0          Tx-bps:            0

I guess I missed some config or setup on the RX node.

@KeithWiles
Collaborator

Can you please update to the latest Pktgen (24.07.1) and the latest DPDK? I made some performance updates in the latest version.

I would have used `-l 0-3` instead.

@winnie022
Author

Sure. I will try it now. The most recent tag is v24.07.
Is that the version you suggest?

@winnie022
Author

@KeithWiles

I updated Pktgen to 24.07.1 and DPDK to 24.07.0. It worked well and seems to have improved the performance.

However, RX still does not catch up with the TX PPS.
TX


 Rate/s     ipackets     opackets    ibytes MB    obytes MB       errors       bursts
  Q  0:            0     47864928            0         2871            0      1492451
  Q  1:            0     48079008            0         2884            0      1500634

RX

 Rate/s     ipackets     opackets    ibytes MB    obytes MB       errors       bursts
  Q  0:     31701816            0         1902            0            0            0
  Q  1:     31646984            0         1898            0            0            0

Do I need to use a cfg file to initialize something?
I just started pktgen-dpdk with ./pktgen -l 0-33 -- -m [1:2-3].[0] --txd=1024 --rxd=1024 -T

@KeithWiles
Collaborator

I worry about setting the TX/RX descriptor sizes; did you try this test without those options?

The above command line means the RX side is using one core and TX is using two cores. A single core has a limit to the number of packets it can process for RX and TX, and the RX side has to do a bit more work than the TX side. You seem to be able to distribute the RX packets across two queues, but I need to understand the full configuration of Pktgen.

Change the command line to ./pktgen -l 0-2 -- -m [1:2].0 -T, then try ./pktgen -l 0-4 -- -m [1-2:3-4].0 -T.

I use ./tools/run.py with a cfg file to make this easier, but it is up to you; see the sketch below.
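
For reference, the cfg files are small Python-style dictionaries read by run.py; here is a minimal sketch modeled on cfg/default.cfg (the exact keys and the device ID are assumptions on my part, so check the samples in the cfg directory):

    description = 'Simple single-port RX/TX test'

    # Setup stage: devices to bind before launching pktgen
    setup = {
        'exec': ('sudo', '-E'),
        'devices': ('00:04.0',),
    }

    # Run stage: equivalent to ./pktgen -l 0-2 -- -m [1:2].0 -T
    run = {
        'exec': ('sudo', '-E'),
        'app_name': 'pktgen',
        'cores': '0-2',
        'opts': ('-T',),
        'map': ('[1:2].0',),
    }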

What mode are you sending packets in: single, sequence, range, or pcap?

@winnie022
Author

winnie022 commented Oct 14, 2024

Hello, @KeithWiles
Sorry for the late response. I was off for several days.

I have another issue now with the recent version (Pktgen 24.07.1, DPDK 24.07.0):
TX with pktgen does not work. I noticed there were some changes around promiscuous mode in the recent version,
so I confirmed that the MAC address of the node running pktgen is correct.
Interestingly, RX with pktgen works.

With Pktgen 23.10.0 (DPDK 23.11.0), I can send packets (with pktgen) and receive them (with testpmd).
Do you have any ideas or suggestions?

Please find the details below.

  • Working version

sudo ./pktgen -l 0-2 -- -m [1:2].0 -T -v
 
*** Copyright(c) <2010-2023>, Intel Corporation. All rights reserved.
*** Pktgen  created by: Keith Wiles -- >>> Powered by DPDK <<<
 
EAL: Detected CPU lcores: 8
EAL: Detected NUMA nodes: 1
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Probe PCI driver: net_gve (1ae0:0042) device: 0000:00:04.0 (socket -1)
TELEMETRY: No legacy callbacks, legacy socket not created
Lua 5.4.6  Copyright (C) 1994-2023 Lua.org, PUC-Rio
>>> Packet Max Burst 128/128, RX Desc 1024, TX Desc 2048, mbufs/port 24576, mbuf cache 128
    0: net_gve         0     -1   1ae0:42/0000:00:04.0
 
 
=== port to lcore mapping table (# lcores 3) ===
   lcore:    0       1       2      Total
port   0: ( D: T) ( 1: 0) ( 0: 1) = ( 1: 1)
Total   : ( 0: 0) ( 1: 0) ( 0: 1)
  Display and Timer on lcore 0, rx:tx counts per port/lcore
 
>>>> Configuring 1 ports, MBUF Size 10240, MBUF Cache Size 128
Lcore:
    1, RX-Only
                RX_cnt( 1): (pid= 0:qid= 0)
    2, TX-Only
                TX_cnt( 1): (pid= 0:qid= 0)
 
Port :
    0, nb_lcores  2, private 0x59cdbdfc0c80, lcores:  1  2
 
 
Initialize Port 0 -- RxQ 1, TxQ 1
Reducing MTU from 1500 to 1460
    Create: 'Default RX  0:0 ' - Memory used (MBUFs  24576 x size  10240) = 245761 KB
    Create: 'Special TX  0:0 ' - Memory used (MBUFs   1024 x size  10240) =  10241 KB
 
    Create: 'Default TX  0:0 ' - Memory used (MBUFs  24576 x size  10240) = 245761 KB
    Create: 'Range TX    0:0 ' - Memory used (MBUFs  24576 x size  10240) = 245761 KB
    Create: 'Rate TX     0:0 ' - Memory used (MBUFs  24576 x size  10240) = 245761 KB
    Create: 'Sequence TX 0:0 ' - Memory used (MBUFs  24576 x size  10240) = 245761 KB
 
                                                         Port memory used = 1239046 KB
   Src MAC 42:01:0a:00:01:0b <Promiscuous is Disabled>
Device Info (0000:00:04.0, if_index:0, flags 00000000)
   min_rx_bufsize : 1024  max_rx_pktlen     : 1474  hash_key_size :    0
   max_rx_queues  :    4  max_tx_queues     :    4  max_vfs       :    0
   max_mac_addrs  :    1  max_hash_mac_addrs:    0  max_vmdq_pools:    0
   vmdq_queue_base:    0  vmdq_queue_num    :    0  vmdq_pool_base:    0
   nb_rx_queues   :    1  nb_tx_queues      :    1  speed_capa    : 00000000
 
   flow_type_rss_offloads:0000000000000000  reta_size             :    0
   rx_offload_capa       :TCP_LRO
   tx_offload_capa       :UDP_CKSUM TCP_CKSUM SCTP_CKSUM TCP_TSO MULTI_SEGS
   rx_queue_offload_capa :0000000000000000  tx_queue_offload_capa :0000000000000000
   dev_capa              :0000000000000000
 
  RX Conf:
     pthresh        :    0 hthresh          :    0 wthresh        :    0
     Free Thresh    :   64 Drop Enable      :    0 Deferred Start :    0
     offloads       :0000000000000000
  TX Conf:
     pthresh        :    0 hthresh          :    0 wthresh        :    0
     Free Thresh    :   32 RS Thresh        :   32 Deferred Start :    0
     offloads       :0000000000000000
  Rx: descriptor Limits
     nb_max         : 4096  nb_min          : 1024  nb_align      :    1
     nb_seg_max     :    0  nb_mtu_seg_max  :    0
  Tx: descriptor Limits
     nb_max         : 4096  nb_min          : 1024  nb_align      :    1
     nb_seg_max     :    0  nb_mtu_seg_max  :    0
  Rx: Port Config
     burst_size     :    0  ring_size       : 1024  nb_queues     :    0
  Tx: Port Config
     burst_size     :    0  ring_size       : 1024  nb_queues     :    0
  Switch Info: (null)
     domain_id      :65535  port_id         :    0
 
                                                                      Total memory used = 1239046 KB
 
 
=== Display processing on lcore 0
  RX processing lcore   1: rx:  1 tx:  0
  Using port/qid 0/0 for Rx on lcore id 1
 
  TX processing lcore   2: rx:  0 tx:  1
  Using port/qid 0/0 for Tx on lcore id 2
| Ports 0-0 of 1   <Main Page>  Copyright(c) <2010-2023>, Intel Corporation
  Port:Flags        : 0:-------      Single
Link State          :     <UP-935134639-FD>     ---Total Rate---
Pkts/s Rx           :                     0                    0
       Tx           :                     0                    0
MBits/s Rx/Tx       :                   0/0                  0/0
Pkts/s Rx Max       :                     1                    1
       Tx Max       :                     0                    0
Broadcast           :                     0
Multicast           :                     0
Sizes 64            :                     0
      65-127        :                     0
      128-255       :                     0
      256-511       :                     0
      512-1023      :                     0
      1024-1518     :                     0
Runts/Jumbos        :                  64/0
ARP/ICMP Pkts       :                   0/0
Errors Rx/Tx        :                   0/0
Total Rx Pkts       :                     1
      Tx Pkts       :                     0
      Rx/Tx MBs     :                   0/0
TCP Flags           :                .A....
TCP Seq/Ack         :           74616/74640
Pattern Type        :               abcd...
Tx Count/% Rate     :         Forever /100%
Pkt Size/Rx:Tx Burst:           64 / 64: 64
TTL/Port Src/Dest   :        64/ 1234/ 5678
Pkt Type:VLAN ID    :       IPv4 / TCP:0001
802.1p CoS/DSCP/IPP :             0/  0/  0
VxLAN Flg/Grp/vid   :      0000/    0/    0
IP  Destination     :           192.168.1.1
    Source          :        192.168.0.1/24
MAC Destination     :     00:00:00:00:00:00
    Source          :     42:01:0a:00:01:0b
NUMA/Vend:ID/PCI    :-1/1ae0:42/0000:00:04.0

-- Pktgen 23.10.0 (DPDK 23.11.0)  Powered by DPDK  (pid:67289) ----------------

# Update information on pktgen
set 0 dst mac 42:01:0a:00:01:01
set 0 dst ip 10.0.1.10
set 0 src ip 10.0.1.11/32
set 0 sport 54321
set 0 dport 51234
set 0 type ipv4
set 0 proto udp
start 0


Pktgen:/> page main
| Ports 0-0 of 1   <Main Page>  Copyright(c) <2010-2023>, Intel Corporation
  Port:Flags        : 0:-------      Single
Link State          :     <UP-935134639-FD>     ---Total Rate---
Pkts/s Rx           :                     2                    2
       Tx           :              15624960             15624960
MBits/s Rx/Tx       :                0/9999               0/9999
Pkts/s Rx Max       :                     2                    2
       Tx Max       :              16237952             16237952
Broadcast           :                     0
Multicast           :                     0
Sizes 64            :                    64
      65-127        :                  1536
      128-255       :                     0
      256-511       :                     0
      512-1023      :                     0
      1024-1518     :                   128
Runts/Jumbos        :               11072/0
ARP/ICMP Pkts       :                   0/0
Errors Rx/Tx        :              0/143965
Total Rx Pkts       :                   200
      Tx Pkts       :           22647508864
      Rx/Tx MBs     :            0/14494405
TCP Flags           :                .A....
TCP Seq/Ack         :           74616/74640
Pattern Type        :               abcd...
Tx Count/% Rate     :         Forever /100%
Pkt Size/Rx:Tx Burst:           64 / 64: 64
TTL/Port Src/Dest   :        64/54321/51234
Pkt Type:VLAN ID    :       IPv4 / UDP:0001
802.1p CoS/DSCP/IPP :             0/  0/  0
VxLAN Flg/Grp/vid   :      0000/    0/    0
IP  Destination     :             10.0.1.10
    Source          :             10.0.1.11
MAC Destination     :     42:01:0a:00:01:01
    Source          :     42:01:0a:00:01:0b
NUMA/Vend:ID/PCI    :-1/1ae0:42/0000:00:04.0
-- Pktgen 23.10.0 (DPDK 23.11.0)  Powered by DPDK  (pid:67289) ----------------


# RX output 

Port statistics ====================================
  ######################## NIC statistics for port 0  ########################
  RX-packets: 22098146823 RX-missed: 0          RX-bytes:  1325888811190
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:     17968467          Rx-bps:   8624864600
  Tx-pps:            0          Tx-bps:            0
  ############################################################################

With Pktgen 24.07.1 (DPDK 24.07.0), I cannot send packets (with pktgen) to the RX node:

 sudo ./pktgen -l 0-2 -- -m [1:2].0 -T -v

*** Copyright(c) <2010-2024>, Intel Corporation. All rights reserved.
*** Pktgen  created by: Keith Wiles -- >>> Powered by <<<

EAL: Detected CPU lcores: 8
EAL: Detected NUMA nodes: 1
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
Lua 5.4.6  Copyright (C) 1994-2023 Lua.org, PUC-Rio
  Create: 'RX-L1/P0/S4294967295' - Memory used (MBUFs  16384 x size   2176) =    34817 KB @ 0x17fefdf00
  Create: 'TX-L1/P0/S4294967295' - Memory used (MBUFs  16384 x size   2176) =    34817 KB @ 0x17d8f4c40
  Create: 'SP-L1/P0/S4294967295' - Memory used (MBUFs   1024 x size   2176) =     2177 KB @ 0x17b3ed980
                                                      Total memory used =    71811 KB
>>> Packet Max Burst 128/128, RX Desc 1024, TX Desc 2048, mbufs/port 24576, mbuf cache 128
Initialize Port 0 ...
                                                         Port memory used =  71811 KB
** Device Info (0000:00:04.0, if_index:0, flags 00000000) **
   min_rx_bufsize : 1024  max_rx_pktlen     : 1474  hash_key_size :   40
   max_rx_queues  :    4  max_tx_queues     :    4  max_vfs       :    0
   max_mac_addrs  :    1  max_hash_mac_addrs:    0  max_vmdq_pools:    0
   vmdq_queue_base:    0  vmdq_queue_num    :    0  vmdq_pool_base:    0
   nb_rx_queues   :    1  nb_tx_queues      :    1  speed_capa    : 00000000

   flow_type_rss_offloads:0000000000038d34  reta_size             :  128
   rx_offload_capa       :IPV4_CKSUM UDP_CKSUM TCP_CKSUM TCP_LRO RSS_HASH
   tx_offload_capa       :IPV4_CKSUM UDP_CKSUM TCP_CKSUM SCTP_CKSUM TCP_TSO MULTI_SEGS
   rx_queue_offload_capa :0000000000000000  tx_queue_offload_capa :0000000000000000
   dev_capa              :0000000000000000

  RX Conf:
     pthresh        :    0 hthresh          :    0 wthresh        :    0
     Free Thresh    :   64 Drop Enable      :    0 Deferred Start :    0
     offloads       :0000000000000000
  TX Conf:
     pthresh        :    0 hthresh          :    0 wthresh        :    0
     Free Thresh    :   32 RS Thresh        :   32 Deferred Start :    0
     offloads       :0000000000000000
  Rx: descriptor Limits
     nb_max         : 4096  nb_min          :    0  nb_align      :    1
     nb_seg_max     :    0  nb_mtu_seg_max  :    0
  Tx: descriptor Limits
     nb_max         : 4096  nb_min          :    0  nb_align      :    1
     nb_seg_max     :    0  nb_mtu_seg_max  :    0
  Rx: Port Config
     burst_size     :    0  ring_size       : 1024  nb_queues     :    0
  Tx: Port Config
     burst_size     :    0  ring_size       : 1024  nb_queues     :    0
  Switch Info: (null)
     domain_id      :65535  port_id         :    0



Port DevName          Index NUMA PCI Information   Src MAC           Promiscuous
  0  net_gve              0   -1 1ae0:42/0000:00:04.0 42:01:0a:00:01:0a <Disabled>


=== Display processing on lcore 0
RX lid   1, pid  0, qid  0, Mempool RX-L1/P0/S4294967295 @ 0x17fefdf00
TX lid   2, pid  0, qid  0, Mempool TX-L1/P0/S4294967295 @ 0x17d8f4c40
- <Main Page> Ports 0-0 of 1  Copyright(c) <2010-2024>, Intel Corporation
Port:Flags          :   0:-------        Unkn
Link State          :       <UP-935134639-FD>        ---Total Rate---
Pkts/s Rx           :                       0                       0
       Tx           :                       0                       0
MBits/s Rx/Tx       :                     0/0                     0/0
Total Rx Pkts       :                       0                       0
      Tx Pkts       :                       0                       0
      Rx/Tx MBs     :                     0/0
Pkts/s Rx Max       :                       0
       Tx Max       :                       0
Errors Rx/Tx        :                     0/0
Broadcast           :                       0
Multicast           :                       0
Sizes 64            :                       0
      65-127        :                       0
      128-255       :                       0
      256-511       :                       0
      512-1023      :                       0
      1024-1518     :                       0
Runts/Jumbos        :                     0/0
ARP/ICMP Pkts       :                     0/0
Tx Count/% Rate     :           Forever /100%
Pkt Size/Rx:Tx Burst:             64 / 64: 32
Port Src/Dest       :              1234/ 5678
Pkt Type:VLAN ID    :         IPv4 / TCP:0001
IP  Destination     :             192.168.1.1
    Source          :          192.168.0.1/24
MAC Destination     :       42:01:0a:00:01:0a
    Source          :       42:01:0a:00:01:0a
NUMA/Vend:ID/PCI    : -1/1ae0:42/0000:00:04.0
-- Pktgen 24.07.1  Powered by DPDK 24.07.0 (pid:70500) ------------------------


# Update information on pktgen
set 0 dst mac 42:01:0a:00:01:01
set 0 dst ip 10.0.1.10
set 0 src ip 10.0.1.11/32
set 0 sport 54321
set 0 dport 51234
set 0 type ipv4
set 0 proto udp
start 0


# RX output 

Port statistics ====================================
  ######################## NIC statistics for port 0  ########################
  RX-packets: 23         RX-missed: 0          RX-bytes:  1519
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################

However, RX on pktgen works: I can use testpmd as the sender and pktgen as the receiver.


Port statistics ====================================
  ######################## NIC statistics for port 0  ########################
  RX-packets: 0          RX-missed: 0          RX-bytes:  0
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 1072753883 TX-errors: 37         TX-bytes:  68656248512

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:     38218083          Tx-bps:  19567658896
  ############################################################################

- <Main Page> Ports 0-0 of 1  Copyright(c) <2010-2024>, Intel Corporation
Port:Flags          :   0:-------        Unkn
Link State          :       <UP-935134639-FD>        ---Total Rate---
Pkts/s Rx           :                33830392                33830392
       Tx           :                       0                       0
MBits/s Rx/Tx       :                 23816/0                 23816/0
Total Rx Pkts       :              1334884564                33830392
      Tx Pkts       :                       0                       0
      Rx/Tx MBs     :                939758/0
Pkts/s Rx Max       :                33830392
       Tx Max       :                       0
Errors Rx/Tx        :                     0/0
Broadcast           :                       0
Multicast           :                       0
Sizes 64            :                       0
      65-127        :             21786784672
      128-255       :                       0
      256-511       :                       0
      512-1023      :                       0
      1024-1518     :                       0
Runts/Jumbos        :                  1120/0
ARP/ICMP Pkts       :                     0/0
Tx Count/% Rate     :           Forever /100%
Pkt Size/Rx:Tx Burst:             64 / 64: 32
Port Src/Dest       :             54321/51234
Pkt Type:VLAN ID    :         IPv4 / UDP:0001
IP  Destination     :               10.0.1.10
    Source          :               10.0.1.11
MAC Destination     :       42:01:0a:00:01:01
    Source          :       42:01:0a:00:01:0a
NUMA/Vend:ID/PCI    : -1/1ae0:42/0000:00:04.0
-- Pktgen 24.07.1  Powered by DPDK 24.07.0 (pid:70523) ------------------------



@KeithWiles
Collaborator

Please try the fixes-for-release branch, as I have been working on a number of fixes for the next release. Sorry, the day job is sucking up a lot of my time. :-(
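
If it helps, checking that branch out from a standard clone of github.com/pktgen/Pktgen-DPDK would look something like:

    git fetch origin
    git checkout fixes-for-release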

@winnie022
Author

Hello, @KeithWiles

Thank you for the support. I will try the branch and let you know.
I really appreciate your help while you have so much else going on!

@winnie022
Author

Hello, @KeithWiles

I tried the branch:

git branch
* (HEAD detached at origin/fixes-for-release)

make buildlua -j$(nproc)

[41/72] Compiling C object lib/lua/liblua.a.p/lua_pktmbuf.c.o
FAILED: lib/lua/liblua.a.p/lua_pktmbuf.c.o
cc -Ilib/lua/liblua.a.p -Ilib/lua -I../lib/lua -Ilib/common -I../lib/common -Ilib/utils -I../lib/utils -Ilib/vec -I../lib/vec -I/usr/include/lua5.4 -I/usr/local/include -fdiagnostics-color=always -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wextra -Wpedantic -Werror -O3 -march=native -mavx -mavx2 -DALLOW_EXPERIMENTAL_API -D_GNU_SOURCE -Wno-pedantic -Wno-format-truncation -DLUA_ENABLED -fPIC -include rte_config.h -march=native -mrtm -MD -MQ lib/lua/liblua.a.p/lua_pktmbuf.c.o -MF lib/lua/liblua.a.p/lua_pktmbuf.c.o.d -o lib/lua/liblua.a.p/lua_pktmbuf.c.o -c ../lib/lua/lua_pktmbuf.c
../lib/lua/lua_pktmbuf.c: In function ‘_new’:
../lib/lua/lua_pktmbuf.c:56:63: error: implicit declaration of function ‘pg_socket_id’; did you mean ‘rte_socket_id’? [-Werror=implicit-function-declaration]
   56 |     mp = rte_pktmbuf_pool_create(poolname, n, csize, 0, size, pg_socket_id());
      |                                                               ^~~~~~~~~~~~
      |                                                               rte_socket_id
cc1: all warnings being treated as errors
[50/72] Compiling C object app/pktgen.p/pktgen-cmds.c.o
ninja: build stopped: subcommand failed.
make: *** [Makefile:21: buildlua] Error 1

I got this error, updated the code, and was able to compile:

git diff

diff --git a/lib/lua/lua_pktmbuf.c b/lib/lua/lua_pktmbuf.c
index 64932e9..668623f 100644
--- a/lib/lua/lua_pktmbuf.c
+++ b/lib/lua/lua_pktmbuf.c
@@ -53,7 +53,7 @@ _new(lua_State *L)
     mbp = (pktmbuf_t **)lua_newuserdata(L, sizeof(void *));

     snprintf(poolname, sizeof(poolname), "%s-%d", name, lua_inst++);
-    mp = rte_pktmbuf_pool_create(poolname, n, csize, 0, size, pg_socket_id());
+    mp = rte_pktmbuf_pool_create(poolname, n, csize, 0, size, rte_socket_id());
     if (mp == NULL)
         luaL_error(L, "Failed to create MBUF Pool");
     *mbp = mp;

However, TX on pktgen still did not work, while RX still works.

It feels like the TX path in pktgen is somehow filtering the packets.

@KeithWiles
Collaborator

Ah, I forgot to test building with Lua. I will fix it, thanks.

@KeithWiles
Collaborator

KeithWiles commented Oct 15, 2024

I have fixed the build problem and pushed the changes. Thanks again!

@winnie022
Author

@KeithWiles

Thank you for fixing it promptly.

Do you have any speculation as to why TX did not work while RX works?
Or any guidance on where to debug (tools, code pointers, etc.)?
I am thinking of reading the code (packet header generation and the packet TX path).
