Hi everyone,

I am running FreeBSD and want pkt-gen to push more traffic. I do not have an ixgb (10 Gbit) card, only several igb (1 Gbit) interfaces, so I want to combine them to generate 4-5 Gbps against a test server. But I ran into a problem:

1. Start the first pkt-gen process: works, the flow reaches full line rate.
2. Start a second process: also works.
3. Start a third process: does not work.

The third one reports "Cannot allocate memory". I would like igb1, igb2, igb3 and igb4 to transmit simultaneously, but right now only two interfaces can work at the same time; the third one will not open. Where is the problem? Any help is much appreciated.
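My first guess is that netmap's pre-allocated memory pool runs out when the third interface registers, but I have not verified this. netmap(4) documents sysctls that size the pool, so a first check could look like this (a sketch, assuming the dev.netmap names from the manpage exist on this build):
Code:
# Inspect the netmap memory allocator parameters (sizes and object
# counts for the netmap_if, ring and buffer pools):
root@test [~]# sysctl dev.netmap
# Of particular interest: dev.netmap.buf_num, dev.netmap.ring_num and
# dev.netmap.if_num - they bound how many interfaces can register.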
1. Start the first process (igb1): works, full line rate.
Code:
root@test [~]# ./pkt-gen -f tx -i igb1 -d 192.168.53.27 -D 6C:62:6D:66:7A:E5 -a 2
104.670189 main [1857] interface is igb1
104.670213 main [1968] running on 1 cpus (have 8)
104.670276 extract_ip_range [362] range is 10.0.0.1:0 to 10.0.0.1:0
104.670283 extract_ip_range [362] range is 192.168.53.27:0 to 192.168.53.27:0
104.670308 main [2047] g.ifname = netmap:igb1
105.016926 main [2070] mapped 114000KB at 0x28c00000
105.016949 main [2072] nmreq: slot: tx=1024, rx=1024; ring: tx=8, rx=8
Sending on netmap:igb1: 8 queues, 1 threads and 1 cpus.
10.0.0.1 -> 192.168.53.27 (00:00:00:00:00:00 -> 6C:62:6D:66:7A:E5)
105.016990 main [2158] Sending 512 packets every 0.000000000 s
105.016993 main [2160] Wait 2 secs for phy reset
107.054030 main [2162] Ready ...
107.054152 sender_body [1147] start, fd 3 main_fd 3
107.054282 sender_body [1183] before goto while: n = 0, sent = 0
107.121706 sender_body [1222] drop copy
108.054945 main_thread [1647] 1487647 pps (1.489 Mpkts 1.000 Gbps in 1000798 usec) 2.25 avg_batch
109.055942 main_thread [1647] 1488174 pps (1.490 Mpkts 1.001 Gbps in 1000997 usec) 2.21 avg_batch
110.056942 main_thread [1647] 1488171 pps (1.490 Mpkts 1.001 Gbps in 1001001 usec) 2.21 avg_batch
111.057945 main_thread [1647] 1488166 pps (1.490 Mpkts 1.001 Gbps in 1001002 usec) 2.21 avg_batch
2. Start a second process (igb2): also works.
Code:
root@test [~]# ./pkt-gen -f tx -i igb2 -d 192.168.53.27 -D 6C:62:6D:66:7A:E5 -a 3
113.226786 main [1857] interface is igb2
113.226809 main [1968] running on 1 cpus (have 8)
113.226883 extract_ip_range [362] range is 10.0.0.1:0 to 10.0.0.1:0
113.226892 extract_ip_range [362] range is 192.168.53.27:0 to 192.168.53.27:0
113.226921 main [2047] g.ifname = netmap:igb2
113.244908 main [2070] mapped 114000KB at 0x28c00000
113.244919 main [2072] nmreq: slot: tx=1024, rx=1024; ring: tx=8, rx=8
Sending on netmap:igb2: 8 queues, 1 threads and 1 cpus.
10.0.0.1 -> 192.168.53.27 (00:00:00:00:00:00 -> 6C:62:6D:66:7A:E5)
113.244955 main [2158] Sending 512 packets every 0.000000000 s
113.244959 main [2160] Wait 2 secs for phy reset
115.245944 main [2162] Ready ...
115.246081 sender_body [1147] start, fd 3 main_fd 3
115.246222 sender_body [1183] before goto while: n = 0, sent = 0
115.313822 sender_body [1222] drop copy
116.246942 main_thread [1647] 1487389 pps (1.489 Mpkts 1.000 Gbps in 1000862 usec) 2.47 avg_batch
117.248939 main_thread [1647] 1488177 pps (1.491 Mpkts 1.002 Gbps in 1001998 usec) 2.43 avg_batch
118.250943 main_thread [1647] 1488160 pps (1.491 Mpkts 1.002 Gbps in 1002004 usec) 2.43 avg_batch
119.252938 main_thread [1647] 1488186 pps (1.491 Mpkts 1.002 Gbps in 1001995 usec) 2.43 avg_batch
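Note that both runs report the same mapping, 114000KB at 0x28c00000, so the two processes appear to share one global netmap pool rather than each getting its own region. If useful, that could be double-checked from outside with the PIDs shown in the top output below (untested sketch):
Code:
# procstat -v lists a process's VM mappings; the shared netmap region
# should show up at the same address in both pkt-gen processes:
root@test [~]# procstat -v 39804 | grep -i 28c00000
root@test [~]# procstat -v 39809 | grep -i 28c00000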
3. Start a third process (igb3): fails with "Cannot allocate memory".
Code:
root@test [~]# ./pkt-gen -f tx -i igb3 -d 192.168.53.27 -D 6C:62:6D:66:7A:E5 -a 4
349.715031 main [1857] interface is igb3
349.715055 main [1968] running on 1 cpus (have 8)
349.715130 extract_ip_range [362] range is 10.0.0.1:0 to 10.0.0.1:0
349.715142 extract_ip_range [362] range is 192.168.53.27:0 to 192.168.53.27:0
349.715171 main [2047] g.ifname = netmap:igb3
349.718176 nm_open [808] NIOCREGIF failed: Cannot allocate memory igb3
349.718184 main [2050] Unable to open netmap:igb3: Cannot allocate memory
349.718190 main [2123] aborting
Usage:
pkt-gen arguments
    -i interface            interface name
    -f function             tx rx ping pong
    -n count                number of iterations (can be 0)
    -t pkts_to_send         also forces tx mode
    -r pkts_to_receive      also forces rx mode
    -l pkt_size             in bytes excluding CRC
    -d dst_ip[:port[-dst_ip:port]]   single or range
    -s src_ip[:port[-src_ip:port]]   single or range
    -D dst-mac
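If the shared pool really is the limit, some back-of-the-envelope arithmetic fits the symptoms. Each igb interface registers 8 TX + 8 RX hardware rings with 1024 slots each, and I am assuming the default 2048-byte netmap buffers:
Code:
# (8 TX + 8 RX rings) x 1024 slots x 2048-byte buffers, in MB:
root@test [~]# echo $(( (8 + 8) * 1024 * 2048 / 1024 / 1024 ))
32
So each interface needs roughly 32 MB of buffers, plus ring and netmap_if metadata. Two interfaces (~64 MB) fit in the ~111 MB pool mapped above (114000KB); a third (~96 MB total, plus overhead) could plausibly exhaust one of the pools and produce exactly this ENOMEM.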
Here is the top output while the two working pkt-gen processes are running:
Code:
last pid: 39814; load averages: 0.40, 0.13, 0.09
30 processes: 3 running, 27 sleeping
CPU: 1.3% user, 0.0% nice, 22.6% system, 3.3% interrupt, 72.9% idle
Mem: 5644K Active, 252M Inact, 317M Wired, 90M Buf, 2417M Free
Swap: 382M Total, 382M Free
PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
39804 root 2 92 0 125M 18516K RUN 2 0:12 69.78% pkt-gen
39809 root 2 90 0 125M 18516K CPU3 3 0:10 62.70% pkt-gen
1612 root 1 20 0 15916K 6796K select 6 0:10 0.00% httpd
1629 root 1 20 0 10164K 1920K nanslp 6 0:02 0.00% cron
1404 root 1 20 0 10128K 1764K select 7 0:02 0.00% syslogd
Test environment
=============================
Operating system:
Code:
root@test [netmap.git]# uname -sa
FreeBSD test 10.1-RELEASE FreeBSD 10.1-RELEASE #0: Wed Aug 19 12:39:07 CST 2015     root@vmt:/usr/src/sys/i386/compile/GENERIC_NETMAP i386
root@test [netmap.git]# sysctl -a | grep hw.mo
hw.model: Intel(R) Xeon(R) CPU E5606 @ 2.13GHz (8-core)
root@test [netmap.git]# sysctl -a | grep mem
kern.ipc.maxmbufmem: 216006656
device mem
vm.kmem_size: 432013312
vm.kmem_zmax: 65536
vm.kmem_size_min: 12582912
vm.kmem_size_max: 432013312
vm.kmem_size_scale: 3
vm.kmem_map_size: 154939392
vm.kmem_map_free: 277073920
vm.lowmem_period: 10
vfs.ufs.dirhash_maxmem: 2097152
vfs.ufs.dirhash_mem: 1224563
vfs.ufs.dirhash_lowmemcount: 0
hw.physmem: 3190775808
hw.usermem: 2875990016
hw.realmem: 4194304
hw.cbb.start_memory: 2281701376
hw.pci.host_mem_start: 2147483648
p1003_1b.memlock: 0
p1003_1b.memlock_range: 0
p1003_1b.memory_protection: 0
p1003_1b.shared_memory_objects: 200112
dev.xen.balloon.low_mem: 0
dev.xen.balloon.high_mem: 0
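One thing that worries me: this is an i386 kernel, and vm.kmem_size above is only about 412 MB, so even if the netmap pool can be enlarged, the kernel may not have much room to grow it. A sketch of what I would try, with placeholder values (untested; I am assuming the allocator re-reads these sysctls while no netmap client is active):
Code:
# Raise the buffer pool before the first pkt-gen starts; 65536 buffers
# at 2048 bytes each would be a 128 MB buffer pool (placeholder value):
root@test [~]# sysctl dev.netmap.buf_num=65536
# Note: on i386 the kernel address space caps vm.kmem_size; an amd64
# kernel would lift that ceiling.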
Code:
root@test [~]# ifconfig
igb0: flags=8842<BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
    options=403bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO>
    ether b0:51:8e:00:a5:1e
    nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
    media: Ethernet autoselect (1000baseT <full-duplex>)
    status: active
igb1: flags=8842<BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
    options=403bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO>
    ether b0:51:8e:00:a5:1f
    nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
    media: Ethernet autoselect (1000baseT <full-duplex>)
    status: active
igb2: flags=8842<BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
    options=403bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO>
    ether b0:51:8e:00:a5:20
    nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
    media: Ethernet autoselect (1000baseT <full-duplex>)
    status: active
igb3: flags=8842<BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
    options=403bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO>
    ether b0:51:8e:00:a5:21
    nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
    media: Ethernet autoselect (1000baseT <full-duplex>)
    status: active
igb4: flags=8c02<BROADCAST,OACTIVE,SIMPLEX,MULTICAST> metric 0 mtu 1500
    options=403bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO>
    ether b0:51:8e:00:a7:96
    nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
    media: Ethernet autoselect (1000baseT <full-duplex>)
    status: active
igb5: flags=8c02<BROADCAST,OACTIVE,SIMPLEX,MULTICAST> metric 0 mtu 1500
    options=403bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO>
    ether b0:51:8e:00:a7:97
    nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
    media: Ethernet autoselect (1000baseT <full-duplex>)
    status: active
igb6: flags=8c02<BROADCAST,OACTIVE,SIMPLEX,MULTICAST> metric 0 mtu 1500
    options=403bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO>
    ether b0:51:8e:00:a7:98
    nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
    media: Ethernet autoselect
    status: no carrier
igb7: flags=8c02<BROADCAST,OACTIVE,SIMPLEX,MULTICAST> metric 0 mtu 1500
    options=403bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO>
    ether b0:51:8e:00:a7:99
    nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
    media: Ethernet autoselect
    status: no carrier
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
    options=4219b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4,WOL_MAGIC,VLAN_HWTSO>
    ether 00:25:90:62:06:84
    inet 172.16.12.251 netmask 0xffff0000 broadcast 172.16.255.255
    inet 192.168.100.251 netmask 0xffffff00 broadcast 192.168.100.255
    nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
    media: Ethernet autoselect (1000baseT <full-duplex>)
    status: active
em1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
    options=4219b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4,WOL_MAGIC,VLAN_HWTSO>
    ether 00:25:90:62:06:85
    nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
    media: Ethernet autoselect
    status: no carrier
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
    options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
    inet6 ::1 prefixlen 128
    inet6 fe80::1%lo0 prefixlen 64 scopeid 0xb
    inet 127.0.0.1 netmask 0xff000000
    nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
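Another angle (untested): each igb port comes up with 8 queue pairs because the box has 8 CPUs, and every queue pair costs netmap rings and buffers. Capping the driver's queue count should shrink the per-interface footprint; hw.igb.num_queues is a loader tunable in FreeBSD 10.x, and the value 2 below is just an example:
Code:
# /boot/loader.conf - limit each igb port to 2 queue pairs, cutting
# the netmap rings/buffers needed per interface by a factor of 4:
hw.igb.num_queues=2
With 2 queue pairs per port, the arithmetic above drops to roughly 8 MB of buffers per interface, which should let all four igb ports register at once.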
My pkt-gen (netmap) version is:
Code:
root@test [netmap.git]# git log
commit d4bf89c0804fcd222f37cfebf1ce920bf08e44f9
Author: Luigi Rizzo <rizzo@iet.unipi.it>
Date:   Sat Aug 15 18:12:32 2015 -0700

    actually catch packets from the host stack
.....