linux – Bonded gigabit interfaces capped at about 500 Mbps

This problem has been bothering me for several days now! I recently bonded the eth0/eth1 interfaces on several of our Linux servers into bond0, using the following configuration (identical on all systems):
DEVICE=bond0
ONBOOT=yes
BONDING_OPTS="miimon=100 mode=4 xmit_hash_policy=layer3+4 lacp_rate=1"
TYPE=Bond0
BOOTPROTO=none

DEVICE=eth0
ONBOOT=yes
SLAVE=yes
MASTER=bond0
HOTPLUG=no
TYPE=Ethernet
BOOTPROTO=none

DEVICE=eth1
ONBOOT=yes
SLAVE=yes
MASTER=bond0
HOTPLUG=no
TYPE=Ethernet
BOOTPROTO=none

Here you can see the bonding status:
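(For reference, the status below is what the kernel's bonding driver reports; on a stock setup it can be read with the command that follows, assuming the bond device is named bond0 as in the configuration above.)

cat /proc/net/bonding/bond0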
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer3+4 (1)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: fast
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
    Aggregator ID: 3
    Number of ports: 2
    Actor Key: 17
    Partner Key: 686
    Partner Mac Address: d0:67:e5:df:9c:dc

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:90:c9:95:74
Aggregator ID: 3
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:90:c9:95:75
Aggregator ID: 3
Slave queue ID: 0

ethtool output:
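(This presumably comes from running ethtool against the bond and each slave, i.e. something like:)

ethtool bond0
ethtool eth0
ethtool eth1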

Settings for bond0:
Supported ports: [ ]
Supported link modes:   Not reported
Supported pause frame use: No
Supports auto-negotiation: No
Advertised link modes:  Not reported
Advertised pause frame use: No
Advertised auto-negotiation: No
Speed: 2000Mb/s
Duplex: Full
Port: Other
PHYAD: 0
Transceiver: internal
Auto-negotiation: off
Link detected: yes

Settings for eth0:
    Supported ports: [ TP ]
    Supported link modes:   10baseT/Half 10baseT/Full 
                            100baseT/Half 100baseT/Full 
                            1000baseT/Full 
    Supported pause frame use: Symmetric
    Supports auto-negotiation: Yes
    Advertised link modes:  10baseT/Half 10baseT/Full 
                            100baseT/Half 100baseT/Full 
                            1000baseT/Full 
    Advertised pause frame use: Symmetric
    Advertised auto-negotiation: Yes
    Speed: 1000Mb/s
    Duplex: Full
    Port: Twisted Pair
    PHYAD: 1
    Transceiver: internal
    Auto-negotiation: on
    MDI-X: Unknown
    Supports Wake-on: pumbg
    Wake-on: g
    Current message level: 0x00000007 (7)
                   drv probe link
    Link detected: yes

Settings for eth1:
    Supported ports: [ TP ]
    Supported link modes:   10baseT/Half 10baseT/Full 
                            100baseT/Half 100baseT/Full 
                            1000baseT/Full 
    Supported pause frame use: Symmetric
    Supports auto-negotiation: Yes
    Advertised link modes:  10baseT/Half 10baseT/Full 
                            100baseT/Half 100baseT/Full 
                            1000baseT/Full 
    Advertised pause frame use: Symmetric
    Advertised auto-negotiation: Yes
    Speed: 1000Mb/s
    Duplex: Full
    Port: Twisted Pair
    PHYAD: 1
    Transceiver: internal
    Auto-negotiation: on
    MDI-X: Unknown
    Supports Wake-on: pumbg
    Wake-on: d
    Current message level: 0x00000007 (7)
                   drv probe link
    Link detected: yes

These servers are all connected to the same Dell PCT 7048 switch, with each server's two ports added to their own dynamic LAG and set to access mode. Everything looks fine, right? Yet here are the results of an iperf test from one server to another, with 2 threads:

------------------------------------------------------------
Client connecting to 172.16.8.183, TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 172.16.8.180 port 14773 connected with 172.16.8.183 port 5001
[  3] local 172.16.8.180 port 14772 connected with 172.16.8.183 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   561 MBytes   471 Mbits/sec
[  3]  0.0-10.0 sec   519 MBytes   434 Mbits/sec
[SUM]  0.0-10.0 sec  1.05 GBytes   904 Mbits/sec

Clearly both ports are being used, but neither is getting anywhere near 1 Gbps, which is what each of them did on its own before bonding. I have tried all kinds of different bonding modes, xmit hash types, MTU sizes, and so on, but I just cannot get the individual ports past 500 Mbits/sec... it is almost as though the bond itself is being capped at 1G somewhere! Does anyone have any ideas?
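(One quick way to see where the traffic actually goes on the sending host is to watch the per-slave transmit counters and the active bonding settings while a test is running. The paths below are the standard sysfs locations, using the interface names from the configuration above; treat this as a sketch rather than anything specific to this setup.)

watch -n 1 'grep -H "" /sys/class/net/eth0/statistics/tx_bytes /sys/class/net/eth1/statistics/tx_bytes'
cat /sys/class/net/bond0/bonding/mode
cat /sys/class/net/bond0/bonding/xmit_hash_policy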

Update 1/19: Thank you for the comments and questions; I will try to answer them here, since I am still very interested in getting the best performance out of these servers. First, I cleared the interface counters on the Dell switch and then let it serve production traffic for a while. Here are the counters for the 2 interfaces that make up the sending server's bond:

Port      InTotalPkts      InUcastPkts      InMcastPkts      InBcastPkts
--------- ---------------- ---------------- ---------------- ----------------
Gi1/0/9           63113512         63113440               72                0

Port      OutTotalPkts     OutUcastPkts     OutMcastPkts     OutBcastPkts
--------- ---------------- ---------------- ---------------- ----------------
Gi1/0/9           55453195         55437966             6075             9154

Port      InTotalPkts      InUcastPkts      InMcastPkts      InBcastPkts
--------- ---------------- ---------------- ---------------- ----------------
Gi1/0/44          61904622         61904552               48               22

Port      OutTotalPkts     OutUcastPkts     OutMcastPkts     OutBcastPkts
--------- ---------------- ---------------- ---------------- ----------------
Gi1/0/44          53780693         53747972               48            32673

The traffic appears to be load-balanced almost perfectly evenly, yet the bandwidth graphs still show each interface at almost exactly 500 Mbps when rx and tx are combined.

I can also say with certainty that, while it is serving production traffic, it is constantly pushing more bandwidth and talking to multiple other servers at the same time.

Edit #2, 1/19: Zordache, you made me think the iperf test might be limited on the receiving side by using only 1 port and therefore only 1 interface, so I ran 2 instances of iperf from server1 simultaneously, with "iperf -s" running on server2 and server3. I then ran iperf tests from server1 to servers 2 and 3 at the same time:

iperf -c 172.16.8.182 -P 2
------------------------------------------------------------
Client connecting to 172.16.8.182, TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 172.16.8.225 port 2239 connected with 172.16.8.182 port 5001
[  3] local 172.16.8.225 port 2238 connected with 172.16.8.182 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   234 MBytes   196 Mbits/sec
[  3]  0.0-10.0 sec   232 MBytes   195 Mbits/sec
[SUM]  0.0-10.0 sec   466 MBytes   391 Mbits/sec

iperf -c 172.16.8.183 -P 2
------------------------------------------------------------
Client connecting to 172.16.8.183, TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  3] local 172.16.8.225 port 5565 connected with 172.16.8.183 port 5001
[  4] local 172.16.8.225 port 5566 connected with 172.16.8.183 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   287 MBytes   241 Mbits/sec
[  4]  0.0-10.0 sec   292 MBytes   244 Mbits/sec
[SUM]  0.0-10.0 sec   579 MBytes   484 Mbits/sec

Adding the two SUMs together still does not get past 1 Gbps! As for your other question, my port-channel is configured with just the following two lines:

hashing-mode 7
switchport access vlan 60

Hashing mode 7 is Dell's "Enhanced hashing". The documentation does not say specifically what it does, but I have tried various combinations of the other 6 modes, which are:

Hash Algorithm Type
1 - Source MAC, VLAN, EtherType, source module and port Id
2 - Destination MAC, source module and port Id
3 - Source IP and source TCP/UDP port
4 - Destination IP and destination TCP/UDP port
5 - Source/Destination MAC, source MODID/port
6 - Source/Destination IP and source/destination TCP/UDP port
7 - Enhanced hashing mode

If you have any suggestions, I would be happy to try the other modes again, or to change the configuration on my port-channel.

Solution

On your servers, the bond is using the transmit hash policy Transmit Hash Policy: layer3+4, which basically means that the interface used for a given connection is chosen based on IP address and port.

Your iperf test runs between two systems, and iperf uses a single port. So all of the iperf traffic will most likely be limited to a single member of the bonded interface.
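(If you want a synthetic test that has a chance of exercising both members, one option is to give the flows different layer-4 tuples, for example by running iperf servers on two different ports and one client per port. The port numbers and the -t duration below are arbitrary examples, and whether the two flows actually land on different members still depends on how they happen to hash.)

# on the receiving server: one listener per port
iperf -s -p 5001 &
iperf -s -p 5002 &

# on the sending server: run both clients at the same time
iperf -c 172.16.8.183 -p 5001 -t 30 &
iperf -c 172.16.8.183 -p 5002 -t 30 &
wait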

I am not sure what you are seeing that makes you think both interfaces are being used, or that each is handling half of the traffic. iperf only reports results per thread, not per interface. Looking at the interface counters on the switch would be more interesting.

You mentioned playing with different hash modes. Since you are connected to a switch, you also need to make sure you change the hash mode on the switch. The configuration on your servers only applies to the packets they transmit; the hash mode also has to be set on the switch (if that is even an option with your hardware).
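(On the Dell side that would mean setting the hashing-mode on the port-channel itself, roughly along these lines; the port-channel number and the prompts are assumptions here, and mode 6 is just the source/destination IP and TCP/UDP port option from the list above.)

console(config)# interface port-channel 1
console(config-if-Po1)# hashing-mode 6
console(config-if-Po1)# exit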

Bonding is not terribly useful between just two systems. It does not give you the full bandwidth of both interfaces; it simply lets some connections use one interface and other connections use the other. There are some modes that can help a little between two systems, but it is at best a 25-50% improvement. You will almost never get the full capacity of both interfaces.
