06:00.1 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
06:00.2 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
06:00.3 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
06:00.4 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
I then detached 06:00.1 from Dom0 and assigned it to xen-pciback.
I passed it through to a Xen test domain.
lspci inside the test DomU shows:
00:01.1 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
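The detach and passthrough steps above can be sketched as follows. This is only a sketch: the guest name is made up, and the VF must first be handed to xen-pciback, e.g. with `xl pci-assignable-add 06:00.1` (older toolstacks use `xm pci-detach`).

```
# Sketch of the relevant lines in the DomU's xl config file;
# the guest name "ib-test-domu" is hypothetical.
name = "ib-test-domu"
pci  = [ '06:00.1' ]
```

With the VF listed in the config, it shows up inside the guest at a new BDF (00:01.1 here) rather than its host address.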
I loaded the following modules in the DomU:
mlx4_ib rdma_ucm ib_umad ib_uverbs ib_ipoib
The kernel log then shows:
[ 11.956787] mlx4_core: Mellanox ConnectX core driver v1.1 (Dec,2011)
[ 11.956789] mlx4_core: Initializing 0000:00:01.1
[ 11.956859] mlx4_core 0000:00:01.1: enabling device (0000 -> 0002)
[ 11.957242] mlx4_core 0000:00:01.1: Xen PCI mapped GSI0 to IRQ30
[ 11.957581] mlx4_core 0000:00:01.1: Detected virtual function - running in slave mode
[ 11.957606] mlx4_core 0000:00:01.1: Sending reset
[ 11.957699] mlx4_core 0000:00:01.1: Sending vhcr0
[ 11.976090] mlx4_core 0000:00:01.1: HCA minimum page size:512
[ 11.976672] mlx4_core 0000:00:01.1: Timestamping is not supported in slave mode.
[ 12.068079] <mlx4_ib> mlx4_ib_add: mlx4_ib: Mellanox ConnectX InfiniBand driver v1.0 (April 4,2008)
[ 12.184072] mlx4_core 0000:00:01.1: mlx4_ib: multi-function enabled
[ 12.184075] mlx4_core 0000:00:01.1: mlx4_ib: operating in qp1 tunnel mode
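To load the same stack automatically at DomU boot, the module list can go in a modules file. This is a sketch and the exact mechanism is distribution-dependent (e.g. /etc/modules on Debian-style systems, or a file under /etc/modules-load.d/):

```
# /etc/modules (sketch) -- load the RDMA stack at boot;
# mlx4_core is pulled in automatically as a dependency of mlx4_ib
mlx4_ib
rdma_ucm
ib_umad
ib_uverbs
ib_ipoib
```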
An ib0 device even appeared:
ib0       Link encap:UNSPEC  HWaddr 80-00-05-49-FE-80-00-00-00-00-00-00-00-00-00-00
          inet addr:10.10.10.10  Bcast:10.10.10.255  Mask:255.255.255.0
          UP BROADCAST MULTICAST  MTU:2044  Metric:1
          RX packets:117303 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:256
          RX bytes:6576132 (6.5 MB)  TX bytes:0 (0.0 B)
I can even ping 10.10.10.10 locally.
However, those pings never go out onto the InfiniBand fabric.
CA 'mlx4_0'
        CA type: MT4100
        Number of ports: 1
        Firmware version: 2.30.3000
        Hardware version: 0
        Node GUID: 0x001405005ef41f25
        System image GUID: 0x002590ffff175727
        Port 1:
                State: Down
                Physical state: LinkUp
                Rate: 10
                Base lid: 9
                LMC: 0
                SM lid: 1
                Capability mask: 0x02514868
                Port GUID: 0x0000000000000000
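Note the combination in that output: the physical state is LinkUp while the logical state is still Down, which is the signature of a port whose link has trained but which is waiting on a subnet manager to move it to Active. A quick way to pull those two fields apart, sketched here against the text above (with real hardware you would pipe `ibstat mlx4_0` instead of the here-string):

```shell
# Extract logical vs. physical port state from ibstat-style output.
# The sample text reproduces the lines shown above.
ibstat_out='Port 1:
        State: Down
        Physical state: LinkUp'
logical=$(printf '%s\n' "$ibstat_out"  | awk -F': ' '/ State:/ {print $2}')
physical=$(printf '%s\n' "$ibstat_out" | awk -F': ' '/Physical state:/ {print $2}')
echo "logical=$logical physical=$physical"
```

This prints `logical=Down physical=LinkUp` for the output shown above.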
The answer was actually found at this link:
http://www.spinics.net/lists/linux-rdma/msg13307.html
What do I need for the slave VF's port to become active?
I'm running opensm 3.3.13 on a different box, is that new enough?
(does SR-IOV require any SM support?)
Yes, as Hal said, you need at least opensm 3.3.14
(http://marc.info/?l=linux-rdma&m=133819320432335&w=2), since that was the
first release to support the alias GUIDs and related pieces that SR-IOV
requires; 3.3.15 is out now as well, so you want the second release that
supports this... Basically you need the PPF link up and the slaves to get
their alias GUIDs registered with the SM. We (the IL team) are off
Tuesday/Wednesday for a holiday; I will try to get you more details
tonight, and if not, then by tomorrow for sure.

I have since upgraded OpenSM and will report back as soon as possible.
Edit: OK, it is working now. However, I am getting a flood of log output from opensm.
The OpenSM process writes hundreds of entries per second:

Sep 30 20:36:26 707784 [7DC1700] 0x01 -> validate_requested_mgid: ERR 1B01: Wrong MGID Prefix 0x8000 must be 0xFF
Sep 30 20:36:26 707810 [7DC1700] 0x01 -> mcmr_rcv_create_new_mgrp: ERR 1B22: Invalid requested MGID
Sep 30 20:36:26 708096 [8DC3700] 0x01 -> validate_requested_mgid: ERR 1B01: Wrong MGID Prefix 0x8000 must be 0xFF
Sep 30 20:36:26 708119 [8DC3700] 0x01 -> mcmr_rcv_create_new_mgrp: ERR 1B22: Invalid requested MGID
Sep 30 20:36:26 708391 [FF5B0700] 0x01 -> validate_requested_mgid: ERR 1B01: Wrong MGID Prefix 0x8000 must be 0xFF
Sep 30 20:36:26 708421 [FF5B0700] 0x01 -> mcmr_rcv_create_new_mgrp: ERR 1B22: Invalid requested MGID
Sep 30 20:36:26 708696 [3DB9700] 0x01 -> validate_requested_mgid: ERR 1B01: Wrong MGID Prefix 0x8000 must be 0xFF
Sep 30 20:36:26 708719 [3DB9700] 0x01 -> mcmr_rcv_create_new_mgrp: ERR 1B22: Invalid requested MGID

The error messages above went away when I rebooted and gave Dom0 more memory; I currently have 2 GB allocated to it with autoballooning turned off. Unfortunately, they came back for no obvious reason, so I have asked a new question related to that here.
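For what it is worth, the repeated ERR 1B01 is opensm rejecting multicast group creation requests whose MGID does not carry the mandatory 0xFF multicast prefix. A minimal sketch of that check, with a hypothetical MGID made up to match the 0x8000 prefix seen in the log:

```shell
# Sketch of the prefix check behind "ERR 1B01": an InfiniBand multicast
# GID must begin with the 0xFF prefix byte. The MGID below is made up
# to match the 0x8000 prefix from the log.
mgid="8000:0000:0000:0000:0000:0000:0000:0001"
prefix=${mgid%%:*}                 # first 16-bit group of the GID
case $prefix in
    [Ff][Ff]*) msg="valid multicast prefix 0x${prefix}" ;;
    *)         msg="Wrong MGID Prefix 0x${prefix} must be 0xFF" ;;
esac
echo "$msg"
```

For the sample MGID this prints the same complaint as the log: `Wrong MGID Prefix 0x8000 must be 0xFF`.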
I am not sure why it works from dom0, but in my case I had to run OpenSM on the Dom0 that owns the VFs. I assume this is because the OpenSM instance running on Dom0 knows about the VFs and can advertise them, whereas a subnet manager running on another node cannot? That is my guess. I would hope that the VFs of other Xen nodes would also get picked up; that may end up becoming another question. For now this is working with a single Xen node.