* Re: [SPDK] nvme drive not showing in vm in spdk
@ 2017-10-05 10:49 Nitin Gupta
From: Nitin Gupta @ 2017-10-05 10:49 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 8677 bytes --]

Hi Jim

Please find attached the VM guest boot-up log and the host dmesg log.

Regards
Nitin

On Wed, Oct 4, 2017 at 11:36 PM, Harris, James R <james.r.harris(a)intel.com>
wrote:

> Hi Nitin,
>
>
>
> It would be most helpful if you could get lspci working on your guest VM.
>
>
>
> Could you post dmesg contents from your VM and the SPDK vhost log after
> the VM has booted?
>
>
>
> -Jim
>
>
>
>
>
> From: Nitin Gupta <nitin.gupta981(a)gmail.com>
> Date: Wednesday, October 4, 2017 at 10:42 AM
> To: James Harris <james.r.harris(a)intel.com>
> Cc: Storage Performance Development Kit <spdk(a)lists.01.org>
>
> Subject: Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> I am running this on a remote box running Linux 3.10.
>
> On the guest VM the lspci command is not working, and I am not able to
> install lspci either.
>
> Below is the lsblk -a output; the -S option is also not available in the
> guest VM.
>
>
>
> NAME                        MAJ:MIN RM   SIZE RO MOUNTPOINT
>
> ram0                          1:0    0    16M  0
>
> ram1                          1:1    0    16M  0
>
> ram2                          1:2    0    16M  0
>
> ram3                          1:3    0    16M  0
>
> ram4                          1:4    0    16M  0
>
> ram5                          1:5    0    16M  0
>
> ram6                          1:6    0    16M  0
>
> ram7                          1:7    0    16M  0
>
> ram8                          1:8    0    16M  0
>
> ram9                          1:9    0    16M  0
>
> ram10                         1:10   0    16M  0
>
> ram11                         1:11   0    16M  0
>
> ram12                         1:12   0    16M  0
>
> ram13                         1:13   0    16M  0
>
> ram14                         1:14   0    16M  0
>
> ram15                         1:15   0    16M  0
>
> loop0                         7:0    0         0
>
> loop1                         7:1    0         0
>
> loop2                         7:2    0         0
>
> loop3                         7:3    0         0
>
> loop4                         7:4    0         0
>
> loop5                         7:5    0         0
>
> loop6                         7:6    0         0
>
> loop7                         7:7    0         0
>
> sda                           8:0    0     8G  0
>
> ├─sda1                        8:1    0   500M  0 /boot
>
> └─sda2                        8:2    0   7.5G  0
>
>   ├─VolGroup-lv_root (dm-0) 253:0    0   5.6G  0 /
>
>   └─VolGroup-lv_swap (dm-1) 253:1    0     2G  0 [SWAP]
>
>
>
> Regards
>
> Nitin
>
>
>
> On Wed, Oct 4, 2017 at 10:13 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Nitin,
>
>
>
> Are you running these commands from the host or the VM?  You will only see
> the virtio-scsi controller in lspci output from the guest VM.
>
>
>
> -Jim
>
>
>
>
>
> From: Nitin Gupta <nitin.gupta981(a)gmail.com>
> Date: Tuesday, October 3, 2017 at 12:23 AM
> To: Storage Performance Development Kit <spdk(a)lists.01.org>, James
> Harris <james.r.harris(a)intel.com>
>
>
> Subject: Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> One quick update: after running ./scripts/setup.sh for SPDK, the NVMe drive
> is rebound to the uio generic PCI driver (uio_pci_generic).
>
> So the only difference I found before and after the rebinding is in the
> output of ls -l /dev/u*
>
>
>
> Can I use /dev/uio0 as the NVMe device?
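>
> For reference, a minimal host-side check of that rebinding (using one of the
> NVMe PCI addresses from the lspci output quoted further down, e.g. d8:00.0,
> as a placeholder) would be something like:
>
>   lspci -k -s d8:00.0   # "Kernel driver in use:" should now list uio_pci_generic
>   ls -l /dev/uio*       # one uioN character device per rebound controller
>
> Note that /dev/uio0 is a character device SPDK uses to drive the controller
> from user space; it is not a block device that can be read or written like
> /dev/nvme0n1.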
>
> Regards
>
> Nitin
>
>
>
> On Tue, Oct 3, 2017 at 11:30 AM, Nitin Gupta <nitin.gupta981(a)gmail.com>
> wrote:
>
> Hi Jim
>
>
>
> Looks like sdf to sdi are the NVMe devices; please correct me if I am wrong.
>
>
>
> -bash-4.2# lsblk -S
>
> NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
>
> sda  0:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sdb  1:0:0:0    disk ATA      ST31000524NS     SN11 sata
>
> sdc  2:0:0:0    disk ATA      ST31000524NS     SN12 sata
>
> sdd  3:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sde  5:0:0:0    disk ATA      SAMSUNG MZ7WD120 103Q sata
>
> sdf  6:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sdg  7:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sdh  8:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sdi  9:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
>
>
> Regards
>
> Nitin
>
>
>
> On Tue, Oct 3, 2017 at 11:21 AM, Nitin Gupta <nitin.gupta981(a)gmail.com>
> wrote:
>
> Hi Jim
>
>
>
> I am getting the output below from lspci for the NVMe controllers:
>
>
>
> d8:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
>
> d9:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
>
> da:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
>
> db:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
>
>
>
> lsblk
>
>
>
> NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
>
> sda      8:0    0 223.6G  0 disk
>
> ├─sda1   8:1    0     6G  0 part [SWAP]
>
> ├─sda2   8:2    0   512M  0 part /bootmgr
>
> └─sda3   8:3    0 217.1G  0 part /
>
> sdb      8:16   0 931.5G  0 disk
>
> sdc      8:32   0 931.5G  0 disk
>
> sdd      8:48   0 223.6G  0 disk
>
> sde      8:64   0 111.8G  0 disk
>
> sdf      8:80   0 223.6G  0 disk
>
> sdg      8:96   0 223.6G  0 disk
>
> sdh      8:112  0 223.6G  0 disk
>
> sdi      8:128  0 223.6G  0 disk
>
>
>
>
>
> So how do I know which one is the virtio-scsi controller? Basically, I want
> to run an fio test against the NVMe-backed device.
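>
> For the fio run itself, a minimal job file inside the guest could look
> roughly like the sketch below; /dev/sdb is only a placeholder for whichever
> disk shows up in the guest as the extra virtio-scsi LUN:
>
>   [global]
>   ioengine=libaio
>   direct=1
>   rw=randread
>   bs=4k
>   iodepth=32
>   runtime=60
>   time_based=1
>
>   [vhost-lun]
>   filename=/dev/sdb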
>
>
>
>
>
> On Mon, Oct 2, 2017 at 11:12 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Nitin,
>
>
>
> lspci should show you the virtio-scsi controller PCI device.
>
> lsblk -S should show you the SCSI block devices attached to that
> virtio-scsi controller.
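>
> From inside the guest, for example (device names will differ), that would be
> something like:
>
>   lspci | grep -i scsi   # the virtio-scsi controller normally appears as a
>                          # "SCSI storage controller: ... Virtio SCSI" entry
>   lsblk -S               # the SPDK vhost LUN shows up here as an extra SCSI disk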
>
>
>
> -Jim
>
>
>
>
>
> From: SPDK <spdk-bounces(a)lists.01.org> on behalf of Nitin Gupta <
> nitin.gupta981(a)gmail.com>
> Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Date: Monday, October 2, 2017 at 10:38 AM
> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Subject: Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> Thanks for your reply, and sorry for my late response.
>
> Could you please give one example of how to identify the virtio-scsi
> controller in Linux?
>
> I mean, in which directory or filesystem will it show up?
>
>
>
> Regards
>
> Nitin
>
>
>
> On Thu, Sep 28, 2017 at 8:30 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Nitin,
>
>
>
> You should see a virtio-scsi controller in the VM, not an NVMe device.
> This controller should have one LUN attached, which SPDK vhost maps to the
> NVMe device attached to the host.
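>
> One way to see that mapping from inside the guest (paths and numbers below
> are only illustrative) is to resolve the sysfs link for the extra disk:
>
>   readlink -f /sys/block/sdb/device
>   # the resolved path runs through the virtio-scsi PCI function, e.g.
>   # .../0000:00:02.0/virtio0/host2/target2:0:0/2:0:0:0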
>
>
>
> -Jim
>
>
>
>
>
> From: SPDK <spdk-bounces(a)lists.01.org> on behalf of Nitin Gupta <
> nitin.gupta981(a)gmail.com>
> Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Date: Thursday, September 28, 2017 at 4:07 AM
> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Subject: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi All
>
>
>
> I am new to SPDK development and am currently doing the SPDK setup; I was
> able to set up the back-end storage with NVMe. After running the VM with the
> following command, there is no NVMe drive present.
>
>
>
> /usr/local/bin/qemu-system-x86_64 -m 1024 -object
> memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
> -nographic -no-user-config -nodefaults -serial mon:telnet:localhost:7704,server,nowait
> -monitor mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem
> -drive file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device
> ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0
> -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm
>
>
>
>
>
> How do I identify which one is the NVMe drive?
>
> Is there any way to enable NVMe from the QEMU command line?
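>
> If the goal is simply to have the guest see a native NVMe device rather than
> a virtio-scsi LUN, QEMU's own NVMe emulation (separate from the SPDK vhost
> path above) can be used; a sketch, with the image path and serial string as
> placeholders:
>
>   -drive file=/path/to/nvme.img,if=none,id=nvme0 \
>   -device nvme,drive=nvme0,serial=spdk0001
>
> With that, the guest kernel's nvme driver exposes the disk as /dev/nvme0n1.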
>
>
>
> PS: I have already specified the NVMe drive in vhost.conf.in.
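>
> For reference, with the INI-style configuration that SPDK vhost used at that
> time, the relevant vhost.conf.in entries would look roughly like this (the
> PCI address and names are placeholders):
>
>   [Nvme]
>     TransportID "trtype:PCIe traddr:0000:d8:00.0" Nvme0
>
>   [VhostScsi0]
>     Name vhost.0
>     Dev 0 Nvme0n1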
>
>
>
> Regards
>
> Nitin
>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 27778 bytes --]

[-- Attachment #3: vm-guest-bootuplog.txt --]
[-- Type: text/plain, Size: 16583 bytes --]

The highlighted entry will be booted automatically in 1 seconds.
Initializing cgroup subsys cpuset
Initializing cgroup subsys cpu
Linux version 2.6.32-131.0.15.el6.x86_64 (mockbuild@sl6.fnal.gov) (gcc version 4.4.5 20110214 (Red Hat 4.4.5-6) (GCC) ) #1 SMP Sat May 21 10:27:57 CDT 2011
Command line: ro root=/dev/mapper/VolGroup-lv_root rd_LVM_LV=VolGroup/lv_root rd_LVM_LV=VolGroup/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us console=ttyS0,115200n8 crashkernel=auto
KERNEL supported cpus:
  Intel GenuineIntel
  AMD AuthenticAMD
  Centaur CentaurHauls
BIOS-provided physical RAM map:
 BIOS-e820: 0000000000000000 - 000000000009fc00 (usable)
 BIOS-e820: 000000000009fc00 - 00000000000a0000 (reserved)
 BIOS-e820: 00000000000f0000 - 0000000000100000 (reserved)
 BIOS-e820: 0000000000100000 - 000000003ffde000 (usable)
 BIOS-e820: 000000003ffde000 - 0000000040000000 (reserved)
 BIOS-e820: 00000000feffc000 - 00000000ff000000 (reserved)
 BIOS-e820: 00000000fffc0000 - 0000000100000000 (reserved)
DMI 2.8 present.
SMBIOS version 2.8 @ 0xF68F0
last_pfn = 0x3ffde max_arch_pfn = 0x400000000
PAT not supported by CPU.
init_memory_mapping: 0000000000000000-000000003ffde000
RAMDISK: 372f9000 - 37fef8f1
ACPI: RSDP 00000000000f68c0 00014 (v00 BOCHS )
ACPI: RSDT 000000003ffe15e5 00034 (v01 BOCHS  BXPCRSDT 00000001 BXPC 00000001)
ACPI: FACP 000000003ffe1409 00074 (v01 BOCHS  BXPCFACP 00000001 BXPC 00000001)
ACPI: DSDT 000000003ffe0040 013C9 (v01 BOCHS  BXPCDSDT 00000001 BXPC 00000001)
ACPI: FACS 000000003ffe0000 00040
ACPI: APIC 000000003ffe147d 00078 (v01 BOCHS  BXPCAPIC 00000001 BXPC 00000001)
ACPI: HPET 000000003ffe14f5 00038 (v01 BOCHS  BXPCHPET 00000001 BXPC 00000001)
ACPI: SRAT 000000003ffe152d 000B8 (v01 BOCHS  BXPCSRAT 00000001 BXPC 00000001)
SRAT: PXM 0 -> APIC 0 -> Node 0
SRAT: Node 0 PXM 0 0-a0000
SRAT: Node 0 PXM 0 100000-40000000
Bootmem setup node 0 0000000000000000-000000003ffde000
  NODE_DATA [0000000000009840 - 000000000003d83f]
  bootmap [000000000003e000 -  0000000000045fff] pages 8
(8 early reservations) ==> bootmem [0000000000 - 003ffde000]
  #0 [0000000000 - 0000001000]   BIOS data page ==> [0000000000 - 0000001000]
  #1 [0000006000 - 0000008000]       TRAMPOLINE ==> [0000006000 - 0000008000]
  #2 [0001000000 - 0001f474e4]    TEXT DATA BSS ==> [0001000000 - 0001f474e4]
  #3 [00372f9000 - 0037fef8f1]          RAMDISK ==> [00372f9000 - 0037fef8f1]
  #4 [000009fc00 - 0000100000]    BIOS reserved ==> [000009fc00 - 0000100000]
  #5 [0001f48000 - 0001f480ad]              BRK ==> [0001f48000 - 0001f480ad]
  #6 [0000008000 - 0000009000]          PGTABLE ==> [0000008000 - 0000009000]
  #7 [0000009000 - 0000009840]       MEMNODEMAP ==> [0000009000 - 0000009840]
found SMP MP-table at [ffff8800000f6ab0] f6ab0
kvm-clock: Using msrs 4b564d01 and 4b564d00
kvm-clock: cpu 0, msr 0:1bbbf01, boot clock
Zone PFN ranges:
  DMA      0x00000001 -> 0x00001000
  DMA32    0x00001000 -> 0x00100000
  Normal   0x00100000 -> 0x00100000
Movable zone start PFN for each node
early_node_map[2] active PFN ranges
    0: 0x00000001 -> 0x0000009f
    0: 0x00000100 -> 0x0003ffde
ACPI: PM-Timer IO Port: 0x608
ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0])
IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Using ACPI (MADT) for SMP configuration information
ACPI: HPET id: 0x8086a201 base: 0xfed00000
SMP: Allowing 1 CPUs, 0 hotplug CPUs
PM: Registered nosave memory: 000000000009f000 - 00000000000a0000
PM: Registered nosave memory: 00000000000a0000 - 00000000000f0000
PM: Registered nosave memory: 00000000000f0000 - 0000000000100000
Allocating PCI resources starting at 40000000 (gap: 40000000:beffc000)
Booting paravirtualized kernel on KVM
NR_CPUS:4096 nr_cpumask_bits:1 nr_cpu_ids:1 nr_node_ids:1
PERCPU: Embedded 30 pages/cpu @ffff880002000000 s92504 r8192 d22184 u2097152
pcpu-alloc: s92504 r8192 d22184 u2097152 alloc=1*2097152
pcpu-alloc: [0] 0
kvm-clock: cpu 0, msr 0:2015f01, primary cpu clock
Built 1 zonelists in Node order, mobility grouping on.  Total pages: 258328
Policy zone: DMA32
Kernel command line: ro root=/dev/mapper/VolGroup-lv_root rd_LVM_LV=VolGroup/lv_root rd_LVM_LV=VolGroup/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us console=ttyS0,115200n8
PID hash table entries: 4096 (order: 3, 32768 bytes)
Checking aperture...
No AGP bridge found
AMD-Vi disabled by default: pass amd_iommu=on to enable
Memory: 1004160k/1048440k available (5011k kernel code, 392k absent, 43888k reserved, 6909k data, 1232k init)
Hierarchical RCU implementation.
NR_IRQS:33024 nr_irqs:256
Console: colour *CGA 80x25
console [ttyS0] enabled
allocated 10485760 bytes of page_cgroup
please try 'cgroup_disable=memory' option if you don't want memory cgroups
HPET: 3 timers in total, 0 timers will be used for per-cpu timer
Detected 2194.816 MHz processor.
Calibrating delay loop (skipped) preset value.. 4389.63 BogoMIPS (lpj=2194816)
pid_max: default: 32768 minimum: 301
Security Framework initialized
SELinux:  Initializing.
Dentry cache hash table entries: 131072 (order: 8, 1048576 bytes)
Inode-cache hash table entries: 65536 (order: 7, 524288 bytes)
Mount-cache hash table entries: 256
Initializing cgroup subsys ns
Initializing cgroup subsys cpuacct
Initializing cgroup subsys memory
Initializing cgroup subsys devices
Initializing cgroup subsys freezer
Initializing cgroup subsys net_cls
Initializing cgroup subsys blkio
CPU: Physical Processor ID: 0
mce: CPU supports 10 MCE banks
alternatives: switching to unfair spinlock
SMP alternatives: switching to UP code
Freeing SMP alternatives: 33k freed
ACPI: Core revision 20090903
ftrace: converting mcount calls to 0f 1f 44 00 00
ftrace: allocating 20700 entries in 82 pages
Enabling x2apic
Enabled x2apic
Setting APIC routing to physical x2apic
..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
CPU0: Intel QEMU Virtual CPU version 2.5+ stepping 03
Performance Events: Broken PMU hardware detected, using software events only.
NMI watchdog disabled (cpu0): hardware events not enabled
Brought up 1 CPUs
Total of 1 processors activated (4389.63 BogoMIPS).
devtmpfs: initialized
regulator: core version 0.5
NET: Registered protocol family 16
ACPI: bus type pci registered
PCI: Using configuration type 1 for base access
bio: create slab <bio-0> at 0
ACPI: Interpreter enabled
ACPI: (supports S0 S3 S4 S5)
ACPI: Using IOAPIC for interrupt routing
ACPI: No dock devices found.
ACPI: PCI Root Bridge [PCI0] (0000:00)
pci 0000:00:01.3: quirk: region 0600-063f claimed by PIIX4 ACPI
pci 0000:00:01.3: quirk: region 0700-070f claimed by PIIX4 SMB
ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
ACPI: PCI Interrupt Link [LNKS] (IRQs *9)
HEST: Table is not found!
vgaarb: loaded
SCSI subsystem initialized
usbcore: registered new interface driver usbfs
usbcore: registered new interface driver hub
usbcore: registered new device driver usb
PCI: Using ACPI for IRQ routing
NetLabel: Initializing
NetLabel:  domain hash size = 128
NetLabel:  protocols = UNLABELED CIPSOv4
NetLabel:  unlabeled traffic allowed by default
hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Switching to clocksource kvm-clock
pnp: PnP ACPI init
ACPI: bus type pnp registered
pnp: PnP ACPI: found 10 devices
ACPI: ACPI bus type pnp unregistered
NET: Registered protocol family 2
IP route cache hash table entries: 32768 (order: 6, 262144 bytes)
TCP established hash table entries: 131072 (order: 9, 2097152 bytes)
TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
TCP: Hash tables configured (established 131072 bind 65536)
TCP reno registered
NET: Registered protocol family 1
pci 0000:00:00.0: Limiting direct PCI/PCI transfers
pci 0000:00:01.0: PIIX3: Enabling Passive Release
pci 0000:00:01.0: Activating ISA DMA hang workarounds
Trying to unpack rootfs image as initramfs...
Freeing initrd memory: 13274k freed
audit: initializing netlink socket (disabled)
type=2000 audit(1507198337.299:1): initialized
HugeTLB registered 2 MB page size, pre-allocated 0 pages
VFS: Disk quotas dquot_6.5.2
Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
msgmni has been set to 1987
alg: No test for stdrng (krng)
ksign: Installing public key data
Loading keyring
- Added public key 289BE2EBD772141
- User ID: Red Hat, Inc. (Kernel Module GPG key)
- Added public key D4A26C9CCD09BEDA
- User ID: Red Hat Enterprise Linux Driver Update Program <secalert@redhat.com>
Block layer SCSI generic (bsg) driver version 0.4 loaded (major 252)
io scheduler noop registered
io scheduler anticipatory registered
io scheduler deadline registered
io scheduler cfq registered (default)
pci_hotplug: PCI Hot Plug PCI Core version: 0.5
pciehp: PCI Express Hot Plug Controller Driver version: 0.4
acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
acpiphp: Slot [2] registered
acpiphp: Slot [3] registered
acpiphp: Slot [4] registered
acpiphp: Slot [5] registered
acpiphp: Slot [6] registered
acpiphp: Slot [7] registered
acpiphp: Slot [8] registered
acpiphp: Slot [9] registered
acpiphp: Slot [10] registered
acpiphp: Slot [11] registered
acpiphp: Slot [12] registered
acpiphp: Slot [13] registered
acpiphp: Slot [14] registered
acpiphp: Slot [15] registered
acpiphp: Slot [16] registered
acpiphp: Slot [17] registered
acpiphp: Slot [18] registered
acpiphp: Slot [19] registered
acpiphp: Slot [20] registered
acpiphp: Slot [21] registered
acpiphp: Slot [22] registered
acpiphp: Slot [23] registered
acpiphp: Slot [24] registered
acpiphp: Slot [25] registered
acpiphp: Slot [26] registered
acpiphp: Slot [27] registered
acpiphp: Slot [28] registered
acpiphp: Slot [29] registered
acpiphp: Slot [30] registered
acpiphp: Slot [31] registered
pci-stub: invalid id string ""
input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
ACPI: Power Button [PWRF]
processor LNXCPU:00: registered as cooling_device0
ERST: Table is not found!
Non-volatile memory driver v1.3
Linux agpgart interface v0.103
crash memory driver: version 1.1
Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
erial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
00:05: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
brd: module loaded
loop: module loaded
input: Macintosh mouse button emulation as /devices/virtual/input/input1
Fixed MDIO Bus: probed
ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
uhci_hcd: USB Universal Host Controller Interface driver
PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
serio: i8042 KBD port at 0x60,0x64 irq 1
serio: i8042 AUX port at 0x60,0x64 irq 12
mice: PS/2 mouse device common for all mice
input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input2
rtc_cmos 00:01: RTC can wake from S4
rtc_cmos 00:01: rtc core: registered rtc_cmos as rtc0
rtc0: alarms up to one day, y3k, 114 bytes nvram, hpet irqs
cpuidle: using governor ladder
cpuidle: using governor menu
usbcore: registered new interface driver hiddev
usbcore: registered new interface driver usbhid
usbhid: v2.6:USB HID core driver
TCP cubic registered
Initializing XFRM netlink socket
NET: Registered protocol family 17
registered taskstats version 1
rtc_cmos 00:01: setting system clock to 2017-10-05 10:12:17 UTC (1507198337)
Initalizing network drop monitor service
Freeing unused kernel memory: 1232k freed
Write protecting the kernel read-only data: 10240k
Freeing unused kernel memory: 1112k freed
Freeing unused kernel memory: 1796k freed
dracut: dracut-004-53.el6
device-mapper: uevent: version 1.0.3
device-mapper: ioctl: 4.20.6-ioctl (2011-02-02) initialised: dm-devel@redhat.com
udev: starting version 147
dracut: Starting plymouth daemon
putfont: PIO_FONT trying ...
scsi0 : ata_piix
scsi1 : ata_piix
ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc040 irq 14
ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc048 irq 15
ata2.00: ATA-7: QEMU HARDDISK, 2.5+, max UDMA/100
ata2.00: 16777216 sectors, multi 16: LBA48
ata2.00: configured for MWDMA2
scsi 1:0:0:0: Direct-Access     ATA      QEMU HARDDISK    2.5+ PQ: 0 ANSI: 5
ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 10
virtio-pci 0000:00:02.0: PCI INT A -> Link[LNKB] -> GSI 10 (level, high) -> IRQ 10
sd 1:0:0:0: [sda] 16777216 512-byte logical blocks: (8.58 GB/8.00 GiB)
sd 1:0:0:0: [sda] Write Protect is off
sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
 sda: sda1 sda2
sd 1:0:0:0: [sda] Attached SCSI disk
.input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
dracut: Scanning devices sda2  for LVM logical volumes VolGroup/lv_root VolGroup/lv_swap
dracut: inactive '/dev/VolGroup/lv_root' [5.54 GiB] inherit
dracut: inactive '/dev/VolGroup/lv_swap' [1.97 GiB] inherit
EXT4-fs (dm-0): INFO: recovery required on readonly filesystem
EXT4-fs (dm-0): write access will be enabled during recovery
EXT4-fs (dm-0): recovery complete
EXT4-fs (dm-0): mounted filesystem with ordered data mode
dracut: Mounted root filesystem /dev/mapper/VolGroup-lv_root
.dracut: Loading SELinux policy
type=1404 audit(1507198338.078:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295
.type=1403 audit(1507198338.311:3): policy loaded auid=4294967295 ses=4294967295
dracut:
dracut: Switching root
                Welcome to .Scientific Linux
Starting udev: udev: starting version 147
.sd 1:0:0:0: Attached scsi generic sg0 type 0
piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
.[  OK  ]
Setting hostname localhost.localdomain:  [  OK  ]
Setting up Logical Volume Management: .  2 logical volume(s) in volume group "VolGroup" now active
[  OK  ]
Checking filesystems
Checking all file systems.
[/sbin/fsck.ext4 (1) -- /] fsck.ext4 -a /dev/mapper/VolGroup-lv_root
/dev/mapper/VolGroup-lv_root: clean, 16723/363600 files, 185767/1452032 blocks
[/sbin/fsck.ext4 (1) -- /boot] fsck.ext4 -a /dev/sda1
/dev/sda1: recovering journal
/dev/sda1: clean, 38/128016 files, 46727/512000 blocks
[  OK  ]
Remounting root filesystem in read-write mode:  [  OK  ]
Mounting local filesystems:  EXT4-fs (sda1): mounted filesystem with ordered data mode
[  OK  ]
.Enabling /etc/fstab swaps:  Adding 2064376k swap on /dev/mapper/VolGroup-lv_swap.  Priority:-1 extents:1 across:2064376k D
[  OK  ]
Entering non-interactive startup
Starting monitoring for VG VolGroup: .  2 logical volume(s) in volume group "VolGroup" monitored
[  OK  ]
ip6tables: Applying firewall rules: NET: Registered protocol family 10
lo: Disabled Privacy Extensions
ip6_tables: (C) 2000-2006 Netfilter Core Team
nf_conntrack version 0.5.0 (7981 buckets, 31924 max)
[  OK  ]
iptables: Applying firewall rules: ip_tables: (C) 2000-2006 Netfilter Core Team
[  OK  ]
Bringing up loopback interface:  .[  OK  ]
Bringing up interface eth0:  Device eth0 does not seem to be present, delaying initialization.
[FAILED]
Starting auditd: [  OK  ]
.Starting system logger: [  OK  ]
Mounting other filesystems:  [  OK  ]
Retrigger failed udev events[  OK  ]
Starting sshd: [  OK  ]
Starting postfix: .[  OK  ]
Starting crond: [  OK  ]
.....
Scientific Linux release 6.1 (Carbon)
Kernel 2.6.32-131.0.15.el6.x86_64 on an x86_64

localhost.localdomain login: ..
                               setfont: putfont: 512,8x16:  failed: -1
                                                                      putfont: PIO_FONT: Invalid argument

Scientific Linux release 6.1 (Carbon)
Kernel 2.6.32-131.0.15.el6.x86_64 on an x86_64

localhost.localdomain login:
Scientific Linux release 6.1 (Carbon)
Kernel 2.6.32-131.0.15.el6.x86_64 on an x86_64

localhost.localdomain login:

[-- Attachment #4: dmesg.old --]
[-- Type: application/octet-stream, Size: 155358 bytes --]

[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Initializing cgroup subsys cpuacct
[    0.000000] Linux version 3.10.0-514.21.1.el7.x86_64 (mockbuild@c7-bannow-worker-7.novalocal) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) ) #1 SMP Mon Jul 24 05:00:35 PDT 2017
[    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-3.10.0-514.21.1.el7.x86_64 root=UUID=d774878d-7f00-42de-aefc-e4850d55c17d ro console=tty0 nmi_watchdog=0 crashkernel=auto console=ttyS0,9600 intel_iommu=on iommu=pt
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
[    0.000000] BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x0000000069eb3fff] usable
[    0.000000] BIOS-e820: [mem 0x0000000069eb4000-0x000000006c7a0fff] reserved
[    0.000000] BIOS-e820: [mem 0x000000006c7a1000-0x000000006c910fff] usable
[    0.000000] BIOS-e820: [mem 0x000000006c911000-0x000000006d35afff] ACPI NVS
[    0.000000] BIOS-e820: [mem 0x000000006d35b000-0x000000006f205fff] reserved
[    0.000000] BIOS-e820: [mem 0x000000006f206000-0x000000006f7fffff] usable
[    0.000000] BIOS-e820: [mem 0x000000006f800000-0x000000008fffffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000fd000000-0x00000000fe7fffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000fed20000-0x00000000fed44fff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000100000000-0x0000000c3fffffff] usable
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] SMBIOS 3.0 present.
[    0.000000] DMI: empty empty/S7106, BIOS V7.016 07/04/2017
[    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.000000] e820: last_pfn = 0xc40000 max_arch_pfn = 0x400000000
[    0.000000] MTRR default type: uncachable
[    0.000000] MTRR fixed ranges enabled:
[    0.000000]   00000-9FFFF write-back
[    0.000000]   A0000-BFFFF uncachable
[    0.000000]   C0000-FFFFF write-protect
[    0.000000] MTRR variable ranges enabled:
[    0.000000]   0 base 000000000000 mask 3FF800000000 write-back
[    0.000000]   1 base 000800000000 mask 3FFC00000000 write-back
[    0.000000]   2 base 000C00000000 mask 3FFFC0000000 write-back
[    0.000000]   3 base 000080000000 mask 3FFF80000000 uncachable
[    0.000000]   4 base 00007F000000 mask 3FFFFF000000 uncachable
[    0.000000]   5 disabled
[    0.000000]   6 disabled
[    0.000000]   7 disabled
[    0.000000]   8 disabled
[    0.000000]   9 disabled
[    0.000000] x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106
[    0.000000] original variable MTRRs
[    0.000000] reg 0, base: 0GB, range: 32GB, type WB
[    0.000000] reg 1, base: 32GB, range: 16GB, type WB
[    0.000000] reg 2, base: 48GB, range: 1GB, type WB
[    0.000000] reg 3, base: 2GB, range: 2GB, type UC
[    0.000000] reg 4, base: 2032MB, range: 16MB, type UC
[    0.000000] total RAM covered: 48112M
[    0.000000] Found optimal setting for mtrr clean up
[    0.000000]  gran_size: 64K 	chunk_size: 32M 	num_reg: 7  	lose cover RAM: 0G
[    0.000000] New variable MTRRs
[    0.000000] reg 0, base: 0GB, range: 2GB, type WB
[    0.000000] reg 1, base: 2032MB, range: 16MB, type UC
[    0.000000] reg 2, base: 4GB, range: 4GB, type WB
[    0.000000] reg 3, base: 8GB, range: 8GB, type WB
[    0.000000] reg 4, base: 16GB, range: 16GB, type WB
[    0.000000] reg 5, base: 32GB, range: 16GB, type WB
[    0.000000] reg 6, base: 48GB, range: 1GB, type WB
[    0.000000] e820: update [mem 0x7f000000-0xffffffff] usable ==> reserved
[    0.000000] e820: last_pfn = 0x6f800 max_arch_pfn = 0x400000000
[    0.000000] Base memory trampoline at [ffff880000093000] 93000 size 24576
[    0.000000] Using GB pages for direct mapping
[    0.000000] BRK [0x01fa2000, 0x01fa2fff] PGTABLE
[    0.000000] BRK [0x01fa3000, 0x01fa3fff] PGTABLE
[    0.000000] BRK [0x01fa4000, 0x01fa4fff] PGTABLE
[    0.000000] BRK [0x01fa5000, 0x01fa5fff] PGTABLE
[    0.000000] BRK [0x01fa6000, 0x01fa6fff] PGTABLE
[    0.000000] BRK [0x01fa7000, 0x01fa7fff] PGTABLE
[    0.000000] RAMDISK: [mem 0x3645a000-0x37224fff]
[    0.000000] ACPI: RSDP 00000000000f05b0 00024 (v02 ALASKA)
[    0.000000] ACPI: XSDT 000000006c9110c8 0010C (v01 ALASKA   A M I  01072009 AMI  00010013)
[    0.000000] ACPI: FACP 000000006c9518a0 00114 (v06 ALASKA   A M I  01072009 INTL 20091013)
[    0.000000] ACPI BIOS Warning (bug): FADT (revision 6) is longer than ACPI 5.0 version, truncating length 276 to 268 (20130517/tbfadt-323)
[    0.000000] ACPI: DSDT 000000006c911268 40635 (v02 ALASKA   A M I  01072009 INTL 20091013)
[    0.000000] ACPI: FACS 000000006d359080 00040
[    0.000000] ACPI: FPDT 000000006c9519b8 00044 (v01 ALASKA   A M I  01072009 AMI  00010013)
[    0.000000] ACPI: FIDT 000000006c951a00 0009C (v01 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: SPMI 000000006c951aa0 00041 (v05 ALASKA   A M I  00000000 AMI. 00000000)
[    0.000000] ACPI: UEFI 000000006c951ae8 0005C (v01  INTEL RstUefiV 00000000      00000000)
[    0.000000] ACPI: UEFI 000000006c951b48 00042 (v01 ALASKA   A M I  01072009      01000013)
[    0.000000] ACPI: MCFG 000000006c951b90 0003C (v01 ALASKA    A M I 01072009 MSFT 00000097)
[    0.000000] ACPI: HPET 000000006c951bd0 00038 (v01 ALASKA   A M I  00000001 INTL 20091013)
[    0.000000] ACPI: APIC 000000006c951c08 00C5E (v03 ALASKA   A M I  00000000 INTL 20091013)
[    0.000000] ACPI: MIGT 000000006c952868 00040 (v01 ALASKA   A M I  00000000 INTL 20091013)
[    0.000000] ACPI: MSCT 000000006c9528a8 00064 (v01 ALASKA   A M I  00000001 INTL 20091013)
[    0.000000] ACPI: NFIT 000000006c952910 18028 (v01                 00000000      00000000)
[    0.000000] ACPI: PCAT 000000006c96a938 00048 (v01 ALASKA   A M I  00000002 INTL 20091013)
[    0.000000] ACPI: PCCT 000000006c96a980 0006E (v01 ALASKA   A M I  00000002 INTL 20091013)
[    0.000000] ACPI: RASF 000000006c96a9f0 00030 (v01 ALASKA   A M I  00000001 INTL 20091013)
[    0.000000] ACPI: SLIT 000000006c96aa20 00030 (v01 ALASKA   A M I  00000001 INTL 20091013)
[    0.000000] ACPI: SRAT 000000006c96aa50 01430 (v03 ALASKA   A M I  00000002 INTL 20091013)
[    0.000000] ACPI: SVOS 000000006c96be80 00032 (v01 ALASKA   A M I  00000000 INTL 20091013)
[    0.000000] ACPI: WDDT 000000006c96beb8 00040 (v01 ALASKA   A M I  00000000 INTL 20091013)
[    0.000000] ACPI: OEM4 000000006c96bef8 513F4 (v02  INTEL CPU  CST 00003000 INTL 20140828)
[    0.000000] ACPI: OEM1 000000006c9bd2f0 15174 (v02  INTEL CPU EIST 00003000 INTL 20140828)
[    0.000000] ACPI: OEM2 000000006c9d2468 0CA44 (v02  INTEL CPU  HWP 00003000 INTL 20140828)
[    0.000000] ACPI: SSDT 000000006c9deeb0 19D00 (v02  INTEL SSDT  PM 00004000 INTL 20140828)
[    0.000000] ACPI: SSDT 000000006c9f8bb0 0065B (v02 ALASKA   A M I  00000000 INTL 20091013)
[    0.000000] ACPI: SSDT 000000006c9f9210 01B34 (v02  INTEL SpsNm    00000002 INTL 20140828)
[    0.000000] ACPI: HEST 000000006c9fad48 000A8 (v01 ALASKA   A M I  00000001 INTL 00000001)
[    0.000000] ACPI: BERT 000000006c9fadf0 00030 (v01 ALASKA   A M I  00000001 INTL 00000001)
[    0.000000] ACPI: ERST 000000006c9fae20 00230 (v01 ALASKA   A M I  00000001 INTL 00000001)
[    0.000000] ACPI: EINJ 000000006c9fb050 00150 (v01 ALASKA   A M I  00000001 INTL 00000001)
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] SRAT: PXM 0 -> APIC 0x00 -> Node 0
[    0.000000] SRAT: PXM 0 -> APIC 0x02 -> Node 0
[    0.000000] SRAT: PXM 0 -> APIC 0x04 -> Node 0
[    0.000000] SRAT: PXM 0 -> APIC 0x06 -> Node 0
[    0.000000] SRAT: PXM 0 -> APIC 0x08 -> Node 0
[    0.000000] SRAT: PXM 0 -> APIC 0x10 -> Node 0
[    0.000000] SRAT: PXM 0 -> APIC 0x12 -> Node 0
[    0.000000] SRAT: PXM 0 -> APIC 0x14 -> Node 0
[    0.000000] SRAT: PXM 0 -> APIC 0x16 -> Node 0
[    0.000000] SRAT: PXM 0 -> APIC 0x18 -> Node 0
[    0.000000] SRAT: PXM 1 -> APIC 0x20 -> Node 1
[    0.000000] SRAT: PXM 1 -> APIC 0x22 -> Node 1
[    0.000000] SRAT: PXM 1 -> APIC 0x24 -> Node 1
[    0.000000] SRAT: PXM 1 -> APIC 0x26 -> Node 1
[    0.000000] SRAT: PXM 1 -> APIC 0x28 -> Node 1
[    0.000000] SRAT: PXM 1 -> APIC 0x30 -> Node 1
[    0.000000] SRAT: PXM 1 -> APIC 0x32 -> Node 1
[    0.000000] SRAT: PXM 1 -> APIC 0x34 -> Node 1
[    0.000000] SRAT: PXM 1 -> APIC 0x36 -> Node 1
[    0.000000] SRAT: PXM 1 -> APIC 0x38 -> Node 1
[    0.000000] SRAT: PXM 0 -> APIC 0x01 -> Node 0
[    0.000000] SRAT: PXM 0 -> APIC 0x03 -> Node 0
[    0.000000] SRAT: PXM 0 -> APIC 0x05 -> Node 0
[    0.000000] SRAT: PXM 0 -> APIC 0x07 -> Node 0
[    0.000000] SRAT: PXM 0 -> APIC 0x09 -> Node 0
[    0.000000] SRAT: PXM 0 -> APIC 0x11 -> Node 0
[    0.000000] SRAT: PXM 0 -> APIC 0x13 -> Node 0
[    0.000000] SRAT: PXM 0 -> APIC 0x15 -> Node 0
[    0.000000] SRAT: PXM 0 -> APIC 0x17 -> Node 0
[    0.000000] SRAT: PXM 0 -> APIC 0x19 -> Node 0
[    0.000000] SRAT: PXM 1 -> APIC 0x21 -> Node 1
[    0.000000] SRAT: PXM 1 -> APIC 0x23 -> Node 1
[    0.000000] SRAT: PXM 1 -> APIC 0x25 -> Node 1
[    0.000000] SRAT: PXM 1 -> APIC 0x27 -> Node 1
[    0.000000] SRAT: PXM 1 -> APIC 0x29 -> Node 1
[    0.000000] SRAT: PXM 1 -> APIC 0x31 -> Node 1
[    0.000000] SRAT: PXM 1 -> APIC 0x33 -> Node 1
[    0.000000] SRAT: PXM 1 -> APIC 0x35 -> Node 1
[    0.000000] SRAT: PXM 1 -> APIC 0x37 -> Node 1
[    0.000000] SRAT: PXM 1 -> APIC 0x39 -> Node 1
[    0.000000] SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
[    0.000000] SRAT: Node 0 PXM 0 [mem 0x100000000-0x63fffffff]
[    0.000000] SRAT: Node 1 PXM 1 [mem 0x640000000-0xc3fffffff]
[    0.000000] NUMA: Initialized distance table, cnt=2
[    0.000000] NUMA: Node 0 [mem 0x00000000-0x7fffffff] + [mem 0x100000000-0x63fffffff] -> [mem 0x00000000-0x63fffffff]
[    0.000000] Initmem setup node 0 [mem 0x00000000-0x63fffffff]
[    0.000000]   NODE_DATA [mem 0x63ffd9000-0x63fffffff]
[    0.000000] Initmem setup node 1 [mem 0x640000000-0xc3fffffff]
[    0.000000]   NODE_DATA [mem 0xc3ffd6000-0xc3fffcfff]
[    0.000000] Reserving 163MB of memory at 704MB for crashkernel (System RAM: 47781MB)
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[    0.000000]   Normal   [mem 0x100000000-0xc3fffffff]
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00001000-0x00098fff]
[    0.000000]   node   0: [mem 0x00100000-0x69eb3fff]
[    0.000000]   node   0: [mem 0x6c7a1000-0x6c910fff]
[    0.000000]   node   0: [mem 0x6f206000-0x6f7fffff]
[    0.000000]   node   0: [mem 0x100000000-0x63fffffff]
[    0.000000]   node   1: [mem 0x640000000-0xc3fffffff]
[    0.000000] On node 0 totalpages: 5940662
[    0.000000]   DMA zone: 64 pages used for memmap
[    0.000000]   DMA zone: 21 pages reserved
[    0.000000]   DMA zone: 3992 pages, LIFO batch:0
[    0.000000]   DMA32 zone: 6745 pages used for memmap
[    0.000000]   DMA32 zone: 431646 pages, LIFO batch:31
[    0.000000]   Normal zone: 86016 pages used for memmap
[    0.000000]   Normal zone: 5505024 pages, LIFO batch:31
[    0.000000] On node 1 totalpages: 6291456
[    0.000000]   Normal zone: 98304 pages used for memmap
[    0.000000]   Normal zone: 6291456 pages, LIFO batch:31
[    0.000000] ACPI: PM-Timer IO Port: 0x508
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x08] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x10] lapic_id[0x10] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x12] lapic_id[0x12] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x14] lapic_id[0x14] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x16] lapic_id[0x16] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x18] lapic_id[0x18] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x38] lapic_id[0x20] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x3a] lapic_id[0x22] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x3c] lapic_id[0x24] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x3e] lapic_id[0x26] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x40] lapic_id[0x28] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x48] lapic_id[0x30] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x4a] lapic_id[0x32] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x4c] lapic_id[0x34] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x4e] lapic_id[0x36] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x50] lapic_id[0x38] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x05] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x09] lapic_id[0x09] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x11] lapic_id[0x11] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x13] lapic_id[0x13] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x15] lapic_id[0x15] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x17] lapic_id[0x17] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x19] lapic_id[0x19] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x39] lapic_id[0x21] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x3b] lapic_id[0x23] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x3d] lapic_id[0x25] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x3f] lapic_id[0x27] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x41] lapic_id[0x29] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x49] lapic_id[0x31] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x4b] lapic_id[0x33] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x4d] lapic_id[0x35] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x4f] lapic_id[0x37] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x51] lapic_id[0x39] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0xff] lapic_id[0xff] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x00] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x01] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x02] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x03] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x04] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x05] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x06] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x07] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x08] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x09] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x0a] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x0b] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x0c] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x0d] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x0e] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x0f] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x10] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x11] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x12] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x13] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x14] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x15] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x16] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x17] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x18] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x19] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x1a] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x1b] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x1c] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x1d] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x1e] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x1f] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x20] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x21] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x22] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x23] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x24] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x25] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x26] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x27] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x28] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x29] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x2a] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x2b] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x2c] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x2d] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x2e] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x2f] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x30] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x31] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x32] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x33] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x34] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x35] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x36] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x37] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x38] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x39] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x3a] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x3b] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x3c] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x3d] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x3e] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x3f] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x40] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x41] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x42] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x43] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x44] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x45] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x46] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x47] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x48] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x49] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x4a] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x4b] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x4c] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x4d] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x4e] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x4f] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x50] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x51] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x52] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x53] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x54] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x55] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x56] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x57] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x58] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x59] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x5a] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x5b] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x5c] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x5d] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x5e] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x5f] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x60] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x61] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x62] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x63] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x64] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x65] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x66] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x67] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x68] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x69] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x6a] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x6b] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x6c] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x6d] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x6e] disabled)
[    0.000000] ACPI: X2APIC (apic_id[0xffffffff] uid[0x6f] disabled)
[    0.000000] ACPI: X2APIC_NMI (uid[0xffffffff] high level lint[0x1])
[    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] high level lint[0x1])
[    0.000000] ACPI: IOAPIC (id[0x08] address[0xfec00000] gsi_base[0])
[    0.000000] IOAPIC[0]: apic_id 8, version 32, address 0xfec00000, GSI 0-23
[    0.000000] ACPI: IOAPIC (id[0x09] address[0xfec01000] gsi_base[24])
[    0.000000] IOAPIC[1]: apic_id 9, version 32, address 0xfec01000, GSI 24-31
[    0.000000] ACPI: IOAPIC (id[0x0a] address[0xfec08000] gsi_base[32])
[    0.000000] IOAPIC[2]: apic_id 10, version 32, address 0xfec08000, GSI 32-39
[    0.000000] ACPI: IOAPIC (id[0x0b] address[0xfec10000] gsi_base[40])
[    0.000000] IOAPIC[3]: apic_id 11, version 32, address 0xfec10000, GSI 40-47
[    0.000000] ACPI: IOAPIC (id[0x0c] address[0xfec18000] gsi_base[48])
[    0.000000] IOAPIC[4]: apic_id 12, version 32, address 0xfec18000, GSI 48-55
[    0.000000] ACPI: IOAPIC (id[0x0f] address[0xfec20000] gsi_base[72])
[    0.000000] IOAPIC[5]: apic_id 15, version 32, address 0xfec20000, GSI 72-79
[    0.000000] ACPI: IOAPIC (id[0x10] address[0xfec28000] gsi_base[80])
[    0.000000] IOAPIC[6]: apic_id 16, version 32, address 0xfec28000, GSI 80-87
[    0.000000] ACPI: IOAPIC (id[0x11] address[0xfec30000] gsi_base[88])
[    0.000000] IOAPIC[7]: apic_id 17, version 32, address 0xfec30000, GSI 88-95
[    0.000000] ACPI: IOAPIC (id[0x12] address[0xfec38000] gsi_base[96])
[    0.000000] IOAPIC[8]: apic_id 18, version 32, address 0xfec38000, GSI 96-103
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.000000] ACPI: IRQ0 used by override.
[    0.000000] ACPI: IRQ9 used by override.
[    0.000000] Using ACPI (MADT) for SMP configuration information
[    0.000000] ACPI: HPET id: 0x8086a701 base: 0xfed00000
[    0.000000] smpboot: Allowing 224 CPUs, 184 hotplug CPUs
[    0.000000] PM: Registered nosave memory: [mem 0x00099000-0x00099fff]
[    0.000000] PM: Registered nosave memory: [mem 0x0009a000-0x0009ffff]
[    0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000dffff]
[    0.000000] PM: Registered nosave memory: [mem 0x000e0000-0x000fffff]
[    0.000000] PM: Registered nosave memory: [mem 0x69eb4000-0x6c7a0fff]
[    0.000000] PM: Registered nosave memory: [mem 0x6c911000-0x6d35afff]
[    0.000000] PM: Registered nosave memory: [mem 0x6d35b000-0x6f205fff]
[    0.000000] PM: Registered nosave memory: [mem 0x6f800000-0x8fffffff]
[    0.000000] PM: Registered nosave memory: [mem 0x90000000-0xfcffffff]
[    0.000000] PM: Registered nosave memory: [mem 0xfd000000-0xfe7fffff]
[    0.000000] PM: Registered nosave memory: [mem 0xfe800000-0xfed1ffff]
[    0.000000] PM: Registered nosave memory: [mem 0xfed20000-0xfed44fff]
[    0.000000] PM: Registered nosave memory: [mem 0xfed45000-0xfeffffff]
[    0.000000] PM: Registered nosave memory: [mem 0xff000000-0xffffffff]
[    0.000000] e820: [mem 0x90000000-0xfcffffff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on bare hardware
[    0.000000] setup_percpu: NR_CPUS:5120 nr_cpumask_bits:224 nr_cpu_ids:224 nr_node_ids:2
[    0.000000] PERCPU: Embedded 33 pages/cpu @ffff880627600000 s96728 r8192 d30248 u262144
[    0.000000] pcpu-alloc: s96728 r8192 d30248 u262144 alloc=1*2097152
[    0.000000] pcpu-alloc: [0] 000 001 002 003 004 005 006 007 
[    0.000000] pcpu-alloc: [0] 008 009 020 021 022 023 024 025 
[    0.000000] pcpu-alloc: [0] 026 027 028 029 040 042 044 046 
[    0.000000] pcpu-alloc: [0] 048 050 052 054 056 058 060 062 
[    0.000000] pcpu-alloc: [0] 064 066 068 070 072 074 076 078 
[    0.000000] pcpu-alloc: [0] 080 082 084 086 088 090 092 094 
[    0.000000] pcpu-alloc: [0] 096 098 100 102 104 106 108 110 
[    0.000000] pcpu-alloc: [0] 112 114 116 118 120 122 124 126 
[    0.000000] pcpu-alloc: [0] 128 130 132 134 136 138 140 142 
[    0.000000] pcpu-alloc: [0] 144 146 148 150 152 154 156 158 
[    0.000000] pcpu-alloc: [0] 160 162 164 166 168 170 172 174 
[    0.000000] pcpu-alloc: [0] 176 178 180 182 184 186 188 190 
[    0.000000] pcpu-alloc: [0] 192 194 196 198 200 202 204 206 
[    0.000000] pcpu-alloc: [0] 208 210 212 214 216 218 220 222 
[    0.000000] pcpu-alloc: [1] 010 011 012 013 014 015 016 017 
[    0.000000] pcpu-alloc: [1] 018 019 030 031 032 033 034 035 
[    0.000000] pcpu-alloc: [1] 036 037 038 039 041 043 045 047 
[    0.000000] pcpu-alloc: [1] 049 051 053 055 057 059 061 063 
[    0.000000] pcpu-alloc: [1] 065 067 069 071 073 075 077 079 
[    0.000000] pcpu-alloc: [1] 081 083 085 087 089 091 093 095 
[    0.000000] pcpu-alloc: [1] 097 099 101 103 105 107 109 111 
[    0.000000] pcpu-alloc: [1] 113 115 117 119 121 123 125 127 
[    0.000000] pcpu-alloc: [1] 129 131 133 135 137 139 141 143 
[    0.000000] pcpu-alloc: [1] 145 147 149 151 153 155 157 159 
[    0.000000] pcpu-alloc: [1] 161 163 165 167 169 171 173 175 
[    0.000000] pcpu-alloc: [1] 177 179 181 183 185 187 189 191 
[    0.000000] pcpu-alloc: [1] 193 195 197 199 201 203 205 207 
[    0.000000] pcpu-alloc: [1] 209 211 213 215 217 219 221 223 
[    0.000000] Built 2 zonelists in Zone order, mobility grouping on.  Total pages: 12040968
[    0.000000] Policy zone: Normal
[    0.000000] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.10.0-514.21.1.el7.x86_64 root=UUID=d774878d-7f00-42de-aefc-e4850d55c17d ro console=tty0 nmi_watchdog=0 crashkernel=auto console=ttyS0,9600 intel_iommu=on iommu=pt
[    0.000000] DMAR: IOMMU enabled
[    0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
[    0.000000] x86/fpu: xstate_offset[2]: 0240, xstate_sizes[2]: 0100
[    0.000000] x86/fpu: xstate_offset[3]: 03c0, xstate_sizes[3]: 0040
[    0.000000] x86/fpu: xstate_offset[4]: 0400, xstate_sizes[4]: 0040
[    0.000000] x86/fpu: xstate_offset[5]: 0440, xstate_sizes[5]: 0040
[    0.000000] x86/fpu: xstate_offset[6]: 0480, xstate_sizes[6]: 0200
[    0.000000] x86/fpu: xstate_offset[7]: 0680, xstate_sizes[7]: 0400
[    0.000000] xsave: enabled xstate_bv 0xff, cntxt size 0xa80 using standard form
[    0.000000] Memory: 5674136k/51380224k available (6768k kernel code, 2451752k absent, 1059424k reserved, 4446k data, 1680k init)
[    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=224, Nodes=2
[    0.000000] Hierarchical RCU implementation.
[    0.000000] 	RCU restricting CPUs from NR_CPUS=5120 to nr_cpu_ids=224.
[    0.000000] NR_IRQS:327936 nr_irqs:3576 0
[    0.000000] Console: colour VGA+ 80x25
[    0.000000] console [tty0] enabled
[    0.000000] console [ttyS0] enabled
[    0.000000] allocated 196083712 bytes of page_cgroup
[    0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups
[    0.000000] Enabling automatic NUMA balancing. Configure with numa_balancing= or the kernel.numa_balancing sysctl
[    0.000000] hpet clockevent registered
[    0.000000] tsc: Detected 2200.000 MHz processor
[    0.000096] Calibrating delay loop (skipped), value calculated using timer frequency.. 4400.00 BogoMIPS (lpj=2200000)
[    0.127301] pid_max: default: 229376 minimum: 1792
[    0.185342] Security Framework initialized
[    0.234488] SELinux:  Initializing.
[    0.276366] SELinux:  Starting in permissive mode
[    0.281045] Dentry cache hash table entries: 8388608 (order: 14, 67108864 bytes)
[    0.382374] Inode-cache hash table entries: 4194304 (order: 13, 33554432 bytes)
[    0.479925] Mount-cache hash table entries: 4096
[    0.536866] Initializing cgroup subsys memory
[    0.589160] Initializing cgroup subsys devices
[    0.642446] Initializing cgroup subsys freezer
[    0.695672] Initializing cgroup subsys net_cls
[    0.748899] Initializing cgroup subsys blkio
[    0.800049] Initializing cgroup subsys perf_event
[    0.856420] Initializing cgroup subsys hugetlb
[    0.909717] Initializing cgroup subsys pids
[    0.959831] Initializing cgroup subsys net_prio
[    1.014221] CPU: Physical Processor ID: 0
[    1.062235] CPU: Processor Core ID: 0
[    1.106124] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[    1.178030] ENERGY_PERF_BIAS: View and update with x86_energy_perf_policy(8)
[    1.263609] mce: CPU supports 20 MCE banks
[    1.312796] CPU0: Thermal monitoring enabled (TM1)
[    1.370333] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
[    1.433990] Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0
[    1.498631] tlb_flushall_shift: 6
[    1.538781] Freeing SMP alternatives: 28k freed
[    1.594071] ACPI: Core revision 20130517
[    1.668193] ACPI: All ACPI Tables successfully acquired
[    1.745798] ftrace: allocating 25815 entries in 101 pages
[    1.822053] smpboot: Max logical packages: 23
[    1.874781] IRQ remapping doesn't support X2APIC mode, disable x2apic.
[    1.953084] Switched APIC routing to physical flat.
[    2.012482] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[    2.094417] smpboot: CPU0: Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz (fam: 06, model: 55, stepping: 04)
[    2.208599] TSC deadline timer enabled
[    2.208634] Performance Events: PEBS fmt3+, 32-deep LBR, Skylake events, full-width counters, Intel PMU driver.
[    2.330199] ... version:                4
[    2.378240] ... bit width:              48
[    2.427319] ... generic registers:      4
[    2.475355] ... value mask:             0000ffffffffffff
[    2.538962] ... max period:             0000ffffffffffff
[    2.602562] ... fixed-purpose events:   3
[    2.650598] ... event mask:             000000070000000f
[    2.719226] smpboot: Booting Node   0, Processors  #1 #2 #3 #4 #5 #6 #7 #8 #9 OK
[    2.854838] smpboot: Booting Node   1, Processors  #10 #11 #12 #13 #14 #15 #16 #17 #18 #19 OK
[    3.088533] smpboot: Booting Node   0, Processors  #20 #21 #22 #23 #24 #25 #26 #27 #28 #29 OK
[    3.242714] smpboot: Booting Node   1, Processors  #30 #31 #32 #33 #34 #35 #36 #37 #38 #39
[    3.394190] Brought up 40 CPUs
[    3.433217] smpboot: Total of 40 processors activated (176103.44 BogoMIPS)
[    3.729330] node 0 initialised, 4883739 pages in 93ms
[    3.744130] node 1 initialised, 5664985 pages in 93ms
[    3.851373] devtmpfs: initialized
[    3.895877] EVM: security.selinux
[    3.935594] EVM: security.ima
[    3.971179] EVM: security.capability
[    4.014196] PM: Registering ACPI NVS region [mem 0x6c911000-0x6d35afff] (10788864 bytes)
[    4.112634] atomic64 test passed for x86-64 platform with CX8 and with SSE
[    4.194892] pinctrl core: initialized pinctrl subsystem
[    4.257996] NET: Registered protocol family 16
[    4.311588] ACPI: bus type PCI registered
[    4.359639] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[    4.437054] PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0x80000000-0x8fffffff] (base 0x80000000)
[    4.548555] PCI: MMCONFIG at [mem 0x80000000-0x8fffffff] reserved in E820
[    4.629842] PCI: Using configuration type 1 for base access
[    4.704696] ACPI: Added _OSI(Module Device)
[    4.754841] ACPI: Added _OSI(Processor Device)
[    4.808068] ACPI: Added _OSI(3.0 _SCP Extensions)
[    4.864408] ACPI: Added _OSI(Processor Aggregator Device)
[    4.936108] ACPI: EC: Look up EC in DSDT
[    4.956482] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored
[    5.147356] ACPI: Dynamic OEM Table Load:
[    5.195676] ACPI: OEM1           (null) 15174 (v02  INTEL CPU EIST 00003000 INTL 20140828)
[    5.304121] ACPI: Dynamic OEM Table Load:
[    5.352345] ACPI: OEM4           (null) 513F4 (v02  INTEL CPU  CST 00003000 INTL 20140828)
[    5.469000] ACPI: Interpreter enabled
[    5.512885] ACPI: (supports S0 S4 S5)
[    5.556754] ACPI: Using IOAPIC for interrupt routing
[    5.616254] HEST: Table parsing has been initialized.
[    5.676806] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    5.822990] ACPI: PCI Root Bridge [PC00] (domain 0000 [bus 00-16])
[    5.896981] acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]
[    5.995091] acpi PNP0A08:00: _OSC: platform does not support [AER]
[    6.069192] acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
[    6.161449] PCI host bridge to bus 0000:00
[    6.210515] pci_bus 0000:00: root bus resource [bus 00-16]
[    6.276196] pci_bus 0000:00: root bus resource [io  0x0000-0x03af window]
[    6.357440] pci_bus 0000:00: root bus resource [io  0x03e0-0x0cf7 window]
[    6.438678] pci_bus 0000:00: root bus resource [io  0x03b0-0x03bb window]
[    6.519917] pci_bus 0000:00: root bus resource [io  0x03c0-0x03df window]
[    6.601157] pci_bus 0000:00: root bus resource [io  0x1000-0x3fff window]
[    6.682397] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
[    6.772103] pci_bus 0000:00: root bus resource [mem 0x000c4000-0x000c7fff window]
[    6.861853] pci_bus 0000:00: root bus resource [mem 0xfe010000-0xfe010fff window]
[    6.951597] pci_bus 0000:00: root bus resource [mem 0x90000000-0x9d7fffff window]
[    7.041346] pci_bus 0000:00: root bus resource [mem 0x30000000000-0x30fffffffff window]
[    7.137327] pci 0000:00:00.0: [8086:2020] type 00 class 0x060000
[    7.137461] pci 0000:00:04.0: [8086:2021] type 00 class 0x088000
[    7.137472] pci 0000:00:04.0: reg 0x10: [mem 0x30ffff2c000-0x30ffff2ffff 64bit]
[    7.137586] pci 0000:00:04.1: [8086:2021] type 00 class 0x088000
[    7.137596] pci 0000:00:04.1: reg 0x10: [mem 0x30ffff28000-0x30ffff2bfff 64bit]
[    7.137708] pci 0000:00:04.2: [8086:2021] type 00 class 0x088000
[    7.137718] pci 0000:00:04.2: reg 0x10: [mem 0x30ffff24000-0x30ffff27fff 64bit]
[    7.137828] pci 0000:00:04.3: [8086:2021] type 00 class 0x088000
[    7.137838] pci 0000:00:04.3: reg 0x10: [mem 0x30ffff20000-0x30ffff23fff 64bit]
[    7.137949] pci 0000:00:04.4: [8086:2021] type 00 class 0x088000
[    7.137958] pci 0000:00:04.4: reg 0x10: [mem 0x30ffff1c000-0x30ffff1ffff 64bit]
[    7.138069] pci 0000:00:04.5: [8086:2021] type 00 class 0x088000
[    7.138079] pci 0000:00:04.5: reg 0x10: [mem 0x30ffff18000-0x30ffff1bfff 64bit]
[    7.138189] pci 0000:00:04.6: [8086:2021] type 00 class 0x088000
[    7.138199] pci 0000:00:04.6: reg 0x10: [mem 0x30ffff14000-0x30ffff17fff 64bit]
[    7.138308] pci 0000:00:04.7: [8086:2021] type 00 class 0x088000
[    7.138317] pci 0000:00:04.7: reg 0x10: [mem 0x30ffff10000-0x30ffff13fff 64bit]
[    7.138429] pci 0000:00:05.0: [8086:2024] type 00 class 0x088000
[    7.138533] pci 0000:00:05.2: [8086:2025] type 00 class 0x088000
[    7.138637] pci 0000:00:05.4: [8086:2026] type 00 class 0x080020
[    7.138644] pci 0000:00:05.4: reg 0x10: [mem 0x9d60a000-0x9d60afff]
[    7.138753] pci 0000:00:08.0: [8086:2014] type 00 class 0x088000
[    7.138848] pci 0000:00:08.1: [8086:2015] type 00 class 0x110100
[    7.138937] pci 0000:00:08.2: [8086:2016] type 00 class 0x088000
[    7.139043] pci 0000:00:11.0: [8086:a1ec] type 00 class 0xff0000
[    7.139202] pci 0000:00:11.1: [8086:a1ed] type 00 class 0xff0000
[    7.139369] pci 0000:00:11.5: [8086:a1d2] type 00 class 0x010601
[    7.139382] pci 0000:00:11.5: reg 0x10: [mem 0x9d606000-0x9d607fff]
[    7.139390] pci 0000:00:11.5: reg 0x14: [mem 0x9d609000-0x9d6090ff]
[    7.139397] pci 0000:00:11.5: reg 0x18: [io  0x3070-0x3077]
[    7.139406] pci 0000:00:11.5: reg 0x1c: [io  0x3060-0x3063]
[    7.139413] pci 0000:00:11.5: reg 0x20: [io  0x3020-0x303f]
[    7.139421] pci 0000:00:11.5: reg 0x24: [mem 0x9d580000-0x9d5fffff]
[    7.139455] pci 0000:00:11.5: PME# supported from D3hot
[    7.139582] pci 0000:00:14.0: [8086:a1af] type 00 class 0x0c0330
[    7.139600] pci 0000:00:14.0: reg 0x10: [mem 0x30ffff00000-0x30ffff0ffff 64bit]
[    7.139660] pci 0000:00:14.0: PME# supported from D3hot D3cold
[    7.139759] pci 0000:00:14.0: System wakeup disabled by ACPI
[    7.207519] pci 0000:00:14.2: [8086:a1b1] type 00 class 0x118000
[    7.207536] pci 0000:00:14.2: reg 0x10: [mem 0x30ffff33000-0x30ffff33fff 64bit]
[    7.207683] pci 0000:00:16.0: [8086:a1ba] type 00 class 0x078000
[    7.207706] pci 0000:00:16.0: reg 0x10: [mem 0x30ffff32000-0x30ffff32fff 64bit]
[    7.207786] pci 0000:00:16.0: PME# supported from D3hot
[    7.207875] pci 0000:00:16.1: [8086:a1bb] type 00 class 0x078000
[    7.207898] pci 0000:00:16.1: reg 0x10: [mem 0x30ffff31000-0x30ffff31fff 64bit]
[    7.207977] pci 0000:00:16.1: PME# supported from D3hot
[    7.208069] pci 0000:00:16.4: [8086:a1be] type 00 class 0x078000
[    7.208092] pci 0000:00:16.4: reg 0x10: [mem 0x30ffff30000-0x30ffff30fff 64bit]
[    7.208171] pci 0000:00:16.4: PME# supported from D3hot
[    7.208261] pci 0000:00:17.0: [8086:a182] type 00 class 0x010601
[    7.208274] pci 0000:00:17.0: reg 0x10: [mem 0x9d604000-0x9d605fff]
[    7.208282] pci 0000:00:17.0: reg 0x14: [mem 0x9d608000-0x9d6080ff]
[    7.208289] pci 0000:00:17.0: reg 0x18: [io  0x3050-0x3057]
[    7.208297] pci 0000:00:17.0: reg 0x1c: [io  0x3040-0x3043]
[    7.208304] pci 0000:00:17.0: reg 0x20: [io  0x3000-0x301f]
[    7.208311] pci 0000:00:17.0: reg 0x24: [mem 0x9d500000-0x9d57ffff]
[    7.208345] pci 0000:00:17.0: PME# supported from D3hot
[    7.208459] pci 0000:00:1c.0: [8086:a190] type 01 class 0x060400
[    7.208512] pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
[    7.208578] pci 0000:00:1c.0: System wakeup disabled by ACPI
[    7.276407] pci 0000:00:1c.2: [8086:a192] type 01 class 0x060400
[    7.276461] pci 0000:00:1c.2: PME# supported from D0 D3hot D3cold
[    7.276532] pci 0000:00:1c.2: System wakeup disabled by ACPI
[    7.344365] pci 0000:00:1c.3: [8086:a193] type 01 class 0x060400
[    7.344419] pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
[    7.344490] pci 0000:00:1c.3: System wakeup disabled by ACPI
[    7.412325] pci 0000:00:1c.5: [8086:a195] type 01 class 0x060400
[    7.412377] pci 0000:00:1c.5: PME# supported from D0 D3hot D3cold
[    7.412448] pci 0000:00:1c.5: System wakeup disabled by ACPI
[    7.480292] pci 0000:00:1f.0: [8086:a1c4] type 00 class 0x060100
[    7.480491] pci 0000:00:1f.2: [8086:a1a1] type 00 class 0x058000
[    7.480503] pci 0000:00:1f.2: reg 0x10: [mem 0x9d600000-0x9d603fff]
[    7.480654] pci 0000:00:1f.4: [8086:a1a3] type 00 class 0x0c0500
[    7.480670] pci 0000:00:1f.4: reg 0x10: [mem 0x00000000-0x000000ff 64bit]
[    7.480693] pci 0000:00:1f.4: reg 0x20: [io  0x0780-0x079f]
[    7.480794] pci 0000:00:1f.5: [8086:a1a4] type 00 class 0x0c8000
[    7.480806] pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
[    7.480977] pci 0000:00:1c.0: PCI bridge to [bus 01]
[    7.540506] pci 0000:02:00.0: [8086:1533] type 00 class 0x020000
[    7.540529] pci 0000:02:00.0: reg 0x10: [mem 0x9d300000-0x9d3fffff]
[    7.540568] pci 0000:02:00.0: reg 0x1c: [mem 0x9d400000-0x9d403fff]
[    7.540682] pci 0000:02:00.0: PME# supported from D0 D3hot D3cold
[    7.542468] pci 0000:00:1c.2: PCI bridge to [bus 02]
[    7.601968] pci 0000:00:1c.2:   bridge window [mem 0x9d300000-0x9d4fffff]
[    7.602034] pci 0000:03:00.0: [8086:1533] type 00 class 0x020000
[    7.602057] pci 0000:03:00.0: reg 0x10: [mem 0x9d100000-0x9d1fffff]
[    7.602095] pci 0000:03:00.0: reg 0x1c: [mem 0x9d200000-0x9d203fff]
[    7.602210] pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
[    7.604990] pci 0000:00:1c.3: PCI bridge to [bus 03]
[    7.664428] pci 0000:00:1c.3:   bridge window [mem 0x9d100000-0x9d2fffff]
[    7.664486] pci 0000:04:00.0: [1a03:1150] type 01 class 0x060400
[    7.664598] pci 0000:04:00.0: supports D1 D2
[    7.664599] pci 0000:04:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[    7.666459] pci 0000:00:1c.5: PCI bridge to [bus 04-05]
[    7.729066] pci 0000:00:1c.5:   bridge window [io  0x2000-0x2fff]
[    7.729069] pci 0000:00:1c.5:   bridge window [mem 0x9c000000-0x9d0fffff]
[    7.729140] pci 0000:05:00.0: [1a03:2000] type 00 class 0x030000
[    7.729165] pci 0000:05:00.0: reg 0x10: [mem 0x9c000000-0x9cffffff]
[    7.729179] pci 0000:05:00.0: reg 0x14: [mem 0x9d000000-0x9d01ffff]
[    7.729194] pci 0000:05:00.0: reg 0x18: [io  0x2000-0x207f]
[    7.729295] pci 0000:05:00.0: supports D1 D2
[    7.729296] pci 0000:05:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[    7.729387] pci 0000:04:00.0: PCI bridge to [bus 05]
[    7.788833] pci 0000:04:00.0:   bridge window [io  0x2000-0x2fff]
[    7.788837] pci 0000:04:00.0:   bridge window [mem 0x9c000000-0x9d0fffff]
[    7.788870] pci_bus 0000:00: on NUMA node 0
[    7.789540] ACPI: PCI Root Bridge [PC01] (domain 0000 [bus 17-39])
[    7.863537] acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]
[    7.961892] acpi PNP0A08:01: _OSC: platform does not support [AER]
[    8.036053] acpi PNP0A08:01: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
[    8.128024] PCI host bridge to bus 0000:17
[    8.177073] pci_bus 0000:17: root bus resource [bus 17-39]
[    8.242753] pci_bus 0000:17: root bus resource [io  0x4000-0x5fff window]
[    8.323996] pci_bus 0000:17: root bus resource [mem 0x9d800000-0xaaffffff window]
[    8.413704] pci_bus 0000:17: root bus resource [mem 0x31000000000-0x31fffffffff window]
[    8.509688] pci 0000:17:01.0: [8086:2031] type 01 class 0x060400
[    8.509731] pci 0000:17:01.0: PME# supported from D0 D3hot D3cold
[    8.509767] pci 0000:17:01.0: System wakeup disabled by ACPI
[    8.577594] pci 0000:17:05.0: [8086:2034] type 00 class 0x088000
[    8.577666] pci 0000:17:05.2: [8086:2035] type 00 class 0x088000
[    8.577737] pci 0000:17:05.4: [8086:2036] type 00 class 0x080020
[    8.577744] pci 0000:17:05.4: reg 0x10: [mem 0xaaf00000-0xaaf00fff]
[    8.577814] pci 0000:17:08.0: [8086:208d] type 00 class 0x088000
[    8.577869] pci 0000:17:08.1: [8086:208d] type 00 class 0x088000
[    8.577919] pci 0000:17:08.2: [8086:208d] type 00 class 0x088000
[    8.577968] pci 0000:17:08.3: [8086:208d] type 00 class 0x088000
[    8.578018] pci 0000:17:08.4: [8086:208d] type 00 class 0x088000
[    8.578067] pci 0000:17:08.5: [8086:208d] type 00 class 0x088000
[    8.578116] pci 0000:17:08.6: [8086:208d] type 00 class 0x088000
[    8.578165] pci 0000:17:08.7: [8086:208d] type 00 class 0x088000
[    8.578215] pci 0000:17:09.0: [8086:208d] type 00 class 0x088000
[    8.578267] pci 0000:17:09.1: [8086:208d] type 00 class 0x088000
[    8.578322] pci 0000:17:0e.0: [8086:208e] type 00 class 0x088000
[    8.578374] pci 0000:17:0e.1: [8086:208e] type 00 class 0x088000
[    8.578424] pci 0000:17:0e.2: [8086:208e] type 00 class 0x088000
[    8.578472] pci 0000:17:0e.3: [8086:208e] type 00 class 0x088000
[    8.578523] pci 0000:17:0e.4: [8086:208e] type 00 class 0x088000
[    8.578574] pci 0000:17:0e.5: [8086:208e] type 00 class 0x088000
[    8.578623] pci 0000:17:0e.6: [8086:208e] type 00 class 0x088000
[    8.578672] pci 0000:17:0e.7: [8086:208e] type 00 class 0x088000
[    8.578722] pci 0000:17:0f.0: [8086:208e] type 00 class 0x088000
[    8.578773] pci 0000:17:0f.1: [8086:208e] type 00 class 0x088000
[    8.578836] pci 0000:17:1d.0: [8086:2054] type 00 class 0x088000
[    8.578889] pci 0000:17:1d.1: [8086:2055] type 00 class 0x088000
[    8.578941] pci 0000:17:1d.2: [8086:2056] type 00 class 0x088000
[    8.578990] pci 0000:17:1d.3: [8086:2057] type 00 class 0x088000
[    8.579046] pci 0000:17:1e.0: [8086:2080] type 00 class 0x088000
[    8.579100] pci 0000:17:1e.1: [8086:2081] type 00 class 0x088000
[    8.579149] pci 0000:17:1e.2: [8086:2082] type 00 class 0x088000
[    8.579202] pci 0000:17:1e.3: [8086:2083] type 00 class 0x088000
[    8.579252] pci 0000:17:1e.4: [8086:2084] type 00 class 0x088000
[    8.579303] pci 0000:17:1e.5: [8086:2085] type 00 class 0x088000
[    8.579355] pci 0000:17:1e.6: [8086:2086] type 00 class 0x088000
[    8.579452] pci 0000:18:00.0: [8086:1521] type 00 class 0x020000
[    8.579464] pci 0000:18:00.0: reg 0x10: [mem 0xaae60000-0xaae7ffff]
[    8.579478] pci 0000:18:00.0: reg 0x18: [io  0x5060-0x507f]
[    8.579486] pci 0000:18:00.0: reg 0x1c: [mem 0xaae8c000-0xaae8ffff]
[    8.579557] pci 0000:18:00.0: PME# supported from D0 D3hot
[    8.579585] pci 0000:18:00.0: reg 0x184: [mem 0x31ffffe0000-0x31ffffe3fff 64bit pref]
[    8.579587] pci 0000:18:00.0: VF(n) BAR0 space: [mem 0x31ffffe0000-0x31fffffffff 64bit pref] (contains BAR0 for 8 VFs)
[    8.707768] pci 0000:18:00.0: reg 0x190: [mem 0x31ffffc0000-0x31ffffc3fff 64bit pref]
[    8.707770] pci 0000:18:00.0: VF(n) BAR3 space: [mem 0x31ffffc0000-0x31ffffdffff 64bit pref] (contains BAR3 for 8 VFs)
[    8.835957] pci 0000:18:00.1: [8086:1521] type 00 class 0x020000
[    8.835970] pci 0000:18:00.1: reg 0x10: [mem 0xaae40000-0xaae5ffff]
[    8.835984] pci 0000:18:00.1: reg 0x18: [io  0x5040-0x505f]
[    8.835991] pci 0000:18:00.1: reg 0x1c: [mem 0xaae88000-0xaae8bfff]
[    8.836060] pci 0000:18:00.1: PME# supported from D0 D3hot
[    8.836083] pci 0000:18:00.1: reg 0x184: [mem 0x31ffffa0000-0x31ffffa3fff 64bit pref]
[    8.836084] pci 0000:18:00.1: VF(n) BAR0 space: [mem 0x31ffffa0000-0x31ffffbffff 64bit pref] (contains BAR0 for 8 VFs)
[    8.964248] pci 0000:18:00.1: reg 0x190: [mem 0x31ffff80000-0x31ffff83fff 64bit pref]
[    8.964249] pci 0000:18:00.1: VF(n) BAR3 space: [mem 0x31ffff80000-0x31ffff9ffff 64bit pref] (contains BAR3 for 8 VFs)
[    9.092435] pci 0000:18:00.2: [8086:1521] type 00 class 0x020000
[    9.092447] pci 0000:18:00.2: reg 0x10: [mem 0xaae20000-0xaae3ffff]
[    9.092461] pci 0000:18:00.2: reg 0x18: [io  0x5020-0x503f]
[    9.092469] pci 0000:18:00.2: reg 0x1c: [mem 0xaae84000-0xaae87fff]
[    9.092537] pci 0000:18:00.2: PME# supported from D0 D3hot
[    9.092560] pci 0000:18:00.2: reg 0x184: [mem 0x31ffff60000-0x31ffff63fff 64bit pref]
[    9.092562] pci 0000:18:00.2: VF(n) BAR0 space: [mem 0x31ffff60000-0x31ffff7ffff 64bit pref] (contains BAR0 for 8 VFs)
[    9.220729] pci 0000:18:00.2: reg 0x190: [mem 0x31ffff40000-0x31ffff43fff 64bit pref]
[    9.220730] pci 0000:18:00.2: VF(n) BAR3 space: [mem 0x31ffff40000-0x31ffff5ffff 64bit pref] (contains BAR3 for 8 VFs)
[    9.348915] pci 0000:18:00.3: [8086:1521] type 00 class 0x020000
[    9.348927] pci 0000:18:00.3: reg 0x10: [mem 0xaae00000-0xaae1ffff]
[    9.348940] pci 0000:18:00.3: reg 0x18: [io  0x5000-0x501f]
[    9.348948] pci 0000:18:00.3: reg 0x1c: [mem 0xaae80000-0xaae83fff]
[    9.349017] pci 0000:18:00.3: PME# supported from D0 D3hot
[    9.349039] pci 0000:18:00.3: reg 0x184: [mem 0x31ffff20000-0x31ffff23fff 64bit pref]
[    9.349041] pci 0000:18:00.3: VF(n) BAR0 space: [mem 0x31ffff20000-0x31ffff3ffff 64bit pref] (contains BAR0 for 8 VFs)
[    9.477211] pci 0000:18:00.3: reg 0x190: [mem 0x31ffff00000-0x31ffff03fff 64bit pref]
[    9.477212] pci 0000:18:00.3: VF(n) BAR3 space: [mem 0x31ffff00000-0x31ffff1ffff 64bit pref] (contains BAR3 for 8 VFs)
[    9.607360] pci 0000:17:01.0: PCI bridge to [bus 18-19]
[    9.669996] pci 0000:17:01.0:   bridge window [io  0x5000-0x5fff]
[    9.669999] pci 0000:17:01.0:   bridge window [mem 0xaae00000-0xaaefffff]
[    9.670002] pci 0000:17:01.0:   bridge window [mem 0x31ffff00000-0x31fffffffff 64bit pref]
[    9.670009] pci_bus 0000:17: on NUMA node 0
[    9.670088] ACPI: PCI Root Bridge [PC02] (domain 0000 [bus 3a-5c])
[    9.744083] acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]
[    9.842433] acpi PNP0A08:02: _OSC: platform does not support [AER]
[    9.916586] acpi PNP0A08:02: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
[   10.008540] PCI host bridge to bus 0000:3a
[   10.057619] pci_bus 0000:3a: root bus resource [bus 3a-5c]
[   10.123299] pci_bus 0000:3a: root bus resource [io  0x6000-0x7fff window]
[   10.204544] pci_bus 0000:3a: root bus resource [mem 0xab000000-0xb87fffff window]
[   10.294250] pci_bus 0000:3a: root bus resource [mem 0x32000000000-0x32fffffffff window]
[   10.390231] pci 0000:3a:00.0: [8086:2030] type 01 class 0x060400
[   10.390272] pci 0000:3a:00.0: PME# supported from D0 D3hot D3cold
[   10.390300] pci 0000:3a:00.0: System wakeup disabled by ACPI
[   10.458138] pci 0000:3a:05.0: [8086:2034] type 00 class 0x088000
[   10.458204] pci 0000:3a:05.2: [8086:2035] type 00 class 0x088000
[   10.458266] pci 0000:3a:05.4: [8086:2036] type 00 class 0x080020
[   10.458273] pci 0000:3a:05.4: reg 0x10: [mem 0xb8700000-0xb8700fff]
[   10.458338] pci 0000:3a:08.0: [8086:2066] type 00 class 0x088000
[   10.458397] pci 0000:3a:09.0: [8086:2066] type 00 class 0x088000
[   10.458455] pci 0000:3a:0a.0: [8086:2040] type 00 class 0x088000
[   10.458512] pci 0000:3a:0a.1: [8086:2041] type 00 class 0x088000
[   10.458569] pci 0000:3a:0a.2: [8086:2042] type 00 class 0x088000
[   10.458623] pci 0000:3a:0a.3: [8086:2043] type 00 class 0x088000
[   10.458678] pci 0000:3a:0a.4: [8086:2044] type 00 class 0x088000
[   10.458732] pci 0000:3a:0a.5: [8086:2045] type 00 class 0x088000
[   10.458787] pci 0000:3a:0a.6: [8086:2046] type 00 class 0x088000
[   10.458840] pci 0000:3a:0a.7: [8086:2047] type 00 class 0x088000
[   10.458895] pci 0000:3a:0b.0: [8086:2048] type 00 class 0x088000
[   10.458952] pci 0000:3a:0b.1: [8086:2049] type 00 class 0x088000
[   10.459006] pci 0000:3a:0b.2: [8086:204a] type 00 class 0x088000
[   10.459059] pci 0000:3a:0b.3: [8086:204b] type 00 class 0x088000
[   10.459117] pci 0000:3a:0c.0: [8086:2040] type 00 class 0x088000
[   10.459177] pci 0000:3a:0c.1: [8086:2041] type 00 class 0x088000
[   10.459231] pci 0000:3a:0c.2: [8086:2042] type 00 class 0x088000
[   10.459286] pci 0000:3a:0c.3: [8086:2043] type 00 class 0x088000
[   10.459340] pci 0000:3a:0c.4: [8086:2044] type 00 class 0x088000
[   10.459397] pci 0000:3a:0c.5: [8086:2045] type 00 class 0x088000
[   10.459452] pci 0000:3a:0c.6: [8086:2046] type 00 class 0x088000
[   10.459507] pci 0000:3a:0c.7: [8086:2047] type 00 class 0x088000
[   10.459561] pci 0000:3a:0d.0: [8086:2048] type 00 class 0x088000
[   10.459620] pci 0000:3a:0d.1: [8086:2049] type 00 class 0x088000
[   10.459674] pci 0000:3a:0d.2: [8086:204a] type 00 class 0x088000
[   10.459729] pci 0000:3a:0d.3: [8086:204b] type 00 class 0x088000
[   10.459829] pci 0000:3b:00.0: [8086:37c0] type 01 class 0x060400
[   10.459842] pci 0000:3b:00.0: reg 0x10: [mem 0xb8600000-0xb861ffff 64bit]
[   10.459848] pci 0000:3b:00.0: reg 0x38: [mem 0xb8300000-0xb83fffff pref]
[   10.459885] pci 0000:3b:00.0: PME# supported from D0 D3hot D3cold
[   10.459904] pci 0000:3b:00.0: System wakeup disabled by ACPI
[   10.529651] pci 0000:3a:00.0: PCI bridge to [bus 3b-3f]
[   10.592272] pci 0000:3a:00.0:   bridge window [mem 0xb8300000-0xb86fffff]
[   10.592276] pci 0000:3a:00.0:   bridge window [mem 0x32ffd000000-0x32fff9fffff 64bit pref]
[   10.592328] pci 0000:3c:00.0: [8086:37c2] type 01 class 0x060400
[   10.592393] pci 0000:3c:00.0: PME# supported from D0 D3hot D3cold
[   10.592415] pci 0000:3c:00.0: System wakeup disabled by ACPI
[   10.660257] pci 0000:3c:03.0: [8086:37c5] type 01 class 0x060400
[   10.660319] pci 0000:3c:03.0: PME# supported from D0 D3hot D3cold
[   10.660341] pci 0000:3c:03.0: System wakeup disabled by ACPI
[   10.728125] pci 0000:3b:00.0: PCI bridge to [bus 3c-3f]
[   10.790756] pci 0000:3b:00.0:   bridge window [mem 0xb8400000-0xb85fffff]
[   10.790760] pci 0000:3b:00.0:   bridge window [mem 0x32ffd000000-0x32fff9fffff 64bit pref]
[   10.790804] pci 0000:3d:00.0: [8086:37c8] type 00 class 0x0b4000
[   10.790835] pci 0000:3d:00.0: reg 0x18: [mem 0xb8540000-0xb857ffff 64bit]
[   10.790847] pci 0000:3d:00.0: reg 0x20: [mem 0xb8500000-0xb853ffff 64bit]
[   10.790933] pci 0000:3d:00.0: reg 0x164: [mem 0xb8590000-0xb8590fff 64bit]
[   10.790935] pci 0000:3d:00.0: VF(n) BAR0 space: [mem 0xb8590000-0xb859ffff 64bit] (contains BAR0 for 16 VFs)
[   10.908701] pci 0000:3d:00.0: reg 0x16c: [mem 0xb8580000-0xb8580fff 64bit]
[   10.908703] pci 0000:3d:00.0: VF(n) BAR2 space: [mem 0xb8580000-0xb858ffff 64bit] (contains BAR2 for 16 VFs)
[   11.026553] pci 0000:3c:00.0: PCI bridge to [bus 3d]
[   11.086038] pci 0000:3c:00.0:   bridge window [mem 0xb8500000-0xb85fffff]
[   11.086089] pci 0000:3e:00.0: [8086:37d2] type 00 class 0x020000
[   11.086106] pci 0000:3e:00.0: reg 0x10: [mem 0x32ffe000000-0x32ffeffffff 64bit pref]
[   11.086124] pci 0000:3e:00.0: reg 0x1c: [mem 0x32fff808000-0x32fff80ffff 64bit pref]
[   11.086138] pci 0000:3e:00.0: reg 0x30: [mem 0xb8480000-0xb84fffff pref]
[   11.086194] pci 0000:3e:00.0: PME# supported from D0 D3hot D3cold
[   11.086223] pci 0000:3e:00.0: reg 0x184: [mem 0x32fff400000-0x32fff41ffff 64bit pref]
[   11.086225] pci 0000:3e:00.0: VF(n) BAR0 space: [mem 0x32fff400000-0x32fff7fffff 64bit pref] (contains BAR0 for 32 VFs)
[   11.215409] pci 0000:3e:00.0: reg 0x190: [mem 0x32fff890000-0x32fff893fff 64bit pref]
[   11.215411] pci 0000:3e:00.0: VF(n) BAR3 space: [mem 0x32fff890000-0x32fff90ffff 64bit pref] (contains BAR3 for 32 VFs)
[   11.344691] pci 0000:3e:00.1: [8086:37d2] type 00 class 0x020000
[   11.344707] pci 0000:3e:00.1: reg 0x10: [mem 0x32ffd000000-0x32ffdffffff 64bit pref]
[   11.344725] pci 0000:3e:00.1: reg 0x1c: [mem 0x32fff800000-0x32fff807fff 64bit pref]
[   11.344739] pci 0000:3e:00.1: reg 0x30: [mem 0xb8400000-0xb847ffff pref]
[   11.344794] pci 0000:3e:00.1: PME# supported from D0 D3hot D3cold
[   11.344819] pci 0000:3e:00.1: reg 0x184: [mem 0x32fff000000-0x32fff01ffff 64bit pref]
[   11.344821] pci 0000:3e:00.1: VF(n) BAR0 space: [mem 0x32fff000000-0x32fff3fffff 64bit pref] (contains BAR0 for 32 VFs)
[   11.473967] pci 0000:3e:00.1: reg 0x190: [mem 0x32fff810000-0x32fff813fff 64bit pref]
[   11.473968] pci 0000:3e:00.1: VF(n) BAR3 space: [mem 0x32fff810000-0x32fff88ffff 64bit pref] (contains BAR3 for 32 VFs)
[   11.603246] pci 0000:3c:03.0: PCI bridge to [bus 3e-3f]
[   11.670553] pci 0000:3c:03.0:   bridge window [mem 0xb8400000-0xb84fffff]
[   11.670558] pci 0000:3c:03.0:   bridge window [mem 0x32ffd000000-0x32fff9fffff 64bit pref]
[   11.670576] pci_bus 0000:3a: on NUMA node 0
[   11.670650] ACPI: PCI Root Bridge [PC03] (domain 0000 [bus 5d-7f])
[   11.744674] acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]
[   11.843018] acpi PNP0A08:03: _OSC: platform does not support [AER]
[   11.917179] acpi PNP0A08:03: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
[   12.009135] PCI host bridge to bus 0000:5d
[   12.058211] pci_bus 0000:5d: root bus resource [bus 5d-7f]
[   12.123890] pci_bus 0000:5d: root bus resource [io  0x8000-0x9fff window]
[   12.205136] pci_bus 0000:5d: root bus resource [mem 0xb8800000-0xc5ffffff window]
[   12.294841] pci_bus 0000:5d: root bus resource [mem 0x33000000000-0x33fffffffff window]
[   12.390822] pci 0000:5d:00.0: [8086:2030] type 01 class 0x060400
[   12.390867] pci 0000:5d:00.0: PME# supported from D0 D3hot D3cold
[   12.390895] pci 0000:5d:00.0: System wakeup disabled by ACPI
[   12.458731] pci 0000:5d:02.0: [8086:2032] type 01 class 0x060400
[   12.458774] pci 0000:5d:02.0: PME# supported from D0 D3hot D3cold
[   12.458801] pci 0000:5d:02.0: System wakeup disabled by ACPI
[   12.526582] pci 0000:5d:05.0: [8086:2034] type 00 class 0x088000
[   12.526647] pci 0000:5d:05.2: [8086:2035] type 00 class 0x088000
[   12.526710] pci 0000:5d:05.4: [8086:2036] type 00 class 0x080020
[   12.526717] pci 0000:5d:05.4: reg 0x10: [mem 0xc5f00000-0xc5f00fff]
[   12.526788] pci 0000:5d:0e.0: [8086:2058] type 00 class 0x110100
[   12.526848] pci 0000:5d:0e.1: [8086:2059] type 00 class 0x088000
[   12.526904] pci 0000:5d:0f.0: [8086:2058] type 00 class 0x110100
[   12.526958] pci 0000:5d:0f.1: [8086:2059] type 00 class 0x088000
[   12.527014] pci 0000:5d:12.0: [8086:204c] type 00 class 0x110100
[   12.527067] pci 0000:5d:12.1: [8086:204d] type 00 class 0x110100
[   12.527112] pci 0000:5d:12.2: [8086:204e] type 00 class 0x088000
[   12.527160] pci 0000:5d:15.0: [8086:2018] type 00 class 0x088000
[   12.527212] pci 0000:5d:16.0: [8086:2018] type 00 class 0x088000
[   12.527259] pci 0000:5d:16.4: [8086:2018] type 00 class 0x088000
[   12.527339] pci 0000:5d:00.0: PCI bridge to [bus 5e]
[   12.586884] pci 0000:5f:00.0: [8086:1522] type 00 class 0x020000
[   12.586896] pci 0000:5f:00.0: reg 0x10: [mem 0xc5e60000-0xc5e7ffff]
[   12.586909] pci 0000:5f:00.0: reg 0x18: [io  0x9060-0x907f]
[   12.586917] pci 0000:5f:00.0: reg 0x1c: [mem 0xc5e8c000-0xc5e8ffff]
[   12.586988] pci 0000:5f:00.0: PME# supported from D0 D3hot
[   12.587015] pci 0000:5f:00.0: reg 0x184: [mem 0x33ffffe0000-0x33ffffe3fff 64bit pref]
[   12.587016] pci 0000:5f:00.0: VF(n) BAR0 space: [mem 0x33ffffe0000-0x33fffffffff 64bit pref] (contains BAR0 for 8 VFs)
[   12.715174] pci 0000:5f:00.0: reg 0x190: [mem 0x33ffffc0000-0x33ffffc3fff 64bit pref]
[   12.715175] pci 0000:5f:00.0: VF(n) BAR3 space: [mem 0x33ffffc0000-0x33ffffdffff 64bit pref] (contains BAR3 for 8 VFs)
[   12.843360] pci 0000:5f:00.1: [8086:1522] type 00 class 0x020000
[   12.843372] pci 0000:5f:00.1: reg 0x10: [mem 0xc5e40000-0xc5e5ffff]
[   12.843386] pci 0000:5f:00.1: reg 0x18: [io  0x9040-0x905f]
[   12.843393] pci 0000:5f:00.1: reg 0x1c: [mem 0xc5e88000-0xc5e8bfff]
[   12.843461] pci 0000:5f:00.1: PME# supported from D0 D3hot
[   12.843484] pci 0000:5f:00.1: reg 0x184: [mem 0x33ffffa0000-0x33ffffa3fff 64bit pref]
[   12.843486] pci 0000:5f:00.1: VF(n) BAR0 space: [mem 0x33ffffa0000-0x33ffffbffff 64bit pref] (contains BAR0 for 8 VFs)
[   12.971653] pci 0000:5f:00.1: reg 0x190: [mem 0x33ffff80000-0x33ffff83fff 64bit pref]
[   12.971655] pci 0000:5f:00.1: VF(n) BAR3 space: [mem 0x33ffff80000-0x33ffff9ffff 64bit pref] (contains BAR3 for 8 VFs)
[   13.099838] pci 0000:5f:00.2: [8086:1522] type 00 class 0x020000
[   13.099850] pci 0000:5f:00.2: reg 0x10: [mem 0xc5e20000-0xc5e3ffff]
[   13.099864] pci 0000:5f:00.2: reg 0x18: [io  0x9020-0x903f]
[   13.099871] pci 0000:5f:00.2: reg 0x1c: [mem 0xc5e84000-0xc5e87fff]
[   13.099939] pci 0000:5f:00.2: PME# supported from D0 D3hot
[   13.099962] pci 0000:5f:00.2: reg 0x184: [mem 0x33ffff60000-0x33ffff63fff 64bit pref]
[   13.099964] pci 0000:5f:00.2: VF(n) BAR0 space: [mem 0x33ffff60000-0x33ffff7ffff 64bit pref] (contains BAR0 for 8 VFs)
[   13.228135] pci 0000:5f:00.2: reg 0x190: [mem 0x33ffff40000-0x33ffff43fff 64bit pref]
[   13.228136] pci 0000:5f:00.2: VF(n) BAR3 space: [mem 0x33ffff40000-0x33ffff5ffff 64bit pref] (contains BAR3 for 8 VFs)
[   13.356322] pci 0000:5f:00.3: [8086:1522] type 00 class 0x020000
[   13.356333] pci 0000:5f:00.3: reg 0x10: [mem 0xc5e00000-0xc5e1ffff]
[   13.356347] pci 0000:5f:00.3: reg 0x18: [io  0x9000-0x901f]
[   13.356354] pci 0000:5f:00.3: reg 0x1c: [mem 0xc5e80000-0xc5e83fff]
[   13.356422] pci 0000:5f:00.3: PME# supported from D0 D3hot
[   13.356445] pci 0000:5f:00.3: reg 0x184: [mem 0x33ffff20000-0x33ffff23fff 64bit pref]
[   13.356446] pci 0000:5f:00.3: VF(n) BAR0 space: [mem 0x33ffff20000-0x33ffff3ffff 64bit pref] (contains BAR0 for 8 VFs)
[   13.484615] pci 0000:5f:00.3: reg 0x190: [mem 0x33ffff00000-0x33ffff03fff 64bit pref]
[   13.484617] pci 0000:5f:00.3: VF(n) BAR3 space: [mem 0x33ffff00000-0x33ffff1ffff 64bit pref] (contains BAR3 for 8 VFs)
[   13.614766] pci 0000:5d:02.0: PCI bridge to [bus 5f-60]
[   13.677405] pci 0000:5d:02.0:   bridge window [io  0x9000-0x9fff]
[   13.677408] pci 0000:5d:02.0:   bridge window [mem 0xc5e00000-0xc5efffff]
[   13.677411] pci 0000:5d:02.0:   bridge window [mem 0x33ffff00000-0x33fffffffff 64bit pref]
[   13.677422] pci_bus 0000:5d: on NUMA node 0
[   13.677542] ACPI: PCI Root Bridge [PC06] (domain 0000 [bus 80-84])
[   13.751591] acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]
[   13.849796] acpi PNP0A08:06: _OSC: platform does not support [AER]
[   13.923954] acpi PNP0A08:06: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
[   14.015803] acpiphp: Slot [8191-9] registered
[   14.068058] PCI host bridge to bus 0000:80
[   14.117110] pci_bus 0000:80: root bus resource [bus 80-84]
[   14.182789] pci_bus 0000:80: root bus resource [io  0xa000-0xbfff window]
[   14.264035] pci_bus 0000:80: root bus resource [mem 0xc6000000-0xd37fffff window]
[   14.353739] pci_bus 0000:80: root bus resource [mem 0x34000000000-0x34fffffffff window]
[   14.449723] pci 0000:80:04.0: [8086:2021] type 00 class 0x088000
[   14.449734] pci 0000:80:04.0: reg 0x10: [mem 0x34ffff1c000-0x34ffff1ffff 64bit]
[   14.449812] pci 0000:80:04.1: [8086:2021] type 00 class 0x088000
[   14.449823] pci 0000:80:04.1: reg 0x10: [mem 0x34ffff18000-0x34ffff1bfff 64bit]
[   14.449895] pci 0000:80:04.2: [8086:2021] type 00 class 0x088000
[   14.449906] pci 0000:80:04.2: reg 0x10: [mem 0x34ffff14000-0x34ffff17fff 64bit]
[   14.449979] pci 0000:80:04.3: [8086:2021] type 00 class 0x088000
[   14.449990] pci 0000:80:04.3: reg 0x10: [mem 0x34ffff10000-0x34ffff13fff 64bit]
[   14.450066] pci 0000:80:04.4: [8086:2021] type 00 class 0x088000
[   14.450076] pci 0000:80:04.4: reg 0x10: [mem 0x34ffff0c000-0x34ffff0ffff 64bit]
[   14.450151] pci 0000:80:04.5: [8086:2021] type 00 class 0x088000
[   14.450161] pci 0000:80:04.5: reg 0x10: [mem 0x34ffff08000-0x34ffff0bfff 64bit]
[   14.450234] pci 0000:80:04.6: [8086:2021] type 00 class 0x088000
[   14.450244] pci 0000:80:04.6: reg 0x10: [mem 0x34ffff04000-0x34ffff07fff 64bit]
[   14.450317] pci 0000:80:04.7: [8086:2021] type 00 class 0x088000
[   14.450327] pci 0000:80:04.7: reg 0x10: [mem 0x34ffff00000-0x34ffff03fff 64bit]
[   14.450399] pci 0000:80:05.0: [8086:2024] type 00 class 0x088000
[   14.450466] pci 0000:80:05.2: [8086:2025] type 00 class 0x088000
[   14.450531] pci 0000:80:05.4: [8086:2026] type 00 class 0x080020
[   14.450539] pci 0000:80:05.4: reg 0x10: [mem 0xd3700000-0xd3700fff]
[   14.450612] pci 0000:80:08.0: [8086:2014] type 00 class 0x088000
[   14.450670] pci 0000:80:08.1: [8086:2015] type 00 class 0x110100
[   14.450721] pci 0000:80:08.2: [8086:2016] type 00 class 0x088000
[   14.450788] pci_bus 0000:80: on NUMA node 1
[   14.450853] ACPI: PCI Root Bridge [PC07] (domain 0000 [bus 85-ad])
[   14.524873] acpi PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]
[   14.623221] acpi PNP0A08:07: _OSC: platform does not support [AER]
[   14.697374] acpi PNP0A08:07: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
[   14.789328] acpiphp: Slot [8191-13] registered
[   14.842577] acpiphp: Slot [8191-8] registered
[   14.894765] acpiphp: Slot [8191-10] registered
[   14.947992] acpiphp: Slot [8191-11] registered
[   15.001248] PCI host bridge to bus 0000:85
[   15.050377] pci_bus 0000:85: root bus resource [bus 85-ad]
[   15.116059] pci_bus 0000:85: root bus resource [io  0xc000-0xdfff window]
[   15.197305] pci_bus 0000:85: root bus resource [mem 0xd3800000-0xe0ffffff window]
[   15.287011] pci_bus 0000:85: root bus resource [mem 0x35000000000-0x35fffffffff window]
[   15.382993] pci 0000:85:05.0: [8086:2034] type 00 class 0x088000
[   15.383070] pci 0000:85:05.2: [8086:2035] type 00 class 0x088000
[   15.383143] pci 0000:85:05.4: [8086:2036] type 00 class 0x080020
[   15.383150] pci 0000:85:05.4: reg 0x10: [mem 0xe0f00000-0xe0f00fff]
[   15.383227] pci 0000:85:08.0: [8086:208d] type 00 class 0x088000
[   15.383285] pci 0000:85:08.1: [8086:208d] type 00 class 0x088000
[   15.383339] pci 0000:85:08.2: [8086:208d] type 00 class 0x088000
[   15.383395] pci 0000:85:08.3: [8086:208d] type 00 class 0x088000
[   15.383446] pci 0000:85:08.4: [8086:208d] type 00 class 0x088000
[   15.383499] pci 0000:85:08.5: [8086:208d] type 00 class 0x088000
[   15.383551] pci 0000:85:08.6: [8086:208d] type 00 class 0x088000
[   15.383604] pci 0000:85:08.7: [8086:208d] type 00 class 0x088000
[   15.383659] pci 0000:85:09.0: [8086:208d] type 00 class 0x088000
[   15.383714] pci 0000:85:09.1: [8086:208d] type 00 class 0x088000
[   15.383772] pci 0000:85:0e.0: [8086:208e] type 00 class 0x088000
[   15.383829] pci 0000:85:0e.1: [8086:208e] type 00 class 0x088000
[   15.383883] pci 0000:85:0e.2: [8086:208e] type 00 class 0x088000
[   15.383935] pci 0000:85:0e.3: [8086:208e] type 00 class 0x088000
[   15.383988] pci 0000:85:0e.4: [8086:208e] type 00 class 0x088000
[   15.384040] pci 0000:85:0e.5: [8086:208e] type 00 class 0x088000
[   15.384092] pci 0000:85:0e.6: [8086:208e] type 00 class 0x088000
[   15.384144] pci 0000:85:0e.7: [8086:208e] type 00 class 0x088000
[   15.384196] pci 0000:85:0f.0: [8086:208e] type 00 class 0x088000
[   15.384255] pci 0000:85:0f.1: [8086:208e] type 00 class 0x088000
[   15.384321] pci 0000:85:1d.0: [8086:2054] type 00 class 0x088000
[   15.384382] pci 0000:85:1d.1: [8086:2055] type 00 class 0x088000
[   15.384436] pci 0000:85:1d.2: [8086:2056] type 00 class 0x088000
[   15.384488] pci 0000:85:1d.3: [8086:2057] type 00 class 0x088000
[   15.384545] pci 0000:85:1e.0: [8086:2080] type 00 class 0x088000
[   15.384601] pci 0000:85:1e.1: [8086:2081] type 00 class 0x088000
[   15.384655] pci 0000:85:1e.2: [8086:2082] type 00 class 0x088000
[   15.384711] pci 0000:85:1e.3: [8086:2083] type 00 class 0x088000
[   15.384765] pci 0000:85:1e.4: [8086:2084] type 00 class 0x088000
[   15.384819] pci 0000:85:1e.5: [8086:2085] type 00 class 0x088000
[   15.384873] pci 0000:85:1e.6: [8086:2086] type 00 class 0x088000
[   15.384927] pci_bus 0000:85: on NUMA node 1
[   15.385011] ACPI: PCI Root Bridge [PC08] (domain 0000 [bus ae-d6])
[   15.458972] acpi PNP0A08:08: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]
[   15.557323] acpi PNP0A08:08: _OSC: platform does not support [AER]
[   15.631484] acpi PNP0A08:08: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
[   15.723426] acpiphp: Slot [8191-17] registered
[   15.776678] acpiphp: Slot [8191-12] registered
[   15.829903] acpiphp: Slot [8191-14] registered
[   15.883130] acpiphp: Slot [8191-15] registered
[   15.936364] PCI host bridge to bus 0000:ae
[   15.985413] pci_bus 0000:ae: root bus resource [bus ae-d6]
[   16.051095] pci_bus 0000:ae: root bus resource [io  0xe000-0xefff window]
[   16.132336] pci_bus 0000:ae: root bus resource [mem 0xe1000000-0xee7fffff window]
[   16.222045] pci_bus 0000:ae: root bus resource [mem 0x36000000000-0x36fffffffff window]
[   16.318028] pci 0000:ae:05.0: [8086:2034] type 00 class 0x088000
[   16.318101] pci 0000:ae:05.2: [8086:2035] type 00 class 0x088000
[   16.318167] pci 0000:ae:05.4: [8086:2036] type 00 class 0x080020
[   16.318175] pci 0000:ae:05.4: reg 0x10: [mem 0xee700000-0xee700fff]
[   16.318249] pci 0000:ae:08.0: [8086:2066] type 00 class 0x088000
[   16.318314] pci 0000:ae:09.0: [8086:2066] type 00 class 0x088000
[   16.318377] pci 0000:ae:0a.0: [8086:2040] type 00 class 0x088000
[   16.318439] pci 0000:ae:0a.1: [8086:2041] type 00 class 0x088000
[   16.318498] pci 0000:ae:0a.2: [8086:2042] type 00 class 0x088000
[   16.318556] pci 0000:ae:0a.3: [8086:2043] type 00 class 0x088000
[   16.318615] pci 0000:ae:0a.4: [8086:2044] type 00 class 0x088000
[   16.318672] pci 0000:ae:0a.5: [8086:2045] type 00 class 0x088000
[   16.318730] pci 0000:ae:0a.6: [8086:2046] type 00 class 0x088000
[   16.318790] pci 0000:ae:0a.7: [8086:2047] type 00 class 0x088000
[   16.318848] pci 0000:ae:0b.0: [8086:2048] type 00 class 0x088000
[   16.318909] pci 0000:ae:0b.1: [8086:2049] type 00 class 0x088000
[   16.318968] pci 0000:ae:0b.2: [8086:204a] type 00 class 0x088000
[   16.319026] pci 0000:ae:0b.3: [8086:204b] type 00 class 0x088000
[   16.319087] pci 0000:ae:0c.0: [8086:2040] type 00 class 0x088000
[   16.319149] pci 0000:ae:0c.1: [8086:2041] type 00 class 0x088000
[   16.319208] pci 0000:ae:0c.2: [8086:2042] type 00 class 0x088000
[   16.319269] pci 0000:ae:0c.3: [8086:2043] type 00 class 0x088000
[   16.319329] pci 0000:ae:0c.4: [8086:2044] type 00 class 0x088000
[   16.319387] pci 0000:ae:0c.5: [8086:2045] type 00 class 0x088000
[   16.319446] pci 0000:ae:0c.6: [8086:2046] type 00 class 0x088000
[   16.319504] pci 0000:ae:0c.7: [8086:2047] type 00 class 0x088000
[   16.319563] pci 0000:ae:0d.0: [8086:2048] type 00 class 0x088000
[   16.319625] pci 0000:ae:0d.1: [8086:2049] type 00 class 0x088000
[   16.319685] pci 0000:ae:0d.2: [8086:204a] type 00 class 0x088000
[   16.319746] pci 0000:ae:0d.3: [8086:204b] type 00 class 0x088000
[   16.319816] pci_bus 0000:ae: on NUMA node 1
[   16.319895] ACPI: PCI Root Bridge [PC09] (domain 0000 [bus d7-ff])
[   16.393903] acpi PNP0A08:09: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]
[   16.492253] acpi PNP0A08:09: _OSC: platform does not support [AER]
[   16.566411] acpi PNP0A08:09: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
[   16.658339] acpiphp: Slot [5] registered
[   16.705380] acpiphp: Slot [0-5] registered
[   16.754453] acpiphp: Slot [7] registered
[   16.801458] acpiphp: Slot [0-3] registered
[   16.850540] PCI host bridge to bus 0000:d7
[   16.899593] pci_bus 0000:d7: root bus resource [bus d7-ff]
[   16.965272] pci_bus 0000:d7: root bus resource [io  0xf000-0xffff window]
[   17.046515] pci_bus 0000:d7: root bus resource [mem 0xee800000-0xfbffffff window]
[   17.136224] pci_bus 0000:d7: root bus resource [mem 0x37000000000-0x37fffffffff window]
[   17.232203] pci 0000:d7:00.0: [8086:2030] type 01 class 0x060400
[   17.232251] pci 0000:d7:00.0: PME# supported from D0 D3hot D3cold
[   17.232282] pci 0000:d7:00.0: System wakeup disabled by ACPI
[   17.300114] pci 0000:d7:01.0: [8086:2031] type 01 class 0x060400
[   17.300161] pci 0000:d7:01.0: PME# supported from D0 D3hot D3cold
[   17.300190] pci 0000:d7:01.0: System wakeup disabled by ACPI
[   17.367966] pci 0000:d7:02.0: [8086:2032] type 01 class 0x060400
[   17.368013] pci 0000:d7:02.0: PME# supported from D0 D3hot D3cold
[   17.368042] pci 0000:d7:02.0: System wakeup disabled by ACPI
[   17.435819] pci 0000:d7:03.0: [8086:2033] type 01 class 0x060400
[   17.435865] pci 0000:d7:03.0: PME# supported from D0 D3hot D3cold
[   17.435895] pci 0000:d7:03.0: System wakeup disabled by ACPI
[   17.503673] pci 0000:d7:05.0: [8086:2034] type 00 class 0x088000
[   17.503740] pci 0000:d7:05.2: [8086:2035] type 00 class 0x088000
[   17.503806] pci 0000:d7:05.4: [8086:2036] type 00 class 0x080020
[   17.503814] pci 0000:d7:05.4: reg 0x10: [mem 0xfbf00000-0xfbf00fff]
[   17.503893] pci 0000:d7:0e.0: [8086:2058] type 00 class 0x110100
[   17.503954] pci 0000:d7:0e.1: [8086:2059] type 00 class 0x088000
[   17.504014] pci 0000:d7:0f.0: [8086:2058] type 00 class 0x110100
[   17.504073] pci 0000:d7:0f.1: [8086:2059] type 00 class 0x088000
[   17.504136] pci 0000:d7:12.0: [8086:204c] type 00 class 0x110100
[   17.504193] pci 0000:d7:12.1: [8086:204d] type 00 class 0x110100
[   17.504240] pci 0000:d7:12.2: [8086:204e] type 00 class 0x088000
[   17.504292] pci 0000:d7:15.0: [8086:2018] type 00 class 0x088000
[   17.504349] pci 0000:d7:16.0: [8086:2018] type 00 class 0x088000
[   17.504400] pci 0000:d7:16.4: [8086:2018] type 00 class 0x088000
[   17.504492] pci 0000:d8:00.0: [8086:0a53] type 00 class 0x010802
[   17.504507] pci 0000:d8:00.0: reg 0x10: [mem 0xfbe10000-0xfbe13fff 64bit]
[   17.504536] pci 0000:d8:00.0: reg 0x30: [mem 0xfbe00000-0xfbe0ffff pref]
[   17.506676] pci 0000:d7:00.0: PCI bridge to [bus d8]
[   17.566113] pci 0000:d7:00.0:   bridge window [mem 0xfbe00000-0xfbefffff]
[   17.566157] pci 0000:d9:00.0: [8086:0a53] type 00 class 0x010802
[   17.566171] pci 0000:d9:00.0: reg 0x10: [mem 0xfbd10000-0xfbd13fff 64bit]
[   17.566201] pci 0000:d9:00.0: reg 0x30: [mem 0xfbd00000-0xfbd0ffff pref]
[   17.568131] pci 0000:d7:01.0: PCI bridge to [bus d9]
[   17.627639] pci 0000:d7:01.0:   bridge window [mem 0xfbd00000-0xfbdfffff]
[   17.627683] pci 0000:da:00.0: [8086:0a53] type 00 class 0x010802
[   17.627697] pci 0000:da:00.0: reg 0x10: [mem 0xfbc10000-0xfbc13fff 64bit]
[   17.627726] pci 0000:da:00.0: reg 0x30: [mem 0xfbc00000-0xfbc0ffff pref]
[   17.629655] pci 0000:d7:02.0: PCI bridge to [bus da]
[   17.689167] pci 0000:d7:02.0:   bridge window [mem 0xfbc00000-0xfbcfffff]
[   17.689212] pci 0000:db:00.0: [8086:0a53] type 00 class 0x010802
[   17.689226] pci 0000:db:00.0: reg 0x10: [mem 0xfbb10000-0xfbb13fff 64bit]
[   17.689255] pci 0000:db:00.0: reg 0x30: [mem 0xfbb00000-0xfbb0ffff pref]
[   17.692185] pci 0000:d7:03.0: PCI bridge to [bus db]
[   17.751626] pci 0000:d7:03.0:   bridge window [mem 0xfbb00000-0xfbbfffff]
[   17.751647] pci_bus 0000:d7: on NUMA node 1
[   17.752003] ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 7 10 *11 12 14 15)
[   17.837763] ACPI: PCI Interrupt Link [LNKB] (IRQs 3 4 5 6 7 *10 11 12 14 15)
[   17.923461] ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 5 6 10 *11 12 14 15)
[   18.006980] ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 5 6 10 *11 12 14 15)
[   18.090506] ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 5 6 7 10 *11 12 14 15)
[   18.176204] ACPI: PCI Interrupt Link [LNKF] (IRQs 3 4 5 6 7 10 *11 12 14 15)
[   18.261904] ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 5 6 7 10 *11 12 14 15)
[   18.347608] ACPI: PCI Interrupt Link [LNKH] (IRQs 3 4 5 6 7 10 *11 12 14 15)
[   18.433362] ACPI: Enabled 4 GPEs in block 00 to 7F
[   18.491224] vgaarb: device added: PCI:0000:05:00.0,decodes=io+mem,owns=io+mem,locks=none
[   18.588315] vgaarb: loaded
[   18.620765] vgaarb: bridge control possible 0000:05:00.0
[   18.684507] SCSI subsystem initialized
[   18.729417] ACPI: bus type USB registered
[   18.777454] usbcore: registered new interface driver usbfs
[   18.843128] usbcore: registered new interface driver hub
[   18.906968] usbcore: registered new device driver usb
[   18.968001] PCI: Using ACPI for IRQ routing
[   19.021816] PCI: pci_cache_line_size set to 64 bytes
[   19.022131] e820: reserve RAM buffer [mem 0x00099800-0x0009ffff]
[   19.022133] e820: reserve RAM buffer [mem 0x69eb4000-0x6bffffff]
[   19.022134] e820: reserve RAM buffer [mem 0x6c911000-0x6fffffff]
[   19.022135] e820: reserve RAM buffer [mem 0x6f800000-0x6fffffff]
[   19.022338] NetLabel: Initializing
[   19.063175] NetLabel:  domain hash size = 128
[   19.115369] NetLabel:  protocols = UNLABELED CIPSOv4
[   19.174830] NetLabel:  unlabeled traffic allowed by default
[   19.241587] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
[   19.316635] hpet0: 8 comparators, 64-bit 24.000000 MHz counter
[   19.388707] Switched to clocksource hpet
[   19.442478] pnp: PnP ACPI init
[   19.479159] ACPI: bus type PNP registered
[   19.527677] pnp 00:00: Plug and Play ACPI device, IDs PNP0b00 (active)
[   19.527896] system 00:01: [io  0x0500-0x053f] could not be reserved
[   19.602935] system 00:01: [io  0x0400-0x047f] has been reserved
[   19.673798] system 00:01: [io  0x0540-0x057f] has been reserved
[   19.744660] system 00:01: [io  0x0600-0x061f] has been reserved
[   19.815523] system 00:01: [io  0x0880-0x0883] has been reserved
[   19.886387] system 00:01: [io  0x0800-0x081f] has been reserved
[   19.957258] system 00:01: [mem 0xfed1c000-0xfed3ffff] could not be reserved
[   20.040570] system 00:01: [mem 0xfed45000-0xfed8bfff] has been reserved
[   20.119734] system 00:01: [mem 0xff000000-0xffffffff] has been reserved
[   20.198899] system 00:01: [mem 0xfee00000-0xfeefffff] has been reserved
[   20.278064] system 00:01: [mem 0xfed12000-0xfed1200f] has been reserved
[   20.357227] system 00:01: [mem 0xfed12010-0xfed1201f] has been reserved
[   20.436393] system 00:01: [mem 0xfed1b000-0xfed1bfff] has been reserved
[   20.515556] system 00:01: Plug and Play ACPI device, IDs PNP0c02 (active)
[   20.515859] system 00:02: [io  0x0a00-0x0a0f] has been reserved
[   20.586732] system 00:02: [io  0x0a10-0x0a1f] has been reserved
[   20.657592] system 00:02: [io  0x0a20-0x0a2f] has been reserved
[   20.728458] system 00:02: [io  0x0a30-0x0a3f] has been reserved
[   20.799322] system 00:02: [io  0x0a40-0x0a4f] has been reserved
[   20.870189] system 00:02: Plug and Play ACPI device, IDs PNP0c02 (active)
[   20.870429] pnp 00:03: [dma 0 disabled]
[   20.870476] pnp 00:03: Plug and Play ACPI device, IDs PNP0501 (active)
[   20.870692] pnp 00:04: [dma 0 disabled]
[   20.870732] pnp 00:04: Plug and Play ACPI device, IDs PNP0501 (active)
[   20.870902] system 00:05: [mem 0xfd000000-0xfdabffff] has been reserved
[   20.950085] system 00:05: [mem 0xfdad0000-0xfdadffff] has been reserved
[   21.029244] system 00:05: [mem 0xfdb00000-0xfdffffff] has been reserved
[   21.108409] system 00:05: [mem 0xfe000000-0xfe00ffff] has been reserved
[   21.187575] system 00:05: [mem 0xfe011000-0xfe01ffff] has been reserved
[   21.266737] system 00:05: [mem 0xfe036000-0xfe03bfff] has been reserved
[   21.345903] system 00:05: [mem 0xfe03d000-0xfe3fffff] has been reserved
[   21.425064] system 00:05: [mem 0xfe410000-0xfe7fffff] has been reserved
[   21.504231] system 00:05: Plug and Play ACPI device, IDs PNP0c02 (active)
[   21.504467] system 00:06: [io  0x0f00-0x0ffe] has been reserved
[   21.575405] system 00:06: Plug and Play ACPI device, IDs PNP0c02 (active)
[   21.576162] pnp: PnP ACPI: found 7 devices
[   21.625303] ACPI: bus type PNP unregistered
[   21.682006] pci 0000:00:1c.0: bridge window [io  0x1000-0x0fff] to [bus 01] add_size 1000
[   21.682009] pci 0000:00:1c.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
[   21.682011] pci 0000:00:1c.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
[   21.682049] pci 0000:00:1c.0: res[14]=[mem 0x00100000-0x000fffff] res_to_dev_res add_size 200000 min_align 100000
[   21.682051] pci 0000:00:1c.0: res[14]=[mem 0x00100000-0x002fffff] res_to_dev_res add_size 200000 min_align 100000
[   21.682052] pci 0000:00:1c.0: res[15]=[mem 0x00100000-0x000fffff 64bit pref] res_to_dev_res add_size 200000 min_align 100000
[   21.682054] pci 0000:00:1c.0: res[15]=[mem 0x00100000-0x002fffff 64bit pref] res_to_dev_res add_size 200000 min_align 100000
[   21.682056] pci 0000:00:1c.0: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[   21.682057] pci 0000:00:1c.0: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[   21.682062] pci 0000:00:1c.0: BAR 14: assigned [mem 0x90000000-0x901fffff]
[   21.764347] pci 0000:00:1c.0: BAR 15: assigned [mem 0x30000000000-0x300001fffff 64bit pref]
[   21.864438] pci 0000:00:1c.0: BAR 13: assigned [io  0x1000-0x1fff]
[   21.938442] pci 0000:00:1f.4: BAR 0: assigned [mem 0x30000200000-0x300002000ff 64bit]
[   22.032316] pci 0000:00:1c.0: PCI bridge to [bus 01]
[   22.091785] pci 0000:00:1c.0:   bridge window [io  0x1000-0x1fff]
[   22.164733] pci 0000:00:1c.0:   bridge window [mem 0x90000000-0x901fffff]
[   22.245972] pci 0000:00:1c.0:   bridge window [mem 0x30000000000-0x300001fffff 64bit pref]
[   22.345025] pci 0000:00:1c.2: PCI bridge to [bus 02]
[   22.404505] pci 0000:00:1c.2:   bridge window [mem 0x9d300000-0x9d4fffff]
[   22.485751] pci 0000:00:1c.3: PCI bridge to [bus 03]
[   22.545194] pci 0000:00:1c.3:   bridge window [mem 0x9d100000-0x9d2fffff]
[   22.626444] pci 0000:04:00.0: PCI bridge to [bus 05]
[   22.685883] pci 0000:04:00.0:   bridge window [io  0x2000-0x2fff]
[   22.763776] pci 0000:04:00.0:   bridge window [mem 0x9c000000-0x9d0fffff]
[   22.845058] pci 0000:00:1c.5: PCI bridge to [bus 04-05]
[   22.907609] pci 0000:00:1c.5:   bridge window [io  0x2000-0x2fff]
[   22.980550] pci 0000:00:1c.5:   bridge window [mem 0x9c000000-0x9d0fffff]
[   23.061795] pci_bus 0000:00: resource 4 [io  0x0000-0x03af window]
[   23.061797] pci_bus 0000:00: resource 5 [io  0x03e0-0x0cf7 window]
[   23.061798] pci_bus 0000:00: resource 6 [io  0x03b0-0x03bb window]
[   23.061799] pci_bus 0000:00: resource 7 [io  0x03c0-0x03df window]
[   23.061801] pci_bus 0000:00: resource 8 [io  0x1000-0x3fff window]
[   23.061802] pci_bus 0000:00: resource 9 [mem 0x000a0000-0x000bffff window]
[   23.061804] pci_bus 0000:00: resource 10 [mem 0x000c4000-0x000c7fff window]
[   23.061805] pci_bus 0000:00: resource 11 [mem 0xfe010000-0xfe010fff window]
[   23.061806] pci_bus 0000:00: resource 12 [mem 0x90000000-0x9d7fffff window]
[   23.061808] pci_bus 0000:00: resource 13 [mem 0x30000000000-0x30fffffffff window]
[   23.061810] pci_bus 0000:01: resource 0 [io  0x1000-0x1fff]
[   23.061811] pci_bus 0000:01: resource 1 [mem 0x90000000-0x901fffff]
[   23.061812] pci_bus 0000:01: resource 2 [mem 0x30000000000-0x300001fffff 64bit pref]
[   23.061814] pci_bus 0000:02: resource 1 [mem 0x9d300000-0x9d4fffff]
[   23.061815] pci_bus 0000:03: resource 1 [mem 0x9d100000-0x9d2fffff]
[   23.061817] pci_bus 0000:04: resource 0 [io  0x2000-0x2fff]
[   23.061818] pci_bus 0000:04: resource 1 [mem 0x9c000000-0x9d0fffff]
[   23.061820] pci_bus 0000:05: resource 0 [io  0x2000-0x2fff]
[   23.061821] pci_bus 0000:05: resource 1 [mem 0x9c000000-0x9d0fffff]
[   23.061836] pci 0000:17:01.0: PCI bridge to [bus 18-19]
[   23.124455] pci 0000:17:01.0:   bridge window [io  0x5000-0x5fff]
[   23.197396] pci 0000:17:01.0:   bridge window [mem 0xaae00000-0xaaefffff]
[   23.278639] pci 0000:17:01.0:   bridge window [mem 0x31ffff00000-0x31fffffffff 64bit pref]
[   23.377692] pci_bus 0000:17: resource 4 [io  0x4000-0x5fff window]
[   23.377693] pci_bus 0000:17: resource 5 [mem 0x9d800000-0xaaffffff window]
[   23.377695] pci_bus 0000:17: resource 6 [mem 0x31000000000-0x31fffffffff window]
[   23.377696] pci_bus 0000:18: resource 0 [io  0x5000-0x5fff]
[   23.377698] pci_bus 0000:18: resource 1 [mem 0xaae00000-0xaaefffff]
[   23.377699] pci_bus 0000:18: resource 2 [mem 0x31ffff00000-0x31fffffffff 64bit pref]
[   23.377730] pci 0000:3c:00.0: PCI bridge to [bus 3d]
[   23.437171] pci 0000:3c:00.0:   bridge window [mem 0xb8500000-0xb85fffff]
[   23.518421] pci 0000:3c:03.0: PCI bridge to [bus 3e-3f]
[   23.580974] pci 0000:3c:03.0:   bridge window [mem 0xb8400000-0xb84fffff]
[   23.662220] pci 0000:3c:03.0:   bridge window [mem 0x32ffd000000-0x32fff9fffff 64bit pref]
[   23.761273] pci 0000:3b:00.0: PCI bridge to [bus 3c-3f]
[   23.823862] pci 0000:3b:00.0:   bridge window [mem 0xb8400000-0xb85fffff]
[   23.905107] pci 0000:3b:00.0:   bridge window [mem 0x32ffd000000-0x32fff9fffff 64bit pref]
[   24.004159] pci 0000:3a:00.0: PCI bridge to [bus 3b-3f]
[   24.066752] pci 0000:3a:00.0:   bridge window [mem 0xb8300000-0xb86fffff]
[   24.147995] pci 0000:3a:00.0:   bridge window [mem 0x32ffd000000-0x32fff9fffff 64bit pref]
[   24.247048] pci_bus 0000:3a: resource 4 [io  0x6000-0x7fff window]
[   24.247049] pci_bus 0000:3a: resource 5 [mem 0xab000000-0xb87fffff window]
[   24.247051] pci_bus 0000:3a: resource 6 [mem 0x32000000000-0x32fffffffff window]
[   24.247052] pci_bus 0000:3b: resource 1 [mem 0xb8300000-0xb86fffff]
[   24.247054] pci_bus 0000:3b: resource 2 [mem 0x32ffd000000-0x32fff9fffff 64bit pref]
[   24.247055] pci_bus 0000:3c: resource 1 [mem 0xb8400000-0xb85fffff]
[   24.247057] pci_bus 0000:3c: resource 2 [mem 0x32ffd000000-0x32fff9fffff 64bit pref]
[   24.247058] pci_bus 0000:3d: resource 1 [mem 0xb8500000-0xb85fffff]
[   24.247060] pci_bus 0000:3e: resource 1 [mem 0xb8400000-0xb84fffff]
[   24.247061] pci_bus 0000:3e: resource 2 [mem 0x32ffd000000-0x32fff9fffff 64bit pref]
[   24.247067] pci 0000:5d:00.0: bridge window [io  0x1000-0x0fff] to [bus 5e] add_size 1000
[   24.247069] pci 0000:5d:00.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 5e] add_size 200000 add_align 100000
[   24.247071] pci 0000:5d:00.0: bridge window [mem 0x00100000-0x000fffff] to [bus 5e] add_size 200000 add_align 100000
[   24.247081] pci 0000:5d:00.0: res[14]=[mem 0x00100000-0x000fffff] res_to_dev_res add_size 200000 min_align 100000
[   24.247082] pci 0000:5d:00.0: res[14]=[mem 0x00100000-0x002fffff] res_to_dev_res add_size 200000 min_align 100000
[   24.247084] pci 0000:5d:00.0: res[15]=[mem 0x00100000-0x000fffff 64bit pref] res_to_dev_res add_size 200000 min_align 100000
[   24.247085] pci 0000:5d:00.0: res[15]=[mem 0x00100000-0x002fffff 64bit pref] res_to_dev_res add_size 200000 min_align 100000
[   24.247087] pci 0000:5d:00.0: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[   24.247088] pci 0000:5d:00.0: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[   24.247091] pci 0000:5d:00.0: BAR 14: assigned [mem 0xb8800000-0xb89fffff]
[   24.329357] pci 0000:5d:00.0: BAR 15: assigned [mem 0x33000000000-0x330001fffff 64bit pref]
[   24.429448] pci 0000:5d:00.0: BAR 13: assigned [io  0x8000-0x8fff]
[   24.503454] pci 0000:5d:00.0: PCI bridge to [bus 5e]
[   24.562902] pci 0000:5d:00.0:   bridge window [io  0x8000-0x8fff]
[   24.635845] pci 0000:5d:00.0:   bridge window [mem 0xb8800000-0xb89fffff]
[   24.717086] pci 0000:5d:00.0:   bridge window [mem 0x33000000000-0x330001fffff 64bit pref]
[   24.816140] pci 0000:5d:02.0: PCI bridge to [bus 5f-60]
[   24.878731] pci 0000:5d:02.0:   bridge window [io  0x9000-0x9fff]
[   24.951676] pci 0000:5d:02.0:   bridge window [mem 0xc5e00000-0xc5efffff]
[   25.032915] pci 0000:5d:02.0:   bridge window [mem 0x33ffff00000-0x33fffffffff 64bit pref]
[   25.131968] pci_bus 0000:5d: resource 4 [io  0x8000-0x9fff window]
[   25.131969] pci_bus 0000:5d: resource 5 [mem 0xb8800000-0xc5ffffff window]
[   25.131971] pci_bus 0000:5d: resource 6 [mem 0x33000000000-0x33fffffffff window]
[   25.131972] pci_bus 0000:5e: resource 0 [io  0x8000-0x8fff]
[   25.131974] pci_bus 0000:5e: resource 1 [mem 0xb8800000-0xb89fffff]
[   25.131975] pci_bus 0000:5e: resource 2 [mem 0x33000000000-0x330001fffff 64bit pref]
[   25.131976] pci_bus 0000:5f: resource 0 [io  0x9000-0x9fff]
[   25.131978] pci_bus 0000:5f: resource 1 [mem 0xc5e00000-0xc5efffff]
[   25.131979] pci_bus 0000:5f: resource 2 [mem 0x33ffff00000-0x33fffffffff 64bit pref]
[   25.131984] pci_bus 0000:80: resource 4 [io  0xa000-0xbfff window]
[   25.131986] pci_bus 0000:80: resource 5 [mem 0xc6000000-0xd37fffff window]
[   25.131987] pci_bus 0000:80: resource 6 [mem 0x34000000000-0x34fffffffff window]
[   25.131996] pci_bus 0000:85: resource 4 [io  0xc000-0xdfff window]
[   25.131998] pci_bus 0000:85: resource 5 [mem 0xd3800000-0xe0ffffff window]
[   25.131999] pci_bus 0000:85: resource 6 [mem 0x35000000000-0x35fffffffff window]
[   25.132008] pci_bus 0000:ae: resource 4 [io  0xe000-0xefff window]
[   25.132010] pci_bus 0000:ae: resource 5 [mem 0xe1000000-0xee7fffff window]
[   25.132011] pci_bus 0000:ae: resource 6 [mem 0x36000000000-0x36fffffffff window]
[   25.132031] pci 0000:d7:00.0: PCI bridge to [bus d8]
[   25.191550] pci 0000:d7:00.0:   bridge window [mem 0xfbe00000-0xfbefffff]
[   25.272798] pci 0000:d7:01.0: PCI bridge to [bus d9]
[   25.332240] pci 0000:d7:01.0:   bridge window [mem 0xfbd00000-0xfbdfffff]
[   25.413488] pci 0000:d7:02.0: PCI bridge to [bus da]
[   25.472933] pci 0000:d7:02.0:   bridge window [mem 0xfbc00000-0xfbcfffff]
[   25.554178] pci 0000:d7:03.0: PCI bridge to [bus db]
[   25.613622] pci 0000:d7:03.0:   bridge window [mem 0xfbb00000-0xfbbfffff]
[   25.694870] pci_bus 0000:d7: resource 4 [io  0xf000-0xffff window]
[   25.694872] pci_bus 0000:d7: resource 5 [mem 0xee800000-0xfbffffff window]
[   25.694873] pci_bus 0000:d7: resource 6 [mem 0x37000000000-0x37fffffffff window]
[   25.694875] pci_bus 0000:d8: resource 1 [mem 0xfbe00000-0xfbefffff]
[   25.694876] pci_bus 0000:d9: resource 1 [mem 0xfbd00000-0xfbdfffff]
[   25.694878] pci_bus 0000:da: resource 1 [mem 0xfbc00000-0xfbcfffff]
[   25.694879] pci_bus 0000:db: resource 1 [mem 0xfbb00000-0xfbbfffff]
[   25.695107] NET: Registered protocol family 2
[   25.748820] TCP established hash table entries: 524288 (order: 10, 4194304 bytes)
[   25.839206] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
[   25.919623] TCP: Hash tables configured (established 524288 bind 65536)
[   25.998903] TCP: reno registered
[   26.037776] UDP hash table entries: 32768 (order: 8, 1048576 bytes)
[   26.113011] UDP-Lite hash table entries: 32768 (order: 8, 1048576 bytes)
[   26.194111] NET: Registered protocol family 1
[   26.246724] pci 0000:05:00.0: Boot video device
[   26.246776] PCI: CLS mismatch (64 != 32), using 64 bytes
[   26.246923] Unpacking initramfs...
[   26.531366] Freeing initrd memory: 14124k freed
[   26.591174] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[   26.668293] software IO TLB [mem 0x65eb4000-0x69eb4000] (64MB) mapped at [ffff880065eb4000-ffff880069eb3fff]
[   26.786115] Intel CQM monitoring enabled
[   26.833146] Intel MBM enabled
[   26.871613] sha1_ssse3: Using AVX2 optimized SHA-1 implementation
[   26.944670] sha256_ssse3: Using AVX2 optimized SHA-256 implementation
[   27.023615] futex hash table entries: 65536 (order: 10, 4194304 bytes)
[   27.102289] Initialise system trusted keyring
[   27.154574] audit: initializing netlink socket (disabled)
[   27.219230] type=2000 audit(1507109560.782:1): initialized
[   27.310390] HugeTLB registered 1 GB page size, pre-allocated 0 pages
[   27.386478] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[   27.463446] zpool: loaded
[   27.494896] zbud: loaded
[   27.525944] VFS: Disk quotas dquot_6.5.2
[   27.573351] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[   27.651628] msgmni has been set to 32768
[   27.698893] Key type big_key registered
[   27.744937] SELinux:  Registering netfilter hooks
[   27.747019] NET: Registered protocol family 38
[   27.800247] Key type asymmetric registered
[   27.849317] Asymmetric key parser 'x509' registered
[   27.907749] tsc: Refined TSC clocksource calibration: 2194.843 MHz
[   27.981811] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
[   28.070648] Switched to clocksource tsc
[   28.070865] io scheduler noop registered
[   28.070868] io scheduler deadline registered (default)
[   28.070969] io scheduler cfq registered
[   28.071987] pcieport 0000:00:1c.0: irq 24 for MSI/MSI-X
[   28.072330] pcieport 0000:00:1c.2: irq 25 for MSI/MSI-X
[   28.072648] pcieport 0000:00:1c.3: irq 26 for MSI/MSI-X
[   28.072962] pcieport 0000:00:1c.5: irq 27 for MSI/MSI-X
[   28.073323] pcieport 0000:17:01.0: irq 29 for MSI/MSI-X
[   28.073581] pcieport 0000:3a:00.0: irq 31 for MSI/MSI-X
[   28.074107] pcieport 0000:5d:00.0: irq 34 for MSI/MSI-X
[   28.074287] pcieport 0000:5d:02.0: irq 35 for MSI/MSI-X
[   28.074623] pcieport 0000:d7:00.0: irq 37 for MSI/MSI-X
[   28.074813] pcieport 0000:d7:01.0: irq 38 for MSI/MSI-X
[   28.074986] pcieport 0000:d7:02.0: irq 39 for MSI/MSI-X
[   28.075161] pcieport 0000:d7:03.0: irq 40 for MSI/MSI-X
[   28.075259] pcieport 0000:00:1c.0: Signaling PME through PCIe PME interrupt
[   28.075264] pcie_pme 0000:00:1c.0:pcie01: service driver pcie_pme loaded
[   28.075286] pcieport 0000:00:1c.2: Signaling PME through PCIe PME interrupt
[   28.075288] pci 0000:02:00.0: Signaling PME through PCIe PME interrupt
[   28.075292] pcie_pme 0000:00:1c.2:pcie01: service driver pcie_pme loaded
[   28.075313] pcieport 0000:00:1c.3: Signaling PME through PCIe PME interrupt
[   28.075314] pci 0000:03:00.0: Signaling PME through PCIe PME interrupt
[   28.075319] pcie_pme 0000:00:1c.3:pcie01: service driver pcie_pme loaded
[   28.075340] pcieport 0000:00:1c.5: Signaling PME through PCIe PME interrupt
[   28.075341] pci 0000:04:00.0: Signaling PME through PCIe PME interrupt
[   28.075342] pci 0000:05:00.0: Signaling PME through PCIe PME interrupt
[   28.075347] pcie_pme 0000:00:1c.5:pcie01: service driver pcie_pme loaded
[   28.075365] pcieport 0000:17:01.0: Signaling PME through PCIe PME interrupt
[   28.075367] pci 0000:18:00.0: Signaling PME through PCIe PME interrupt
[   28.075367] pci 0000:18:00.1: Signaling PME through PCIe PME interrupt
[   28.075368] pci 0000:18:00.2: Signaling PME through PCIe PME interrupt
[   28.075369] pci 0000:18:00.3: Signaling PME through PCIe PME interrupt
[   28.075372] pcie_pme 0000:17:01.0:pcie01: service driver pcie_pme loaded
[   28.075387] pcieport 0000:3a:00.0: Signaling PME through PCIe PME interrupt
[   28.075389] pcieport 0000:3b:00.0: Signaling PME through PCIe PME interrupt
[   28.075390] pcieport 0000:3c:00.0: Signaling PME through PCIe PME interrupt
[   28.075391] pci 0000:3d:00.0: Signaling PME through PCIe PME interrupt
[   28.075392] pcieport 0000:3c:03.0: Signaling PME through PCIe PME interrupt
[   28.075393] pci 0000:3e:00.0: Signaling PME through PCIe PME interrupt
[   28.075394] pci 0000:3e:00.1: Signaling PME through PCIe PME interrupt
[   28.075397] pcie_pme 0000:3a:00.0:pcie01: service driver pcie_pme loaded
[   28.075412] pcieport 0000:5d:00.0: Signaling PME through PCIe PME interrupt
[   28.075414] pcie_pme 0000:5d:00.0:pcie01: service driver pcie_pme loaded
[   28.075429] pcieport 0000:5d:02.0: Signaling PME through PCIe PME interrupt
[   28.075430] pci 0000:5f:00.0: Signaling PME through PCIe PME interrupt
[   28.075431] pci 0000:5f:00.1: Signaling PME through PCIe PME interrupt
[   28.075432] pci 0000:5f:00.2: Signaling PME through PCIe PME interrupt
[   28.075433] pci 0000:5f:00.3: Signaling PME through PCIe PME interrupt
[   28.075435] pcie_pme 0000:5d:02.0:pcie01: service driver pcie_pme loaded
[   28.075449] pcieport 0000:d7:00.0: Signaling PME through PCIe PME interrupt
[   28.075450] pci 0000:d8:00.0: Signaling PME through PCIe PME interrupt
[   28.075453] pcie_pme 0000:d7:00.0:pcie01: service driver pcie_pme loaded
[   28.075465] pcieport 0000:d7:01.0: Signaling PME through PCIe PME interrupt
[   28.075466] pci 0000:d9:00.0: Signaling PME through PCIe PME interrupt
[   28.075468] pcie_pme 0000:d7:01.0:pcie01: service driver pcie_pme loaded
[   28.075484] pcieport 0000:d7:02.0: Signaling PME through PCIe PME interrupt
[   28.075485] pci 0000:da:00.0: Signaling PME through PCIe PME interrupt
[   28.075487] pcie_pme 0000:d7:02.0:pcie01: service driver pcie_pme loaded
[   28.075499] pcieport 0000:d7:03.0: Signaling PME through PCIe PME interrupt
[   28.075500] pci 0000:db:00.0: Signaling PME through PCIe PME interrupt
[   28.075502] pcie_pme 0000:d7:03.0:pcie01: service driver pcie_pme loaded
[   28.075537] ioapic: probe of 0000:00:05.4 failed with error -22
[   28.075565] ioapic: probe of 0000:17:05.4 failed with error -22
[   28.075587] ioapic: probe of 0000:3a:05.4 failed with error -22
[   28.075616] ioapic: probe of 0000:5d:05.4 failed with error -22
[   28.075626] ioapic: probe of 0000:80:05.4 failed with error -22
[   28.075632] ioapic: probe of 0000:85:05.4 failed with error -22
[   28.075639] ioapic: probe of 0000:ae:05.4 failed with error -22
[   28.075646] ioapic: probe of 0000:d7:05.4 failed with error -22
[   28.075652] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[   28.075666] pciehp 0000:00:1c.0:pcie04: Slot #0 AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise+ Interlock- NoCompl+ LLActRep+
[   28.075950] pciehp 0000:00:1c.0:pcie04: service driver pciehp loaded
[   28.075957] pciehp 0000:5d:00.0:pcie04: Slot #2 AttnBtn+ PwrCtrl+ MRL+ AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock- NoCompl- LLActRep+
[   28.076225] pciehp 0000:5d:00.0:pcie04: service driver pciehp loaded
[   28.076228] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[   32.011772] intel_idle: MWAIT substates: 0x2020
[   32.011774] intel_idle: v0.4.1 model 0x55
[   32.011775] intel_idle: lapic_timer_reliable_states 0xffffffff
[   32.012328] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
[   32.101030] ACPI: Power Button [PWRF]
[   32.144986] ACPI: Requesting acpi_cpufreq
[   32.194033] ERST: Error Record Serialization Table (ERST) support is initialized.
[   32.283742] pstore: Registered erst as persistent store backend
[   32.355139] GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
[   32.443908] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[   32.539870] 00:03: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[   32.627496] 00:04: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
[   32.694779] Non-volatile memory driver v1.3
[   32.744919] Linux agpgart interface v0.103
[   32.794225] crash memory driver: version 1.1
[   32.845629] rdac: device handler registered
[   32.895790] hp_sw: device handler registered
[   32.947018] emc: device handler registered
[   32.996088] alua: device handler registered
[   33.046225] libphy: Fixed MDIO Bus: probed
[   33.095304] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[   33.173416] ehci-pci: EHCI PCI platform driver
[   33.226650] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[   33.300615] ohci-pci: OHCI PCI platform driver
[   33.353854] uhci_hcd: USB Universal Host Controller Interface driver
[   33.430095] xhci_hcd 0000:00:14.0: xHCI Host Controller
[   33.492725] xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
[   33.583553] xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x100 quirks 0x00009810
[   33.684671] xhci_hcd 0000:00:14.0: cache line size of 64 is not supported
[   33.684750] xhci_hcd 0000:00:14.0: irq 41 for MSI/MSI-X
[   33.684805] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
[   33.766051] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[   33.852643] usb usb1: Product: xHCI Host Controller
[   33.911091] usb usb1: Manufacturer: Linux 3.10.0-514.21.1.el7.x86_64 xhci-hcd
[   33.996488] usb usb1: SerialNumber: 0000:00:14.0
[   34.051859] hub 1-0:1.0: USB hub found
[   34.096825] hub 1-0:1.0: 16 ports detected
[   34.152885] xhci_hcd 0000:00:14.0: xHCI Host Controller
[   34.215541] xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
[   34.304240] usb usb2: New USB device found, idVendor=1d6b, idProduct=0003
[   34.385479] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[   34.472057] usb usb2: Product: xHCI Host Controller
[   34.530504] usb usb2: Manufacturer: Linux 3.10.0-514.21.1.el7.x86_64 xhci-hcd
[   34.615904] usb usb2: SerialNumber: 0000:00:14.0
[   34.676065] hub 2-0:1.0: USB hub found
[   34.721009] hub 2-0:1.0: 10 ports detected
[   34.773497] usb: port power management may be unreliable
[   34.838073] usbcore: registered new interface driver usbserial
[   34.907986] usbcore: registered new interface driver usbserial_generic
[   34.986107] usbserial: USB Serial support registered for generic
[   34.990116] usb 1-2: new high-speed USB device number 2 using xhci_hcd
[   35.136165] i8042: PNP: No PS/2 controller found. Probing ports directly.
[   35.154663] usb 1-2: New USB device found, idVendor=046b, idProduct=ff01
[   35.154664] usb 1-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[   35.154664] usb 1-2: Product: Virtual Hub
[   35.154665] usb 1-2: Manufacturer: American Megatrends Inc.
[   35.154666] usb 1-2: SerialNumber: serial
[   35.155087] hub 1-2:1.0: USB hub found
[   35.155223] hub 1-2:1.0: 5 ports detected
[   35.427087] usb 1-2.4: new low-speed USB device number 3 using xhci_hcd
[   35.513373] usb 1-2.4: New USB device found, idVendor=046b, idProduct=ff10
[   35.513374] usb 1-2.4: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[   35.513375] usb 1-2.4: Product: Virtual Keyboard and Mouse
[   35.513375] usb 1-2.4: Manufacturer: American Megatrends Inc.
[   37.091325] i8042: No controller found
[   37.136329] mousedev: PS/2 mouse device common for all mice
[   37.203268] rtc_cmos 00:00: RTC can wake from S4
[   37.258952] rtc_cmos 00:00: rtc core: registered rtc_cmos as rtc0
[   37.332013] rtc_cmos 00:00: alarms up to one month, y3k, 114 bytes nvram, hpet irqs
[   37.423920] Intel P-state driver initializing.
[   37.478372] intel_pstate: HWP enabled
[   37.522606] cpuidle: using governor menu
[   37.569876] hidraw: raw HID events driver (C) Jiri Kosina
[   37.636330] input: American Megatrends Inc. Virtual Keyboard and Mouse as /devices/pci0000:00/0000:00:14.0/usb1/1-2/1-2.4/1-2.4:1.0/input/input1
[   37.842497] hid-generic 0003:046B:FF10.0001: input,hidraw0: USB HID v1.10 Keyboard [American Megatrends Inc. Virtual Keyboard and Mouse] on usb-0000:00:14.0-2.4/input0
[   38.023058] input: American Megatrends Inc. Virtual Keyboard and Mouse as /devices/pci0000:00/0000:00:14.0/usb1/1-2/1-2.4/1-2.4:1.1/input/input2
[   38.178241] hid-generic 0003:046B:FF10.0002: input,hidraw1: USB HID v1.10 Mouse [American Megatrends Inc. Virtual Keyboard and Mouse] on usb-0000:00:14.0-2.4/input1
[   38.354327] usbcore: registered new interface driver usbhid
[   38.421085] usbhid: USB HID core driver
[   38.467201] drop_monitor: Initializing network drop monitor service
[   38.542543] TCP: cubic registered
[   38.582315] Initializing XFRM netlink socket
[   38.633888] NET: Registered protocol family 10
[   38.688145] NET: Registered protocol family 17
[   38.741920] microcode: CPU0 sig=0x50654, pf=0x1, revision=0x200001e
[   38.816920] microcode: CPU1 sig=0x50654, pf=0x1, revision=0x200001e
[   38.891928] microcode: CPU2 sig=0x50654, pf=0x1, revision=0x200001e
[   38.966945] microcode: CPU3 sig=0x50654, pf=0x1, revision=0x200001e
[   39.041957] microcode: CPU4 sig=0x50654, pf=0x1, revision=0x200001e
[   39.116974] microcode: CPU5 sig=0x50654, pf=0x1, revision=0x200001e
[   39.191987] microcode: CPU6 sig=0x50654, pf=0x1, revision=0x200001e
[   39.267001] microcode: CPU7 sig=0x50654, pf=0x1, revision=0x200001e
[   39.342017] microcode: CPU8 sig=0x50654, pf=0x1, revision=0x200001e
[   39.417029] microcode: CPU9 sig=0x50654, pf=0x1, revision=0x200001e
[   39.492046] microcode: CPU10 sig=0x50654, pf=0x1, revision=0x200001e
[   39.568107] microcode: CPU11 sig=0x50654, pf=0x1, revision=0x200001e
[   39.644157] microcode: CPU12 sig=0x50654, pf=0x1, revision=0x200001e
[   39.720209] microcode: CPU13 sig=0x50654, pf=0x1, revision=0x200001e
[   39.796259] microcode: CPU14 sig=0x50654, pf=0x1, revision=0x200001e
[   39.872313] microcode: CPU15 sig=0x50654, pf=0x1, revision=0x200001e
[   39.948363] microcode: CPU16 sig=0x50654, pf=0x1, revision=0x200001e
[   40.024417] microcode: CPU17 sig=0x50654, pf=0x1, revision=0x200001e
[   40.100470] microcode: CPU18 sig=0x50654, pf=0x1, revision=0x200001e
[   40.176520] microcode: CPU19 sig=0x50654, pf=0x1, revision=0x200001e
[   40.252571] microcode: CPU20 sig=0x50654, pf=0x1, revision=0x200001e
[   40.328617] microcode: CPU21 sig=0x50654, pf=0x1, revision=0x200001e
[   40.404667] microcode: CPU22 sig=0x50654, pf=0x1, revision=0x200001e
[   40.480719] microcode: CPU23 sig=0x50654, pf=0x1, revision=0x200001e
[   40.556771] microcode: CPU24 sig=0x50654, pf=0x1, revision=0x200001e
[   40.632820] microcode: CPU25 sig=0x50654, pf=0x1, revision=0x200001e
[   40.708875] microcode: CPU26 sig=0x50654, pf=0x1, revision=0x200001e
[   40.784927] microcode: CPU27 sig=0x50654, pf=0x1, revision=0x200001e
[   40.860976] microcode: CPU28 sig=0x50654, pf=0x1, revision=0x200001e
[   40.937028] microcode: CPU29 sig=0x50654, pf=0x1, revision=0x200001e
[   41.013084] microcode: CPU30 sig=0x50654, pf=0x1, revision=0x200001e
[   41.089141] microcode: CPU31 sig=0x50654, pf=0x1, revision=0x200001e
[   41.165193] microcode: CPU32 sig=0x50654, pf=0x1, revision=0x200001e
[   41.241244] microcode: CPU33 sig=0x50654, pf=0x1, revision=0x200001e
[   41.317295] microcode: CPU34 sig=0x50654, pf=0x1, revision=0x200001e
[   41.393348] microcode: CPU35 sig=0x50654, pf=0x1, revision=0x200001e
[   41.469401] microcode: CPU36 sig=0x50654, pf=0x1, revision=0x200001e
[   41.545451] microcode: CPU37 sig=0x50654, pf=0x1, revision=0x200001e
[   41.621505] microcode: CPU38 sig=0x50654, pf=0x1, revision=0x200001e
[   41.697557] microcode: CPU39 sig=0x50654, pf=0x1, revision=0x200001e
[   41.773623] microcode: Microcode Update Driver: v2.01 <tigran@aivazian.fsnet.co.uk>, Peter Oruba
[   41.878976] Loading compiled-in X.509 certificates
[   41.936401] Loaded X.509 cert 'CentOS Linux kpatch signing key: ea0413152cde1d98ebdca3fe6f0230904c9ef717'
[   42.051015] Loaded X.509 cert 'CentOS Linux Driver update signing key: 7f421ee0ab69461574bb358861dbe77762a4201b'
[   42.173326] Loaded X.509 cert 'CentOS Linux kernel signing key: 9afcdf990a3a1da31d308ca20e5414dadf4732c4'
[   42.287993] registered taskstats version 1
[   42.339906] Key type trusted registered
[   42.387910] Key type encrypted registered
[   42.438114] IMA: No TPM chip found, activating TPM-bypass!
[   42.505564] rtc_cmos 00:00: setting system clock to 2017-10-04 09:33:31 UTC (1507109611)
[   42.603439] Freeing unused kernel memory: 1680k freed
[   42.677227] systemd[1]: systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ -LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
[   42.893238] systemd[1]: Detected architecture x86-64.
[   42.953712] systemd[1]: Running in initial RAM disk.
[   43.104167] systemd[1]: Set hostname to <localhost.localdomain>.
[   43.291693] systemd[1]: Reached target Local File Systems.
[   43.357450] systemd[1]: Starting Local File Systems.
[   43.462262] systemd[1]: Reached target Timers.
[   43.515564] systemd[1]: Starting Timers.
[   43.562603] systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
[   43.657559] systemd[1]: Starting Dispatch Password Requests to Console Directory Watch.
[   43.796479] systemd[1]: Reached target Paths.
[   43.848726] systemd[1]: Starting Paths.
[   43.938145] systemd[1]: Reached target Swap.
[   43.989320] systemd[1]: Starting Swap.
[   44.081808] systemd[1]: Created slice Root Slice.
[   44.138217] systemd[1]: Starting Root Slice.
[   44.240457] systemd[1]: Listening on Journal Socket.
[   44.299977] systemd[1]: Starting Journal Socket.
[   44.411029] systemd[1]: Listening on udev Control Socket.
[   44.475739] systemd[1]: Starting udev Control Socket.
[   44.591602] systemd[1]: Listening on udev Kernel Socket.
[   44.655248] systemd[1]: Starting udev Kernel Socket.
[   44.764207] systemd[1]: Created slice System Slice.
[   44.822711] systemd[1]: Starting System Slice.
[   44.876341] systemd[1]: Starting dracut cmdline hook...
[   45.028588] systemd[1]: Reached target Slices.
[   45.081906] systemd[1]: Starting Slices.
[   45.174235] systemd[1]: Reached target Sockets.
[   45.228516] systemd[1]: Starting Sockets.
[   45.276855] systemd[1]: Starting Journal Service...
[   45.377550] systemd[1]: Starting Create list of required static device nodes for the current kernel...
[   45.575248] systemd[1]: Starting Apply Kernel Variables...
[   45.735935] systemd[1]: Started Journal Service.
[   46.439141] FUJITSU Extended Socket Network Device Driver - version 1.1 - Copyright (c) 2015 FUJITSU LIMITED
[   46.570769] dca service started, version 1.12.1
[   46.643096] pps_core: LinuxPPS API ver. 1 registered
[   46.716676] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[   46.834571] libata version 3.00 loaded.
[   46.835706] PTP clock support registered
[   46.836136] nvme 0000:d8:00.0: irq 44 for MSI/MSI-X
[   46.836204] nvme 0000:d9:00.0: irq 46 for MSI/MSI-X
[   46.836271] nvme 0000:da:00.0: irq 48 for MSI/MSI-X
[   46.836331] nvme 0000:db:00.0: irq 49 for MSI/MSI-X
[   46.897719] ahci 0000:00:11.5: version 3.0
[   46.897950] ahci 0000:00:11.5: irq 50 for MSI/MSI-X
[   46.899098] igb: Intel(R) Gigabit Ethernet Network Driver - version 5.3.0-k
[   46.909058] ahci 0000:00:11.5: AHCI 0001.0301 32 slots 6 ports 6 Gbps 0x3f impl SATA mode
[   46.909060] ahci 0000:00:11.5: flags: 64bit ncq sntf pm led clo only pio slum part ems deso sadm sds apst 
[   46.919481] scsi host0: ahci
[   46.919585] scsi host1: ahci
[   46.919679] scsi host2: ahci
[   46.919759] scsi host3: ahci
[   46.919842] scsi host4: ahci
[   46.919927] scsi host5: ahci
[   46.919961] ata1: SATA max UDMA/133 abar m524288@0x9d580000 port 0x9d580100 irq 50
[   46.919971] ata2: SATA max UDMA/133 abar m524288@0x9d580000 port 0x9d580180 irq 50
[   46.919974] ata3: SATA max UDMA/133 abar m524288@0x9d580000 port 0x9d580200 irq 50
[   46.919978] ata4: SATA max UDMA/133 abar m524288@0x9d580000 port 0x9d580280 irq 50
[   46.919981] ata5: SATA max UDMA/133 abar m524288@0x9d580000 port 0x9d580300 irq 50
[   46.919982] ata6: SATA max UDMA/133 abar m524288@0x9d580000 port 0x9d580380 irq 50
[   46.920244] ahci 0000:00:17.0: irq 51 for MSI/MSI-X
[   46.920360] ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode
[   46.920362] ahci 0000:00:17.0: flags: 64bit ncq sntf pm led clo only pio slum part ems deso sadm sds apst 
[   46.934874] scsi host6: ahci
[   46.934995] scsi host7: ahci
[   46.935097] scsi host8: ahci
[   46.935198] scsi host9: ahci
[   46.935296] scsi host10: ahci
[   46.935398] scsi host11: ahci
[   46.935500] scsi host12: ahci
[   46.935596] scsi host13: ahci
[   46.935636] ata7: SATA max UDMA/133 abar m524288@0x9d500000 port 0x9d500100 irq 51
[   46.935638] ata8: SATA max UDMA/133 abar m524288@0x9d500000 port 0x9d500180 irq 51
[   46.935641] ata9: SATA max UDMA/133 abar m524288@0x9d500000 port 0x9d500200 irq 51
[   46.935645] ata10: SATA max UDMA/133 abar m524288@0x9d500000 port 0x9d500280 irq 51
[   46.935648] ata11: SATA max UDMA/133 abar m524288@0x9d500000 port 0x9d500300 irq 51
[   46.935651] ata12: SATA max UDMA/133 abar m524288@0x9d500000 port 0x9d500380 irq 51
[   46.935655] ata13: SATA max UDMA/133 abar m524288@0x9d500000 port 0x9d500400 irq 51
[   46.935658] ata14: SATA max UDMA/133 abar m524288@0x9d500000 port 0x9d500480 irq 51
[   47.224279] ata5: SATA link down (SStatus 4 SControl 300)
[   47.224317] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[   47.224352] ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[   47.224387] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[   47.224421] ata6: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[   47.231252] ata1.00: ATA-9: INTEL SSDSC2BB240G6, G2010039, max UDMA/133
[   47.231255] ata1.00: 468862128 sectors, multi 1: LBA48 NCQ (depth 31/32)
[   47.231281] ata4.00: ATA-9: INTEL SSDSC2BB240G6, G2010039, max UDMA/133
[   47.231284] ata4.00: 468862128 sectors, multi 1: LBA48 NCQ (depth 31/32)
[   47.231331] ata6.00: READ LOG DMA EXT failed, trying unqueued
[   47.231366] ata6.00: failed to get NCQ Send/Recv Log Emask 0x1
[   47.231367] ata6.00: ATA-9: SAMSUNG MZ7WD120HCFV-00003, DXM9103Q, max UDMA/133
[   47.231369] ata6.00: 234441648 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
[   47.231408] ata3.00: ATA-8: ST31000524NS, SN12, max UDMA/133
[   47.231410] ata3.00: 1953525168 sectors, multi 16: LBA48 NCQ (depth 31/32)
[   47.231673] ata6.00: failed to get NCQ Send/Recv Log Emask 0x1
[   47.231723] ata1.00: configured for UDMA/133
[   47.231733] ata6.00: configured for UDMA/133
[   47.231780] ata4.00: configured for UDMA/133
[   47.232086] scsi 0:0:0:0: Direct-Access     ATA      INTEL SSDSC2BB24 0039 PQ: 0 ANSI: 5
[   47.233095] ata3.00: configured for UDMA/133
[   47.239245] ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[   47.239272] ata7: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[   47.239297] ata11: SATA link down (SStatus 4 SControl 300)
[   47.239328] ata13: SATA link down (SStatus 4 SControl 300)
[   47.239359] ata12: SATA link down (SStatus 4 SControl 300)
[   47.239396] ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[   47.239727] ata10.00: ATA-9: INTEL SSDSC2BB240G6, G2010039, max UDMA/133
[   47.239729] ata10.00: 468862128 sectors, multi 1: LBA48 NCQ (depth 31/32)
[   47.239734] ata7.00: ATA-9: INTEL SSDSC2BB240G6, G2010039, max UDMA/133
[   47.239735] ata7.00: 468862128 sectors, multi 1: LBA48 NCQ (depth 31/32)
[   47.239834] ata9.00: ATA-9: INTEL SSDSC2BB240G6, G2010039, max UDMA/133
[   47.239835] ata9.00: 468862128 sectors, multi 1: LBA48 NCQ (depth 31/32)
[   47.240187] ata10.00: configured for UDMA/133
[   47.240204] ata7.00: configured for UDMA/133
[   47.240343] ata9.00: configured for UDMA/133
[   47.266741] igb: Copyright (c) 2007-2014 Intel Corporation.
[   47.332035] ata8: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[   47.332038] ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[   47.333016] ata14: SATA link down (SStatus 4 SControl 300)
[   47.335301] i40e: Intel(R) Ethernet Connection XL710 Network Driver - version 1.5.10-k
[   47.335302] i40e: Copyright (c) 2013 - 2014 Intel Corporation.
[   47.369795] ata2.00: ATA-8: ST31000524NS, SN11, max UDMA/133
[   47.369797] ata2.00: 1953525168 sectors, multi 16: LBA48 NCQ (depth 31/32)
[   47.369800] ata8.00: ATA-9: INTEL SSDSC2BB240G6, G2010039, max UDMA/133
[   47.369802] ata8.00: 468862128 sectors, multi 1: LBA48 NCQ (depth 31/32)
[   47.387875] i40e 0000:3e:00.0: fw 3.1.52349 api 1.5 nvm 3.25 0x800009e7 0.0.0
[   47.523553] i40e 0000:3e:00.0: MAC address: a0:42:3f:37:35:99
[   47.529654] ata2.00: configured for UDMA/133
[   47.529657] ata8.00: configured for UDMA/133
[   47.539640] i40e 0000:3e:00.0: irq 53 for MSI/MSI-X
[   47.539726] i40e 0000:3e:00.0: irq 54 for MSI/MSI-X
[   47.539809] i40e 0000:3e:00.0: irq 55 for MSI/MSI-X
[   47.539891] i40e 0000:3e:00.0: irq 56 for MSI/MSI-X
[   47.539974] i40e 0000:3e:00.0: irq 57 for MSI/MSI-X
[   47.540058] i40e 0000:3e:00.0: irq 58 for MSI/MSI-X
[   47.540140] i40e 0000:3e:00.0: irq 59 for MSI/MSI-X
[   47.540221] i40e 0000:3e:00.0: irq 60 for MSI/MSI-X
[   47.540303] i40e 0000:3e:00.0: irq 61 for MSI/MSI-X
[   47.540382] i40e 0000:3e:00.0: irq 62 for MSI/MSI-X
[   47.540463] i40e 0000:3e:00.0: irq 63 for MSI/MSI-X
[   47.540550] i40e 0000:3e:00.0: irq 64 for MSI/MSI-X
[   47.540630] i40e 0000:3e:00.0: irq 65 for MSI/MSI-X
[   47.540711] i40e 0000:3e:00.0: irq 66 for MSI/MSI-X
[   47.540791] i40e 0000:3e:00.0: irq 67 for MSI/MSI-X
[   47.540871] i40e 0000:3e:00.0: irq 68 for MSI/MSI-X
[   47.540952] i40e 0000:3e:00.0: irq 69 for MSI/MSI-X
[   47.541038] i40e 0000:3e:00.0: irq 70 for MSI/MSI-X
[   47.541124] i40e 0000:3e:00.0: irq 71 for MSI/MSI-X
[   47.541210] i40e 0000:3e:00.0: irq 72 for MSI/MSI-X
[   47.541296] i40e 0000:3e:00.0: irq 73 for MSI/MSI-X
[   47.541382] i40e 0000:3e:00.0: irq 74 for MSI/MSI-X
[   47.541464] i40e 0000:3e:00.0: irq 75 for MSI/MSI-X
[   47.541552] i40e 0000:3e:00.0: irq 76 for MSI/MSI-X
[   47.541634] i40e 0000:3e:00.0: irq 77 for MSI/MSI-X
[   47.541716] i40e 0000:3e:00.0: irq 78 for MSI/MSI-X
[   47.541798] i40e 0000:3e:00.0: irq 79 for MSI/MSI-X
[   47.541881] i40e 0000:3e:00.0: irq 80 for MSI/MSI-X
[   47.541962] i40e 0000:3e:00.0: irq 81 for MSI/MSI-X
[   47.542045] i40e 0000:3e:00.0: irq 82 for MSI/MSI-X
[   47.542125] i40e 0000:3e:00.0: irq 83 for MSI/MSI-X
[   47.542207] i40e 0000:3e:00.0: irq 84 for MSI/MSI-X
[   47.542289] i40e 0000:3e:00.0: irq 85 for MSI/MSI-X
[   47.542375] i40e 0000:3e:00.0: irq 86 for MSI/MSI-X
[   47.542460] i40e 0000:3e:00.0: irq 87 for MSI/MSI-X
[   47.542549] i40e 0000:3e:00.0: irq 88 for MSI/MSI-X
[   47.542634] i40e 0000:3e:00.0: irq 89 for MSI/MSI-X
[   47.542721] i40e 0000:3e:00.0: irq 90 for MSI/MSI-X
[   47.542804] i40e 0000:3e:00.0: irq 91 for MSI/MSI-X
[   47.542886] i40e 0000:3e:00.0: irq 92 for MSI/MSI-X
[   47.542969] i40e 0000:3e:00.0: irq 93 for MSI/MSI-X
[   47.543050] i40e 0000:3e:00.0: irq 94 for MSI/MSI-X
[   47.543132] i40e 0000:3e:00.0: irq 95 for MSI/MSI-X
[   47.543214] i40e 0000:3e:00.0: irq 96 for MSI/MSI-X
[   47.543296] i40e 0000:3e:00.0: irq 97 for MSI/MSI-X
[   47.543380] i40e 0000:3e:00.0: irq 98 for MSI/MSI-X
[   47.543462] i40e 0000:3e:00.0: irq 99 for MSI/MSI-X
[   47.543548] i40e 0000:3e:00.0: irq 100 for MSI/MSI-X
[   47.543630] i40e 0000:3e:00.0: irq 101 for MSI/MSI-X
[   47.543716] i40e 0000:3e:00.0: irq 102 for MSI/MSI-X
[   47.543799] i40e 0000:3e:00.0: irq 103 for MSI/MSI-X
[   47.543882] i40e 0000:3e:00.0: irq 104 for MSI/MSI-X
[   47.543963] i40e 0000:3e:00.0: irq 105 for MSI/MSI-X
[   47.544047] i40e 0000:3e:00.0: irq 106 for MSI/MSI-X
[   47.544126] i40e 0000:3e:00.0: irq 107 for MSI/MSI-X
[   47.544205] i40e 0000:3e:00.0: irq 108 for MSI/MSI-X
[   47.544282] i40e 0000:3e:00.0: irq 109 for MSI/MSI-X
[   47.544359] i40e 0000:3e:00.0: irq 110 for MSI/MSI-X
[   47.544437] i40e 0000:3e:00.0: irq 111 for MSI/MSI-X
[   47.544518] i40e 0000:3e:00.0: irq 112 for MSI/MSI-X
[   47.544596] i40e 0000:3e:00.0: irq 113 for MSI/MSI-X
[   47.544676] i40e 0000:3e:00.0: irq 114 for MSI/MSI-X
[   47.544752] i40e 0000:3e:00.0: irq 115 for MSI/MSI-X
[   47.544830] i40e 0000:3e:00.0: irq 116 for MSI/MSI-X
[   47.544907] i40e 0000:3e:00.0: irq 117 for MSI/MSI-X
[   47.544993] i40e 0000:3e:00.0: irq 118 for MSI/MSI-X
[   47.545077] i40e 0000:3e:00.0: irq 119 for MSI/MSI-X
[   47.545160] i40e 0000:3e:00.0: irq 120 for MSI/MSI-X
[   47.545244] i40e 0000:3e:00.0: irq 121 for MSI/MSI-X
[   47.545330] i40e 0000:3e:00.0: irq 122 for MSI/MSI-X
[   47.545411] i40e 0000:3e:00.0: irq 123 for MSI/MSI-X
[   47.545492] i40e 0000:3e:00.0: irq 124 for MSI/MSI-X
[   47.545577] i40e 0000:3e:00.0: irq 125 for MSI/MSI-X
[   47.545659] i40e 0000:3e:00.0: irq 126 for MSI/MSI-X
[   47.545742] i40e 0000:3e:00.0: irq 127 for MSI/MSI-X
[   47.545824] i40e 0000:3e:00.0: irq 128 for MSI/MSI-X
[   47.545906] i40e 0000:3e:00.0: irq 129 for MSI/MSI-X
[   47.545991] i40e 0000:3e:00.0: irq 130 for MSI/MSI-X
[   47.546071] i40e 0000:3e:00.0: irq 131 for MSI/MSI-X
[   47.546150] i40e 0000:3e:00.0: irq 132 for MSI/MSI-X
[   47.546226] i40e 0000:3e:00.0: irq 133 for MSI/MSI-X
[   47.546666] i40e 0000:3e:00.0: irq 134 for MSI/MSI-X
[   47.546747] i40e 0000:3e:00.0: irq 135 for MSI/MSI-X
[   47.546828] i40e 0000:3e:00.0: irq 136 for MSI/MSI-X
[   47.546908] i40e 0000:3e:00.0: irq 137 for MSI/MSI-X
[   47.546990] i40e 0000:3e:00.0: irq 138 for MSI/MSI-X
[   47.547067] i40e 0000:3e:00.0: irq 139 for MSI/MSI-X
[   47.547144] i40e 0000:3e:00.0: irq 140 for MSI/MSI-X
[   47.547220] i40e 0000:3e:00.0: irq 141 for MSI/MSI-X
[   47.547296] i40e 0000:3e:00.0: irq 142 for MSI/MSI-X
[   47.547370] i40e 0000:3e:00.0: irq 143 for MSI/MSI-X
[   47.628660] i40e 0000:3e:00.0: Added LAN device PF0 bus=0x00 func=0x00
[   47.634695] i40e 0000:3e:00.0: Features: PF-id[0] VFs: 32 VSIs: 66 QP: 40 RX: 1BUF RSS FD_ATR FD_SB NTUPLE DCB VxLAN Geneve PTP VEPA
[   47.686904] i40e 0000:3e:00.1: fw 3.1.52349 api 1.5 nvm 3.25 0x800009e7 0.0.0
[   47.821551] i40e 0000:3e:00.1: MAC address: 00:00:00:00:03:15
[   47.837787] i40e 0000:3e:00.1: irq 144 for MSI/MSI-X
[   47.837880] i40e 0000:3e:00.1: irq 145 for MSI/MSI-X
[   47.837960] i40e 0000:3e:00.1: irq 146 for MSI/MSI-X
[   47.838041] i40e 0000:3e:00.1: irq 147 for MSI/MSI-X
[   47.838120] i40e 0000:3e:00.1: irq 148 for MSI/MSI-X
[   47.838199] i40e 0000:3e:00.1: irq 149 for MSI/MSI-X
[   47.838278] i40e 0000:3e:00.1: irq 150 for MSI/MSI-X
[   47.838355] i40e 0000:3e:00.1: irq 151 for MSI/MSI-X
[   47.838432] i40e 0000:3e:00.1: irq 152 for MSI/MSI-X
[   47.838508] i40e 0000:3e:00.1: irq 153 for MSI/MSI-X
[   47.838584] i40e 0000:3e:00.1: irq 154 for MSI/MSI-X
[   47.838661] i40e 0000:3e:00.1: irq 155 for MSI/MSI-X
[   47.838738] i40e 0000:3e:00.1: irq 156 for MSI/MSI-X
[   47.838828] i40e 0000:3e:00.1: irq 157 for MSI/MSI-X
[   47.838914] i40e 0000:3e:00.1: irq 158 for MSI/MSI-X
[   47.838999] i40e 0000:3e:00.1: irq 159 for MSI/MSI-X
[   47.839085] i40e 0000:3e:00.1: irq 160 for MSI/MSI-X
[   47.839171] i40e 0000:3e:00.1: irq 161 for MSI/MSI-X
[   47.839257] i40e 0000:3e:00.1: irq 162 for MSI/MSI-X
[   47.839342] i40e 0000:3e:00.1: irq 163 for MSI/MSI-X
[   47.839428] i40e 0000:3e:00.1: irq 164 for MSI/MSI-X
[   47.839513] i40e 0000:3e:00.1: irq 165 for MSI/MSI-X
[   47.839598] i40e 0000:3e:00.1: irq 166 for MSI/MSI-X
[   47.839683] i40e 0000:3e:00.1: irq 167 for MSI/MSI-X
[   47.839766] i40e 0000:3e:00.1: irq 168 for MSI/MSI-X
[   47.839854] i40e 0000:3e:00.1: irq 169 for MSI/MSI-X
[   47.839938] i40e 0000:3e:00.1: irq 170 for MSI/MSI-X
[   47.840023] i40e 0000:3e:00.1: irq 171 for MSI/MSI-X
[   47.840106] i40e 0000:3e:00.1: irq 172 for MSI/MSI-X
[   47.840194] i40e 0000:3e:00.1: irq 173 for MSI/MSI-X
[   47.840281] i40e 0000:3e:00.1: irq 174 for MSI/MSI-X
[   47.840369] i40e 0000:3e:00.1: irq 175 for MSI/MSI-X
[   47.840456] i40e 0000:3e:00.1: irq 176 for MSI/MSI-X
[   47.840543] i40e 0000:3e:00.1: irq 177 for MSI/MSI-X
[   47.840628] i40e 0000:3e:00.1: irq 178 for MSI/MSI-X
[   47.840715] i40e 0000:3e:00.1: irq 179 for MSI/MSI-X
[   47.840807] i40e 0000:3e:00.1: irq 180 for MSI/MSI-X
[   47.840893] i40e 0000:3e:00.1: irq 181 for MSI/MSI-X
[   47.840980] i40e 0000:3e:00.1: irq 182 for MSI/MSI-X
[   47.841067] i40e 0000:3e:00.1: irq 183 for MSI/MSI-X
[   47.841154] i40e 0000:3e:00.1: irq 184 for MSI/MSI-X
[   47.841240] i40e 0000:3e:00.1: irq 185 for MSI/MSI-X
[   47.841327] i40e 0000:3e:00.1: irq 186 for MSI/MSI-X
[   47.841414] i40e 0000:3e:00.1: irq 187 for MSI/MSI-X
[   47.841500] i40e 0000:3e:00.1: irq 188 for MSI/MSI-X
[   47.841582] i40e 0000:3e:00.1: irq 189 for MSI/MSI-X
[   47.841664] i40e 0000:3e:00.1: irq 190 for MSI/MSI-X
[   47.841744] i40e 0000:3e:00.1: irq 191 for MSI/MSI-X
[   47.841837] i40e 0000:3e:00.1: irq 192 for MSI/MSI-X
[   47.841912] i40e 0000:3e:00.1: irq 193 for MSI/MSI-X
[   47.841984] i40e 0000:3e:00.1: irq 194 for MSI/MSI-X
[   47.842057] i40e 0000:3e:00.1: irq 195 for MSI/MSI-X
[   47.842126] i40e 0000:3e:00.1: irq 196 for MSI/MSI-X
[   47.842192] i40e 0000:3e:00.1: irq 197 for MSI/MSI-X
[   47.842257] i40e 0000:3e:00.1: irq 198 for MSI/MSI-X
[   47.842319] i40e 0000:3e:00.1: irq 199 for MSI/MSI-X
[   47.842383] i40e 0000:3e:00.1: irq 200 for MSI/MSI-X
[   47.842447] i40e 0000:3e:00.1: irq 201 for MSI/MSI-X
[   47.842507] i40e 0000:3e:00.1: irq 202 for MSI/MSI-X
[   47.842569] i40e 0000:3e:00.1: irq 203 for MSI/MSI-X
[   47.842632] i40e 0000:3e:00.1: irq 204 for MSI/MSI-X
[   47.842719] i40e 0000:3e:00.1: irq 205 for MSI/MSI-X
[   47.842810] i40e 0000:3e:00.1: irq 206 for MSI/MSI-X
[   47.842894] i40e 0000:3e:00.1: irq 207 for MSI/MSI-X
[   47.842976] i40e 0000:3e:00.1: irq 208 for MSI/MSI-X
[   47.843069] i40e 0000:3e:00.1: irq 209 for MSI/MSI-X
[   47.843160] i40e 0000:3e:00.1: irq 210 for MSI/MSI-X
[   47.843251] i40e 0000:3e:00.1: irq 211 for MSI/MSI-X
[   47.843341] i40e 0000:3e:00.1: irq 212 for MSI/MSI-X
[   47.843430] i40e 0000:3e:00.1: irq 213 for MSI/MSI-X
[   47.843520] i40e 0000:3e:00.1: irq 214 for MSI/MSI-X
[   47.843608] i40e 0000:3e:00.1: irq 215 for MSI/MSI-X
[   47.843699] i40e 0000:3e:00.1: irq 216 for MSI/MSI-X
[   47.843789] i40e 0000:3e:00.1: irq 217 for MSI/MSI-X
[   47.843878] i40e 0000:3e:00.1: irq 218 for MSI/MSI-X
[   47.843968] i40e 0000:3e:00.1: irq 219 for MSI/MSI-X
[   47.844058] i40e 0000:3e:00.1: irq 220 for MSI/MSI-X
[   47.844149] i40e 0000:3e:00.1: irq 221 for MSI/MSI-X
[   47.844236] i40e 0000:3e:00.1: irq 222 for MSI/MSI-X
[   47.844323] i40e 0000:3e:00.1: irq 223 for MSI/MSI-X
[   47.844412] i40e 0000:3e:00.1: irq 224 for MSI/MSI-X
[   47.844499] i40e 0000:3e:00.1: irq 225 for MSI/MSI-X
[   47.844583] i40e 0000:3e:00.1: irq 226 for MSI/MSI-X
[   47.844669] i40e 0000:3e:00.1: irq 227 for MSI/MSI-X
[   47.844754] i40e 0000:3e:00.1: irq 228 for MSI/MSI-X
[   47.844844] i40e 0000:3e:00.1: irq 229 for MSI/MSI-X
[   47.844928] i40e 0000:3e:00.1: irq 230 for MSI/MSI-X
[   47.845013] i40e 0000:3e:00.1: irq 231 for MSI/MSI-X
[   47.845100] i40e 0000:3e:00.1: irq 232 for MSI/MSI-X
[   47.845184] i40e 0000:3e:00.1: irq 233 for MSI/MSI-X
[   47.845268] i40e 0000:3e:00.1: irq 234 for MSI/MSI-X
[   47.926547] i40e 0000:3e:00.1: Added LAN device PF1 bus=0x00 func=0x01
[   47.932594] i40e 0000:3e:00.1: Features: PF-id[1] VFs: 32 VSIs: 66 QP: 40 RX: 1BUF RSS FD_ATR FD_SB NTUPLE DCB VxLAN Geneve PTP VEPA
[   49.658861] nvme 0000:d9:00.0: irq 46 for MSI/MSI-X
[   49.658979] nvme 0000:d9:00.0: irq 235 for MSI/MSI-X
[   49.659093] nvme 0000:d9:00.0: irq 236 for MSI/MSI-X
[   49.659210] nvme 0000:d9:00.0: irq 237 for MSI/MSI-X
[   49.659326] nvme 0000:d9:00.0: irq 238 for MSI/MSI-X
[   49.659439] nvme 0000:d9:00.0: irq 239 for MSI/MSI-X
[   49.659583] nvme 0000:d9:00.0: irq 240 for MSI/MSI-X
[   49.659675] nvme 0000:d9:00.0: irq 241 for MSI/MSI-X
[   49.659766] nvme 0000:d9:00.0: irq 242 for MSI/MSI-X
[   49.659858] nvme 0000:d9:00.0: irq 243 for MSI/MSI-X
[   49.659949] nvme 0000:d9:00.0: irq 244 for MSI/MSI-X
[   49.660042] nvme 0000:d9:00.0: irq 245 for MSI/MSI-X
[   49.660134] nvme 0000:d9:00.0: irq 246 for MSI/MSI-X
[   49.660228] nvme 0000:d9:00.0: irq 247 for MSI/MSI-X
[   49.660322] nvme 0000:d9:00.0: irq 248 for MSI/MSI-X
[   49.660415] nvme 0000:d9:00.0: irq 249 for MSI/MSI-X
[   49.660506] nvme 0000:d9:00.0: irq 250 for MSI/MSI-X
[   49.660600] nvme 0000:d9:00.0: irq 251 for MSI/MSI-X
[   49.660693] nvme 0000:d9:00.0: irq 252 for MSI/MSI-X
[   49.660784] nvme 0000:d9:00.0: irq 253 for MSI/MSI-X
[   49.660870] nvme 0000:d9:00.0: irq 254 for MSI/MSI-X
[   49.660954] nvme 0000:d9:00.0: irq 255 for MSI/MSI-X
[   49.661046] nvme 0000:d9:00.0: irq 256 for MSI/MSI-X
[   49.661126] nvme 0000:d9:00.0: irq 257 for MSI/MSI-X
[   49.661205] nvme 0000:d9:00.0: irq 258 for MSI/MSI-X
[   49.661282] nvme 0000:d9:00.0: irq 259 for MSI/MSI-X
[   49.661360] nvme 0000:d9:00.0: irq 260 for MSI/MSI-X
[   49.661440] nvme 0000:d9:00.0: irq 261 for MSI/MSI-X
[   49.661518] nvme 0000:d9:00.0: irq 262 for MSI/MSI-X
[   49.661598] nvme 0000:d9:00.0: irq 263 for MSI/MSI-X
[   49.661672] nvme 0000:d9:00.0: irq 264 for MSI/MSI-X
[   49.860311] nvme 0000:db:00.0: irq 49 for MSI/MSI-X
[   49.860413] nvme 0000:db:00.0: irq 265 for MSI/MSI-X
[   49.860508] nvme 0000:db:00.0: irq 266 for MSI/MSI-X
[   49.860603] nvme 0000:db:00.0: irq 267 for MSI/MSI-X
[   49.860698] nvme 0000:db:00.0: irq 268 for MSI/MSI-X
[   49.860793] nvme 0000:db:00.0: irq 269 for MSI/MSI-X
[   49.860890] nvme 0000:db:00.0: irq 270 for MSI/MSI-X
[   49.860988] nvme 0000:db:00.0: irq 271 for MSI/MSI-X
[   49.861091] nvme 0000:db:00.0: irq 272 for MSI/MSI-X
[   49.861187] nvme 0000:db:00.0: irq 273 for MSI/MSI-X
[   49.861273] nvme 0000:db:00.0: irq 274 for MSI/MSI-X
[   49.861370] nvme 0000:db:00.0: irq 275 for MSI/MSI-X
[   49.861465] nvme 0000:db:00.0: irq 276 for MSI/MSI-X
[   49.861561] nvme 0000:db:00.0: irq 277 for MSI/MSI-X
[   49.861657] nvme 0000:db:00.0: irq 278 for MSI/MSI-X
[   49.861753] nvme 0000:db:00.0: irq 279 for MSI/MSI-X
[   49.861849] nvme 0000:db:00.0: irq 280 for MSI/MSI-X
[   49.861946] nvme 0000:db:00.0: irq 281 for MSI/MSI-X
[   49.862042] nvme 0000:db:00.0: irq 282 for MSI/MSI-X
[   49.862151] nvme 0000:db:00.0: irq 283 for MSI/MSI-X
[   49.862248] nvme 0000:db:00.0: irq 284 for MSI/MSI-X
[   49.862345] nvme 0000:db:00.0: irq 285 for MSI/MSI-X
[   49.862442] nvme 0000:db:00.0: irq 286 for MSI/MSI-X
[   49.862538] nvme 0000:db:00.0: irq 287 for MSI/MSI-X
[   49.862634] nvme 0000:db:00.0: irq 288 for MSI/MSI-X
[   49.862730] nvme 0000:db:00.0: irq 289 for MSI/MSI-X
[   49.862825] nvme 0000:db:00.0: irq 290 for MSI/MSI-X
[   49.862923] nvme 0000:db:00.0: irq 291 for MSI/MSI-X
[   49.863019] nvme 0000:db:00.0: irq 292 for MSI/MSI-X
[   49.863118] nvme 0000:db:00.0: irq 293 for MSI/MSI-X
[   49.863214] nvme 0000:db:00.0: irq 294 for MSI/MSI-X
[   49.961043] nvme 0000:d8:00.0: irq 44 for MSI/MSI-X
[   49.961133] nvme 0000:d8:00.0: irq 295 for MSI/MSI-X
[   49.961231] nvme 0000:d8:00.0: irq 296 for MSI/MSI-X
[   49.961327] nvme 0000:d8:00.0: irq 297 for MSI/MSI-X
[   49.961424] nvme 0000:d8:00.0: irq 298 for MSI/MSI-X
[   49.961520] nvme 0000:d8:00.0: irq 299 for MSI/MSI-X
[   49.961616] nvme 0000:d8:00.0: irq 300 for MSI/MSI-X
[   49.961715] nvme 0000:d8:00.0: irq 301 for MSI/MSI-X
[   49.961811] nvme 0000:d8:00.0: irq 302 for MSI/MSI-X
[   49.961913] nvme 0000:d8:00.0: irq 303 for MSI/MSI-X
[   49.962010] nvme 0000:d8:00.0: irq 304 for MSI/MSI-X
[   49.962107] nvme 0000:d8:00.0: irq 305 for MSI/MSI-X
[   49.962203] nvme 0000:d8:00.0: irq 306 for MSI/MSI-X
[   49.962301] nvme 0000:d8:00.0: irq 307 for MSI/MSI-X
[   49.962393] nvme 0000:d8:00.0: irq 308 for MSI/MSI-X
[   49.962492] nvme 0000:d8:00.0: irq 309 for MSI/MSI-X
[   49.962590] nvme 0000:d8:00.0: irq 310 for MSI/MSI-X
[   49.962688] nvme 0000:d8:00.0: irq 311 for MSI/MSI-X
[   49.962786] nvme 0000:d8:00.0: irq 312 for MSI/MSI-X
[   49.962886] nvme 0000:d8:00.0: irq 313 for MSI/MSI-X
[   49.962983] nvme 0000:d8:00.0: irq 314 for MSI/MSI-X
[   49.963079] nvme 0000:d8:00.0: irq 315 for MSI/MSI-X
[   49.963177] nvme 0000:d8:00.0: irq 316 for MSI/MSI-X
[   49.963274] nvme 0000:d8:00.0: irq 317 for MSI/MSI-X
[   49.963370] nvme 0000:d8:00.0: irq 318 for MSI/MSI-X
[   49.963467] nvme 0000:d8:00.0: irq 319 for MSI/MSI-X
[   49.963563] nvme 0000:d8:00.0: irq 320 for MSI/MSI-X
[   49.963658] nvme 0000:d8:00.0: irq 321 for MSI/MSI-X
[   49.963754] nvme 0000:d8:00.0: irq 322 for MSI/MSI-X
[   49.963854] nvme 0000:d8:00.0: irq 323 for MSI/MSI-X
[   49.963952] nvme 0000:d8:00.0: irq 324 for MSI/MSI-X
[   49.974583]  nvme0n1: p1 p2
[   50.061630] nvme 0000:da:00.0: irq 48 for MSI/MSI-X
[   50.061735] nvme 0000:da:00.0: irq 325 for MSI/MSI-X
[   50.061832] nvme 0000:da:00.0: irq 326 for MSI/MSI-X
[   50.061929] nvme 0000:da:00.0: irq 327 for MSI/MSI-X
[   50.062027] nvme 0000:da:00.0: irq 328 for MSI/MSI-X
[   50.062122] nvme 0000:da:00.0: irq 329 for MSI/MSI-X
[   50.062219] nvme 0000:da:00.0: irq 330 for MSI/MSI-X
[   50.062316] nvme 0000:da:00.0: irq 331 for MSI/MSI-X
[   50.062412] nvme 0000:da:00.0: irq 332 for MSI/MSI-X
[   50.062510] nvme 0000:da:00.0: irq 333 for MSI/MSI-X
[   50.062610] nvme 0000:da:00.0: irq 334 for MSI/MSI-X
[   50.062711] nvme 0000:da:00.0: irq 335 for MSI/MSI-X
[   50.062809] nvme 0000:da:00.0: irq 336 for MSI/MSI-X
[   50.062906] nvme 0000:da:00.0: irq 337 for MSI/MSI-X
[   50.063003] nvme 0000:da:00.0: irq 338 for MSI/MSI-X
[   50.063096] nvme 0000:da:00.0: irq 339 for MSI/MSI-X
[   50.063193] nvme 0000:da:00.0: irq 340 for MSI/MSI-X
[   50.063290] nvme 0000:da:00.0: irq 341 for MSI/MSI-X
[   50.063386] nvme 0000:da:00.0: irq 342 for MSI/MSI-X
[   50.063480] nvme 0000:da:00.0: irq 343 for MSI/MSI-X
[   50.063586] nvme 0000:da:00.0: irq 344 for MSI/MSI-X
[   50.063683] nvme 0000:da:00.0: irq 345 for MSI/MSI-X
[   50.063781] nvme 0000:da:00.0: irq 346 for MSI/MSI-X
[   50.063878] nvme 0000:da:00.0: irq 347 for MSI/MSI-X
[   50.063975] nvme 0000:da:00.0: irq 348 for MSI/MSI-X
[   50.064071] nvme 0000:da:00.0: irq 349 for MSI/MSI-X
[   50.064166] nvme 0000:da:00.0: irq 350 for MSI/MSI-X
[   50.064264] nvme 0000:da:00.0: irq 351 for MSI/MSI-X
[   50.064361] nvme 0000:da:00.0: irq 352 for MSI/MSI-X
[   50.064456] nvme 0000:da:00.0: irq 353 for MSI/MSI-X
[   50.064553] nvme 0000:da:00.0: irq 354 for MSI/MSI-X
[   53.281428] igb 0000:02:00.0: irq 355 for MSI/MSI-X
[   53.281501] igb 0000:02:00.0: irq 356 for MSI/MSI-X
[   53.281577] igb 0000:02:00.0: irq 357 for MSI/MSI-X
[   53.281653] igb 0000:02:00.0: irq 358 for MSI/MSI-X
[   53.281732] igb 0000:02:00.0: irq 359 for MSI/MSI-X
[   53.297764] scsi 1:0:0:0: Direct-Access     ATA      ST31000524NS     SN11 PQ: 0 ANSI: 5
[   53.413926] ata1.00: Enabling discard_zeroes_data
[   53.416181] pps pps0: new PPS source ptp2
[   53.416184] igb 0000:02:00.0: added PHC on eth0
[   53.416185] igb 0000:02:00.0: Intel(R) Gigabit Ethernet Network Connection
[   53.416188] igb 0000:02:00.0: eth0: (PCIe:2.5Gb/s:Width x1) a0:42:3f:37:35:96
[   53.416274] igb 0000:02:00.0: eth0: PBA No: 000300-000
[   53.416275] igb 0000:02:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
[   53.416978] igb 0000:03:00.0: irq 360 for MSI/MSI-X
[   53.417056] igb 0000:03:00.0: irq 361 for MSI/MSI-X
[   53.417134] igb 0000:03:00.0: irq 362 for MSI/MSI-X
[   53.417215] igb 0000:03:00.0: irq 363 for MSI/MSI-X
[   53.417292] igb 0000:03:00.0: irq 364 for MSI/MSI-X
[   53.454199] pps pps1: new PPS source ptp3
[   53.454201] igb 0000:03:00.0: added PHC on eth1
[   53.454203] igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection
[   53.454205] igb 0000:03:00.0: eth1: (PCIe:2.5Gb/s:Width x1) a0:42:3f:37:35:97
[   53.454289] igb 0000:03:00.0: eth1: PBA No: 000300-000
[   53.454291] igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
[   53.454832] igb 0000:18:00.0: irq 366 for MSI/MSI-X
[   53.455010] igb 0000:18:00.0: irq 366 for MSI/MSI-X
[   53.455071] igb 0000:18:00.0: irq 367 for MSI/MSI-X
[   53.455130] igb 0000:18:00.0: irq 368 for MSI/MSI-X
[   53.455192] igb 0000:18:00.0: irq 369 for MSI/MSI-X
[   53.455252] igb 0000:18:00.0: irq 370 for MSI/MSI-X
[   53.455312] igb 0000:18:00.0: irq 371 for MSI/MSI-X
[   53.455371] igb 0000:18:00.0: irq 372 for MSI/MSI-X
[   53.455430] igb 0000:18:00.0: irq 373 for MSI/MSI-X
[   53.455491] igb 0000:18:00.0: irq 374 for MSI/MSI-X
[   53.509580] igb 0000:18:00.0: added PHC on eth2
[   53.509582] igb 0000:18:00.0: Intel(R) Gigabit Ethernet Network Connection
[   53.509585] igb 0000:18:00.0: eth2: (PCIe:5.0Gb/s:Width x2) a0:42:3f:36:98:66
[   53.509661] igb 0000:18:00.0: eth2: PBA No: 106300-000
[   53.509663] igb 0000:18:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
[   53.510167] igb 0000:18:00.1: irq 376 for MSI/MSI-X
[   53.510279] igb 0000:18:00.1: irq 376 for MSI/MSI-X
[   53.510354] igb 0000:18:00.1: irq 377 for MSI/MSI-X
[   53.510427] igb 0000:18:00.1: irq 378 for MSI/MSI-X
[   53.510509] igb 0000:18:00.1: irq 379 for MSI/MSI-X
[   53.510585] igb 0000:18:00.1: irq 380 for MSI/MSI-X
[   53.510660] igb 0000:18:00.1: irq 381 for MSI/MSI-X
[   53.510734] igb 0000:18:00.1: irq 382 for MSI/MSI-X
[   53.510810] igb 0000:18:00.1: irq 383 for MSI/MSI-X
[   53.510885] igb 0000:18:00.1: irq 384 for MSI/MSI-X
[   53.564517] igb 0000:18:00.1: added PHC on eth3
[   53.564518] igb 0000:18:00.1: Intel(R) Gigabit Ethernet Network Connection
[   53.564521] igb 0000:18:00.1: eth3: (PCIe:5.0Gb/s:Width x2) a0:42:3f:36:98:67
[   53.564597] igb 0000:18:00.1: eth3: PBA No: 106300-000
[   53.564599] igb 0000:18:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
[   53.565139] igb 0000:18:00.2: irq 386 for MSI/MSI-X
[   53.565268] igb 0000:18:00.2: irq 386 for MSI/MSI-X
[   53.565367] igb 0000:18:00.2: irq 387 for MSI/MSI-X
[   53.565455] igb 0000:18:00.2: irq 388 for MSI/MSI-X
[   53.565546] igb 0000:18:00.2: irq 389 for MSI/MSI-X
[   53.565636] igb 0000:18:00.2: irq 390 for MSI/MSI-X
[   53.565725] igb 0000:18:00.2: irq 391 for MSI/MSI-X
[   53.565826] igb 0000:18:00.2: irq 392 for MSI/MSI-X
[   53.565917] igb 0000:18:00.2: irq 393 for MSI/MSI-X
[   53.566007] igb 0000:18:00.2: irq 394 for MSI/MSI-X
[   53.621407] igb 0000:18:00.2: added PHC on eth4
[   53.621409] igb 0000:18:00.2: Intel(R) Gigabit Ethernet Network Connection
[   53.621411] igb 0000:18:00.2: eth4: (PCIe:5.0Gb/s:Width x2) a0:42:3f:36:98:68
[   53.621488] igb 0000:18:00.2: eth4: PBA No: 106300-000
[   53.621490] igb 0000:18:00.2: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
[   53.621992] igb 0000:18:00.3: irq 396 for MSI/MSI-X
[   53.622106] igb 0000:18:00.3: irq 396 for MSI/MSI-X
[   53.622184] igb 0000:18:00.3: irq 397 for MSI/MSI-X
[   53.622266] igb 0000:18:00.3: irq 398 for MSI/MSI-X
[   53.622363] igb 0000:18:00.3: irq 399 for MSI/MSI-X
[   53.622459] igb 0000:18:00.3: irq 400 for MSI/MSI-X
[   53.622556] igb 0000:18:00.3: irq 401 for MSI/MSI-X
[   53.622653] igb 0000:18:00.3: irq 402 for MSI/MSI-X
[   53.622747] igb 0000:18:00.3: irq 403 for MSI/MSI-X
[   53.622841] igb 0000:18:00.3: irq 404 for MSI/MSI-X
[   53.664894] sd 0:0:0:0: [sda] 468862128 512-byte logical blocks: (240 GB/223 GiB)
[   53.664895] sd 0:0:0:0: [sda] 4096-byte physical blocks
[   53.664920] sd 0:0:0:0: [sda] Write Protect is off
[   53.664921] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
[   53.664928] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[   53.664999] ata1.00: Enabling discard_zeroes_data
[   53.665333]  sda: sda1 sda2 sda3
[   53.665628] ata1.00: Enabling discard_zeroes_data
[   53.665657] sd 0:0:0:0: [sda] Attached SCSI disk
[   53.678106] igb 0000:18:00.3: added PHC on eth5
[   53.678108] igb 0000:18:00.3: Intel(R) Gigabit Ethernet Network Connection
[   53.678110] igb 0000:18:00.3: eth5: (PCIe:5.0Gb/s:Width x2) a0:42:3f:36:98:69
[   53.678186] igb 0000:18:00.3: eth5: PBA No: 106300-000
[   53.678188] igb 0000:18:00.3: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
[   53.983503] igb 0000:5f:00.0: irq 406 for MSI/MSI-X
[   53.983636] igb 0000:5f:00.0: irq 406 for MSI/MSI-X
[   53.983725] igb 0000:5f:00.0: irq 407 for MSI/MSI-X
[   53.983815] igb 0000:5f:00.0: irq 408 for MSI/MSI-X
[   53.983904] igb 0000:5f:00.0: irq 409 for MSI/MSI-X
[   53.984002] igb 0000:5f:00.0: irq 410 for MSI/MSI-X
[   53.984093] igb 0000:5f:00.0: irq 411 for MSI/MSI-X
[   53.984184] igb 0000:5f:00.0: irq 412 for MSI/MSI-X
[   53.984274] igb 0000:5f:00.0: irq 413 for MSI/MSI-X
[   53.984370] igb 0000:5f:00.0: irq 414 for MSI/MSI-X
[   54.022132] igb 0000:5f:00.0: added PHC on eth6
[   54.022134] igb 0000:5f:00.0: Intel(R) Gigabit Ethernet Network Connection
[   54.022136] igb 0000:5f:00.0: eth6: (PCIe:5.0Gb/s:Width x4) 08:35:71:01:f7:12
[   54.022429] igb 0000:5f:00.0: eth6: PBA No: 
[   54.022431] igb 0000:5f:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
[   54.327658] igb 0000:5f:00.1: irq 416 for MSI/MSI-X
[   54.327769] igb 0000:5f:00.1: irq 416 for MSI/MSI-X
[   54.327847] igb 0000:5f:00.1: irq 417 for MSI/MSI-X
[   54.327924] igb 0000:5f:00.1: irq 418 for MSI/MSI-X
[   54.328003] igb 0000:5f:00.1: irq 419 for MSI/MSI-X
[   54.328078] igb 0000:5f:00.1: irq 420 for MSI/MSI-X
[   54.328154] igb 0000:5f:00.1: irq 421 for MSI/MSI-X
[   54.328231] igb 0000:5f:00.1: irq 422 for MSI/MSI-X
[   54.328305] igb 0000:5f:00.1: irq 423 for MSI/MSI-X
[   54.328380] igb 0000:5f:00.1: irq 424 for MSI/MSI-X
[   54.365040] igb 0000:5f:00.1: added PHC on eth7
[   54.365041] igb 0000:5f:00.1: Intel(R) Gigabit Ethernet Network Connection
[   54.365043] igb 0000:5f:00.1: eth7: (PCIe:5.0Gb/s:Width x4) 08:35:71:01:f7:13
[   54.365339] igb 0000:5f:00.1: eth7: PBA No: 
[   54.365340] igb 0000:5f:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
[   54.366808] igb 0000:5f:00.2: irq 426 for MSI/MSI-X
[   54.366931] igb 0000:5f:00.2: irq 426 for MSI/MSI-X
[   54.367015] igb 0000:5f:00.2: irq 427 for MSI/MSI-X
[   54.367101] igb 0000:5f:00.2: irq 428 for MSI/MSI-X
[   54.367185] igb 0000:5f:00.2: irq 429 for MSI/MSI-X
[   54.367269] igb 0000:5f:00.2: irq 430 for MSI/MSI-X
[   54.367355] igb 0000:5f:00.2: irq 431 for MSI/MSI-X
[   54.367460] igb 0000:5f:00.2: irq 432 for MSI/MSI-X
[   54.367567] igb 0000:5f:00.2: irq 433 for MSI/MSI-X
[   54.367666] igb 0000:5f:00.2: irq 434 for MSI/MSI-X
[   54.404867] igb 0000:5f:00.2: added PHC on eth8
[   54.404868] igb 0000:5f:00.2: Intel(R) Gigabit Ethernet Network Connection
[   54.404870] igb 0000:5f:00.2: eth8: (PCIe:5.0Gb/s:Width x4) 08:35:71:01:f7:14
[   54.405166] igb 0000:5f:00.2: eth8: PBA No: 
[   54.405167] igb 0000:5f:00.2: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
[   54.406635] igb 0000:5f:00.3: irq 436 for MSI/MSI-X
[   54.406771] igb 0000:5f:00.3: irq 436 for MSI/MSI-X
[   54.406871] igb 0000:5f:00.3: irq 437 for MSI/MSI-X
[   54.406972] igb 0000:5f:00.3: irq 438 for MSI/MSI-X
[   54.407073] igb 0000:5f:00.3: irq 439 for MSI/MSI-X
[   54.407171] igb 0000:5f:00.3: irq 440 for MSI/MSI-X
[   54.407270] igb 0000:5f:00.3: irq 441 for MSI/MSI-X
[   54.407370] igb 0000:5f:00.3: irq 442 for MSI/MSI-X
[   54.407473] igb 0000:5f:00.3: irq 443 for MSI/MSI-X
[   54.407571] igb 0000:5f:00.3: irq 444 for MSI/MSI-X
[   54.444820] igb 0000:5f:00.3: added PHC on eth9
[   54.444821] igb 0000:5f:00.3: Intel(R) Gigabit Ethernet Network Connection
[   54.444823] igb 0000:5f:00.3: eth9: (PCIe:5.0Gb/s:Width x4) 08:35:71:01:f7:15
[   54.445120] igb 0000:5f:00.3: eth9: PBA No: 
[   54.445121] igb 0000:5f:00.3: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
[   57.818811] sd 1:0:0:0: [sdb] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
[   57.819240] scsi 2:0:0:0: Direct-Access     ATA      ST31000524NS     SN12 PQ: 0 ANSI: 5
[   58.040243] sd 1:0:0:0: [sdb] Write Protect is off
[   58.106952] sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
[   58.106962] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[   58.229649]  sdb:
[   58.230731] sd 2:0:0:0: [sdc] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
[   58.230892] sd 2:0:0:0: [sdc] Write Protect is off
[   58.230895] sd 2:0:0:0: [sdc] Mode Sense: 00 3a 00 00
[   58.230962] sd 2:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[   58.231147] scsi 3:0:0:0: Direct-Access     ATA      INTEL SSDSC2BB24 0039 PQ: 0 ANSI: 5
[   58.255255]  sdc:
[   58.255787] sd 2:0:0:0: [sdc] Attached SCSI disk
[   58.721597] sd 1:0:0:0: [sdb] Attached SCSI disk
[   58.792402] ata4.00: Enabling discard_zeroes_data
[   58.848805] sd 3:0:0:0: [sdd] 468862128 512-byte logical blocks: (240 GB/223 GiB)
[   58.848814] scsi 5:0:0:0: Direct-Access     ATA      SAMSUNG MZ7WD120 103Q PQ: 0 ANSI: 5
[   59.035514] sd 3:0:0:0: [sdd] 4096-byte physical blocks
[   59.098157] sd 3:0:0:0: [sdd] Write Protect is off
[   59.160330] sd 3:0:0:0: [sdd] Mode Sense: 00 3a 00 00
[   59.160341] sd 3:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[   59.268908] ata4.00: Enabling discard_zeroes_data
[   59.325416]  sdd:
[   59.348631] ata4.00: Enabling discard_zeroes_data
[   59.405050] sd 3:0:0:0: [sdd] Attached SCSI disk
[   59.465665] sd 5:0:0:0: [sde] 234441648 512-byte logical blocks: (120 GB/111 GiB)
[   59.466331] scsi 6:0:0:0: Direct-Access     ATA      INTEL SSDSC2BB24 0039 PQ: 0 ANSI: 5
[   59.652451] sd 5:0:0:0: [sde] Write Protect is off
[   59.709844] sd 5:0:0:0: [sde] Mode Sense: 00 3a 00 00
[   59.709852] sd 5:0:0:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[   59.818528]  sde:
[   59.841788] sd 5:0:0:0: [sde] Attached SCSI disk
[   59.903676] ata7.00: Enabling discard_zeroes_data
[   59.959995] sd 6:0:0:0: [sdf] 468862128 512-byte logical blocks: (240 GB/223 GiB)
[   59.960008] scsi 7:0:0:0: Direct-Access     ATA      INTEL SSDSC2BB24 0039 PQ: 0 ANSI: 5
[   60.146716] sd 6:0:0:0: [sdf] 4096-byte physical blocks
[   60.209338] sd 6:0:0:0: [sdf] Write Protect is off
[   60.266688] sd 6:0:0:0: [sdf] Mode Sense: 00 3a 00 00
[   60.266696] sd 6:0:0:0: [sdf] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[   60.375203] ata7.00: Enabling discard_zeroes_data
[   60.431855] ata7.00: Enabling discard_zeroes_data
[   60.488232] sd 6:0:0:0: [sdf] Attached SCSI disk
[   60.547122] ata8.00: Enabling discard_zeroes_data
[   60.603475] sd 7:0:0:0: [sdg] 468862128 512-byte logical blocks: (240 GB/223 GiB)
[   60.603511] scsi 8:0:0:0: Direct-Access     ATA      INTEL SSDSC2BB24 0039 PQ: 0 ANSI: 5
[   60.790192] sd 7:0:0:0: [sdg] 4096-byte physical blocks
[   60.852813] sd 7:0:0:0: [sdg] Write Protect is off
[   60.910170] sd 7:0:0:0: [sdg] Mode Sense: 00 3a 00 00
[   60.910178] sd 7:0:0:0: [sdg] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[   61.018690] ata8.00: Enabling discard_zeroes_data
[   61.075463] ata8.00: Enabling discard_zeroes_data
[   61.131795] ata9.00: Enabling discard_zeroes_data
[   61.188132] sd 8:0:0:0: [sdh] 468862128 512-byte logical blocks: (240 GB/223 GiB)
[   61.188136] scsi 9:0:0:0: Direct-Access     ATA      INTEL SSDSC2BB24 0039 PQ: 0 ANSI: 5
[   61.188152] sd 7:0:0:0: [sdg] Attached SCSI disk
[   61.430182] sd 8:0:0:0: [sdh] 4096-byte physical blocks
[   61.492787] sd 8:0:0:0: [sdh] Write Protect is off
[   61.550229] sd 8:0:0:0: [sdh] Mode Sense: 00 3a 00 00
[   61.550237] sd 8:0:0:0: [sdh] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[   61.658760] ata9.00: Enabling discard_zeroes_data
[   61.715751] ata9.00: Enabling discard_zeroes_data
[   61.772205] sd 8:0:0:0: [sdh] Attached SCSI disk
[   61.831150] ata10.00: Enabling discard_zeroes_data
[   61.888577] sd 9:0:0:0: [sdi] 468862128 512-byte logical blocks: (240 GB/223 GiB)
[   61.978281] sd 9:0:0:0: [sdi] 4096-byte physical blocks
[   62.040900] sd 9:0:0:0: [sdi] Write Protect is off
[   62.098257] sd 9:0:0:0: [sdi] Mode Sense: 00 3a 00 00
[   62.098264] sd 9:0:0:0: [sdi] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[   62.206775] ata10.00: Enabling discard_zeroes_data
[   62.264450] ata10.00: Enabling discard_zeroes_data
[   62.321892] sd 9:0:0:0: [sdi] Attached SCSI disk
[   62.855436] EXT4-fs (sda3): mounting ext3 file system using the ext4 subsystem
[   62.952615] EXT4-fs (sda3): mounted filesystem with ordered data mode. Opts: (null)
[   65.774035] systemd-journald[340]: Received SIGTERM from PID 1 (systemd).
[   65.855643] ip_tables: (C) 2000-2006 Netfilter Core Team
[   65.919433] systemd[1]: Inserted module 'ip_tables'
[   66.964914] RPC: Registered named UNIX socket transport module.
[   67.045045] RPC: Registered udp transport module.
[   67.117953] RPC: Registered tcp transport module.
[   67.178442] RPC: Registered tcp NFSv4.1 backchannel transport module.
[   67.275558] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
[   68.769383] EXT4-fs (sda3): re-mounted. Opts: (null)
[   69.649299] ipmi message handler version 39.2
[   69.708684] i801_smbus 0000:00:1f.4: enabling device (0001 -> 0003)
[   69.708851] input: PC Speaker as /devices/platform/pcspkr/input/input3
[   69.893514] i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
[   69.965718] lpc_ich 0000:00:1f.0: I/O space for ACPI uninitialized
[   70.056193] lpc_ich 0000:00:1f.0: No MFD cells added
[   70.132538] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
[   70.132570] sd 0:0:0:0: Attached scsi generic sg0 type 0
[   70.133457] sd 1:0:0:0: Attached scsi generic sg1 type 0
[   70.135404] sd 2:0:0:0: Attached scsi generic sg2 type 0
[   70.135471] sd 3:0:0:0: Attached scsi generic sg3 type 0
[   70.135520] sd 5:0:0:0: Attached scsi generic sg4 type 0
[   70.135828] sd 6:0:0:0: Attached scsi generic sg5 type 0
[   70.135879] sd 7:0:0:0: Attached scsi generic sg6 type 0
[   70.135920] sd 8:0:0:0: Attached scsi generic sg7 type 0
[   70.135962] sd 9:0:0:0: Attached scsi generic sg8 type 0
[   70.630334] power_meter ACPI000D:00: Found ACPI power meter.
[   70.630488] wmi: Mapper loaded
[   70.912297] iTCO_vendor_support: vendor-support=0
[   70.969965] ipmi_si IPI0001:00: ipmi_si: probing via ACPI
[   71.050154] ipmi_si IPI0001:00: [io  0x0ca8] regsize 1 spacing 4 irq 0
[   71.144651] ipmi_si: Adding ACPI-specified kcs state machine
[   71.223020] systemd-journald[694]: Received request to flush runtime journal from PID 1
[   71.224837] iTCO_wdt: Intel TCO WatchDog Timer Driver v1.11
[   71.224903] iTCO_wdt: Found a Intel PCH TCO device (Version=4, TCOBASE=0x0400)
[   71.225007] iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
[   71.474138] IPMI System Interface driver.
[   71.474196] ipmi_si: probing via SMBIOS
[   71.474197] ipmi_si: SMBIOS: io 0xca8 regsize 1 spacing 4 irq 0
[   71.474198] ipmi_si: Adding SMBIOS-specified kcs state machine duplicate interface
[   71.474200] ipmi_si: probing via SPMI
[   71.474201] ipmi_si: SPMI: io 0xca8 regsize 4 spacing 4 irq 0
[   71.474201] ipmi_si: Adding SPMI-specified kcs state machine duplicate interface
[   71.474202] ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca8, slave address 0x0, irq 0
[   71.618783] ipmi_si IPI0001:00: Found new BMC (man_id: 0x0019fd, prod_id: 0x1bc2, dev_id: 0x24)
[   71.618787] ipmi_si IPI0001:00: IPMI kcs interface initialized
[   72.224613] [drm] Initialized drm 1.1.0 20060810
[   72.350006] power_meter ACPI000D:00: Found ACPI power meter.
[   72.417999] power_meter ACPI000D:00: Ignoring unsafe software power cap!
[   72.516592] IPMI SSIF Interface driver
[   72.580240] AES CTR mode by8 optimization enabled
[   72.652366] alg: No test for __gcm-aes-aesni (__driver-gcm-aes-aesni)
[   72.746787] [drm] AST 2400 detected
[   72.801439] [drm] Analog VGA only
[   72.857778] [drm] dram -1094967296 6 16 01000000
[   72.929713] [TTM] Zone  kernel: Available graphics memory: 23942440 kiB
[   73.025443] [TTM] Zone   dma32: Available graphics memory: 2097152 kiB
[   73.120156] [TTM] Initializing pool allocator
[   73.182704] [TTM] Initializing DMA pool allocator
[   73.182978] alg: No test for crc32 (crc32-pclmul)
[   73.300030] fbcon: astdrmfb (fb0) is primary device
[   73.321990] intel_rapl: Found RAPL domain package
[   73.322002] intel_rapl: Found RAPL domain dram
[   73.322003] intel_rapl: DRAM domain energy unit 15300pj
[   73.322016] intel_rapl: Found RAPL domain package
[   73.322023] intel_rapl: Found RAPL domain dram
[   73.322024] intel_rapl: DRAM domain energy unit 15300pj
[   73.345510] Console: switching to colour frame buffer device 128x48
[   73.600119] ast 0000:05:00.0: fb0: astdrmfb frame buffer device
[   73.920544] [drm] Initialized ast 0.1.0 20120228 for 0000:05:00.0 on minor 0
[   74.076798] Adding 6291452k swap on /dev/sda1.  Priority:-1 extents:1 across:6291452k SSFS
[   74.686155] EXT4-fs (sda2): mounting ext3 file system using the ext4 subsystem
[   74.790600] EXT4-fs (sda2): mounted filesystem with ordered data mode. Opts: (null)

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [SPDK] nvme drive not showing in vm in spdk
@ 2017-10-13  6:02 Nitin Gupta
  0 siblings, 0 replies; 23+ messages in thread
From: Nitin Gupta @ 2017-10-13  6:02 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 28168 bytes --]

Hi Pawel

Just wanted your confirmation; please correct me if I am wrong:

sdc is the SSD (the AIO0 backend)
and sdd is the NVMe (Nvme0n1)
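
As a sanity check (just a sketch; the sizes below are taken from the earlier outputs in this thread, so treat them as assumptions): AIO0 is backed by /dev/sdf on the host, a 223.6 G Intel SSD, so the guest virtio disk with roughly that size should be the SSD, and the remaining 419.2 G virtio disk should then be the NVMe. The sizes can be read straight from sysfs:

# size is in 512-byte sectors; divide by 2097152 to get GiB
# expect ~223 GiB for the AIO0/SSD disk and ~419 GiB for the Nvme0n1 disk
for d in sdc sdd; do
  echo "$d: $(( $(cat /sys/block/$d/size) / 2097152 )) GiB"
done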

Regards
Nitin

On Thu, Oct 12, 2017 at 9:34 PM, Nitin Gupta <nitin.gupta981(a)gmail.com>
wrote:

> Hi PawelX
>
> Sorry i forgot to add command output , since now both nvme and AIO (sdf)
> is enabled ..
> i came to know that sdd2  is  nvme ..
> but  sdc also giving  00:02:0 so i am confused if it is SSD.
>
> [root(a)localhost ~]# ls -l /sys/block | grep host
> lrwxrwxrwx 1 root root 0 Oct 12 11:54 sda -> ../devices/pci0000:00/0000:00:
> 01.1/host1/target1:0:0/1:0:0:0/block/sda
> lrwxrwxrwx 1 root root 0 Oct 12 11:54 sdb -> ../devices/pci0000:00/0000:00:
> 02.0/virtio0/host2/target2:0:0/2:0:0:0/block/sdb
> lrwxrwxrwx 1 root root 0 Oct 12 11:54 sdc -> ../devices/pci0000:00/0000:00:
> 02.0/virtio0/host2/target2:0:1/2:0:1:0/block/sdc
> lrwxrwxrwx 1 root root 0 Oct 12 11:54 sdd -> ../devices/pci0000:00/0
> 000:00:02.0/virtio0/host2/target2:0:2/2:0:2:0/block/sdd
> [root(a)localhost ~]# lspci
> 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev
> 02)
> 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
> 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton
> II]
> 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
> 00:02.0 SCSI storage controller: Red Hat, Inc Virtio SCSI
> [root(a)localhost ~]# lsblk
> NAME                        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
> sda                           8:0    0     8G  0 disk
> ├─sda1                        8:1    0   500M  0 part /boot
> └─sda2                        8:2    0   7.5G  0 part
>   ├─VolGroup-lv_root (dm-0) 253:0    0   6.7G  0 lvm  /
>   └─VolGroup-lv_swap (dm-1) 253:1    0   816M  0 lvm  [SWAP]
> sdb                           8:16   0   256M  0 disk
> sdc                           8:32   0 223.6G  0 disk
> sdd                           8:48   0 419.2G  0 disk
> ├─sdd1                        8:49   0  20.2M  0 part
> └─sdd2                        8:50   0 419.2G  0 part
>
>
>
> On Thu, Oct 12, 2017 at 7:54 PM, Wodkowski, PawelX <
> pawelx.wodkowski(a)intel.com> wrote:
>
>> Steps are already provided. Just use ‘*ls -l /sys/block/ | grep host*’
>> to check which SCSI target each /dev/sdX device is.
>>
>>
>>
>> *From:* Nitin Gupta [mailto:nitin.gupta981(a)gmail.com]
>> *Sent:* Thursday, October 12, 2017 3:58 PM
>> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>; Harris,
>> James R <james.r.harris(a)intel.com>; Wodkowski, PawelX <
>> pawelx.wodkowski(a)intel.com>
>> *Subject:* Re: [SPDK] nvme drive not showing in vm in spdk
>>
>>
>>
>> Hi Jim / Pawel
>>
>>
>>
>> Thanks for you response , could you please  help me to find ssd device as
>> well
>>
>> i mean i am trying to map ssd device as well .please find below output
>>
>>
>>
>> 1. for my host machine output of lsscsi is
>>
>>
>>
>> -bash-4.2# lsscsi
>>
>> [0:0:0:0]    disk    ATA      INTEL SSDSC2BB24 0039  /dev/sda
>>
>> [1:0:0:0]    disk    ATA      ST31000524NS     SN11  /dev/sdb
>>
>> [2:0:0:0]    disk    ATA      ST31000524NS     SN12  /dev/sdc
>>
>> [3:0:0:0]    disk    ATA      INTEL SSDSC2BB24 0039  /dev/sdd
>>
>> [5:0:0:0]    disk    ATA      SAMSUNG MZ7WD120 103Q  /dev/sde
>>
>> [6:0:0:0]    disk    ATA      INTEL SSDSC2BB24 0039  /dev/sdf
>>
>> [7:0:0:0]    disk    ATA      INTEL SSDSC2BB24 0039  /dev/sdg
>>
>> [8:0:0:0]    disk    ATA      INTEL SSDSC2BB24 0039  /dev/sdh
>>
>> [9:0:0:0]    disk    ATA      INTEL SSDSC2BB24 0039  /dev/sdi
>>
>>
>>
>> 2.  changes done in conf file only to map /dev/sdf partition  int AIO0
>> device
>>
>>
>>
>> # Users must change this section to match the /dev/sdX devices to be
>>
>> #  exported as vhost scsi drives. The devices are accessed using Linux
>> AIO.
>>
>> [AIO]
>>
>>   #AIO /dev/sda AIO0
>>
>>   AIO /dev/sdf AIO0
>>
>>
>>
>> 3.  added AIO device in conf
>>
>>
>>
>> # Vhost scsi controller configuration
>>
>> # Users should change the VhostScsi section(s) below to match the desired
>>
>> # vhost configuration.
>>
>> # Name is minimum required
>>
>> [VhostScsi0]
>>
>>   # Define name for controller
>>
>>   Name vhost.0
>>
>>   # Assign devices from backend
>>
>>   # Use the first malloc device
>>
>>   Dev 0 Malloc0
>>
>>   #Dev 1 Malloc1
>>
>>   Dev 2 Nvme0n1
>>
>>   #Dev 3 Malloc3
>>
>>
>>
>>   # Use the first AIO device
>>
>>   Dev 1 AIO0
>>
>>   # Use the frist Nvme device
>>
>>   #Dev 0 Nvme0n1
>>
>>   #Dev 0 Nvme0n1p0
>>
>>   #Dev 1 Nvme0n1p1
>>
>>   # Use the third partition from second Nvme device
>>
>>   #Dev 3 Nvme1n1p2
>>
>>
>>
>>   # Start the poller for this vhost controller on one of the cores in
>>
>>   #  this cpumask.  By default, it not specified, will use any core in the
>>
>>   #  SPDK process.
>>
>>   #Cpumask 0x2
>>
>>
>>
>>
>>
>>
>>
>> 4 . running below command
>>
>>
>>
>> usr/local/bin/qemu-system-x86_64 -name sl6.9 -m 1024 -object
>> memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
>> -nographic -no-user-config -nodefaults -serial
>> mon:telnet:localhost:7704,server,nowait -monitor
>> mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem -drive
>> file=/home/qemu/qcows1,format=qcow2,if=none,id=disk -device
>> ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0
>> -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm
>>
>>
>>
>>
>>
>> 5.   in the guest VM  below is no output for   lsscsi
>>
>>
>>
>> [root(a)localhost ~]# lsblk
>>
>> NAME                        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
>>
>> sda                           8:0    0     8G  0 disk
>>
>> ├─sda1                        8:1    0   500M  0 part /boot
>>
>> └─sda2                        8:2    0   7.5G  0 part
>>
>>   ├─VolGroup-lv_root (dm-0) 253:0    0   6.7G  0 lvm  /
>>
>>   └─VolGroup-lv_swap (dm-1) 253:1    0   816M  0 lvm  [SWAP]
>>
>> sdb                           8:16   0   256M  0 disk
>>
>> sdc                           8:32   0 223.6G  0 disk
>>
>> sdd                           8:48   0 419.2G  0 disk
>>
>> ├─sdd1                        8:49   0  20.2M  0 part
>>
>> └─sdd2                        8:50   0 419.2G  0 part
>>
>>
>>
>> [root(a)localhost ~]# lspci
>>
>> 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev
>> 02)
>>
>> 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
>>
>> 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton
>> II]
>>
>> 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
>>
>> 00:02.0 SCSI storage controller: Red Hat, Inc Virtio SCSI
>>
>>
>>
>> [root(a)localhost ~]# lsscsi
>>
>> [root(a)localhost ~]#
>>
>>
>>
>> Regards
>>
>> nitin
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> On Wed, Oct 11, 2017 at 5:53 PM, Wodkowski, PawelX <
>> pawelx.wodkowski(a)intel.com> wrote:
>>
>> Most likely, yes.
>>
>>
>>
>> Pawel
>>
>>
>>
>> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Nitin
>> Gupta
>> *Sent:* Wednesday, October 11, 2017 12:49 PM
>>
>>
>> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
>> *Subject:* Re: [SPDK] nvme drive not showing in vm in spdk
>>
>>
>>
>> Hi Pawel
>>
>>
>>
>> Thanks for you reply . some how in my guest VM lsblk -S is not working
>>
>> please find below output of  ls -l /sys/block/ | grep host
>>
>>
>>
>> [root(a)localhost ~]# ls -l /sys/block/ | grep host
>>
>> lrwxrwxrwx 1 root root 0 Oct 11 06:41 sda ->
>> ../devices/pci0000:00/0000:00:01.1/host1/target1:0:0/1:0:0:0/block/sda
>>
>> lrwxrwxrwx 1 root root 0 Oct 11 06:41 sdb ->
>> ../devices/pci0000:00/0000:00:02.0/virtio0/host2/target2:0:0
>> /2:0:0:0/block/sdb
>>
>> lrwxrwxrwx 1 root root 0 Oct 11 06:41 sdc ->
>> ../devices/pci0000:00/0000:00:02.0/virtio0/host2/target2:0:2
>> /2:0:2:0/block/sdc
>>
>>
>>
>> looks like then sdc is the nvme device , please correct me if i am wrong
>>
>>
>>
>> Regards
>>
>> Nitin
>>
>>
>>
>>
>>
>> On Wed, Oct 11, 2017 at 3:22 PM, Wodkowski, PawelX <
>> pawelx.wodkowski(a)intel.com> wrote:
>>
>> Consider this config:
>>
>>
>>
>> [Malloc]
>>
>>   NumberOfLuns 8
>>
>>   LunSizeInMb 128
>>
>>   BlockSize 512
>>
>>
>>
>> [Split]
>>
>>   Split Nvme0n1 8
>>
>>
>>
>> [VhostScsi0]
>>
>>   Name ctrl0
>>
>>   Dev 0 Nvme0n1p0
>>
>>   Dev 1 Malloc0
>>
>>
>>
>> This is output from my VM (for readability I filter out devices using ‘|
>> grep host’).
>>
>>
>>
>> # lspci
>>
>> 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev
>> 02)
>>
>> 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
>>
>> 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton
>> II]
>>
>> 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
>>
>> 00:02.0 VGA compatible controller: Device 1234:1111 (rev 02)
>>
>> 00:03.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet
>> Controller (rev 03)
>>
>> *00:04.0* *SCSI storage controller: Red Hat, Inc Virtio SCSI*
>>
>>
>>
>> # ll /sys/block/ | grep host
>>
>> lrwxrwxrwx  1 root root 0 Oct 11 15:16 sda ->
>> ../devices/pci0000:00/0000:00:01.1/ata2/host1/target1:0:0/1:
>> 0:0:0/block/sda/
>>
>> lrwxrwxrwx  1 root root 0 Oct 11 15:16 sdb -> ../devices/pci0000:00/0000:
>> *00:04.0*/virtio0/host2/*target2:0:0*/2:0:0:0/block/*sdb*/
>>
>> lrwxrwxrwx  1 root root 0 Oct 11 15:16 sdc -> ../devices/pci0000:00/0000:
>> *00:04.0*/virtio0/host2/*target2:0:1*/2:0:1:0/block/*sdc*/
>>
>>
>>
>> As you can see (in this case) device which is reported as “SCSI storage
>> controller: Red Hat, Inc Virtio SCSI“ is SPDK vhost device. Now find PCI
>> address
>>
>> and use it to figure out which device is which. In this case I have two
>> targets defined in vhost.conf (one is split of NVMe and one is Malloc disk)
>> and have two
>>
>> Scsi disks: *sdb* and *sdc* in VM. I know that in vhost.conf *Dev 0* is
>> *Nvme0n1p0* so I know that target2:0:*0* is NVMe split device mapped to
>> *sdb*. Analogue
>>
>> target2:0:1 is Malloc0 mapped to *sdc*. To confirm this I run following
>> command:
>>
>>
>>
>> # lsblk -S
>>
>> NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
>>
>> sda  1:0:0:0    disk ATA      QEMU HARDDISK    2.5+ ata
>>
>> *sdb  2:0:0:0*    disk INTEL    *Split Disk*       0001
>>
>> *sdc  2:0:1:0*    disk INTEL    *Malloc disk*      0001
>>
>>
>>
>> Pawel
>>
>>
>>
>> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Nitin
>> Gupta
>> *Sent:* Wednesday, October 11, 2017 11:07 AM
>> *To:* Harris, James R <james.r.harris(a)intel.com>
>>
>>
>> *Cc:* Storage Performance Development Kit <spdk(a)lists.01.org>
>> *Subject:* Re: [SPDK] nvme drive not showing in vm in spdk
>>
>>
>>
>> Hi Jim
>>
>>
>>
>> i was able to update my  environment for guest VM which runs now
>> in 2.6.32-696.el6.x86_64
>>
>> please find the lspci output and able to load virtio-scsi  module as well
>>
>>
>>
>> Please help me  to understand  , how to identify nvme disk mapping .
>>
>> below mapping we used  in  etc/spdk/vhost.conf.in
>>
>>
>>
>> Question :-
>>
>>
>>
>> [VhostScsi0]
>>
>>   # Define name for controller
>>
>>   Name vhost.0
>>
>>   # Assign devices from backend
>>
>>   # Use the first malloc device
>>
>>   Dev 0 Malloc0
>>
>>   #Dev 1 Malloc1
>>
>>   Dev 2 Nvme0n1
>>
>>   #Dev 3 Malloc3
>>
>>
>>
>> [root(a)localhost ~]# lsblk
>>
>> NAME                        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
>>
>> sda                           8:0    0     8G  0 disk
>>
>> ├─sda1                        8:1    0   500M  0 part /boot
>>
>> └─sda2                        8:2    0   7.5G  0 part
>>
>>   ├─VolGroup-lv_root (dm-0) 253:0    0   6.7G  0 lvm  /
>>
>>   └─VolGroup-lv_swap (dm-1) 253:1    0   816M  0 lvm  [SWAP]
>>
>> sdb                           8:16   0   256M  0 disk
>>
>> sdc                           8:32   0 419.2G  0 disk
>>
>> ├─sdc1                        8:33   0  20.2M  0 part
>>
>> └─sdc2                        8:34   0 419.2G  0 part
>>
>>
>>
>> [root(a)localhost ~]# lspci
>>
>> 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev
>> 02)
>>
>> 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
>>
>> 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton
>> II]
>>
>> 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
>>
>> 00:02.0 SCSI storage controller: Red Hat, Inc Virtio SCSI
>>
>>
>>
>>
>>
>> [root(a)localhost ~]# lsmod
>>
>> Module                  Size  Used by
>>
>> ib_ipoib               80839  0
>>
>> rdma_ucm               15739  0
>>
>> ib_ucm                 12328  0
>>
>> ib_uverbs              40372  2 rdma_ucm,ib_ucm
>>
>> ib_umad                13487  0
>>
>> rdma_cm                36555  1 rdma_ucm
>>
>> ib_cm                  36900  3 ib_ipoib,ib_ucm,rdma_cm
>>
>> iw_cm                  32976  1 rdma_cm
>>
>> ib_sa                  24092  4 ib_ipoib,rdma_ucm,rdma_cm,ib_cm
>>
>> ib_mad                 41340  3 ib_umad,ib_cm,ib_sa
>>
>> ib_core                82732  10 ib_ipoib,rdma_ucm,ib_ucm,ib_uv
>> erbs,ib_umad,rdma_cm,ib_cm,iw_cm,ib_sa,ib_mad
>>
>> ib_addr                 8304  3 rdma_ucm,rdma_cm,ib_core
>>
>> ipv6                  336368  14 ib_ipoib,ib_addr
>>
>> i2c_piix4              11232  0
>>
>> i2c_core               29132  1 i2c_piix4
>>
>> sg                     29350  0
>>
>> ext4                  381065  2
>>
>> jbd2                   93284  1 ext4
>>
>> mbcache                 8193  1 ext4
>>
>> virtio_scsi            10761  0
>>
>> sd_mod                 37158  3
>>
>> crc_t10dif              1209  1 sd_mod
>>
>> virtio_pci              7512  0
>>
>> virtio_ring             8891  2 virtio_scsi,virtio_pci
>>
>> virtio                  5639  2 virtio_scsi,virtio_pci
>>
>> pata_acpi               3701  0
>>
>> ata_generic             3837  0
>>
>> ata_piix               24409  2
>>
>> dm_mirror              14864  0
>>
>> dm_region_hash         12085  1 dm_mirror
>>
>> dm_log                  9930  2 dm_mirror,dm_region_hash
>>
>> dm_mod                102467  8 dm_mirror,dm_log
>>
>>
>>
>>
>>
>> Regards
>>
>> Nitin
>>
>>
>>
>>
>>
>> On Sat, Oct 7, 2017 at 10:10 AM, Nitin Gupta <nitin.gupta981(a)gmail.com>
>> wrote:
>>
>> Hi Jim
>>
>>
>>
>> Thanks , i will try to install virtio-scsi and update you
>>
>>
>>
>> Regards
>>
>> Nitin
>>
>>
>>
>> On Sat, Oct 7, 2017 at 12:01 AM, Harris, James R <
>> james.r.harris(a)intel.com> wrote:
>>
>> Hi Nitin,
>>
>>
>>
>> Can you try loading the virtio-scsi module in the guest VM?
>>
>>
>>
>> Without a virtio-scsi driver in the guest, there is no way for the guest
>> to see the virtio-scsi device backend created by the SPDK vhost target.
>>
>>
>>
>> Thanks,
>>
>>
>>
>> -Jim
>>
>>
>>
>>
>>
>> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
>> *Date: *Friday, October 6, 2017 at 1:28 AM
>>
>>
>> *To: *James Harris <james.r.harris(a)intel.com>
>> *Cc: *Storage Performance Development Kit <spdk(a)lists.01.org>
>> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>>
>>
>>
>> Hi Jim
>>
>>
>>
>> Thanks for looking logs ,
>>
>> Please find attached vhost log  and qemu command which i am invoking
>>
>>
>>
>> /usr/local/bin/qemu-system-x86_64 -m 1024 -object
>> memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
>> -nographic -no-user-config -nodefaults -serial
>> mon:telnet:localhost:7704,server,nowait -monitor
>> mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem -drive
>> file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device
>> ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0
>> -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm
>>
>>
>>
>> 2.  looks like there is no virtio-scsi module loaded in guest VM
>>
>>
>>
>> i ran lsmod command in guest VM please find below output
>>
>>
>>
>> [root(a)localhost ~]# lsmod
>>
>> Module                  Size  Used by
>>
>> ipt_REJECT              2349  2
>>
>> nf_conntrack_ipv4       9440  2
>>
>> nf_defrag_ipv4          1449  1 nf_conntrack_ipv4
>>
>> iptable_filter          2759  1
>>
>> ip_tables              17765  1 iptable_filter
>>
>> ip6t_REJECT             4562  2
>>
>> nf_conntrack_ipv6       8650  2
>>
>> nf_defrag_ipv6         12148  1 nf_conntrack_ipv6
>>
>> xt_state                1458  4
>>
>> nf_conntrack           79611  3 nf_conntrack_ipv4,nf_conntrack
>> _ipv6,xt_state
>>
>> ip6table_filter         2855  1
>>
>> ip6_tables             19424  1 ip6table_filter
>>
>> ipv6                  322291  15 ip6t_REJECT,nf_conntrack_ipv6,
>> nf_defrag_ipv6
>>
>> i2c_piix4              12574  0
>>
>> i2c_core               31274  1 i2c_piix4
>>
>> sg                     30090  0
>>
>> ext4                  359671  2
>>
>> mbcache                 7918  1 ext4
>>
>> jbd2                   88768  1 ext4
>>
>> sd_mod                 38196  3
>>
>> crc_t10dif              1507  1 sd_mod
>>
>> virtio_pci              6653  0
>>
>> virtio_ring             7169  1 virtio_pci
>>
>> virtio                  4824  1 virtio_pci
>>
>> pata_acpi               3667  0
>>
>> ata_generic             3611  0
>>
>> ata_piix               22652  2
>>
>> dm_mod                 75539  6
>>
>>
>>
>>
>>
>> please let me know if i am missing something
>>
>>
>>
>> Regards
>>
>> Nitin
>>
>>
>>
>>
>>
>> On Fri, Oct 6, 2017 at 9:28 AM, Harris, James R <james.r.harris(a)intel.com>
>> wrote:
>>
>> Thanks Nitin.  I don’t see the SPDK vhost log attached though – could you
>> add it?
>>
>>
>>
>> Can you also confirm the virtio-scsi module is loaded in your guest VM?
>>
>>
>>
>> -Jim
>>
>>
>>
>>
>>
>> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
>> *Date: *Thursday, October 5, 2017 at 3:49 AM
>>
>>
>> *To: *James Harris <james.r.harris(a)intel.com>
>> *Cc: *Storage Performance Development Kit <spdk(a)lists.01.org>
>> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>>
>>
>>
>> Hi Jim
>>
>>
>>
>> Please find attached Vm-guest-boot up log and host dmesg log
>>
>>
>>
>> Regards
>>
>> Nitin
>>
>>
>>
>> On Wed, Oct 4, 2017 at 11:36 PM, Harris, James R <
>> james.r.harris(a)intel.com> wrote:
>>
>> Hi Nitin,
>>
>>
>>
>> It would be most helpful if you could get lspci working on your guest VM.
>>
>>
>>
>> Could you post dmesg contents from your VM and the SPDK vhost log after
>> the VM has booted?
>>
>>
>>
>> -Jim
>>
>>
>>
>>
>>
>> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
>> *Date: *Wednesday, October 4, 2017 at 10:42 AM
>> *To: *James Harris <james.r.harris(a)intel.com>
>> *Cc: *Storage Performance Development Kit <spdk(a)lists.01.org>
>>
>>
>> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>>
>>
>>
>> Hi Jim
>>
>>
>>
>> i am running this  on remote box which is having linux 3.10 .
>>
>> on the guest VM lspci command is not working and i am not able to install
>> lspci as well
>>
>> below is the lsblk -a command output -S is also not available in guest VM
>>
>>
>>
>> NAME                        MAJ:MIN RM   SIZE RO MOUNTPOINT
>>
>> ram0                          1:0    0    16M  0
>>
>> ram1                          1:1    0    16M  0
>>
>> ram2                          1:2    0    16M  0
>>
>> ram3                          1:3    0    16M  0
>>
>> ram4                          1:4    0    16M  0
>>
>> ram5                          1:5    0    16M  0
>>
>> ram6                          1:6    0    16M  0
>>
>> ram7                          1:7    0    16M  0
>>
>> ram8                          1:8    0    16M  0
>>
>> ram9                          1:9    0    16M  0
>>
>> ram10                         1:10   0    16M  0
>>
>> ram11                         1:11   0    16M  0
>>
>> ram12                         1:12   0    16M  0
>>
>> ram13                         1:13   0    16M  0
>>
>> ram14                         1:14   0    16M  0
>>
>> ram15                         1:15   0    16M  0
>>
>> loop0                         7:0    0         0
>>
>> loop1                         7:1    0         0
>>
>> loop2                         7:2    0         0
>>
>> loop3                         7:3    0         0
>>
>> loop4                         7:4    0         0
>>
>> loop5                         7:5    0         0
>>
>> loop6                         7:6    0         0
>>
>> loop7                         7:7    0         0
>>
>> sda                           8:0    0     8G  0
>>
>> ├─sda1                        8:1    0   500M  0 /boot
>>
>> └─sda2                        8:2    0   7.5G  0
>>
>>   ├─VolGroup-lv_root (dm-0) 253:0    0   5.6G  0 /
>>
>>   └─VolGroup-lv_swap (dm-1) 253:1    0     2G  0 [SWAP]
>>
>>
>>
>> Regards
>>
>> Nitin
>>
>>
>>
>> On Wed, Oct 4, 2017 at 10:13 PM, Harris, James R <
>> james.r.harris(a)intel.com> wrote:
>>
>> Hi Nitin,
>>
>>
>>
>> Are you running these commands from the host or the VM?  You will only
>> see the virtio-scsi controller in lspci output from the guest VM.
>>
>>
>>
>> -Jim
>>
>>
>>
>>
>>
>> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
>> *Date: *Tuesday, October 3, 2017 at 12:23 AM
>> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>, James
>> Harris <james.r.harris(a)intel.com>
>>
>>
>> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>>
>>
>>
>> Hi Jim
>>
>>
>>
>> One quick update , after running ./script/setup.h for spdk nvme drive is
>> converting to uio generic pci device .
>>
>> so only difference which i found after and before mapping is command for
>> ls -l /dev/u*
>>
>>
>>
>> can i use /dev/uio0 are the nvme device
>>
>> Regards
>>
>> Nitin
>>
>>
>>
>> On Tue, Oct 3, 2017 at 11:30 AM, Nitin Gupta <nitin.gupta981(a)gmail.com>
>> wrote:
>>
>> Hi Jim
>>
>>
>>
>> Looks like sdf to sdi is the nvme , please correct me if i ma wrong
>>
>>
>>
>> -bash-4.2# lsblk -S
>>
>> NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
>>
>> sda  0:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>>
>> sdb  1:0:0:0    disk ATA      ST31000524NS     SN11 sata
>>
>> sdc  2:0:0:0    disk ATA      ST31000524NS     SN12 sata
>>
>> sdd  3:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>>
>> sde  5:0:0:0    disk ATA      SAMSUNG MZ7WD120 103Q sata
>>
>> sdf  6:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>>
>> sdg  7:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>>
>> sdh  8:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>>
>> sdi  9:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>>
>>
>>
>> Regards
>>
>> Nitin
>>
>>
>>
>> On Tue, Oct 3, 2017 at 11:21 AM, Nitin Gupta <nitin.gupta981(a)gmail.com>
>> wrote:
>>
>> Hi Jim
>>
>>
>>
>> i am getting below output for lspci  for NVram
>>
>>
>>
>> d8:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53
>> (rev 02)
>>
>> d9:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53
>> (rev 02)
>>
>> da:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53
>> (rev 02)
>>
>> db:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53
>> (rev 02)
>>
>>
>>
>> lsblk
>>
>>
>>
>> NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
>>
>> sda      8:0    0 223.6G  0 disk
>>
>> ├─sda1   8:1    0     6G  0 part [SWAP]
>>
>> ├─sda2   8:2    0   512M  0 part /bootmgr
>>
>> └─sda3   8:3    0 217.1G  0 part /
>>
>> sdb      8:16   0 931.5G  0 disk
>>
>> sdc      8:32   0 931.5G  0 disk
>>
>> sdd      8:48   0 223.6G  0 disk
>>
>> sde      8:64   0 111.8G  0 disk
>>
>> sdf      8:80   0 223.6G  0 disk
>>
>> sdg      8:96   0 223.6G  0 disk
>>
>> sdh      8:112  0 223.6G  0 disk
>>
>> sdi      8:128  0 223.6G  0 disk
>>
>>
>>
>>
>>
>> So how to know which one is virto-scsi  controller basically i wanted to
>> run fio test  with nvme mapped device
>>
>>
>>
>>
>>
>> On Mon, Oct 2, 2017 at 11:12 PM, Harris, James R <
>> james.r.harris(a)intel.com> wrote:
>>
>> Hi Nitin,
>>
>>
>>
>> lspci should show you the virtio-scsi controller PCI device.
>>
>> lsblk –S should show you the SCSI block devices attached to that
>> virtio-scsi controller.
>>
>>
>>
>> -Jim
>>
>>
>>
>>
>>
>> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Nitin Gupta <
>> nitin.gupta981(a)gmail.com>
>> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
>> *Date: *Monday, October 2, 2017 at 10:38 AM
>> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
>> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>>
>>
>>
>> Hi Jim
>>
>>
>>
>> Thanks for your reply and sorry for my late reply ..
>>
>> could you please  give one example to know how to identify virtio-scsi
>> controller in the linux
>>
>> i mean which directory it will be present or which file system ?
>>
>>
>>
>> Regards
>>
>> Nitin
>>
>>
>>
>> On Thu, Sep 28, 2017 at 8:30 PM, Harris, James R <
>> james.r.harris(a)intel.com> wrote:
>>
>> Hi Nitin,
>>
>>
>>
>> You should see a virtio-scsi controller in the VM, not an NVMe device.
>> This controller should have one LUN attached, which SPDK vhost maps to the
>> NVMe device attached to the host.
>>
>>
>>
>> -Jim
>>
>>
>>
>>
>>
>> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Nitin Gupta <
>> nitin.gupta981(a)gmail.com>
>> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
>> *Date: *Thursday, September 28, 2017 at 4:07 AM
>> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
>> *Subject: *[SPDK] nvme drive not showing in vm in spdk
>>
>>
>>
>> Hi All
>>
>>
>>
>> i am new in spdk development and currently doing spdk setup in that  was
>> able to setup back-end storage with NVME .After running the VM with
>> following command , there is no nvme drive present .
>>
>>
>>
>> /usr/local/bin/qemu-system-x86_64 -m 1024 -object
>> memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
>> -nographic -no-user-config -nodefaults -serial
>> mon:telnet:localhost:7704,server,nowait -monitor
>> mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem -drive
>> file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device
>> ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0
>> -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm
>>
>>
>>
>>
>>
>> how to identify which is nvme drive ?
>>
>> is there any way to  enable nvme from qemu command ?
>>
>>
>>
>> PS:  i have already specified the nvme drive in vhost.conf.in
>>
>>
>>
>> Regards
>>
>> Nitin
>>
>>
>> _______________________________________________
>> SPDK mailing list
>> SPDK(a)lists.01.org
>> https://lists.01.org/mailman/listinfo/spdk
>>
>>
>>
>>
>> _______________________________________________
>> SPDK mailing list
>> SPDK(a)lists.01.org
>> https://lists.01.org/mailman/listinfo/spdk
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> _______________________________________________
>> SPDK mailing list
>> SPDK(a)lists.01.org
>> https://lists.01.org/mailman/listinfo/spdk
>>
>>
>>
>>
>> _______________________________________________
>> SPDK mailing list
>> SPDK(a)lists.01.org
>> https://lists.01.org/mailman/listinfo/spdk
>>
>>
>>
>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 87049 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [SPDK] nvme drive not showing in vm in spdk
@ 2017-10-12 16:04 Nitin Gupta
  0 siblings, 0 replies; 23+ messages in thread
From: Nitin Gupta @ 2017-10-12 16:04 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 26653 bytes --]

Hi PawelX

Sorry, I forgot to add the command output. Both the NVMe and the AIO (/dev/sdf)
backends are now enabled.
From the output I gather that sdd (with sdd2) is the NVMe disk,
but sdc also resolves to PCI address 00:02.0, so I am not sure whether it is the SSD (see the note after the ls output below).

[root(a)localhost ~]# ls -l /sys/block | grep host
lrwxrwxrwx 1 root root 0 Oct 12 11:54 sda -> ../devices/pci0000:00/0000:00:01.1/host1/target1:0:0/1:0:0:0/block/sda
lrwxrwxrwx 1 root root 0 Oct 12 11:54 sdb -> ../devices/pci0000:00/0000:00:02.0/virtio0/host2/target2:0:0/2:0:0:0/block/sdb
lrwxrwxrwx 1 root root 0 Oct 12 11:54 sdc -> ../devices/pci0000:00/0000:00:02.0/virtio0/host2/target2:0:1/2:0:1:0/block/sdc
lrwxrwxrwx 1 root root 0 Oct 12 11:54 sdd -> ../devices/pci0000:00/0000:00:02.0/virtio0/host2/target2:0:2/2:0:2:0/block/sdd
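
Note: all three virtio disks (sdb, sdc, sdd) hang off the same virtio-scsi controller at 00:02.0, so the PCI address alone cannot tell them apart; the SCSI target number can. A small sketch (not verified on this exact setup) that prints each disk with its targetH:B:T, where the last digit should line up with the Dev index in the VhostScsi0 section (Dev 0 Malloc0 -> target2:0:0, Dev 1 AIO0 -> target2:0:1, Dev 2 Nvme0n1 -> target2:0:2):

for d in /sys/block/sd*; do
  # resolve the sysfs symlink and pull out the SCSI target part
  tgt=$(readlink -f "$d" | grep -o 'target[0-9]*:[0-9]*:[0-9]*')
  [ -n "$tgt" ] && echo "$(basename "$d") -> $tgt"
done
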
[root(a)localhost ~]# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton
II]
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 SCSI storage controller: Red Hat, Inc Virtio SCSI
[root(a)localhost ~]# lsblk
NAME                        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                           8:0    0     8G  0 disk
├─sda1                        8:1    0   500M  0 part /boot
└─sda2                        8:2    0   7.5G  0 part
  ├─VolGroup-lv_root (dm-0) 253:0    0   6.7G  0 lvm  /
  └─VolGroup-lv_swap (dm-1) 253:1    0   816M  0 lvm  [SWAP]
sdb                           8:16   0   256M  0 disk
sdc                           8:32   0 223.6G  0 disk
sdd                           8:48   0 419.2G  0 disk
├─sdd1                        8:49   0  20.2M  0 part
└─sdd2                        8:50   0 419.2G  0 part
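
Once that mapping is confirmed, the fio test against the NVMe-backed disk mentioned earlier in this thread could look roughly like the sketch below (/dev/sdd is only assumed to be the Nvme0n1 target, and the job parameters are arbitrary; randread is used so nothing on the disk gets overwritten):

fio --name=vhost-nvme-test --filename=/dev/sdd --direct=1 \
    --ioengine=libaio --rw=randread --bs=4k --iodepth=32 \
    --runtime=60 --time_based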



On Thu, Oct 12, 2017 at 7:54 PM, Wodkowski, PawelX <
pawelx.wodkowski(a)intel.com> wrote:

> Steps are already provided. Just use ‘*ls -l /sys/block/ | grep host*’ to
> check which SCSI target each /dev/sdX device is.
>
>
>
> *From:* Nitin Gupta [mailto:nitin.gupta981(a)gmail.com]
> *Sent:* Thursday, October 12, 2017 3:58 PM
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>; Harris,
> James R <james.r.harris(a)intel.com>; Wodkowski, PawelX <
> pawelx.wodkowski(a)intel.com>
> *Subject:* Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim / Pawel
>
>
>
> Thanks for you response , could you please  help me to find ssd device as
> well
>
> i mean i am trying to map ssd device as well .please find below output
>
>
>
> 1. for my host machine output of lsscsi is
>
>
>
> -bash-4.2# lsscsi
>
> [0:0:0:0]    disk    ATA      INTEL SSDSC2BB24 0039  /dev/sda
>
> [1:0:0:0]    disk    ATA      ST31000524NS     SN11  /dev/sdb
>
> [2:0:0:0]    disk    ATA      ST31000524NS     SN12  /dev/sdc
>
> [3:0:0:0]    disk    ATA      INTEL SSDSC2BB24 0039  /dev/sdd
>
> [5:0:0:0]    disk    ATA      SAMSUNG MZ7WD120 103Q  /dev/sde
>
> [6:0:0:0]    disk    ATA      INTEL SSDSC2BB24 0039  /dev/sdf
>
> [7:0:0:0]    disk    ATA      INTEL SSDSC2BB24 0039  /dev/sdg
>
> [8:0:0:0]    disk    ATA      INTEL SSDSC2BB24 0039  /dev/sdh
>
> [9:0:0:0]    disk    ATA      INTEL SSDSC2BB24 0039  /dev/sdi
>
>
>
> 2.  changes done in conf file only to map /dev/sdf partition  int AIO0
> device
>
>
>
> # Users must change this section to match the /dev/sdX devices to be
>
> #  exported as vhost scsi drives. The devices are accessed using Linux AIO.
>
> [AIO]
>
>   #AIO /dev/sda AIO0
>
>   AIO /dev/sdf AIO0
>
>
>
> 3.  added AIO device in conf
>
>
>
> # Vhost scsi controller configuration
>
> # Users should change the VhostScsi section(s) below to match the desired
>
> # vhost configuration.
>
> # Name is minimum required
>
> [VhostScsi0]
>
>   # Define name for controller
>
>   Name vhost.0
>
>   # Assign devices from backend
>
>   # Use the first malloc device
>
>   Dev 0 Malloc0
>
>   #Dev 1 Malloc1
>
>   Dev 2 Nvme0n1
>
>   #Dev 3 Malloc3
>
>
>
>   # Use the first AIO device
>
>   Dev 1 AIO0
>
>   # Use the frist Nvme device
>
>   #Dev 0 Nvme0n1
>
>   #Dev 0 Nvme0n1p0
>
>   #Dev 1 Nvme0n1p1
>
>   # Use the third partition from second Nvme device
>
>   #Dev 3 Nvme1n1p2
>
>
>
>   # Start the poller for this vhost controller on one of the cores in
>
>   #  this cpumask.  By default, it not specified, will use any core in the
>
>   #  SPDK process.
>
>   #Cpumask 0x2
>
>
>
>
>
>
>
> 4 . running below command
>
>
>
> usr/local/bin/qemu-system-x86_64 -name sl6.9 -m 1024 -object
> memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
> -nographic -no-user-config -nodefaults -serial mon:telnet:localhost:7704,server,nowait
> -monitor mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem
> -drive file=/home/qemu/qcows1,format=qcow2,if=none,id=disk -device
> ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0
> -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm
>
>
>
>
>
> 5.   in the guest VM  below is no output for   lsscsi
>
>
>
> [root(a)localhost ~]# lsblk
>
> NAME                        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
>
> sda                           8:0    0     8G  0 disk
>
> ├─sda1                        8:1    0   500M  0 part /boot
>
> └─sda2                        8:2    0   7.5G  0 part
>
>   ├─VolGroup-lv_root (dm-0) 253:0    0   6.7G  0 lvm  /
>
>   └─VolGroup-lv_swap (dm-1) 253:1    0   816M  0 lvm  [SWAP]
>
> sdb                           8:16   0   256M  0 disk
>
> sdc                           8:32   0 223.6G  0 disk
>
> sdd                           8:48   0 419.2G  0 disk
>
> ├─sdd1                        8:49   0  20.2M  0 part
>
> └─sdd2                        8:50   0 419.2G  0 part
>
>
>
> [root(a)localhost ~]# lspci
>
> 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev
> 02)
>
> 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
>
> 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton
> II]
>
> 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
>
> 00:02.0 SCSI storage controller: Red Hat, Inc Virtio SCSI
>
>
>
> [root(a)localhost ~]# lsscsi
>
> [root(a)localhost ~]#
>
>
>
> Regards
>
> nitin
>
>
>
>
>
>
>
>
>
> On Wed, Oct 11, 2017 at 5:53 PM, Wodkowski, PawelX <
> pawelx.wodkowski(a)intel.com> wrote:
>
> Most likely, yes.
>
>
>
> Pawel
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Nitin Gupta
> *Sent:* Wednesday, October 11, 2017 12:49 PM
>
>
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Pawel
>
>
>
> Thanks for you reply . some how in my guest VM lsblk -S is not working
>
> please find below output of  ls -l /sys/block/ | grep host
>
>
>
> [root(a)localhost ~]# ls -l /sys/block/ | grep host
>
> lrwxrwxrwx 1 root root 0 Oct 11 06:41 sda -> ../devices/pci0000:00/0000:00:
> 01.1/host1/target1:0:0/1:0:0:0/block/sda
>
> lrwxrwxrwx 1 root root 0 Oct 11 06:41 sdb -> ../devices/pci0000:00/0000:00:
> 02.0/virtio0/host2/target2:0:0/2:0:0:0/block/sdb
>
> lrwxrwxrwx 1 root root 0 Oct 11 06:41 sdc -> ../devices/pci0000:00/0000:00:
> 02.0/virtio0/host2/target2:0:2/2:0:2:0/block/sdc
>
>
>
> looks like then sdc is the nvme device , please correct me if i am wrong
>
>
>
> Regards
>
> Nitin
>
>
>
>
>
> On Wed, Oct 11, 2017 at 3:22 PM, Wodkowski, PawelX <
> pawelx.wodkowski(a)intel.com> wrote:
>
> Consider this config:
>
>
>
> [Malloc]
>
>   NumberOfLuns 8
>
>   LunSizeInMb 128
>
>   BlockSize 512
>
>
>
> [Split]
>
>   Split Nvme0n1 8
>
>
>
> [VhostScsi0]
>
>   Name ctrl0
>
>   Dev 0 Nvme0n1p0
>
>   Dev 1 Malloc0
>
>
>
> This is output from my VM (for readability I filter out devices using ‘|
> grep host’).
>
>
>
> # lspci
>
> 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev
> 02)
>
> 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
>
> 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton
> II]
>
> 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
>
> 00:02.0 VGA compatible controller: Device 1234:1111 (rev 02)
>
> 00:03.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet
> Controller (rev 03)
>
> *00:04.0* *SCSI storage controller: Red Hat, Inc Virtio SCSI*
>
>
>
> # ll /sys/block/ | grep host
>
> lrwxrwxrwx  1 root root 0 Oct 11 15:16 sda ->
> ../devices/pci0000:00/0000:00:01.1/ata2/host1/target1:0:0/1:
> 0:0:0/block/sda/
>
> lrwxrwxrwx  1 root root 0 Oct 11 15:16 sdb -> ../devices/pci0000:00/0000:
> *00:04.0*/virtio0/host2/*target2:0:0*/2:0:0:0/block/*sdb*/
>
> lrwxrwxrwx  1 root root 0 Oct 11 15:16 sdc -> ../devices/pci0000:00/0000:
> *00:04.0*/virtio0/host2/*target2:0:1*/2:0:1:0/block/*sdc*/
>
>
>
> As you can see (in this case) device which is reported as “SCSI storage
> controller: Red Hat, Inc Virtio SCSI“ is SPDK vhost device. Now find PCI
> address
>
> and use it to figure out which device is which. In this case I have two
> targets defined in vhost.conf (one is split of NVMe and one is Malloc disk)
> and have two
>
> Scsi disks: *sdb* and *sdc* in VM. I know that in vhost.conf *Dev 0* is
> *Nvme0n1p0* so I know that target2:0:*0* is NVMe split device mapped to
> *sdb*. Analogue
>
> target2:0:1 is Malloc0 mapped to *sdc*. To confirm this I run following
> command:
>
>
>
> # lsblk -S
>
> NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
>
> sda  1:0:0:0    disk ATA      QEMU HARDDISK    2.5+ ata
>
> *sdb  2:0:0:0*    disk INTEL    *Split Disk*       0001
>
> *sdc  2:0:1:0*    disk INTEL    *Malloc disk*      0001
>
>
>
> Pawel
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Nitin Gupta
> *Sent:* Wednesday, October 11, 2017 11:07 AM
> *To:* Harris, James R <james.r.harris(a)intel.com>
>
>
> *Cc:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> i was able to update my  environment for guest VM which runs now
> in 2.6.32-696.el6.x86_64
>
> please find the lspci output and able to load virtio-scsi  module as well
>
>
>
> Please help me  to understand  , how to identify nvme disk mapping .
>
> below mapping we used  in  etc/spdk/vhost.conf.in
>
>
>
> Question :-
>
>
>
> [VhostScsi0]
>
>   # Define name for controller
>
>   Name vhost.0
>
>   # Assign devices from backend
>
>   # Use the first malloc device
>
>   Dev 0 Malloc0
>
>   #Dev 1 Malloc1
>
>   Dev 2 Nvme0n1
>
>   #Dev 3 Malloc3
>
>
>
> [root(a)localhost ~]# lsblk
>
> NAME                        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
>
> sda                           8:0    0     8G  0 disk
>
> ├─sda1                        8:1    0   500M  0 part /boot
>
> └─sda2                        8:2    0   7.5G  0 part
>
>   ├─VolGroup-lv_root (dm-0) 253:0    0   6.7G  0 lvm  /
>
>   └─VolGroup-lv_swap (dm-1) 253:1    0   816M  0 lvm  [SWAP]
>
> sdb                           8:16   0   256M  0 disk
>
> sdc                           8:32   0 419.2G  0 disk
>
> ├─sdc1                        8:33   0  20.2M  0 part
>
> └─sdc2                        8:34   0 419.2G  0 part
>
>
>
> [root(a)localhost ~]# lspci
>
> 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev
> 02)
>
> 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
>
> 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton
> II]
>
> 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
>
> 00:02.0 SCSI storage controller: Red Hat, Inc Virtio SCSI
>
>
>
>
>
> [root(a)localhost ~]# lsmod
>
> Module                  Size  Used by
>
> ib_ipoib               80839  0
>
> rdma_ucm               15739  0
>
> ib_ucm                 12328  0
>
> ib_uverbs              40372  2 rdma_ucm,ib_ucm
>
> ib_umad                13487  0
>
> rdma_cm                36555  1 rdma_ucm
>
> ib_cm                  36900  3 ib_ipoib,ib_ucm,rdma_cm
>
> iw_cm                  32976  1 rdma_cm
>
> ib_sa                  24092  4 ib_ipoib,rdma_ucm,rdma_cm,ib_cm
>
> ib_mad                 41340  3 ib_umad,ib_cm,ib_sa
>
> ib_core                82732  10 ib_ipoib,rdma_ucm,ib_ucm,ib_
> uverbs,ib_umad,rdma_cm,ib_cm,iw_cm,ib_sa,ib_mad
>
> ib_addr                 8304  3 rdma_ucm,rdma_cm,ib_core
>
> ipv6                  336368  14 ib_ipoib,ib_addr
>
> i2c_piix4              11232  0
>
> i2c_core               29132  1 i2c_piix4
>
> sg                     29350  0
>
> ext4                  381065  2
>
> jbd2                   93284  1 ext4
>
> mbcache                 8193  1 ext4
>
> virtio_scsi            10761  0
>
> sd_mod                 37158  3
>
> crc_t10dif              1209  1 sd_mod
>
> virtio_pci              7512  0
>
> virtio_ring             8891  2 virtio_scsi,virtio_pci
>
> virtio                  5639  2 virtio_scsi,virtio_pci
>
> pata_acpi               3701  0
>
> ata_generic             3837  0
>
> ata_piix               24409  2
>
> dm_mirror              14864  0
>
> dm_region_hash         12085  1 dm_mirror
>
> dm_log                  9930  2 dm_mirror,dm_region_hash
>
> dm_mod                102467  8 dm_mirror,dm_log
>
>
>
>
>
> Regards
>
> Nitin
>
>
>
>
>
> On Sat, Oct 7, 2017 at 10:10 AM, Nitin Gupta <nitin.gupta981(a)gmail.com>
> wrote:
>
> Hi Jim
>
>
>
> Thanks , i will try to install virtio-scsi and update you
>
>
>
> Regards
>
> Nitin
>
>
>
> On Sat, Oct 7, 2017 at 12:01 AM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Nitin,
>
>
>
> Can you try loading the virtio-scsi module in the guest VM?
>
>
>
> Without a virtio-scsi driver in the guest, there is no way for the guest
> to see the virtio-scsi device backend created by the SPDK vhost target.
>
>
>
> Thanks,
>
>
>
> -Jim
>
>
>
>
>
> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
> *Date: *Friday, October 6, 2017 at 1:28 AM
>
>
> *To: *James Harris <james.r.harris(a)intel.com>
> *Cc: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> Thanks for looking logs ,
>
> Please find attached vhost log  and qemu command which i am invoking
>
>
>
> /usr/local/bin/qemu-system-x86_64 -m 1024 -object
> memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
> -nographic -no-user-config -nodefaults -serial mon:telnet:localhost:7704,server,nowait
> -monitor mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem
> -drive file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device
> ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0
> -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm
>
>
>
> 2.  looks like there is no virtio-scsi module loaded in guest VM
>
>
>
> i ran lsmod command in guest VM please find below output
>
>
>
> [root(a)localhost ~]# lsmod
>
> Module                  Size  Used by
>
> ipt_REJECT              2349  2
>
> nf_conntrack_ipv4       9440  2
>
> nf_defrag_ipv4          1449  1 nf_conntrack_ipv4
>
> iptable_filter          2759  1
>
> ip_tables              17765  1 iptable_filter
>
> ip6t_REJECT             4562  2
>
> nf_conntrack_ipv6       8650  2
>
> nf_defrag_ipv6         12148  1 nf_conntrack_ipv6
>
> xt_state                1458  4
>
> nf_conntrack           79611  3 nf_conntrack_ipv4,nf_
> conntrack_ipv6,xt_state
>
> ip6table_filter         2855  1
>
> ip6_tables             19424  1 ip6table_filter
>
> ipv6                  322291  15 ip6t_REJECT,nf_conntrack_ipv6,
> nf_defrag_ipv6
>
> i2c_piix4              12574  0
>
> i2c_core               31274  1 i2c_piix4
>
> sg                     30090  0
>
> ext4                  359671  2
>
> mbcache                 7918  1 ext4
>
> jbd2                   88768  1 ext4
>
> sd_mod                 38196  3
>
> crc_t10dif              1507  1 sd_mod
>
> virtio_pci              6653  0
>
> virtio_ring             7169  1 virtio_pci
>
> virtio                  4824  1 virtio_pci
>
> pata_acpi               3667  0
>
> ata_generic             3611  0
>
> ata_piix               22652  2
>
> dm_mod                 75539  6
>
>
>
>
>
> please let me know if i am missing something
>
>
>
> Regards
>
> Nitin
>
>
>
>
>
> On Fri, Oct 6, 2017 at 9:28 AM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Thanks Nitin.  I don’t see the SPDK vhost log attached though – could you
> add it?
>
>
>
> Can you also confirm the virtio-scsi module is loaded in your guest VM?
>
>
>
> -Jim
>
>
>
>
>
> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
> *Date: *Thursday, October 5, 2017 at 3:49 AM
>
>
> *To: *James Harris <james.r.harris(a)intel.com>
> *Cc: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> Please find attached Vm-guest-boot up log and host dmesg log
>
>
>
> Regards
>
> Nitin
>
>
>
> On Wed, Oct 4, 2017 at 11:36 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Nitin,
>
>
>
> It would be most helpful if you could get lspci working on your guest VM.
>
>
>
> Could you post dmesg contents from your VM and the SPDK vhost log after
> the VM has booted?
>
>
>
> -Jim
>
>
>
>
>
> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
> *Date: *Wednesday, October 4, 2017 at 10:42 AM
> *To: *James Harris <james.r.harris(a)intel.com>
> *Cc: *Storage Performance Development Kit <spdk(a)lists.01.org>
>
>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> i am running this  on remote box which is having linux 3.10 .
>
> on the guest VM lspci command is not working and i am not able to install
> lspci as well
>
> below is the lsblk -a command output -S is also not available in guest VM
>
>
>
> NAME                        MAJ:MIN RM   SIZE RO MOUNTPOINT
>
> ram0                          1:0    0    16M  0
>
> ram1                          1:1    0    16M  0
>
> ram2                          1:2    0    16M  0
>
> ram3                          1:3    0    16M  0
>
> ram4                          1:4    0    16M  0
>
> ram5                          1:5    0    16M  0
>
> ram6                          1:6    0    16M  0
>
> ram7                          1:7    0    16M  0
>
> ram8                          1:8    0    16M  0
>
> ram9                          1:9    0    16M  0
>
> ram10                         1:10   0    16M  0
>
> ram11                         1:11   0    16M  0
>
> ram12                         1:12   0    16M  0
>
> ram13                         1:13   0    16M  0
>
> ram14                         1:14   0    16M  0
>
> ram15                         1:15   0    16M  0
>
> loop0                         7:0    0         0
>
> loop1                         7:1    0         0
>
> loop2                         7:2    0         0
>
> loop3                         7:3    0         0
>
> loop4                         7:4    0         0
>
> loop5                         7:5    0         0
>
> loop6                         7:6    0         0
>
> loop7                         7:7    0         0
>
> sda                           8:0    0     8G  0
>
> ├─sda1                        8:1    0   500M  0 /boot
>
> └─sda2                        8:2    0   7.5G  0
>
>   ├─VolGroup-lv_root (dm-0) 253:0    0   5.6G  0 /
>
>   └─VolGroup-lv_swap (dm-1) 253:1    0     2G  0 [SWAP]
>
>
>
> Regards
>
> Nitin
>
>
>
> On Wed, Oct 4, 2017 at 10:13 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Nitin,
>
>
>
> Are you running these commands from the host or the VM?  You will only see
> the virtio-scsi controller in lspci output from the guest VM.
>
>
>
> -Jim
>
>
>
>
>
> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
> *Date: *Tuesday, October 3, 2017 at 12:23 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>, James
> Harris <james.r.harris(a)intel.com>
>
>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> One quick update , after running ./script/setup.h for spdk nvme drive is
> converting to uio generic pci device .
>
> so only difference which i found after and before mapping is command for
> ls -l /dev/u*
>
>
>
> can i use /dev/uio0 are the nvme device
>
> Regards
>
> Nitin
>
>
>
> On Tue, Oct 3, 2017 at 11:30 AM, Nitin Gupta <nitin.gupta981(a)gmail.com>
> wrote:
>
> Hi Jim
>
>
>
> Looks like sdf to sdi is the nvme , please correct me if i ma wrong
>
>
>
> -bash-4.2# lsblk -S
>
> NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
>
> sda  0:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sdb  1:0:0:0    disk ATA      ST31000524NS     SN11 sata
>
> sdc  2:0:0:0    disk ATA      ST31000524NS     SN12 sata
>
> sdd  3:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sde  5:0:0:0    disk ATA      SAMSUNG MZ7WD120 103Q sata
>
> sdf  6:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sdg  7:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sdh  8:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sdi  9:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
>
>
> Regards
>
> Nitin
>
>
>
> On Tue, Oct 3, 2017 at 11:21 AM, Nitin Gupta <nitin.gupta981(a)gmail.com>
> wrote:
>
> Hi Jim
>
>
>
> i am getting below output for lspci  for NVram
>
>
>
> d8:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
>
> d9:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
>
> da:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
>
> db:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
>
>
>
> lsblk
>
>
>
> NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
>
> sda      8:0    0 223.6G  0 disk
>
> ├─sda1   8:1    0     6G  0 part [SWAP]
>
> ├─sda2   8:2    0   512M  0 part /bootmgr
>
> └─sda3   8:3    0 217.1G  0 part /
>
> sdb      8:16   0 931.5G  0 disk
>
> sdc      8:32   0 931.5G  0 disk
>
> sdd      8:48   0 223.6G  0 disk
>
> sde      8:64   0 111.8G  0 disk
>
> sdf      8:80   0 223.6G  0 disk
>
> sdg      8:96   0 223.6G  0 disk
>
> sdh      8:112  0 223.6G  0 disk
>
> sdi      8:128  0 223.6G  0 disk
>
>
>
>
>
> So how to know which one is virto-scsi  controller basically i wanted to
> run fio test  with nvme mapped device
>
>
>
>
>
> On Mon, Oct 2, 2017 at 11:12 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Nitin,
>
>
>
> lspci should show you the virtio-scsi controller PCI device.
>
> lsblk -S should show you the SCSI block devices attached to that
> virtio-scsi controller.
>
>
>
> -Jim
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Nitin Gupta <
> nitin.gupta981(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Monday, October 2, 2017 at 10:38 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> Thanks for your reply and sorry for my late reply ..
>
> could you please  give one example to know how to identify virtio-scsi
> controller in the linux
>
> i mean which directory it will be present or which file system ?
>
>
>
> Regards
>
> Nitin
>
>
>
> On Thu, Sep 28, 2017 at 8:30 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Nitin,
>
>
>
> You should see a virtio-scsi controller in the VM, not an NVMe device.
> This controller should have one LUN attached, which SPDK vhost maps to the
> NVMe device attached to the host.
>
>
>
> -Jim
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Nitin Gupta <
> nitin.gupta981(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Thursday, September 28, 2017 at 4:07 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *[SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi All
>
>
>
> i am new in spdk development and currently doing spdk setup in that  was
> able to setup back-end storage with NVME .After running the VM with
> following command , there is no nvme drive present .
>
>
>
> /usr/local/bin/qemu-system-x86_64 -m 1024 -object
> memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
> -nographic -no-user-config -nodefaults -serial mon:telnet:localhost:7704,server,nowait
> -monitor mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem
> -drive file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device
> ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0
> -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm
>
>
>
>
>
> how to identify which is nvme drive ?
>
> is there any way to  enable nvme from qemu command ?
>
>
>
> PS:  i have already specified the nvme drive in vhost.conf.in
>
>
>
> Regards
>
> Nitin
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 85057 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [SPDK] nvme drive not showing in vm in spdk
@ 2017-10-12 14:24 Wodkowski, PawelX
  0 siblings, 0 replies; 23+ messages in thread
From: Wodkowski, PawelX @ 2017-10-12 14:24 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 23141 bytes --]

Steps are already provided. Just use ‘ls -l /sys/block/ | grep host’ to check which SCSI target each /dev/sdX device corresponds to.
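For example, a minimal sketch (assuming a bash shell in the guest) that prints each SCSI disk together with the virtio target it sits on; the target index should line up with the Dev <n> slot in the [VhostScsi0] section:

for d in /sys/block/sd*; do
    # readlink exposes the sysfs path, which contains the targetH:B:T triple
    echo "$(basename "$d") -> $(readlink "$d" | grep -o 'target[0-9]*:[0-9]*:[0-9]*')"
done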

From: Nitin Gupta [mailto:nitin.gupta981(a)gmail.com]
Sent: Thursday, October 12, 2017 3:58 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>; Harris, James R <james.r.harris(a)intel.com>; Wodkowski, PawelX <pawelx.wodkowski(a)intel.com>
Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim / Pawel

Thanks for your response. Could you please also help me find the SSD device?
I mean I am trying to map an SSD device as well. Please find the output below.

1. On my host machine, the output of lsscsi is:

-bash-4.2# lsscsi
[0:0:0:0]    disk    ATA      INTEL SSDSC2BB24 0039  /dev/sda
[1:0:0:0]    disk    ATA      ST31000524NS     SN11  /dev/sdb
[2:0:0:0]    disk    ATA      ST31000524NS     SN12  /dev/sdc
[3:0:0:0]    disk    ATA      INTEL SSDSC2BB24 0039  /dev/sdd
[5:0:0:0]    disk    ATA      SAMSUNG MZ7WD120 103Q  /dev/sde
[6:0:0:0]    disk    ATA      INTEL SSDSC2BB24 0039  /dev/sdf
[7:0:0:0]    disk    ATA      INTEL SSDSC2BB24 0039  /dev/sdg
[8:0:0:0]    disk    ATA      INTEL SSDSC2BB24 0039  /dev/sdh
[9:0:0:0]    disk    ATA      INTEL SSDSC2BB24 0039  /dev/sdi

2. Changes made in the conf file (only to map /dev/sdf into the AIO0 device):

# Users must change this section to match the /dev/sdX devices to be
#  exported as vhost scsi drives. The devices are accessed using Linux AIO.
[AIO]
  #AIO /dev/sda AIO0
  AIO /dev/sdf AIO0

3. Added the AIO device in the conf (see the mapping sketch after the step 5 output):

# Vhost scsi controller configuration
# Users should change the VhostScsi section(s) below to match the desired
# vhost configuration.
# Name is minimum required
[VhostScsi0]
  # Define name for controller
  Name vhost.0
  # Assign devices from backend
  # Use the first malloc device
  Dev 0 Malloc0
  #Dev 1 Malloc1
  Dev 2 Nvme0n1
  #Dev 3 Malloc3

  # Use the first AIO device
  Dev 1 AIO0
  # Use the first Nvme device
  #Dev 0 Nvme0n1
  #Dev 0 Nvme0n1p0
  #Dev 1 Nvme0n1p1
  # Use the third partition from second Nvme device
  #Dev 3 Nvme1n1p2

  # Start the poller for this vhost controller on one of the cores in
  #  this cpumask.  By default, if not specified, it will use any core in the
  #  SPDK process.
  #Cpumask 0x2



4. Running the command below:

usr/local/bin/qemu-system-x86_64 -name sl6.9 -m 1024 -object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on -nographic -no-user-config -nodefaults -serial mon:telnet:localhost:7704,server,nowait -monitor mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem -drive file=/home/qemu/qcows1,format=qcow2,if=none,id=disk -device ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0 -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm


5. In the guest VM, there is no output for lsscsi:

[root(a)localhost ~]# lsblk
NAME                        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                           8:0    0     8G  0 disk
├─sda1                        8:1    0   500M  0 part /boot
└─sda2                        8:2    0   7.5G  0 part
  ├─VolGroup-lv_root (dm-0) 253:0    0   6.7G  0 lvm  /
  └─VolGroup-lv_swap (dm-1) 253:1    0   816M  0 lvm  [SWAP]
sdb                           8:16   0   256M  0 disk
sdc                           8:32   0 223.6G  0 disk
sdd                           8:48   0 419.2G  0 disk
├─sdd1                        8:49   0  20.2M  0 part
└─sdd2                        8:50   0 419.2G  0 part

[root(a)localhost ~]# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 SCSI storage controller: Red Hat, Inc Virtio SCSI

[root(a)localhost ~]# lsscsi
[root(a)localhost ~]#
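Reading this against the mapping rule described further down in the thread (the Dev index in [VhostScsi0] becomes the target index on the virtio-scsi controller), the config in step 3 should, assuming the virtio-scsi driver is bound, come out roughly as:

  Dev 0 Malloc0   -> target2:0:0 -> sdb (256M)
  Dev 1 AIO0      -> target2:0:1 -> sdc (223.6G, the host /dev/sdf passed through via AIO)
  Dev 2 Nvme0n1   -> target2:0:2 -> sdd (419.2G, presumably the NVMe namespace)

This can be cross-checked in the guest with ls -l /sys/block/ | grep host.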

Regards
Nitin




On Wed, Oct 11, 2017 at 5:53 PM, Wodkowski, PawelX <pawelx.wodkowski(a)intel.com<mailto:pawelx.wodkowski(a)intel.com>> wrote:
Most likely, yes.

Pawel

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of Nitin Gupta
Sent: Wednesday, October 11, 2017 12:49 PM

To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Pawel

Thanks for your reply. Somehow in my guest VM lsblk -S is not working;
please find below the output of ls -l /sys/block/ | grep host:

[root(a)localhost ~]# ls -l /sys/block/ | grep host
lrwxrwxrwx 1 root root 0 Oct 11 06:41 sda -> ../devices/pci0000:00/0000:00:01.1/host1/target1:0:0/1:0:0:0/block/sda
lrwxrwxrwx 1 root root 0 Oct 11 06:41 sdb -> ../devices/pci0000:00/0000:00:02.0/virtio0/host2/target2:0:0/2:0:0:0/block/sdb
lrwxrwxrwx 1 root root 0 Oct 11 06:41 sdc -> ../devices/pci0000:00/0000:00:02.0/virtio0/host2/target2:0:2/2:0:2:0/block/sdc

It looks like sdc is then the NVMe device; please correct me if I am wrong.

Regards
Nitin


On Wed, Oct 11, 2017 at 3:22 PM, Wodkowski, PawelX <pawelx.wodkowski(a)intel.com<mailto:pawelx.wodkowski(a)intel.com>> wrote:
Consider this config:

[Malloc]
  NumberOfLuns 8
  LunSizeInMb 128
  BlockSize 512

[Split]
  Split Nvme0n1 8

[VhostScsi0]
  Name ctrl0
  Dev 0 Nvme0n1p0
  Dev 1 Malloc0

This is output from my VM (for readability I filter out devices using ‘| grep host’).

# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 VGA compatible controller: Device 1234:1111 (rev 02)
00:03.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 03)
00:04.0 SCSI storage controller: Red Hat, Inc Virtio SCSI

# ll /sys/block/ | grep host
lrwxrwxrwx  1 root root 0 Oct 11 15:16 sda -> ../devices/pci0000:00/0000:00:01.1/ata2/host1/target1:0:0/1:0:0:0/block/sda/
lrwxrwxrwx  1 root root 0 Oct 11 15:16 sdb -> ../devices/pci0000:00/0000:00:04.0/virtio0/host2/target2:0:0/2:0:0:0/block/sdb/
lrwxrwxrwx  1 root root 0 Oct 11 15:16 sdc -> ../devices/pci0000:00/0000:00:04.0/virtio0/host2/target2:0:1/2:0:1:0/block/sdc/

As you can see, the device which is reported (in this case) as “SCSI storage controller: Red Hat, Inc Virtio SCSI” is the SPDK vhost device. Now find its PCI address
and use it to figure out which device is which. In this case I have two targets defined in vhost.conf (one is a split of the NVMe and one is a Malloc disk) and two
SCSI disks, sdb and sdc, in the VM. I know that in vhost.conf Dev 0 is Nvme0n1p0, so I know that target2:0:0 is the NVMe split device mapped to sdb. Analogously,
target2:0:1 is Malloc0 mapped to sdc. To confirm this I run the following command:

# lsblk -S
NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
sda  1:0:0:0    disk ATA      QEMU HARDDISK    2.5+ ata
sdb  2:0:0:0    disk INTEL    Split Disk       0001
sdc  2:0:1:0    disk INTEL    Malloc disk      0001
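Putting the config and the outputs together, the Dev index is what fixes the target number, so the example above can be read as the following sketch (the full-line comments are added here only for illustration):

[VhostScsi0]
  Name ctrl0
  # Dev 0 -> target2:0:0 -> sdb (MODEL "Split Disk")
  Dev 0 Nvme0n1p0
  # Dev 1 -> target2:0:1 -> sdc (MODEL "Malloc disk")
  Dev 1 Malloc0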

Pawel

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of Nitin Gupta
Sent: Wednesday, October 11, 2017 11:07 AM
To: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>

Cc: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

I was able to update my environment; the guest VM now runs 2.6.32-696.el6.x86_64.
Please find the lspci output below; I am able to load the virtio-scsi module as well.

Please help me to understand how to identify the nvme disk mapping.
Below is the mapping we used in etc/spdk/vhost.conf.in

Question :-

[VhostScsi0]
  # Define name for controller
  Name vhost.0
  # Assign devices from backend
  # Use the first malloc device
  Dev 0 Malloc0
  #Dev 1 Malloc1
  Dev 2 Nvme0n1
  #Dev 3 Malloc3

[root(a)localhost ~]# lsblk
NAME                        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                           8:0    0     8G  0 disk
├─sda1                        8:1    0   500M  0 part /boot
└─sda2                        8:2    0   7.5G  0 part
  ├─VolGroup-lv_root (dm-0) 253:0    0   6.7G  0 lvm  /
  └─VolGroup-lv_swap (dm-1) 253:1    0   816M  0 lvm  [SWAP]
sdb                           8:16   0   256M  0 disk
sdc                           8:32   0 419.2G  0 disk
├─sdc1                        8:33   0  20.2M  0 part
└─sdc2                        8:34   0 419.2G  0 part

[root(a)localhost ~]# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 SCSI storage controller: Red Hat, Inc Virtio SCSI


[root(a)localhost ~]# lsmod
Module                  Size  Used by
ib_ipoib               80839  0
rdma_ucm               15739  0
ib_ucm                 12328  0
ib_uverbs              40372  2 rdma_ucm,ib_ucm
ib_umad                13487  0
rdma_cm                36555  1 rdma_ucm
ib_cm                  36900  3 ib_ipoib,ib_ucm,rdma_cm
iw_cm                  32976  1 rdma_cm
ib_sa                  24092  4 ib_ipoib,rdma_ucm,rdma_cm,ib_cm
ib_mad                 41340  3 ib_umad,ib_cm,ib_sa
ib_core                82732  10 ib_ipoib,rdma_ucm,ib_ucm,ib_uverbs,ib_umad,rdma_cm,ib_cm,iw_cm,ib_sa,ib_mad
ib_addr                 8304  3 rdma_ucm,rdma_cm,ib_core
ipv6                  336368  14 ib_ipoib,ib_addr
i2c_piix4              11232  0
i2c_core               29132  1 i2c_piix4
sg                     29350  0
ext4                  381065  2
jbd2                   93284  1 ext4
mbcache                 8193  1 ext4
virtio_scsi            10761  0
sd_mod                 37158  3
crc_t10dif              1209  1 sd_mod
virtio_pci              7512  0
virtio_ring             8891  2 virtio_scsi,virtio_pci
virtio                  5639  2 virtio_scsi,virtio_pci
pata_acpi               3701  0
ata_generic             3837  0
ata_piix               24409  2
dm_mirror              14864  0
dm_region_hash         12085  1 dm_mirror
dm_log                  9930  2 dm_mirror,dm_region_hash
dm_mod                102467  8 dm_mirror,dm_log


Regards
Nitin


On Sat, Oct 7, 2017 at 10:10 AM, Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>> wrote:
Hi Jim

Thanks , i will try to install virtio-scsi and update you

Regards
Nitin

On Sat, Oct 7, 2017 at 12:01 AM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Nitin,

Can you try loading the virtio-scsi module in the guest VM?

Without a virtio-scsi driver in the guest, there is no way for the guest to see the virtio-scsi device backend created by the SPDK vhost target.

Thanks,

-Jim
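A quick way to check and, if needed, load it in the guest (a sketch; the module name virtio_scsi is taken from the lsmod listings elsewhere in this thread):

lsmod | grep virtio_scsi      # is the driver already loaded?
modprobe virtio_scsi          # load it if it was built as a module
dmesg | grep -i virtio        # the controller and its LUNs should be probed here
lsblk                         # the vhost-backed disks should now show up as sdX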


From: Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Date: Friday, October 6, 2017 at 1:28 AM

To: James Harris <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>
Cc: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

Thanks for looking at the logs.
Please find attached the vhost log, and below the qemu command which I am invoking:

/usr/local/bin/qemu-system-x86_64 -m 1024 -object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on -nographic -no-user-config -nodefaults -serial mon:telnet:localhost:7704,server,nowait -monitor mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem -drive file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0 -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm

2. It looks like there is no virtio-scsi module loaded in the guest VM.

I ran the lsmod command in the guest VM; please find the output below:

[root(a)localhost ~]# lsmod
Module                  Size  Used by
ipt_REJECT              2349  2
nf_conntrack_ipv4       9440  2
nf_defrag_ipv4          1449  1 nf_conntrack_ipv4
iptable_filter          2759  1
ip_tables              17765  1 iptable_filter
ip6t_REJECT             4562  2
nf_conntrack_ipv6       8650  2
nf_defrag_ipv6         12148  1 nf_conntrack_ipv6
xt_state                1458  4
nf_conntrack           79611  3 nf_conntrack_ipv4,nf_conntrack_ipv6,xt_state
ip6table_filter         2855  1
ip6_tables             19424  1 ip6table_filter
ipv6                  322291  15 ip6t_REJECT,nf_conntrack_ipv6,nf_defrag_ipv6
i2c_piix4              12574  0
i2c_core               31274  1 i2c_piix4
sg                     30090  0
ext4                  359671  2
mbcache                 7918  1 ext4
jbd2                   88768  1 ext4
sd_mod                 38196  3
crc_t10dif              1507  1 sd_mod
virtio_pci              6653  0
virtio_ring             7169  1 virtio_pci
virtio                  4824  1 virtio_pci
pata_acpi               3667  0
ata_generic             3611  0
ata_piix               22652  2
dm_mod                 75539  6


Please let me know if I am missing something.

Regards
Nitin


On Fri, Oct 6, 2017 at 9:28 AM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Thanks Nitin.  I don’t see the SPDK vhost log attached though – could you add it?

Can you also confirm the virtio-scsi module is loaded in your guest VM?

-Jim


From: Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Date: Thursday, October 5, 2017 at 3:49 AM

To: James Harris <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>
Cc: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

Please find attached Vm-guest-boot up log and host dmesg log

Regards
Nitin

On Wed, Oct 4, 2017 at 11:36 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Nitin,

It would be most helpful if you could get lspci working on your guest VM.

Could you post dmesg contents from your VM and the SPDK vhost log after the VM has booted?

-Jim


From: Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Date: Wednesday, October 4, 2017 at 10:42 AM
To: James Harris <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>
Cc: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>

Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

I am running this on a remote box which has Linux 3.10.
On the guest VM the lspci command is not working, and I am not able to install lspci either.
Below is the lsblk -a command output; -S is also not available in the guest VM.

NAME                        MAJ:MIN RM   SIZE RO MOUNTPOINT
ram0                          1:0    0    16M  0
ram1                          1:1    0    16M  0
ram2                          1:2    0    16M  0
ram3                          1:3    0    16M  0
ram4                          1:4    0    16M  0
ram5                          1:5    0    16M  0
ram6                          1:6    0    16M  0
ram7                          1:7    0    16M  0
ram8                          1:8    0    16M  0
ram9                          1:9    0    16M  0
ram10                         1:10   0    16M  0
ram11                         1:11   0    16M  0
ram12                         1:12   0    16M  0
ram13                         1:13   0    16M  0
ram14                         1:14   0    16M  0
ram15                         1:15   0    16M  0
loop0                         7:0    0         0
loop1                         7:1    0         0
loop2                         7:2    0         0
loop3                         7:3    0         0
loop4                         7:4    0         0
loop5                         7:5    0         0
loop6                         7:6    0         0
loop7                         7:7    0         0
sda                           8:0    0     8G  0
├─sda1                        8:1    0   500M  0 /boot
└─sda2                        8:2    0   7.5G  0
  ├─VolGroup-lv_root (dm-0) 253:0    0   5.6G  0 /
  └─VolGroup-lv_swap (dm-1) 253:1    0     2G  0 [SWAP]

Regards
Nitin

On Wed, Oct 4, 2017 at 10:13 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Nitin,

Are you running these commands from the host or the VM?  You will only see the virtio-scsi controller in lspci output from the guest VM.

-Jim


From: Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Date: Tuesday, October 3, 2017 at 12:23 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>, James Harris <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>

Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

One quick update: after running ./scripts/setup.sh for SPDK, the NVMe drive is converted to a uio generic PCI device.
So the only difference I found before and after mapping is in the output of ls -l /dev/u*.

Can I use /dev/uio0 as the NVMe device?
Regards
Nitin

On Tue, Oct 3, 2017 at 11:30 AM, Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>> wrote:
Hi Jim

Looks like sdf to sdi are the NVMe drives; please correct me if I am wrong.

-bash-4.2# lsblk -S
NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
sda  0:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdb  1:0:0:0    disk ATA      ST31000524NS     SN11 sata
sdc  2:0:0:0    disk ATA      ST31000524NS     SN12 sata
sdd  3:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sde  5:0:0:0    disk ATA      SAMSUNG MZ7WD120 103Q sata
sdf  6:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdg  7:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdh  8:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdi  9:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata

Regards
Nitin

On Tue, Oct 3, 2017 at 11:21 AM, Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>> wrote:
Hi Jim

I am getting the below output from lspci for the NVMe controllers:

d8:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)
d9:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)
da:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)
db:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)

lsblk

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 223.6G  0 disk
├─sda1   8:1    0     6G  0 part [SWAP]
├─sda2   8:2    0   512M  0 part /bootmgr
└─sda3   8:3    0 217.1G  0 part /
sdb      8:16   0 931.5G  0 disk
sdc      8:32   0 931.5G  0 disk
sdd      8:48   0 223.6G  0 disk
sde      8:64   0 111.8G  0 disk
sdf      8:80   0 223.6G  0 disk
sdg      8:96   0 223.6G  0 disk
sdh      8:112  0 223.6G  0 disk
sdi      8:128  0 223.6G  0 disk


So how do I know which one is the virtio-scsi controller? Basically I wanted to run an fio test with the NVMe-mapped device.


On Mon, Oct 2, 2017 at 11:12 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Nitin,

lspci should show you the virtio-scsi controller PCI device.
lsblk -S should show you the SCSI block devices attached to that virtio-scsi controller.

-Jim
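In the guest, the controller typically shows up in lspci as a line like the following (a sketch; the 00:04.0 address is copied from the working output elsewhere in this thread and will differ per VM):

# lspci | grep -i scsi
00:04.0 SCSI storage controller: Red Hat, Inc Virtio SCSI

lsblk -S then lists the LUNs hanging off it, with HCTL entries starting at that controller's SCSI host number.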


From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Monday, October 2, 2017 at 10:38 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

Thanks for your reply, and sorry for my late response.
Could you please give one example of how to identify the virtio-scsi controller in Linux?
I mean, in which directory or file system will it be present?

Regards
Nitin

On Thu, Sep 28, 2017 at 8:30 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Nitin,

You should see a virtio-scsi controller in the VM, not an NVMe device.  This controller should have one LUN attached, which SPDK vhost maps to the NVMe device attached to the host.

-Jim


From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Thursday, September 28, 2017 at 4:07 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: [SPDK] nvme drive not showing in vm in spdk

Hi All

I am new to SPDK development and am currently doing the SPDK setup; in that I was able to set up back-end storage with NVMe. After running the VM with the following command, there is no NVMe drive present.

/usr/local/bin/qemu-system-x86_64 -m 1024 -object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on -nographic -no-user-config -nodefaults -serial mon:telnet:localhost:7704,server,nowait -monitor mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem -drive file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0 -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm
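A rough breakdown of the vhost-related pieces of that command (a sketch of how they fit together, not an authoritative qemu reference):

  -object memory-backend-file,...,share=on      # guest RAM backed by hugepages and shared,
  -numa node,memdev=mem                         #   which the vhost-user backend needs in order
                                                #   to see the virtqueues and data buffers
  -chardev socket,id=char0,path=./spdk/vhost.0  # unix socket created by the SPDK vhost app
  -device vhost-user-scsi-pci,id=scsi0,chardev=char0   # shows up in the guest as a virtio-scsi
                                                        #   PCI controller, with the NVMe as a LUN

So the NVMe drive is never handed to qemu directly; it only appears inside the guest as a SCSI disk behind that controller.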


How do I identify which one is the NVMe drive?
Is there any way to enable NVMe from the qemu command?

PS: I have already specified the NVMe drive in vhost.conf.in

Regards
Nitin

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk









_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 114414 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [SPDK] nvme drive not showing in vm in spdk
@ 2017-10-12 13:57 Nitin Gupta
  0 siblings, 0 replies; 23+ messages in thread
From: Nitin Gupta @ 2017-10-12 13:57 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 23755 bytes --]

Hi Jim / Pawel

Thanks for your response. Could you please also help me find the SSD device?
I mean I am trying to map an SSD device as well. Please find the output below.

1. On my host machine, the output of lsscsi is:

-bash-4.2# lsscsi
[0:0:0:0]    disk    ATA      INTEL SSDSC2BB24 0039  /dev/sda
[1:0:0:0]    disk    ATA      ST31000524NS     SN11  /dev/sdb
[2:0:0:0]    disk    ATA      ST31000524NS     SN12  /dev/sdc
[3:0:0:0]    disk    ATA      INTEL SSDSC2BB24 0039  /dev/sdd
[5:0:0:0]    disk    ATA      SAMSUNG MZ7WD120 103Q  /dev/sde
[6:0:0:0]    disk    ATA      INTEL SSDSC2BB24 0039  /dev/sdf
[7:0:0:0]    disk    ATA      INTEL SSDSC2BB24 0039  /dev/sdg
[8:0:0:0]    disk    ATA      INTEL SSDSC2BB24 0039  /dev/sdh
[9:0:0:0]    disk    ATA      INTEL SSDSC2BB24 0039  /dev/sdi

2. Changes made in the conf file (only to map /dev/sdf into the AIO0 device):

# Users must change this section to match the /dev/sdX devices to be
#  exported as vhost scsi drives. The devices are accessed using Linux AIO.
[AIO]
  #AIO /dev/sda AIO0
  AIO /dev/sdf AIO0

3. Added the AIO device in the conf:

# Vhost scsi controller configuration
# Users should change the VhostScsi section(s) below to match the desired
# vhost configuration.
# Name is minimum required
[VhostScsi0]
  # Define name for controller
  Name vhost.0
  # Assign devices from backend
  # Use the first malloc device
  Dev 0 Malloc0
  #Dev 1 Malloc1
  Dev 2 Nvme0n1
  #Dev 3 Malloc3

  # Use the first AIO device
  Dev 1 AIO0
  # Use the first Nvme device
  #Dev 0 Nvme0n1
  #Dev 0 Nvme0n1p0
  #Dev 1 Nvme0n1p1
  # Use the third partition from second Nvme device
  #Dev 3 Nvme1n1p2

  # Start the poller for this vhost controller on one of the cores in
  #  this cpumask.  By default, if not specified, it will use any core in the
  #  SPDK process.
  #Cpumask 0x2



4. Running the command below:

usr/local/bin/qemu-system-x86_64 -name sl6.9 -m 1024 -object
memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
-nographic -no-user-config -nodefaults -serial
mon:telnet:localhost:7704,server,nowait -monitor
mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem -drive
file=/home/qemu/qcows1,format=qcow2,if=none,id=disk -device
ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0
-device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm


5. In the guest VM, there is no output for lsscsi:

[root(a)localhost ~]# lsblk
NAME                        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                           8:0    0     8G  0 disk
├─sda1                        8:1    0   500M  0 part /boot
└─sda2                        8:2    0   7.5G  0 part
  ├─VolGroup-lv_root (dm-0) 253:0    0   6.7G  0 lvm  /
  └─VolGroup-lv_swap (dm-1) 253:1    0   816M  0 lvm  [SWAP]
sdb                           8:16   0   256M  0 disk
sdc                           8:32   0 223.6G  0 disk
sdd                           8:48   0 419.2G  0 disk
├─sdd1                        8:49   0  20.2M  0 part
└─sdd2                        8:50   0 419.2G  0 part

[root(a)localhost ~]# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton
II]
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 SCSI storage controller: Red Hat, Inc Virtio SCSI

[root(a)localhost ~]# lsscsi
[root(a)localhost ~]#

Regards
Nitin




On Wed, Oct 11, 2017 at 5:53 PM, Wodkowski, PawelX <
pawelx.wodkowski(a)intel.com> wrote:

> Most likely, yes.
>
>
>
> Pawel
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Nitin Gupta
> *Sent:* Wednesday, October 11, 2017 12:49 PM
>
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Pawel
>
>
>
> Thanks for you reply . some how in my guest VM lsblk -S is not working
>
> please find below output of  ls -l /sys/block/ | grep host
>
>
>
> [root(a)localhost ~]# ls -l /sys/block/ | grep host
>
> lrwxrwxrwx 1 root root 0 Oct 11 06:41 sda -> ../devices/pci0000:00/0000:00:
> 01.1/host1/target1:0:0/1:0:0:0/block/sda
>
> lrwxrwxrwx 1 root root 0 Oct 11 06:41 sdb -> ../devices/pci0000:00/0000:00:
> 02.0/virtio0/host2/target2:0:0/2:0:0:0/block/sdb
>
> lrwxrwxrwx 1 root root 0 Oct 11 06:41 sdc -> ../devices/pci0000:00/0000:00:
> 02.0/virtio0/host2/target2:0:2/2:0:2:0/block/sdc
>
>
>
> looks like then sdc is the nvme device , please correct me if i am wrong
>
>
>
> Regards
>
> Nitin
>
>
>
>
>
> On Wed, Oct 11, 2017 at 3:22 PM, Wodkowski, PawelX <
> pawelx.wodkowski(a)intel.com> wrote:
>
> Consider this config:
>
>
>
> [Malloc]
>
>   NumberOfLuns 8
>
>   LunSizeInMb 128
>
>   BlockSize 512
>
>
>
> [Split]
>
>   Split Nvme0n1 8
>
>
>
> [VhostScsi0]
>
>   Name ctrl0
>
>   Dev 0 Nvme0n1p0
>
>   Dev 1 Malloc0
>
>
>
> This is output from my VM (for readability I filter out devices using ‘|
> grep host’).
>
>
>
> # lspci
>
> 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev
> 02)
>
> 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
>
> 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton
> II]
>
> 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
>
> 00:02.0 VGA compatible controller: Device 1234:1111 (rev 02)
>
> 00:03.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet
> Controller (rev 03)
>
> *00:04.0* *SCSI storage controller: Red Hat, Inc Virtio SCSI*
>
>
>
> # ll /sys/block/ | grep host
>
> lrwxrwxrwx  1 root root 0 Oct 11 15:16 sda ->
> ../devices/pci0000:00/0000:00:01.1/ata2/host1/target1:0:0/1:
> 0:0:0/block/sda/
>
> lrwxrwxrwx  1 root root 0 Oct 11 15:16 sdb -> ../devices/pci0000:00/0000:
> *00:04.0*/virtio0/host2/*target2:0:0*/2:0:0:0/block/*sdb*/
>
> lrwxrwxrwx  1 root root 0 Oct 11 15:16 sdc -> ../devices/pci0000:00/0000:
> *00:04.0*/virtio0/host2/*target2:0:1*/2:0:1:0/block/*sdc*/
>
>
>
> As you can see (in this case) device which is reported as “SCSI storage
> controller: Red Hat, Inc Virtio SCSI“ is SPDK vhost device. Now find PCI
> address
>
> and use it to figure out which device is which. In this case I have two
> targets defined in vhost.conf (one is split of NVMe and one is Malloc disk)
> and have two
>
> Scsi disks: *sdb* and *sdc* in VM. I know that in vhost.conf *Dev 0* is
> *Nvme0n1p0* so I know that target2:0:*0* is NVMe split device mapped to
> *sdb*. Analogue
>
> target2:0:1 is Malloc0 mapped to *sdc*. To confirm this I run following
> command:
>
>
>
> # lsblk -S
>
> NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
>
> sda  1:0:0:0    disk ATA      QEMU HARDDISK    2.5+ ata
>
> *sdb  2:0:0:0*    disk INTEL    *Split Disk*       0001
>
> *sdc  2:0:1:0*    disk INTEL    *Malloc disk*      0001
>
>
>
> Pawel
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Nitin Gupta
> *Sent:* Wednesday, October 11, 2017 11:07 AM
> *To:* Harris, James R <james.r.harris(a)intel.com>
>
>
> *Cc:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> i was able to update my  environment for guest VM which runs now
> in 2.6.32-696.el6.x86_64
>
> please find the lspci output and able to load virtio-scsi  module as well
>
>
>
> Please help me  to understand  , how to identify nvme disk mapping .
>
> below mapping we used  in  etc/spdk/vhost.conf.in
>
>
>
> Question :-
>
>
>
> [VhostScsi0]
>
>   # Define name for controller
>
>   Name vhost.0
>
>   # Assign devices from backend
>
>   # Use the first malloc device
>
>   Dev 0 Malloc0
>
>   #Dev 1 Malloc1
>
>   Dev 2 Nvme0n1
>
>   #Dev 3 Malloc3
>
>
>
> [root(a)localhost ~]# lsblk
>
> NAME                        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
>
> sda                           8:0    0     8G  0 disk
>
> ├─sda1                        8:1    0   500M  0 part /boot
>
> └─sda2                        8:2    0   7.5G  0 part
>
>   ├─VolGroup-lv_root (dm-0) 253:0    0   6.7G  0 lvm  /
>
>   └─VolGroup-lv_swap (dm-1) 253:1    0   816M  0 lvm  [SWAP]
>
> sdb                           8:16   0   256M  0 disk
>
> sdc                           8:32   0 419.2G  0 disk
>
> ├─sdc1                        8:33   0  20.2M  0 part
>
> └─sdc2                        8:34   0 419.2G  0 part
>
>
>
> [root(a)localhost ~]# lspci
>
> 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev
> 02)
>
> 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
>
> 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton
> II]
>
> 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
>
> 00:02.0 SCSI storage controller: Red Hat, Inc Virtio SCSI
>
>
>
>
>
> [root(a)localhost ~]# lsmod
>
> Module                  Size  Used by
>
> ib_ipoib               80839  0
>
> rdma_ucm               15739  0
>
> ib_ucm                 12328  0
>
> ib_uverbs              40372  2 rdma_ucm,ib_ucm
>
> ib_umad                13487  0
>
> rdma_cm                36555  1 rdma_ucm
>
> ib_cm                  36900  3 ib_ipoib,ib_ucm,rdma_cm
>
> iw_cm                  32976  1 rdma_cm
>
> ib_sa                  24092  4 ib_ipoib,rdma_ucm,rdma_cm,ib_cm
>
> ib_mad                 41340  3 ib_umad,ib_cm,ib_sa
>
> ib_core                82732  10 ib_ipoib,rdma_ucm,ib_ucm,ib_
> uverbs,ib_umad,rdma_cm,ib_cm,iw_cm,ib_sa,ib_mad
>
> ib_addr                 8304  3 rdma_ucm,rdma_cm,ib_core
>
> ipv6                  336368  14 ib_ipoib,ib_addr
>
> i2c_piix4              11232  0
>
> i2c_core               29132  1 i2c_piix4
>
> sg                     29350  0
>
> ext4                  381065  2
>
> jbd2                   93284  1 ext4
>
> mbcache                 8193  1 ext4
>
> virtio_scsi            10761  0
>
> sd_mod                 37158  3
>
> crc_t10dif              1209  1 sd_mod
>
> virtio_pci              7512  0
>
> virtio_ring             8891  2 virtio_scsi,virtio_pci
>
> virtio                  5639  2 virtio_scsi,virtio_pci
>
> pata_acpi               3701  0
>
> ata_generic             3837  0
>
> ata_piix               24409  2
>
> dm_mirror              14864  0
>
> dm_region_hash         12085  1 dm_mirror
>
> dm_log                  9930  2 dm_mirror,dm_region_hash
>
> dm_mod                102467  8 dm_mirror,dm_log
>
>
>
>
>
> Regards
>
> Nitin
>
>
>
>
>
> On Sat, Oct 7, 2017 at 10:10 AM, Nitin Gupta <nitin.gupta981(a)gmail.com>
> wrote:
>
> Hi Jim
>
>
>
> Thanks , i will try to install virtio-scsi and update you
>
>
>
> Regards
>
> Nitin
>
>
>
> On Sat, Oct 7, 2017 at 12:01 AM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Nitin,
>
>
>
> Can you try loading the virtio-scsi module in the guest VM?
>
>
>
> Without a virtio-scsi driver in the guest, there is no way for the guest
> to see the virtio-scsi device backend created by the SPDK vhost target.
>
>
>
> Thanks,
>
>
>
> -Jim
>
>
>
>
>
> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
> *Date: *Friday, October 6, 2017 at 1:28 AM
>
>
> *To: *James Harris <james.r.harris(a)intel.com>
> *Cc: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> Thanks for looking logs ,
>
> Please find attached vhost log  and qemu command which i am invoking
>
>
>
> /usr/local/bin/qemu-system-x86_64 -m 1024 -object
> memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
> -nographic -no-user-config -nodefaults -serial mon:telnet:localhost:7704,server,nowait
> -monitor mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem
> -drive file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device
> ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0
> -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm
>
>
>
> 2.  looks like there is no virtio-scsi module loaded in guest VM
>
>
>
> i ran lsmod command in guest VM please find below output
>
>
>
> [root(a)localhost ~]# lsmod
>
> Module                  Size  Used by
>
> ipt_REJECT              2349  2
>
> nf_conntrack_ipv4       9440  2
>
> nf_defrag_ipv4          1449  1 nf_conntrack_ipv4
>
> iptable_filter          2759  1
>
> ip_tables              17765  1 iptable_filter
>
> ip6t_REJECT             4562  2
>
> nf_conntrack_ipv6       8650  2
>
> nf_defrag_ipv6         12148  1 nf_conntrack_ipv6
>
> xt_state                1458  4
>
> nf_conntrack           79611  3 nf_conntrack_ipv4,nf_
> conntrack_ipv6,xt_state
>
> ip6table_filter         2855  1
>
> ip6_tables             19424  1 ip6table_filter
>
> ipv6                  322291  15 ip6t_REJECT,nf_conntrack_ipv6,
> nf_defrag_ipv6
>
> i2c_piix4              12574  0
>
> i2c_core               31274  1 i2c_piix4
>
> sg                     30090  0
>
> ext4                  359671  2
>
> mbcache                 7918  1 ext4
>
> jbd2                   88768  1 ext4
>
> sd_mod                 38196  3
>
> crc_t10dif              1507  1 sd_mod
>
> virtio_pci              6653  0
>
> virtio_ring             7169  1 virtio_pci
>
> virtio                  4824  1 virtio_pci
>
> pata_acpi               3667  0
>
> ata_generic             3611  0
>
> ata_piix               22652  2
>
> dm_mod                 75539  6
>
>
>
>
>
> please let me know if i am missing something
>
>
>
> Regards
>
> Nitin
>
>
>
>
>
> On Fri, Oct 6, 2017 at 9:28 AM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Thanks Nitin.  I don’t see the SPDK vhost log attached though – could you
> add it?
>
>
>
> Can you also confirm the virtio-scsi module is loaded in your guest VM?
>
>
>
> -Jim
>
>
>
>
>
> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
> *Date: *Thursday, October 5, 2017 at 3:49 AM
>
>
> *To: *James Harris <james.r.harris(a)intel.com>
> *Cc: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> Please find attached Vm-guest-boot up log and host dmesg log
>
>
>
> Regards
>
> Nitin
>
>
>
> On Wed, Oct 4, 2017 at 11:36 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Nitin,
>
>
>
> It would be most helpful if you could get lspci working on your guest VM.
>
>
>
> Could you post dmesg contents from your VM and the SPDK vhost log after
> the VM has booted?
>
>
>
> -Jim
>
>
>
>
>
> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
> *Date: *Wednesday, October 4, 2017 at 10:42 AM
> *To: *James Harris <james.r.harris(a)intel.com>
> *Cc: *Storage Performance Development Kit <spdk(a)lists.01.org>
>
>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> i am running this  on remote box which is having linux 3.10 .
>
> on the guest VM lspci command is not working and i am not able to install
> lspci as well
>
> below is the lsblk -a command output -S is also not available in guest VM
>
>
>
> NAME                        MAJ:MIN RM   SIZE RO MOUNTPOINT
>
> ram0                          1:0    0    16M  0
>
> ram1                          1:1    0    16M  0
>
> ram2                          1:2    0    16M  0
>
> ram3                          1:3    0    16M  0
>
> ram4                          1:4    0    16M  0
>
> ram5                          1:5    0    16M  0
>
> ram6                          1:6    0    16M  0
>
> ram7                          1:7    0    16M  0
>
> ram8                          1:8    0    16M  0
>
> ram9                          1:9    0    16M  0
>
> ram10                         1:10   0    16M  0
>
> ram11                         1:11   0    16M  0
>
> ram12                         1:12   0    16M  0
>
> ram13                         1:13   0    16M  0
>
> ram14                         1:14   0    16M  0
>
> ram15                         1:15   0    16M  0
>
> loop0                         7:0    0         0
>
> loop1                         7:1    0         0
>
> loop2                         7:2    0         0
>
> loop3                         7:3    0         0
>
> loop4                         7:4    0         0
>
> loop5                         7:5    0         0
>
> loop6                         7:6    0         0
>
> loop7                         7:7    0         0
>
> sda                           8:0    0     8G  0
>
> ├─sda1                        8:1    0   500M  0 /boot
>
> └─sda2                        8:2    0   7.5G  0
>
>   ├─VolGroup-lv_root (dm-0) 253:0    0   5.6G  0 /
>
>   └─VolGroup-lv_swap (dm-1) 253:1    0     2G  0 [SWAP]
>
>
>
> Regards
>
> Nitin
>
>
>
> On Wed, Oct 4, 2017 at 10:13 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Nitin,
>
>
>
> Are you running these commands from the host or the VM?  You will only see
> the virtio-scsi controller in lspci output from the guest VM.
>
>
>
> -Jim
>
>
>
>
>
> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
> *Date: *Tuesday, October 3, 2017 at 12:23 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>, James
> Harris <james.r.harris(a)intel.com>
>
>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> One quick update , after running ./script/setup.h for spdk nvme drive is
> converting to uio generic pci device .
>
> so only difference which i found after and before mapping is command for
> ls -l /dev/u*
>
>
>
> can i use /dev/uio0 are the nvme device
>
> Regards
>
> Nitin
>
>
>
> On Tue, Oct 3, 2017 at 11:30 AM, Nitin Gupta <nitin.gupta981(a)gmail.com>
> wrote:
>
> Hi Jim
>
>
>
> Looks like sdf to sdi is the nvme , please correct me if i ma wrong
>
>
>
> -bash-4.2# lsblk -S
>
> NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
>
> sda  0:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sdb  1:0:0:0    disk ATA      ST31000524NS     SN11 sata
>
> sdc  2:0:0:0    disk ATA      ST31000524NS     SN12 sata
>
> sdd  3:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sde  5:0:0:0    disk ATA      SAMSUNG MZ7WD120 103Q sata
>
> sdf  6:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sdg  7:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sdh  8:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sdi  9:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
>
>
> Regards
>
> Nitin
>
>
>
> On Tue, Oct 3, 2017 at 11:21 AM, Nitin Gupta <nitin.gupta981(a)gmail.com>
> wrote:
>
> Hi Jim
>
>
>
> i am getting below output for lspci  for NVram
>
>
>
> d8:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
>
> d9:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
>
> da:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
>
> db:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
>
>
>
> lsblk
>
>
>
> NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
>
> sda      8:0    0 223.6G  0 disk
>
> ├─sda1   8:1    0     6G  0 part [SWAP]
>
> ├─sda2   8:2    0   512M  0 part /bootmgr
>
> └─sda3   8:3    0 217.1G  0 part /
>
> sdb      8:16   0 931.5G  0 disk
>
> sdc      8:32   0 931.5G  0 disk
>
> sdd      8:48   0 223.6G  0 disk
>
> sde      8:64   0 111.8G  0 disk
>
> sdf      8:80   0 223.6G  0 disk
>
> sdg      8:96   0 223.6G  0 disk
>
> sdh      8:112  0 223.6G  0 disk
>
> sdi      8:128  0 223.6G  0 disk
>
>
>
>
>
> So how to know which one is virto-scsi  controller basically i wanted to
> run fio test  with nvme mapped device
>
>
>
>
>
> On Mon, Oct 2, 2017 at 11:12 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Nitin,
>
>
>
> lspci should show you the virtio-scsi controller PCI device.
>
> lsblk -S should show you the SCSI block devices attached to that
> virtio-scsi controller.
>
>
>
> -Jim
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Nitin Gupta <
> nitin.gupta981(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Monday, October 2, 2017 at 10:38 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> Thanks for your reply and sorry for my late reply ..
>
> could you please  give one example to know how to identify virtio-scsi
> controller in the linux
>
> i mean which directory it will be present or which file system ?
>
>
>
> Regards
>
> Nitin
>
>
>
> On Thu, Sep 28, 2017 at 8:30 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Nitin,
>
>
>
> You should see a virtio-scsi controller in the VM, not an NVMe device.
> This controller should have one LUN attached, which SPDK vhost maps to the
> NVMe device attached to the host.
>
>
>
> -Jim
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Nitin Gupta <
> nitin.gupta981(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Thursday, September 28, 2017 at 4:07 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *[SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi All
>
>
>
> i am new in spdk development and currently doing spdk setup in that  was
> able to setup back-end storage with NVME .After running the VM with
> following command , there is no nvme drive present .
>
>
>
> /usr/local/bin/qemu-system-x86_64 -m 1024 -object
> memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
> -nographic -no-user-config -nodefaults -serial mon:telnet:localhost:7704,server,nowait
> -monitor mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem
> -drive file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device
> ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0
> -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm
>
>
>
>
>
> how to identify which is nvme drive ?
>
> is there any way to  enable nvme from qemu command ?
>
>
>
> PS:  i have already specified the nvme drive in vhost.conf.in
>
>
>
> Regards
>
> Nitin
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 76674 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [SPDK] nvme drive not showing in vm in spdk
@ 2017-10-11 12:23 Wodkowski, PawelX
  0 siblings, 0 replies; 23+ messages in thread
From: Wodkowski, PawelX @ 2017-10-11 12:23 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 18717 bytes --]

Most likely, yes.

Pawel

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Nitin Gupta
Sent: Wednesday, October 11, 2017 12:49 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Pawel

Thanks for you reply . some how in my guest VM lsblk -S is not working
please find below output of  ls -l /sys/block/ | grep host

[root(a)localhost ~]# ls -l /sys/block/ | grep host
lrwxrwxrwx 1 root root 0 Oct 11 06:41 sda -> ../devices/pci0000:00/0000:00:01.1/host1/target1:0:0/1:0:0:0/block/sda
lrwxrwxrwx 1 root root 0 Oct 11 06:41 sdb -> ../devices/pci0000:00/0000:00:02.0/virtio0/host2/target2:0:0/2:0:0:0/block/sdb
lrwxrwxrwx 1 root root 0 Oct 11 06:41 sdc -> ../devices/pci0000:00/0000:00:02.0/virtio0/host2/target2:0:2/2:0:2:0/block/sdc

looks like then sdc is the nvme device , please correct me if i am wrong

Regards
Nitin


On Wed, Oct 11, 2017 at 3:22 PM, Wodkowski, PawelX <pawelx.wodkowski(a)intel.com<mailto:pawelx.wodkowski(a)intel.com>> wrote:
Consider this config:

[Malloc]
  NumberOfLuns 8
  LunSizeInMb 128
  BlockSize 512

[Split]
  Split Nvme0n1 8

[VhostScsi0]
  Name ctrl0
  Dev 0 Nvme0n1p0
  Dev 1 Malloc0

This is output from my VM (for readability I filter out devices using ‘| grep host’).

# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 VGA compatible controller: Device 1234:1111 (rev 02)
00:03.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 03)
00:04.0 SCSI storage controller: Red Hat, Inc Virtio SCSI

# ll /sys/block/ | grep host
lrwxrwxrwx  1 root root 0 Oct 11 15:16 sda -> ../devices/pci0000:00/0000:00:01.1/ata2/host1/target1:0:0/1:0:0:0/block/sda/
lrwxrwxrwx  1 root root 0 Oct 11 15:16 sdb -> ../devices/pci0000:00/0000:00:04.0/virtio0/host2/target2:0:0/2:0:0:0/block/sdb/
lrwxrwxrwx  1 root root 0 Oct 11 15:16 sdc -> ../devices/pci0000:00/0000:00:04.0/virtio0/host2/target2:0:1/2:0:1:0/block/sdc/

As you can see (in this case) device which is reported as “SCSI storage controller: Red Hat, Inc Virtio SCSI“ is SPDK vhost device. Now find PCI address
and use it to figure out which device is which. In this case I have two targets defined in vhost.conf (one is split of NVMe and one is Malloc disk) and have two
Scsi disks: sdb and sdc in VM. I know that in vhost.conf Dev 0 is Nvme0n1p0 so I know that target2:0:0 is NVMe split device mapped to sdb. Analogue
target2:0:1 is Malloc0 mapped to sdc. To confirm this I run following command:

# lsblk -S
NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
sda  1:0:0:0    disk ATA      QEMU HARDDISK    2.5+ ata
sdb  2:0:0:0    disk INTEL    Split Disk       0001
sdc  2:0:1:0    disk INTEL    Malloc disk      0001

Pawel

From: SPDK [mailto:spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>] On Behalf Of Nitin Gupta
Sent: Wednesday, October 11, 2017 11:07 AM
To: Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>

Cc: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

i was able to update my  environment for guest VM which runs now in 2.6.32-696.el6.x86_64
please find the lspci output and able to load virtio-scsi  module as well

Please help me  to understand  , how to identify nvme disk mapping .
below mapping we used  in  etc/spdk/vhost.conf.in<http://vhost.conf.in>

Question :-

[VhostScsi0]
  # Define name for controller
  Name vhost.0
  # Assign devices from backend
  # Use the first malloc device
  Dev 0 Malloc0
  #Dev 1 Malloc1
  Dev 2 Nvme0n1
  #Dev 3 Malloc3

[root(a)localhost ~]# lsblk
NAME                        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                           8:0    0     8G  0 disk
├─sda1                        8:1    0   500M  0 part /boot
└─sda2                        8:2    0   7.5G  0 part
  ├─VolGroup-lv_root (dm-0) 253:0    0   6.7G  0 lvm  /
  └─VolGroup-lv_swap (dm-1) 253:1    0   816M  0 lvm  [SWAP]
sdb                           8:16   0   256M  0 disk
sdc                           8:32   0 419.2G  0 disk
├─sdc1                        8:33   0  20.2M  0 part
└─sdc2                        8:34   0 419.2G  0 part

[root(a)localhost ~]# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 SCSI storage controller: Red Hat, Inc Virtio SCSI


[root(a)localhost ~]# lsmod
Module                  Size  Used by
ib_ipoib               80839  0
rdma_ucm               15739  0
ib_ucm                 12328  0
ib_uverbs              40372  2 rdma_ucm,ib_ucm
ib_umad                13487  0
rdma_cm                36555  1 rdma_ucm
ib_cm                  36900  3 ib_ipoib,ib_ucm,rdma_cm
iw_cm                  32976  1 rdma_cm
ib_sa                  24092  4 ib_ipoib,rdma_ucm,rdma_cm,ib_cm
ib_mad                 41340  3 ib_umad,ib_cm,ib_sa
ib_core                82732  10 ib_ipoib,rdma_ucm,ib_ucm,ib_uverbs,ib_umad,rdma_cm,ib_cm,iw_cm,ib_sa,ib_mad
ib_addr                 8304  3 rdma_ucm,rdma_cm,ib_core
ipv6                  336368  14 ib_ipoib,ib_addr
i2c_piix4              11232  0
i2c_core               29132  1 i2c_piix4
sg                     29350  0
ext4                  381065  2
jbd2                   93284  1 ext4
mbcache                 8193  1 ext4
virtio_scsi            10761  0
sd_mod                 37158  3
crc_t10dif              1209  1 sd_mod
virtio_pci              7512  0
virtio_ring             8891  2 virtio_scsi,virtio_pci
virtio                  5639  2 virtio_scsi,virtio_pci
pata_acpi               3701  0
ata_generic             3837  0
ata_piix               24409  2
dm_mirror              14864  0
dm_region_hash         12085  1 dm_mirror
dm_log                  9930  2 dm_mirror,dm_region_hash
dm_mod                102467  8 dm_mirror,dm_log


Regards
Nitin


On Sat, Oct 7, 2017 at 10:10 AM, Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>> wrote:
Hi Jim

Thanks , i will try to install virtio-scsi and update you

Regards
Nitin

On Sat, Oct 7, 2017 at 12:01 AM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Nitin,

Can you try loading the virtio-scsi module in the guest VM?

Without a virtio-scsi driver in the guest, there is no way for the guest to see the virtio-scsi device backend created by the SPDK vhost target.

Thanks,

-Jim


From: Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Date: Friday, October 6, 2017 at 1:28 AM

To: James Harris <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>
Cc: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

Thanks for looking logs ,
Please find attached vhost log  and qemu command which i am invoking

/usr/local/bin/qemu-system-x86_64 -m 1024 -object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on -nographic -no-user-config -nodefaults -serial mon:telnet:localhost:7704,server,nowait -monitor mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem -drive file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0 -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm

2.  looks like there is no virtio-scsi module loaded in guest VM

i ran lsmod command in guest VM please find below output

[root(a)localhost ~]# lsmod
Module                  Size  Used by
ipt_REJECT              2349  2
nf_conntrack_ipv4       9440  2
nf_defrag_ipv4          1449  1 nf_conntrack_ipv4
iptable_filter          2759  1
ip_tables              17765  1 iptable_filter
ip6t_REJECT             4562  2
nf_conntrack_ipv6       8650  2
nf_defrag_ipv6         12148  1 nf_conntrack_ipv6
xt_state                1458  4
nf_conntrack           79611  3 nf_conntrack_ipv4,nf_conntrack_ipv6,xt_state
ip6table_filter         2855  1
ip6_tables             19424  1 ip6table_filter
ipv6                  322291  15 ip6t_REJECT,nf_conntrack_ipv6,nf_defrag_ipv6
i2c_piix4              12574  0
i2c_core               31274  1 i2c_piix4
sg                     30090  0
ext4                  359671  2
mbcache                 7918  1 ext4
jbd2                   88768  1 ext4
sd_mod                 38196  3
crc_t10dif              1507  1 sd_mod
virtio_pci              6653  0
virtio_ring             7169  1 virtio_pci
virtio                  4824  1 virtio_pci
pata_acpi               3667  0
ata_generic             3611  0
ata_piix               22652  2
dm_mod                 75539  6


please let me know if i am missing something

Regards
Nitin


On Fri, Oct 6, 2017 at 9:28 AM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Thanks Nitin.  I don’t see the SPDK vhost log attached though – could you add it?

Can you also confirm the virtio-scsi module is loaded in your guest VM?

-Jim


From: Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Date: Thursday, October 5, 2017 at 3:49 AM

To: James Harris <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>
Cc: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

Please find attached Vm-guest-boot up log and host dmesg log

Regards
Nitin

On Wed, Oct 4, 2017 at 11:36 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Nitin,

It would be most helpful if you could get lspci working on your guest VM.

Could you post dmesg contents from your VM and the SPDK vhost log after the VM has booted?

-Jim


From: Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Date: Wednesday, October 4, 2017 at 10:42 AM
To: James Harris <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>
Cc: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>

Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

i am running this  on remote box which is having linux 3.10 .
on the guest VM lspci command is not working and i am not able to install lspci as well
below is the lsblk -a command output -S is also not available in guest VM

NAME                        MAJ:MIN RM   SIZE RO MOUNTPOINT
ram0                          1:0    0    16M  0
ram1                          1:1    0    16M  0
ram2                          1:2    0    16M  0
ram3                          1:3    0    16M  0
ram4                          1:4    0    16M  0
ram5                          1:5    0    16M  0
ram6                          1:6    0    16M  0
ram7                          1:7    0    16M  0
ram8                          1:8    0    16M  0
ram9                          1:9    0    16M  0
ram10                         1:10   0    16M  0
ram11                         1:11   0    16M  0
ram12                         1:12   0    16M  0
ram13                         1:13   0    16M  0
ram14                         1:14   0    16M  0
ram15                         1:15   0    16M  0
loop0                         7:0    0         0
loop1                         7:1    0         0
loop2                         7:2    0         0
loop3                         7:3    0         0
loop4                         7:4    0         0
loop5                         7:5    0         0
loop6                         7:6    0         0
loop7                         7:7    0         0
sda                           8:0    0     8G  0
├─sda1                        8:1    0   500M  0 /boot
└─sda2                        8:2    0   7.5G  0
  ├─VolGroup-lv_root (dm-0) 253:0    0   5.6G  0 /
  └─VolGroup-lv_swap (dm-1) 253:1    0     2G  0 [SWAP]

Regards
Nitin

On Wed, Oct 4, 2017 at 10:13 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Nitin,

Are you running these commands from the host or the VM?  You will only see the virtio-scsi controller in lspci output from the guest VM.

-Jim


From: Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Date: Tuesday, October 3, 2017 at 12:23 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>, James Harris <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>

Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

One quick update , after running ./script/setup.h for spdk nvme drive is converting to uio generic pci device .
so only difference which i found after and before mapping is command for ls -l /dev/u*

can i use /dev/uio0 are the nvme device
Regards
Nitin

On Tue, Oct 3, 2017 at 11:30 AM, Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>> wrote:
Hi Jim

Looks like sdf to sdi is the nvme , please correct me if i ma wrong

-bash-4.2# lsblk -S
NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
sda  0:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdb  1:0:0:0    disk ATA      ST31000524NS     SN11 sata
sdc  2:0:0:0    disk ATA      ST31000524NS     SN12 sata
sdd  3:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sde  5:0:0:0    disk ATA      SAMSUNG MZ7WD120 103Q sata
sdf  6:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdg  7:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdh  8:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdi  9:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata

Regards
Nitin

On Tue, Oct 3, 2017 at 11:21 AM, Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>> wrote:
Hi Jim

i am getting below output for lspci  for NVram

d8:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)
d9:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)
da:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)
db:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)

lsblk

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 223.6G  0 disk
├─sda1   8:1    0     6G  0 part [SWAP]
├─sda2   8:2    0   512M  0 part /bootmgr
└─sda3   8:3    0 217.1G  0 part /
sdb      8:16   0 931.5G  0 disk
sdc      8:32   0 931.5G  0 disk
sdd      8:48   0 223.6G  0 disk
sde      8:64   0 111.8G  0 disk
sdf      8:80   0 223.6G  0 disk
sdg      8:96   0 223.6G  0 disk
sdh      8:112  0 223.6G  0 disk
sdi      8:128  0 223.6G  0 disk


So how to know which one is virto-scsi  controller basically i wanted to run fio test  with nvme mapped device


On Mon, Oct 2, 2017 at 11:12 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Nitin,

lspci should show you the virtio-scsi controller PCI device.
lsblk –S should show you the SCSI block devices attached to that virtio-scsi controller.

-Jim


From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Monday, October 2, 2017 at 10:38 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

Thanks for your reply and sorry for my late reply ..
could you please  give one example to know how to identify virtio-scsi controller in the linux
i mean which directory it will be present or which file system ?

Regards
Nitin

On Thu, Sep 28, 2017 at 8:30 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Nitin,

You should see a virtio-scsi controller in the VM, not an NVMe device.  This controller should have one LUN attached, which SPDK vhost maps to the NVMe device attached to the host.

-Jim


From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Thursday, September 28, 2017 at 4:07 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: [SPDK] nvme drive not showing in vm in spdk

Hi All

i am new in spdk development and currently doing spdk setup in that  was able to setup back-end storage with NVME .After running the VM with following command , there is no nvme drive present .

/usr/local/bin/qemu-system-x86_64 -m 1024 -object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on -nographic -no-user-config -nodefaults -serial mon:telnet:localhost:7704,server,nowait -monitor mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem -drive file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0 -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm


how to identify which is nvme drive ?
is there any way to  enable nvme from qemu command ?

PS:  i have already specified the nvme drive in vhost.conf.in<http://vhost.conf.in>

Regards
Nitin

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk









_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 100680 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [SPDK] nvme drive not showing in vm in spdk
@ 2017-10-11 10:49 Nitin Gupta
  0 siblings, 0 replies; 23+ messages in thread
From: Nitin Gupta @ 2017-10-11 10:49 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 19468 bytes --]

Hi Pawel

Thanks for your reply. Somehow lsblk -S is not working in my guest VM;
please find below the output of ls -l /sys/block/ | grep host

[root(a)localhost ~]# ls -l /sys/block/ | grep host
lrwxrwxrwx 1 root root 0 Oct 11 06:41 sda ->
../devices/pci0000:00/0000:00:01.1/host1/target1:0:0/1:0:0:0/block/sda
lrwxrwxrwx 1 root root 0 Oct 11 06:41 sdb ->
../devices/pci0000:00/0000:00:02.0/virtio0/host2/target2:0:0/2:0:0:0/block/sdb
lrwxrwxrwx 1 root root 0 Oct 11 06:41 sdc ->
../devices/pci0000:00/0000:00:02.0/virtio0/host2/target2:0:2/2:0:2:0/block/sdc
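
When lsblk -S is not available, the same vendor and model strings can be read straight from sysfs — a minimal sketch, assuming the device paths shown above:

for d in /sys/block/sd*; do
    # vendor/model are standard SCSI sysfs attributes; for the SPDK vhost LUNs
    # the model string reflects the backend bdev (Malloc vs. NVMe based)
    echo "$(basename "$d"): $(cat "$d/device/vendor") $(cat "$d/device/model")"
done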

It looks like sdc is then the NVMe device (target2:0:2, which matches Dev 2 Nvme0n1 in my vhost.conf); please correct me if I am wrong.

Regards
Nitin


On Wed, Oct 11, 2017 at 3:22 PM, Wodkowski, PawelX <
pawelx.wodkowski(a)intel.com> wrote:

> Consider this config:
>
>
>
> [Malloc]
>
>   NumberOfLuns 8
>
>   LunSizeInMb 128
>
>   BlockSize 512
>
>
>
> [Split]
>
>   Split Nvme0n1 8
>
>
>
> [VhostScsi0]
>
>   Name ctrl0
>
>   Dev 0 Nvme0n1p0
>
>   Dev 1 Malloc0
>
>
>
> This is output from my VM (for readability I filter out devices using ‘|
> grep host’).
>
>
>
> # lspci
>
> 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev
> 02)
>
> 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
>
> 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton
> II]
>
> 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
>
> 00:02.0 VGA compatible controller: Device 1234:1111 (rev 02)
>
> 00:03.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet
> Controller (rev 03)
>
> *00:04.0* *SCSI storage controller: Red Hat, Inc Virtio SCSI*
>
>
>
> # ll /sys/block/ | grep host
>
> lrwxrwxrwx  1 root root 0 Oct 11 15:16 sda ->
> ../devices/pci0000:00/0000:00:01.1/ata2/host1/target1:0:0/1:
> 0:0:0/block/sda/
>
> lrwxrwxrwx  1 root root 0 Oct 11 15:16 sdb -> ../devices/pci0000:00/0000:
> *00:04.0*/virtio0/host2/*target2:0:0*/2:0:0:0/block/*sdb*/
>
> lrwxrwxrwx  1 root root 0 Oct 11 15:16 sdc -> ../devices/pci0000:00/0000:
> *00:04.0*/virtio0/host2/*target2:0:1*/2:0:1:0/block/*sdc*/
>
>
>
> As you can see (in this case) device which is reported as “SCSI storage
> controller: Red Hat, Inc Virtio SCSI“ is SPDK vhost device. Now find PCI
> address
>
> and use it to figure out which device is which. In this case I have two
> targets defined in vhost.conf (one is split of NVMe and one is Malloc disk)
> and have two
>
> Scsi disks: *sdb* and *sdc* in VM. I know that in vhost.conf *Dev 0* is
> *Nvme0n1p0* so I know that target2:0:*0* is NVMe split device mapped to
> *sdb*. Analogue
>
> target2:0:1 is Malloc0 mapped to *sdc*. To confirm this I run following
> command:
>
>
>
> # lsblk -S
>
> NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
>
> sda  1:0:0:0    disk ATA      QEMU HARDDISK    2.5+ ata
>
> *sdb  2:0:0:0*    disk INTEL    *Split Disk*       0001
>
> *sdc  2:0:1:0*    disk INTEL    *Malloc disk*      0001
>
>
>
> Pawel
>
>
>
> *From:* SPDK [mailto:spdk-bounces(a)lists.01.org] *On Behalf Of *Nitin Gupta
> *Sent:* Wednesday, October 11, 2017 11:07 AM
> *To:* Harris, James R <james.r.harris(a)intel.com>
>
> *Cc:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> i was able to update my  environment for guest VM which runs now
> in 2.6.32-696.el6.x86_64
>
> please find the lspci output and able to load virtio-scsi  module as well
>
>
>
> Please help me  to understand  , how to identify nvme disk mapping .
>
> below mapping we used  in  etc/spdk/vhost.conf.in
>
>
>
> Question :-
>
>
>
> [VhostScsi0]
>
>   # Define name for controller
>
>   Name vhost.0
>
>   # Assign devices from backend
>
>   # Use the first malloc device
>
>   Dev 0 Malloc0
>
>   #Dev 1 Malloc1
>
>   Dev 2 Nvme0n1
>
>   #Dev 3 Malloc3
>
>
>
> [root(a)localhost ~]# lsblk
>
> NAME                        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
>
> sda                           8:0    0     8G  0 disk
>
> ├─sda1                        8:1    0   500M  0 part /boot
>
> └─sda2                        8:2    0   7.5G  0 part
>
>   ├─VolGroup-lv_root (dm-0) 253:0    0   6.7G  0 lvm  /
>
>   └─VolGroup-lv_swap (dm-1) 253:1    0   816M  0 lvm  [SWAP]
>
> sdb                           8:16   0   256M  0 disk
>
> sdc                           8:32   0 419.2G  0 disk
>
> ├─sdc1                        8:33   0  20.2M  0 part
>
> └─sdc2                        8:34   0 419.2G  0 part
>
>
>
> [root(a)localhost ~]# lspci
>
> 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev
> 02)
>
> 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
>
> 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton
> II]
>
> 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
>
> 00:02.0 SCSI storage controller: Red Hat, Inc Virtio SCSI
>
>
>
>
>
> [root(a)localhost ~]# lsmod
>
> Module                  Size  Used by
>
> ib_ipoib               80839  0
>
> rdma_ucm               15739  0
>
> ib_ucm                 12328  0
>
> ib_uverbs              40372  2 rdma_ucm,ib_ucm
>
> ib_umad                13487  0
>
> rdma_cm                36555  1 rdma_ucm
>
> ib_cm                  36900  3 ib_ipoib,ib_ucm,rdma_cm
>
> iw_cm                  32976  1 rdma_cm
>
> ib_sa                  24092  4 ib_ipoib,rdma_ucm,rdma_cm,ib_cm
>
> ib_mad                 41340  3 ib_umad,ib_cm,ib_sa
>
> ib_core                82732  10 ib_ipoib,rdma_ucm,ib_ucm,ib_
> uverbs,ib_umad,rdma_cm,ib_cm,iw_cm,ib_sa,ib_mad
>
> ib_addr                 8304  3 rdma_ucm,rdma_cm,ib_core
>
> ipv6                  336368  14 ib_ipoib,ib_addr
>
> i2c_piix4              11232  0
>
> i2c_core               29132  1 i2c_piix4
>
> sg                     29350  0
>
> ext4                  381065  2
>
> jbd2                   93284  1 ext4
>
> mbcache                 8193  1 ext4
>
> virtio_scsi            10761  0
>
> sd_mod                 37158  3
>
> crc_t10dif              1209  1 sd_mod
>
> virtio_pci              7512  0
>
> virtio_ring             8891  2 virtio_scsi,virtio_pci
>
> virtio                  5639  2 virtio_scsi,virtio_pci
>
> pata_acpi               3701  0
>
> ata_generic             3837  0
>
> ata_piix               24409  2
>
> dm_mirror              14864  0
>
> dm_region_hash         12085  1 dm_mirror
>
> dm_log                  9930  2 dm_mirror,dm_region_hash
>
> dm_mod                102467  8 dm_mirror,dm_log
>
>
>
>
>
> Regards
>
> Nitin
>
>
>
>
>
> On Sat, Oct 7, 2017 at 10:10 AM, Nitin Gupta <nitin.gupta981(a)gmail.com>
> wrote:
>
> Hi Jim
>
>
>
> Thanks , i will try to install virtio-scsi and update you
>
>
>
> Regards
>
> Nitin
>
>
>
> On Sat, Oct 7, 2017 at 12:01 AM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Nitin,
>
>
>
> Can you try loading the virtio-scsi module in the guest VM?
>
>
>
> Without a virtio-scsi driver in the guest, there is no way for the guest
> to see the virtio-scsi device backend created by the SPDK vhost target.
>
>
>
> Thanks,
>
>
>
> -Jim
>
>
>
>
>
> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
> *Date: *Friday, October 6, 2017 at 1:28 AM
>
>
> *To: *James Harris <james.r.harris(a)intel.com>
> *Cc: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> Thanks for looking logs ,
>
> Please find attached vhost log  and qemu command which i am invoking
>
>
>
> /usr/local/bin/qemu-system-x86_64 -m 1024 -object
> memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
> -nographic -no-user-config -nodefaults -serial mon:telnet:localhost:7704,server,nowait
> -monitor mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem
> -drive file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device
> ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0
> -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm
>
>
>
> 2.  looks like there is no virtio-scsi module loaded in guest VM
>
>
>
> i ran lsmod command in guest VM please find below output
>
>
>
> [root(a)localhost ~]# lsmod
>
> Module                  Size  Used by
>
> ipt_REJECT              2349  2
>
> nf_conntrack_ipv4       9440  2
>
> nf_defrag_ipv4          1449  1 nf_conntrack_ipv4
>
> iptable_filter          2759  1
>
> ip_tables              17765  1 iptable_filter
>
> ip6t_REJECT             4562  2
>
> nf_conntrack_ipv6       8650  2
>
> nf_defrag_ipv6         12148  1 nf_conntrack_ipv6
>
> xt_state                1458  4
>
> nf_conntrack           79611  3 nf_conntrack_ipv4,nf_
> conntrack_ipv6,xt_state
>
> ip6table_filter         2855  1
>
> ip6_tables             19424  1 ip6table_filter
>
> ipv6                  322291  15 ip6t_REJECT,nf_conntrack_ipv6,
> nf_defrag_ipv6
>
> i2c_piix4              12574  0
>
> i2c_core               31274  1 i2c_piix4
>
> sg                     30090  0
>
> ext4                  359671  2
>
> mbcache                 7918  1 ext4
>
> jbd2                   88768  1 ext4
>
> sd_mod                 38196  3
>
> crc_t10dif              1507  1 sd_mod
>
> virtio_pci              6653  0
>
> virtio_ring             7169  1 virtio_pci
>
> virtio                  4824  1 virtio_pci
>
> pata_acpi               3667  0
>
> ata_generic             3611  0
>
> ata_piix               22652  2
>
> dm_mod                 75539  6
>
>
>
>
>
> please let me know if i am missing something
>
>
>
> Regards
>
> Nitin
>
>
>
>
>
> On Fri, Oct 6, 2017 at 9:28 AM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Thanks Nitin.  I don’t see the SPDK vhost log attached though – could you
> add it?
>
>
>
> Can you also confirm the virtio-scsi module is loaded in your guest VM?
>
>
>
> -Jim
>
>
>
>
>
> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
> *Date: *Thursday, October 5, 2017 at 3:49 AM
>
>
> *To: *James Harris <james.r.harris(a)intel.com>
> *Cc: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> Please find attached Vm-guest-boot up log and host dmesg log
>
>
>
> Regards
>
> Nitin
>
>
>
> On Wed, Oct 4, 2017 at 11:36 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Nitin,
>
>
>
> It would be most helpful if you could get lspci working on your guest VM.
>
>
>
> Could you post dmesg contents from your VM and the SPDK vhost log after
> the VM has booted?
>
>
>
> -Jim
>
>
>
>
>
> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
> *Date: *Wednesday, October 4, 2017 at 10:42 AM
> *To: *James Harris <james.r.harris(a)intel.com>
> *Cc: *Storage Performance Development Kit <spdk(a)lists.01.org>
>
>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> i am running this  on remote box which is having linux 3.10 .
>
> on the guest VM lspci command is not working and i am not able to install
> lspci as well
>
> below is the lsblk -a command output -S is also not available in guest VM
>
>
>
> NAME                        MAJ:MIN RM   SIZE RO MOUNTPOINT
>
> ram0                          1:0    0    16M  0
>
> ram1                          1:1    0    16M  0
>
> ram2                          1:2    0    16M  0
>
> ram3                          1:3    0    16M  0
>
> ram4                          1:4    0    16M  0
>
> ram5                          1:5    0    16M  0
>
> ram6                          1:6    0    16M  0
>
> ram7                          1:7    0    16M  0
>
> ram8                          1:8    0    16M  0
>
> ram9                          1:9    0    16M  0
>
> ram10                         1:10   0    16M  0
>
> ram11                         1:11   0    16M  0
>
> ram12                         1:12   0    16M  0
>
> ram13                         1:13   0    16M  0
>
> ram14                         1:14   0    16M  0
>
> ram15                         1:15   0    16M  0
>
> loop0                         7:0    0         0
>
> loop1                         7:1    0         0
>
> loop2                         7:2    0         0
>
> loop3                         7:3    0         0
>
> loop4                         7:4    0         0
>
> loop5                         7:5    0         0
>
> loop6                         7:6    0         0
>
> loop7                         7:7    0         0
>
> sda                           8:0    0     8G  0
>
> ├─sda1                        8:1    0   500M  0 /boot
>
> └─sda2                        8:2    0   7.5G  0
>
>   ├─VolGroup-lv_root (dm-0) 253:0    0   5.6G  0 /
>
>   └─VolGroup-lv_swap (dm-1) 253:1    0     2G  0 [SWAP]
>
>
>
> Regards
>
> Nitin
>
>
>
> On Wed, Oct 4, 2017 at 10:13 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Nitin,
>
>
>
> Are you running these commands from the host or the VM?  You will only see
> the virtio-scsi controller in lspci output from the guest VM.
>
>
>
> -Jim
>
>
>
>
>
> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
> *Date: *Tuesday, October 3, 2017 at 12:23 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>, James
> Harris <james.r.harris(a)intel.com>
>
>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> One quick update , after running ./script/setup.h for spdk nvme drive is
> converting to uio generic pci device .
>
> so only difference which i found after and before mapping is command for
> ls -l /dev/u*
>
>
>
> can i use /dev/uio0 are the nvme device
>
> Regards
>
> Nitin
>
>
>
> On Tue, Oct 3, 2017 at 11:30 AM, Nitin Gupta <nitin.gupta981(a)gmail.com>
> wrote:
>
> Hi Jim
>
>
>
> Looks like sdf to sdi is the nvme , please correct me if i ma wrong
>
>
>
> -bash-4.2# lsblk -S
>
> NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
>
> sda  0:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sdb  1:0:0:0    disk ATA      ST31000524NS     SN11 sata
>
> sdc  2:0:0:0    disk ATA      ST31000524NS     SN12 sata
>
> sdd  3:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sde  5:0:0:0    disk ATA      SAMSUNG MZ7WD120 103Q sata
>
> sdf  6:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sdg  7:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sdh  8:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sdi  9:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
>
>
> Regards
>
> Nitin
>
>
>
> On Tue, Oct 3, 2017 at 11:21 AM, Nitin Gupta <nitin.gupta981(a)gmail.com>
> wrote:
>
> Hi Jim
>
>
>
> i am getting below output for lspci  for NVram
>
>
>
> d8:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
>
> d9:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
>
> da:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
>
> db:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
>
>
>
> lsblk
>
>
>
> NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
>
> sda      8:0    0 223.6G  0 disk
>
> ├─sda1   8:1    0     6G  0 part [SWAP]
>
> ├─sda2   8:2    0   512M  0 part /bootmgr
>
> └─sda3   8:3    0 217.1G  0 part /
>
> sdb      8:16   0 931.5G  0 disk
>
> sdc      8:32   0 931.5G  0 disk
>
> sdd      8:48   0 223.6G  0 disk
>
> sde      8:64   0 111.8G  0 disk
>
> sdf      8:80   0 223.6G  0 disk
>
> sdg      8:96   0 223.6G  0 disk
>
> sdh      8:112  0 223.6G  0 disk
>
> sdi      8:128  0 223.6G  0 disk
>
>
>
>
>
> So how to know which one is virto-scsi  controller basically i wanted to
> run fio test  with nvme mapped device
>
>
>
>
>
> On Mon, Oct 2, 2017 at 11:12 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Nitin,
>
>
>
> lspci should show you the virtio-scsi controller PCI device.
>
> lsblk –S should show you the SCSI block devices attached to that
> virtio-scsi controller.
>
>
>
> -Jim
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Nitin Gupta <
> nitin.gupta981(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Monday, October 2, 2017 at 10:38 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> Thanks for your reply and sorry for my late reply ..
>
> could you please  give one example to know how to identify virtio-scsi
> controller in the linux
>
> i mean which directory it will be present or which file system ?
>
>
>
> Regards
>
> Nitin
>
>
>
> On Thu, Sep 28, 2017 at 8:30 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Nitin,
>
>
>
> You should see a virtio-scsi controller in the VM, not an NVMe device.
> This controller should have one LUN attached, which SPDK vhost maps to the
> NVMe device attached to the host.
>
>
>
> -Jim
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Nitin Gupta <
> nitin.gupta981(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Thursday, September 28, 2017 at 4:07 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *[SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi All
>
>
>
> i am new in spdk development and currently doing spdk setup in that  was
> able to setup back-end storage with NVME .After running the VM with
> following command , there is no nvme drive present .
>
>
>
> /usr/local/bin/qemu-system-x86_64 -m 1024 -object
> memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
> -nographic -no-user-config -nodefaults -serial mon:telnet:localhost:7704,server,nowait
> -monitor mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem
> -drive file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device
> ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0
> -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm
>
>
>
>
>
> how to identify which is nvme drive ?
>
> is there any way to  enable nvme from qemu command ?
>
>
>
> PS:  i have already specified the nvme drive in vhost.conf.in
>
>
>
> Regards
>
> Nitin
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 70255 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [SPDK] nvme drive not showing in vm in spdk
@ 2017-10-11  9:52 Wodkowski, PawelX
  0 siblings, 0 replies; 23+ messages in thread
From: Wodkowski, PawelX @ 2017-10-11  9:52 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 17389 bytes --]

Consider this config:

[Malloc]
  NumberOfLuns 8
  LunSizeInMb 128
  BlockSize 512

[Split]
  Split Nvme0n1 8

[VhostScsi0]
  Name ctrl0
  Dev 0 Nvme0n1p0
  Dev 1 Malloc0
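
For this controller the QEMU -chardev option would point at a socket named after it — a sketch assuming, as in the qemu command quoted below in this thread, that the vhost socket is created under ./spdk:

-chardev socket,id=char0,path=./spdk/ctrl0 -device vhost-user-scsi-pci,id=scsi0,chardev=char0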

This is the output from my VM (for readability I filter the device list using ‘| grep host’).

# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 VGA compatible controller: Device 1234:1111 (rev 02)
00:03.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 03)
00:04.0 SCSI storage controller: Red Hat, Inc Virtio SCSI

# ll /sys/block/ | grep host
lrwxrwxrwx  1 root root 0 Oct 11 15:16 sda -> ../devices/pci0000:00/0000:00:01.1/ata2/host1/target1:0:0/1:0:0:0/block/sda/
lrwxrwxrwx  1 root root 0 Oct 11 15:16 sdb -> ../devices/pci0000:00/0000:00:04.0/virtio0/host2/target2:0:0/2:0:0:0/block/sdb/
lrwxrwxrwx  1 root root 0 Oct 11 15:16 sdc -> ../devices/pci0000:00/0000:00:04.0/virtio0/host2/target2:0:1/2:0:1:0/block/sdc/

As you can see, the device reported as “SCSI storage controller: Red Hat, Inc Virtio SCSI” is (in this case) the SPDK vhost device. Note its PCI address
and use it to figure out which disk is which. Here I have two targets defined in vhost.conf (one a split of the NVMe device, the other a Malloc disk) and two
SCSI disks, sdb and sdc, in the VM. Since Dev 0 in vhost.conf is Nvme0n1p0, target2:0:0 is the NVMe split device, mapped to sdb. Analogously,
target2:0:1 is Malloc0, mapped to sdc. To confirm this I run the following command:

# lsblk -S
NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
sda  1:0:0:0    disk ATA      QEMU HARDDISK    2.5+ ata
sdb  2:0:0:0    disk INTEL    Split Disk       0001
sdc  2:0:1:0    disk INTEL    Malloc disk      0001

Pawel

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Nitin Gupta
Sent: Wednesday, October 11, 2017 11:07 AM
To: Harris, James R <james.r.harris(a)intel.com>
Cc: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

i was able to update my  environment for guest VM which runs now in 2.6.32-696.el6.x86_64
please find the lspci output and able to load virtio-scsi  module as well

Please help me  to understand  , how to identify nvme disk mapping .
below mapping we used  in  etc/spdk/vhost.conf.in<http://vhost.conf.in>

Question :-

[VhostScsi0]
  # Define name for controller
  Name vhost.0
  # Assign devices from backend
  # Use the first malloc device
  Dev 0 Malloc0
  #Dev 1 Malloc1
  Dev 2 Nvme0n1
  #Dev 3 Malloc3

[root(a)localhost ~]# lsblk
NAME                        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                           8:0    0     8G  0 disk
├─sda1                        8:1    0   500M  0 part /boot
└─sda2                        8:2    0   7.5G  0 part
  ├─VolGroup-lv_root (dm-0) 253:0    0   6.7G  0 lvm  /
  └─VolGroup-lv_swap (dm-1) 253:1    0   816M  0 lvm  [SWAP]
sdb                           8:16   0   256M  0 disk
sdc                           8:32   0 419.2G  0 disk
├─sdc1                        8:33   0  20.2M  0 part
└─sdc2                        8:34   0 419.2G  0 part

[root(a)localhost ~]# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 SCSI storage controller: Red Hat, Inc Virtio SCSI


[root(a)localhost ~]# lsmod
Module                  Size  Used by
ib_ipoib               80839  0
rdma_ucm               15739  0
ib_ucm                 12328  0
ib_uverbs              40372  2 rdma_ucm,ib_ucm
ib_umad                13487  0
rdma_cm                36555  1 rdma_ucm
ib_cm                  36900  3 ib_ipoib,ib_ucm,rdma_cm
iw_cm                  32976  1 rdma_cm
ib_sa                  24092  4 ib_ipoib,rdma_ucm,rdma_cm,ib_cm
ib_mad                 41340  3 ib_umad,ib_cm,ib_sa
ib_core                82732  10 ib_ipoib,rdma_ucm,ib_ucm,ib_uverbs,ib_umad,rdma_cm,ib_cm,iw_cm,ib_sa,ib_mad
ib_addr                 8304  3 rdma_ucm,rdma_cm,ib_core
ipv6                  336368  14 ib_ipoib,ib_addr
i2c_piix4              11232  0
i2c_core               29132  1 i2c_piix4
sg                     29350  0
ext4                  381065  2
jbd2                   93284  1 ext4
mbcache                 8193  1 ext4
virtio_scsi            10761  0
sd_mod                 37158  3
crc_t10dif              1209  1 sd_mod
virtio_pci              7512  0
virtio_ring             8891  2 virtio_scsi,virtio_pci
virtio                  5639  2 virtio_scsi,virtio_pci
pata_acpi               3701  0
ata_generic             3837  0
ata_piix               24409  2
dm_mirror              14864  0
dm_region_hash         12085  1 dm_mirror
dm_log                  9930  2 dm_mirror,dm_region_hash
dm_mod                102467  8 dm_mirror,dm_log


Regards
Nitin


On Sat, Oct 7, 2017 at 10:10 AM, Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>> wrote:
Hi Jim

Thanks , i will try to install virtio-scsi and update you

Regards
Nitin

On Sat, Oct 7, 2017 at 12:01 AM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Nitin,

Can you try loading the virtio-scsi module in the guest VM?

Without a virtio-scsi driver in the guest, there is no way for the guest to see the virtio-scsi device backend created by the SPDK vhost target.

Thanks,

-Jim


From: Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Date: Friday, October 6, 2017 at 1:28 AM

To: James Harris <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>
Cc: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

Thanks for looking logs ,
Please find attached vhost log  and qemu command which i am invoking

/usr/local/bin/qemu-system-x86_64 -m 1024 -object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on -nographic -no-user-config -nodefaults -serial mon:telnet:localhost:7704,server,nowait -monitor mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem -drive file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0 -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm

2.  looks like there is no virtio-scsi module loaded in guest VM

i ran lsmod command in guest VM please find below output

[root(a)localhost ~]# lsmod
Module                  Size  Used by
ipt_REJECT              2349  2
nf_conntrack_ipv4       9440  2
nf_defrag_ipv4          1449  1 nf_conntrack_ipv4
iptable_filter          2759  1
ip_tables              17765  1 iptable_filter
ip6t_REJECT             4562  2
nf_conntrack_ipv6       8650  2
nf_defrag_ipv6         12148  1 nf_conntrack_ipv6
xt_state                1458  4
nf_conntrack           79611  3 nf_conntrack_ipv4,nf_conntrack_ipv6,xt_state
ip6table_filter         2855  1
ip6_tables             19424  1 ip6table_filter
ipv6                  322291  15 ip6t_REJECT,nf_conntrack_ipv6,nf_defrag_ipv6
i2c_piix4              12574  0
i2c_core               31274  1 i2c_piix4
sg                     30090  0
ext4                  359671  2
mbcache                 7918  1 ext4
jbd2                   88768  1 ext4
sd_mod                 38196  3
crc_t10dif              1507  1 sd_mod
virtio_pci              6653  0
virtio_ring             7169  1 virtio_pci
virtio                  4824  1 virtio_pci
pata_acpi               3667  0
ata_generic             3611  0
ata_piix               22652  2
dm_mod                 75539  6


please let me know if i am missing something

Regards
Nitin


On Fri, Oct 6, 2017 at 9:28 AM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Thanks Nitin.  I don’t see the SPDK vhost log attached though – could you add it?

Can you also confirm the virtio-scsi module is loaded in your guest VM?

-Jim


From: Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Date: Thursday, October 5, 2017 at 3:49 AM

To: James Harris <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>
Cc: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

Please find attached Vm-guest-boot up log and host dmesg log

Regards
Nitin

On Wed, Oct 4, 2017 at 11:36 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Nitin,

It would be most helpful if you could get lspci working on your guest VM.

Could you post dmesg contents from your VM and the SPDK vhost log after the VM has booted?

-Jim


From: Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Date: Wednesday, October 4, 2017 at 10:42 AM
To: James Harris <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>
Cc: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>

Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

i am running this  on remote box which is having linux 3.10 .
on the guest VM lspci command is not working and i am not able to install lspci as well
below is the lsblk -a command output -S is also not available in guest VM

NAME                        MAJ:MIN RM   SIZE RO MOUNTPOINT
ram0                          1:0    0    16M  0
ram1                          1:1    0    16M  0
ram2                          1:2    0    16M  0
ram3                          1:3    0    16M  0
ram4                          1:4    0    16M  0
ram5                          1:5    0    16M  0
ram6                          1:6    0    16M  0
ram7                          1:7    0    16M  0
ram8                          1:8    0    16M  0
ram9                          1:9    0    16M  0
ram10                         1:10   0    16M  0
ram11                         1:11   0    16M  0
ram12                         1:12   0    16M  0
ram13                         1:13   0    16M  0
ram14                         1:14   0    16M  0
ram15                         1:15   0    16M  0
loop0                         7:0    0         0
loop1                         7:1    0         0
loop2                         7:2    0         0
loop3                         7:3    0         0
loop4                         7:4    0         0
loop5                         7:5    0         0
loop6                         7:6    0         0
loop7                         7:7    0         0
sda                           8:0    0     8G  0
├─sda1                        8:1    0   500M  0 /boot
└─sda2                        8:2    0   7.5G  0
  ├─VolGroup-lv_root (dm-0) 253:0    0   5.6G  0 /
  └─VolGroup-lv_swap (dm-1) 253:1    0     2G  0 [SWAP]

Regards
Nitin

On Wed, Oct 4, 2017 at 10:13 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Nitin,

Are you running these commands from the host or the VM?  You will only see the virtio-scsi controller in lspci output from the guest VM.

-Jim


From: Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Date: Tuesday, October 3, 2017 at 12:23 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>, James Harris <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>

Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

One quick update , after running ./script/setup.h for spdk nvme drive is converting to uio generic pci device .
so only difference which i found after and before mapping is command for ls -l /dev/u*

can i use /dev/uio0 are the nvme device
Regards
Nitin

On Tue, Oct 3, 2017 at 11:30 AM, Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>> wrote:
Hi Jim

Looks like sdf to sdi is the nvme , please correct me if i ma wrong

-bash-4.2# lsblk -S
NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
sda  0:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdb  1:0:0:0    disk ATA      ST31000524NS     SN11 sata
sdc  2:0:0:0    disk ATA      ST31000524NS     SN12 sata
sdd  3:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sde  5:0:0:0    disk ATA      SAMSUNG MZ7WD120 103Q sata
sdf  6:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdg  7:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdh  8:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdi  9:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata

Regards
Nitin

On Tue, Oct 3, 2017 at 11:21 AM, Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>> wrote:
Hi Jim

i am getting below output for lspci  for NVram

d8:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)
d9:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)
da:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)
db:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)

lsblk

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 223.6G  0 disk
├─sda1   8:1    0     6G  0 part [SWAP]
├─sda2   8:2    0   512M  0 part /bootmgr
└─sda3   8:3    0 217.1G  0 part /
sdb      8:16   0 931.5G  0 disk
sdc      8:32   0 931.5G  0 disk
sdd      8:48   0 223.6G  0 disk
sde      8:64   0 111.8G  0 disk
sdf      8:80   0 223.6G  0 disk
sdg      8:96   0 223.6G  0 disk
sdh      8:112  0 223.6G  0 disk
sdi      8:128  0 223.6G  0 disk


So how to know which one is virto-scsi  controller basically i wanted to run fio test  with nvme mapped device


On Mon, Oct 2, 2017 at 11:12 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Nitin,

lspci should show you the virtio-scsi controller PCI device.
lsblk –S should show you the SCSI block devices attached to that virtio-scsi controller.

-Jim


From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Monday, October 2, 2017 at 10:38 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

Thanks for your reply and sorry for my late reply ..
could you please  give one example to know how to identify virtio-scsi controller in the linux
i mean which directory it will be present or which file system ?

Regards
Nitin

On Thu, Sep 28, 2017 at 8:30 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Nitin,

You should see a virtio-scsi controller in the VM, not an NVMe device.  This controller should have one LUN attached, which SPDK vhost maps to the NVMe device attached to the host.

-Jim


From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Thursday, September 28, 2017 at 4:07 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: [SPDK] nvme drive not showing in vm in spdk

Hi All

i am new in spdk development and currently doing spdk setup in that  was able to setup back-end storage with NVME .After running the VM with following command , there is no nvme drive present .

/usr/local/bin/qemu-system-x86_64 -m 1024 -object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on -nographic -no-user-config -nodefaults -serial mon:telnet:localhost:7704,server,nowait -monitor mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem -drive file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0 -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm


how to identify which is nvme drive ?
is there any way to  enable nvme from qemu command ?

PS:  i have already specified the nvme drive in vhost.conf.in<http://vhost.conf.in>

Regards
Nitin

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk









[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 91197 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [SPDK] nvme drive not showing in vm in spdk
@ 2017-10-11  9:07 Nitin Gupta
  0 siblings, 0 replies; 23+ messages in thread
From: Nitin Gupta @ 2017-10-11  9:07 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 16095 bytes --]

Hi Jim

I was able to update my environment; the guest VM now runs 2.6.32-696.el6.x86_64.
Please find the lspci output below; I am also able to load the virtio-scsi module.

Please help me understand how to identify the NVMe disk mapping.
Below is the mapping we used in etc/spdk/vhost.conf.in

Question :-

[VhostScsi0]
  # Define name for controller
  Name vhost.0
  # Assign devices from backend
  # Use the first malloc device
  Dev 0 Malloc0
  #Dev 1 Malloc1
  Dev 2 Nvme0n1
  #Dev 3 Malloc3
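
Given the Dev indices above, the expected mapping (an assumption read off the config, still to be confirmed in the guest) is target ID 0 -> Malloc0 and target ID 2 -> Nvme0n1 on whichever SCSI host number the virtio-scsi controller is assigned. A minimal check of which targets actually showed up:

ls -d /sys/bus/virtio/devices/virtio*/host*/target*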

[root(a)localhost ~]# lsblk
NAME                        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                           8:0    0     8G  0 disk
├─sda1                        8:1    0   500M  0 part /boot
└─sda2                        8:2    0   7.5G  0 part
  ├─VolGroup-lv_root (dm-0) 253:0    0   6.7G  0 lvm  /
  └─VolGroup-lv_swap (dm-1) 253:1    0   816M  0 lvm  [SWAP]
sdb                           8:16   0   256M  0 disk
sdc                           8:32   0 419.2G  0 disk
├─sdc1                        8:33   0  20.2M  0 part
└─sdc2                        8:34   0 419.2G  0 part

[root(a)localhost ~]# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 SCSI storage controller: Red Hat, Inc Virtio SCSI


[root(a)localhost ~]# lsmod
Module                  Size  Used by
ib_ipoib               80839  0
rdma_ucm               15739  0
ib_ucm                 12328  0
ib_uverbs              40372  2 rdma_ucm,ib_ucm
ib_umad                13487  0
rdma_cm                36555  1 rdma_ucm
ib_cm                  36900  3 ib_ipoib,ib_ucm,rdma_cm
iw_cm                  32976  1 rdma_cm
ib_sa                  24092  4 ib_ipoib,rdma_ucm,rdma_cm,ib_cm
ib_mad                 41340  3 ib_umad,ib_cm,ib_sa
ib_core                82732  10 ib_ipoib,rdma_ucm,ib_ucm,ib_uverbs,ib_umad,rdma_cm,ib_cm,iw_cm,ib_sa,ib_mad
ib_addr                 8304  3 rdma_ucm,rdma_cm,ib_core
ipv6                  336368  14 ib_ipoib,ib_addr
i2c_piix4              11232  0
i2c_core               29132  1 i2c_piix4
sg                     29350  0
ext4                  381065  2
jbd2                   93284  1 ext4
mbcache                 8193  1 ext4
virtio_scsi            10761  0
sd_mod                 37158  3
crc_t10dif              1209  1 sd_mod
virtio_pci              7512  0
virtio_ring             8891  2 virtio_scsi,virtio_pci
virtio                  5639  2 virtio_scsi,virtio_pci
pata_acpi               3701  0
ata_generic             3837  0
ata_piix               24409  2
dm_mirror              14864  0
dm_region_hash         12085  1 dm_mirror
dm_log                  9930  2 dm_mirror,dm_region_hash
dm_mod                102467  8 dm_mirror,dm_log


Regards
Nitin


On Sat, Oct 7, 2017 at 10:10 AM, Nitin Gupta <nitin.gupta981(a)gmail.com>
wrote:

> Hi Jim
>
> Thanks , i will try to install virtio-scsi and update you
>
> Regards
> Nitin
>
> On Sat, Oct 7, 2017 at 12:01 AM, Harris, James R <james.r.harris(a)intel.com
> > wrote:
>
>> Hi Nitin,
>>
>>
>>
>> Can you try loading the virtio-scsi module in the guest VM?
>>
>>
>>
>> Without a virtio-scsi driver in the guest, there is no way for the guest
>> to see the virtio-scsi device backend created by the SPDK vhost target.
>>
>>
>>
>> Thanks,
>>
>>
>>
>> -Jim
>>
>>
>>
>>
>>
>> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
>> *Date: *Friday, October 6, 2017 at 1:28 AM
>>
>> *To: *James Harris <james.r.harris(a)intel.com>
>> *Cc: *Storage Performance Development Kit <spdk(a)lists.01.org>
>> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>>
>>
>>
>> Hi Jim
>>
>>
>>
>> Thanks for looking logs ,
>>
>> Please find attached vhost log  and qemu command which i am invoking
>>
>>
>>
>> /usr/local/bin/qemu-system-x86_64 -m 1024 -object
>> memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
>> -nographic -no-user-config -nodefaults -serial
>> mon:telnet:localhost:7704,server,nowait -monitor
>> mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem -drive
>> file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device
>> ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0
>> -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm
>>
>>
>>
>> 2.  looks like there is no virtio-scsi module loaded in guest VM
>>
>>
>>
>> i ran lsmod command in guest VM please find below output
>>
>>
>>
>> [root(a)localhost ~]# lsmod
>>
>> Module                  Size  Used by
>>
>> ipt_REJECT              2349  2
>>
>> nf_conntrack_ipv4       9440  2
>>
>> nf_defrag_ipv4          1449  1 nf_conntrack_ipv4
>>
>> iptable_filter          2759  1
>>
>> ip_tables              17765  1 iptable_filter
>>
>> ip6t_REJECT             4562  2
>>
>> nf_conntrack_ipv6       8650  2
>>
>> nf_defrag_ipv6         12148  1 nf_conntrack_ipv6
>>
>> xt_state                1458  4
>>
>> nf_conntrack           79611  3 nf_conntrack_ipv4,nf_conntrack
>> _ipv6,xt_state
>>
>> ip6table_filter         2855  1
>>
>> ip6_tables             19424  1 ip6table_filter
>>
>> ipv6                  322291  15 ip6t_REJECT,nf_conntrack_ipv6,
>> nf_defrag_ipv6
>>
>> i2c_piix4              12574  0
>>
>> i2c_core               31274  1 i2c_piix4
>>
>> sg                     30090  0
>>
>> ext4                  359671  2
>>
>> mbcache                 7918  1 ext4
>>
>> jbd2                   88768  1 ext4
>>
>> sd_mod                 38196  3
>>
>> crc_t10dif              1507  1 sd_mod
>>
>> virtio_pci              6653  0
>>
>> virtio_ring             7169  1 virtio_pci
>>
>> virtio                  4824  1 virtio_pci
>>
>> pata_acpi               3667  0
>>
>> ata_generic             3611  0
>>
>> ata_piix               22652  2
>>
>> dm_mod                 75539  6
>>
>>
>>
>>
>>
>> please let me know if i am missing something
>>
>>
>>
>> Regards
>>
>> Nitin
>>
>>
>>
>>
>>
>> On Fri, Oct 6, 2017 at 9:28 AM, Harris, James R <james.r.harris(a)intel.com>
>> wrote:
>>
>> Thanks Nitin.  I don’t see the SPDK vhost log attached though – could you
>> add it?
>>
>>
>>
>> Can you also confirm the virtio-scsi module is loaded in your guest VM?
>>
>>
>>
>> -Jim
>>
>>
>>
>>
>>
>> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
>> *Date: *Thursday, October 5, 2017 at 3:49 AM
>>
>>
>> *To: *James Harris <james.r.harris(a)intel.com>
>> *Cc: *Storage Performance Development Kit <spdk(a)lists.01.org>
>> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>>
>>
>>
>> Hi Jim
>>
>>
>>
>> Please find attached Vm-guest-boot up log and host dmesg log
>>
>>
>>
>> Regards
>>
>> Nitin
>>
>>
>>
>> On Wed, Oct 4, 2017 at 11:36 PM, Harris, James R <
>> james.r.harris(a)intel.com> wrote:
>>
>> Hi Nitin,
>>
>>
>>
>> It would be most helpful if you could get lspci working on your guest VM.
>>
>>
>>
>> Could you post dmesg contents from your VM and the SPDK vhost log after
>> the VM has booted?
>>
>>
>>
>> -Jim
>>
>>
>>
>>
>>
>> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
>> *Date: *Wednesday, October 4, 2017 at 10:42 AM
>> *To: *James Harris <james.r.harris(a)intel.com>
>> *Cc: *Storage Performance Development Kit <spdk(a)lists.01.org>
>>
>>
>> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>>
>>
>>
>> Hi Jim
>>
>>
>>
>> i am running this  on remote box which is having linux 3.10 .
>>
>> on the guest VM lspci command is not working and i am not able to install
>> lspci as well
>>
>> below is the lsblk -a command output -S is also not available in guest VM
>>
>>
>>
>> NAME                        MAJ:MIN RM   SIZE RO MOUNTPOINT
>>
>> ram0                          1:0    0    16M  0
>>
>> ram1                          1:1    0    16M  0
>>
>> ram2                          1:2    0    16M  0
>>
>> ram3                          1:3    0    16M  0
>>
>> ram4                          1:4    0    16M  0
>>
>> ram5                          1:5    0    16M  0
>>
>> ram6                          1:6    0    16M  0
>>
>> ram7                          1:7    0    16M  0
>>
>> ram8                          1:8    0    16M  0
>>
>> ram9                          1:9    0    16M  0
>>
>> ram10                         1:10   0    16M  0
>>
>> ram11                         1:11   0    16M  0
>>
>> ram12                         1:12   0    16M  0
>>
>> ram13                         1:13   0    16M  0
>>
>> ram14                         1:14   0    16M  0
>>
>> ram15                         1:15   0    16M  0
>>
>> loop0                         7:0    0         0
>>
>> loop1                         7:1    0         0
>>
>> loop2                         7:2    0         0
>>
>> loop3                         7:3    0         0
>>
>> loop4                         7:4    0         0
>>
>> loop5                         7:5    0         0
>>
>> loop6                         7:6    0         0
>>
>> loop7                         7:7    0         0
>>
>> sda                           8:0    0     8G  0
>>
>> ├─sda1                        8:1    0   500M  0 /boot
>>
>> └─sda2                        8:2    0   7.5G  0
>>
>>   ├─VolGroup-lv_root (dm-0) 253:0    0   5.6G  0 /
>>
>>   └─VolGroup-lv_swap (dm-1) 253:1    0     2G  0 [SWAP]
>>
>>
>>
>> Regards
>>
>> Nitin
>>
>>
>>
>> On Wed, Oct 4, 2017 at 10:13 PM, Harris, James R <
>> james.r.harris(a)intel.com> wrote:
>>
>> Hi Nitin,
>>
>>
>>
>> Are you running these commands from the host or the VM?  You will only
>> see the virtio-scsi controller in lspci output from the guest VM.
>>
>>
>>
>> -Jim
>>
>>
>>
>>
>>
>> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
>> *Date: *Tuesday, October 3, 2017 at 12:23 AM
>> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>, James
>> Harris <james.r.harris(a)intel.com>
>>
>>
>> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>>
>>
>>
>> Hi Jim
>>
>>
>>
>> One quick update , after running ./script/setup.h for spdk nvme drive is
>> converting to uio generic pci device .
>>
>> so only difference which i found after and before mapping is command for
>> ls -l /dev/u*
>>
>>
>>
>> can i use /dev/uio0 are the nvme device
>>
>> Regards
>>
>> Nitin
>>
>>
>>
>> On Tue, Oct 3, 2017 at 11:30 AM, Nitin Gupta <nitin.gupta981(a)gmail.com>
>> wrote:
>>
>> Hi Jim
>>
>>
>>
>> Looks like sdf to sdi is the nvme , please correct me if i ma wrong
>>
>>
>>
>> -bash-4.2# lsblk -S
>>
>> NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
>>
>> sda  0:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>>
>> sdb  1:0:0:0    disk ATA      ST31000524NS     SN11 sata
>>
>> sdc  2:0:0:0    disk ATA      ST31000524NS     SN12 sata
>>
>> sdd  3:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>>
>> sde  5:0:0:0    disk ATA      SAMSUNG MZ7WD120 103Q sata
>>
>> sdf  6:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>>
>> sdg  7:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>>
>> sdh  8:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>>
>> sdi  9:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>>
>>
>>
>> Regards
>>
>> Nitin
>>
>>
>>
>> On Tue, Oct 3, 2017 at 11:21 AM, Nitin Gupta <nitin.gupta981(a)gmail.com>
>> wrote:
>>
>> Hi Jim
>>
>>
>>
>> i am getting below output for lspci  for NVram
>>
>>
>>
>> d8:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53
>> (rev 02)
>>
>> d9:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53
>> (rev 02)
>>
>> da:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53
>> (rev 02)
>>
>> db:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53
>> (rev 02)
>>
>>
>>
>> lsblk
>>
>>
>>
>> NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
>>
>> sda      8:0    0 223.6G  0 disk
>>
>> ├─sda1   8:1    0     6G  0 part [SWAP]
>>
>> ├─sda2   8:2    0   512M  0 part /bootmgr
>>
>> └─sda3   8:3    0 217.1G  0 part /
>>
>> sdb      8:16   0 931.5G  0 disk
>>
>> sdc      8:32   0 931.5G  0 disk
>>
>> sdd      8:48   0 223.6G  0 disk
>>
>> sde      8:64   0 111.8G  0 disk
>>
>> sdf      8:80   0 223.6G  0 disk
>>
>> sdg      8:96   0 223.6G  0 disk
>>
>> sdh      8:112  0 223.6G  0 disk
>>
>> sdi      8:128  0 223.6G  0 disk
>>
>>
>>
>>
>>
>> So how to know which one is virto-scsi  controller basically i wanted to
>> run fio test  with nvme mapped device
>>
>>
>>
>>
>>
>> On Mon, Oct 2, 2017 at 11:12 PM, Harris, James R <
>> james.r.harris(a)intel.com> wrote:
>>
>> Hi Nitin,
>>
>>
>>
>> lspci should show you the virtio-scsi controller PCI device.
>>
>> lsblk –S should show you the SCSI block devices attached to that
>> virtio-scsi controller.
>>
>>
>>
>> -Jim
>>
>>
>>
>>
>>
>> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Nitin Gupta <
>> nitin.gupta981(a)gmail.com>
>> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
>> *Date: *Monday, October 2, 2017 at 10:38 AM
>> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
>> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>>
>>
>>
>> Hi Jim
>>
>>
>>
>> Thanks for your reply and sorry for my late reply ..
>>
>> could you please  give one example to know how to identify virtio-scsi
>> controller in the linux
>>
>> i mean which directory it will be present or which file system ?
>>
>>
>>
>> Regards
>>
>> Nitin
>>
>>
>>
>> On Thu, Sep 28, 2017 at 8:30 PM, Harris, James R <
>> james.r.harris(a)intel.com> wrote:
>>
>> Hi Nitin,
>>
>>
>>
>> You should see a virtio-scsi controller in the VM, not an NVMe device.
>> This controller should have one LUN attached, which SPDK vhost maps to the
>> NVMe device attached to the host.
>>
>>
>>
>> -Jim
>>
>>
>>
>>
>>
>> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Nitin Gupta <
>> nitin.gupta981(a)gmail.com>
>> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
>> *Date: *Thursday, September 28, 2017 at 4:07 AM
>> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
>> *Subject: *[SPDK] nvme drive not showing in vm in spdk
>>
>>
>>
>> Hi All
>>
>>
>>
>> i am new in spdk development and currently doing spdk setup in that  was
>> able to setup back-end storage with NVME .After running the VM with
>> following command , there is no nvme drive present .
>>
>>
>>
>> /usr/local/bin/qemu-system-x86_64 -m 1024 -object
>> memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
>> -nographic -no-user-config -nodefaults -serial
>> mon:telnet:localhost:7704,server,nowait -monitor
>> mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem -drive
>> file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device
>> ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0
>> -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm
>>
>>
>>
>>
>>
>> how to identify which is nvme drive ?
>>
>> is there any way to  enable nvme from qemu command ?
>>
>>
>>
>> PS:  i have already specified the nvme drive in vhost.conf.in
>>
>>
>>
>> Regards
>>
>> Nitin
>>
>>
>> _______________________________________________
>> SPDK mailing list
>> SPDK(a)lists.01.org
>> https://lists.01.org/mailman/listinfo/spdk
>>
>>
>>
>>
>> _______________________________________________
>> SPDK mailing list
>> SPDK(a)lists.01.org
>> https://lists.01.org/mailman/listinfo/spdk
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 44971 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [SPDK] nvme drive not showing in vm in spdk
@ 2017-10-07  4:40 Nitin Gupta
  0 siblings, 0 replies; 23+ messages in thread
From: Nitin Gupta @ 2017-10-07  4:40 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 12343 bytes --]

Hi Jim

Thanks, I will try to install the virtio-scsi module and will update you.

Regards
Nitin

On Sat, Oct 7, 2017 at 12:01 AM, Harris, James R <james.r.harris(a)intel.com>
wrote:

> Hi Nitin,
>
>
>
> Can you try loading the virtio-scsi module in the guest VM?
>
>
>
> Without a virtio-scsi driver in the guest, there is no way for the guest
> to see the virtio-scsi device backend created by the SPDK vhost target.
>
>
>
> Thanks,
>
>
>
> -Jim
>
>
>
>
>
> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
> *Date: *Friday, October 6, 2017 at 1:28 AM
>
> *To: *James Harris <james.r.harris(a)intel.com>
> *Cc: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> Thanks for looking logs ,
>
> Please find attached vhost log  and qemu command which i am invoking
>
>
>
> /usr/local/bin/qemu-system-x86_64 -m 1024 -object
> memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
> -nographic -no-user-config -nodefaults -serial mon:telnet:localhost:7704,server,nowait
> -monitor mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem
> -drive file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device
> ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0
> -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm
>
>
>
> 2.  looks like there is no virtio-scsi module loaded in guest VM
>
>
>
> i ran lsmod command in guest VM please find below output
>
>
>
> [root(a)localhost ~]# lsmod
>
> Module                  Size  Used by
>
> ipt_REJECT              2349  2
>
> nf_conntrack_ipv4       9440  2
>
> nf_defrag_ipv4          1449  1 nf_conntrack_ipv4
>
> iptable_filter          2759  1
>
> ip_tables              17765  1 iptable_filter
>
> ip6t_REJECT             4562  2
>
> nf_conntrack_ipv6       8650  2
>
> nf_defrag_ipv6         12148  1 nf_conntrack_ipv6
>
> xt_state                1458  4
>
> nf_conntrack           79611  3 nf_conntrack_ipv4,nf_
> conntrack_ipv6,xt_state
>
> ip6table_filter         2855  1
>
> ip6_tables             19424  1 ip6table_filter
>
> ipv6                  322291  15 ip6t_REJECT,nf_conntrack_ipv6,
> nf_defrag_ipv6
>
> i2c_piix4              12574  0
>
> i2c_core               31274  1 i2c_piix4
>
> sg                     30090  0
>
> ext4                  359671  2
>
> mbcache                 7918  1 ext4
>
> jbd2                   88768  1 ext4
>
> sd_mod                 38196  3
>
> crc_t10dif              1507  1 sd_mod
>
> virtio_pci              6653  0
>
> virtio_ring             7169  1 virtio_pci
>
> virtio                  4824  1 virtio_pci
>
> pata_acpi               3667  0
>
> ata_generic             3611  0
>
> ata_piix               22652  2
>
> dm_mod                 75539  6
>
>
>
>
>
> please let me know if i am missing something
>
>
>
> Regards
>
> Nitin
>
>
>
>
>
> On Fri, Oct 6, 2017 at 9:28 AM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Thanks Nitin.  I don’t see the SPDK vhost log attached though – could you
> add it?
>
>
>
> Can you also confirm the virtio-scsi module is loaded in your guest VM?
>
>
>
> -Jim
>
>
>
>
>
> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
> *Date: *Thursday, October 5, 2017 at 3:49 AM
>
>
> *To: *James Harris <james.r.harris(a)intel.com>
> *Cc: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> Please find attached Vm-guest-boot up log and host dmesg log
>
>
>
> Regards
>
> Nitin
>
>
>
> On Wed, Oct 4, 2017 at 11:36 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Nitin,
>
>
>
> It would be most helpful if you could get lspci working on your guest VM.
>
>
>
> Could you post dmesg contents from your VM and the SPDK vhost log after
> the VM has booted?
>
>
>
> -Jim
>
>
>
>
>
> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
> *Date: *Wednesday, October 4, 2017 at 10:42 AM
> *To: *James Harris <james.r.harris(a)intel.com>
> *Cc: *Storage Performance Development Kit <spdk(a)lists.01.org>
>
>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> i am running this  on remote box which is having linux 3.10 .
>
> on the guest VM lspci command is not working and i am not able to install
> lspci as well
>
> below is the lsblk -a command output -S is also not available in guest VM
>
>
>
> NAME                        MAJ:MIN RM   SIZE RO MOUNTPOINT
>
> ram0                          1:0    0    16M  0
>
> ram1                          1:1    0    16M  0
>
> ram2                          1:2    0    16M  0
>
> ram3                          1:3    0    16M  0
>
> ram4                          1:4    0    16M  0
>
> ram5                          1:5    0    16M  0
>
> ram6                          1:6    0    16M  0
>
> ram7                          1:7    0    16M  0
>
> ram8                          1:8    0    16M  0
>
> ram9                          1:9    0    16M  0
>
> ram10                         1:10   0    16M  0
>
> ram11                         1:11   0    16M  0
>
> ram12                         1:12   0    16M  0
>
> ram13                         1:13   0    16M  0
>
> ram14                         1:14   0    16M  0
>
> ram15                         1:15   0    16M  0
>
> loop0                         7:0    0         0
>
> loop1                         7:1    0         0
>
> loop2                         7:2    0         0
>
> loop3                         7:3    0         0
>
> loop4                         7:4    0         0
>
> loop5                         7:5    0         0
>
> loop6                         7:6    0         0
>
> loop7                         7:7    0         0
>
> sda                           8:0    0     8G  0
>
> ├─sda1                        8:1    0   500M  0 /boot
>
> └─sda2                        8:2    0   7.5G  0
>
>   ├─VolGroup-lv_root (dm-0) 253:0    0   5.6G  0 /
>
>   └─VolGroup-lv_swap (dm-1) 253:1    0     2G  0 [SWAP]
>
>
>
> Regards
>
> Nitin
>
>
>
> On Wed, Oct 4, 2017 at 10:13 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Nitin,
>
>
>
> Are you running these commands from the host or the VM?  You will only see
> the virtio-scsi controller in lspci output from the guest VM.
>
>
>
> -Jim
>
>
>
>
>
> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
> *Date: *Tuesday, October 3, 2017 at 12:23 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>, James
> Harris <james.r.harris(a)intel.com>
>
>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> One quick update , after running ./script/setup.h for spdk nvme drive is
> converting to uio generic pci device .
>
> so only difference which i found after and before mapping is command for
> ls -l /dev/u*
>
>
>
> can i use /dev/uio0 are the nvme device
>
> Regards
>
> Nitin
>
>
>
> On Tue, Oct 3, 2017 at 11:30 AM, Nitin Gupta <nitin.gupta981(a)gmail.com>
> wrote:
>
> Hi Jim
>
>
>
> Looks like sdf to sdi is the nvme , please correct me if i ma wrong
>
>
>
> -bash-4.2# lsblk -S
>
> NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
>
> sda  0:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sdb  1:0:0:0    disk ATA      ST31000524NS     SN11 sata
>
> sdc  2:0:0:0    disk ATA      ST31000524NS     SN12 sata
>
> sdd  3:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sde  5:0:0:0    disk ATA      SAMSUNG MZ7WD120 103Q sata
>
> sdf  6:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sdg  7:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sdh  8:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sdi  9:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
>
>
> Regards
>
> Nitin
>
>
>
> On Tue, Oct 3, 2017 at 11:21 AM, Nitin Gupta <nitin.gupta981(a)gmail.com>
> wrote:
>
> Hi Jim
>
>
>
> i am getting below output for lspci  for NVram
>
>
>
> d8:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
>
> d9:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
>
> da:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
>
> db:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
>
>
>
> lsblk
>
>
>
> NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
>
> sda      8:0    0 223.6G  0 disk
>
> ├─sda1   8:1    0     6G  0 part [SWAP]
>
> ├─sda2   8:2    0   512M  0 part /bootmgr
>
> └─sda3   8:3    0 217.1G  0 part /
>
> sdb      8:16   0 931.5G  0 disk
>
> sdc      8:32   0 931.5G  0 disk
>
> sdd      8:48   0 223.6G  0 disk
>
> sde      8:64   0 111.8G  0 disk
>
> sdf      8:80   0 223.6G  0 disk
>
> sdg      8:96   0 223.6G  0 disk
>
> sdh      8:112  0 223.6G  0 disk
>
> sdi      8:128  0 223.6G  0 disk
>
>
>
>
>
> So how to know which one is virto-scsi  controller basically i wanted to
> run fio test  with nvme mapped device
>
>
>
>
>
> On Mon, Oct 2, 2017 at 11:12 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Nitin,
>
>
>
> lspci should show you the virtio-scsi controller PCI device.
>
> lsblk –S should show you the SCSI block devices attached to that
> virtio-scsi controller.
>
>
>
> -Jim
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Nitin Gupta <
> nitin.gupta981(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Monday, October 2, 2017 at 10:38 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> Thanks for your reply and sorry for my late reply ..
>
> could you please  give one example to know how to identify virtio-scsi
> controller in the linux
>
> i mean which directory it will be present or which file system ?
>
>
>
> Regards
>
> Nitin
>
>
>
> On Thu, Sep 28, 2017 at 8:30 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Nitin,
>
>
>
> You should see a virtio-scsi controller in the VM, not an NVMe device.
> This controller should have one LUN attached, which SPDK vhost maps to the
> NVMe device attached to the host.
>
>
>
> -Jim
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Nitin Gupta <
> nitin.gupta981(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Thursday, September 28, 2017 at 4:07 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *[SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi All
>
>
>
> i am new in spdk development and currently doing spdk setup in that  was
> able to setup back-end storage with NVME .After running the VM with
> following command , there is no nvme drive present .
>
>
>
> /usr/local/bin/qemu-system-x86_64 -m 1024 -object
> memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
> -nographic -no-user-config -nodefaults -serial mon:telnet:localhost:7704,server,nowait
> -monitor mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem
> -drive file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device
> ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0
> -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm
>
>
>
>
>
> how to identify which is nvme drive ?
>
> is there any way to  enable nvme from qemu command ?
>
>
>
> PS:  i have already specified the nvme drive in vhost.conf.in
>
>
>
> Regards
>
> Nitin
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
>
>
>
>
>
>
>
>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 40115 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [SPDK] nvme drive not showing in vm in spdk
@ 2017-10-06 18:31 Harris, James R
  0 siblings, 0 replies; 23+ messages in thread
From: Harris, James R @ 2017-10-06 18:31 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 11590 bytes --]

Hi Nitin,

Can you try loading the virtio-scsi module in the guest VM?

Without a virtio-scsi driver in the guest, there is no way for the guest to see the virtio-scsi device backend created by the SPDK vhost target.
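
[A minimal sketch of how to check for and load the driver inside the guest, assuming a modular kernel like the one in the lsmod output below; the module path is the usual location and may differ:]

# is the driver loaded, and is it shipped with this kernel at all?
lsmod | grep virtio_scsi
find /lib/modules/$(uname -r) -name 'virtio_scsi.ko*'

# if present, load it and re-check; the vhost-user-scsi LUN should then
# appear as an additional SCSI disk (e.g. a new /dev/sdX in lsblk)
modprobe virtio_scsi
lsmod | grep virtio_scsi
lsblk

If the module is not present at all, the guest kernel was most likely built without CONFIG_SCSI_VIRTIO, and a kernel that includes it would be needed.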

Thanks,

-Jim


From: Nitin Gupta <nitin.gupta981(a)gmail.com>
Date: Friday, October 6, 2017 at 1:28 AM
To: James Harris <james.r.harris(a)intel.com>
Cc: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

Thanks for looking logs ,
Please find attached vhost log  and qemu command which i am invoking

/usr/local/bin/qemu-system-x86_64 -m 1024 -object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on -nographic -no-user-config -nodefaults -serial mon:telnet:localhost:7704,server,nowait -monitor mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem -drive file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0 -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm

2.  looks like there is no virtio-scsi module loaded in guest VM

i ran lsmod command in guest VM please find below output

[root(a)localhost ~]# lsmod
Module                  Size  Used by
ipt_REJECT              2349  2
nf_conntrack_ipv4       9440  2
nf_defrag_ipv4          1449  1 nf_conntrack_ipv4
iptable_filter          2759  1
ip_tables              17765  1 iptable_filter
ip6t_REJECT             4562  2
nf_conntrack_ipv6       8650  2
nf_defrag_ipv6         12148  1 nf_conntrack_ipv6
xt_state                1458  4
nf_conntrack           79611  3 nf_conntrack_ipv4,nf_conntrack_ipv6,xt_state
ip6table_filter         2855  1
ip6_tables             19424  1 ip6table_filter
ipv6                  322291  15 ip6t_REJECT,nf_conntrack_ipv6,nf_defrag_ipv6
i2c_piix4              12574  0
i2c_core               31274  1 i2c_piix4
sg                     30090  0
ext4                  359671  2
mbcache                 7918  1 ext4
jbd2                   88768  1 ext4
sd_mod                 38196  3
crc_t10dif              1507  1 sd_mod
virtio_pci              6653  0
virtio_ring             7169  1 virtio_pci
virtio                  4824  1 virtio_pci
pata_acpi               3667  0
ata_generic             3611  0
ata_piix               22652  2
dm_mod                 75539  6


please let me know if i am missing something

Regards
Nitin


On Fri, Oct 6, 2017 at 9:28 AM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Thanks Nitin.  I don’t see the SPDK vhost log attached though – could you add it?

Can you also confirm the virtio-scsi module is loaded in your guest VM?

-Jim


From: Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Date: Thursday, October 5, 2017 at 3:49 AM

To: James Harris <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>
Cc: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

Please find attached Vm-guest-boot up log and host dmesg log

Regards
Nitin

On Wed, Oct 4, 2017 at 11:36 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Nitin,

It would be most helpful if you could get lspci working on your guest VM.

Could you post dmesg contents from your VM and the SPDK vhost log after the VM has booted?

-Jim


From: Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Date: Wednesday, October 4, 2017 at 10:42 AM
To: James Harris <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>
Cc: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>

Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

i am running this  on remote box which is having linux 3.10 .
on the guest VM lspci command is not working and i am not able to install lspci as well
below is the lsblk -a command output -S is also not available in guest VM

NAME                        MAJ:MIN RM   SIZE RO MOUNTPOINT
ram0                          1:0    0    16M  0
ram1                          1:1    0    16M  0
ram2                          1:2    0    16M  0
ram3                          1:3    0    16M  0
ram4                          1:4    0    16M  0
ram5                          1:5    0    16M  0
ram6                          1:6    0    16M  0
ram7                          1:7    0    16M  0
ram8                          1:8    0    16M  0
ram9                          1:9    0    16M  0
ram10                         1:10   0    16M  0
ram11                         1:11   0    16M  0
ram12                         1:12   0    16M  0
ram13                         1:13   0    16M  0
ram14                         1:14   0    16M  0
ram15                         1:15   0    16M  0
loop0                         7:0    0         0
loop1                         7:1    0         0
loop2                         7:2    0         0
loop3                         7:3    0         0
loop4                         7:4    0         0
loop5                         7:5    0         0
loop6                         7:6    0         0
loop7                         7:7    0         0
sda                           8:0    0     8G  0
├─sda1                        8:1    0   500M  0 /boot
└─sda2                        8:2    0   7.5G  0
  ├─VolGroup-lv_root (dm-0) 253:0    0   5.6G  0 /
  └─VolGroup-lv_swap (dm-1) 253:1    0     2G  0 [SWAP]

Regards
Nitin

On Wed, Oct 4, 2017 at 10:13 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Nitin,

Are you running these commands from the host or the VM?  You will only see the virtio-scsi controller in lspci output from the guest VM.

-Jim


From: Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Date: Tuesday, October 3, 2017 at 12:23 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>, James Harris <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>

Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

One quick update , after running ./script/setup.h for spdk nvme drive is converting to uio generic pci device .
so only difference which i found after and before mapping is command for ls -l /dev/u*

can i use /dev/uio0 are the nvme device
Regards
Nitin

On Tue, Oct 3, 2017 at 11:30 AM, Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>> wrote:
Hi Jim

Looks like sdf to sdi is the nvme , please correct me if i ma wrong

-bash-4.2# lsblk -S
NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
sda  0:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdb  1:0:0:0    disk ATA      ST31000524NS     SN11 sata
sdc  2:0:0:0    disk ATA      ST31000524NS     SN12 sata
sdd  3:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sde  5:0:0:0    disk ATA      SAMSUNG MZ7WD120 103Q sata
sdf  6:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdg  7:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdh  8:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdi  9:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata

Regards
Nitin

On Tue, Oct 3, 2017 at 11:21 AM, Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>> wrote:
Hi Jim

i am getting below output for lspci  for NVram

d8:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)
d9:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)
da:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)
db:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)

lsblk

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 223.6G  0 disk
├─sda1   8:1    0     6G  0 part [SWAP]
├─sda2   8:2    0   512M  0 part /bootmgr
└─sda3   8:3    0 217.1G  0 part /
sdb      8:16   0 931.5G  0 disk
sdc      8:32   0 931.5G  0 disk
sdd      8:48   0 223.6G  0 disk
sde      8:64   0 111.8G  0 disk
sdf      8:80   0 223.6G  0 disk
sdg      8:96   0 223.6G  0 disk
sdh      8:112  0 223.6G  0 disk
sdi      8:128  0 223.6G  0 disk


So how to know which one is virto-scsi  controller basically i wanted to run fio test  with nvme mapped device


On Mon, Oct 2, 2017 at 11:12 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Nitin,

lspci should show you the virtio-scsi controller PCI device.
lsblk –S should show you the SCSI block devices attached to that virtio-scsi controller.

-Jim


From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Monday, October 2, 2017 at 10:38 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

Thanks for your reply and sorry for my late reply ..
could you please  give one example to know how to identify virtio-scsi controller in the linux
i mean which directory it will be present or which file system ?

Regards
Nitin

On Thu, Sep 28, 2017 at 8:30 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Nitin,

You should see a virtio-scsi controller in the VM, not an NVMe device.  This controller should have one LUN attached, which SPDK vhost maps to the NVMe device attached to the host.

-Jim


From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Thursday, September 28, 2017 at 4:07 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: [SPDK] nvme drive not showing in vm in spdk

Hi All

i am new in spdk development and currently doing spdk setup in that  was able to setup back-end storage with NVME .After running the VM with following command , there is no nvme drive present .

/usr/local/bin/qemu-system-x86_64 -m 1024 -object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on -nographic -no-user-config -nodefaults -serial mon:telnet:localhost:7704,server,nowait -monitor mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem -drive file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0 -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm


how to identify which is nvme drive ?
is there any way to  enable nvme from qemu command ?

PS:  i have already specified the nvme drive in vhost.conf.in<http://vhost.conf.in>

Regards
Nitin

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk







[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 54942 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [SPDK] nvme drive not showing in vm in spdk
@ 2017-10-06  8:28 Nitin Gupta
  0 siblings, 0 replies; 23+ messages in thread
From: Nitin Gupta @ 2017-10-06  8:28 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 11311 bytes --]

Hi Jim

Thanks for looking at the logs.
Please find attached the vhost log and the qemu command I am invoking:

/usr/local/bin/qemu-system-x86_64 -m 1024 -object
memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
-nographic -no-user-config -nodefaults -serial
mon:telnet:localhost:7704,server,nowait -monitor
mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem -drive
file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device
ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0
-device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm
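
[As a sanity check, the QEMU monitor configured above (telnet on port 8804) can be used to confirm that the vhost-user-scsi-pci device was actually instantiated; a hypothetical session:]

telnet localhost 8804
(qemu) info qtree    # device tree; scsi0 should appear with its virtio-scsi bus
(qemu) info pci      # PCI devices presented to the guest, including the vhost-user-scsi controller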

2. It looks like there is no virtio-scsi module loaded in the guest VM.

I ran the lsmod command in the guest VM; please find the output below:

[root(a)localhost ~]# lsmod
Module                  Size  Used by
ipt_REJECT              2349  2
nf_conntrack_ipv4       9440  2
nf_defrag_ipv4          1449  1 nf_conntrack_ipv4
iptable_filter          2759  1
ip_tables              17765  1 iptable_filter
ip6t_REJECT             4562  2
nf_conntrack_ipv6       8650  2
nf_defrag_ipv6         12148  1 nf_conntrack_ipv6
xt_state                1458  4
nf_conntrack           79611  3 nf_conntrack_ipv4,nf_conntrack_ipv6,xt_state
ip6table_filter         2855  1
ip6_tables             19424  1 ip6table_filter
ipv6                  322291  15
ip6t_REJECT,nf_conntrack_ipv6,nf_defrag_ipv6
i2c_piix4              12574  0
i2c_core               31274  1 i2c_piix4
sg                     30090  0
ext4                  359671  2
mbcache                 7918  1 ext4
jbd2                   88768  1 ext4
sd_mod                 38196  3
crc_t10dif              1507  1 sd_mod
virtio_pci              6653  0
virtio_ring             7169  1 virtio_pci
virtio                  4824  1 virtio_pci
pata_acpi               3667  0
ata_generic             3611  0
ata_piix               22652  2
dm_mod                 75539  6


Please let me know if I am missing something.

Regards
Nitin


On Fri, Oct 6, 2017 at 9:28 AM, Harris, James R <james.r.harris(a)intel.com>
wrote:

> Thanks Nitin.  I don’t see the SPDK vhost log attached though – could you
> add it?
>
>
>
> Can you also confirm the virtio-scsi module is loaded in your guest VM?
>
>
>
> -Jim
>
>
>
>
>
> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
> *Date: *Thursday, October 5, 2017 at 3:49 AM
>
> *To: *James Harris <james.r.harris(a)intel.com>
> *Cc: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> Please find attached Vm-guest-boot up log and host dmesg log
>
>
>
> Regards
>
> Nitin
>
>
>
> On Wed, Oct 4, 2017 at 11:36 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Nitin,
>
>
>
> It would be most helpful if you could get lspci working on your guest VM.
>
>
>
> Could you post dmesg contents from your VM and the SPDK vhost log after
> the VM has booted?
>
>
>
> -Jim
>
>
>
>
>
> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
> *Date: *Wednesday, October 4, 2017 at 10:42 AM
> *To: *James Harris <james.r.harris(a)intel.com>
> *Cc: *Storage Performance Development Kit <spdk(a)lists.01.org>
>
>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> i am running this  on remote box which is having linux 3.10 .
>
> on the guest VM lspci command is not working and i am not able to install
> lspci as well
>
> below is the lsblk -a command output -S is also not available in guest VM
>
>
>
> NAME                        MAJ:MIN RM   SIZE RO MOUNTPOINT
>
> ram0                          1:0    0    16M  0
>
> ram1                          1:1    0    16M  0
>
> ram2                          1:2    0    16M  0
>
> ram3                          1:3    0    16M  0
>
> ram4                          1:4    0    16M  0
>
> ram5                          1:5    0    16M  0
>
> ram6                          1:6    0    16M  0
>
> ram7                          1:7    0    16M  0
>
> ram8                          1:8    0    16M  0
>
> ram9                          1:9    0    16M  0
>
> ram10                         1:10   0    16M  0
>
> ram11                         1:11   0    16M  0
>
> ram12                         1:12   0    16M  0
>
> ram13                         1:13   0    16M  0
>
> ram14                         1:14   0    16M  0
>
> ram15                         1:15   0    16M  0
>
> loop0                         7:0    0         0
>
> loop1                         7:1    0         0
>
> loop2                         7:2    0         0
>
> loop3                         7:3    0         0
>
> loop4                         7:4    0         0
>
> loop5                         7:5    0         0
>
> loop6                         7:6    0         0
>
> loop7                         7:7    0         0
>
> sda                           8:0    0     8G  0
>
> ├─sda1                        8:1    0   500M  0 /boot
>
> └─sda2                        8:2    0   7.5G  0
>
>   ├─VolGroup-lv_root (dm-0) 253:0    0   5.6G  0 /
>
>   └─VolGroup-lv_swap (dm-1) 253:1    0     2G  0 [SWAP]
>
>
>
> Regards
>
> Nitin
>
>
>
> On Wed, Oct 4, 2017 at 10:13 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Nitin,
>
>
>
> Are you running these commands from the host or the VM?  You will only see
> the virtio-scsi controller in lspci output from the guest VM.
>
>
>
> -Jim
>
>
>
>
>
> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
> *Date: *Tuesday, October 3, 2017 at 12:23 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>, James
> Harris <james.r.harris(a)intel.com>
>
>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> One quick update , after running ./script/setup.h for spdk nvme drive is
> converting to uio generic pci device .
>
> so only difference which i found after and before mapping is command for
> ls -l /dev/u*
>
>
>
> can i use /dev/uio0 are the nvme device
>
> Regards
>
> Nitin
>
>
>
> On Tue, Oct 3, 2017 at 11:30 AM, Nitin Gupta <nitin.gupta981(a)gmail.com>
> wrote:
>
> Hi Jim
>
>
>
> Looks like sdf to sdi is the nvme , please correct me if i ma wrong
>
>
>
> -bash-4.2# lsblk -S
>
> NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
>
> sda  0:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sdb  1:0:0:0    disk ATA      ST31000524NS     SN11 sata
>
> sdc  2:0:0:0    disk ATA      ST31000524NS     SN12 sata
>
> sdd  3:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sde  5:0:0:0    disk ATA      SAMSUNG MZ7WD120 103Q sata
>
> sdf  6:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sdg  7:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sdh  8:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sdi  9:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
>
>
> Regards
>
> Nitin
>
>
>
> On Tue, Oct 3, 2017 at 11:21 AM, Nitin Gupta <nitin.gupta981(a)gmail.com>
> wrote:
>
> Hi Jim
>
>
>
> i am getting below output for lspci  for NVram
>
>
>
> d8:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
>
> d9:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
>
> da:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
>
> db:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
>
>
>
> lsblk
>
>
>
> NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
>
> sda      8:0    0 223.6G  0 disk
>
> ├─sda1   8:1    0     6G  0 part [SWAP]
>
> ├─sda2   8:2    0   512M  0 part /bootmgr
>
> └─sda3   8:3    0 217.1G  0 part /
>
> sdb      8:16   0 931.5G  0 disk
>
> sdc      8:32   0 931.5G  0 disk
>
> sdd      8:48   0 223.6G  0 disk
>
> sde      8:64   0 111.8G  0 disk
>
> sdf      8:80   0 223.6G  0 disk
>
> sdg      8:96   0 223.6G  0 disk
>
> sdh      8:112  0 223.6G  0 disk
>
> sdi      8:128  0 223.6G  0 disk
>
>
>
>
>
> So how to know which one is virto-scsi  controller basically i wanted to
> run fio test  with nvme mapped device
>
>
>
>
>
> On Mon, Oct 2, 2017 at 11:12 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Nitin,
>
>
>
> lspci should show you the virtio-scsi controller PCI device.
>
> lsblk –S should show you the SCSI block devices attached to that
> virtio-scsi controller.
>
>
>
> -Jim
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Nitin Gupta <
> nitin.gupta981(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Monday, October 2, 2017 at 10:38 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> Thanks for your reply and sorry for my late reply ..
>
> could you please  give one example to know how to identify virtio-scsi
> controller in the linux
>
> i mean which directory it will be present or which file system ?
>
>
>
> Regards
>
> Nitin
>
>
>
> On Thu, Sep 28, 2017 at 8:30 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Nitin,
>
>
>
> You should see a virtio-scsi controller in the VM, not an NVMe device.
> This controller should have one LUN attached, which SPDK vhost maps to the
> NVMe device attached to the host.
>
>
>
> -Jim
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Nitin Gupta <
> nitin.gupta981(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Thursday, September 28, 2017 at 4:07 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *[SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi All
>
>
>
> i am new in spdk development and currently doing spdk setup in that  was
> able to setup back-end storage with NVME .After running the VM with
> following command , there is no nvme drive present .
>
>
>
> /usr/local/bin/qemu-system-x86_64 -m 1024 -object
> memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
> -nographic -no-user-config -nodefaults -serial mon:telnet:localhost:7704,server,nowait
> -monitor mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem
> -drive file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device
> ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0
> -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm
>
>
>
>
>
> how to identify which is nvme drive ?
>
> is there any way to  enable nvme from qemu command ?
>
>
>
> PS:  i have already specified the nvme drive in vhost.conf.in
>
>
>
> Regards
>
> Nitin
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
>
>
>
>
>
>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 33710 bytes --]

[-- Attachment #3: vhost-log.txt --]
[-- Type: text/plain, Size: 6090 bytes --]

-bash-4.2# cd /root/spdk
-bash-4.2# ./scripts/setup.sh
0000:d8:00.0 (8086 0a53): nvme -> uio_pci_generic
0000:d9:00.0 (8086 0a53): nvme -> uio_pci_generic
0000:da:00.0 (8086 0a53): nvme -> uio_pci_generic
0000:db:00.0 (8086 0a53): nvme -> uio_pci_generic
0000:00:04.0 (8086 2021): no driver -> uio_pci_generic
-bash-4.2# clear
-bash-4.2# app/vhost/vhost -c etc/spdk/vhost.conf.in a
Starting DPDK 17.05.0 initialization...
[ DPDK EAL parameters: vhost -c 0x1 -m 1024 --file-prefix=spdk_pid3094 ]
EAL: Detected 40 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
Total cores available: 1
Occupied cpu socket mask is 0x1
reactor.c: 362:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on socket 0
EAL: PCI device 0000:d8:00.0 on NUMA socket 1
EAL:   probe driver: 8086:a53 spdk_nvme
EAL: PCI device 0000:d9:00.0 on NUMA socket 1
EAL:   probe driver: 8086:a53 spdk_nvme
EAL: PCI device 0000:da:00.0 on NUMA socket 1
EAL:   probe driver: 8086:a53 spdk_nvme
EAL: PCI device 0000:db:00.0 on NUMA socket 1
EAL:   probe driver: 8086:a53 spdk_nvme
gpt.c: 201:spdk_gpt_check_mbr: *ERROR*: Currently only support GPT Protective MBR format
VHOST_CONFIG: vhost-user server: socket created, fd: 14
VHOST_CONFIG: bind to vhost.0
vhost.c: 426:spdk_vhost_dev_construct: *NOTICE*: Controller vhost.0: new controller added
vhost_scsi.c: 874:spdk_vhost_scsi_dev_add_dev: *NOTICE*: Controller vhost.0: defined device 'Dev 0' using lun 'Malloc0'
vhost_scsi.c: 874:spdk_vhost_scsi_dev_add_dev: *NOTICE*: Controller vhost.0: defined device 'Dev 2' using lun 'Nvme0n1'
VHOST_CONFIG: new vhost user connection is 15
VHOST_CONFIG: new device, handle is 0
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_QUEUE_NUM
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:16
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:17
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:18
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: guest memory region 0, size: 0x40000000
         guest physical addr: 0x0
         guest virtual  addr: 0x7f7b0ac00000
         host  virtual  addr: 0x2aaaaac00000
         mmap addr : 0x2aaaaac00000
         mmap size : 0x40000000
         mmap align: 0x200000
         mmap off  : 0x0
VHOST_CONFIG: last_used_idx (0) and vq->used->idx (61440) mismatches; some packets maybe resent for Tx and dropped for Rx
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:0 file:20
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: last_used_idx (0) and vq->used->idx (61440) mismatches; some packets maybe resent for Tx and dropped for Rx
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:1 file:21
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:2 file:22
VHOST_CONFIG: virtio is now ready for processing.
vhost_scsi.c:1087:new_device: *NOTICE*: Started poller for vhost controller vhost.0 on lcore 0
vhost.c: 250:spdk_vhost_dev_mem_register: *NOTICE*: Registering VM memory for vtophys translation - 0x2aaaaac00000 len:0x40000000
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
vhost_scsi.c:1124:destroy_device_poller_cb: *NOTICE*: Stopping poller for vhost controller vhost.0
VHOST_CONFIG: vring call idx:0 file:23
VHOST_CONFIG: virtio is now ready for processing.
vhost_scsi.c:1087:new_device: *NOTICE*: Started poller for vhost controller vhost.0 on lcore 0
vhost.c: 250:spdk_vhost_dev_mem_register: *NOTICE*: Registering VM memory for vtophys translation - 0x2aaaaac00000 len:0x40000000
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
vhost_scsi.c:1124:destroy_device_poller_cb: *NOTICE*: Stopping poller for vhost controller vhost.0
VHOST_CONFIG: vring call idx:1 file:16
VHOST_CONFIG: virtio is now ready for processing.
vhost_scsi.c:1087:new_device: *NOTICE*: Started poller for vhost controller vhost.0 on lcore 0
vhost.c: 250:spdk_vhost_dev_mem_register: *NOTICE*: Registering VM memory for vtophys translation - 0x2aaaaac00000 len:0x40000000
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
vhost_scsi.c:1124:destroy_device_poller_cb: *NOTICE*: Stopping poller for vhost controller vhost.0
VHOST_CONFIG: vring call idx:2 file:17
VHOST_CONFIG: virtio is now ready for processing.
vhost_scsi.c:1087:new_device: *NOTICE*: Started poller for vhost controller vhost.0 on lcore 0
vhost.c: 250:spdk_vhost_dev_mem_register: *NOTICE*: Registering VM memory for vtophys translation - 0x2aaaaac00000 len:0x40000000
VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: read message VHOST_USER_GET_VRING_BASE
vhost_scsi.c:1124:destroy_device_poller_cb: *NOTICE*: Stopping poller for vhost controller vhost.0
VHOST_CONFIG: vring base idx:0 file:61440
VHOST_CONFIG: read message VHOST_USER_GET_VRING_BASE
VHOST_CONFIG: vring base idx:1 file:61440
VHOST_CONFIG: read message VHOST_USER_GET_VRING_BASE
VHOST_CONFIG: vring base idx:2 file:262

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [SPDK] nvme drive not showing in vm in spdk
@ 2017-10-06  3:58 Harris, James R
  0 siblings, 0 replies; 23+ messages in thread
From: Harris, James R @ 2017-10-06  3:58 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 8871 bytes --]

Thanks Nitin.  I don’t see the SPDK vhost log attached though – could you add it?

Can you also confirm the virtio-scsi module is loaded in your guest VM?
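
[For reference, one way to capture that log is to redirect the vhost application's output when it is started; a sketch, assuming vhost is launched the same way as in the vhost-log.txt attachment elsewhere in this thread:]

# on the host
app/vhost/vhost -c etc/spdk/vhost.conf.in 2>&1 | tee vhost-log.txt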

-Jim


From: Nitin Gupta <nitin.gupta981(a)gmail.com>
Date: Thursday, October 5, 2017 at 3:49 AM
To: James Harris <james.r.harris(a)intel.com>
Cc: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

Please find attached Vm-guest-boot up log and host dmesg log

Regards
Nitin

On Wed, Oct 4, 2017 at 11:36 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Nitin,

It would be most helpful if you could get lspci working on your guest VM.

Could you post dmesg contents from your VM and the SPDK vhost log after the VM has booted?

-Jim


From: Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Date: Wednesday, October 4, 2017 at 10:42 AM
To: James Harris <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>
Cc: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>

Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

i am running this  on remote box which is having linux 3.10 .
on the guest VM lspci command is not working and i am not able to install lspci as well
below is the lsblk -a command output -S is also not available in guest VM

NAME                        MAJ:MIN RM   SIZE RO MOUNTPOINT
ram0                          1:0    0    16M  0
ram1                          1:1    0    16M  0
ram2                          1:2    0    16M  0
ram3                          1:3    0    16M  0
ram4                          1:4    0    16M  0
ram5                          1:5    0    16M  0
ram6                          1:6    0    16M  0
ram7                          1:7    0    16M  0
ram8                          1:8    0    16M  0
ram9                          1:9    0    16M  0
ram10                         1:10   0    16M  0
ram11                         1:11   0    16M  0
ram12                         1:12   0    16M  0
ram13                         1:13   0    16M  0
ram14                         1:14   0    16M  0
ram15                         1:15   0    16M  0
loop0                         7:0    0         0
loop1                         7:1    0         0
loop2                         7:2    0         0
loop3                         7:3    0         0
loop4                         7:4    0         0
loop5                         7:5    0         0
loop6                         7:6    0         0
loop7                         7:7    0         0
sda                           8:0    0     8G  0
├─sda1                        8:1    0   500M  0 /boot
└─sda2                        8:2    0   7.5G  0
  ├─VolGroup-lv_root (dm-0) 253:0    0   5.6G  0 /
  └─VolGroup-lv_swap (dm-1) 253:1    0     2G  0 [SWAP]

Regards
Nitin

On Wed, Oct 4, 2017 at 10:13 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Nitin,

Are you running these commands from the host or the VM?  You will only see the virtio-scsi controller in lspci output from the guest VM.

-Jim


From: Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Date: Tuesday, October 3, 2017 at 12:23 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>, James Harris <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>

Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

One quick update , after running ./script/setup.h for spdk nvme drive is converting to uio generic pci device .
so only difference which i found after and before mapping is command for ls -l /dev/u*

can i use /dev/uio0 are the nvme device
Regards
Nitin

On Tue, Oct 3, 2017 at 11:30 AM, Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>> wrote:
Hi Jim

Looks like sdf to sdi is the nvme , please correct me if i ma wrong

-bash-4.2# lsblk -S
NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
sda  0:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdb  1:0:0:0    disk ATA      ST31000524NS     SN11 sata
sdc  2:0:0:0    disk ATA      ST31000524NS     SN12 sata
sdd  3:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sde  5:0:0:0    disk ATA      SAMSUNG MZ7WD120 103Q sata
sdf  6:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdg  7:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdh  8:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdi  9:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata

Regards
Nitin

On Tue, Oct 3, 2017 at 11:21 AM, Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>> wrote:
Hi Jim

i am getting below output for lspci  for NVram

d8:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)
d9:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)
da:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)
db:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)

lsblk

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 223.6G  0 disk
├─sda1   8:1    0     6G  0 part [SWAP]
├─sda2   8:2    0   512M  0 part /bootmgr
└─sda3   8:3    0 217.1G  0 part /
sdb      8:16   0 931.5G  0 disk
sdc      8:32   0 931.5G  0 disk
sdd      8:48   0 223.6G  0 disk
sde      8:64   0 111.8G  0 disk
sdf      8:80   0 223.6G  0 disk
sdg      8:96   0 223.6G  0 disk
sdh      8:112  0 223.6G  0 disk
sdi      8:128  0 223.6G  0 disk


So how to know which one is virto-scsi  controller basically i wanted to run fio test  with nvme mapped device


On Mon, Oct 2, 2017 at 11:12 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Nitin,

lspci should show you the virtio-scsi controller PCI device.
lsblk –S should show you the SCSI block devices attached to that virtio-scsi controller.

-Jim


From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Monday, October 2, 2017 at 10:38 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

Thanks for your reply and sorry for my late reply ..
could you please  give one example to know how to identify virtio-scsi controller in the linux
i mean which directory it will be present or which file system ?

Regards
Nitin

On Thu, Sep 28, 2017 at 8:30 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Nitin,

You should see a virtio-scsi controller in the VM, not an NVMe device.  This controller should have one LUN attached, which SPDK vhost maps to the NVMe device attached to the host.

-Jim


From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Thursday, September 28, 2017 at 4:07 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: [SPDK] nvme drive not showing in vm in spdk

Hi All

I am new to SPDK development and am currently doing the SPDK setup; as part of that I was able to set up back-end storage with NVMe. After running the VM with the following command, there is no NVMe drive present.

/usr/local/bin/qemu-system-x86_64 -m 1024 -object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on -nographic -no-user-config -nodefaults -serial mon:telnet:localhost:7704,server,nowait -monitor mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem -drive file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0 -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm


How do I identify which one is the NVMe drive?
Is there any way to enable NVMe from the QEMU command line?
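As an aside, if the goal is simply to see an NVMe device inside the guest (without SPDK vhost in the data path), QEMU can emulate one directly; a minimal sketch, where the backing image path /home/qemu/nvme.img is an assumption:

/usr/local/bin/qemu-system-x86_64 -m 1024 --enable-kvm -nographic \
  -drive file=/home/qemu/qcows,format=qcow2,if=none,id=disk \
  -device ide-hd,drive=disk,bootindex=0 \
  -drive file=/home/qemu/nvme.img,format=raw,if=none,id=nvme0 \
  -device nvme,drive=nvme0,serial=spdk0001

The emulated controller then shows up in the guest as /dev/nvme0n1, but it does not exercise the SPDK vhost path.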

PS: I have already specified the NVMe drive in vhost.conf.in.
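For reference, a generic example of the kind of vhost.conf.in entry this refers to, in the legacy SPDK INI-style config of that time (this is not the actual file from this setup; section and key names should be checked against the vhost.conf.in template in the SPDK tree, and the PCI address is taken from the lspci output earlier in the thread):

[Nvme]
  TransportID "trtype:PCIe traddr:0000:d8:00.0" Nvme0

[VhostScsi0]
  Name vhost.0
  Dev 0 Nvme0n1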

Regards
Nitin

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk






[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 43902 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [SPDK] nvme drive not showing in vm in spdk
@ 2017-10-04 18:06 Harris, James R
  0 siblings, 0 replies; 23+ messages in thread
From: Harris, James R @ 2017-10-04 18:06 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 8120 bytes --]

Hi Nitin,

It would be most helpful if you could get lspci working on your guest VM.

Could you post dmesg contents from your VM and the SPDK vhost log after the VM has booted?
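A simple way to capture both, assuming the vhost app is launched from the SPDK tree (binary location and options may differ between SPDK versions):

# on the host: start vhost with its output saved to a file
./app/vhost/vhost -c ./vhost.conf.in 2>&1 | tee /tmp/vhost.log
# in the guest, after boot: dump the kernel log, filtering for device probing
dmesg | grep -i -e pci -e virtio -e scsi > /tmp/guest-dmesg.txt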

-Jim


From: Nitin Gupta <nitin.gupta981(a)gmail.com>
Date: Wednesday, October 4, 2017 at 10:42 AM
To: James Harris <james.r.harris(a)intel.com>
Cc: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

I am running this on a remote box which has Linux 3.10.
On the guest VM the lspci command is not working, and I am not able to install lspci either.
Below is the lsblk -a output; the -S option is also not available in the guest VM.

NAME                        MAJ:MIN RM   SIZE RO MOUNTPOINT
ram0                          1:0    0    16M  0
ram1                          1:1    0    16M  0
ram2                          1:2    0    16M  0
ram3                          1:3    0    16M  0
ram4                          1:4    0    16M  0
ram5                          1:5    0    16M  0
ram6                          1:6    0    16M  0
ram7                          1:7    0    16M  0
ram8                          1:8    0    16M  0
ram9                          1:9    0    16M  0
ram10                         1:10   0    16M  0
ram11                         1:11   0    16M  0
ram12                         1:12   0    16M  0
ram13                         1:13   0    16M  0
ram14                         1:14   0    16M  0
ram15                         1:15   0    16M  0
loop0                         7:0    0         0
loop1                         7:1    0         0
loop2                         7:2    0         0
loop3                         7:3    0         0
loop4                         7:4    0         0
loop5                         7:5    0         0
loop6                         7:6    0         0
loop7                         7:7    0         0
sda                           8:0    0     8G  0
├─sda1                        8:1    0   500M  0 /boot
└─sda2                        8:2    0   7.5G  0
  ├─VolGroup-lv_root (dm-0) 253:0    0   5.6G  0 /
  └─VolGroup-lv_swap (dm-1) 253:1    0     2G  0 [SWAP]
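Note that only the IDE boot disk (sda) is visible here, so the next thing to check is whether the virtio-scsi controller is enumerated at all. Since lspci is unavailable in the guest, the PCI IDs can be read straight from sysfs; a small sketch that only assumes the standard sysfs layout (0x1af4 is the virtio vendor ID, 0x1004/0x1048 the virtio-scsi device IDs):

# list vendor and device IDs for every PCI function the guest sees
for d in /sys/bus/pci/devices/*; do
    echo "$d $(cat $d/vendor) $(cat $d/device)"
done
# any virtio device present?
grep -l 0x1af4 /sys/bus/pci/devices/*/vendor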

Regards
Nitin

On Wed, Oct 4, 2017 at 10:13 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Nitin,

Are you running these commands from the host or the VM?  You will only see the virtio-scsi controller in lspci output from the guest VM.

-Jim


From: Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Date: Tuesday, October 3, 2017 at 12:23 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>, James Harris <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>>

Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

One quick update: after running ./scripts/setup.sh for SPDK, the NVMe drive is rebound to the uio_pci_generic driver.
The only difference I found before and after the rebinding is in the output of ls -l /dev/u*.

Can I use /dev/uio0 as the NVMe device?
Regards
Nitin

On Tue, Oct 3, 2017 at 11:30 AM, Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>> wrote:
Hi Jim

It looks like sdf to sdi are the NVMe drives; please correct me if I am wrong.

-bash-4.2# lsblk -S
NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
sda  0:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdb  1:0:0:0    disk ATA      ST31000524NS     SN11 sata
sdc  2:0:0:0    disk ATA      ST31000524NS     SN12 sata
sdd  3:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sde  5:0:0:0    disk ATA      SAMSUNG MZ7WD120 103Q sata
sdf  6:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdg  7:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdh  8:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdi  9:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata

Regards
Nitin

On Tue, Oct 3, 2017 at 11:21 AM, Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>> wrote:
Hi Jim

I am getting the below output from lspci for the NVMe controllers:

d8:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)
d9:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)
da:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)
db:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)

lsblk

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 223.6G  0 disk
├─sda1   8:1    0     6G  0 part [SWAP]
├─sda2   8:2    0   512M  0 part /bootmgr
└─sda3   8:3    0 217.1G  0 part /
sdb      8:16   0 931.5G  0 disk
sdc      8:32   0 931.5G  0 disk
sdd      8:48   0 223.6G  0 disk
sde      8:64   0 111.8G  0 disk
sdf      8:80   0 223.6G  0 disk
sdg      8:96   0 223.6G  0 disk
sdh      8:112  0 223.6G  0 disk
sdi      8:128  0 223.6G  0 disk


So how do I know which one is the virtio-scsi controller? Basically I want to run an fio test against the NVMe-mapped device.


On Mon, Oct 2, 2017 at 11:12 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Nitin,

lspci should show you the virtio-scsi controller PCI device.
lsblk -S should show you the SCSI block devices attached to that virtio-scsi controller.

-Jim


From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Monday, October 2, 2017 at 10:38 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

Thanks for your reply, and sorry for my late response.
Could you please give an example of how to identify the virtio-scsi controller in Linux?
I mean, under which directory or file system will it show up?

Regards
Nitin

On Thu, Sep 28, 2017 at 8:30 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Nitin,

You should see a virtio-scsi controller in the VM, not an NVMe device.  This controller should have one LUN attached, which SPDK vhost maps to the NVMe device attached to the host.

-Jim


From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Thursday, September 28, 2017 at 4:07 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: [SPDK] nvme drive not showing in vm in spdk

Hi All

I am new to SPDK development and am currently doing the SPDK setup; as part of that I was able to set up back-end storage with NVMe. After running the VM with the following command, there is no NVMe drive present.

/usr/local/bin/qemu-system-x86_64 -m 1024 -object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on -nographic -no-user-config -nodefaults -serial mon:telnet:localhost:7704,server,nowait -monitor mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem -drive file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0 -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm


How do I identify which one is the NVMe drive?
Is there any way to enable NVMe from the QEMU command line?

PS: I have already specified the NVMe drive in vhost.conf.in.

Regards
Nitin

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk





[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 37757 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [SPDK] nvme drive not showing in vm in spdk
@ 2017-10-04 17:42 Nitin Gupta
  0 siblings, 0 replies; 23+ messages in thread
From: Nitin Gupta @ 2017-10-04 17:42 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 7753 bytes --]

Hi Jim

I am running this on a remote box which has Linux 3.10.
On the guest VM the lspci command is not working, and I am not able to
install lspci either.
Below is the lsblk -a output; the -S option is also not available in the guest VM.

NAME                        MAJ:MIN RM   SIZE RO MOUNTPOINT
ram0                          1:0    0    16M  0
ram1                          1:1    0    16M  0
ram2                          1:2    0    16M  0
ram3                          1:3    0    16M  0
ram4                          1:4    0    16M  0
ram5                          1:5    0    16M  0
ram6                          1:6    0    16M  0
ram7                          1:7    0    16M  0
ram8                          1:8    0    16M  0
ram9                          1:9    0    16M  0
ram10                         1:10   0    16M  0
ram11                         1:11   0    16M  0
ram12                         1:12   0    16M  0
ram13                         1:13   0    16M  0
ram14                         1:14   0    16M  0
ram15                         1:15   0    16M  0
loop0                         7:0    0         0
loop1                         7:1    0         0
loop2                         7:2    0         0
loop3                         7:3    0         0
loop4                         7:4    0         0
loop5                         7:5    0         0
loop6                         7:6    0         0
loop7                         7:7    0         0
sda                           8:0    0     8G  0
├─sda1                        8:1    0   500M  0 /boot
└─sda2                        8:2    0   7.5G  0
  ├─VolGroup-lv_root (dm-0) 253:0    0   5.6G  0 /
  └─VolGroup-lv_swap (dm-1) 253:1    0     2G  0 [SWAP]

Regards
Nitin

On Wed, Oct 4, 2017 at 10:13 PM, Harris, James R <james.r.harris(a)intel.com>
wrote:

> Hi Nitin,
>
>
>
> Are you running these commands from the host or the VM?  You will only see
> the virtio-scsi controller in lspci output from the guest VM.
>
>
>
> -Jim
>
>
>
>
>
> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
> *Date: *Tuesday, October 3, 2017 at 12:23 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>, James
> Harris <james.r.harris(a)intel.com>
>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> One quick update , after running ./script/setup.h for spdk nvme drive is
> converting to uio generic pci device .
>
> so only difference which i found after and before mapping is command for
> ls -l /dev/u*
>
>
>
> can i use /dev/uio0 are the nvme device
>
> Regards
>
> Nitin
>
>
>
> On Tue, Oct 3, 2017 at 11:30 AM, Nitin Gupta <nitin.gupta981(a)gmail.com>
> wrote:
>
> Hi Jim
>
>
>
> Looks like sdf to sdi is the nvme , please correct me if i ma wrong
>
>
>
> -bash-4.2# lsblk -S
>
> NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
>
> sda  0:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sdb  1:0:0:0    disk ATA      ST31000524NS     SN11 sata
>
> sdc  2:0:0:0    disk ATA      ST31000524NS     SN12 sata
>
> sdd  3:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sde  5:0:0:0    disk ATA      SAMSUNG MZ7WD120 103Q sata
>
> sdf  6:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sdg  7:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sdh  8:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> sdi  9:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
>
>
> Regards
>
> Nitin
>
>
>
> On Tue, Oct 3, 2017 at 11:21 AM, Nitin Gupta <nitin.gupta981(a)gmail.com>
> wrote:
>
> Hi Jim
>
>
>
> i am getting below output for lspci  for NVram
>
>
>
> d8:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
>
> d9:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
>
> da:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
>
> db:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
>
>
>
> lsblk
>
>
>
> NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
>
> sda      8:0    0 223.6G  0 disk
>
> ├─sda1   8:1    0     6G  0 part [SWAP]
>
> ├─sda2   8:2    0   512M  0 part /bootmgr
>
> └─sda3   8:3    0 217.1G  0 part /
>
> sdb      8:16   0 931.5G  0 disk
>
> sdc      8:32   0 931.5G  0 disk
>
> sdd      8:48   0 223.6G  0 disk
>
> sde      8:64   0 111.8G  0 disk
>
> sdf      8:80   0 223.6G  0 disk
>
> sdg      8:96   0 223.6G  0 disk
>
> sdh      8:112  0 223.6G  0 disk
>
> sdi      8:128  0 223.6G  0 disk
>
>
>
>
>
> So how to know which one is virto-scsi  controller basically i wanted to
> run fio test  with nvme mapped device
>
>
>
>
>
> On Mon, Oct 2, 2017 at 11:12 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Nitin,
>
>
>
> lspci should show you the virtio-scsi controller PCI device.
>
> lsblk –S should show you the SCSI block devices attached to that
> virtio-scsi controller.
>
>
>
> -Jim
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Nitin Gupta <
> nitin.gupta981(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Monday, October 2, 2017 at 10:38 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> Thanks for your reply and sorry for my late reply ..
>
> could you please  give one example to know how to identify virtio-scsi
> controller in the linux
>
> i mean which directory it will be present or which file system ?
>
>
>
> Regards
>
> Nitin
>
>
>
> On Thu, Sep 28, 2017 at 8:30 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Nitin,
>
>
>
> You should see a virtio-scsi controller in the VM, not an NVMe device.
> This controller should have one LUN attached, which SPDK vhost maps to the
> NVMe device attached to the host.
>
>
>
> -Jim
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Nitin Gupta <
> nitin.gupta981(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Thursday, September 28, 2017 at 4:07 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *[SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi All
>
>
>
> i am new in spdk development and currently doing spdk setup in that  was
> able to setup back-end storage with NVME .After running the VM with
> following command , there is no nvme drive present .
>
>
>
> /usr/local/bin/qemu-system-x86_64 -m 1024 -object
> memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
> -nographic -no-user-config -nodefaults -serial mon:telnet:localhost:7704,server,nowait
> -monitor mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem
> -drive file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device
> ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0
> -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm
>
>
>
>
>
> how to identify which is nvme drive ?
>
> is there any way to  enable nvme from qemu command ?
>
>
>
> PS:  i have already specified the nvme drive in vhost.conf.in
>
>
>
> Regards
>
> Nitin
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
>
>
>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 21813 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [SPDK] nvme drive not showing in vm in spdk
@ 2017-10-04 16:43 Harris, James R
  0 siblings, 0 replies; 23+ messages in thread
From: Harris, James R @ 2017-10-04 16:43 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 5638 bytes --]

Hi Nitin,

Are you running these commands from the host or the VM?  You will only see the virtio-scsi controller in lspci output from the guest VM.

-Jim


From: Nitin Gupta <nitin.gupta981(a)gmail.com>
Date: Tuesday, October 3, 2017 at 12:23 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>, James Harris <james.r.harris(a)intel.com>
Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

One quick update: after running ./scripts/setup.sh for SPDK, the NVMe drive is rebound to the uio_pci_generic driver.
The only difference I found before and after the rebinding is in the output of ls -l /dev/u*.

Can I use /dev/uio0 as the NVMe device?
Regards
Nitin

On Tue, Oct 3, 2017 at 11:30 AM, Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>> wrote:
Hi Jim

It looks like sdf to sdi are the NVMe drives; please correct me if I am wrong.

-bash-4.2# lsblk -S
NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
sda  0:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdb  1:0:0:0    disk ATA      ST31000524NS     SN11 sata
sdc  2:0:0:0    disk ATA      ST31000524NS     SN12 sata
sdd  3:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sde  5:0:0:0    disk ATA      SAMSUNG MZ7WD120 103Q sata
sdf  6:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdg  7:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdh  8:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdi  9:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata

Regards
Nitin

On Tue, Oct 3, 2017 at 11:21 AM, Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>> wrote:
Hi Jim

I am getting the below output from lspci for the NVMe controllers:

d8:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)
d9:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)
da:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)
db:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)

lsblk

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 223.6G  0 disk
├─sda1   8:1    0     6G  0 part [SWAP]
├─sda2   8:2    0   512M  0 part /bootmgr
└─sda3   8:3    0 217.1G  0 part /
sdb      8:16   0 931.5G  0 disk
sdc      8:32   0 931.5G  0 disk
sdd      8:48   0 223.6G  0 disk
sde      8:64   0 111.8G  0 disk
sdf      8:80   0 223.6G  0 disk
sdg      8:96   0 223.6G  0 disk
sdh      8:112  0 223.6G  0 disk
sdi      8:128  0 223.6G  0 disk


So how do I know which one is the virtio-scsi controller? Basically I want to run an fio test against the NVMe-mapped device.


On Mon, Oct 2, 2017 at 11:12 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Nitin,

lspci should show you the virtio-scsi controller PCI device.
lsblk -S should show you the SCSI block devices attached to that virtio-scsi controller.

-Jim


From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Monday, October 2, 2017 at 10:38 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

Thanks for your reply, and sorry for my late response.
Could you please give an example of how to identify the virtio-scsi controller in Linux?
I mean, under which directory or file system will it show up?

Regards
Nitin

On Thu, Sep 28, 2017 at 8:30 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Nitin,

You should see a virtio-scsi controller in the VM, not an NVMe device.  This controller should have one LUN attached, which SPDK vhost maps to the NVMe device attached to the host.

-Jim


From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Thursday, September 28, 2017 at 4:07 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: [SPDK] nvme drive not showing in vm in spdk

Hi All

I am new to SPDK development and am currently doing the SPDK setup; as part of that I was able to set up back-end storage with NVMe. After running the VM with the following command, there is no NVMe drive present.

/usr/local/bin/qemu-system-x86_64 -m 1024 -object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on -nographic -no-user-config -nodefaults -serial mon:telnet:localhost:7704,server,nowait -monitor mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem -drive file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0 -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm


How do I identify which one is the NVMe drive?
Is there any way to enable NVMe from the QEMU command line?

PS: I have already specified the NVMe drive in vhost.conf.in.

Regards
Nitin

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk




[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 23178 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [SPDK] nvme drive not showing in vm in spdk
@ 2017-10-03  7:23 Nitin Gupta
  0 siblings, 0 replies; 23+ messages in thread
From: Nitin Gupta @ 2017-10-03  7:23 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 5462 bytes --]

Hi Jim

One quick update: after running ./scripts/setup.sh for SPDK, the NVMe drive
is rebound to the uio_pci_generic driver.
The only difference I found before and after the rebinding is in the output
of ls -l /dev/u*.

Can I use /dev/uio0 as the NVMe device?
Regards
Nitin

On Tue, Oct 3, 2017 at 11:30 AM, Nitin Gupta <nitin.gupta981(a)gmail.com>
wrote:

> Hi Jim
>
> Looks like sdf to sdi is the nvme , please correct me if i ma wrong
>
> -bash-4.2# lsblk -S
> NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
> sda  0:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
> sdb  1:0:0:0    disk ATA      ST31000524NS     SN11 sata
> sdc  2:0:0:0    disk ATA      ST31000524NS     SN12 sata
> sdd  3:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
> sde  5:0:0:0    disk ATA      SAMSUNG MZ7WD120 103Q sata
> sdf  6:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
> sdg  7:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
> sdh  8:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
> sdi  9:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
>
> Regards
> Nitin
>
> On Tue, Oct 3, 2017 at 11:21 AM, Nitin Gupta <nitin.gupta981(a)gmail.com>
> wrote:
>
>> Hi Jim
>>
>> i am getting below output for lspci  for NVram
>>
>> d8:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53
>> (rev 02)
>> d9:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53
>> (rev 02)
>> da:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53
>> (rev 02)
>> db:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53
>> (rev 02)
>>
>> lsblk
>>
>> NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
>> sda      8:0    0 223.6G  0 disk
>> ├─sda1   8:1    0     6G  0 part [SWAP]
>> ├─sda2   8:2    0   512M  0 part /bootmgr
>> └─sda3   8:3    0 217.1G  0 part /
>> sdb      8:16   0 931.5G  0 disk
>> sdc      8:32   0 931.5G  0 disk
>> sdd      8:48   0 223.6G  0 disk
>> sde      8:64   0 111.8G  0 disk
>> sdf      8:80   0 223.6G  0 disk
>> sdg      8:96   0 223.6G  0 disk
>> sdh      8:112  0 223.6G  0 disk
>> sdi      8:128  0 223.6G  0 disk
>>
>>
>> So how to know which one is virto-scsi  controller basically i wanted to
>> run fio test  with nvme mapped device
>>
>>
>> On Mon, Oct 2, 2017 at 11:12 PM, Harris, James R <
>> james.r.harris(a)intel.com> wrote:
>>
>>> Hi Nitin,
>>>
>>>
>>>
>>> lspci should show you the virtio-scsi controller PCI device.
>>>
>>> lsblk –S should show you the SCSI block devices attached to that
>>> virtio-scsi controller.
>>>
>>>
>>>
>>> -Jim
>>>
>>>
>>>
>>>
>>>
>>> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Nitin Gupta <
>>> nitin.gupta981(a)gmail.com>
>>> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
>>> *Date: *Monday, October 2, 2017 at 10:38 AM
>>> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
>>> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>>>
>>>
>>>
>>> Hi Jim
>>>
>>>
>>>
>>> Thanks for your reply and sorry for my late reply ..
>>>
>>> could you please  give one example to know how to identify virtio-scsi
>>> controller in the linux
>>>
>>> i mean which directory it will be present or which file system ?
>>>
>>>
>>>
>>> Regards
>>>
>>> Nitin
>>>
>>>
>>>
>>> On Thu, Sep 28, 2017 at 8:30 PM, Harris, James R <
>>> james.r.harris(a)intel.com> wrote:
>>>
>>> Hi Nitin,
>>>
>>>
>>>
>>> You should see a virtio-scsi controller in the VM, not an NVMe device.
>>> This controller should have one LUN attached, which SPDK vhost maps to the
>>> NVMe device attached to the host.
>>>
>>>
>>>
>>> -Jim
>>>
>>>
>>>
>>>
>>>
>>> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Nitin Gupta <
>>> nitin.gupta981(a)gmail.com>
>>> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
>>> *Date: *Thursday, September 28, 2017 at 4:07 AM
>>> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
>>> *Subject: *[SPDK] nvme drive not showing in vm in spdk
>>>
>>>
>>>
>>> Hi All
>>>
>>>
>>>
>>> i am new in spdk development and currently doing spdk setup in that  was
>>> able to setup back-end storage with NVME .After running the VM with
>>> following command , there is no nvme drive present .
>>>
>>>
>>>
>>> /usr/local/bin/qemu-system-x86_64 -m 1024 -object
>>> memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
>>> -nographic -no-user-config -nodefaults -serial
>>> mon:telnet:localhost:7704,server,nowait -monitor
>>> mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem -drive
>>> file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device
>>> ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0
>>> -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm
>>>
>>>
>>>
>>>
>>>
>>> how to identify which is nvme drive ?
>>>
>>> is there any way to  enable nvme from qemu command ?
>>>
>>>
>>>
>>> PS:  i have already specified the nvme drive in vhost.conf.in
>>>
>>>
>>>
>>> Regards
>>>
>>> Nitin
>>>
>>>
>>> _______________________________________________
>>> SPDK mailing list
>>> SPDK(a)lists.01.org
>>> https://lists.01.org/mailman/listinfo/spdk
>>>
>>>
>>>
>>> _______________________________________________
>>> SPDK mailing list
>>> SPDK(a)lists.01.org
>>> https://lists.01.org/mailman/listinfo/spdk
>>>
>>>
>>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 12676 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [SPDK] nvme drive not showing in vm in spdk
@ 2017-10-03  6:00 Nitin Gupta
  0 siblings, 0 replies; 23+ messages in thread
From: Nitin Gupta @ 2017-10-03  6:00 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 4899 bytes --]

Hi Jim

It looks like sdf to sdi are the NVMe drives; please correct me if I am wrong.

-bash-4.2# lsblk -S
NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
sda  0:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdb  1:0:0:0    disk ATA      ST31000524NS     SN11 sata
sdc  2:0:0:0    disk ATA      ST31000524NS     SN12 sata
sdd  3:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sde  5:0:0:0    disk ATA      SAMSUNG MZ7WD120 103Q sata
sdf  6:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdg  7:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdh  8:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata
sdi  9:0:0:0    disk ATA      INTEL SSDSC2BB24 0039 sata

Regards
Nitin

On Tue, Oct 3, 2017 at 11:21 AM, Nitin Gupta <nitin.gupta981(a)gmail.com>
wrote:

> Hi Jim
>
> i am getting below output for lspci  for NVram
>
> d8:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
> d9:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
> da:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
> db:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
> 02)
>
> lsblk
>
> NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
> sda      8:0    0 223.6G  0 disk
> ├─sda1   8:1    0     6G  0 part [SWAP]
> ├─sda2   8:2    0   512M  0 part /bootmgr
> └─sda3   8:3    0 217.1G  0 part /
> sdb      8:16   0 931.5G  0 disk
> sdc      8:32   0 931.5G  0 disk
> sdd      8:48   0 223.6G  0 disk
> sde      8:64   0 111.8G  0 disk
> sdf      8:80   0 223.6G  0 disk
> sdg      8:96   0 223.6G  0 disk
> sdh      8:112  0 223.6G  0 disk
> sdi      8:128  0 223.6G  0 disk
>
>
> So how to know which one is virto-scsi  controller basically i wanted to
> run fio test  with nvme mapped device
>
>
> On Mon, Oct 2, 2017 at 11:12 PM, Harris, James R <james.r.harris(a)intel.com
> > wrote:
>
>> Hi Nitin,
>>
>>
>>
>> lspci should show you the virtio-scsi controller PCI device.
>>
>> lsblk –S should show you the SCSI block devices attached to that
>> virtio-scsi controller.
>>
>>
>>
>> -Jim
>>
>>
>>
>>
>>
>> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Nitin Gupta <
>> nitin.gupta981(a)gmail.com>
>> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
>> *Date: *Monday, October 2, 2017 at 10:38 AM
>> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
>> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>>
>>
>>
>> Hi Jim
>>
>>
>>
>> Thanks for your reply and sorry for my late reply ..
>>
>> could you please  give one example to know how to identify virtio-scsi
>> controller in the linux
>>
>> i mean which directory it will be present or which file system ?
>>
>>
>>
>> Regards
>>
>> Nitin
>>
>>
>>
>> On Thu, Sep 28, 2017 at 8:30 PM, Harris, James R <
>> james.r.harris(a)intel.com> wrote:
>>
>> Hi Nitin,
>>
>>
>>
>> You should see a virtio-scsi controller in the VM, not an NVMe device.
>> This controller should have one LUN attached, which SPDK vhost maps to the
>> NVMe device attached to the host.
>>
>>
>>
>> -Jim
>>
>>
>>
>>
>>
>> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Nitin Gupta <
>> nitin.gupta981(a)gmail.com>
>> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
>> *Date: *Thursday, September 28, 2017 at 4:07 AM
>> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
>> *Subject: *[SPDK] nvme drive not showing in vm in spdk
>>
>>
>>
>> Hi All
>>
>>
>>
>> i am new in spdk development and currently doing spdk setup in that  was
>> able to setup back-end storage with NVME .After running the VM with
>> following command , there is no nvme drive present .
>>
>>
>>
>> /usr/local/bin/qemu-system-x86_64 -m 1024 -object
>> memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
>> -nographic -no-user-config -nodefaults -serial
>> mon:telnet:localhost:7704,server,nowait -monitor
>> mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem -drive
>> file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device
>> ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0
>> -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm
>>
>>
>>
>>
>>
>> how to identify which is nvme drive ?
>>
>> is there any way to  enable nvme from qemu command ?
>>
>>
>>
>> PS:  i have already specified the nvme drive in vhost.conf.in
>>
>>
>>
>> Regards
>>
>> Nitin
>>
>>
>> _______________________________________________
>> SPDK mailing list
>> SPDK(a)lists.01.org
>> https://lists.01.org/mailman/listinfo/spdk
>>
>>
>>
>> _______________________________________________
>> SPDK mailing list
>> SPDK(a)lists.01.org
>> https://lists.01.org/mailman/listinfo/spdk
>>
>>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 11762 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [SPDK] nvme drive not showing in vm in spdk
@ 2017-10-03  5:51 Nitin Gupta
  0 siblings, 0 replies; 23+ messages in thread
From: Nitin Gupta @ 2017-10-03  5:51 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 3915 bytes --]

Hi Jim

I am getting the below output from lspci for the NVMe controllers:

d8:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
02)
d9:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
02)
da:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
02)
db:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev
02)

lsblk

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 223.6G  0 disk
├─sda1   8:1    0     6G  0 part [SWAP]
├─sda2   8:2    0   512M  0 part /bootmgr
└─sda3   8:3    0 217.1G  0 part /
sdb      8:16   0 931.5G  0 disk
sdc      8:32   0 931.5G  0 disk
sdd      8:48   0 223.6G  0 disk
sde      8:64   0 111.8G  0 disk
sdf      8:80   0 223.6G  0 disk
sdg      8:96   0 223.6G  0 disk
sdh      8:112  0 223.6G  0 disk
sdi      8:128  0 223.6G  0 disk


So how do I know which one is the virtio-scsi controller? Basically I want to
run an fio test against the NVMe-mapped device.


On Mon, Oct 2, 2017 at 11:12 PM, Harris, James R <james.r.harris(a)intel.com>
wrote:

> Hi Nitin,
>
>
>
> lspci should show you the virtio-scsi controller PCI device.
>
> lsblk –S should show you the SCSI block devices attached to that
> virtio-scsi controller.
>
>
>
> -Jim
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Nitin Gupta <
> nitin.gupta981(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Monday, October 2, 2017 at 10:38 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> Thanks for your reply and sorry for my late reply ..
>
> could you please  give one example to know how to identify virtio-scsi
> controller in the linux
>
> i mean which directory it will be present or which file system ?
>
>
>
> Regards
>
> Nitin
>
>
>
> On Thu, Sep 28, 2017 at 8:30 PM, Harris, James R <james.r.harris(a)intel.com>
> wrote:
>
> Hi Nitin,
>
>
>
> You should see a virtio-scsi controller in the VM, not an NVMe device.
> This controller should have one LUN attached, which SPDK vhost maps to the
> NVMe device attached to the host.
>
>
>
> -Jim
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Nitin Gupta <
> nitin.gupta981(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Thursday, September 28, 2017 at 4:07 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *[SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi All
>
>
>
> i am new in spdk development and currently doing spdk setup in that  was
> able to setup back-end storage with NVME .After running the VM with
> following command , there is no nvme drive present .
>
>
>
> /usr/local/bin/qemu-system-x86_64 -m 1024 -object
> memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
> -nographic -no-user-config -nodefaults -serial mon:telnet:localhost:7704,server,nowait
> -monitor mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem
> -drive file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device
> ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0
> -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm
>
>
>
>
>
> how to identify which is nvme drive ?
>
> is there any way to  enable nvme from qemu command ?
>
>
>
> PS:  i have already specified the nvme drive in vhost.conf.in
>
>
>
> Regards
>
> Nitin
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 10339 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [SPDK] nvme drive not showing in vm in spdk
@ 2017-10-02 17:42 Harris, James R
  0 siblings, 0 replies; 23+ messages in thread
From: Harris, James R @ 2017-10-02 17:42 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 2598 bytes --]

Hi Nitin,

lspci should show you the virtio-scsi controller PCI device.
lsblk -S should show you the SCSI block devices attached to that virtio-scsi controller.

-Jim


From: SPDK <spdk-bounces(a)lists.01.org> on behalf of Nitin Gupta <nitin.gupta981(a)gmail.com>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org>
Date: Monday, October 2, 2017 at 10:38 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] nvme drive not showing in vm in spdk

Hi Jim

Thanks for your reply, and sorry for my late response.
Could you please give an example of how to identify the virtio-scsi controller in Linux?
I mean, under which directory or file system will it show up?

Regards
Nitin

On Thu, Sep 28, 2017 at 8:30 PM, Harris, James R <james.r.harris(a)intel.com<mailto:james.r.harris(a)intel.com>> wrote:
Hi Nitin,

You should see a virtio-scsi controller in the VM, not an NVMe device.  This controller should have one LUN attached, which SPDK vhost maps to the NVMe device attached to the host.

-Jim


From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of Nitin Gupta <nitin.gupta981(a)gmail.com<mailto:nitin.gupta981(a)gmail.com>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Thursday, September 28, 2017 at 4:07 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: [SPDK] nvme drive not showing in vm in spdk

Hi All

I am new to SPDK development and am currently doing the SPDK setup; as part of that I was able to set up back-end storage with NVMe. After running the VM with the following command, there is no NVMe drive present.

/usr/local/bin/qemu-system-x86_64 -m 1024 -object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on -nographic -no-user-config -nodefaults -serial mon:telnet:localhost:7704,server,nowait -monitor mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem -drive file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0 -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm


How do I identify which one is the NVMe drive?
Is there any way to enable NVMe from the QEMU command line?

PS: I have already specified the NVMe drive in vhost.conf.in.

Regards
Nitin

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org<mailto:SPDK(a)lists.01.org>
https://lists.01.org/mailman/listinfo/spdk


[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 10697 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [SPDK] nvme drive not showing in vm in spdk
@ 2017-10-02 17:38 Nitin Gupta
  0 siblings, 0 replies; 23+ messages in thread
From: Nitin Gupta @ 2017-10-02 17:38 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 2045 bytes --]

Hi Jim

Thanks for your reply, and sorry for my late response.
Could you please give an example of how to identify the virtio-scsi
controller in Linux?
I mean, under which directory or file system will it show up?

Regards
Nitin

On Thu, Sep 28, 2017 at 8:30 PM, Harris, James R <james.r.harris(a)intel.com>
wrote:

> Hi Nitin,
>
>
>
> You should see a virtio-scsi controller in the VM, not an NVMe device.
> This controller should have one LUN attached, which SPDK vhost maps to the
> NVMe device attached to the host.
>
>
>
> -Jim
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Nitin Gupta <
> nitin.gupta981(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Thursday, September 28, 2017 at 4:07 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *[SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi All
>
>
>
> i am new in spdk development and currently doing spdk setup in that  was
> able to setup back-end storage with NVME .After running the VM with
> following command , there is no nvme drive present .
>
>
>
> /usr/local/bin/qemu-system-x86_64 -m 1024 -object
> memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
> -nographic -no-user-config -nodefaults -serial mon:telnet:localhost:7704,server,nowait
> -monitor mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem
> -drive file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device
> ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0
> -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm
>
>
>
>
>
> how to identify which is nvme drive ?
>
> is there any way to  enable nvme from qemu command ?
>
>
>
> PS:  i have already specified the nvme drive in vhost.conf.in
>
>
>
> Regards
>
> Nitin
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 5452 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [SPDK] nvme drive not showing in vm in spdk
@ 2017-09-28 15:00 Harris, James R
  0 siblings, 0 replies; 23+ messages in thread
From: Harris, James R @ 2017-09-28 15:00 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 1429 bytes --]

Hi Nitin,

You should see a virtio-scsi controller in the VM, not an NVMe device.  This controller should have one LUN attached, which SPDK vhost maps to the NVMe device attached to the host.

-Jim


From: SPDK <spdk-bounces(a)lists.01.org> on behalf of Nitin Gupta <nitin.gupta981(a)gmail.com>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org>
Date: Thursday, September 28, 2017 at 4:07 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] nvme drive not showing in vm in spdk

Hi All

I am new to SPDK development and am currently doing the SPDK setup; as part of that I was able to set up back-end storage with NVMe. After running the VM with the following command, there is no NVMe drive present.

/usr/local/bin/qemu-system-x86_64 -m 1024 -object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on -nographic -no-user-config -nodefaults -serial mon:telnet:localhost:7704,server,nowait -monitor mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem -drive file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0 -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm


How do I identify which one is the NVMe drive?
Is there any way to enable NVMe from the QEMU command line?

PS: I have already specified the NVMe drive in vhost.conf.in.

Regards
Nitin

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 5254 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [SPDK] nvme drive not showing in vm in spdk
@ 2017-09-28 11:07 Nitin Gupta
  0 siblings, 0 replies; 23+ messages in thread
From: Nitin Gupta @ 2017-09-28 11:07 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 878 bytes --]

Hi All

I am new to SPDK development and am currently doing the SPDK setup; as part
of that I was able to set up back-end storage with NVMe. After running the VM
with the following command, there is no NVMe drive present.

/usr/local/bin/qemu-system-x86_64 -m 1024 -object
memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
-nographic -no-user-config -nodefaults -serial
mon:telnet:localhost:7704,server,nowait -monitor
mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem -drive
file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device
ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0
-device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm


How do I identify which one is the NVMe drive?
Is there any way to enable NVMe from the QEMU command line?

PS: I have already specified the NVMe drive in vhost.conf.in.

Regards
Nitin

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 1080 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2017-10-13  6:02 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-10-05 10:49 [SPDK] nvme drive not showing in vm in spdk Nitin Gupta
  -- strict thread matches above, loose matches on Subject: below --
2017-10-13  6:02 Nitin Gupta
2017-10-12 16:04 Nitin Gupta
2017-10-12 14:24 Wodkowski, PawelX
2017-10-12 13:57 Nitin Gupta
2017-10-11 12:23 Wodkowski, PawelX
2017-10-11 10:49 Nitin Gupta
2017-10-11  9:52 Wodkowski, PawelX
2017-10-11  9:07 Nitin Gupta
2017-10-07  4:40 Nitin Gupta
2017-10-06 18:31 Harris, James R
2017-10-06  8:28 Nitin Gupta
2017-10-06  3:58 Harris, James R
2017-10-04 18:06 Harris, James R
2017-10-04 17:42 Nitin Gupta
2017-10-04 16:43 Harris, James R
2017-10-03  7:23 Nitin Gupta
2017-10-03  6:00 Nitin Gupta
2017-10-03  5:51 Nitin Gupta
2017-10-02 17:42 Harris, James R
2017-10-02 17:38 Nitin Gupta
2017-09-28 15:00 Harris, James R
2017-09-28 11:07 Nitin Gupta
