All of lore.kernel.org
* Re: [SPDK] [vhost bug] Continuous stressful operations cause vhost app to crash or respond timeout
@ 2018-06-21 23:41 Wan, Qun
  0 siblings, 0 replies; 5+ messages in thread
From: Wan, Qun @ 2018-06-21 23:41 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 2073 bytes --]

Hi Bob,

If you post the issue on GitHub, it will be easier to track and resolve. Thank you for reporting and filing the issue.

Best Regards,
Anna

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Harris, James R
Sent: Thursday, June 21, 2018 11:55 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] [vhost bug] Continuous stressful operations cause vhost app to crash or respond timeout

Thanks for the report Bob.

Could you file an issue in GitHub with these attachments?

https://github.com/spdk/spdk/issues

-Jim



From: SPDK <spdk-bounces(a)lists.01.org<mailto:spdk-bounces(a)lists.01.org>> on behalf of Bob Chen <a175818323(a)gmail.com<mailto:a175818323(a)gmail.com>>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Date: Thursday, June 21, 2018 at 2:23 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org<mailto:spdk(a)lists.01.org>>
Subject: [SPDK] [vhost bug] Continuous stressful operations cause vhost app to crash or respond timeout

Hi, guys

I was running some stress tests before trying to deploy SPDK into a production environment. I found that the vhost app would occasionally crash or respond with a connection timeout when stress was high.

My test case is like this:
1. Setup vhost app
2. Create 10 SPDK volumes (construct_lvol_bdev, construct_vhost_scsi_controller, add_vhost_scsi_lun)
3. Create 10 QEMU guests
4. Attach the volumes to guests as non-bootable disks (vhost-user-scsi-pci)
5. Start the guests

Note that for all the guests, the above operations are executed simultaneously.


While the guests were starting, the vhost app would either:
1. Segmentation fault (core dumped)
2. vhost.c: 928:spdk_vhost_event_send: *ERROR*: Timeout waiting for event: start device.
    The process then changes to sleeping state and no longer occupies 100% CPU.

Test environment: SPDK 18.04
The RPC operations log and vhost log are attached to this mail.


- Bob

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 8923 bytes --]

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [SPDK] [vhost bug] Continuous stressful operations cause vhost app to crash or respond timeout
@ 2018-06-22  9:55 Bob Chen
  0 siblings, 0 replies; 5+ messages in thread
From: Bob Chen @ 2018-06-22  9:55 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 2468 bytes --]

https://github.com/spdk/spdk/issues/339

2018-06-22 7:41 GMT+08:00 Wan, Qun <qun.wan(a)intel.com>:

> Hi Bob,
>
> If you post the issue on GitHub, it will be easier to track and resolve. Thank you for reporting and filing the issue.
>
> Best Regards,
>
> Anna
>
> From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Harris, James R
> Sent: Thursday, June 21, 2018 11:55 PM
> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Subject: Re: [SPDK] [vhost bug] Continuous stressful operations cause vhost app to crash or respond timeout
>
> Thanks for the report Bob.
>
> Could you file an issue in GitHub with these attachments?
>
> https://github.com/spdk/spdk/issues
>
> -Jim
>
> From: SPDK <spdk-bounces(a)lists.01.org> on behalf of Bob Chen <a175818323(a)gmail.com>
> Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Date: Thursday, June 21, 2018 at 2:23 AM
> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Subject: [SPDK] [vhost bug] Continuous stressful operations cause vhost app to crash or respond timeout
>
> Hi, guys
>
> I was running some stress tests before trying to deploy SPDK into a production environment. I found that the vhost app would occasionally crash or respond with a connection timeout when stress was high.
>
> My test case is like this:
> 1. Setup vhost app
> 2. Create 10 SPDK volumes (construct_lvol_bdev, construct_vhost_scsi_controller, add_vhost_scsi_lun)
> 3. Create 10 QEMU guests
> 4. Attach the volumes to guests as non-bootable disks (vhost-user-scsi-pci)
> 5. Start the guests
>
> Note that for all the guests, the above operations are executed simultaneously.
>
> While the guests were starting, the vhost app would either:
> 1. Segmentation fault (core dumped)
> 2. vhost.c: 928:spdk_vhost_event_send: *ERROR*: Timeout waiting for event: start device.
>     The process then changes to sleeping state and no longer occupies 100% CPU.
>
> Test environment: SPDK 18.04
> The RPC operations log and vhost log are attached to this mail.
>
> - Bob
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 8127 bytes --]


* Re: [SPDK] [vhost bug] Continuous stressful operations cause vhost app to crash or respond timeout
@ 2018-06-21 15:54 Harris, James R
  0 siblings, 0 replies; 5+ messages in thread
From: Harris, James R @ 2018-06-21 15:54 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 1473 bytes --]

Thanks for the report Bob.

Could you file an issue in GitHub with these attachments?

https://github.com/spdk/spdk/issues

-Jim



From: SPDK <spdk-bounces(a)lists.01.org> on behalf of Bob Chen <a175818323(a)gmail.com>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org>
Date: Thursday, June 21, 2018 at 2:23 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] [vhost bug] Continuous stressful operations cause vhost app to crash or respond timeout

Hi, guys

I was running some stress tests before trying to deploy SPDK into a production environment. I found that the vhost app would occasionally crash or respond with a connection timeout when stress was high.

My test case is like this:
1. Setup vhost app
2. Create 10 SPDK volumes (construct_lvol_bdev, construct_vhost_scsi_controller, add_vhost_scsi_lun)
3. Create 10 QEMU guests
4. Attach the volumes to guests as non-bootable disks (vhost-user-scsi-pci)
5. Start the guests

Note that for all the guests, the above operations are executed simultaneously.


While the guests were starting, the vhost app would either:
1. Segmentation fault (core dumped)
2. vhost.c: 928:spdk_vhost_event_send: *ERROR*: Timeout waiting for event: start device.
    The process then changes to sleeping state and no longer occupies 100% CPU.

Test environment: SPDK 18.04
The RPC operations log and vhost log are attached to this mail.


- Bob

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 6382 bytes --]


* Re: [SPDK] [vhost bug] Continuous stressful operations cause vhost app to crash or respond timeout
@ 2018-06-21 14:40 Michael Haeuptle
  0 siblings, 0 replies; 5+ messages in thread
From: Michael Haeuptle @ 2018-06-21 14:40 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 1351 bytes --]

Bob, not sure I can help you, but can you post the stack trace of the
offending thread from the core dump?
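
If a core file was written, one way to pull that backtrace out of it is gdb in batch mode. This is only a sketch: the binary path below assumes a default SPDK source build (app/vhost/vhost), and the core file location depends on your core_pattern, so both paths are illustrative.

```shell
# Make sure core dumps are enabled before reproducing the crash.
ulimit -c unlimited

# After the segfault, load the core against the vhost binary, print a
# full backtrace plus a backtrace for every thread, then exit.
gdb app/vhost/vhost ./core \
    -batch -ex 'bt full' -ex 'thread apply all bt'
```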

On Thu, Jun 21, 2018 at 3:23 AM, Bob Chen <a175818323(a)gmail.com> wrote:

> Hi, guys
>
> I was running some stress tests before trying to deploy SPDK into a
> production environment. I found that the vhost app would occasionally
> crash or respond with a connection timeout when stress was high.
>
> My test case is like this:
> 1. Setup vhost app
> 2. Create 10 SPDK volumes (construct_lvol_bdev, construct_vhost_scsi_controller,
> add_vhost_scsi_lun)
> 3. Create 10 QEMU guests
> 4. Attach the volumes to guests as non-bootable disks (vhost-user-scsi-pci)
> 5. Start the guests
>
> Note that for all the guests, the above operations are executed simultaneously.
>
>
> While the guests were starting, the vhost app would either:
> 1. Segmentation fault (core dumped)
> 2. vhost.c: 928:spdk_vhost_event_send: *ERROR*: Timeout waiting for event:
> start device.
>     The process then changes to sleeping state and no longer occupies
> 100% CPU.
>
> Test environment: SPDK 18.04
> The RPC operations log and vhost log are attached to this mail.
>
>
> - Bob
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
>

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 2052 bytes --]


* [SPDK] [vhost bug] Continuous stressful operations cause vhost app to crash or respond timeout
@ 2018-06-21  9:23 Bob Chen
  0 siblings, 0 replies; 5+ messages in thread
From: Bob Chen @ 2018-06-21  9:23 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 966 bytes --]

Hi, guys

I was running some stress tests before trying to deploy SPDK into a
production environment. I found that the vhost app would occasionally crash
or respond with a connection timeout when stress was high.

My test case is like this:
1. Setup vhost app
2. Create 10 SPDK volumes (construct_lvol_bdev, construct_vhost_scsi_controller, add_vhost_scsi_lun)
3. Create 10 QEMU guests
4. Attach the volumes to guests as non-bootable disks (vhost-user-scsi-pci)
5. Start the guests

Note that for all the guests, the above operations are executed simultaneously.
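
For reference, the setup steps above can be sketched for a single guest with the SPDK RPC script and a QEMU command line roughly as follows. This is a hypothetical sketch, not the exact commands used: the RPC method names and the lvol/controller naming follow the attached rpc-log.txt, but the rpc.py argument order, the 5120 MiB size, and the QEMU flags are assumptions that may need adjusting for SPDK 18.04 and your QEMU build.

```shell
# Hypothetical per-guest setup (the test runs 10 of these in parallel).
# Assumes the SPDK vhost app is already running and lvs.nvme0n1 exists.
N=0

# Create a 5 GiB lvol bdev and expose it via a vhost-scsi controller
# (method names as they appear in the attached rpc-log.txt).
scripts/rpc.py construct_lvol_bdev -l lvs.nvme0n1 lvol.$N 5120
scripts/rpc.py construct_vhost_scsi_controller vhost.$N
scripts/rpc.py add_vhost_scsi_lun vhost.$N 0 lvs.nvme0n1/lvol.$N

# Attach the volume to a QEMU guest as a non-bootable vhost-user-scsi
# disk. vhost-user needs shared guest memory (memory-backend-file with
# share=on) so the vhost app can map the guest's vrings.
qemu-system-x86_64 \
  -m 2048 \
  -object memory-backend-file,id=mem0,size=2048M,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem0 \
  -chardev socket,id=spdk_vhost0,path=/var/tmp/vhost.$N \
  -device vhost-user-scsi-pci,chardev=spdk_vhost0 \
  -drive file=guest$N.qcow2,if=virtio
```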


While the guests were starting, the vhost app would either:
1. Segmentation fault (core dumped)
2. vhost.c: 928:spdk_vhost_event_send: *ERROR*: Timeout waiting for event:
start device.
    The process then changes to sleeping state and no longer occupies 100% CPU.

Test environment: SPDK 18.04
The RPC operations log and vhost log are attached to this mail.


- Bob

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 1228 bytes --]

[-- Attachment #3: rpc-log.txt --]
[-- Type: text/plain, Size: 7239 bytes --]

[I 180620 17:26:55 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.9015'}
[I 180620 17:26:55 spdk_tools:21] -------- construct_lvol_bdev: {'lvs_name': u'lvs.nvme0n1', 'size': 5368709120, 'lvol_name': u'lvol.9015', 'thin_provision': False}
[I 180620 17:26:55 spdk_tools:21] -------- construct_vhost_scsi_controller: {'ctrlr': 'vhost.9015'}
[I 180620 17:26:55 spdk_tools:21] -------- add_vhost_scsi_lun: {'scsi_target_num': 0, 'bdev_name': u'lvs.nvme0n1/lvol.9015', 'ctrlr': 'vhost.9015'}
[I 180620 17:26:56 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.175'}
[I 180620 17:26:56 spdk_tools:21] -------- construct_lvol_bdev: {'lvs_name': u'lvs.nvme0n1', 'size': 5368709120, 'lvol_name': u'lvol.175', 'thin_provision': False}
[I 180620 17:26:56 spdk_tools:21] -------- construct_vhost_scsi_controller: {'ctrlr': 'vhost.175'}
[I 180620 17:26:56 spdk_tools:21] -------- add_vhost_scsi_lun: {'scsi_target_num': 0, 'bdev_name': u'lvs.nvme0n1/lvol.175', 'ctrlr': 'vhost.175'}
[I 180620 17:26:57 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.4383'}
[I 180620 17:26:57 spdk_tools:21] -------- construct_lvol_bdev: {'lvs_name': u'lvs.nvme0n1', 'size': 5368709120, 'lvol_name': u'lvol.4383', 'thin_provision': False}
[I 180620 17:26:57 spdk_tools:21] -------- construct_vhost_scsi_controller: {'ctrlr': 'vhost.4383'}
[I 180620 17:26:57 spdk_tools:21] -------- add_vhost_scsi_lun: {'scsi_target_num': 0, 'bdev_name': u'lvs.nvme0n1/lvol.4383', 'ctrlr': 'vhost.4383'}
[I 180620 17:26:57 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.2968'}
[I 180620 17:26:57 spdk_tools:21] -------- construct_lvol_bdev: {'lvs_name': u'lvs.nvme0n1', 'size': 5368709120, 'lvol_name': u'lvol.2968', 'thin_provision': False}
[I 180620 17:26:57 spdk_tools:21] -------- construct_vhost_scsi_controller: {'ctrlr': 'vhost.2968'}
[I 180620 17:26:57 spdk_tools:21] -------- add_vhost_scsi_lun: {'scsi_target_num': 0, 'bdev_name': u'lvs.nvme0n1/lvol.2968', 'ctrlr': 'vhost.2968'}
[I 180620 17:26:58 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.1166'}
[I 180620 17:26:58 spdk_tools:21] -------- construct_lvol_bdev: {'lvs_name': u'lvs.nvme0n1', 'size': 5368709120, 'lvol_name': u'lvol.1166', 'thin_provision': False}
[I 180620 17:26:58 spdk_tools:21] -------- construct_vhost_scsi_controller: {'ctrlr': 'vhost.1166'}
[I 180620 17:26:58 spdk_tools:21] -------- add_vhost_scsi_lun: {'scsi_target_num': 0, 'bdev_name': u'lvs.nvme0n1/lvol.1166', 'ctrlr': 'vhost.1166'}
[I 180620 17:26:59 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.3965'}
[I 180620 17:26:59 spdk_tools:21] -------- construct_lvol_bdev: {'lvs_name': u'lvs.nvme0n1', 'size': 5368709120, 'lvol_name': u'lvol.3965', 'thin_provision': False}
[I 180620 17:26:59 spdk_tools:21] -------- construct_vhost_scsi_controller: {'ctrlr': 'vhost.3965'}
[I 180620 17:26:59 spdk_tools:21] -------- add_vhost_scsi_lun: {'scsi_target_num': 0, 'bdev_name': u'lvs.nvme0n1/lvol.3965', 'ctrlr': 'vhost.3965'}
[I 180620 17:27:00 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.4485'}
[I 180620 17:27:00 spdk_tools:21] -------- construct_lvol_bdev: {'lvs_name': u'lvs.nvme0n1', 'size': 5368709120, 'lvol_name': u'lvol.4485', 'thin_provision': False}
[I 180620 17:27:00 spdk_tools:21] -------- construct_vhost_scsi_controller: {'ctrlr': 'vhost.4485'}
[I 180620 17:27:00 spdk_tools:21] -------- add_vhost_scsi_lun: {'scsi_target_num': 0, 'bdev_name': u'lvs.nvme0n1/lvol.4485', 'ctrlr': 'vhost.4485'}
[I 180620 17:27:01 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.9015'}
[I 180620 17:27:02 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.9789'}
[I 180620 17:27:02 spdk_tools:21] -------- construct_lvol_bdev: {'lvs_name': u'lvs.nvme0n1', 'size': 5368709120, 'lvol_name': u'lvol.9789', 'thin_provision': False}
[I 180620 17:27:02 spdk_tools:21] -------- construct_vhost_scsi_controller: {'ctrlr': 'vhost.9789'}
[I 180620 17:27:02 spdk_tools:21] -------- add_vhost_scsi_lun: {'scsi_target_num': 0, 'bdev_name': u'lvs.nvme0n1/lvol.9789', 'ctrlr': 'vhost.9789'}
[I 180620 17:27:03 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.3208'}
[I 180620 17:27:03 spdk_tools:21] -------- construct_lvol_bdev: {'lvs_name': u'lvs.nvme0n1', 'size': 5368709120, 'lvol_name': u'lvol.3208', 'thin_provision': False}
[I 180620 17:27:03 spdk_tools:21] -------- construct_vhost_scsi_controller: {'ctrlr': 'vhost.3208'}
[I 180620 17:27:03 spdk_tools:21] -------- add_vhost_scsi_lun: {'scsi_target_num': 0, 'bdev_name': u'lvs.nvme0n1/lvol.3208', 'ctrlr': 'vhost.3208'}
[I 180620 17:27:03 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.8494'}
[I 180620 17:27:03 spdk_tools:21] -------- construct_lvol_bdev: {'lvs_name': u'lvs.nvme0n1', 'size': 5368709120, 'lvol_name': u'lvol.8494', 'thin_provision': False}
[I 180620 17:27:03 spdk_tools:21] -------- construct_vhost_scsi_controller: {'ctrlr': 'vhost.8494'}
[I 180620 17:27:03 spdk_tools:21] -------- add_vhost_scsi_lun: {'scsi_target_num': 0, 'bdev_name': u'lvs.nvme0n1/lvol.8494', 'ctrlr': 'vhost.8494'}
[I 180620 17:27:05 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.175'}
[I 180620 17:27:08 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.4383'}
[I 180620 17:27:11 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.2968'}
[I 180620 17:27:11 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.2968'}
[I 180620 17:27:13 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.2968'}
[I 180620 17:27:13 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.2968'}
[I 180620 17:27:17 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.2968'}
[I 180620 17:27:17 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.2968'}
[I 180620 17:27:25 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.1166'}
[I 180620 17:27:25 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.1166'}
[I 180620 17:27:27 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.1166'}
[I 180620 17:27:27 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.1166'}
[I 180620 17:27:31 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.1166'}
[I 180620 17:27:31 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.1166'}
[I 180620 17:27:40 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.3965'}
[I 180620 17:27:40 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.3965'}
[I 180620 17:27:42 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.3965'}
[I 180620 17:27:42 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.3965'}
[I 180620 17:27:46 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.3965'}
[I 180620 17:27:46 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.3965'}
[I 180620 17:27:54 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.4485'}
[I 180620 17:27:54 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.4485'}
[I 180620 17:27:56 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.4485'}
[I 180620 17:27:56 spdk_tools:21] -------- get_bdevs: {'name': u'lvs.nvme0n1/lvol.4485'}

[-- Attachment #4: vhost-log.txt --]
[-- Type: text/plain, Size: 11512 bytes --]

Starting SPDK v18.04 / DPDK 17.11.1 initialization...
[ DPDK EAL parameters: vhost -c 0x1 -m 2048 --file-prefix=spdk_pid27570 ]
EAL: Detected 32 lcore(s)
EAL: Probing VFIO support...
EAL: VFIO support initialized
app.c: 443:spdk_app_start: *NOTICE*: Total cores available: 1
reactor.c: 650:spdk_reactors_init: *NOTICE*: Occupied cpu socket mask is 0x1
reactor.c: 434:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on socket 0
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:a54 spdk_nvme
EAL:   using IOMMU type 1 (Type 1)
bdev_rpc.c: 147:spdk_rpc_get_bdevs: *ERROR*: bdev 'lvs.nvme0n1/lvol.129' does not exist
VHOST_CONFIG: vhost-user server: socket created, fd: 17
VHOST_CONFIG: bind to /var/tmp/vhost.129
bdev_rpc.c: 147:spdk_rpc_get_bdevs: *ERROR*: bdev 'lvs.nvme0n1/lvol.935' does not exist
bdev_rpc.c: 147:spdk_rpc_get_bdevs: *ERROR*: bdev 'lvs.nvme0n1/lvol.546' does not exist
bdev_rpc.c: 147:spdk_rpc_get_bdevs: *ERROR*: bdev 'lvs.nvme0n1/lvol.206' does not exist
VHOST_CONFIG: vhost-user server: socket created, fd: 23
VHOST_CONFIG: bind to /var/tmp/vhost.206
bdev_rpc.c: 147:spdk_rpc_get_bdevs: *ERROR*: bdev 'lvs.nvme0n1/lvol.14' does not exist
VHOST_CONFIG: vhost-user server: socket created, fd: 26
VHOST_CONFIG: bind to /var/tmp/vhost.14
VHOST_CONFIG: new vhost user connection is 27
VHOST_CONFIG: new device, handle is 0
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_GET_QUEUE_NUM
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:28
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:29
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:30
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:3 file:31
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:4 file:32
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:5 file:33
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: guest memory region 0, size: 0x80000000
         guest physical addr: 0x0
         guest virtual  addr: 0x7fda00000000
         host  virtual  addr: 0x2aaac0000000
         mmap addr : 0x2aaac0000000
         mmap size : 0x80000000
         mmap align: 0x40000000
         mmap off  : 0x0
VHOST_CONFIG: reallocate vq from 0 to 1 node
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:2 file:35
VHOST_CONFIG: virtio is now ready for processing.
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:36
VHOST_CONFIG: virtio is now ready for processing.
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:28
VHOST_CONFIG: virtio is now ready for processing.
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:29
VHOST_CONFIG: virtio is now ready for processing.
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:3 file:30
VHOST_CONFIG: virtio is now ready for processing.
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:4 file:31
VHOST_CONFIG: virtio is now ready for processing.
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:5 file:32
VHOST_CONFIG: virtio is now ready for processing.
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_GET_VRING_BASE
VHOST_CONFIG: vring base idx:2 file:259
VHOST_CONFIG: new vhost user connection is 29
VHOST_CONFIG: new device, handle is 1
VHOST_CONFIG: /var/tmp/vhost.206: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: /var/tmp/vhost.206: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: /var/tmp/vhost.206: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: /var/tmp/vhost.206: read message VHOST_USER_GET_QUEUE_NUM
VHOST_CONFIG: /var/tmp/vhost.206: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: /var/tmp/vhost.206: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: /var/tmp/vhost.206: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:33
VHOST_CONFIG: /var/tmp/vhost.206: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:35
VHOST_CONFIG: /var/tmp/vhost.206: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:37
VHOST_CONFIG: /var/tmp/vhost.206: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:3 file:38
VHOST_CONFIG: /var/tmp/vhost.206: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:4 file:39
VHOST_CONFIG: /var/tmp/vhost.206: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:5 file:40
VHOST_CONFIG: /var/tmp/vhost.206: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: /var/tmp/vhost.206: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: /var/tmp/vhost.206: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: /var/tmp/vhost.206: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: /var/tmp/vhost.206: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: guest memory region 0, size: 0x80000000
         guest physical addr: 0x0
         guest virtual  addr: 0x7fb9c0000000
         host  virtual  addr: 0x2aab40000000
         mmap addr : 0x2aab40000000
         mmap size : 0x80000000
         mmap align: 0x40000000
         mmap off  : 0x0
VHOST_CONFIG: reallocate vq from 0 to 1 node
VHOST_CONFIG: /var/tmp/vhost.206: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:2 file:42
VHOST_CONFIG: virtio is now ready for processing.
VHOST_CONFIG: /var/tmp/vhost.206: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:43
VHOST_CONFIG: virtio is now ready for processing.
VHOST_CONFIG: /var/tmp/vhost.206: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:33
VHOST_CONFIG: virtio is now ready for processing.
VHOST_CONFIG: /var/tmp/vhost.206: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:35
VHOST_CONFIG: virtio is now ready for processing.
VHOST_CONFIG: /var/tmp/vhost.206: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:3 file:37
VHOST_CONFIG: virtio is now ready for processing.
VHOST_CONFIG: /var/tmp/vhost.206: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:4 file:38
VHOST_CONFIG: virtio is now ready for processing.
VHOST_CONFIG: /var/tmp/vhost.206: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:5 file:39
VHOST_CONFIG: virtio is now ready for processing.
VHOST_CONFIG: /var/tmp/vhost.206: read message VHOST_USER_GET_VRING_BASE
VHOST_CONFIG: vring base idx:2 file:259
VHOST_CONFIG: new vhost user connection is 35
VHOST_CONFIG: new device, handle is 2
VHOST_CONFIG: /var/tmp/vhost.14: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: /var/tmp/vhost.14: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: /var/tmp/vhost.14: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: /var/tmp/vhost.14: read message VHOST_USER_GET_QUEUE_NUM
VHOST_CONFIG: /var/tmp/vhost.14: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: /var/tmp/vhost.14: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: /var/tmp/vhost.14: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:40
VHOST_CONFIG: /var/tmp/vhost.14: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:42
VHOST_CONFIG: /var/tmp/vhost.14: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:44
VHOST_CONFIG: /var/tmp/vhost.14: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:3 file:45
VHOST_CONFIG: /var/tmp/vhost.14: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:4 file:46
VHOST_CONFIG: /var/tmp/vhost.14: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:5 file:47
VHOST_CONFIG: /var/tmp/vhost.14: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: /var/tmp/vhost.14: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: /var/tmp/vhost.14: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: /var/tmp/vhost.14: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: /var/tmp/vhost.14: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: guest memory region 0, size: 0x80000000
         guest physical addr: 0x0
         guest virtual  addr: 0x7f62c0000000
         host  virtual  addr: 0x2aabc0000000
         mmap addr : 0x2aabc0000000
         mmap size : 0x80000000
         mmap align: 0x40000000
         mmap off  : 0x0
VHOST_CONFIG: reallocate vq from 0 to 1 node
VHOST_CONFIG: /var/tmp/vhost.14: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:2 file:49
VHOST_CONFIG: virtio is now ready for processing.
VHOST_CONFIG: /var/tmp/vhost.14: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:50
VHOST_CONFIG: virtio is now ready for processing.
VHOST_CONFIG: /var/tmp/vhost.14: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:40
VHOST_CONFIG: virtio is now ready for processing.
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: /var/tmp/vhost.14: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:42
VHOST_CONFIG: virtio is now ready for processing.
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: /var/tmp/vhost.14: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:3 file:51
VHOST_CONFIG: virtio is now ready for processing.
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: /var/tmp/vhost.14: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:4 file:45
VHOST_CONFIG: virtio is now ready for processing.
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: /var/tmp/vhost.14: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:5 file:46
VHOST_CONFIG: virtio is now ready for processing.
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: guest memory region 0, size: 0x80000000
         guest physical addr: 0x0
         guest virtual  addr: 0x7fda00000000
         host  virtual  addr: 0x2aac40000000
         mmap addr : 0x2aac40000000
         mmap size : 0x80000000
         mmap align: 0x40000000
         mmap off  : 0x0
VHOST_CONFIG: reallocate vq from 0 to 1 node
VHOST_CONFIG: /var/tmp/vhost.129: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:0 file:34
VHOST_CONFIG: virtio is now ready for processing.
Segmentation fault (core dumped)


end of thread, other threads:[~2018-06-22  9:55 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-06-21 23:41 [SPDK] [vhost bug] Continuous stressful operations cause vhost app to crash or respond timeout Wan, Qun
  -- strict thread matches above, loose matches on Subject: below --
2018-06-22  9:55 Bob Chen
2018-06-21 15:54 Harris, James R
2018-06-21 14:40 Michael Haeuptle
2018-06-21  9:23 Bob Chen
