From: Stefan Hajnoczi <stefanha@redhat.com>
To: Mike Christie <michael.christie@oracle.com>
Cc: target-devel@vger.kernel.org, linux-scsi@vger.kernel.org,
	pbonzini@redhat.com, jasowang@redhat.com, mst@redhat.com,
	sgarzare@redhat.com, virtualization@lists.linux-foundation.org
Subject: Re: vhost: multiple worker support
Date: Thu, 3 Jun 2021 11:13:57 +0100	[thread overview]
Message-ID: <YLirZVVeL95jm2/O@stefanha-x1.localdomain> (raw)
In-Reply-To: <20210525180600.6349-1-michael.christie@oracle.com>

On Tue, May 25, 2021 at 01:05:51PM -0500, Mike Christie wrote:
> Results:
> --------
> When running with the null_blk driver and vhost-scsi I can get 1.2
> million IOPS by just running a simple
> 
> fio --filename=/dev/sda --direct=1 --rw=randrw --bs=4k --ioengine=libaio
> --iodepth=128  --numjobs=8 --time_based --group_reporting --name=iops
> --runtime=60 --eta-newline=1
> 
> The VM has 8 vCPUs and sda has 8 virtqueues, and we can do a total of
> 1024 cmds per device. To get 1.2 million IOPS I did have to tune: I
> ran the virsh emulatorpin command so the vhost threads were running
> on different CPUs than the VM. If the vhost threads share CPUs with
> the VM then I get around 800K.
> 
> For a more realistic device that is also a CPU hog, like iscsi, I can
> still get 1 million IOPS using 1 dm-multipath device over 8 iscsi
> paths (natively it gets 1.1 million IOPS).
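
For reference, the emulatorpin tuning mentioned above is done with
virsh. A hypothetical invocation (the domain name "vm1" and the CPU
ranges are assumptions, not taken from the report above): pin the 8
vCPUs to host CPUs 0-7 and move the emulator/vhost threads to host
CPUs 8-15 so they do not compete with the vCPUs:

  # Illustrative only -- domain name and CPU ranges are assumptions.
  # Pin vCPU i to host CPU i:
  for i in $(seq 0 7); do
      virsh vcpupin vm1 $i $i
  done
  # Pin the emulator threads to a disjoint CPU set; the vhost worker
  # threads typically follow this setting because they inherit the
  # owner's (emulator) cpuset cgroup:
  virsh emulatorpin vm1 8-15 --live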

There is no comparison against a baseline, but I guess it would be the
same 8 vCPU guest with single-queue vhost-scsi?

> 
> Results/TODO Note:
> 
> - I ported the vdpa sim code to support multiple workers, and as-is
> it made perf much worse. If I increase vdpa_sim_blk's num queues to
> 4-8 I get 700K IOPS with the fio command above. However, with the
> multiple worker support it drops to 400K. The problem is the vdpa_sim
> lock and the iommu_lock. If I hack around them (e.g. comment out the
> locks and ignore the data corruption or crashes) then I can get
> around 1.2M - 1.6M IOPS with 8 queues and the fio command above.
> 
> So these patches could help other drivers, but it will just take
> more work to remove those types of locks. I was hoping the 2 items
> could be done independently since it helps vhost-scsi immediately.

Cool, thank you for taking a look. That's useful info for Stefano.
vDPA and vhost can be handled independently, though in the long term
hopefully they can share code.
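
To illustrate the contention pattern described above, here is a
minimal kernel-style sketch (hypothetical names; this is not the
actual vdpa_sim code): with a single device-wide lock, adding more
workers only adds contention on that lock, whereas per-virtqueue locks
let workers on different queues run in parallel.

  /*
   * Illustrative sketch, not vdpa_sim source. Locks are assumed to
   * have been set up with spin_lock_init() at device init time.
   */
  #include <linux/spinlock.h>

  #define NUM_VQS 8

  struct sim_vq {
          spinlock_t lock;        /* per-virtqueue lock */
          /* ... ring state ... */
  };

  struct sim_dev {
          spinlock_t lock;        /* device-wide lock */
          struct sim_vq vqs[NUM_VQS];
  };

  /* Multiple workers, but all serialized on one lock: adding workers
   * just adds contention, matching the 700K -> 400K drop above. */
  static void worker_global_lock(struct sim_dev *dev, int qid)
  {
          spin_lock(&dev->lock);
          /* process requests for dev->vqs[qid] */
          spin_unlock(&dev->lock);
  }

  /* One lock per vq: workers on different vqs proceed in parallel.
   * State shared across vqs (e.g. what iommu_lock protects) still
   * needs finer-grained protection, such as RCU or a rwlock, before
   * this scales -- the remaining work mentioned above. */
  static void worker_per_vq_lock(struct sim_dev *dev, int qid)
  {
          spin_lock(&dev->vqs[qid].lock);
          /* process requests for dev->vqs[qid] */
          spin_unlock(&dev->vqs[qid].lock);
  }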

Thread overview: 32 messages

2021-05-25 18:05 vhost: multiple worker support Mike Christie
2021-05-25 18:05 ` [PATCH 1/9] vhost: move worker thread fields to new struct Mike Christie
2021-06-03 10:16   ` Stefan Hajnoczi
2021-05-25 18:05 ` [PATCH 2/9] vhost: move vhost worker creation to kick setup Mike Christie
2021-06-03 10:28   ` Stefan Hajnoczi
2021-05-25 18:05 ` [PATCH 3/9] vhost: modify internal functions to take a vhost_worker Mike Christie
2021-06-03 10:45   ` Stefan Hajnoczi
2021-05-25 18:05 ` [PATCH 4/9] vhost: allow vhost_polls to use different vhost_workers Mike Christie
2021-06-03 13:51   ` Stefan Hajnoczi
2021-05-25 18:05 ` [PATCH 5/9] vhost-scsi: flush IO vqs then send TMF rsp Mike Christie
2021-06-03 13:54   ` Stefan Hajnoczi
2021-05-25 18:05 ` [PATCH 6/9] vhost-scsi: make SCSI cmd completion per vq Mike Christie
2021-06-03 13:57   ` Stefan Hajnoczi
2021-05-25 18:05 ` [PATCH 7/9] vhost: allow userspace to create workers Mike Christie
2021-06-03 14:30   ` Stefan Hajnoczi
2021-06-05 23:53     ` michael.christie
2021-06-07 15:19       ` Stefan Hajnoczi
2021-06-09 21:03         ` Mike Christie
2021-06-10  8:06           ` Stefan Hajnoczi
2021-06-18  2:49             ` Mike Christie
2021-06-21 13:41               ` Stefan Hajnoczi
2021-05-25 18:05 ` [PATCH 8/9] vhost: add vhost_dev pointer to vhost_work Mike Christie
2021-06-03 14:31   ` Stefan Hajnoczi
2021-05-25 18:06 ` [PATCH 9/9] vhost: support sharing workers across devs Mike Christie
2021-06-03 14:32   ` Stefan Hajnoczi
2021-06-07  2:18     ` Jason Wang
2021-06-03 10:13 ` Stefan Hajnoczi [this message]
2021-06-03 18:45   ` Mike Christie
2021-06-03 14:37 ` Stefan Hajnoczi
2021-06-03 22:16   ` Mike Christie
2021-06-05 22:40     ` michael.christie
2021-06-07 15:23       ` Stefan Hajnoczi
