target-devel.vger.kernel.org archive mirror
From: Jason Wang <jasowang@redhat.com>
To: "Michael S. Tsirkin" <mst@redhat.com>,
	Mike Christie <michael.christie@oracle.com>
Cc: martin.petersen@oracle.com, linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org, pbonzini@redhat.com,
	stefanha@redhat.com, virtualization@lists.linux-foundation.org
Subject: Re: [PATCH 07/17] vhost scsi: support delayed IO vq creation
Date: Mon, 02 Nov 2020 06:36:18 +0000	[thread overview]
Message-ID: <688e46b6-77f1-1629-0c93-e3a27d582da5@redhat.com> (raw)
In-Reply-To: <20201030044402-mutt-send-email-mst@kernel.org>


On 2020/10/30 4:47 PM, Michael S. Tsirkin wrote:
> On Tue, Oct 27, 2020 at 12:47:34AM -0500, Mike Christie wrote:
>> On 10/25/20 10:51 PM, Jason Wang wrote:
>>> On 2020/10/22 8:34 AM, Mike Christie wrote:
>>>> Each vhost-scsi device will need an evt and ctl queue, but the number
>>>> of IO queues depends on whatever the user has configured in userspace.
>>>> This patch has vhost-scsi create the evt, ctl and one IO vq at device
>>>> open time. We then create the other IO vqs when userspace starts to
>>>> set them up. We still waste some mem on the vq and scsi vq structs,
>>>> but we don't waste mem on iovec related arrays and for later patches
>>>> we know which queues are used by the dev->nvqs value.
>>>>
>>>> Signed-off-by: Mike Christie <michael.christie@oracle.com>
>>>> ---
>>>>    drivers/vhost/scsi.c | 19 +++++++++++++++----
>>>>    1 file changed, 15 insertions(+), 4 deletions(-)
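
A minimal sketch of the open()-time behaviour the changelog describes might
look like the following (vhost_scsi_init_vq() is a hypothetical helper used
only for illustration, not the patch's actual code):

        static int vhost_scsi_open(struct inode *inode, struct file *f)
        {
                struct vhost_scsi *vs;
                int i, ret = 0;

                vs = kvzalloc(sizeof(*vs), GFP_KERNEL);
                if (!vs)
                        return -ENOMEM;

                /* ctl (vq 0), evt (vq 1) and the first IO vq (vq 2) are
                 * always created; the remaining IO vqs are only created
                 * later, when userspace starts configuring them. */
                for (i = 0; i <= 2 && !ret; i++)
                        ret = vhost_scsi_init_vq(vs, i);
                if (ret) {
                        kvfree(vs);
                        return ret;
                }

                f->private_data = vs;
                return 0;
        }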
>>>
>>> I'm not familiar with SCSI, but I wonder if it could behave like vhost-net.
>>>
>>> E.g. userspace should know the number of virtqueues, so it can just open
>>> and close multiple vhost-scsi file descriptors.
>>>
>> One hiccup I'm hitting is that we might end up creating about 3x more vqs
>> than we need. The problem is that for SCSI each vhost device has:
>>
>> vq=0: special control vq
>> vq=1: event vq
>> vq=2 and above: SCSI CMD/IO vqs. We want to create N of these.
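
(For reference, those fixed slots roughly correspond to the constants in
drivers/vhost/scsi.c; this is a sketch, not a verbatim copy of the source:)

        enum {
                VHOST_SCSI_VQ_CTL = 0,  /* special control vq */
                VHOST_SCSI_VQ_EVT = 1,  /* event vq */
                VHOST_SCSI_VQ_IO  = 2,  /* first of the N SCSI CMD/IO vqs */
        };
        #define VHOST_SCSI_MAX_VQ 128   /* vqs allocated per device today */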
>>
>> Today we do:
>>
>> Userspace does open(/dev/vhost-scsi)
>>          vhost_dev_init(create 128 vqs; then later we set up and use N of
>> them);
>>
>> Qemu does ioctl(VHOST_SET_OWNER)
>>          vhost_dev_set_owner()
>>
>> For N vqs userspace does:
>>          // virtqueue setup related ioctls
>>
>> Qemu does ioctl(VHOST_SCSI_SET_ENDPOINT)
>>          - match LIO/target port to vhost_dev
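
In rough userspace terms (with a hypothetical vhost_vring_setup() standing in
for the VHOST_SET_VRING_* ioctls), today's flow is something like:

        struct vhost_scsi_target target = { 0 };       /* LIO port wwpn/tpgt */
        int i, fd = open("/dev/vhost-scsi", O_RDWR);

        ioctl(fd, VHOST_SET_OWNER);
        for (i = 0; i < 2 + num_io_queues; i++)        /* ctl + evt + N IO vqs */
                vhost_vring_setup(fd, i);               /* hypothetical helper */
        ioctl(fd, VHOST_SCSI_SET_ENDPOINT, &target);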
>>
>>
>> So we could change that to:
>>
>> For N IO vqs userspace does
>>          open(/dev/vhost-scsi)
>>                  vhost_dev_init(create IO, evt, and ctl);
>>
>> for N IO vqs Qemu does:
>>          ioctl(VHOST_SET_OWNER)
>>                  vhost_dev_set_owner()
>>
>> for N IO vqs Qemu does:
>>          // virtqueue setup related ioctls
>>
>> for N IO vqs Qemu does:
>>          ioctl(VHOST_SCSI_SET_ENDPOINT)
>>                  - match LIO/target port to vhost_dev and assemble the
>> multiple vhost_dev device.
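
Seen from userspace, the proposed flow would then look roughly like this
(same hypothetical vhost_vring_setup() helper as above):

        struct vhost_scsi_target target = { 0 };       /* LIO port wwpn/tpgt */
        int i, fds[num_io_queues];

        for (i = 0; i < num_io_queues; i++) {
                fds[i] = open("/dev/vhost-scsi", O_RDWR);
                ioctl(fds[i], VHOST_SET_OWNER);
                vhost_vring_setup(fds[i], 0);           /* VHOST_SET_VRING_* */
                ioctl(fds[i], VHOST_SCSI_SET_ENDPOINT, &target);
        }
        /* The kernel would assemble these N vhost_devs into one vhost-scsi
         * device at SET_ENDPOINT time. */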
>>
>> The problem is that we have to set up some of the evt/ctl-specific parts at
>> open() time, when vhost_dev_init does vhost_poll_init for example.
>>
>> - At open time, we don't know if this vhost_dev is going to be part of a
>> multiple vhost_dev device or a single one, so we need to create at least 3
>> of them.
>> - If it is a multiple-dev device, we don't know if it's the first device
>> being created for the device or the N'th, so we don't know if the dev's vqs
>> will be used for IO or ctls/evts, so we have to create all 3.
>>
>> When we get the first VHOST_SCSI_SET_ENDPOINT call for a new style multiple
>> vhost_dev device, we can use that dev's evt/ctl vqs for event/control
>> requests. When we get the other VHOST_SCSI_SET_ENDPOINT calls for the
>> multiple vhost_dev device, then those devs' evt/ctl vqs will be ignored and
>> we will only use their IO vqs. So we end up with a lot of extra vqs.
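
A rough sketch of that SET_ENDPOINT-time decision (first_dev_for_target() and
vhost_scsi_use_vq() are hypothetical helpers, not existing code):

        static void vhost_scsi_pick_vqs(struct vhost_scsi *vs)
        {
                if (first_dev_for_target(vs)) {
                        /* The first dev's ctl/evt vqs serve the whole
                         * multi-dev device. */
                        vhost_scsi_use_vq(vs, VHOST_SCSI_VQ_CTL);
                        vhost_scsi_use_vq(vs, VHOST_SCSI_VQ_EVT);
                }
                /* Every dev contributes its IO vq; the later devs' ctl/evt
                 * vqs stay allocated but unused -- the extra vqs above. */
                vhost_scsi_use_vq(vs, VHOST_SCSI_VQ_IO);
        }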
> The issue Jason's hinting at is how can admins control the amount
> of resources a given qemu instance can consume?
> After all vhost vqs all live in host kernel memory ...
> Limiting # of open fds would be one way to do that ...
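
(With one fd per IO vq, the ordinary fd limit would be enough to bound the vq
count; a purely illustrative example:)

        #include <sys/resource.h>

        /* Cap the fds (and hence vhost vqs) a given qemu process can open. */
        struct rlimit rl = { .rlim_cur = 64, .rlim_max = 64 };
        setrlimit(RLIMIT_NOFILE, &rl);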
>
> The need to share event/control vqs between devices is a problem though,
> and sending lots of ioctls on things like reset is also not that elegant.
> Jason, did you have a good solution in mind?


Nope, I'm not familiar with SCSI, so I don't even know whether sharing the
evt/ctl vq is possible. Considering VHOST_SCSI_MAX_VQ is already 128 per
device, Mike's proposal seems better.

Thanks


>
>> One other question/issue I have is that qemu can open the /dev/vhost-scsi
>> device itself, or it can let tools like libvirtd open the device and pass in the
>> fd to use. For the latter case, would we continue to have those tools pass
>> in the leading fd, then have qemu do the other num_queues - 1
>> open(/dev/vhost-scsi) calls? Or do these apps that pass in the fd need to
>> know about all of the fds for some management reason?
> They know about all the fds, for resource control and privilege
> separation reasons.
>


Thread overview: 43+ messages
2020-10-22  0:34 [PATCH 00/17 V3] vhost: fix scsi cmd handling and cgroup support Mike Christie
2020-10-22  0:34 ` [PATCH 01/17] vhost scsi: add lun parser helper Mike Christie
2020-10-26  3:33   ` Jason Wang
2020-10-22  0:34 ` [PATCH 02/17] vhost: remove work arg from vhost_work_flush Mike Christie
2020-10-22  0:51   ` Chaitanya Kulkarni
2020-10-22  0:34 ` [PATCH 03/17] vhost net: use goto error handling in open Mike Christie
2020-10-22  0:45   ` Chaitanya Kulkarni
2020-10-26  3:34   ` Jason Wang
2020-10-22  0:34 ` [PATCH 04/17] vhost: prep vhost_dev_init users to handle failures Mike Christie
2020-10-22  5:22   ` kernel test robot
2020-10-23 16:15   ` Mike Christie
2020-11-02  5:57   ` Jason Wang
2020-11-03 10:04   ` Dan Carpenter
2020-10-22  0:34 ` [PATCH 05/17] vhost: move vq iovec allocation to dev init time Mike Christie
2020-10-22  0:34 ` [PATCH 06/17] vhost: support delayed vq creation Mike Christie
2020-10-22  0:34 ` [PATCH 07/17] vhost scsi: support delayed IO " Mike Christie
2020-10-26  3:51   ` Jason Wang
2020-10-27  5:47     ` Mike Christie
2020-10-28  1:55       ` Jason Wang
2020-10-30  8:47       ` Michael S. Tsirkin
2020-10-30 16:30         ` Mike Christie
2020-10-30 17:26           ` Mike Christie
2020-11-01 22:06         ` Mike Christie
2020-11-02  6:36         ` Jason Wang [this message]
2020-11-02  6:49           ` Jason Wang
2020-11-02 16:19             ` Mike Christie
2020-10-22  0:34 ` [PATCH 08/17] vhost scsi: alloc cmds per vq instead of session Mike Christie
2020-10-22  0:34 ` [PATCH 09/17] vhost scsi: fix cmd completion race Mike Christie
2020-10-27 13:07   ` Maurizio Lombardi
2020-10-30  8:51   ` Michael S. Tsirkin
2020-10-30 16:04     ` Paolo Bonzini
2020-10-22  0:34 ` [PATCH 10/17] vhost scsi: Add support for LUN resets Mike Christie
2020-10-22  0:34 ` [PATCH 11/17] vhost scsi: remove extra flushes Mike Christie
2020-10-22  0:34 ` [PATCH 12/17] vhost poll: fix coding style Mike Christie
2020-10-22  0:39   ` Chaitanya Kulkarni
2020-10-22  0:34 ` [PATCH 13/17] vhost: support multiple worker threads Mike Christie
2020-10-22  0:35 ` [PATCH 14/17] vhost: poll support support multiple workers Mike Christie
2020-10-22  0:35 ` [PATCH 15/17] host: support delayed vq creation Mike Christie
2020-10-22  0:50   ` Mike Christie
2020-10-22  0:35 ` [PATCH 16/17] vhost scsi: multiple worker support Mike Christie
2020-10-22  0:35 ` [PATCH 17/17] vhost scsi: drop submission workqueue Mike Christie
2020-10-29 21:47 ` [PATCH 00/17 V3] vhost: fix scsi cmd handling and cgroup support Michael S. Tsirkin
2020-10-29 22:19   ` Mike Christie
