From: Mike Christie <michael.christie@oracle.com>
To: Jason Wang <jasowang@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>
Cc: fam@euphon.net, linux-scsi@vger.kernel.org, mst@redhat.com,
	qemu-devel@nongnu.org, virtualization@lists.linux-foundation.org,
	target-devel@vger.kernel.org, pbonzini@redhat.com
Subject: Re: [PATCH 00/10] vhost/qemu: thread per IO SCSI vq
Date: Thu, 19 Nov 2020 09:49:28 -0600
Message-ID: <0ba1bd55-6772-2d75-4b63-72445830a446@oracle.com>
In-Reply-To: <e91e9eee-7ff4-3ef6-955a-706276065d9b@redhat.com>

On 11/18/20 10:35 PM, Jason Wang wrote:
>> it's just extra code. This patch:
>> https://www.spinics.net/lists/linux-scsi/msg150151.html
>> would work without the ENABLE ioctl, I mean.
> 
> 
> That seems to pre-allocate all workers. If we don't care about the
> resource consumption (127 workers), it could be fine.

It only creates what the user requested via num_queues.

That patch will:
1. For the default case of num_queues=1, use the single worker created
by the SET_OWNER ioctl.
2. For num_queues > 1, create one worker thread per IO virtqueue (rough
sketch below).
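
A minimal sketch of that logic (names are approximated for
illustration, not verbatim from the posted patches):

static int vhost_scsi_setup_workers(struct vhost_scsi *vs, int num_queues)
{
	int i, ret;

	/* Default case: reuse the single worker that the
	 * VHOST_SET_OWNER ioctl already created.
	 */
	if (num_queues <= 1)
		return 0;

	/* num_queues > 1: one worker thread per IO virtqueue. */
	for (i = 0; i < num_queues; i++) {
		/* hypothetical helper standing in for the patch code */
		ret = vhost_vq_create_worker(&vs->vqs[i].vq);
		if (ret)
			return ret;
	}
	return 0;
}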


> 
> 
>>
>>
>> And if you guys want to do the completely new interface, then none of 
>> this matters I guess :)
>>
>> For disable see below.
>>
>>>
>>>
>>>>
>>>> My issue/concern is that in general these calls seem useful, but we
>>>> don't really need them for SCSI because vhost-scsi is already stuck
>>>> creating vqs the way it does due to existing users. If we do the
>>>> ifcvf_vdpa_set_vq_ready type of design where we just set some bit,
>>>> then the new ioctl does not give us a lot. It's just an extra check
>>>> and extra code.
>>>>
>>>> And for the mlx5_vdpa_set_vq_ready type of design, it doesn't seem
>>>> likely that admins will often want to remove vqs from a running
>>>> device.
>>>
>>>
>>> In this case, qemu may just disable the queues of vhost-scsi via 
>>> SET_VRING_ENABLE and then we can free resources?
>>
>>
>> Some SCSI background in case it doesn't work like net:
>> -------
>> When the user sets up mq for vhost-scsi/virtio-scsi, for max perf with
>> no concern about memory use, they would normally set num_queues based
>> on the number of vCPUs and MSI-X vectors. I think the default in qemu
>> now is to try to detect that value.
>>
>> When the virtio_scsi driver is loaded into the guest kernel, it takes 
>> the num_queues value and tells the scsi/block mq layer to create 
>> num_queues multiqueue hw queues.
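
(In code terms that is roughly the following in the guest driver,
simplified from virtscsi_probe() -- not the exact upstream lines:

	/* cap the device's num_queues at the CPU count */
	num_queues = min_t(unsigned int, nr_cpu_ids, num_queues);
	...
	/* one block-layer hw queue per request virtqueue */
	shost->nr_hw_queues = num_queues;

so the guest ends up with a 1:1 virtqueue to hw queue mapping.)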
> 
> 
> If I read the code correctly, for a modern device, the guest will set
> queue_enable for the queues it wants to use. So in this ideal case,
> qemu can forward that to VRING_ENABLE and reset VRING_ENABLE during
> device reset.

I was thinking more of an event like when a device/LUN is added to or
removed from a host. Instead of kicking off a device scan, you could
call the block helper to remap queues (sketch below). That would not be
too invasive to running IO. I'll look into reset some more.
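
Something like this hypothetical handler is what I have in mind. The
event plumbing is invented for illustration, but
blk_mq_update_nr_hw_queues() is the existing block helper I mean:

static void virtscsi_remap_queues(struct Scsi_Host *shost,
				  unsigned int new_nr_queues)
{
	/* Quiesces the queues and rebuilds the hw queue mappings
	 * without a device rescan, so running IO only sees a short
	 * pause instead of a full scan.
	 */
	blk_mq_update_nr_hw_queues(&shost->tag_set, new_nr_queues);
}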


> 
> But it would be complicated to support legacy devices and qemu.
> 
> 
>>
>> ------
>>
>> I was trying to say in the previous email that if all we do is set
>> some bits to indicate the queue is disabled, free its resources, stop
>> polling/queueing in the scsi/target layer, flush, etc., it does not
>> seem useful. I was trying to ask when a user would want only this
>> behavior?
> 
> 
> I think it's device reset; the semantics are that unless the queue is
> enabled, we should treat it as disabled.
> 

Ah ok. I'll look into that some more. A funny thing is that I was
trying to test that a while ago, but it wasn't helpful. I'm guessing it
didn't work because it didn't implement what you wanted for disable
right now :)
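
For reference, the disable path I had been poking at was roughly this
shape (sketch only; names approximated, not the posted patch):

static void vhost_scsi_disable_vq(struct vhost_scsi *vs,
				  struct vhost_virtqueue *vq)
{
	/* Stop the kick/poll handlers from queueing new work. */
	mutex_lock(&vq->mutex);
	vhost_vq_set_backend(vq, NULL);
	mutex_unlock(&vq->mutex);

	/* Wait for work that was already queued to finish. */
	vhost_poll_flush(&vq->poll);

	/* Per-vq resources (worker thread, cmd mem) would be freed
	 * here.
	 */
}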

