linux-kernel.vger.kernel.org archive mirror
From: "Roger Pau Monné" <roger.pau@citrix.com>
To: Julien Grall <julien.grall@citrix.com>, <xen-devel@lists.xenproject.org>
Cc: <linux-kernel@vger.kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>, <ian.campbell@citrix.com>,
	<stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 2/2] block/xen-blkfront: Handle non-indirect grant with 64KB pages
Date: Mon, 19 Oct 2015 13:16:05 +0200	[thread overview]
Message-ID: <5624D0F5.60207@citrix.com> (raw)
In-Reply-To: <561BF52B.9020900@citrix.com>

On 12/10/15 at 20:00, Julien Grall wrote:
> On 06/10/15 11:06, Roger Pau Monné wrote:
>> On 06/10/15 at 11:58, Julien Grall wrote:
>>> Hi Roger,
>>>
>>> On 06/10/2015 10:39, Roger Pau Monné wrote:
>>>> On 05/10/15 at 19:05, Julien Grall wrote:
>>>>> On 05/10/15 17:01, Roger Pau Monné wrote:
>>>>>> On 11/09/15 at 21:32, Julien Grall wrote:
>>>>>>>           ring_req->u.rw.nr_segments = num_grant;
>>>>>>> +        if (unlikely(require_extra_req)) {
>>>>>>> +            id2 = blkif_ring_get_request(info, req, &ring_req2);
>>>>>>
>>>>>> How can you guarantee that there's always going to be another free
>>>>>> request? AFAICT blkif_queue_rq checks for RING_FULL, but you don't
>>>>>> actually know if there's only one slot or more than one available.
>>>>>
>>>>> Because the depth of the queue is divided by 2 when the extra request is
>>>>> used (see xlvbd_init_blk_queue).
>>>
>>> I just noticed that I didn't mention this restriction in the commit
>>> message. I will do it in the next revision.
>>>
>>>> I see; that's quite restrictive, but I guess it's better than
>>>> introducing a new ring macro in order to figure out whether there
>>>> are at least two free slots.
>>>
>>> I actually didn't think about your suggestion. I chose to divide by
>>> two based on the assumption that the block framework will always try
>>> to send a request with the maximum data possible.
>>
>> AFAIK that depends on the request itself; the block layer will try to
>> merge requests if possible, but you can also expect that there are
>> going to be requests that just contain a single block.
>>
>>> I don't know if this assumption is correct, as I'm not fully aware
>>> of how the block framework works.
>>>
>>> If it's valid, in the case of a 64KB guest the maximum size of a
>>> request would be 64KB when indirect segments are not supported. So we
>>> would end up with a lot of 64KB requests, each requiring 2 ring requests.
>>
>> I guess you could add a counter in order to see how many requests were
>> split vs total number of requests.
> 
> So the number of 64KB requests is fairly small compared to the total
> number of requests (277 out of 4687) for general usage (i.e. cd, find).
> 
> However, as soon as I use dd, the block requests will be merged. So I
> guess common usage will not provide enough data to fill a 64KB request.
> 
> And as soon as I use dd with a block size of 64KB, most of the
> requests fill 64KB and an extra ring request is required.
> 
> Note that I had to quickly implement xen_biovec_phys_mergeable for
> 64KB pages, as I had left that aside. Without it, the biovecs won't be
> merged, except with dd when you specify the block size (bs=64KB).
> 
> I've also looked at the code to see if it's possible to check whether
> there are 2 ring requests free and, if not, wait until they are available.
> 
> Currently, we don't need to check whether a request slot is free
> because the queue is sized according to the number of requests
> supported by the ring. This means that the block layer handles the
> check and we will always have space in the ring.
> 
> If we decide to avoid dividing the number of requests enqueued by the
> block layer, we would have to check ourselves whether there are 2 ring
> requests free.
> AFAICT, when BLK_MQ_RQ_QUEUE_BUSY is returned the block layer will
> stop the queue. So we would need some logic in blkfront to know when
> the 2 ring requests become free and restart the queue. I guess it
> would be similar to gnttab_request_free_callback.
> 
> I'd like your advice on whether this is worth implementing in
> blkfront, given that it will only be used for 64KB guests with
> backends that don't support indirect grants.

At this point I don't think it's worth implementing. If you feel like
doing that later in order to improve performance that would be fine, but
I don't think it should be required in order to get this merged.

I think you had to resend the patch anyway to fix some comments, but
apart from that:

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Roger.


Thread overview: 18+ messages
2015-09-11 19:31 [PATCH 0/2] block/xen-blkfront: Support non-indirect with 64KB page granularity Julien Grall
2015-09-11 19:31 ` [PATCH 1/2] block/xen-blkfront: Introduce blkif_ring_get_request Julien Grall
2015-10-05 15:22   ` Roger Pau Monné
2015-09-11 19:32 ` [PATCH 2/2] block/xen-blkfront: Handle non-indirect grant with 64KB pages Julien Grall
2015-10-05 16:01   ` Roger Pau Monné
2015-10-05 17:05     ` [Xen-devel] " Julien Grall
2015-10-06  9:39       ` Roger Pau Monné
2015-10-06  9:58         ` Julien Grall
2015-10-06 10:06           ` Roger Pau Monné
2015-10-12 18:00             ` Julien Grall
2015-10-19 11:16               ` Roger Pau Monné [this message]
2015-10-19 11:36                 ` Julien Grall
2015-09-12  9:46 ` [Xen-devel] [PATCH 0/2] block/xen-blkfront: Support non-indirect with 64KB page granularity Bob Liu
2015-09-13 12:06   ` Julien Grall
2015-09-13 12:44     ` Bob Liu
2015-09-13 17:47       ` Julien Grall
2015-09-14  0:37         ` Bob Liu
2015-10-02  9:32 ` Julien Grall
