From: Julien Grall <julien.grall@citrix.com>
To: "Roger Pau Monné" <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
Cc: <ian.campbell@citrix.com>, <stefano.stabellini@eu.citrix.com>,
<linux-kernel@vger.kernel.org>,
David Vrabel <david.vrabel@citrix.com>,
"Boris Ostrovsky" <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH 2/2] block/xen-blkfront: Handle non-indirect grant with 64KB pages
Date: Tue, 6 Oct 2015 10:58:02 +0100 [thread overview]
Message-ID: <56139B2A.1050809@citrix.com> (raw)
In-Reply-To: <561396DF.9040406@citrix.com>
Hi Roger,
On 06/10/2015 10:39, Roger Pau Monné wrote:
> On 05/10/15 at 19:05, Julien Grall wrote:
>> On 05/10/15 17:01, Roger Pau Monné wrote:
>>> On 11/09/15 at 21:32, Julien Grall wrote:
>>>> ring_req->u.rw.nr_segments = num_grant;
>>>> + if (unlikely(require_extra_req)) {
>>>> + id2 = blkif_ring_get_request(info, req, &ring_req2);
>>>
>>> How can you guarantee that there's always going to be another free
>>> request? AFAICT blkif_queue_rq checks for RING_FULL, but you don't
>>> actually know if there's only one slot or more than one available.
>>
>> Because the depth of the queue is divided by 2 when the extra request is
>> used (see xlvbd_init_blk_queue).
I just noticed that I didn't mention this restriction in the commit
message. I will add it in the next revision.
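The restriction quoted above (halving the queue depth so a RING_FULL check implies two free slots) can be sketched as below. This is a minimal standalone illustration, not the actual xlvbd_init_blk_queue code; the constants XEN_PAGE_SIZE (4KB grant granularity) and BLKIF_MAX_SEGMENTS_PER_REQUEST (11) match the Xen block interface, but the helper names are made up for the example:

```c
#include <assert.h>
#include <stdbool.h>

#define XEN_PAGE_SIZE 4096u                /* grant granularity */
#define BLKIF_MAX_SEGMENTS_PER_REQUEST 11u /* segments per ring request */

/* With 64KB guest pages, a single page spans 16 grants; without
 * indirect descriptors a ring request only carries 11 segments, so
 * one block-layer request may need two ring slots. */
static bool requires_extra_req(unsigned page_size)
{
    unsigned grants_per_page = page_size / XEN_PAGE_SIZE;
    return grants_per_page > BLKIF_MAX_SEGMENTS_PER_REQUEST;
}

/* Halve the queue depth advertised to the block layer, so that
 * blkif_queue_rq's RING_FULL check guarantees at least two free
 * slots whenever a request is accepted. */
static unsigned queue_depth(unsigned ring_size, unsigned page_size)
{
    return requires_extra_req(page_size) ? ring_size / 2 : ring_size;
}
```

With this invariant, blkif_ring_get_request can safely be called twice per block-layer request without re-checking the ring.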
> I see, that's quite restrictive but I guess it's better than introducing
> a new ring macro in order to figure out if there are at least two free
> slots.
I actually didn't think about your suggestion. I chose to divide by two
based on the assumption that the block framework will always try to send
a request with the maximum data possible.
I don't know whether this assumption is correct, as I'm not fully aware
of how the block framework works.
If it's valid, in the case of a 64KB guest the maximum size of a request
would be 64KB when indirect segments are not supported. So we would end
up with a lot of 64KB requests, each requiring 2 ring requests.
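The two-ring-request case described above can be illustrated with a small sketch of how the 16 grant segments of a 64KB request would split across the two ring requests. The split_segments helper is hypothetical (the real driver fills ring_req and ring_req2 in place); the constants are the real Xen interface values:

```c
#include <assert.h>

#define XEN_PAGE_SIZE 4096u                /* grant granularity */
#define BLKIF_MAX_SEGMENTS_PER_REQUEST 11u /* segments per ring request */

/* Split a request's grant segments between the first ring request
 * and the extra one: a 64KB request needs 16 grants, so 11 go in the
 * first ring request and the remaining 5 in the second. Returns the
 * total number of grants. */
static unsigned split_segments(unsigned total_bytes,
                               unsigned *first, unsigned *second)
{
    unsigned num_grant =
        (total_bytes + XEN_PAGE_SIZE - 1) / XEN_PAGE_SIZE;

    *first = num_grant < BLKIF_MAX_SEGMENTS_PER_REQUEST
                 ? num_grant
                 : BLKIF_MAX_SEGMENTS_PER_REQUEST;
    *second = num_grant - *first;
    return num_grant;
}
```

Note that the second count is non-zero only when num_grant exceeds 11, which is exactly the require_extra_req condition in the patch.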
Regards,
--
Julien Grall
Thread overview: 18+ messages
2015-09-11 19:31 [PATCH 0/2] block/xen-blkfront: Support non-indirect with 64KB page granularity Julien Grall
2015-09-11 19:31 ` [PATCH 1/2] block/xen-blkfront: Introduce blkif_ring_get_request Julien Grall
2015-10-05 15:22 ` Roger Pau Monné
2015-09-11 19:32 ` [PATCH 2/2] block/xen-blkfront: Handle non-indirect grant with 64KB pages Julien Grall
2015-10-05 16:01 ` Roger Pau Monné
2015-10-05 17:05 ` [Xen-devel] " Julien Grall
2015-10-06 9:39 ` Roger Pau Monné
2015-10-06 9:58 ` Julien Grall [this message]
2015-10-06 10:06 ` Roger Pau Monné
2015-10-12 18:00 ` Julien Grall
2015-10-19 11:16 ` Roger Pau Monné
2015-10-19 11:36 ` Julien Grall
2015-09-12 9:46 ` [Xen-devel] [PATCH 0/2] block/xen-blkfront: Support non-indirect with 64KB page granularity Bob Liu
2015-09-13 12:06 ` Julien Grall
2015-09-13 12:44 ` Bob Liu
2015-09-13 17:47 ` Julien Grall
2015-09-14 0:37 ` Bob Liu
2015-10-02 9:32 ` Julien Grall