From: "Jan Beulich" <JBeulich@suse.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Ping: [PATCH qemu-trad] HVM: atomically access pointers in bufioreq handling
Date: Tue, 07 Jul 2015 17:03:17 +0100
Message-ID: <559C1465020000780008DB36@mail.emea.novell.com>
In-Reply-To: <558812A1020000780008778E@mail.emea.novell.com>

>>> On 22.06.15 at 13:50, <JBeulich@suse.com> wrote:
> The number of slots per page being 511 (i.e. not a power of two) means
> that the (32-bit) read and write indexes wrapping past 2^32 will
> disturb operation: 2^32 is not a multiple of 511, so the slot an index
> selects (index % 511) jumps discontinuously at the wrap. The
> hypervisor side gets I/O req server creation extended so we can
> indicate that we're using suitable atomic accesses where needed (not
> all accesses to the two pointers really need to be atomic), allowing
> it to atomically canonicalize both pointers once both have gone
> through at least one cycle.
> 
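To see the wrap-around hazard concretely: below is a minimal standalone
C sketch (not part of the patch) of why a 511-slot ring misbehaves when
a free-running 32-bit index wraps, while a power-of-two ring does not.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t before = UINT32_MAX;  /* index 2^32 - 1 */
        uint32_t after = before + 1;   /* wraps to 0 */

        /* 511 slots: 2^32 % 511 == 32, so the slot mapping is
         * discontinuous across the wrap -- the slot jumps from 31
         * back to 0 instead of advancing to 32. */
        printf("511 slots: %u -> %u\n", before % 511, after % 511);

        /* 512 slots (power of two): 2^32 % 512 == 0, so the wrap is
         * seamless -- slot 0 really is the successor of slot 511. */
        printf("512 slots: %u -> %u\n", before % 512, after % 512);
        return 0;
    }

This prints "511 slots: 31 -> 0" and "512 slots: 511 -> 0"; in the
511-slot case the successor of slot 31 ought to have been slot 32.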
> The Xen side counterpart (not a functional prerequisite to this
> change, although the intention is for Xen to assume default servers
> always use suitable atomic accesses) can be found at e.g.
> http://lists.xenproject.org/archives/html/xen-devel/2015-06/msg02996.html
> 
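For context, the linked Xen-side change can canonicalize both pointers
with a single atomic operation because they occupy adjacent 32-bit
fields, once the device model has indicated (via the extended I/O req
server creation) that it accesses them atomically. The following is
only an illustration of that idea (the union and function names are
made up); see the URL above for the authoritative version.

    #include <stdint.h>

    #define IOREQ_BUFFER_SLOT_NUM 511

    /* Hypothetical layout: both ring pointers aliased as one 64-bit
     * value so they can be updated together. */
    union bufioreq_pointers {
        struct {
            uint32_t read_pointer;
            uint32_t write_pointer;
        };
        uint64_t full;
    };

    static void canonicalize(volatile union bufioreq_pointers *ptrs)
    {
        union bufioreq_pointers old, new;

        old.full = ptrs->full;
        /* Subtract the same multiple of the ring size from both
         * pointers: their difference and their slot mapping
         * (index % IOREQ_BUFFER_SLOT_NUM) are unchanged. */
        new.read_pointer = old.read_pointer % IOREQ_BUFFER_SLOT_NUM;
        new.write_pointer = old.write_pointer -
                            (old.read_pointer - new.read_pointer);
        /* A 64-bit compare-and-swap updates both 32-bit pointers
         * atomically; if the consumer advanced read_pointer in the
         * meantime, the swap simply fails and can be retried. */
        __sync_bool_compare_and_swap(&ptrs->full, old.full, new.full);
    }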
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/i386-dm/helper2.c
> +++ b/i386-dm/helper2.c
> @@ -493,10 +493,19 @@ static int __handle_buffered_iopage(CPUS
>  
>      memset(&req, 0x00, sizeof(req));
>  
> -    while (buffered_io_page->read_pointer !=
> -           buffered_io_page->write_pointer) {
> -        buf_req = &buffered_io_page->buf_ioreq[
> -            buffered_io_page->read_pointer % IOREQ_BUFFER_SLOT_NUM];
> +    for (;;) {
> +        uint32_t rdptr = buffered_io_page->read_pointer, wrptr;
> +
> +        xen_rmb();
> +        wrptr = buffered_io_page->write_pointer;
> +        xen_rmb();
> +        if (rdptr != buffered_io_page->read_pointer) {
> +            continue;
> +        }
> +        if (rdptr == wrptr) {
> +            break;
> +        }
> +        buf_req = &buffered_io_page->buf_ioreq[rdptr % IOREQ_BUFFER_SLOT_NUM];
>          req.size = 1UL << buf_req->size;
>          req.count = 1;
>          req.addr = buf_req->addr;
> @@ -508,15 +517,14 @@ static int __handle_buffered_iopage(CPUS
>          req.data_is_ptr = 0;
>          qw = (req.size == 8);
>          if (qw) {
> -            buf_req = &buffered_io_page->buf_ioreq[
> -                (buffered_io_page->read_pointer+1) % IOREQ_BUFFER_SLOT_NUM];
> +            buf_req = &buffered_io_page->buf_ioreq[(rdptr + 1) %
> +                                                   IOREQ_BUFFER_SLOT_NUM];
>              req.data |= ((uint64_t)buf_req->data) << 32;
>          }
>  
>          __handle_ioreq(env, &req);
>  
> -        xen_mb();
> -        buffered_io_page->read_pointer += qw ? 2 : 1;
> +        __sync_fetch_and_add(&buffered_io_page->read_pointer, qw + 1);
>      }
>  
>      return req.count;

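The loop in the patch follows a snapshot-and-verify pattern. A
condensed, annotated restatement (not the patch itself; "ring" and
consume() are hypothetical stand-ins for the structures used above):

    for (;;) {
        uint32_t rdptr = ring->read_pointer, wrptr;

        xen_rmb();                  /* read rdptr before wrptr */
        wrptr = ring->write_pointer;
        xen_rmb();                  /* read wrptr before the re-check */
        /* The hypervisor may rebase both pointers at once (subtracting
         * the same multiple of the ring size from each). If
         * read_pointer changed under us, the (rdptr, wrptr) snapshot
         * may pair a stale value with a fresh one: retry. */
        if (rdptr != ring->read_pointer)
            continue;
        if (rdptr == wrptr)         /* ring empty */
            break;
        consume(&ring->buf_ioreq[rdptr % IOREQ_BUFFER_SLOT_NUM]);
        /* The increment must be a single atomic read-modify-write:
         * a plain "+=" could lose the hypervisor's concurrent
         * canonicalization of read_pointer. */
        __sync_fetch_and_add(&ring->read_pointer, 1);
    }

This is why the patch replaces "read_pointer += qw ? 2 : 1" (a
non-atomic read-modify-write) with __sync_fetch_and_add(), and why
rdptr is re-checked after the barriers.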