From: Razvan Cojocaru <rcojocaru@bitdefender.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Tamas K Lengyel <tamas@tklengyel.com>
Subject: Issues with the xen-access.c model
Date: Tue, 5 Apr 2016 13:55:30 +0300
Message-ID: <570399A2.4090507@bitdefender.com>

Hello,

xen-access.c does roughly this:

for (;;) {
	poll_evt_channel();

	if (new_events) {
		while (ring_buffer_has_requests) {
			pull_request();
			process_request();
			put_response();
		}

		signal_evt_channel();
	}
}
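
For reference, here is roughly how that pseudocode maps onto the
consumer side in C. This is a simplified sketch, not the actual
xen-access.c code: it assumes the pre-libxenevtchn xc_evtchn_*
interface and the standard ring macros, the get_request() /
process_request() / put_response() prototypes merely stand in for the
real ring handling, and all setup and error handling is omitted.

#include <poll.h>
#include <xenctrl.h>
#include <xen/vm_event.h>

/* Hypothetical helpers standing in for the ring handling in xen-access.c. */
void get_request(vm_event_back_ring_t *ring, vm_event_request_t *req);
void process_request(vm_event_request_t *req, vm_event_response_t *rsp);
void put_response(vm_event_back_ring_t *ring, vm_event_response_t *rsp);

/* bound_port is the local port bound to the guest's vm_event channel
 * (assumed to have been set up elsewhere). */
void event_loop(xc_evtchn *xce, evtchn_port_t bound_port,
                vm_event_back_ring_t *back_ring)
{
    struct pollfd fd = { .fd = xc_evtchn_fd(xce), .events = POLLIN | POLLERR };

    for (;;)
    {
        if ( poll(&fd, 1, 100) <= 0 )        /* poll_evt_channel() */
            continue;                        /* timeout/error handling omitted */

        evtchn_port_or_error_t port = xc_evtchn_pending(xce);
        if ( port < 0 )
            continue;
        xc_evtchn_unmask(xce, port);         /* re-arm the channel */

        while ( RING_HAS_UNCONSUMED_REQUESTS(back_ring) )
        {
            vm_event_request_t req;
            vm_event_response_t rsp;

            get_request(back_ring, &req);    /* pull_request() */
            process_request(&req, &rsp);     /* process_request() */
            put_response(back_ring, &rsp);   /* put_response() */
        }

        xc_evtchn_notify(xce, bound_port);   /* signal_evt_channel() */
    }
}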

The problems are these:

1. vm_event_put_request() calls notify_via_xen_event_channel(d,
ved->xen_port); at the very end, and we seem to be using a FIFO event
channel (if I'm reading the code right, it's generic event channel code
that pre-dates the vm_event code sitting on top of it).

This means that for a guest that's running two VCPUs, the following
scenario is possible:

* VCPU1 puts an event in the ring buffer
* VCPU1 signals the event channel
* poll_evt_channel() wakes up
* VCPU2 puts an event in the ring buffer
* the loop processes the first event
* VCPU2 signals the event channel
* the loop processes the second event (since there _are_ more requests
in the ring buffer)
* the loop signals the event channel that requests have been processed
* poll_evt_channel() wakes up again, because of the second event channel
signal, but this is pointless since all events have already been processed

This can be avoided by counting the requests processed vs. the number
of successful poll()s (see the sketch below), but it still does not
solve the second problem.
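
One way to read that counting idea, as a delta on the sketch above
(the counters and the skip condition are my interpretation, not
anything that exists in xen-access.c today):

/* Counting wakeups vs. requests processed -- interpretation only. */
unsigned long wakeups = 0, requests_processed = 0;

for (;;)
{
    if ( poll(&fd, 1, 100) <= 0 )            /* poll_evt_channel() */
        continue;

    evtchn_port_or_error_t port = xc_evtchn_pending(xce);
    if ( port < 0 )
        continue;
    xc_evtchn_unmask(xce, port);
    wakeups++;

    /*
     * Stale wakeup: earlier passes already drained at least as many
     * requests as we have seen wakeups and the ring is currently empty,
     * so the request behind this signal was processed in a previous
     * iteration and there is nothing left to do or to acknowledge.
     */
    if ( requests_processed >= wakeups &&
         !RING_HAS_UNCONSUMED_REQUESTS(back_ring) )
        continue;

    while ( RING_HAS_UNCONSUMED_REQUESTS(back_ring) )
    {
        vm_event_request_t req;
        vm_event_response_t rsp;

        get_request(back_ring, &req);        /* pull_request() */
        process_request(&req, &rsp);         /* process_request() */
        put_response(back_ring, &rsp);       /* put_response() */
        requests_processed++;
    }

    xc_evtchn_notify(xce, bound_port);       /* signal_evt_channel() */
}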

2. Race conditions. At least in theory, there's nothing to stop the
second iteration of the request processing loop from reading a
partially written request from the ring buffer, or garbage left over
from a previous request, if it hits it "just right" (i.e. the "VCPU2
puts an event in the ring buffer" step can overlap with the "the loop
processes the second event" step).
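
For context, the consumer-side pull looks roughly like this
(paraphrased from memory rather than quoted, and assuming the same
headers as the sketch above plus string.h for memcpy()); the window
the theoretical race is about sits between the read of req_prod hidden
inside RING_HAS_UNCONSUMED_REQUESTS() and the memcpy() of the request
body:

/* Paraphrased consumer-side pull; not verbatim xen-access.c. */
static void get_request(vm_event_back_ring_t *back_ring,
                        vm_event_request_t *req)
{
    RING_IDX req_cons = back_ring->req_cons;

    /*
     * Copy the request out of the shared page.  If the producer's index
     * update were ever visible before the request body itself, this copy
     * could in principle see a partially written request, which is the
     * theoretical race described above.
     */
    memcpy(req, RING_GET_REQUEST(back_ring, req_cons), sizeof(*req));
    req_cons++;

    /* Tell the producer how far we have consumed. */
    back_ring->req_cons = req_cons;
    back_ring->sring->req_event = req_cons + 1;
}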

None of this really shows up unless we're in heavy processing
scenarios, but the issues can occur, especially with synced vm_events.

The problems seem to be alleviated by changing the loop to this:

for (;;) {
	poll_evt_channel();

	if (new_events) {
		while (ring_buffer_has_requests) {
			pull_request();
			process_request();
			put_response();

			/* Moved here. */
			signal_evt_channel();
		}
	}
}
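
In terms of the earlier sketch, the change amounts to moving the
notify into the inner loop (same assumptions and assumed helpers as
before):

while ( RING_HAS_UNCONSUMED_REQUESTS(back_ring) )
{
    vm_event_request_t req;
    vm_event_response_t rsp;

    get_request(back_ring, &req);
    process_request(&req, &rsp);
    put_response(back_ring, &rsp);

    /* Moved here: signal after every response rather than once per batch. */
    xc_evtchn_notify(xce, bound_port);
}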

This is obviously not foolproof, though; the main remaining problem is
synchronizing access to the ring buffer between the hypervisor side
and userspace.

How should we proceed to fix this?


Thanks,
Razvan

