xen-devel.lists.xenproject.org archive mirror
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Meng Xu <xumengpanda@gmail.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: Question about sharing spinlock_t among VMs in Xen
Date: Mon, 13 Jun 2016 23:54:47 +0100	[thread overview]
Message-ID: <cc67dd25-2649-ef43-0d7b-482e5f1af9e7@citrix.com> (raw)
In-Reply-To: <CAENZ-+k8rLKsXDnCDu7xRSAPkHn9umGDVhbWYTy-J_vn-ffy9g@mail.gmail.com>

On 13/06/2016 18:43, Meng Xu wrote:
> Hi,
>
> I have a quick question about using the Linux spin_lock() in Xen
> environment to protect some host-wide shared (memory) resource among
> VMs.
>
> *** The question is as follows ***
> Suppose I have two Linux VMs sharing the same spinlock_t lock (through
> the sharing memory) on the same host. Suppose we have one process in
> each VM. Each process uses the linux function spin_lock(&lock) [1] to
> grab & release the lock.
> Will these two processes in the two VMs have race on the shared lock?

"Race" is debatable.  (After all, the point of a lock is to have
serialise multiple accessors).  But yes, this will be the same lock.

The underlying cache coherency fabric will perform atomic locked
operations on the same physical piece of RAM.

The important question is whether the two different VMs have an
identical idea of what a spinlock_t is.  If not, this will definitely fail.

> My speculation is that it should have the race on the shared lock when
> the spin_lock() function in *two VMs* operates on the same lock.
>
> We did some quick experiment on this and we found one VM sometimes see
> the soft lockup on the lock. But we want to make sure our
> understanding is correct.
>
> We are exploring if we can use the spin_lock to protect the shared
> resources among VMs, instead of using the PV drivers. If the
> spin_lock() in linux can provide the host-wide atomicity (which will
> surprise me, though), that will be great. Otherwise, we probably have
> to expose the spin_lock in Xen to the Linux?

What are you attempting to protect like this?

Anything which a guest can spin on like this is a recipe for disaster,
as you observe: the guest which holds the lock can be scheduled out
in favour of the guest attempting to take the lock.  Alternatively, two
different guests may have different ideas of how to manage the memory
backing a spinlock_t.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Thread overview: 9+ messages
2016-06-13 17:43 Question about sharing spinlock_t among VMs in Xen Meng Xu
2016-06-13 18:28 ` Boris Ostrovsky
2016-06-13 20:46   ` Meng Xu
2016-06-13 21:17     ` Boris Ostrovsky
2016-06-14  1:50       ` Meng Xu
2016-06-13 22:54 ` Andrew Cooper [this message]
2016-06-14  2:13   ` Meng Xu
2016-06-14 16:01     ` Andrew Cooper
2016-06-15 15:28       ` Meng Xu
