xen-devel.lists.xenproject.org archive mirror
* Re: Elaboration of "Question about sharing spinlock_t among VMs in Xen"
@ 2016-06-27  2:44 Dagaen Golomb
  2016-06-27 14:08 ` Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 8+ messages in thread
From: Dagaen Golomb @ 2016-06-27  2:44 UTC (permalink / raw)
  To: Xen-devel

I wanted some elaboration on this question and answer posted recently.

On 06/13/2016 01:43 PM, Meng Xu wrote:
>> Hi,
>>
>> I have a quick question about using the Linux spin_lock() in a Xen
>> environment to protect a host-wide shared (memory) resource among
>> VMs.
>>
>> *** The question is as follows ***
>> Suppose I have two Linux VMs sharing the same spinlock_t lock (through
>> shared memory) on the same host, and one process in each VM. Each
>> process uses the Linux function spin_lock(&lock) [1] to grab and
>> release the lock.
>> Will these two processes in the two VMs race on the shared lock?

> You can't do this: depending on which Linux version you use, you will
> find that the kernel uses ticket locks or qspinlocks, which keep track
> of who is holding the lock (and this information is internal to the VM).
> On top of this, on Xen we use pvlocks, which add another (internal)
> control layer.

I wanted to see if this can be done with the correct combination of
versions and parameters. We are using Linux 4.1.0 for all domains, which
still has the CONFIG_PARAVIRT_SPINLOCKS option. I've recompiled the
guests with this option disabled, and have also added the boot
parameter xen_nopvspin to both domains and dom0 for good measure. A
basic ticket lock holds all the information needed to order requests
inside the struct itself, and I believe that is the variant I'm now using.

Do you think this *should* work? I am still getting a deadlock, but I do
not believe it's due to blocked vcpus, especially after the above
changes. Instead, I believe the spinlock struct is getting corrupted. To
be more precise, I only have two competing domains as a test, both
domUs. I print the raw spinlock struct when I create it and after each
lock/unlock in a test, and I get the following:

Init: [ 00 00 00 00 ]
Lock: [ 00 00 02 00 ]
Unlock: [ 02 00 02 00 ]
Lock: [ 02 00 04 00 ]
Unlock: [ 04 00 04 00 ]

It seems clear from the output and the reading I've done that the first
two bytes are the "currently serving" number and the next two are the
"next ticket to draw" value. With only two guests, one should always be
being served while the other waits, so I would expect the two halves to
stay nearly equal (within one grab, actually) and to end up equal once
both sides are done locking and unlocking. Instead, after what appears
to be a deadlock I destroy the VMs, print the spinlock values, and see
this: [ 11 1e 14 1e ]. Note the 11 and 14: should these be an odd number
apart? The accesses I see keep them even. Please correct me if I am
wrong! It seems that practically every time this issue occurs, the first
pair of bytes is 3 off and the last pair matches. Could this have
something to do with the problem?
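
For reference, here is the mental model I'm working from, as a minimal C
sketch (simplified, not the actual kernel implementation; the names are
mine, and the increment is 1 without CONFIG_PARAVIRT_SPINLOCKS and 2
with it):

#include <stdatomic.h>
#include <stdint.h>

#define TICKET_LOCK_INC 1      /* 1 without CONFIG_PARAVIRT_SPINLOCKS, 2 with it */

/* Bytes [0-1] = head ("currently serving"), bytes [2-3] = tail ("next ticket"). */
struct ticket_lock {
    _Atomic uint16_t head;     /* ticket currently being served  */
    _Atomic uint16_t tail;     /* next ticket number to hand out */
};

static void ticket_lock(struct ticket_lock *l)
{
    /* take a ticket... */
    uint16_t me = atomic_fetch_add(&l->tail, TICKET_LOCK_INC);

    /* ...and spin until our number is being served */
    while (atomic_load(&l->head) != me)
        ;
}

static void ticket_unlock(struct ticket_lock *l)
{
    atomic_fetch_add(&l->head, TICKET_LOCK_INC);
}

If that model is right, head and tail should never drift more than one
grab apart between two well-behaved lockers, which is why the
[ 11 1e 14 1e ] state above looks corrupted to me.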

>> My speculation is that there should be a race on the shared lock when
>> the spin_lock() function in *two VMs* operates on the same lock.
>>
>> We did a quick experiment on this and found that one VM sometimes sees
>> a soft lockup on the lock. But we want to make sure our
>> understanding is correct.
>>
>> We are exploring whether we can use spin_lock to protect the shared
>> resources among VMs, instead of using the PV drivers. If the
>> spin_lock() in Linux can provide host-wide atomicity (which would
>> surprise me, though), that would be great. Otherwise, we probably have
>> to expose the spin_lock in Xen to Linux?

> I'd think this has to be via the hypervisor (or some other third party).
> Otherwise what happens if one of the guests dies while holding the lock?
> -boris

This is a valid point against locking in the guests, but by itself it
won't prevent a spinlock implementation from working! We may move in
that direction for several reasons, but I am still interested in why the
above does not work when I've disabled the PV part that sleeps vcpus.

Regards,
Dagaen Golomb
Ph.D. Student, University of Pennsylvania


* Re: Elaboration of "Question about sharing spinlock_t among VMs in Xen"
  2016-06-27  2:44 Elaboration of "Question about sharing spinlock_t among VMs in Xen" Dagaen Golomb
@ 2016-06-27 14:08 ` Konrad Rzeszutek Wilk
  2016-06-27 15:24   ` Dagaen Golomb
  0 siblings, 1 reply; 8+ messages in thread
From: Konrad Rzeszutek Wilk @ 2016-06-27 14:08 UTC (permalink / raw)
  To: Dagaen Golomb; +Cc: Xen-devel

On Sun, Jun 26, 2016 at 10:44:53PM -0400, Dagaen Golomb wrote:
> I wanted some elaboration on this question and answer posted recently.
> 
> On 06/13/2016 01:43 PM, Meng Xu wrote:
> >> Hi,
> >>
> >> I have a quick question about using the Linux spin_lock() in Xen
> >> environment to protect some host-wide shared (memory) resource among
> >> VMs.
> >>
> >> *** The question is as follows ***
> >> Suppose I have two Linux VMs sharing the same spinlock_t lock (through
> >> the sharing memory) on the same host. Suppose we have one process in
> >> each VM. Each process uses the linux function spin_lock(&lock) [1] to
> >> grab & release the lock.
> >> Will these two processes in the two VMs have race on the shared lock?
> 
> > You can't do this: depending on which Linux version you use you will
> > find that kernel uses ticketlocks or qlocks locks which keep track of
> > who is holding the lock (obviously this information is internal to VM).
> > On top of this on Xen we use pvlocks which add another (internal)
> > control layer.
> 
> I wanted to see if this can be done with the correct combination of
> versions and parameters. We are using 4.1.0 for all domains, which
> still has the CONFIG_PARAVIRT_SPINLOCK option. I've recompiled the
> guests with this option set to n, and have also added the boot
> parameter xen_nopvspin to both domains and dom0 for good measure. A
> basic ticketlock holds all the information inside the struct itself to
> order the requests, and I believe this is the version I'm using.

Hm, weird. B/c from arch/x86/include/asm/spinlock_types.h:
  6 #ifdef CONFIG_PARAVIRT_SPINLOCKS
  7 #define __TICKET_LOCK_INC       2
  8 #define TICKET_SLOWPATH_FLAG    ((__ticket_t)1)
  9 #else
 10 #define __TICKET_LOCK_INC       1
 11 #define TICKET_SLOWPATH_FLAG    ((__ticket_t)0)
 12 #endif
 13

This means that one of your guests is adding '2' while the other is
adding '1'. Or one of them is setting the 'slowpath' flag, which means
that paravirt spinlocks are enabled.
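
If you want to double-check which increment each guest's build actually
uses, something along these lines in a small test module would do it
(rough sketch, untested):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/printk.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(test_lock);

static void dump_lock(const char *tag)
{
        /* dump the first four bytes of the lock word, like your trace */
        unsigned char *b = (unsigned char *)&test_lock;

        pr_info("%s: [ %02x %02x %02x %02x ]\n", tag, b[0], b[1], b[2], b[3]);
}

static int __init lockdump_init(void)
{
        dump_lock("init");
        spin_lock(&test_lock);
        dump_lock("lock");
        spin_unlock(&test_lock);
        dump_lock("unlock");
        return 0;
}
module_init(lockdump_init);
MODULE_LICENSE("GPL");

If a single lock/unlock moves the tail bytes by 1, that guest is using
plain ticketlocks; if it moves by 2 (as in your trace above), that
kernel was built with CONFIG_PARAVIRT_SPINLOCKS.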


> 
> Do you think this *should* work? I am still getting a deadlock issue
> but I do not believe its due to blocking vcpus, especially after the
> above changes. Instead, I believe the spinlock struct is getting
> corrupted. To be more precise, I only have two competing domains as a
> test, both domUs. I print the raw spinlock struct out when I create it
> and after a lock/unlock test. I get the following:
> 
> Init: [ 00 00 00 00 ]
> Lock: [ 00 00 02 00 ]
> Unlock: [ 02 00 02 00 ]
> Lock: [ 02 00 04 00 ]
> Unlock: [ 04 00 04 00 ]
> 
> It seems clear from the output and reading I've done that the first 2
> bytes are the "currently servicing" number and the next two are the
> "next number to draw" value. With only two guests, one should always
> be getting serviced while another waits, so I would expect these two
> halves to stay nearly the same (within one grab actually) and end with
> both values equal when both are done with their locking/unlocking.
> Instead, after what seems to be deadlock I destroy the VMs and print
> the spinlock values an its this: [ 11 1e 14 1e ]. Note the 11 and 14,
> should these be an odd number apart? The accesses I see keep them
> even. Please correct me if I am wrong! Seems practically every time
> there is this issue, the first pair of bytes are 3 off and the last
> pair match. Could this have something to do with the issue?

The odd number would suggest that the TICKET_SLOWPATH_FLAG has been set.

> 
> >> My speculation is that it should have the race on the shard lock when
> >> the spin_lock() function in *two VMs* operate on the same lock.
> >>
> >> We did some quick experiment on this and we found one VM sometimes see
> >> the soft lockup on the lock. But we want to make sure our
> >> understanding is correct.
> >>
> >> We are exploring if we can use the spin_lock to protect the shared
> >> resources among VMs, instead of using the PV drivers. If the
> >> spin_lock() in linux can provide the host-wide atomicity (which will
> >> surprise me, though), that will be great. Otherwise, we probably have
> >> to expose the spin_lock in Xen to the Linux?
> 
> > I'd think this has to be via the hypervisor (or some other third party).
> > Otherwise what happens if one of the guests dies while holding the lock?
> > -boris
> 
> This is a valid point against locking in the guests, but itself won't
> prevent a spinlock implementation from working! We may move this
> direction for several reasons but I am interested in why the above is
> not working when I've disabled the PV part that sleeps vcpus.
> 
> Regards,
> Dagaen Golomb
> Ph.D. Student, University of Pennsylvania
> 

* Re: Elaboration of "Question about sharing spinlock_t among VMs in Xen"
  2016-06-27 14:08 ` Konrad Rzeszutek Wilk
@ 2016-06-27 15:24   ` Dagaen Golomb
  2016-06-27 18:22     ` Juergen Gross
  0 siblings, 1 reply; 8+ messages in thread
From: Dagaen Golomb @ 2016-06-27 15:24 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: Xen-devel

>> >> *** The question is as follows ***
>> >> Suppose I have two Linux VMs sharing the same spinlock_t lock (through
>> >> the sharing memory) on the same host. Suppose we have one process in
>> >> each VM. Each process uses the linux function spin_lock(&lock) [1] to
>> >> grab & release the lock.
>> >> Will these two processes in the two VMs have race on the shared lock?
>>
>> > You can't do this: depending on which Linux version you use you will
>> > find that kernel uses ticketlocks or qlocks locks which keep track of
>> > who is holding the lock (obviously this information is internal to VM).
>> > On top of this on Xen we use pvlocks which add another (internal)
>> > control layer.
>>
>> I wanted to see if this can be done with the correct combination of
>> versions and parameters. We are using 4.1.0 for all domains, which
>> still has the CONFIG_PARAVIRT_SPINLOCK option. I've recompiled the
>> guests with this option set to n, and have also added the boot
>> parameter xen_nopvspin to both domains and dom0 for good measure. A
>> basic ticketlock holds all the information inside the struct itself to
>> order the requests, and I believe this is the version I'm using.
>
> Hm, weird. B/c from arch/x86/include/asm/spinlock_types.h:
>   6 #ifdef CONFIG_PARAVIRT_SPINLOCKS
>   7 #define __TICKET_LOCK_INC       2
>   8 #define TICKET_SLOWPATH_FLAG    ((__ticket_t)1)
>   9 #else
>  10 #define __TICKET_LOCK_INC       1
>  11 #define TICKET_SLOWPATH_FLAG    ((__ticket_t)0)
>  12 #endif
>  13
>
> Which means that one of your guests is adding '2' while another is
> adding '1'. Or one of them is putting the 'slowpath' flag
> which means that the paravirt spinlock is enabled.

Interesting. I went back and checked one of my guests: both the .config
from the source tree I built and the one in /boot/ for the current
kernel have the option "not set", which shows as unchecked in make
menuconfig. So this domain appears to be configured correctly. The thing
is, the other domain is literally a copy of this one, so either both are
wrong or neither is.

>>
>> Do you think this *should* work? I am still getting a deadlock issue
>> but I do not believe its due to blocking vcpus, especially after the
>> above changes. Instead, I believe the spinlock struct is getting
>> corrupted. To be more precise, I only have two competing domains as a
>> test, both domUs. I print the raw spinlock struct out when I create it
>> and after a lock/unlock test. I get the following:
>>
>> Init: [ 00 00 00 00 ]
>> Lock: [ 00 00 02 00 ]
>> Unlock: [ 02 00 02 00 ]
>> Lock: [ 02 00 04 00 ]
>> Unlock: [ 04 00 04 00 ]
>>
>> It seems clear from the output and reading I've done that the first 2
>> bytes are the "currently servicing" number and the next two are the
>> "next number to draw" value. With only two guests, one should always
>> be getting serviced while another waits, so I would expect these two
>> halves to stay nearly the same (within one grab actually) and end with
>> both values equal when both are done with their locking/unlocking.
>> Instead, after what seems to be deadlock I destroy the VMs and print
>> the spinlock values an its this: [ 11 1e 14 1e ]. Note the 11 and 14,
>> should these be an odd number apart? The accesses I see keep them
>> even. Please correct me if I am wrong! Seems practically every time
>> there is this issue, the first pair of bytes are 3 off and the last
>> pair match. Could this have something to do with the issue?
>
> The odd number would suggest that the TICKET_SLOWPATH_FLAG has been set.

It would seem so, and together with the increments of two in the default
behavior, both of these suggest paravirt spinlocks are still in use. Any
idea how to turn them off? I would try disabling all paravirtual options
in the configuration, but I still need access to XenStore and grant
pages, which I suspect I would lose by doing so. It's odd that my boot
config says this option is not set, yet the behavior suggests it is...

>>
>> >> My speculation is that it should have the race on the shard lock when
>> >> the spin_lock() function in *two VMs* operate on the same lock.
>> >>
>> >> We did some quick experiment on this and we found one VM sometimes see
>> >> the soft lockup on the lock. But we want to make sure our
>> >> understanding is correct.
>> >>
>> >> We are exploring if we can use the spin_lock to protect the shared
>> >> resources among VMs, instead of using the PV drivers. If the
>> >> spin_lock() in linux can provide the host-wide atomicity (which will
>> >> surprise me, though), that will be great. Otherwise, we probably have
>> >> to expose the spin_lock in Xen to the Linux?
>>
>> > I'd think this has to be via the hypervisor (or some other third party).
>> > Otherwise what happens if one of the guests dies while holding the lock?
>> > -boris
>>
>> This is a valid point against locking in the guests, but itself won't
>> prevent a spinlock implementation from working! We may move this
>> direction for several reasons but I am interested in why the above is
>> not working when I've disabled the PV part that sleeps vcpus.

Regards,
Dagaen Golomb


* Re: Elaboration of "Question about sharing spinlock_t among VMs in Xen"
  2016-06-27 15:24   ` Dagaen Golomb
@ 2016-06-27 18:22     ` Juergen Gross
  2016-06-27 18:40       ` Dagaen Golomb
  0 siblings, 1 reply; 8+ messages in thread
From: Juergen Gross @ 2016-06-27 18:22 UTC (permalink / raw)
  To: Dagaen Golomb, Konrad Rzeszutek Wilk; +Cc: Xen-devel

On 27/06/16 17:24, Dagaen Golomb wrote:
>>>>> *** The question is as follows ***
>>>>> Suppose I have two Linux VMs sharing the same spinlock_t lock (through
>>>>> the sharing memory) on the same host. Suppose we have one process in
>>>>> each VM. Each process uses the linux function spin_lock(&lock) [1] to
>>>>> grab & release the lock.
>>>>> Will these two processes in the two VMs have race on the shared lock?
>>>
>>>> You can't do this: depending on which Linux version you use you will
>>>> find that kernel uses ticketlocks or qlocks locks which keep track of
>>>> who is holding the lock (obviously this information is internal to VM).
>>>> On top of this on Xen we use pvlocks which add another (internal)
>>>> control layer.
>>>
>>> I wanted to see if this can be done with the correct combination of
>>> versions and parameters. We are using 4.1.0 for all domains, which
>>> still has the CONFIG_PARAVIRT_SPINLOCK option. I've recompiled the
>>> guests with this option set to n, and have also added the boot

Just a paranoid question: what exactly does the .config line look like?
It should _not_ be

CONFIG_PARAVIRT_SPINLOCKS=n

but rather:

# CONFIG_PARAVIRT_SPINLOCKS is not set

>>> parameter xen_nopvspin to both domains and dom0 for good measure. A
>>> basic ticketlock holds all the information inside the struct itself to
>>> order the requests, and I believe this is the version I'm using.
>>
>> Hm, weird. B/c from arch/x86/include/asm/spinlock_types.h:
>>   6 #ifdef CONFIG_PARAVIRT_SPINLOCKS
>>   7 #define __TICKET_LOCK_INC       2
>>   8 #define TICKET_SLOWPATH_FLAG    ((__ticket_t)1)
>>   9 #else
>>  10 #define __TICKET_LOCK_INC       1
>>  11 #define TICKET_SLOWPATH_FLAG    ((__ticket_t)0)
>>  12 #endif
>>  13
>>
>> Which means that one of your guests is adding '2' while another is
>> adding '1'. Or one of them is putting the 'slowpath' flag
>> which means that the paravirt spinlock is enabled.
> 
> Interesting. I went back to check on one of my guests, and the .config
> from the source tree I used, as well as the one in /boot/ for the
> current build both have it "not set" which shows as unchecked in make
> menuconfig, where the option was disabled. So this domain appears to
> be correctly configured. The thing is, the other domain is literally a
> copy of this domain. Either both are wrong or neither are.

One other thing you should be aware of: as soon as one of your guests
has only one vcpu, it will drop the "lock" prefixes for updates of the
lock word. So there will be a chance of races simply because one or both
guests think no other cpu can access the lock word concurrently.
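
The mechanism, roughly (simplified from arch/x86/include/asm/alternative.h;
the exact macro text varies by kernel version):

#ifdef CONFIG_SMP
/* every "lock" prefix records its address in the .smp_locks section */
#define LOCK_PREFIX_HERE \
                ".pushsection .smp_locks,\"a\"\n"       \
                ".balign 4\n"                           \
                ".long 671f - .\n" /* offset */         \
                ".popsection\n"                         \
                "671:"

#define LOCK_PREFIX LOCK_PREFIX_HERE "\n\tlock; "
#else
#define LOCK_PREFIX_HERE ""
#define LOCK_PREFIX ""
#endif

/* When the kernel decides only one cpu is online it walks .smp_locks
 * and overwrites the recorded prefix bytes with NOPs, so the ticket
 * xadd is no longer atomic with respect to anything outside that VM. */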


Juergen


* Re: Elaboration of "Question about sharing spinlock_t among VMs in Xen"
  2016-06-27 18:22     ` Juergen Gross
@ 2016-06-27 18:40       ` Dagaen Golomb
  2016-06-27 19:09         ` Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 8+ messages in thread
From: Dagaen Golomb @ 2016-06-27 18:40 UTC (permalink / raw)
  To: Juergen Gross; +Cc: Xen-devel

>>>>>> *** The question is as follows ***
>>>>>> Suppose I have two Linux VMs sharing the same spinlock_t lock (through
>>>>>> the sharing memory) on the same host. Suppose we have one process in
>>>>>> each VM. Each process uses the linux function spin_lock(&lock) [1] to
>>>>>> grab & release the lock.
>>>>>> Will these two processes in the two VMs have race on the shared lock?
>>>>
>>>>> You can't do this: depending on which Linux version you use you will
>>>>> find that kernel uses ticketlocks or qlocks locks which keep track of
>>>>> who is holding the lock (obviously this information is internal to VM).
>>>>> On top of this on Xen we use pvlocks which add another (internal)
>>>>> control layer.
>>>>
>>>> I wanted to see if this can be done with the correct combination of
>>>> versions and parameters. We are using 4.1.0 for all domains, which
>>>> still has the CONFIG_PARAVIRT_SPINLOCK option. I've recompiled the
>>>> guests with this option set to n, and have also added the boot
>
> Just a paranoid question: how exactly does the .config line look like?
> It should _not_ be
>
> CONFIG_PARAVIRT_SPINLOCK=n
>
> but rather:
>
> # CONFIG_PARAVIRT_SPINLOCK is not set

Yes, it is not set. Good to cover all bases. Below is the config
grepped for "SPIN":

CONFIG_INLINE_SPIN_UNLOCK_IRQ=y
CONFIG_MUTEX_SPIN_ON_OWNER=y
CONFIG_RWSEM_SPIN_ON_OWNER=y
CONFIG_LOCK_SPIN_ON_OWNER=y
# CONFIG_PARAVIRT_SPINLOCKS is not set
# CONFIG_SPINLOCK_DEV is not set
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_SPINLOCK_DEVICE is not set

>>>> parameter xen_nopvspin to both domains and dom0 for good measure. A
>>>> basic ticketlock holds all the information inside the struct itself to
>>>> order the requests, and I believe this is the version I'm using.
>>>
>>> Hm, weird. B/c from arch/x86/include/asm/spinlock_types.h:
>>>   6 #ifdef CONFIG_PARAVIRT_SPINLOCKS
>>>   7 #define __TICKET_LOCK_INC       2
>>>   8 #define TICKET_SLOWPATH_FLAG    ((__ticket_t)1)
>>>   9 #else
>>>  10 #define __TICKET_LOCK_INC       1
>>>  11 #define TICKET_SLOWPATH_FLAG    ((__ticket_t)0)
>>>  12 #endif
>>>  13
>>>
>>> Which means that one of your guests is adding '2' while another is
>>> adding '1'. Or one of them is putting the 'slowpath' flag
>>> which means that the paravirt spinlock is enabled.
>>
>> Interesting. I went back to check on one of my guests, and the .config
>> from the source tree I used, as well as the one in /boot/ for the
>> current build both have it "not set" which shows as unchecked in make
>> menuconfig, where the option was disabled. So this domain appears to
>> be correctly configured. The thing is, the other domain is literally a
>> copy of this domain. Either both are wrong or neither are.
>
> One other thing you should be aware of: as soon as one of your guests
> has only one vcpu it will drop the "lock" prefixes for updates of the
> lock word. So there will be a chance of races just because one or both
> guests are thinking no other cpu can access the lock word concurrently.

Now that is an interesting point! I am indeed using one vcpu for each
domain right now. Does it automatically drop the lock prefixes if it
detects one vcpu at boot, or is this decided at compile time? Shouldn't
CONFIG_SMP=y keep the SMP spinlock implementation regardless of the
core/vcpu count? I definitely did not think about this. The kernel was
built in a one-vcpu domain, so if this happens at compile time, that
could be the issue. I doubt it's done at boot, but if so I presume there
is a way to disable it?

Below is the config file grepped for "SMP".
CONFIG_X86_64_SMP=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_SMP=y
# CONFIG_X86_VSMP is not set
# CONFIG_MAXSMP is not set
CONFIG_PM_SLEEP_SMP=y

See anything problematic? It seems PV spinlocks are not set and SMP is
enabled... or is something else required to keep the spinlocks from
being stripped? I'm also not sure whether any of the SPIN config items
that are set could interfere with this. If this is done at boot, a
pointer toward preventing it would be appreciated!

Regards,
Dagaen Golomb
Ph.D Student, University of Pennsylvania


* Re: Elaboration of "Question about sharing spinlock_t among VMs in Xen"
  2016-06-27 18:40       ` Dagaen Golomb
@ 2016-06-27 19:09         ` Konrad Rzeszutek Wilk
  2016-06-27 19:12           ` Dagaen Golomb
  0 siblings, 1 reply; 8+ messages in thread
From: Konrad Rzeszutek Wilk @ 2016-06-27 19:09 UTC (permalink / raw)
  To: Dagaen Golomb; +Cc: Juergen Gross, Xen-devel

> > One other thing you should be aware of: as soon as one of your guests
> > has only one vcpu it will drop the "lock" prefixes for updates of the
> > lock word. So there will be a chance of races just because one or both
> > guests are thinking no other cpu can access the lock word concurrently.
> 
> Now that is an interesting point! I am indeed using 1 vcpu for each
> domain right now. Does it automatically drop lock if it detects one
> vcpu when booting? Or is this set at compile time? Shouldn't setting

It patches the binary at boot time.

You can disable the patching by passing noreplace-smp on the Linux command line.
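
If you want to avoid depending on the guests' spinlock internals
altogether, another option is a tiny lock of your own in the shared
page, built on compiler atomics (a sketch, not from any existing code):

#include <stdint.h>

/* simple test-and-set lock living in the shared page */
static inline void shared_lock(volatile uint32_t *lock)
{
        while (__atomic_exchange_n(lock, 1, __ATOMIC_ACQUIRE))
                __builtin_ia32_pause();         /* x86 spin hint */
}

static inline void shared_unlock(volatile uint32_t *lock)
{
        __atomic_store_n(lock, 0, __ATOMIC_RELEASE);
}

Boris's earlier point still applies, though: if a guest dies while
holding it, the other side spins forever, so arbitrating through the
hypervisor remains the more robust design.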


> SMP to y regardless of core/vcpu count keep the SMP spinlock
> implementation? I definitely did not think about this -- It was
> compiled with one vcpu so if its done at compile time this could be
> the issue. I doubt its done at boot but if so I would presume there is
> a way to disable this?
> 
> Below is the config file grepped for "SMP".
> CONFIG_X86_64_SMP=y
> CONFIG_GENERIC_SMP_IDLE_THREAD=y
> CONFIG_SMP=y
> # CONFIG_X86_VSMP is not set
> # CONFIG_MAXSMP is not set
> CONFIG_PM_SLEEP_SMP=y
> 
> See anything problematic? Seems PV spinlocks is not set, and SMP is
> enabled... or is something else required to prevent stripping the
> spinlocks? Also not sure if any of the set SPIN config items could
> mess with this. If this is done at boot, a point in the direction for
> preventing this would be appreciated!
> 
> Regards,
> Dagaen Golomb
> Ph.D Student, University of Pennsylvania


* Re: Elaboration of "Question about sharing spinlock_t among VMs in Xen"
  2016-06-27 19:09         ` Konrad Rzeszutek Wilk
@ 2016-06-27 19:12           ` Dagaen Golomb
  2016-06-27 19:59             ` Dagaen Golomb
  0 siblings, 1 reply; 8+ messages in thread
From: Dagaen Golomb @ 2016-06-27 19:12 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: Juergen Gross, Xen-devel

On Mon, Jun 27, 2016 at 3:09 PM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
>> > One other thing you should be aware of: as soon as one of your guests
>> > has only one vcpu it will drop the "lock" prefixes for updates of the
>> > lock word. So there will be a chance of races just because one or both
>> > guests are thinking no other cpu can access the lock word concurrently.
>>
>> Now that is an interesting point! I am indeed using 1 vcpu for each
>> domain right now. Does it automatically drop lock if it detects one
>> vcpu when booting? Or is this set at compile time? Shouldn't setting
>
> It does patching of the binary during bootup time.
>
> You can disable the patching by using noreplace-smp on Linux command line.

Thanks a lot! I will give this a shot... I'm thinking I've covered the
other bases so maybe this will finally make it work as expected!

>> SMP to y regardless of core/vcpu count keep the SMP spinlock
>> implementation? I definitely did not think about this -- It was
>> compiled with one vcpu so if its done at compile time this could be
>> the issue. I doubt its done at boot but if so I would presume there is
>> a way to disable this?
>>
>> Below is the config file grepped for "SMP".
>> CONFIG_X86_64_SMP=y
>> CONFIG_GENERIC_SMP_IDLE_THREAD=y
>> CONFIG_SMP=y
>> # CONFIG_X86_VSMP is not set
>> # CONFIG_MAXSMP is not set
>> CONFIG_PM_SLEEP_SMP=y
>>
>> See anything problematic? Seems PV spinlocks is not set, and SMP is
>> enabled... or is something else required to prevent stripping the
>> spinlocks? Also not sure if any of the set SPIN config items could
>> mess with this. If this is done at boot, a point in the direction for
>> preventing this would be appreciated!

Regards,
Dagaen Golomb
Ph.D Student, University of Pennsylvania


* Re: Elaboration of "Question about sharing spinlock_t among VMs in Xen"
  2016-06-27 19:12           ` Dagaen Golomb
@ 2016-06-27 19:59             ` Dagaen Golomb
  0 siblings, 0 replies; 8+ messages in thread
From: Dagaen Golomb @ 2016-06-27 19:59 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: Juergen Gross, Xen-devel

>>> > One other thing you should be aware of: as soon as one of your guests
>>> > has only one vcpu it will drop the "lock" prefixes for updates of the
>>> > lock word. So there will be a chance of races just because one or both
>>> > guests are thinking no other cpu can access the lock word concurrently.
>>>
>>> Now that is an interesting point! I am indeed using 1 vcpu for each
>>> domain right now. Does it automatically drop lock if it detects one
>>> vcpu when booting? Or is this set at compile time? Shouldn't setting
>>
>> It does patching of the binary during bootup time.
>>
>> You can disable the patching by using noreplace-smp on Linux command line.
>
> Thanks a lot! I will give this a shot... I'm thinking I've covered the
> other bases so maybe this will finally make it work as expected!

Aha! Seems to work now! Thanks for all the useful feedback and quick responses!

Regards,
Dagaen Golomb
Ph.D Student, University of Pennsylvania

