* [PATCH v2 00/34] arch: barrier cleanup + barriers for virt
@ 2015-12-31 19:05 ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:05 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
	H. Peter Anvin, sparclinux, linux-arch, linux-s390,
	Arnd Bergmann, x86, xen-devel, Ingo Molnar, linux-xtensa,
	user-mode-linux-devel, Stefano Stabellini, adi-buildroot-devel,
	Thomas Gleixner, linux-metag, linux-arm-kernel, Andrew Cooper,
	linuxppc-dev, David Miller

Changes since v1:
	- replaced my asm-generic patch with an equivalent patch already in tip
	- added wrappers with a virt_ prefix for better code annotation,
	  as suggested by David Miller
	- dropped XXX in patch names as it makes vger choke; Cc'd all
	  relevant mailing lists on all patches (not personal emails, as
	  the Cc list becomes too long otherwise)

I parked this in the vhost tree for now, but the inclusion of patch 1
from tip creates a merge conflict (even though it's easy to resolve).
Would the tip maintainers prefer merging it through the tip tree instead
(including the virtio patches)?
Or should I just merge it all through my tree, including the duplicate
patch, and assume the conflict will be resolved?
If the latter, acks would be appreciated.

Thanks!

This is really an attempt to clean up some virt code, as suggested by
Peter, who said:
> You could of course go fix that instead of mutilating things into
> sort-of functional state.

This work is needed for virtio, so it's probably easiest to
merge it through my tree - is this fine by everyone?
Arnd, if you agree, could you ack this please?

Note to arch maintainers: please don't cherry-pick patches out of this
patchset, as it has been structured in this order to avoid breaking
bisection.
Please send acks instead!

Sometimes, virtualization is weird. For example, virtio does this (conceptually):

#ifdef CONFIG_SMP
                smp_mb();
#else
                mb();
#endif

Similarly, Xen calls mb() when it's not doing any MMIO at all.

Of course it's wrong in the sense that it's suboptimal. What we would really
like is to have, on UP, exactly the same barrier as on SMP.  This is because a
UP guest can run on an SMP host.

But Linux doesn't provide this ability: if CONFIG_SMP is not defined, it
optimizes most barriers out to a compiler barrier.

Consider x86, for example: what we want is an xchg (NOT an mfence, since
there's no real IO going on here, just switching out of the VM, more like
a function call really), but when built without CONFIG_SMP, smp_store_mb
does not include it.

Virt in general is probably the only use-case, because this really is an
artifact of interfacing with an SMP host while running a UP kernel,
but since we have (at least) two users, it seems to make sense to
put these APIs in a central place.

In fact, smp_ barriers are stubs on !SMP, so they can be defined as follows:

arch/XXX/include/asm/barrier.h:

#define __smp_mb() DOSOMETHING

include/asm-generic/barrier.h:

#ifdef CONFIG_SMP
#define smp_mb() __smp_mb()
#else
#define smp_mb() barrier()
#endif

This has the benefit of cleaning out a bunch of duplicated ifdefs
across many architectures; this patchset brings about a net reduction
in LOC, even with the new barriers and extra documentation :)

Then virt can use __smp_XXX when talking to an SMP host.
To make those users explicit, this patchset adds virt_xxx wrappers
for them.

Touching all archs is a tad tedious, but it's fairly straightforward.

The rest of the patchset is structured as follows:


-. Patch 1 fixes a bug in asm-generic.
   It is already in tip, included here for completeness.

-. Patches 2-12 make sure barrier.h on all remaining
   architectures includes asm-generic/barrier.h:
   after the change in Patch 1, code there matches
   asm-generic/barrier.h almost verbatim.
   Minor code tweaks were required in a couple of places.
   Macros duplicated from asm-generic/barrier.h are dropped
   in the process.

After all that preparatory work, we are getting to the actual change.

-. Patch 13 adds generic smp_XXX wrappers in asm-generic;
   these select __smp_XXX or barrier() depending on CONFIG_SMP.

-. Patches 14-27 change all architectures to
   define __smp_XXX macros; the generic code in asm-generic/barrier.h
   then defines the smp_XXX macros.

   I compiled the affected arches before and after the changes,
   dumped the .text section (using objcopy -O binary), and
   made sure that the object code is exactly identical
   before and after the change.
   I couldn't fully build sh, tile, and xtensa, but I did build
   kernel/rcu/tree.o, kernel/sched/wait.o, and kernel/futex.o
   for them and ran this comparison on those objects instead.

Unfortunately, I don't have a metag cross-build toolset ready.
Hoping for some acks on this architecture.

Finally, the following patches put the __smp_xxx APIs to work for virt:

-. Patch 28 adds virt_ wrappers for __smp_, and documents them.
   After all this work, this requires very few lines of code in
   the generic header.

-. Patches 29, 30, 33, and 34 convert the virtio and xen drivers
   to use the virt_xxx APIs.

   The xen patches are untested;
   the virtio ones have been tested on x86.

-. Patches 31-32 teach virtio to use virt_store_mb.
   The sh architecture was missing a 2-byte smp_store_mb;
   the fix is trivial, although my code is not optimal:
   if anyone cares, please send me a patch to apply on top.
   I didn't build this architecture, but Intel's 0-day
   infrastructure builds it.

   Tested on x86.


Davidlohr Bueso (1):
  lcoking/barriers, arch: Use smp barriers in smp_store_release()

Michael S. Tsirkin (33):
  asm-generic: guard smp_store_release/load_acquire
  ia64: rename nop->iosapic_nop
  ia64: reuse asm-generic/barrier.h
  powerpc: reuse asm-generic/barrier.h
  s390: reuse asm-generic/barrier.h
  sparc: reuse asm-generic/barrier.h
  arm: reuse asm-generic/barrier.h
  arm64: reuse asm-generic/barrier.h
  metag: reuse asm-generic/barrier.h
  mips: reuse asm-generic/barrier.h
  x86/um: reuse asm-generic/barrier.h
  x86: reuse asm-generic/barrier.h
  asm-generic: add __smp_xxx wrappers
  powerpc: define __smp_xxx
  arm64: define __smp_xxx
  arm: define __smp_xxx
  blackfin: define __smp_xxx
  ia64: define __smp_xxx
  metag: define __smp_xxx
  mips: define __smp_xxx
  s390: define __smp_xxx
  sh: define __smp_xxx, fix smp_store_mb for !SMP
  sparc: define __smp_xxx
  tile: define __smp_xxx
  xtensa: define __smp_xxx
  x86: define __smp_xxx
  asm-generic: implement virt_xxx memory barriers
  Revert "virtio_ring: Update weak barriers to use dma_wmb/rmb"
  virtio_ring: update weak barriers to use __smp_XXX
  sh: support a 2-byte smp_store_mb
  virtio_ring: use virt_store_mb
  xenbus: use virt_xxx barriers
  xen/io: use virt_xxx barriers

 arch/arm/include/asm/barrier.h      |  35 ++-----------
 arch/arm64/include/asm/barrier.h    |  19 +++----
 arch/blackfin/include/asm/barrier.h |   4 +-
 arch/ia64/include/asm/barrier.h     |  24 +++------
 arch/metag/include/asm/barrier.h    |  55 ++++++-------------
 arch/mips/include/asm/barrier.h     |  51 ++++++------------
 arch/powerpc/include/asm/barrier.h  |  33 ++++--------
 arch/s390/include/asm/barrier.h     |  25 ++++-----
 arch/sh/include/asm/barrier.h       |  11 +++-
 arch/sparc/include/asm/barrier_32.h |   1 -
 arch/sparc/include/asm/barrier_64.h |  29 +++-------
 arch/sparc/include/asm/processor.h  |   3 --
 arch/tile/include/asm/barrier.h     |   9 ++--
 arch/x86/include/asm/barrier.h      |  36 +++++--------
 arch/x86/um/asm/barrier.h           |   9 +---
 arch/xtensa/include/asm/barrier.h   |   4 +-
 include/asm-generic/barrier.h       | 102 ++++++++++++++++++++++++++++++++----
 include/linux/virtio_ring.h         |  22 +++++---
 include/xen/interface/io/ring.h     |  16 +++---
 arch/ia64/kernel/iosapic.c          |   6 +--
 drivers/virtio/virtio_ring.c        |  15 +++---
 drivers/xen/xenbus/xenbus_comms.c   |   8 +--
 Documentation/memory-barriers.txt   |  28 ++++++++--
 23 files changed, 266 insertions(+), 279 deletions(-)

-- 
MST



* [PATCH v2 01/32] locking/barriers, arch: Use smp barriers in smp_store_release()
@ 2015-12-31 19:05   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:05 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra,
	Benjamin Herrenschmidt, Heiko Carstens, virtualization,
	Paul Mackerras, H. Peter Anvin, sparclinux, Ingo Molnar,
	linux-arch, linux-s390, Davidlohr Bueso, Arnd Bergmann,
	Davidlohr Bueso, Michael Ellerman, x86, Christian Borntraeger,
	Linus Torvalds, xen-devel, Ingo Molnar, Paul E . McKenney,
	linux-xtensa, user-mode-linux-devel

From: Davidlohr Bueso <dave@stgolabs.net>

With commit b92b8b35a2e ("locking/arch: Rename set_mb() to smp_store_mb()")
it was made clear that the context of this call (and thus set_mb)
is strictly for CPU ordering, as opposed to IO. As such, all archs
should use the smp variant of mb(), respecting the semantics and
saving a mandatory barrier on UP.

Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: <linux-arch@vger.kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: dave@stgolabs.net
Link: http://lkml.kernel.org/r/1445975631-17047-3-git-send-email-dave@stgolabs.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/ia64/include/asm/barrier.h    | 2 +-
 arch/powerpc/include/asm/barrier.h | 2 +-
 arch/s390/include/asm/barrier.h    | 2 +-
 include/asm-generic/barrier.h      | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/ia64/include/asm/barrier.h b/arch/ia64/include/asm/barrier.h
index df896a1..209c4b8 100644
--- a/arch/ia64/include/asm/barrier.h
+++ b/arch/ia64/include/asm/barrier.h
@@ -77,7 +77,7 @@ do {									\
 	___p1;								\
 })
 
-#define smp_store_mb(var, value)	do { WRITE_ONCE(var, value); mb(); } while (0)
+#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); smp_mb(); } while (0)
 
 /*
  * The group barrier in front of the rsm & ssm are necessary to ensure
diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
index 0eca6ef..a7af5fb 100644
--- a/arch/powerpc/include/asm/barrier.h
+++ b/arch/powerpc/include/asm/barrier.h
@@ -34,7 +34,7 @@
 #define rmb()  __asm__ __volatile__ ("sync" : : : "memory")
 #define wmb()  __asm__ __volatile__ ("sync" : : : "memory")
 
-#define smp_store_mb(var, value)	do { WRITE_ONCE(var, value); mb(); } while (0)
+#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); smp_mb(); } while (0)
 
 #ifdef __SUBARCH_HAS_LWSYNC
 #    define SMPWMB      LWSYNC
diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
index d68e11e..7ffd0b1 100644
--- a/arch/s390/include/asm/barrier.h
+++ b/arch/s390/include/asm/barrier.h
@@ -36,7 +36,7 @@
 #define smp_mb__before_atomic()		smp_mb()
 #define smp_mb__after_atomic()		smp_mb()
 
-#define smp_store_mb(var, value)		do { WRITE_ONCE(var, value); mb(); } while (0)
+#define smp_store_mb(var, value)	do { WRITE_ONCE(var, value); smp_mb(); } while (0)
 
 #define smp_store_release(p, v)						\
 do {									\
diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index b42afad..0f45f93 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -93,7 +93,7 @@
 #endif	/* CONFIG_SMP */
 
 #ifndef smp_store_mb
-#define smp_store_mb(var, value)  do { WRITE_ONCE(var, value); mb(); } while (0)
+#define smp_store_mb(var, value)  do { WRITE_ONCE(var, value); smp_mb(); } while (0)
 #endif
 
 #ifndef smp_mb__before_atomic
-- 
MST




* [PATCH v2 02/32] asm-generic: guard smp_store_release/load_acquire
@ 2015-12-31 19:05   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:05 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
	H. Peter Anvin, sparclinux, linux-arch, linux-s390,
	Arnd Bergmann, x86, xen-devel, Ingo Molnar, linux-xtensa,
	user-mode-linux-devel, Stefano Stabellini, adi-buildroot-devel,
	Thomas Gleixner, linux-metag, linux-arm-kernel, Andrew Cooper,
	linuxppc-dev, David Miller

Allow architectures to override smp_store_release
and smp_load_acquire by guarding the defines
in asm-generic/barrier.h with ifndef directives.

This is in preparation for reusing asm-generic/barrier.h
on architectures that have their own definitions
of these macros.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 include/asm-generic/barrier.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index 0f45f93..987b2e0 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -104,13 +104,16 @@
 #define smp_mb__after_atomic()	smp_mb()
 #endif
 
+#ifndef smp_store_release
 #define smp_store_release(p, v)						\
 do {									\
 	compiletime_assert_atomic_type(*p);				\
 	smp_mb();							\
 	WRITE_ONCE(*p, v);						\
 } while (0)
+#endif
 
+#ifndef smp_load_acquire
 #define smp_load_acquire(p)						\
 ({									\
 	typeof(*p) ___p1 = READ_ONCE(*p);				\
@@ -118,6 +121,7 @@ do {									\
 	smp_mb();							\
 	___p1;								\
 })
+#endif
 
 #endif /* !__ASSEMBLY__ */
 #endif /* __ASM_GENERIC_BARRIER_H */
-- 
MST




* [PATCH v2 03/32] ia64: rename nop->iosapic_nop
@ 2015-12-31 19:06   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:06 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Tony Luck, Fenghua Yu,
	Jiang Liu

asm-generic/barrier.h defines a nop() macro.
To be able to use this header on ia64, we shouldn't
call local functions/variables nop().

There's one instance where this breaks on ia64:
rename the function to iosapic_nop to avoid the conflict.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/ia64/kernel/iosapic.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/ia64/kernel/iosapic.c b/arch/ia64/kernel/iosapic.c
index d2fae05..90fde5b 100644
--- a/arch/ia64/kernel/iosapic.c
+++ b/arch/ia64/kernel/iosapic.c
@@ -256,7 +256,7 @@ set_rte (unsigned int gsi, unsigned int irq, unsigned int dest, int mask)
 }
 
 static void
-nop (struct irq_data *data)
+iosapic_nop (struct irq_data *data)
 {
 	/* do nothing... */
 }
@@ -415,7 +415,7 @@ iosapic_unmask_level_irq (struct irq_data *data)
 #define iosapic_shutdown_level_irq	mask_irq
 #define iosapic_enable_level_irq	unmask_irq
 #define iosapic_disable_level_irq	mask_irq
-#define iosapic_ack_level_irq		nop
+#define iosapic_ack_level_irq		iosapic_nop
 
 static struct irq_chip irq_type_iosapic_level = {
 	.name =			"IO-SAPIC-level",
@@ -453,7 +453,7 @@ iosapic_ack_edge_irq (struct irq_data *data)
 }
 
 #define iosapic_enable_edge_irq		unmask_irq
-#define iosapic_disable_edge_irq	nop
+#define iosapic_disable_edge_irq	iosapic_nop
 
 static struct irq_chip irq_type_iosapic_edge = {
 	.name =			"IO-SAPIC-edge",
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 04/32] ia64: reuse asm-generic/barrier.h
  2015-12-31 19:05 ` Michael S. Tsirkin
                     ` (3 preceding siblings ...)
  (?)
@ 2015-12-31 19:06   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:06 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Tony Luck, Fenghua Yu,
	Ingo Molnar

On ia64, smp_rmb, smp_wmb, read_barrier_depends, smp_read_barrier_depends
and smp_store_mb() match the asm-generic variants exactly. Drop the
local definitions and pull in asm-generic/barrier.h instead.

This is in preparation for refactoring this code area.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/ia64/include/asm/barrier.h | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/arch/ia64/include/asm/barrier.h b/arch/ia64/include/asm/barrier.h
index 209c4b8..2f93348 100644
--- a/arch/ia64/include/asm/barrier.h
+++ b/arch/ia64/include/asm/barrier.h
@@ -48,12 +48,6 @@
 # define smp_mb()	barrier()
 #endif
 
-#define smp_rmb()	smp_mb()
-#define smp_wmb()	smp_mb()
-
-#define read_barrier_depends()		do { } while (0)
-#define smp_read_barrier_depends()	do { } while (0)
-
 #define smp_mb__before_atomic()	barrier()
 #define smp_mb__after_atomic()	barrier()
 
@@ -77,12 +71,12 @@ do {									\
 	___p1;								\
 })
 
-#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); smp_mb(); } while (0)
-
 /*
  * The group barrier in front of the rsm & ssm are necessary to ensure
  * that none of the previous instructions in the same group are
  * affected by the rsm/ssm.
  */
 
+#include <asm-generic/barrier.h>
+
 #endif /* _ASM_IA64_BARRIER_H */
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 05/32] powerpc: reuse asm-generic/barrier.h
  2015-12-31 19:05 ` Michael S. Tsirkin
  (?)
  (?)
@ 2015-12-31 19:06   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:06 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Benjamin Herrenschmidt,
	Paul Mackerras, Michael Ellerman, Ingo Molnar, Davidlohr Bueso,
	Andrey Konovalov, Paul E. McKenney

On powerpc, read_barrier_depends, smp_read_barrier_depends,
smp_store_mb(), smp_mb__before_atomic and smp_mb__after_atomic match the
asm-generic variants exactly. Drop the local definitions and pull in
asm-generic/barrier.h instead.

This is in preparation for refactoring this code area.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/powerpc/include/asm/barrier.h | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
index a7af5fb..980ad0c 100644
--- a/arch/powerpc/include/asm/barrier.h
+++ b/arch/powerpc/include/asm/barrier.h
@@ -34,8 +34,6 @@
 #define rmb()  __asm__ __volatile__ ("sync" : : : "memory")
 #define wmb()  __asm__ __volatile__ ("sync" : : : "memory")
 
-#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); smp_mb(); } while (0)
-
 #ifdef __SUBARCH_HAS_LWSYNC
 #    define SMPWMB      LWSYNC
 #else
@@ -60,9 +58,6 @@
 #define smp_wmb()	barrier()
 #endif /* CONFIG_SMP */
 
-#define read_barrier_depends()		do { } while (0)
-#define smp_read_barrier_depends()	do { } while (0)
-
 /*
  * This is a barrier which prevents following instructions from being
  * started until the value of the argument x is known.  For example, if
@@ -87,8 +82,8 @@ do {									\
 	___p1;								\
 })
 
-#define smp_mb__before_atomic()     smp_mb()
-#define smp_mb__after_atomic()      smp_mb()
 #define smp_mb__before_spinlock()   smp_mb()
 
+#include <asm-generic/barrier.h>
+
 #endif /* _ASM_POWERPC_BARRIER_H */
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 05/32] powerpc: reuse asm-generic/barrier.h
@ 2015-12-31 19:06   ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:06 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Benjamin Herrenschmidt,
	Paul Mackerras, Michael Ellerman, Ingo Molnar, Davidlohr Bueso,
	Andrey Konovalov, Paul E. McKenney

On powerpc read_barrier_depends, smp_read_barrier_depends
smp_store_mb(), smp_mb__before_atomic and smp_mb__after_atomic match the
asm-generic variants exactly. Drop the local definitions and pull in
asm-generic/barrier.h instead.

This is in preparation to refactoring this code area.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/powerpc/include/asm/barrier.h | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
index a7af5fb..980ad0c 100644
--- a/arch/powerpc/include/asm/barrier.h
+++ b/arch/powerpc/include/asm/barrier.h
@@ -34,8 +34,6 @@
 #define rmb()  __asm__ __volatile__ ("sync" : : : "memory")
 #define wmb()  __asm__ __volatile__ ("sync" : : : "memory")
 
-#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); smp_mb(); } while (0)
-
 #ifdef __SUBARCH_HAS_LWSYNC
 #    define SMPWMB      LWSYNC
 #else
@@ -60,9 +58,6 @@
 #define smp_wmb()	barrier()
 #endif /* CONFIG_SMP */
 
-#define read_barrier_depends()		do { } while (0)
-#define smp_read_barrier_depends()	do { } while (0)
-
 /*
  * This is a barrier which prevents following instructions from being
  * started until the value of the argument x is known.  For example, if
@@ -87,8 +82,8 @@ do {									\
 	___p1;								\
 })
 
-#define smp_mb__before_atomic()     smp_mb()
-#define smp_mb__after_atomic()      smp_mb()
 #define smp_mb__before_spinlock()   smp_mb()
 
+#include <asm-generic/barrier.h>
+
 #endif /* _ASM_POWERPC_BARRIER_H */
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread


* [PATCH v2 06/32] s390: reuse asm-generic/barrier.h
  2015-12-31 19:05 ` Michael S. Tsirkin
                     ` (3 preceding siblings ...)
  (?)
@ 2015-12-31 19:06   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:06 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Martin Schwidefsky,
	Heiko Carstens

On s390, read_barrier_depends(), smp_read_barrier_depends(),
smp_store_mb(), smp_mb__before_atomic() and smp_mb__after_atomic() match
the asm-generic variants exactly. Drop the local definitions and pull in
asm-generic/barrier.h instead.

This is in preparation for refactoring this code area.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/s390/include/asm/barrier.h | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
index 7ffd0b1..c358c31 100644
--- a/arch/s390/include/asm/barrier.h
+++ b/arch/s390/include/asm/barrier.h
@@ -30,14 +30,6 @@
 #define smp_rmb()			rmb()
 #define smp_wmb()			wmb()
 
-#define read_barrier_depends()		do { } while (0)
-#define smp_read_barrier_depends()	do { } while (0)
-
-#define smp_mb__before_atomic()		smp_mb()
-#define smp_mb__after_atomic()		smp_mb()
-
-#define smp_store_mb(var, value)	do { WRITE_ONCE(var, value); smp_mb(); } while (0)
-
 #define smp_store_release(p, v)						\
 do {									\
 	compiletime_assert_atomic_type(*p);				\
@@ -53,4 +45,6 @@ do {									\
 	___p1;								\
 })
 
+#include <asm-generic/barrier.h>
+
 #endif /* __ASM_BARRIER_H */
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread


* [PATCH v2 07/32] sparc: reuse asm-generic/barrier.h
  2015-12-31 19:05 ` Michael S. Tsirkin
  (?)
  (?)
@ 2015-12-31 19:06   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:06 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Ingo Molnar, Ralf Baechle,
	Andrey Konovalov

On 64-bit sparc, dma_rmb(), dma_wmb(), smp_store_mb(), smp_mb(),
smp_rmb(), smp_wmb(), read_barrier_depends() and
smp_read_barrier_depends() match the asm-generic variants exactly. Drop
the local definitions and pull in asm-generic/barrier.h instead.

nop() uses __asm__ __volatile__ but is otherwise identical to
the generic version; drop it as well.

This is in preparation for refactoring this code area.

Note: nop() was in processor.h and not in barrier.h as on other
architectures. Nothing seems to depend on it being there though.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/sparc/include/asm/barrier_32.h |  1 -
 arch/sparc/include/asm/barrier_64.h | 21 ++-------------------
 arch/sparc/include/asm/processor.h  |  3 ---
 3 files changed, 2 insertions(+), 23 deletions(-)

diff --git a/arch/sparc/include/asm/barrier_32.h b/arch/sparc/include/asm/barrier_32.h
index ae69eda..8059130 100644
--- a/arch/sparc/include/asm/barrier_32.h
+++ b/arch/sparc/include/asm/barrier_32.h
@@ -1,7 +1,6 @@
 #ifndef __SPARC_BARRIER_H
 #define __SPARC_BARRIER_H
 
-#include <asm/processor.h> /* for nop() */
 #include <asm-generic/barrier.h>
 
 #endif /* !(__SPARC_BARRIER_H) */
diff --git a/arch/sparc/include/asm/barrier_64.h b/arch/sparc/include/asm/barrier_64.h
index 14a9286..26c3f72 100644
--- a/arch/sparc/include/asm/barrier_64.h
+++ b/arch/sparc/include/asm/barrier_64.h
@@ -37,25 +37,6 @@ do {	__asm__ __volatile__("ba,pt	%%xcc, 1f\n\t" \
 #define rmb()	__asm__ __volatile__("":::"memory")
 #define wmb()	__asm__ __volatile__("":::"memory")
 
-#define dma_rmb()	rmb()
-#define dma_wmb()	wmb()
-
-#define smp_store_mb(__var, __value) \
-	do { WRITE_ONCE(__var, __value); membar_safe("#StoreLoad"); } while(0)
-
-#ifdef CONFIG_SMP
-#define smp_mb()	mb()
-#define smp_rmb()	rmb()
-#define smp_wmb()	wmb()
-#else
-#define smp_mb()	__asm__ __volatile__("":::"memory")
-#define smp_rmb()	__asm__ __volatile__("":::"memory")
-#define smp_wmb()	__asm__ __volatile__("":::"memory")
-#endif
-
-#define read_barrier_depends()		do { } while (0)
-#define smp_read_barrier_depends()	do { } while (0)
-
 #define smp_store_release(p, v)						\
 do {									\
 	compiletime_assert_atomic_type(*p);				\
@@ -74,4 +55,6 @@ do {									\
 #define smp_mb__before_atomic()	barrier()
 #define smp_mb__after_atomic()	barrier()
 
+#include <asm-generic/barrier.h>
+
 #endif /* !(__SPARC64_BARRIER_H) */
diff --git a/arch/sparc/include/asm/processor.h b/arch/sparc/include/asm/processor.h
index 2fe99e6..9da9646 100644
--- a/arch/sparc/include/asm/processor.h
+++ b/arch/sparc/include/asm/processor.h
@@ -5,7 +5,4 @@
 #else
 #include <asm/processor_32.h>
 #endif
-
-#define nop() 		__asm__ __volatile__ ("nop")
-
 #endif
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread


* [PATCH v2 07/32] sparc: reuse asm-generic/barrier.h
@ 2015-12-31 19:06   ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:06 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Ingo Molnar, Ralf Baechle,
	Andrey Konovalov

On 64-bit sparc, dma_rmb, dma_wmb, smp_store_mb, smp_mb, smp_rmb,
smp_wmb, read_barrier_depends and smp_read_barrier_depends match the
asm-generic variants exactly. Drop the local definitions and pull in
asm-generic/barrier.h instead.

nop() uses __asm__ __volatile__ but is otherwise identical to
the generic version; drop it as well.

This is in preparation for refactoring this code area.

Note: nop() was in processor.h and not in barrier.h as on other
architectures. Nothing seems to depend on it being there, though.
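As a hedged aside (not kernel code), the include-order trick this patch relies on can be sketched as follows: the arch header defines only what it overrides, then pulls in asm-generic/barrier.h, whose definitions are wrapped in #ifndef guards so they apply only where the arch stayed silent. All names below are simplified stand-ins for the real macros.

```c
#include <assert.h>

/* --- "arch" header: overrides mb() only --- */
#define mb()	__asm__ __volatile__("" ::: "memory")

/* --- "asm-generic" header: fallbacks guarded by #ifndef --- */
#ifndef mb
#define mb()	do { } while (0)	/* never used: arch already defined it */
#endif

#ifndef smp_mb			/* arch did not define this one... */
#define smp_mb()	mb()	/* ...so the generic default kicks in */
#endif

/* Probe that both expansions compile and are usable. */
static inline int barrier_probe(void)
{
	mb();		/* expands to the arch compiler barrier */
	smp_mb();	/* expands to the generic fallback, i.e. mb() */
	return 1;
}
```

This is why the patch moves the `#include <asm-generic/barrier.h>` to the end of barrier_64.h: the arch definitions must come first so the generic guards see them.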

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/sparc/include/asm/barrier_32.h |  1 -
 arch/sparc/include/asm/barrier_64.h | 21 ++-------------------
 arch/sparc/include/asm/processor.h  |  3 ---
 3 files changed, 2 insertions(+), 23 deletions(-)

diff --git a/arch/sparc/include/asm/barrier_32.h b/arch/sparc/include/asm/barrier_32.h
index ae69eda..8059130 100644
--- a/arch/sparc/include/asm/barrier_32.h
+++ b/arch/sparc/include/asm/barrier_32.h
@@ -1,7 +1,6 @@
 #ifndef __SPARC_BARRIER_H
 #define __SPARC_BARRIER_H
 
-#include <asm/processor.h> /* for nop() */
 #include <asm-generic/barrier.h>
 
 #endif /* !(__SPARC_BARRIER_H) */
diff --git a/arch/sparc/include/asm/barrier_64.h b/arch/sparc/include/asm/barrier_64.h
index 14a9286..26c3f72 100644
--- a/arch/sparc/include/asm/barrier_64.h
+++ b/arch/sparc/include/asm/barrier_64.h
@@ -37,25 +37,6 @@ do {	__asm__ __volatile__("ba,pt	%%xcc, 1f\n\t" \
 #define rmb()	__asm__ __volatile__("":::"memory")
 #define wmb()	__asm__ __volatile__("":::"memory")
 
-#define dma_rmb()	rmb()
-#define dma_wmb()	wmb()
-
-#define smp_store_mb(__var, __value) \
-	do { WRITE_ONCE(__var, __value); membar_safe("#StoreLoad"); } while(0)
-
-#ifdef CONFIG_SMP
-#define smp_mb()	mb()
-#define smp_rmb()	rmb()
-#define smp_wmb()	wmb()
-#else
-#define smp_mb()	__asm__ __volatile__("":::"memory")
-#define smp_rmb()	__asm__ __volatile__("":::"memory")
-#define smp_wmb()	__asm__ __volatile__("":::"memory")
-#endif
-
-#define read_barrier_depends()		do { } while (0)
-#define smp_read_barrier_depends()	do { } while (0)
-
 #define smp_store_release(p, v)						\
 do {									\
 	compiletime_assert_atomic_type(*p);				\
@@ -74,4 +55,6 @@ do {									\
 #define smp_mb__before_atomic()	barrier()
 #define smp_mb__after_atomic()	barrier()
 
+#include <asm-generic/barrier.h>
+
 #endif /* !(__SPARC64_BARRIER_H) */
diff --git a/arch/sparc/include/asm/processor.h b/arch/sparc/include/asm/processor.h
index 2fe99e6..9da9646 100644
--- a/arch/sparc/include/asm/processor.h
+++ b/arch/sparc/include/asm/processor.h
@@ -5,7 +5,4 @@
 #else
 #include <asm/processor_32.h>
 #endif
-
-#define nop() 		__asm__ __volatile__ ("nop")
-
 #endif
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 08/32] arm: reuse asm-generic/barrier.h
  2015-12-31 19:05 ` Michael S. Tsirkin
  (?)
  (?)
@ 2015-12-31 19:06   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:06 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Russell King, Ingo Molnar,
	Tony Lindgren

On arm, smp_store_mb, read_barrier_depends, smp_read_barrier_depends,
smp_store_release, smp_load_acquire, smp_mb__before_atomic and
smp_mb__after_atomic match the asm-generic variants exactly. Drop the
local definitions and pull in asm-generic/barrier.h instead.

This is in preparation for refactoring this code area.
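For context, a hedged sketch of what the generic smp_store_release()/smp_load_acquire() pair being consolidated here guarantees, modeled with C11 atomics rather than the kernel's smp_mb() plus WRITE_ONCE()/READ_ONCE() implementation; the ordering contract is the same.

```c
#include <assert.h>
#include <stdatomic.h>

static atomic_int payload;
static atomic_int ready;

static void publish(int value)
{
	atomic_store_explicit(&payload, value, memory_order_relaxed);
	/* release: the payload store cannot be reordered past this store */
	atomic_store_explicit(&ready, 1, memory_order_release);
}

static int consume(void)
{
	/* acquire: if ready == 1 is observed, the payload store is visible too */
	if (atomic_load_explicit(&ready, memory_order_acquire))
		return atomic_load_explicit(&payload, memory_order_relaxed);
	return -1;
}
```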

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/arm/include/asm/barrier.h | 23 +----------------------
 1 file changed, 1 insertion(+), 22 deletions(-)

diff --git a/arch/arm/include/asm/barrier.h b/arch/arm/include/asm/barrier.h
index 3ff5642..31152e8 100644
--- a/arch/arm/include/asm/barrier.h
+++ b/arch/arm/include/asm/barrier.h
@@ -70,28 +70,7 @@ extern void arm_heavy_mb(void);
 #define smp_wmb()	dmb(ishst)
 #endif
 
-#define smp_store_release(p, v)						\
-do {									\
-	compiletime_assert_atomic_type(*p);				\
-	smp_mb();							\
-	WRITE_ONCE(*p, v);						\
-} while (0)
-
-#define smp_load_acquire(p)						\
-({									\
-	typeof(*p) ___p1 = READ_ONCE(*p);				\
-	compiletime_assert_atomic_type(*p);				\
-	smp_mb();							\
-	___p1;								\
-})
-
-#define read_barrier_depends()		do { } while(0)
-#define smp_read_barrier_depends()	do { } while(0)
-
-#define smp_store_mb(var, value)	do { WRITE_ONCE(var, value); smp_mb(); } while (0)
-
-#define smp_mb__before_atomic()	smp_mb()
-#define smp_mb__after_atomic()	smp_mb()
+#include <asm-generic/barrier.h>
 
 #endif /* !__ASSEMBLY__ */
 #endif /* __ASM_BARRIER_H */
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 09/32] arm64: reuse asm-generic/barrier.h
  2015-12-31 19:05 ` Michael S. Tsirkin
  (?)
  (?)
@ 2015-12-31 19:06   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:06 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Catalin Marinas, Will Deacon,
	Ingo Molnar

On arm64, nop, read_barrier_depends, smp_read_barrier_depends,
smp_store_mb(), smp_mb__before_atomic and smp_mb__after_atomic match the
asm-generic variants exactly. Drop the local definitions and pull in
asm-generic/barrier.h instead.

This is in preparation for refactoring this code area.
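For context, smp_store_mb(var, value) means "store the value, then issue a full memory barrier". The generic version is WRITE_ONCE(var, value) followed by smp_mb(); below is a hedged model (not kernel code) using a relaxed C11 store plus a sequentially consistent fence.

```c
#include <assert.h>
#include <stdatomic.h>

static atomic_int state;

/* Illustrative stand-in for the generic smp_store_mb() */
#define smp_store_mb_model(var, value)				\
	do {							\
		atomic_store_explicit(&(var), (value),		\
				      memory_order_relaxed);	\
		atomic_thread_fence(memory_order_seq_cst);	\
	} while (0)

static int store_mb_demo(void)
{
	smp_store_mb_model(state, 7);
	/* later accesses are ordered after the full fence above */
	return atomic_load_explicit(&state, memory_order_relaxed);
}
```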

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/arm64/include/asm/barrier.h | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
index 9622eb4..91a43f4 100644
--- a/arch/arm64/include/asm/barrier.h
+++ b/arch/arm64/include/asm/barrier.h
@@ -91,14 +91,7 @@ do {									\
 	__u.__val;							\
 })
 
-#define read_barrier_depends()		do { } while(0)
-#define smp_read_barrier_depends()	do { } while(0)
-
-#define smp_store_mb(var, value)	do { WRITE_ONCE(var, value); smp_mb(); } while (0)
-#define nop()		asm volatile("nop");
-
-#define smp_mb__before_atomic()	smp_mb()
-#define smp_mb__after_atomic()	smp_mb()
+#include <asm-generic/barrier.h>
 
 #endif	/* __ASSEMBLY__ */
 
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 09/32] arm64: reuse asm-generic/barrier.h
@ 2015-12-31 19:06   ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:06 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Catalin Marinas, Will Deacon,
	Ingo Molnar, Andre Przywara, Andrey Konovalov

On arm64 nop, read_barrier_depends, smp_read_barrier_depends
smp_store_mb(), smp_mb__before_atomic and smp_mb__after_atomic match the
asm-generic variants exactly. Drop the local definitions and pull in
asm-generic/barrier.h instead.

This is in preparation to refactoring this code area.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/arm64/include/asm/barrier.h | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
index 9622eb4..91a43f4 100644
--- a/arch/arm64/include/asm/barrier.h
+++ b/arch/arm64/include/asm/barrier.h
@@ -91,14 +91,7 @@ do {									\
 	__u.__val;							\
 })
 
-#define read_barrier_depends()		do { } while(0)
-#define smp_read_barrier_depends()	do { } while(0)
-
-#define smp_store_mb(var, value)	do { WRITE_ONCE(var, value); smp_mb(); } while (0)
-#define nop()		asm volatile("nop");
-
-#define smp_mb__before_atomic()	smp_mb()
-#define smp_mb__after_atomic()	smp_mb()
+#include <asm-generic/barrier.h>
 
 #endif	/* __ASSEMBLY__ */
 
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread


* [PATCH v2 10/32] metag: reuse asm-generic/barrier.h
  2015-12-31 19:05 ` Michael S. Tsirkin
@ 2015-12-31 19:07     ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:07 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, James Hogan,
	Ingo Molnar, Michael Ellerman

On metag, dma_rmb, dma_wmb, smp_store_mb, read_barrier_depends,
smp_read_barrier_depends, smp_store_release and smp_load_acquire match
the asm-generic variants exactly. Drop the local definitions and pull in
asm-generic/barrier.h instead.

This is in preparation for refactoring this code area.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/metag/include/asm/barrier.h | 25 ++-----------------------
 1 file changed, 2 insertions(+), 23 deletions(-)

diff --git a/arch/metag/include/asm/barrier.h b/arch/metag/include/asm/barrier.h
index 172b7e5..b5b778b 100644
--- a/arch/metag/include/asm/barrier.h
+++ b/arch/metag/include/asm/barrier.h
@@ -44,9 +44,6 @@ static inline void wr_fence(void)
 #define rmb()		barrier()
 #define wmb()		mb()
 
-#define dma_rmb()	rmb()
-#define dma_wmb()	wmb()
-
 #ifndef CONFIG_SMP
 #define fence()		do { } while (0)
 #define smp_mb()        barrier()
@@ -81,27 +78,9 @@ static inline void fence(void)
 #endif
 #endif
 
-#define read_barrier_depends()		do { } while (0)
-#define smp_read_barrier_depends()	do { } while (0)
-
-#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); smp_mb(); } while (0)
-
-#define smp_store_release(p, v)						\
-do {									\
-	compiletime_assert_atomic_type(*p);				\
-	smp_mb();							\
-	WRITE_ONCE(*p, v);						\
-} while (0)
-
-#define smp_load_acquire(p)						\
-({									\
-	typeof(*p) ___p1 = READ_ONCE(*p);				\
-	compiletime_assert_atomic_type(*p);				\
-	smp_mb();							\
-	___p1;								\
-})
-
 #define smp_mb__before_atomic()	barrier()
 #define smp_mb__after_atomic()	barrier()
 
+#include <asm-generic/barrier.h>
+
 #endif /* _ASM_METAG_BARRIER_H */
-- 
MST



* [PATCH v2 11/32] mips: reuse asm-generic/barrier.h
  2015-12-31 19:05 ` Michael S. Tsirkin
@ 2015-12-31 19:07   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:07 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Ralf Baechle, Ingo Molnar,
	Michael Ellerman

On mips, dma_rmb, dma_wmb, smp_store_mb, read_barrier_depends,
smp_read_barrier_depends, smp_store_release and smp_load_acquire match
the asm-generic variants exactly. Drop the local definitions and pull in
asm-generic/barrier.h instead.

This is in preparation for refactoring this code area.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/mips/include/asm/barrier.h | 25 ++-----------------------
 1 file changed, 2 insertions(+), 23 deletions(-)

diff --git a/arch/mips/include/asm/barrier.h b/arch/mips/include/asm/barrier.h
index 752e0b8..3eac4b9 100644
--- a/arch/mips/include/asm/barrier.h
+++ b/arch/mips/include/asm/barrier.h
@@ -10,9 +10,6 @@
 
 #include <asm/addrspace.h>
 
-#define read_barrier_depends()		do { } while(0)
-#define smp_read_barrier_depends()	do { } while(0)
-
 #ifdef CONFIG_CPU_HAS_SYNC
 #define __sync()				\
 	__asm__ __volatile__(			\
@@ -87,8 +84,6 @@
 
 #define wmb()		fast_wmb()
 #define rmb()		fast_rmb()
-#define dma_wmb()	fast_wmb()
-#define dma_rmb()	fast_rmb()
 
 #if defined(CONFIG_WEAK_ORDERING) && defined(CONFIG_SMP)
 # ifdef CONFIG_CPU_CAVIUM_OCTEON
@@ -112,9 +107,6 @@
 #define __WEAK_LLSC_MB		"		\n"
 #endif
 
-#define smp_store_mb(var, value) \
-	do { WRITE_ONCE(var, value); smp_mb(); } while (0)
-
 #define smp_llsc_mb()	__asm__ __volatile__(__WEAK_LLSC_MB : : :"memory")
 
 #ifdef CONFIG_CPU_CAVIUM_OCTEON
@@ -129,22 +121,9 @@
 #define nudge_writes() mb()
 #endif
 
-#define smp_store_release(p, v)						\
-do {									\
-	compiletime_assert_atomic_type(*p);				\
-	smp_mb();							\
-	WRITE_ONCE(*p, v);						\
-} while (0)
-
-#define smp_load_acquire(p)						\
-({									\
-	typeof(*p) ___p1 = READ_ONCE(*p);				\
-	compiletime_assert_atomic_type(*p);				\
-	smp_mb();							\
-	___p1;								\
-})
-
 #define smp_mb__before_atomic()	smp_mb__before_llsc()
 #define smp_mb__after_atomic()	smp_llsc_mb()
 
+#include <asm-generic/barrier.h>
+
 #endif /* __ASM_BARRIER_H */
-- 
MST



* [PATCH v2 12/32] x86/um: reuse asm-generic/barrier.h
@ 2015-12-31 19:07     ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:07 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Jeff Dike, Richard Weinberger,
	Ingo Molnar, Borislav Petkov, Andy Lutomirski,
	user-mode-linux-user

On x86/um, CONFIG_SMP is never defined. As a result, several macros
match the asm-generic variants exactly. Drop the local definitions and
pull in asm-generic/barrier.h instead.

This is in preparation for refactoring this code area.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/x86/um/asm/barrier.h | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/arch/x86/um/asm/barrier.h b/arch/x86/um/asm/barrier.h
index 755481f..174781a 100644
--- a/arch/x86/um/asm/barrier.h
+++ b/arch/x86/um/asm/barrier.h
@@ -36,13 +36,6 @@
 #endif /* CONFIG_X86_PPRO_FENCE */
 #define dma_wmb()	barrier()
 
-#define smp_mb()	barrier()
-#define smp_rmb()	barrier()
-#define smp_wmb()	barrier()
-
-#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); barrier(); } while (0)
-
-#define read_barrier_depends()		do { } while (0)
-#define smp_read_barrier_depends()	do { } while (0)
+#include <asm-generic/barrier.h>
 
 #endif
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 13/32] x86: reuse asm-generic/barrier.h
  2015-12-31 19:05 ` Michael S. Tsirkin
                     ` (3 preceding siblings ...)
  (?)
@ 2015-12-31 19:07   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:07 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Ingo Molnar, Borislav Petkov,
	Andy Lutomirski

As on most architectures, on x86 read_barrier_depends and
smp_read_barrier_depends are empty.  Drop the local definitions and pull
the generic ones from asm-generic/barrier.h instead: they are identical.

This is in preparation for refactoring this code area.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/x86/include/asm/barrier.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
index 0681d25..cc4c2a7 100644
--- a/arch/x86/include/asm/barrier.h
+++ b/arch/x86/include/asm/barrier.h
@@ -43,9 +43,6 @@
 #define smp_store_mb(var, value) do { WRITE_ONCE(var, value); barrier(); } while (0)
 #endif /* SMP */
 
-#define read_barrier_depends()		do { } while (0)
-#define smp_read_barrier_depends()	do { } while (0)
-
 #if defined(CONFIG_X86_PPRO_FENCE)
 
 /*
@@ -91,4 +88,6 @@ do {									\
 #define smp_mb__before_atomic()	barrier()
 #define smp_mb__after_atomic()	barrier()
 
+#include <asm-generic/barrier.h>
+
 #endif /* _ASM_X86_BARRIER_H */
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 14/32] asm-generic: add __smp_xxx wrappers
  2015-12-31 19:05 ` Michael S. Tsirkin
  (?)
  (?)
@ 2015-12-31 19:07     ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:07 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel

On !SMP, most architectures define their barriers as compiler barriers.
On SMP, most need an actual barrier.

Make it possible to remove the code duplication for !SMP by defining
low-level __smp_xxx barriers which do not depend on CONFIG_SMP, then
use them from asm-generic conditionally.

Besides reducing code duplication, these low level APIs will also be
useful for virtualization, where a barrier is sometimes needed even if
!SMP since we might be talking to another kernel on the same SMP system.

Both virtio and Xen drivers will benefit.

The smp_xxx variants should use the __smp_xxx ones or barrier()
depending on CONFIG_SMP, identically for all architectures.

We keep ifndef guards around them for now; once/if all architectures
are converted to use the generic code, we'll be able to remove these.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 include/asm-generic/barrier.h | 91 ++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 82 insertions(+), 9 deletions(-)

diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index 987b2e0..8752964 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -54,22 +54,38 @@
 #define read_barrier_depends()		do { } while (0)
 #endif
 
+#ifndef __smp_mb
+#define __smp_mb()	mb()
+#endif
+
+#ifndef __smp_rmb
+#define __smp_rmb()	rmb()
+#endif
+
+#ifndef __smp_wmb
+#define __smp_wmb()	wmb()
+#endif
+
+#ifndef __smp_read_barrier_depends
+#define __smp_read_barrier_depends()	read_barrier_depends()
+#endif
+
 #ifdef CONFIG_SMP
 
 #ifndef smp_mb
-#define smp_mb()	mb()
+#define smp_mb()	__smp_mb()
 #endif
 
 #ifndef smp_rmb
-#define smp_rmb()	rmb()
+#define smp_rmb()	__smp_rmb()
 #endif
 
 #ifndef smp_wmb
-#define smp_wmb()	wmb()
+#define smp_wmb()	__smp_wmb()
 #endif
 
 #ifndef smp_read_barrier_depends
-#define smp_read_barrier_depends()	read_barrier_depends()
+#define smp_read_barrier_depends()	__smp_read_barrier_depends()
 #endif
 
 #else	/* !CONFIG_SMP */
@@ -92,23 +108,78 @@
 
 #endif	/* CONFIG_SMP */
 
+#ifndef __smp_store_mb
+#define __smp_store_mb(var, value)  do { WRITE_ONCE(var, value); __smp_mb(); } while (0)
+#endif
+
+#ifndef __smp_mb__before_atomic
+#define __smp_mb__before_atomic()	__smp_mb()
+#endif
+
+#ifndef __smp_mb__after_atomic
+#define __smp_mb__after_atomic()	__smp_mb()
+#endif
+
+#ifndef __smp_store_release
+#define __smp_store_release(p, v)					\
+do {									\
+	compiletime_assert_atomic_type(*p);				\
+	__smp_mb();							\
+	WRITE_ONCE(*p, v);						\
+} while (0)
+#endif
+
+#ifndef __smp_load_acquire
+#define __smp_load_acquire(p)						\
+({									\
+	typeof(*p) ___p1 = READ_ONCE(*p);				\
+	compiletime_assert_atomic_type(*p);				\
+	__smp_mb();							\
+	___p1;								\
+})
+#endif
+
+#ifdef CONFIG_SMP
+
+#ifndef smp_store_mb
+#define smp_store_mb(var, value)  __smp_store_mb(var, value)
+#endif
+
+#ifndef smp_mb__before_atomic
+#define smp_mb__before_atomic()	__smp_mb__before_atomic()
+#endif
+
+#ifndef smp_mb__after_atomic
+#define smp_mb__after_atomic()	__smp_mb__after_atomic()
+#endif
+
+#ifndef smp_store_release
+#define smp_store_release(p, v) __smp_store_release(p, v)
+#endif
+
+#ifndef smp_load_acquire
+#define smp_load_acquire(p) __smp_load_acquire(p)
+#endif
+
+#else	/* !CONFIG_SMP */
+
 #ifndef smp_store_mb
-#define smp_store_mb(var, value)  do { WRITE_ONCE(var, value); smp_mb(); } while (0)
+#define smp_store_mb(var, value)  do { WRITE_ONCE(var, value); barrier(); } while (0)
 #endif
 
 #ifndef smp_mb__before_atomic
-#define smp_mb__before_atomic()	smp_mb()
+#define smp_mb__before_atomic()	barrier()
 #endif
 
 #ifndef smp_mb__after_atomic
-#define smp_mb__after_atomic()	smp_mb()
+#define smp_mb__after_atomic()	barrier()
 #endif
 
 #ifndef smp_store_release
 #define smp_store_release(p, v)						\
 do {									\
 	compiletime_assert_atomic_type(*p);				\
-	smp_mb();							\
+	barrier();							\
 	WRITE_ONCE(*p, v);						\
 } while (0)
 #endif
@@ -118,10 +189,12 @@ do {									\
 ({									\
 	typeof(*p) ___p1 = READ_ONCE(*p);				\
 	compiletime_assert_atomic_type(*p);				\
-	smp_mb();							\
+	barrier();							\
 	___p1;								\
 })
 #endif
 
+#endif
+
 #endif /* !__ASSEMBLY__ */
 #endif /* __ASM_GENERIC_BARRIER_H */
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 15/32] powerpc: define __smp_xxx
  2015-12-31 19:05 ` Michael S. Tsirkin
  (?)
  (?)
@ 2015-12-31 19:07   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:07 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Benjamin Herrenschmidt,
	Paul Mackerras, Michael Ellerman, Ingo Molnar, Davidlohr Bueso,
	Andrey Konovalov, Paul E. McKenney

This defines __smp_xxx barriers for powerpc
for use by virtualization.

The smp_xxx barriers are removed, as they are now
defined correctly by asm-generic/barrier.h.

This reduces the amount of arch-specific boilerplate code.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/powerpc/include/asm/barrier.h | 24 ++++++++----------------
 1 file changed, 8 insertions(+), 16 deletions(-)

diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
index 980ad0c..c0deafc 100644
--- a/arch/powerpc/include/asm/barrier.h
+++ b/arch/powerpc/include/asm/barrier.h
@@ -44,19 +44,11 @@
 #define dma_rmb()	__lwsync()
 #define dma_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
 
-#ifdef CONFIG_SMP
-#define smp_lwsync()	__lwsync()
+#define __smp_lwsync()	__lwsync()
 
-#define smp_mb()	mb()
-#define smp_rmb()	__lwsync()
-#define smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
-#else
-#define smp_lwsync()	barrier()
-
-#define smp_mb()	barrier()
-#define smp_rmb()	barrier()
-#define smp_wmb()	barrier()
-#endif /* CONFIG_SMP */
+#define __smp_mb()	mb()
+#define __smp_rmb()	__lwsync()
+#define __smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
 
 /*
  * This is a barrier which prevents following instructions from being
@@ -67,18 +59,18 @@
 #define data_barrier(x)	\
 	asm volatile("twi 0,%0,0; isync" : : "r" (x) : "memory");
 
-#define smp_store_release(p, v)						\
+#define __smp_store_release(p, v)						\
 do {									\
 	compiletime_assert_atomic_type(*p);				\
-	smp_lwsync();							\
+	__smp_lwsync();							\
 	WRITE_ONCE(*p, v);						\
 } while (0)
 
-#define smp_load_acquire(p)						\
+#define __smp_load_acquire(p)						\
 ({									\
 	typeof(*p) ___p1 = READ_ONCE(*p);				\
 	compiletime_assert_atomic_type(*p);				\
-	smp_lwsync();							\
+	__smp_lwsync();							\
 	___p1;								\
 })
 
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 16/32] arm64: define __smp_xxx
  2015-12-31 19:05 ` Michael S. Tsirkin
  (?)
  (?)
@ 2015-12-31 19:07   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:07 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Catalin Marinas, Will Deacon,
	Ingo Molnar

This defines the __smp_xxx barriers for arm64,
for use by virtualization.

The smp_xxx barriers are removed, as they are now
defined correctly by asm-generic/barrier.h.

Note: arm64 does not support !SMP configurations,
so the smp_xxx and __smp_xxx barriers are always equivalent.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/arm64/include/asm/barrier.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
index 91a43f4..dae5c49 100644
--- a/arch/arm64/include/asm/barrier.h
+++ b/arch/arm64/include/asm/barrier.h
@@ -35,11 +35,11 @@
 #define dma_rmb()	dmb(oshld)
 #define dma_wmb()	dmb(oshst)
 
-#define smp_mb()	dmb(ish)
-#define smp_rmb()	dmb(ishld)
-#define smp_wmb()	dmb(ishst)
+#define __smp_mb()	dmb(ish)
+#define __smp_rmb()	dmb(ishld)
+#define __smp_wmb()	dmb(ishst)
 
-#define smp_store_release(p, v)						\
+#define __smp_store_release(p, v)						\
 do {									\
 	compiletime_assert_atomic_type(*p);				\
 	switch (sizeof(*p)) {						\
@@ -62,7 +62,7 @@ do {									\
 	}								\
 } while (0)
 
-#define smp_load_acquire(p)						\
+#define __smp_load_acquire(p)						\
 ({									\
 	union { typeof(*p) __val; char __c[1]; } __u;			\
 	compiletime_assert_atomic_type(*p);				\
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 17/32] arm: define __smp_xxx
  2015-12-31 19:05 ` Michael S. Tsirkin
  (?)
  (?)
@ 2015-12-31 19:07   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:07 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Russell King, Ingo Molnar,
	Tony Lindgren

This defines the __smp_xxx barriers for arm,
for use by virtualization.

The smp_xxx barriers are removed, as they are now
defined correctly by asm-generic/barrier.h.

This reduces the amount of arch-specific boilerplate code.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/arm/include/asm/barrier.h | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/arch/arm/include/asm/barrier.h b/arch/arm/include/asm/barrier.h
index 31152e8..112cc1a 100644
--- a/arch/arm/include/asm/barrier.h
+++ b/arch/arm/include/asm/barrier.h
@@ -60,15 +60,9 @@ extern void arm_heavy_mb(void);
 #define dma_wmb()	barrier()
 #endif
 
-#ifndef CONFIG_SMP
-#define smp_mb()	barrier()
-#define smp_rmb()	barrier()
-#define smp_wmb()	barrier()
-#else
-#define smp_mb()	dmb(ish)
-#define smp_rmb()	smp_mb()
-#define smp_wmb()	dmb(ishst)
-#endif
+#define __smp_mb()	dmb(ish)
+#define __smp_rmb()	__smp_mb()
+#define __smp_wmb()	dmb(ishst)
 
 #include <asm-generic/barrier.h>
 
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 17/32] arm: define __smp_xxx
@ 2015-12-31 19:07   ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:07 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Russell King, Ingo Molnar,
	Tony Lindgren, Andrey Konovalov

This defines __smp_xxx barriers for arm,
for use by virtualization.

smp_xxx barriers are removed as they are
defined correctly by asm-generic/barriers.h

This reduces the amount of arch-specific boiler-plate code.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/arm/include/asm/barrier.h | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/arch/arm/include/asm/barrier.h b/arch/arm/include/asm/barrier.h
index 31152e8..112cc1a 100644
--- a/arch/arm/include/asm/barrier.h
+++ b/arch/arm/include/asm/barrier.h
@@ -60,15 +60,9 @@ extern void arm_heavy_mb(void);
 #define dma_wmb()	barrier()
 #endif
 
-#ifndef CONFIG_SMP
-#define smp_mb()	barrier()
-#define smp_rmb()	barrier()
-#define smp_wmb()	barrier()
-#else
-#define smp_mb()	dmb(ish)
-#define smp_rmb()	smp_mb()
-#define smp_wmb()	dmb(ishst)
-#endif
+#define __smp_mb()	dmb(ish)
+#define __smp_rmb()	__smp_mb()
+#define __smp_wmb()	dmb(ishst)
 
 #include <asm-generic/barrier.h>
 
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 17/32] arm: define __smp_xxx
@ 2015-12-31 19:07   ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:07 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Russell King, Ingo Molnar,
	Tony Lindgren

This defines __smp_xxx barriers for arm,
for use by virtualization.

smp_xxx barriers are removed as they are
defined correctly by asm-generic/barriers.h

This reduces the amount of arch-specific boiler-plate code.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/arm/include/asm/barrier.h | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/arch/arm/include/asm/barrier.h b/arch/arm/include/asm/barrier.h
index 31152e8..112cc1a 100644
--- a/arch/arm/include/asm/barrier.h
+++ b/arch/arm/include/asm/barrier.h
@@ -60,15 +60,9 @@ extern void arm_heavy_mb(void);
 #define dma_wmb()	barrier()
 #endif
 
-#ifndef CONFIG_SMP
-#define smp_mb()	barrier()
-#define smp_rmb()	barrier()
-#define smp_wmb()	barrier()
-#else
-#define smp_mb()	dmb(ish)
-#define smp_rmb()	smp_mb()
-#define smp_wmb()	dmb(ishst)
-#endif
+#define __smp_mb()	dmb(ish)
+#define __smp_rmb()	__smp_mb()
+#define __smp_wmb()	dmb(ishst)
 
 #include <asm-generic/barrier.h>
 
-- 
MST

^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 17/32] arm: define __smp_xxx
  2015-12-31 19:05 ` Michael S. Tsirkin
                   ` (45 preceding siblings ...)
  (?)
@ 2015-12-31 19:07 ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:07 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
	H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
	Russell King, Arnd Bergmann, x86, Tony Lindgren, xen-devel,
	Ingo Molnar, linux-xtensa, user-mode-linux-devel,
	Stefano Stabellini, Andrey Konovalov, adi-buildroot-devel,
	Thomas Gleixner, linux-metag, linux-arm-kernel, Andrew Cooper,
	linuxppc-dev

This defines __smp_xxx barriers for arm,
for use by virtualization.

smp_xxx barriers are removed as they are
defined correctly by asm-generic/barriers.h

This reduces the amount of arch-specific boiler-plate code.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/arm/include/asm/barrier.h | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/arch/arm/include/asm/barrier.h b/arch/arm/include/asm/barrier.h
index 31152e8..112cc1a 100644
--- a/arch/arm/include/asm/barrier.h
+++ b/arch/arm/include/asm/barrier.h
@@ -60,15 +60,9 @@ extern void arm_heavy_mb(void);
 #define dma_wmb()	barrier()
 #endif
 
-#ifndef CONFIG_SMP
-#define smp_mb()	barrier()
-#define smp_rmb()	barrier()
-#define smp_wmb()	barrier()
-#else
-#define smp_mb()	dmb(ish)
-#define smp_rmb()	smp_mb()
-#define smp_wmb()	dmb(ishst)
-#endif
+#define __smp_mb()	dmb(ish)
+#define __smp_rmb()	__smp_mb()
+#define __smp_wmb()	dmb(ishst)
 
 #include <asm-generic/barrier.h>
 
-- 
MST

^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 17/32] arm: define __smp_xxx
@ 2015-12-31 19:07   ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:07 UTC (permalink / raw)
  To: linux-arm-kernel

This defines __smp_xxx barriers for arm,
for use by virtualization.

smp_xxx barriers are removed as they are
defined correctly by asm-generic/barriers.h

This reduces the amount of arch-specific boiler-plate code.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/arm/include/asm/barrier.h | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/arch/arm/include/asm/barrier.h b/arch/arm/include/asm/barrier.h
index 31152e8..112cc1a 100644
--- a/arch/arm/include/asm/barrier.h
+++ b/arch/arm/include/asm/barrier.h
@@ -60,15 +60,9 @@ extern void arm_heavy_mb(void);
 #define dma_wmb()	barrier()
 #endif
 
-#ifndef CONFIG_SMP
-#define smp_mb()	barrier()
-#define smp_rmb()	barrier()
-#define smp_wmb()	barrier()
-#else
-#define smp_mb()	dmb(ish)
-#define smp_rmb()	smp_mb()
-#define smp_wmb()	dmb(ishst)
-#endif
+#define __smp_mb()	dmb(ish)
+#define __smp_rmb()	__smp_mb()
+#define __smp_wmb()	dmb(ishst)
 
 #include <asm-generic/barrier.h>
 
-- 
MST

^ permalink raw reply related	[flat|nested] 572+ messages in thread


* [PATCH v2 18/32] blackfin: define __smp_xxx
@ 2015-12-31 19:08   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:08 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Steven Miao

This defines the __smp_xxx barriers for blackfin,
for use by virtualization.

The smp_xxx barriers are removed, as they are
now defined correctly by asm-generic/barrier.h.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/blackfin/include/asm/barrier.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/blackfin/include/asm/barrier.h b/arch/blackfin/include/asm/barrier.h
index dfb66fe..7cca51c 100644
--- a/arch/blackfin/include/asm/barrier.h
+++ b/arch/blackfin/include/asm/barrier.h
@@ -78,8 +78,8 @@
 
 #endif /* !CONFIG_SMP */
 
-#define smp_mb__before_atomic()	barrier()
-#define smp_mb__after_atomic()	barrier()
+#define __smp_mb__before_atomic()	barrier()
+#define __smp_mb__after_atomic()	barrier()
 
 #include <asm-generic/barrier.h>
 
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread


* [PATCH v2 19/32] ia64: define __smp_xxx
@ 2015-12-31 19:08   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:08 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Tony Luck, Fenghua Yu,
	Ingo Molnar

This defines the __smp_xxx barriers for ia64,
for use by virtualization.

The smp_xxx barriers are removed, as they are
now defined correctly by asm-generic/barrier.h.

This reduces the amount of arch-specific boilerplate code.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/ia64/include/asm/barrier.h | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/arch/ia64/include/asm/barrier.h b/arch/ia64/include/asm/barrier.h
index 2f93348..588f161 100644
--- a/arch/ia64/include/asm/barrier.h
+++ b/arch/ia64/include/asm/barrier.h
@@ -42,28 +42,24 @@
 #define dma_rmb()	mb()
 #define dma_wmb()	mb()
 
-#ifdef CONFIG_SMP
-# define smp_mb()	mb()
-#else
-# define smp_mb()	barrier()
-#endif
+# define __smp_mb()	mb()
 
-#define smp_mb__before_atomic()	barrier()
-#define smp_mb__after_atomic()	barrier()
+#define __smp_mb__before_atomic()	barrier()
+#define __smp_mb__after_atomic()	barrier()
 
 /*
  * IA64 GCC turns volatile stores into st.rel and volatile loads into ld.acq no
  * need for asm trickery!
  */
 
-#define smp_store_release(p, v)						\
+#define __smp_store_release(p, v)						\
 do {									\
 	compiletime_assert_atomic_type(*p);				\
 	barrier();							\
 	WRITE_ONCE(*p, v);						\
 } while (0)
 
-#define smp_load_acquire(p)						\
+#define __smp_load_acquire(p)						\
 ({									\
 	typeof(*p) ___p1 = READ_ONCE(*p);				\
 	compiletime_assert_atomic_type(*p);				\
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread


* [PATCH v2 20/32] metag: define __smp_xxx
@ 2015-12-31 19:08     ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:08 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, James Hogan,
	Ingo Molnar, Davidlohr Bueso

This defines the __smp_xxx barriers for metag,
for use by virtualization.

The smp_xxx barriers are removed, as they are
now defined correctly by asm-generic/barrier.h.

Note: since the __smp_xxx macros must not depend on CONFIG_SMP, they
cannot use the existing fence() macro, which is defined differently
for SMP and !SMP.  This patch therefore introduces a wrapper,
metag_fence(), that does not depend on CONFIG_SMP; fence() is then
defined on top of metag_fence(), depending on CONFIG_SMP.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/metag/include/asm/barrier.h | 32 +++++++++++++++-----------------
 1 file changed, 15 insertions(+), 17 deletions(-)

diff --git a/arch/metag/include/asm/barrier.h b/arch/metag/include/asm/barrier.h
index b5b778b..84880c9 100644
--- a/arch/metag/include/asm/barrier.h
+++ b/arch/metag/include/asm/barrier.h
@@ -44,13 +44,6 @@ static inline void wr_fence(void)
 #define rmb()		barrier()
 #define wmb()		mb()
 
-#ifndef CONFIG_SMP
-#define fence()		do { } while (0)
-#define smp_mb()        barrier()
-#define smp_rmb()       barrier()
-#define smp_wmb()       barrier()
-#else
-
 #ifdef CONFIG_METAG_SMP_WRITE_REORDERING
 /*
  * Write to the atomic memory unlock system event register (command 0). This is
@@ -60,26 +53,31 @@ static inline void wr_fence(void)
  * incoherence). It is therefore ineffective if used after and on the same
  * thread as a write.
  */
-static inline void fence(void)
+static inline void metag_fence(void)
 {
 	volatile int *flushptr = (volatile int *) LINSYSEVENT_WR_ATOMIC_UNLOCK;
 	barrier();
 	*flushptr = 0;
 	barrier();
 }
-#define smp_mb()        fence()
-#define smp_rmb()       fence()
-#define smp_wmb()       barrier()
+#define __smp_mb()        metag_fence()
+#define __smp_rmb()       metag_fence()
+#define __smp_wmb()       barrier()
 #else
-#define fence()		do { } while (0)
-#define smp_mb()        barrier()
-#define smp_rmb()       barrier()
-#define smp_wmb()       barrier()
+#define metag_fence()		do { } while (0)
+#define __smp_mb()        barrier()
+#define __smp_rmb()       barrier()
+#define __smp_wmb()       barrier()
 #endif
+
+#ifdef CONFIG_SMP
+#define fence() metag_fence()
+#else
+#define fence()		do { } while (0)
 #endif
 
-#define smp_mb__before_atomic()	barrier()
-#define smp_mb__after_atomic()	barrier()
+#define __smp_mb__before_atomic()	barrier()
+#define __smp_mb__after_atomic()	barrier()
 
 #include <asm-generic/barrier.h>
 
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 20/32] metag: define __smp_xxx
@ 2015-12-31 19:08     ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:08 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, James Hogan, Ingo Molnar,
	Davidlohr Bueso, Andrey Konovalov

This defines __smp_xxx barriers for metag,
for use by virtualization.

smp_xxx barriers are removed as they are
defined correctly by asm-generic/barriers.h

Note: as __smp_XX macros should not depend on CONFIG_SMP, they can not
use the existing fence() macro since that is defined differently between
SMP and !SMP.  For this reason, this patch introduces a wrapper
metag_fence() that doesn't depend on CONFIG_SMP.
fence() is then defined using that, depending on CONFIG_SMP.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/metag/include/asm/barrier.h | 32 +++++++++++++++-----------------
 1 file changed, 15 insertions(+), 17 deletions(-)

diff --git a/arch/metag/include/asm/barrier.h b/arch/metag/include/asm/barrier.h
index b5b778b..84880c9 100644
--- a/arch/metag/include/asm/barrier.h
+++ b/arch/metag/include/asm/barrier.h
@@ -44,13 +44,6 @@ static inline void wr_fence(void)
 #define rmb()		barrier()
 #define wmb()		mb()
 
-#ifndef CONFIG_SMP
-#define fence()		do { } while (0)
-#define smp_mb()        barrier()
-#define smp_rmb()       barrier()
-#define smp_wmb()       barrier()
-#else
-
 #ifdef CONFIG_METAG_SMP_WRITE_REORDERING
 /*
  * Write to the atomic memory unlock system event register (command 0). This is
@@ -60,26 +53,31 @@ static inline void wr_fence(void)
  * incoherence). It is therefore ineffective if used after and on the same
  * thread as a write.
  */
-static inline void fence(void)
+static inline void metag_fence(void)
 {
 	volatile int *flushptr = (volatile int *) LINSYSEVENT_WR_ATOMIC_UNLOCK;
 	barrier();
 	*flushptr = 0;
 	barrier();
 }
-#define smp_mb()        fence()
-#define smp_rmb()       fence()
-#define smp_wmb()       barrier()
+#define __smp_mb()        metag_fence()
+#define __smp_rmb()       metag_fence()
+#define __smp_wmb()       barrier()
 #else
-#define fence()		do { } while (0)
-#define smp_mb()        barrier()
-#define smp_rmb()       barrier()
-#define smp_wmb()       barrier()
+#define metag_fence()		do { } while (0)
+#define __smp_mb()        barrier()
+#define __smp_rmb()       barrier()
+#define __smp_wmb()       barrier()
 #endif
+
+#ifdef CONFIG_SMP
+#define fence() metag_fence()
+#else
+#define fence()		do { } while (0)
 #endif
 
-#define smp_mb__before_atomic()	barrier()
-#define smp_mb__after_atomic()	barrier()
+#define __smp_mb__before_atomic()	barrier()
+#define __smp_mb__after_atomic()	barrier()
 
 #include <asm-generic/barrier.h>
 
-- 
MST


* [PATCH v2 21/32] mips: define __smp_xxx
  2015-12-31 19:05 ` Michael S. Tsirkin
@ 2015-12-31 19:08   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:08 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
	H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
	Arnd Bergmann, x86, xen-devel, Ingo Molnar, linux-xtensa,
	user-mode-linux-devel, Stefano Stabellini, Andrey Konovalov,
	adi-buildroot-devel, Thomas Gleixner, linux-metag,
	linux-arm-kernel, Andrew Cooper, Ralf Baechle, linuxppc-dev,
	David Miller

This defines the __smp_xxx barriers for mips,
for use by virtualization.

The smp_xxx barriers are removed, as they are already
defined correctly by asm-generic/barrier.h.

Note: the only exception is smp_mb__before_llsc(), which is
mips-specific.  We define both the __smp_mb__before_llsc() variant (for
use in asm/barrier.h) and smp_mb__before_llsc() (for use elsewhere in
this architecture's code).

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/mips/include/asm/barrier.h | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/arch/mips/include/asm/barrier.h b/arch/mips/include/asm/barrier.h
index 3eac4b9..d296633 100644
--- a/arch/mips/include/asm/barrier.h
+++ b/arch/mips/include/asm/barrier.h
@@ -85,20 +85,20 @@
 #define wmb()		fast_wmb()
 #define rmb()		fast_rmb()
 
-#if defined(CONFIG_WEAK_ORDERING) && defined(CONFIG_SMP)
+#if defined(CONFIG_WEAK_ORDERING)
 # ifdef CONFIG_CPU_CAVIUM_OCTEON
-#  define smp_mb()	__sync()
-#  define smp_rmb()	barrier()
-#  define smp_wmb()	__syncw()
+#  define __smp_mb()	__sync()
+#  define __smp_rmb()	barrier()
+#  define __smp_wmb()	__syncw()
 # else
-#  define smp_mb()	__asm__ __volatile__("sync" : : :"memory")
-#  define smp_rmb()	__asm__ __volatile__("sync" : : :"memory")
-#  define smp_wmb()	__asm__ __volatile__("sync" : : :"memory")
+#  define __smp_mb()	__asm__ __volatile__("sync" : : :"memory")
+#  define __smp_rmb()	__asm__ __volatile__("sync" : : :"memory")
+#  define __smp_wmb()	__asm__ __volatile__("sync" : : :"memory")
 # endif
 #else
-#define smp_mb()	barrier()
-#define smp_rmb()	barrier()
-#define smp_wmb()	barrier()
+#define __smp_mb()	barrier()
+#define __smp_rmb()	barrier()
+#define __smp_wmb()	barrier()
 #endif
 
 #if defined(CONFIG_WEAK_REORDERING_BEYOND_LLSC) && defined(CONFIG_SMP)
@@ -111,6 +111,7 @@
 
 #ifdef CONFIG_CPU_CAVIUM_OCTEON
 #define smp_mb__before_llsc() smp_wmb()
+#define __smp_mb__before_llsc() __smp_wmb()
 /* Cause previous writes to become visible on all CPUs as soon as possible */
 #define nudge_writes() __asm__ __volatile__(".set push\n\t"		\
 					    ".set arch=octeon\n\t"	\
@@ -118,11 +119,12 @@
 					    ".set pop" : : : "memory")
 #else
 #define smp_mb__before_llsc() smp_llsc_mb()
+#define __smp_mb__before_llsc() smp_llsc_mb()
 #define nudge_writes() mb()
 #endif
 
-#define smp_mb__before_atomic()	smp_mb__before_llsc()
-#define smp_mb__after_atomic()	smp_llsc_mb()
+#define __smp_mb__before_atomic()	__smp_mb__before_llsc()
+#define __smp_mb__after_atomic()	smp_llsc_mb()
 
 #include <asm-generic/barrier.h>
 
-- 
MST


* [PATCH v2 22/32] s390: define __smp_xxx
  2015-12-31 19:05 ` Michael S. Tsirkin
@ 2015-12-31 19:08   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:08 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Martin Schwidefsky,
	Heiko Carstens

This defines the __smp_xxx barriers for s390,
for use by virtualization.

Some smp_xxx barriers are removed, as they are already
defined correctly by asm-generic/barrier.h.

Note: smp_mb(), smp_rmb() and smp_wmb() are defined as full barriers
unconditionally on this architecture.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/s390/include/asm/barrier.h | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
index c358c31..fbd25b2 100644
--- a/arch/s390/include/asm/barrier.h
+++ b/arch/s390/include/asm/barrier.h
@@ -26,18 +26,21 @@
 #define wmb()				barrier()
 #define dma_rmb()			mb()
 #define dma_wmb()			mb()
-#define smp_mb()			mb()
-#define smp_rmb()			rmb()
-#define smp_wmb()			wmb()
-
-#define smp_store_release(p, v)						\
+#define __smp_mb()			mb()
+#define __smp_rmb()			rmb()
+#define __smp_wmb()			wmb()
+#define smp_mb()			__smp_mb()
+#define smp_rmb()			__smp_rmb()
+#define smp_wmb()			__smp_wmb()
+
+#define __smp_store_release(p, v)					\
 do {									\
 	compiletime_assert_atomic_type(*p);				\
 	barrier();							\
 	WRITE_ONCE(*p, v);						\
 } while (0)
 
-#define smp_load_acquire(p)						\
+#define __smp_load_acquire(p)						\
 ({									\
 	typeof(*p) ___p1 = READ_ONCE(*p);				\
 	compiletime_assert_atomic_type(*p);				\
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 23/32] sh: define __smp_xxx, fix smp_store_mb for !SMP
  2015-12-31 19:05 ` Michael S. Tsirkin
  (?)
@ 2015-12-31 19:08   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:08 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Ingo Molnar

The sh variant of smp_store_mb() calls xchg() even on !SMP, which is
stronger than implied by both the name and the documentation.

Define __smp_store_mb() instead: code in asm-generic/barrier.h
will then define smp_store_mb() correctly depending on
CONFIG_SMP.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/sh/include/asm/barrier.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/sh/include/asm/barrier.h b/arch/sh/include/asm/barrier.h
index bf91037..f887c64 100644
--- a/arch/sh/include/asm/barrier.h
+++ b/arch/sh/include/asm/barrier.h
@@ -32,7 +32,8 @@
 #define ctrl_barrier()	__asm__ __volatile__ ("nop;nop;nop;nop;nop;nop;nop;nop")
 #endif
 
-#define smp_store_mb(var, value) do { (void)xchg(&var, value); } while (0)
+#define __smp_store_mb(var, value) do { (void)xchg(&var, value); } while (0)
+#define smp_store_mb(var, value) __smp_store_mb(var, value)
 
 #include <asm-generic/barrier.h>
 
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 24/32] sparc: define __smp_xxx
  2015-12-31 19:05 ` Michael S. Tsirkin
  (?)
  (?)
@ 2015-12-31 19:08   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:08 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Ingo Molnar, Ralf Baechle,
	Andrey Konovalov

This defines __smp_xxx barriers for sparc,
for use by virtualization.

smp_xxx barriers are removed, as they are
defined correctly by asm-generic/barrier.h.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/sparc/include/asm/barrier_64.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/sparc/include/asm/barrier_64.h b/arch/sparc/include/asm/barrier_64.h
index 26c3f72..c9f6ee6 100644
--- a/arch/sparc/include/asm/barrier_64.h
+++ b/arch/sparc/include/asm/barrier_64.h
@@ -37,14 +37,14 @@ do {	__asm__ __volatile__("ba,pt	%%xcc, 1f\n\t" \
 #define rmb()	__asm__ __volatile__("":::"memory")
 #define wmb()	__asm__ __volatile__("":::"memory")
 
-#define smp_store_release(p, v)						\
+#define __smp_store_release(p, v)						\
 do {									\
 	compiletime_assert_atomic_type(*p);				\
 	barrier();							\
 	WRITE_ONCE(*p, v);						\
 } while (0)
 
-#define smp_load_acquire(p)						\
+#define __smp_load_acquire(p)						\
 ({									\
 	typeof(*p) ___p1 = READ_ONCE(*p);				\
 	compiletime_assert_atomic_type(*p);				\
@@ -52,8 +52,8 @@ do {									\
 	___p1;								\
 })
 
-#define smp_mb__before_atomic()	barrier()
-#define smp_mb__after_atomic()	barrier()
+#define __smp_mb__before_atomic()	barrier()
+#define __smp_mb__after_atomic()	barrier()
 
 #include <asm-generic/barrier.h>
 
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 24/32] sparc: define __smp_xxx
@ 2015-12-31 19:08   ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:08 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Ingo Molnar, Ralf Baechle,
	Andrey Konovalov

This defines __smp_xxx barriers for sparc,
for use by virtualization.

smp_xxx barriers are removed as they are
defined correctly by asm-generic/barriers.h

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/sparc/include/asm/barrier_64.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/sparc/include/asm/barrier_64.h b/arch/sparc/include/asm/barrier_64.h
index 26c3f72..c9f6ee6 100644
--- a/arch/sparc/include/asm/barrier_64.h
+++ b/arch/sparc/include/asm/barrier_64.h
@@ -37,14 +37,14 @@ do {	__asm__ __volatile__("ba,pt	%%xcc, 1f\n\t" \
 #define rmb()	__asm__ __volatile__("":::"memory")
 #define wmb()	__asm__ __volatile__("":::"memory")
 
-#define smp_store_release(p, v)						\
+#define __smp_store_release(p, v)						\
 do {									\
 	compiletime_assert_atomic_type(*p);				\
 	barrier();							\
 	WRITE_ONCE(*p, v);						\
 } while (0)
 
-#define smp_load_acquire(p)						\
+#define __smp_load_acquire(p)						\
 ({									\
 	typeof(*p) ___p1 = READ_ONCE(*p);				\
 	compiletime_assert_atomic_type(*p);				\
@@ -52,8 +52,8 @@ do {									\
 	___p1;								\
 })
 
-#define smp_mb__before_atomic()	barrier()
-#define smp_mb__after_atomic()	barrier()
+#define __smp_mb__before_atomic()	barrier()
+#define __smp_mb__after_atomic()	barrier()
 
 #include <asm-generic/barrier.h>
 
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 24/32] sparc: define __smp_xxx
@ 2015-12-31 19:08   ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:08 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Ingo Molnar, Ralf Baechle,
	Andrey Konovalov

This defines __smp_xxx barriers for sparc,
for use by virtualization.

smp_xxx barriers are removed as they are
defined correctly by asm-generic/barriers.h

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/sparc/include/asm/barrier_64.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/sparc/include/asm/barrier_64.h b/arch/sparc/include/asm/barrier_64.h
index 26c3f72..c9f6ee6 100644
--- a/arch/sparc/include/asm/barrier_64.h
+++ b/arch/sparc/include/asm/barrier_64.h
@@ -37,14 +37,14 @@ do {	__asm__ __volatile__("ba,pt	%%xcc, 1f\n\t" \
 #define rmb()	__asm__ __volatile__("":::"memory")
 #define wmb()	__asm__ __volatile__("":::"memory")
 
-#define smp_store_release(p, v)						\
+#define __smp_store_release(p, v)						\
 do {									\
 	compiletime_assert_atomic_type(*p);				\
 	barrier();							\
 	WRITE_ONCE(*p, v);						\
 } while (0)
 
-#define smp_load_acquire(p)						\
+#define __smp_load_acquire(p)						\
 ({									\
 	typeof(*p) ___p1 = READ_ONCE(*p);				\
 	compiletime_assert_atomic_type(*p);				\
@@ -52,8 +52,8 @@ do {									\
 	___p1;								\
 })
 
-#define smp_mb__before_atomic()	barrier()
-#define smp_mb__after_atomic()	barrier()
+#define __smp_mb__before_atomic()	barrier()
+#define __smp_mb__after_atomic()	barrier()
 
 #include <asm-generic/barrier.h>
 
-- 
MST

^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 24/32] sparc: define __smp_xxx
  2015-12-31 19:05 ` Michael S. Tsirkin
                   ` (65 preceding siblings ...)
  (?)
@ 2015-12-31 19:08 ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:08 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
	H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
	Arnd Bergmann, x86, xen-devel, Ingo Molnar, linux-xtensa,
	user-mode-linux-devel, Stefano Stabellini, Andrey Konovalov,
	adi-buildroot-devel, Thomas Gleixner, linux-metag,
	linux-arm-kernel, Andrew Cooper, Ralf Baechle, linuxppc-dev,
	David Miller

This defines __smp_xxx barriers for sparc,
for use by virtualization.

smp_xxx barriers are removed as they are
defined correctly by asm-generic/barriers.h

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/sparc/include/asm/barrier_64.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/sparc/include/asm/barrier_64.h b/arch/sparc/include/asm/barrier_64.h
index 26c3f72..c9f6ee6 100644
--- a/arch/sparc/include/asm/barrier_64.h
+++ b/arch/sparc/include/asm/barrier_64.h
@@ -37,14 +37,14 @@ do {	__asm__ __volatile__("ba,pt	%%xcc, 1f\n\t" \
 #define rmb()	__asm__ __volatile__("":::"memory")
 #define wmb()	__asm__ __volatile__("":::"memory")
 
-#define smp_store_release(p, v)						\
+#define __smp_store_release(p, v)						\
 do {									\
 	compiletime_assert_atomic_type(*p);				\
 	barrier();							\
 	WRITE_ONCE(*p, v);						\
 } while (0)
 
-#define smp_load_acquire(p)						\
+#define __smp_load_acquire(p)						\
 ({									\
 	typeof(*p) ___p1 = READ_ONCE(*p);				\
 	compiletime_assert_atomic_type(*p);				\
@@ -52,8 +52,8 @@ do {									\
 	___p1;								\
 })
 
-#define smp_mb__before_atomic()	barrier()
-#define smp_mb__after_atomic()	barrier()
+#define __smp_mb__before_atomic()	barrier()
+#define __smp_mb__after_atomic()	barrier()
 
 #include <asm-generic/barrier.h>
 
-- 
MST

^ permalink raw reply related	[flat|nested] 572+ messages in thread


* [PATCH v2 25/32] tile: define __smp_xxx
  2015-12-31 19:05 ` Michael S. Tsirkin
  (?)
@ 2015-12-31 19:09   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Chris Metcalf

This defines the __smp_xxx barriers for tile,
for use by virtualization.

Some smp_xxx barriers are removed, as they are
now defined correctly by asm-generic/barrier.h

Note: on 32-bit, smp_mb__after_atomic is kept, since it is
faster than the generic implementation.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/tile/include/asm/barrier.h | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/tile/include/asm/barrier.h b/arch/tile/include/asm/barrier.h
index 96a42ae..d552228 100644
--- a/arch/tile/include/asm/barrier.h
+++ b/arch/tile/include/asm/barrier.h
@@ -79,11 +79,12 @@ mb_incoherent(void)
  * But after the word is updated, the routine issues an "mf" before returning,
  * and since it's a function call, we don't even need a compiler barrier.
  */
-#define smp_mb__before_atomic()	smp_mb()
-#define smp_mb__after_atomic()	do { } while (0)
+#define __smp_mb__before_atomic()	__smp_mb()
+#define __smp_mb__after_atomic()	do { } while (0)
+#define smp_mb__after_atomic()	__smp_mb__after_atomic()
 #else /* 64 bit */
-#define smp_mb__before_atomic()	smp_mb()
-#define smp_mb__after_atomic()	smp_mb()
+#define __smp_mb__before_atomic()	__smp_mb()
+#define __smp_mb__after_atomic()	__smp_mb()
 #endif
 
 #include <asm-generic/barrier.h>
-- 
MST




* [PATCH v2 26/32] xtensa: define __smp_xxx
  2015-12-31 19:05 ` Michael S. Tsirkin
  (?)
  (?)
@ 2015-12-31 19:09   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
	Max Filippov, H. Peter Anvin, sparclinux, linux-arch, linux-s390,
	Arnd Bergmann, x86, xen-devel, Ingo Molnar, linux-xtensa,
	user-mode-linux-devel, Stefano Stabellini, adi-buildroot-devel,
	Thomas Gleixner, linux-metag, linux-arm-kernel, Chris Zankel,
	Andrew Cooper, linuxppc-dev, David Miller

This defines the __smp_xxx barriers for xtensa,
for use by virtualization.

The smp_xxx barriers are removed, as they are
now defined correctly by asm-generic/barrier.h

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/xtensa/include/asm/barrier.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/xtensa/include/asm/barrier.h b/arch/xtensa/include/asm/barrier.h
index 5b88774..956596e 100644
--- a/arch/xtensa/include/asm/barrier.h
+++ b/arch/xtensa/include/asm/barrier.h
@@ -13,8 +13,8 @@
 #define rmb() barrier()
 #define wmb() mb()
 
-#define smp_mb__before_atomic()		barrier()
-#define smp_mb__after_atomic()		barrier()
+#define __smp_mb__before_atomic()		barrier()
+#define __smp_mb__after_atomic()		barrier()
 
 #include <asm-generic/barrier.h>
 
-- 
MST




* [PATCH v2 27/32] x86: define __smp_xxx
@ 2015-12-31 19:09     ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Ingo Molnar, Borislav Petkov,
	Andy Lutomirski, Andrey Konovalov

This defines the __smp_xxx barriers for x86,
for use by virtualization.

The smp_xxx barriers are removed, as they are
now defined correctly by asm-generic/barrier.h

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/x86/include/asm/barrier.h | 31 ++++++++++++-------------------
 1 file changed, 12 insertions(+), 19 deletions(-)

diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
index cc4c2a7..a584e1c 100644
--- a/arch/x86/include/asm/barrier.h
+++ b/arch/x86/include/asm/barrier.h
@@ -31,17 +31,10 @@
 #endif
 #define dma_wmb()	barrier()
 
-#ifdef CONFIG_SMP
-#define smp_mb()	mb()
-#define smp_rmb()	dma_rmb()
-#define smp_wmb()	barrier()
-#define smp_store_mb(var, value) do { (void)xchg(&var, value); } while (0)
-#else /* !SMP */
-#define smp_mb()	barrier()
-#define smp_rmb()	barrier()
-#define smp_wmb()	barrier()
-#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); barrier(); } while (0)
-#endif /* SMP */
+#define __smp_mb()	mb()
+#define __smp_rmb()	dma_rmb()
+#define __smp_wmb()	barrier()
+#define __smp_store_mb(var, value) do { (void)xchg(&var, value); } while (0)
 
 #if defined(CONFIG_X86_PPRO_FENCE)
 
@@ -50,31 +43,31 @@
  * model and we should fall back to full barriers.
  */
 
-#define smp_store_release(p, v)						\
+#define __smp_store_release(p, v)					\
 do {									\
 	compiletime_assert_atomic_type(*p);				\
-	smp_mb();							\
+	__smp_mb();							\
 	WRITE_ONCE(*p, v);						\
 } while (0)
 
-#define smp_load_acquire(p)						\
+#define __smp_load_acquire(p)						\
 ({									\
 	typeof(*p) ___p1 = READ_ONCE(*p);				\
 	compiletime_assert_atomic_type(*p);				\
-	smp_mb();							\
+	__smp_mb();							\
 	___p1;								\
 })
 
 #else /* regular x86 TSO memory ordering */
 
-#define smp_store_release(p, v)						\
+#define __smp_store_release(p, v)					\
 do {									\
 	compiletime_assert_atomic_type(*p);				\
 	barrier();							\
 	WRITE_ONCE(*p, v);						\
 } while (0)
 
-#define smp_load_acquire(p)						\
+#define __smp_load_acquire(p)						\
 ({									\
 	typeof(*p) ___p1 = READ_ONCE(*p);				\
 	compiletime_assert_atomic_type(*p);				\
@@ -85,8 +78,8 @@ do {									\
 #endif
 
 /* Atomic operations are already serializing on x86 */
-#define smp_mb__before_atomic()	barrier()
-#define smp_mb__after_atomic()	barrier()
+#define __smp_mb__before_atomic()	barrier()
+#define __smp_mb__after_atomic()	barrier()
 
 #include <asm-generic/barrier.h>
 
-- 
MST



 
-- 
MST

^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 28/32] asm-generic: implement virt_xxx memory barriers
  2015-12-31 19:05 ` Michael S. Tsirkin
  (?)
@ 2015-12-31 19:09   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Jonathan Corbet, linux-doc

Guests running within virtual machines might be affected by SMP effects even if
the guest itself is compiled without SMP support.  This is an artifact of
interfacing with an SMP host while running an UP kernel.  Using mandatory
barriers for this use-case would be possible but is often suboptimal.

In particular, virtio uses a bunch of confusing ifdefs to work around
this, while xen just uses the mandatory barriers.

To better handle this case, low-level virt_mb() etc macros are made available.
These are implemented trivially using the low-level __smp_xxx macros;
the purpose of these wrappers is to annotate those specific cases.

These have the same effect as smp_mb() etc when SMP is enabled, but generate
identical code for SMP and non-SMP systems. For example, virtual machine guests
should use virt_mb() rather than smp_mb() when synchronizing against a
(possibly SMP) host.

Suggested-by: David Miller <davem@davemloft.net>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 include/asm-generic/barrier.h     | 11 +++++++++++
 Documentation/memory-barriers.txt | 28 +++++++++++++++++++++++-----
 2 files changed, 34 insertions(+), 5 deletions(-)

diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index 8752964..1cceca14 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -196,5 +196,16 @@ do {									\
 
 #endif
 
+/* Barriers for virtual machine guests when talking to an SMP host */
+#define virt_mb() __smp_mb()
+#define virt_rmb() __smp_rmb()
+#define virt_wmb() __smp_wmb()
+#define virt_read_barrier_depends() __smp_read_barrier_depends()
+#define virt_store_mb(var, value) __smp_store_mb(var, value)
+#define virt_mb__before_atomic() __smp_mb__before_atomic()
+#define virt_mb__after_atomic()	__smp_mb__after_atomic()
+#define virt_store_release(p, v) __smp_store_release(p, v)
+#define virt_load_acquire(p) __smp_load_acquire(p)
+
 #endif /* !__ASSEMBLY__ */
 #endif /* __ASM_GENERIC_BARRIER_H */
diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index aef9487..8f4a93a 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -1655,17 +1655,18 @@ macro is a good place to start looking.
 SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
 systems because it is assumed that a CPU will appear to be self-consistent,
 and will order overlapping accesses correctly with respect to itself.
+However, see the subsection on "Virtual Machine Guests" below.
 
 [!] Note that SMP memory barriers _must_ be used to control the ordering of
 references to shared memory on SMP systems, though the use of locking instead
 is sufficient.
 
 Mandatory barriers should not be used to control SMP effects, since mandatory
-barriers unnecessarily impose overhead on UP systems. They may, however, be
-used to control MMIO effects on accesses through relaxed memory I/O windows.
-These are required even on non-SMP systems as they affect the order in which
-memory operations appear to a device by prohibiting both the compiler and the
-CPU from reordering them.
+barriers impose unnecessary overhead on both SMP and UP systems. They may,
+however, be used to control MMIO effects on accesses through relaxed memory I/O
+windows.  These barriers are required even on non-SMP systems as they affect
+the order in which memory operations appear to a device by prohibiting both the
+compiler and the CPU from reordering them.
 
 
 There are some more advanced barrier functions:
@@ -2948,6 +2949,23 @@ The Alpha defines the Linux kernel's memory barrier model.
 
 See the subsection on "Cache Coherency" above.
 
+VIRTUAL MACHINE GUESTS
+-------------------
+
+Guests running within virtual machines might be affected by SMP effects even if
+the guest itself is compiled without SMP support.  This is an artifact of
+interfacing with an SMP host while running an UP kernel.  Using mandatory
+barriers for this use-case would be possible but is often suboptimal.
+
+To handle this case optimally, low-level virt_mb() etc macros are available.
+These have the same effect as smp_mb() etc when SMP is enabled, but generate
+identical code for SMP and non-SMP systems. For example, virtual machine guests
+should use virt_mb() rather than smp_mb() when synchronizing against a
+(possibly SMP) host.
+
+These are equivalent to smp_mb() etc counterparts in all other respects,
+in particular, they do not control MMIO effects: to control
+MMIO effects, use mandatory barriers.
 
 ======
 EXAMPLE USES
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 28/32] asm-generic: implement virt_xxx memory barriers
@ 2015-12-31 19:09   ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:09 UTC (permalink / raw)
  To: linux-arm-kernel

Guests running within virtual machines might be affected by SMP effects even if
the guest itself is compiled without SMP support.  This is an artifact of
interfacing with an SMP host while running an UP kernel.  Using mandatory
barriers for this use-case would be possible but is often suboptimal.

In particular, virtio uses a bunch of confusing ifdefs to work around
this, while xen just uses the mandatory barriers.

To better handle this case, low-level virt_mb() etc macros are made available.
These are implemented trivially using the low-level __smp_xxx macros;
the purpose of these wrappers is to annotate those specific cases.

These have the same effect as smp_mb() etc when SMP is enabled, but generate
identical code for SMP and non-SMP systems. For example, virtual machine guests
should use virt_mb() rather than smp_mb() when synchronizing against a
(possibly SMP) host.

Suggested-by: David Miller <davem@davemloft.net>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 include/asm-generic/barrier.h     | 11 +++++++++++
 Documentation/memory-barriers.txt | 28 +++++++++++++++++++++++-----
 2 files changed, 34 insertions(+), 5 deletions(-)

diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index 8752964..1cceca14 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -196,5 +196,16 @@ do {									\
 
 #endif
 
+/* Barriers for virtual machine guests when talking to an SMP host */
+#define virt_mb() __smp_mb()
+#define virt_rmb() __smp_rmb()
+#define virt_wmb() __smp_wmb()
+#define virt_read_barrier_depends() __smp_read_barrier_depends()
+#define virt_store_mb(var, value) __smp_store_mb(var, value)
+#define virt_mb__before_atomic() __smp_mb__before_atomic()
+#define virt_mb__after_atomic()	__smp_mb__after_atomic()
+#define virt_store_release(p, v) __smp_store_release(p, v)
+#define virt_load_acquire(p) __smp_load_acquire(p)
+
 #endif /* !__ASSEMBLY__ */
 #endif /* __ASM_GENERIC_BARRIER_H */
diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index aef9487..8f4a93a 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -1655,17 +1655,18 @@ macro is a good place to start looking.
 SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
 systems because it is assumed that a CPU will appear to be self-consistent,
 and will order overlapping accesses correctly with respect to itself.
+However, see the subsection on "Virtual Machine Guests" below.
 
 [!] Note that SMP memory barriers _must_ be used to control the ordering of
 references to shared memory on SMP systems, though the use of locking instead
 is sufficient.
 
 Mandatory barriers should not be used to control SMP effects, since mandatory
-barriers unnecessarily impose overhead on UP systems. They may, however, be
-used to control MMIO effects on accesses through relaxed memory I/O windows.
-These are required even on non-SMP systems as they affect the order in which
-memory operations appear to a device by prohibiting both the compiler and the
-CPU from reordering them.
+barriers impose unnecessary overhead on both SMP and UP systems. They may,
+however, be used to control MMIO effects on accesses through relaxed memory I/O
+windows.  These barriers are required even on non-SMP systems as they affect
+the order in which memory operations appear to a device by prohibiting both the
+compiler and the CPU from reordering them.
 
 
 There are some more advanced barrier functions:
@@ -2948,6 +2949,23 @@ The Alpha defines the Linux kernel's memory barrier model.
 
 See the subsection on "Cache Coherency" above.
 
+VIRTUAL MACHINE GUESTS
+-------------------
+
+Guests running within virtual machines might be affected by SMP effects even if
+the guest itself is compiled without SMP support.  This is an artifact of
+interfacing with an SMP host while running an UP kernel.  Using mandatory
+barriers for this use-case would be possible but is often suboptimal.
+
+To handle this case optimally, low-level virt_mb() etc macros are available.
+These have the same effect as smp_mb() etc when SMP is enabled, but generate
+identical code for SMP and non-SMP systems. For example, virtual machine guests
+should use virt_mb() rather than smp_mb() when synchronizing against a
+(possibly SMP) host.
+
+These are equivalent to smp_mb() etc counterparts in all other respects,
+in particular, they do not control MMIO effects: to control
+MMIO effects, use mandatory barriers.
 
 ============
 EXAMPLE USES
-- 
MST

^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 29/32] Revert "virtio_ring: Update weak barriers to use dma_wmb/rmb"
  2015-12-31 19:05 ` Michael S. Tsirkin
  (?)
@ 2015-12-31 19:09   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Alexander Duyck

This reverts commit 9e1a27ea42691429e31f158cce6fc61bc79bb2e9.

While that commit optimizes !CONFIG_SMP, it mixes
up DMA and SMP concepts, making the code hard
to figure out.

A better way to optimize this is with the new __smp_XXX
barriers.

As a first step, go back to full rmb/wmb barriers
for !SMP.
We switch to __smp_XXX barriers in the next patch.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 include/linux/virtio_ring.h | 23 +++++++++++++++++++----
 1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/include/linux/virtio_ring.h b/include/linux/virtio_ring.h
index 8e50888..67e06fe 100644
--- a/include/linux/virtio_ring.h
+++ b/include/linux/virtio_ring.h
@@ -21,20 +21,19 @@
  * actually quite cheap.
  */
 
+#ifdef CONFIG_SMP
 static inline void virtio_mb(bool weak_barriers)
 {
-#ifdef CONFIG_SMP
 	if (weak_barriers)
 		smp_mb();
 	else
-#endif
 		mb();
 }
 
 static inline void virtio_rmb(bool weak_barriers)
 {
 	if (weak_barriers)
-		dma_rmb();
+		smp_rmb();
 	else
 		rmb();
 }
@@ -42,10 +41,26 @@ static inline void virtio_rmb(bool weak_barriers)
 static inline void virtio_wmb(bool weak_barriers)
 {
 	if (weak_barriers)
-		dma_wmb();
+		smp_wmb();
 	else
 		wmb();
 }
+#else
+static inline void virtio_mb(bool weak_barriers)
+{
+	mb();
+}
+
+static inline void virtio_rmb(bool weak_barriers)
+{
+	rmb();
+}
+
+static inline void virtio_wmb(bool weak_barriers)
+{
+	wmb();
+}
+#endif
 
 struct virtio_device;
 struct virtqueue;
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 30/32] virtio_ring: update weak barriers to use __smp_XXX
  2015-12-31 19:05 ` Michael S. Tsirkin
                     ` (2 preceding siblings ...)
  (?)
@ 2015-12-31 19:09   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra,
	Alexander Duyck, virtualization, H. Peter Anvin, sparclinux,
	linux-arch, linux-s390, Arnd Bergmann, x86, xen-devel,
	Ingo Molnar, linux-xtensa, user-mode-linux-devel,
	Stefano Stabellini, adi-buildroot-devel, Thomas Gleixner,
	linux-metag, linux-arm-kernel, Andrew Cooper, linuxppc-dev,
	David Miller

virtio ring uses smp_wmb on SMP and wmb on !SMP; the reason for the
latter is that it might be talking to another kernel on the same SMP
machine.

This is exactly what __smp_XXX barriers do,
so switch to these instead of homegrown ifdef hacks.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 include/linux/virtio_ring.h | 25 ++++---------------------
 1 file changed, 4 insertions(+), 21 deletions(-)

diff --git a/include/linux/virtio_ring.h b/include/linux/virtio_ring.h
index 67e06fe..f3fa55b 100644
--- a/include/linux/virtio_ring.h
+++ b/include/linux/virtio_ring.h
@@ -12,7 +12,7 @@
  * anyone care?
  *
  * For virtio_pci on SMP, we don't need to order with respect to MMIO
- * accesses through relaxed memory I/O windows, so smp_mb() et al are
+ * accesses through relaxed memory I/O windows, so virt_mb() et al are
  * sufficient.
  *
  * For using virtio to talk to real devices (eg. other heterogeneous
@@ -21,11 +21,10 @@
  * actually quite cheap.
  */
 
-#ifdef CONFIG_SMP
 static inline void virtio_mb(bool weak_barriers)
 {
 	if (weak_barriers)
-		smp_mb();
+		virt_mb();
 	else
 		mb();
 }
@@ -33,7 +32,7 @@ static inline void virtio_mb(bool weak_barriers)
 static inline void virtio_rmb(bool weak_barriers)
 {
 	if (weak_barriers)
-		smp_rmb();
+		virt_rmb();
 	else
 		rmb();
 }
@@ -41,26 +40,10 @@ static inline void virtio_rmb(bool weak_barriers)
 static inline void virtio_wmb(bool weak_barriers)
 {
 	if (weak_barriers)
-		smp_wmb();
+		virt_wmb();
 	else
 		wmb();
 }
-#else
-static inline void virtio_mb(bool weak_barriers)
-{
-	mb();
-}
-
-static inline void virtio_rmb(bool weak_barriers)
-{
-	rmb();
-}
-
-static inline void virtio_wmb(bool weak_barriers)
-{
-	wmb();
-}
-#endif
 
 struct virtio_device;
 struct virtqueue;
-- 
MST

^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 31/32] sh: support a 2-byte smp_store_mb
@ 2015-12-31 19:09     ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Ingo Molnar

At the moment, xchg on sh only supports 4- and 1-byte values, so using it
from smp_store_mb means that attempts to store a 2-byte value through this
macro fail.

Storing a 2-byte value happens to be exactly what virtio drivers want to do.

Check the size and fall back to a slower, but safe, WRITE_ONCE() followed
by smp_mb().

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 arch/sh/include/asm/barrier.h | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/sh/include/asm/barrier.h b/arch/sh/include/asm/barrier.h
index f887c64..0cc5735 100644
--- a/arch/sh/include/asm/barrier.h
+++ b/arch/sh/include/asm/barrier.h
@@ -32,7 +32,15 @@
 #define ctrl_barrier()	__asm__ __volatile__ ("nop;nop;nop;nop;nop;nop;nop;nop")
 #endif
 
-#define __smp_store_mb(var, value) do { (void)xchg(&var, value); } while (0)
+#define __smp_store_mb(var, value) do { \
+	if (sizeof(var) != 4 && sizeof(var) != 1) { \
+		WRITE_ONCE(var, value); \
+		__smp_mb(); \
+	} else { \
+		(void)xchg(&var, value); \
+	} \
+} while (0)
+
 #define smp_store_mb(var, value) __smp_store_mb(var, value)
 
 #include <asm-generic/barrier.h>
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 32/32] virtio_ring: use virt_store_mb
  2015-12-31 19:05 ` Michael S. Tsirkin
  (?)
@ 2015-12-31 19:09   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel

We need a full barrier after writing out the event index; using
virt_store_mb there seems better than open-coding it.  As usual, we
need a wrapper to account for strong barriers.

It's tempting to use this in vhost as well; for that, we'll
need a variant of smp_store_mb that works on __user pointers.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 include/linux/virtio_ring.h  | 12 ++++++++++++
 drivers/virtio/virtio_ring.c | 15 +++++++++------
 2 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/include/linux/virtio_ring.h b/include/linux/virtio_ring.h
index f3fa55b..3a74d91 100644
--- a/include/linux/virtio_ring.h
+++ b/include/linux/virtio_ring.h
@@ -45,6 +45,18 @@ static inline void virtio_wmb(bool weak_barriers)
 		wmb();
 }
 
+static inline void virtio_store_mb(bool weak_barriers,
+				   __virtio16 *p, __virtio16 v)
+{
+	if (weak_barriers)
+		virt_store_mb(*p, v);
+	else
+	{
+		WRITE_ONCE(*p, v);
+		mb();
+	}
+}
+
 struct virtio_device;
 struct virtqueue;
 
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index ee663c4..e12e385 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -517,10 +517,10 @@ void *virtqueue_get_buf(struct virtqueue *_vq, unsigned int *len)
 	/* If we expect an interrupt for the next entry, tell host
 	 * by writing event index and flush out the write before
 	 * the read in the next get_buf call. */
-	if (!(vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT)) {
-		vring_used_event(&vq->vring) = cpu_to_virtio16(_vq->vdev, vq->last_used_idx);
-		virtio_mb(vq->weak_barriers);
-	}
+	if (!(vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT))
+		virtio_store_mb(vq->weak_barriers,
+				&vring_used_event(&vq->vring),
+				cpu_to_virtio16(_vq->vdev, vq->last_used_idx));
 
 #ifdef DEBUG
 	vq->last_add_time_valid = false;
@@ -653,8 +653,11 @@ bool virtqueue_enable_cb_delayed(struct virtqueue *_vq)
 	}
 	/* TODO: tune this threshold */
 	bufs = (u16)(vq->avail_idx_shadow - vq->last_used_idx) * 3 / 4;
-	vring_used_event(&vq->vring) = cpu_to_virtio16(_vq->vdev, vq->last_used_idx + bufs);
-	virtio_mb(vq->weak_barriers);
+
+	virtio_store_mb(vq->weak_barriers,
+			&vring_used_event(&vq->vring),
+			cpu_to_virtio16(_vq->vdev, vq->last_used_idx + bufs));
+
 	if (unlikely((u16)(virtio16_to_cpu(_vq->vdev, vq->vring.used->idx) - vq->last_used_idx) > bufs)) {
 		END_USE(vq);
 		return false;
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 33/34] xenbus: use virt_xxx barriers
  2015-12-31 19:05 ` Michael S. Tsirkin
                       ` (3 preceding siblings ...)
  (?)
@ 2015-12-31 19:10     ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:10 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Konrad Rzeszutek Wilk,
	Boris Ostrovsky

drivers/xen/xenbus/xenbus_comms.c uses full memory barriers to
communicate with the other side.

For guests compiled with CONFIG_SMP, smp_wmb() and smp_mb() would be
sufficient, so mb() and wmb() here are only needed if a non-SMP guest
runs on an SMP host.

Switch to the virt_xxx barriers, which serve this exact purpose.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 drivers/xen/xenbus/xenbus_comms.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/xen/xenbus/xenbus_comms.c b/drivers/xen/xenbus/xenbus_comms.c
index fdb0f33..ecdecce 100644
--- a/drivers/xen/xenbus/xenbus_comms.c
+++ b/drivers/xen/xenbus/xenbus_comms.c
@@ -123,14 +123,14 @@ int xb_write(const void *data, unsigned len)
 			avail = len;
 
 		/* Must write data /after/ reading the consumer index. */
-		mb();
+		virt_mb();
 
 		memcpy(dst, data, avail);
 		data += avail;
 		len -= avail;
 
 		/* Other side must not see new producer until data is there. */
-		wmb();
+		virt_wmb();
 		intf->req_prod += avail;
 
 		/* Implies mb(): other side will see the updated producer. */
@@ -180,14 +180,14 @@ int xb_read(void *data, unsigned len)
 			avail = len;
 
 		/* Must read data /after/ reading the producer index. */
-		rmb();
+		virt_rmb();
 
 		memcpy(data, src, avail);
 		data += avail;
 		len -= avail;
 
 		/* Other side must not see free space until we've copied out */
-		mb();
+		virt_mb();
 		intf->rsp_cons += avail;
 
 		pr_debug("Finished read of %i bytes (%i to go)\n", avail, len);
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 34/34] xen/io: use virt_xxx barriers
  2015-12-31 19:05 ` Michael S. Tsirkin
@ 2015-12-31 19:10   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:10 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Konrad Rzeszutek Wilk,
	Boris Ostrovsky

include/xen/interface/io/ring.h uses full memory barriers to
communicate with the other side.

For guests compiled with CONFIG_SMP, smp_wmb() and smp_mb() would be
sufficient, so the full mb() and wmb() here are only needed when a
non-SMP guest runs on an SMP host.

Switch to the virt_xxx barriers, which serve exactly this purpose.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 include/xen/interface/io/ring.h | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/include/xen/interface/io/ring.h b/include/xen/interface/io/ring.h
index 7dc685b..21f4fbd 100644
--- a/include/xen/interface/io/ring.h
+++ b/include/xen/interface/io/ring.h
@@ -208,12 +208,12 @@ struct __name##_back_ring {						\
 
 
 #define RING_PUSH_REQUESTS(_r) do {					\
-    wmb(); /* back sees requests /before/ updated producer index */	\
+    virt_wmb(); /* back sees requests /before/ updated producer index */	\
     (_r)->sring->req_prod = (_r)->req_prod_pvt;				\
 } while (0)
 
 #define RING_PUSH_RESPONSES(_r) do {					\
-    wmb(); /* front sees responses /before/ updated producer index */	\
+    virt_wmb(); /* front sees responses /before/ updated producer index */	\
     (_r)->sring->rsp_prod = (_r)->rsp_prod_pvt;				\
 } while (0)
 
@@ -250,9 +250,9 @@ struct __name##_back_ring {						\
 #define RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(_r, _notify) do {		\
     RING_IDX __old = (_r)->sring->req_prod;				\
     RING_IDX __new = (_r)->req_prod_pvt;				\
-    wmb(); /* back sees requests /before/ updated producer index */	\
+    virt_wmb(); /* back sees requests /before/ updated producer index */	\
     (_r)->sring->req_prod = __new;					\
-    mb(); /* back sees new requests /before/ we check req_event */	\
+    virt_mb(); /* back sees new requests /before/ we check req_event */	\
     (_notify) = ((RING_IDX)(__new - (_r)->sring->req_event) <		\
 		 (RING_IDX)(__new - __old));				\
 } while (0)
@@ -260,9 +260,9 @@ struct __name##_back_ring {						\
 #define RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(_r, _notify) do {		\
     RING_IDX __old = (_r)->sring->rsp_prod;				\
     RING_IDX __new = (_r)->rsp_prod_pvt;				\
-    wmb(); /* front sees responses /before/ updated producer index */	\
+    virt_wmb(); /* front sees responses /before/ updated producer index */	\
     (_r)->sring->rsp_prod = __new;					\
-    mb(); /* front sees new responses /before/ we check rsp_event */	\
+    virt_mb(); /* front sees new responses /before/ we check rsp_event */	\
     (_notify) = ((RING_IDX)(__new - (_r)->sring->rsp_event) <		\
 		 (RING_IDX)(__new - __old));				\
 } while (0)
@@ -271,7 +271,7 @@ struct __name##_back_ring {						\
     (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
     if (_work_to_do) break;						\
     (_r)->sring->req_event = (_r)->req_cons + 1;			\
-    mb();								\
+    virt_mb();								\
     (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
 } while (0)
 
@@ -279,7 +279,7 @@ struct __name##_back_ring {						\
     (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
     if (_work_to_do) break;						\
     (_r)->sring->rsp_event = (_r)->rsp_cons + 1;			\
-    mb();								\
+    virt_mb();								\
     (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
 } while (0)
 
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 34/34] xen/io: use virt_xxx barriers
@ 2015-12-31 19:10   ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:10 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Konrad Rzeszutek Wilk,
	Boris Ostrovsky, David Vrabel

include/xen/interface/io/ring.h uses
full memory barriers to communicate with the other side.

For guests compiled with CONFIG_SMP, smp_wmb and smp_mb
would be sufficient, so mb() and wmb() here are only needed if
a non-SMP guest runs on an SMP host.

Switch to virt_xxx barriers which serve this exact purpose.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 include/xen/interface/io/ring.h | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/include/xen/interface/io/ring.h b/include/xen/interface/io/ring.h
index 7dc685b..21f4fbd 100644
--- a/include/xen/interface/io/ring.h
+++ b/include/xen/interface/io/ring.h
@@ -208,12 +208,12 @@ struct __name##_back_ring {						\
 
 
 #define RING_PUSH_REQUESTS(_r) do {					\
-    wmb(); /* back sees requests /before/ updated producer index */	\
+    virt_wmb(); /* back sees requests /before/ updated producer index */	\
     (_r)->sring->req_prod = (_r)->req_prod_pvt;				\
 } while (0)
 
 #define RING_PUSH_RESPONSES(_r) do {					\
-    wmb(); /* front sees responses /before/ updated producer index */	\
+    virt_wmb(); /* front sees responses /before/ updated producer index */	\
     (_r)->sring->rsp_prod = (_r)->rsp_prod_pvt;				\
 } while (0)
 
@@ -250,9 +250,9 @@ struct __name##_back_ring {						\
 #define RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(_r, _notify) do {		\
     RING_IDX __old = (_r)->sring->req_prod;				\
     RING_IDX __new = (_r)->req_prod_pvt;				\
-    wmb(); /* back sees requests /before/ updated producer index */	\
+    virt_wmb(); /* back sees requests /before/ updated producer index */	\
     (_r)->sring->req_prod = __new;					\
-    mb(); /* back sees new requests /before/ we check req_event */	\
+    virt_mb(); /* back sees new requests /before/ we check req_event */	\
     (_notify) = ((RING_IDX)(__new - (_r)->sring->req_event) <		\
 		 (RING_IDX)(__new - __old));				\
 } while (0)
@@ -260,9 +260,9 @@ struct __name##_back_ring {						\
 #define RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(_r, _notify) do {		\
     RING_IDX __old = (_r)->sring->rsp_prod;				\
     RING_IDX __new = (_r)->rsp_prod_pvt;				\
-    wmb(); /* front sees responses /before/ updated producer index */	\
+    virt_wmb(); /* front sees responses /before/ updated producer index */	\
     (_r)->sring->rsp_prod = __new;					\
-    mb(); /* front sees new responses /before/ we check rsp_event */	\
+    virt_mb(); /* front sees new responses /before/ we check rsp_event */	\
     (_notify) = ((RING_IDX)(__new - (_r)->sring->rsp_event) <		\
 		 (RING_IDX)(__new - __old));				\
 } while (0)
@@ -271,7 +271,7 @@ struct __name##_back_ring {						\
     (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
     if (_work_to_do) break;						\
     (_r)->sring->req_event = (_r)->req_cons + 1;			\
-    mb();								\
+    virt_mb();								\
     (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
 } while (0)
 
@@ -279,7 +279,7 @@ struct __name##_back_ring {						\
     (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
     if (_work_to_do) break;						\
     (_r)->sring->rsp_event = (_r)->rsp_cons + 1;			\
-    mb();								\
+    virt_mb();								\
     (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
 } while (0)
 
-- 
MST


^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 34/34] xen/io: use virt_xxx barriers
@ 2015-12-31 19:10   ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:10 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Konrad Rzeszutek Wilk,
	Boris Ostrovsky

include/xen/interface/io/ring.h uses
full memory barriers to communicate with the other side.

For guests compiled with CONFIG_SMP, smp_wmb and smp_mb
would be sufficient, so mb() and wmb() here are only needed if
a non-SMP guest runs on an SMP host.

Switch to virt_xxx barriers which serve this exact purpose.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 include/xen/interface/io/ring.h | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/include/xen/interface/io/ring.h b/include/xen/interface/io/ring.h
index 7dc685b..21f4fbd 100644
--- a/include/xen/interface/io/ring.h
+++ b/include/xen/interface/io/ring.h
@@ -208,12 +208,12 @@ struct __name##_back_ring {						\
 
 
 #define RING_PUSH_REQUESTS(_r) do {					\
-    wmb(); /* back sees requests /before/ updated producer index */	\
+    virt_wmb(); /* back sees requests /before/ updated producer index */	\
     (_r)->sring->req_prod = (_r)->req_prod_pvt;				\
 } while (0)
 
 #define RING_PUSH_RESPONSES(_r) do {					\
-    wmb(); /* front sees responses /before/ updated producer index */	\
+    virt_wmb(); /* front sees responses /before/ updated producer index */	\
     (_r)->sring->rsp_prod = (_r)->rsp_prod_pvt;				\
 } while (0)
 
@@ -250,9 +250,9 @@ struct __name##_back_ring {						\
 #define RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(_r, _notify) do {		\
     RING_IDX __old = (_r)->sring->req_prod;				\
     RING_IDX __new = (_r)->req_prod_pvt;				\
-    wmb(); /* back sees requests /before/ updated producer index */	\
+    virt_wmb(); /* back sees requests /before/ updated producer index */	\
     (_r)->sring->req_prod = __new;					\
-    mb(); /* back sees new requests /before/ we check req_event */	\
+    virt_mb(); /* back sees new requests /before/ we check req_event */	\
     (_notify) = ((RING_IDX)(__new - (_r)->sring->req_event) <		\
 		 (RING_IDX)(__new - __old));				\
 } while (0)
@@ -260,9 +260,9 @@ struct __name##_back_ring {						\
 #define RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(_r, _notify) do {		\
     RING_IDX __old = (_r)->sring->rsp_prod;				\
     RING_IDX __new = (_r)->rsp_prod_pvt;				\
-    wmb(); /* front sees responses /before/ updated producer index */	\
+    virt_wmb(); /* front sees responses /before/ updated producer index */	\
     (_r)->sring->rsp_prod = __new;					\
-    mb(); /* front sees new responses /before/ we check rsp_event */	\
+    virt_mb(); /* front sees new responses /before/ we check rsp_event */	\
     (_notify) = ((RING_IDX)(__new - (_r)->sring->rsp_event) <		\
 		 (RING_IDX)(__new - __old));				\
 } while (0)
@@ -271,7 +271,7 @@ struct __name##_back_ring {						\
     (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
     if (_work_to_do) break;						\
     (_r)->sring->req_event = (_r)->req_cons + 1;			\
-    mb();								\
+    virt_mb();								\
     (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
 } while (0)
 
@@ -279,7 +279,7 @@ struct __name##_back_ring {						\
     (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
     if (_work_to_do) break;						\
     (_r)->sring->rsp_event = (_r)->rsp_cons + 1;			\
-    mb();								\
+    virt_mb();								\
     (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
 } while (0)
 
-- 
MST

^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 34/34] xen/io: use virt_xxx barriers
  2015-12-31 19:05 ` Michael S. Tsirkin
                   ` (92 preceding siblings ...)
  (?)
@ 2015-12-31 19:10 ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:10 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
	H. Peter Anvin, sparclinux, Boris Ostrovsky, linux-arch,
	linux-s390, Arnd Bergmann, x86, xen-devel, Ingo Molnar,
	linux-xtensa, user-mode-linux-devel, Stefano Stabellini,
	adi-buildroot-devel, Thomas Gleixner, linux-metag,
	linux-arm-kernel, Konrad Rzeszutek Wilk, Andrew Cooper,
	David Vrabel, linuxppc-dev

include/xen/interface/io/ring.h uses
full memory barriers to communicate with the other side.

For guests compiled with CONFIG_SMP, smp_wmb and smp_mb
would be sufficient, so mb() and wmb() here are only needed if
a non-SMP guest runs on an SMP host.

Switch to virt_xxx barriers which serve this exact purpose.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 include/xen/interface/io/ring.h | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/include/xen/interface/io/ring.h b/include/xen/interface/io/ring.h
index 7dc685b..21f4fbd 100644
--- a/include/xen/interface/io/ring.h
+++ b/include/xen/interface/io/ring.h
@@ -208,12 +208,12 @@ struct __name##_back_ring {						\
 
 
 #define RING_PUSH_REQUESTS(_r) do {					\
-    wmb(); /* back sees requests /before/ updated producer index */	\
+    virt_wmb(); /* back sees requests /before/ updated producer index */	\
     (_r)->sring->req_prod = (_r)->req_prod_pvt;				\
 } while (0)
 
 #define RING_PUSH_RESPONSES(_r) do {					\
-    wmb(); /* front sees responses /before/ updated producer index */	\
+    virt_wmb(); /* front sees responses /before/ updated producer index */	\
     (_r)->sring->rsp_prod = (_r)->rsp_prod_pvt;				\
 } while (0)
 
@@ -250,9 +250,9 @@ struct __name##_back_ring {						\
 #define RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(_r, _notify) do {		\
     RING_IDX __old = (_r)->sring->req_prod;				\
     RING_IDX __new = (_r)->req_prod_pvt;				\
-    wmb(); /* back sees requests /before/ updated producer index */	\
+    virt_wmb(); /* back sees requests /before/ updated producer index */	\
     (_r)->sring->req_prod = __new;					\
-    mb(); /* back sees new requests /before/ we check req_event */	\
+    virt_mb(); /* back sees new requests /before/ we check req_event */	\
     (_notify) = ((RING_IDX)(__new - (_r)->sring->req_event) <		\
 		 (RING_IDX)(__new - __old));				\
 } while (0)
@@ -260,9 +260,9 @@ struct __name##_back_ring {						\
 #define RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(_r, _notify) do {		\
     RING_IDX __old = (_r)->sring->rsp_prod;				\
     RING_IDX __new = (_r)->rsp_prod_pvt;				\
-    wmb(); /* front sees responses /before/ updated producer index */	\
+    virt_wmb(); /* front sees responses /before/ updated producer index */	\
     (_r)->sring->rsp_prod = __new;					\
-    mb(); /* front sees new responses /before/ we check rsp_event */	\
+    virt_mb(); /* front sees new responses /before/ we check rsp_event */	\
     (_notify) = ((RING_IDX)(__new - (_r)->sring->rsp_event) <		\
 		 (RING_IDX)(__new - __old));				\
 } while (0)
@@ -271,7 +271,7 @@ struct __name##_back_ring {						\
     (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
     if (_work_to_do) break;						\
     (_r)->sring->req_event = (_r)->req_cons + 1;			\
-    mb();								\
+    virt_mb();								\
     (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
 } while (0)
 
@@ -279,7 +279,7 @@ struct __name##_back_ring {						\
     (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
     if (_work_to_do) break;						\
     (_r)->sring->rsp_event = (_r)->rsp_cons + 1;			\
-    mb();								\
+    virt_mb();								\
     (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
 } while (0)
 
-- 
MST

^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 34/34] xen/io: use virt_xxx barriers
@ 2015-12-31 19:10   ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:10 UTC (permalink / raw)
  To: linux-arm-kernel

include/xen/interface/io/ring.h uses
full memory barriers to communicate with the other side.

For guests compiled with CONFIG_SMP, smp_wmb and smp_mb
would be sufficient, so mb() and wmb() here are only needed if
a non-SMP guest runs on an SMP host.

Switch to virt_xxx barriers which serve this exact purpose.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 include/xen/interface/io/ring.h | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/include/xen/interface/io/ring.h b/include/xen/interface/io/ring.h
index 7dc685b..21f4fbd 100644
--- a/include/xen/interface/io/ring.h
+++ b/include/xen/interface/io/ring.h
@@ -208,12 +208,12 @@ struct __name##_back_ring {						\
 
 
 #define RING_PUSH_REQUESTS(_r) do {					\
-    wmb(); /* back sees requests /before/ updated producer index */	\
+    virt_wmb(); /* back sees requests /before/ updated producer index */	\
     (_r)->sring->req_prod = (_r)->req_prod_pvt;				\
 } while (0)
 
 #define RING_PUSH_RESPONSES(_r) do {					\
-    wmb(); /* front sees responses /before/ updated producer index */	\
+    virt_wmb(); /* front sees responses /before/ updated producer index */	\
     (_r)->sring->rsp_prod = (_r)->rsp_prod_pvt;				\
 } while (0)
 
@@ -250,9 +250,9 @@ struct __name##_back_ring {						\
 #define RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(_r, _notify) do {		\
     RING_IDX __old = (_r)->sring->req_prod;				\
     RING_IDX __new = (_r)->req_prod_pvt;				\
-    wmb(); /* back sees requests /before/ updated producer index */	\
+    virt_wmb(); /* back sees requests /before/ updated producer index */	\
     (_r)->sring->req_prod = __new;					\
-    mb(); /* back sees new requests /before/ we check req_event */	\
+    virt_mb(); /* back sees new requests /before/ we check req_event */	\
     (_notify) = ((RING_IDX)(__new - (_r)->sring->req_event) <		\
 		 (RING_IDX)(__new - __old));				\
 } while (0)
@@ -260,9 +260,9 @@ struct __name##_back_ring {						\
 #define RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(_r, _notify) do {		\
     RING_IDX __old = (_r)->sring->rsp_prod;				\
     RING_IDX __new = (_r)->rsp_prod_pvt;				\
-    wmb(); /* front sees responses /before/ updated producer index */	\
+    virt_wmb(); /* front sees responses /before/ updated producer index */	\
     (_r)->sring->rsp_prod = __new;					\
-    mb(); /* front sees new responses /before/ we check rsp_event */	\
+    virt_mb(); /* front sees new responses /before/ we check rsp_event */	\
     (_notify) = ((RING_IDX)(__new - (_r)->sring->rsp_event) <		\
 		 (RING_IDX)(__new - __old));				\
 } while (0)
@@ -271,7 +271,7 @@ struct __name##_back_ring {						\
     (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
     if (_work_to_do) break;						\
     (_r)->sring->req_event = (_r)->req_cons + 1;			\
-    mb();								\
+    virt_mb();								\
     (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
 } while (0)
 
@@ -279,7 +279,7 @@ struct __name##_back_ring {						\
     (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
     if (_work_to_do) break;						\
     (_r)->sring->rsp_event = (_r)->rsp_cons + 1;			\
-    mb();								\
+    virt_mb();								\
     (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
 } while (0)
 
-- 
MST

^ permalink raw reply related	[flat|nested] 572+ messages in thread

* [PATCH v2 34/34] xen/io: use virt_xxx barriers
  2015-12-31 19:05 ` Michael S. Tsirkin
                   ` (90 preceding siblings ...)
  (?)
@ 2015-12-31 19:10 ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2015-12-31 19:10 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
	H. Peter Anvin, sparclinux, Boris Ostrovsky, linux-arch,
	linux-s390, Arnd Bergmann, x86, xen-devel, Ingo Molnar,
	linux-xtensa, user-mode-linux-devel, Stefano Stabellini,
	adi-buildroot-devel, Thomas Gleixner, linux-metag,
	linux-arm-kernel, Andrew Cooper, David Vrabel, linuxppc-dev,
	David Miller

include/xen/interface/io/ring.h uses
full memory barriers to communicate with the other side.

For guests compiled with CONFIG_SMP, smp_wmb and smp_mb
would be sufficient, so mb() and wmb() here are only needed if
a non-SMP guest runs on an SMP host.

Switch to virt_xxx barriers which serve this exact purpose.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 include/xen/interface/io/ring.h | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/include/xen/interface/io/ring.h b/include/xen/interface/io/ring.h
index 7dc685b..21f4fbd 100644
--- a/include/xen/interface/io/ring.h
+++ b/include/xen/interface/io/ring.h
@@ -208,12 +208,12 @@ struct __name##_back_ring {						\
 
 
 #define RING_PUSH_REQUESTS(_r) do {					\
-    wmb(); /* back sees requests /before/ updated producer index */	\
+    virt_wmb(); /* back sees requests /before/ updated producer index */	\
     (_r)->sring->req_prod = (_r)->req_prod_pvt;				\
 } while (0)
 
 #define RING_PUSH_RESPONSES(_r) do {					\
-    wmb(); /* front sees responses /before/ updated producer index */	\
+    virt_wmb(); /* front sees responses /before/ updated producer index */	\
     (_r)->sring->rsp_prod = (_r)->rsp_prod_pvt;				\
 } while (0)
 
@@ -250,9 +250,9 @@ struct __name##_back_ring {						\
 #define RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(_r, _notify) do {		\
     RING_IDX __old = (_r)->sring->req_prod;				\
     RING_IDX __new = (_r)->req_prod_pvt;				\
-    wmb(); /* back sees requests /before/ updated producer index */	\
+    virt_wmb(); /* back sees requests /before/ updated producer index */	\
     (_r)->sring->req_prod = __new;					\
-    mb(); /* back sees new requests /before/ we check req_event */	\
+    virt_mb(); /* back sees new requests /before/ we check req_event */	\
     (_notify) = ((RING_IDX)(__new - (_r)->sring->req_event) <		\
 		 (RING_IDX)(__new - __old));				\
 } while (0)
@@ -260,9 +260,9 @@ struct __name##_back_ring {						\
 #define RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(_r, _notify) do {		\
     RING_IDX __old = (_r)->sring->rsp_prod;				\
     RING_IDX __new = (_r)->rsp_prod_pvt;				\
-    wmb(); /* front sees responses /before/ updated producer index */	\
+    virt_wmb(); /* front sees responses /before/ updated producer index */	\
     (_r)->sring->rsp_prod = __new;					\
-    mb(); /* front sees new responses /before/ we check rsp_event */	\
+    virt_mb(); /* front sees new responses /before/ we check rsp_event */	\
     (_notify) = ((RING_IDX)(__new - (_r)->sring->rsp_event) <		\
 		 (RING_IDX)(__new - __old));				\
 } while (0)
@@ -271,7 +271,7 @@ struct __name##_back_ring {						\
     (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
     if (_work_to_do) break;						\
     (_r)->sring->req_event = (_r)->req_cons + 1;			\
-    mb();								\
+    virt_mb();								\
     (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
 } while (0)
 
@@ -279,7 +279,7 @@ struct __name##_back_ring {						\
     (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
     if (_work_to_do) break;						\
     (_r)->sring->rsp_event = (_r)->rsp_cons + 1;			\
-    mb();								\
+    virt_mb();								\
     (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
 } while (0)
 
-- 
MST

^ permalink raw reply related	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 07/32] sparc: reuse asm-generic/barrier.h
  2015-12-31 19:06   ` Michael S. Tsirkin
  (?)
@ 2015-12-31 19:43     ` David Miller
  -1 siblings, 0 replies; 572+ messages in thread
From: David Miller @ 2015-12-31 19:43 UTC (permalink / raw)
  To: mst
  Cc: linux-kernel, peterz, arnd, linux-arch, andrew.cooper3,
	virtualization, stefano.stabellini, tglx, mingo, hpa, linux-ia64,
	linuxppc-dev, linux-s390, sparclinux, linux-arm-kernel,
	linux-metag, linux-mips, x86, user-mode-linux-devel,
	adi-buildroot-devel, linux-sh, linux-xtensa, xen-devel, mingo,
	ralf, andreyknvl

From: "Michael S. Tsirkin" <mst@redhat.com>
Date: Thu, 31 Dec 2015 21:06:38 +0200

> On sparc 64 bit dma_rmb, dma_wmb, smp_store_mb, smp_mb, smp_rmb,
> smp_wmb, read_barrier_depends and smp_read_barrier_depends match the
> asm-generic variants exactly. Drop the local definitions and pull in
> asm-generic/barrier.h instead.
> 
> nop uses __asm__ __volatile but is otherwise identical to
> the generic version, drop that as well.
> 
> This is in preparation for refactoring this code area.
> 
> Note: nop() was in processor.h and not in barrier.h as on other
> architectures. Nothing seems to depend on it being there though.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Acked-by: Arnd Bergmann <arnd@arndb.de>

Acked-by: David S. Miller <davem@davemloft.net>


* Re: [PATCH v2 24/32] sparc: define __smp_xxx
  2015-12-31 19:08   ` Michael S. Tsirkin
  (?)
  (?)
@ 2015-12-31 19:44     ` David Miller
  -1 siblings, 0 replies; 572+ messages in thread
From: David Miller @ 2015-12-31 19:44 UTC (permalink / raw)
  To: mst
  Cc: linux-mips, linux-ia64, linux-sh, peterz, virtualization, hpa,
	sparclinux, mingo, linux-arch, linux-s390, arnd, x86, xen-devel,
	mingo, linux-xtensa, user-mode-linux-devel, stefano.stabellini,
	andreyknvl, adi-buildroot-devel, tglx, linux-metag,
	linux-arm-kernel, andrew.cooper3, linux-kernel, ralf,
	linuxppc-dev

From: "Michael S. Tsirkin" <mst@redhat.com>
Date: Thu, 31 Dec 2015 21:08:53 +0200

> This defines __smp_xxx barriers for sparc,
> for use by virtualization.
> 
> smp_xxx barriers are removed as they are
> defined correctly by asm-generic/barrier.h
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Acked-by: Arnd Bergmann <arnd@arndb.de>

Acked-by: David S. Miller <davem@davemloft.net>


* [PATCH v2 30/32] virtio_ring: update weak barriers to use __smp_xxx
@ 2015-12-31 19:09   ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-01  9:39 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra,
	Alexander Duyck, virtualization, H. Peter Anvin, sparclinux,
	linux-arch, linux-s390, Arnd Bergmann, x86, xen-devel,
	Ingo Molnar, linux-xtensa, user-mode-linux-devel,
	Stefano Stabellini, adi-buildroot-devel, Thomas Gleixner,
	linux-metag, linux-arm-kernel, Andrew Cooper, linuxppc-dev,
	David Miller

virtio ring uses smp_wmb on SMP and wmb on !SMP,
the reason for the latter being that it might be
talking to another kernel on the same SMP machine.

This is exactly what __smp_XXX barriers do,
so switch to these instead of homegrown ifdef hacks.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 include/linux/virtio_ring.h | 25 ++++---------------------
 1 file changed, 4 insertions(+), 21 deletions(-)

diff --git a/include/linux/virtio_ring.h b/include/linux/virtio_ring.h
index 67e06fe..f3fa55b 100644
--- a/include/linux/virtio_ring.h
+++ b/include/linux/virtio_ring.h
@@ -12,7 +12,7 @@
  * anyone care?
  *
  * For virtio_pci on SMP, we don't need to order with respect to MMIO
- * accesses through relaxed memory I/O windows, so smp_mb() et al are
+ * accesses through relaxed memory I/O windows, so virt_mb() et al are
  * sufficient.
  *
  * For using virtio to talk to real devices (eg. other heterogeneous
@@ -21,11 +21,10 @@
  * actually quite cheap.
  */
 
-#ifdef CONFIG_SMP
 static inline void virtio_mb(bool weak_barriers)
 {
 	if (weak_barriers)
-		smp_mb();
+		virt_mb();
 	else
 		mb();
 }
@@ -33,7 +32,7 @@ static inline void virtio_mb(bool weak_barriers)
 static inline void virtio_rmb(bool weak_barriers)
 {
 	if (weak_barriers)
-		smp_rmb();
+		virt_rmb();
 	else
 		rmb();
 }
@@ -41,26 +40,10 @@ static inline void virtio_rmb(bool weak_barriers)
 static inline void virtio_wmb(bool weak_barriers)
 {
 	if (weak_barriers)
-		smp_wmb();
+		virt_wmb();
 	else
 		wmb();
 }
-#else
-static inline void virtio_mb(bool weak_barriers)
-{
-	mb();
-}
-
-static inline void virtio_rmb(bool weak_barriers)
-{
-	rmb();
-}
-
-static inline void virtio_wmb(bool weak_barriers)
-{
-	wmb();
-}
-#endif
 
 struct virtio_device;
 struct virtqueue;
-- 
MST


* Re: [PATCH v2 30/32] virtio_ring: update weak barriers to use __smp_xxx
  2015-12-31 19:09   ` [PATCH v2 30/32] virtio_ring: update weak barriers to use __smp_XXX Michael S. Tsirkin
  (?)
@ 2016-01-01 10:21     ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-01 10:21 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Alexander Duyck

On Fri, Jan 01, 2016 at 11:39:40AM +0200, Michael S. Tsirkin wrote:
> virtio ring uses smp_wmb on SMP and wmb on !SMP,
> the reason for the latter being that it might be
> talking to another kernel on the same SMP machine.
> 
> This is exactly what __smp_XXX barriers do,
> so switch to these instead of homegrown ifdef hacks.
> 
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Alexander Duyck <alexander.duyck@gmail.com>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

The subject and commit log should say
virt_xxx and not __smp_xxx - I fixed this up in
my tree.

> ---
>  include/linux/virtio_ring.h | 25 ++++---------------------
>  1 file changed, 4 insertions(+), 21 deletions(-)
> 
> diff --git a/include/linux/virtio_ring.h b/include/linux/virtio_ring.h
> index 67e06fe..f3fa55b 100644
> --- a/include/linux/virtio_ring.h
> +++ b/include/linux/virtio_ring.h
> @@ -12,7 +12,7 @@
>   * anyone care?
>   *
>   * For virtio_pci on SMP, we don't need to order with respect to MMIO
> - * accesses through relaxed memory I/O windows, so smp_mb() et al are
> + * accesses through relaxed memory I/O windows, so virt_mb() et al are
>   * sufficient.
>   *
>   * For using virtio to talk to real devices (eg. other heterogeneous
> @@ -21,11 +21,10 @@
>   * actually quite cheap.
>   */
>  
> -#ifdef CONFIG_SMP
>  static inline void virtio_mb(bool weak_barriers)
>  {
>  	if (weak_barriers)
> -		smp_mb();
> +		virt_mb();
>  	else
>  		mb();
>  }
> @@ -33,7 +32,7 @@ static inline void virtio_mb(bool weak_barriers)
>  static inline void virtio_rmb(bool weak_barriers)
>  {
>  	if (weak_barriers)
> -		smp_rmb();
> +		virt_rmb();
>  	else
>  		rmb();
>  }
> @@ -41,26 +40,10 @@ static inline void virtio_rmb(bool weak_barriers)
>  static inline void virtio_wmb(bool weak_barriers)
>  {
>  	if (weak_barriers)
> -		smp_wmb();
> +		virt_wmb();
>  	else
>  		wmb();
>  }
> -#else
> -static inline void virtio_mb(bool weak_barriers)
> -{
> -	mb();
> -}
> -
> -static inline void virtio_rmb(bool weak_barriers)
> -{
> -	rmb();
> -}
> -
> -static inline void virtio_wmb(bool weak_barriers)
> -{
> -	wmb();
> -}
> -#endif
>  
>  struct virtio_device;
>  struct virtqueue;
> -- 
> MST

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 32/32] virtio_ring: use virt_store_mb
  2015-12-31 19:09   ` Michael S. Tsirkin
  (?)
  (?)
@ 2016-01-01 17:23     ` Sergei Shtylyov
  -1 siblings, 0 replies; 572+ messages in thread
From: Sergei Shtylyov @ 2016-01-01 17:23 UTC (permalink / raw)
  To: Michael S. Tsirkin, linux-kernel
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
	H. Peter Anvin, sparclinux, linux-arch, linux-s390,
	Arnd Bergmann, x86, xen-devel, Ingo Molnar, linux-xtensa,
	user-mode-linux-devel, Stefano Stabellini, adi-buildroot-devel,
	Thomas Gleixner, linux-metag, linux-arm-kernel, Andrew Cooper,
	linuxppc-dev, David Miller

Hello.

On 12/31/2015 10:09 PM, Michael S. Tsirkin wrote:

> We need a full barrier after writing out the event index; using
> virt_store_mb there seems better than open-coding it.  As usual, we need a
> wrapper to account for strong barriers.
>
> It's tempting to use this in vhost as well; for that, we'll
> need a variant of smp_store_mb that works on __user pointers.
>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> ---
>   include/linux/virtio_ring.h  | 12 ++++++++++++
>   drivers/virtio/virtio_ring.c | 15 +++++++++------
>   2 files changed, 21 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/virtio_ring.h b/include/linux/virtio_ring.h
> index f3fa55b..3a74d91 100644
> --- a/include/linux/virtio_ring.h
> +++ b/include/linux/virtio_ring.h
> @@ -45,6 +45,18 @@ static inline void virtio_wmb(bool weak_barriers)
>   		wmb();
>   }
>
> +static inline void virtio_store_mb(bool weak_barriers,
> +				   __virtio16 *p, __virtio16 v)
> +{
> +	if (weak_barriers)
> +		virt_store_mb(*p, v);
> +	else
> +	{

    The kernel coding style dictates:

	if (weak_barriers) {
		virt_store_mb(*p, v);
	} else {

> +		WRITE_ONCE(*p, v);
> +		mb();
> +	}
> +}
> +
[...]

MBR, Sergei


^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 08/32] arm: reuse asm-generic/barrier.h
  2015-12-31 19:06   ` Michael S. Tsirkin
  (?)
  (?)
@ 2016-01-02 11:20     ` Russell King - ARM Linux
  -1 siblings, 0 replies; 572+ messages in thread
From: Russell King - ARM Linux @ 2016-01-02 11:20 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
	H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
	Arnd Bergmann, x86, Tony Lindgren, xen-devel, Ingo Molnar,
	linux-xtensa, user-mode-linux-devel, Stefano Stabellini,
	Andrey Konovalov, adi-buildroot-devel, Thomas Gleixner,
	linux-metag, linux-arm-kernel, Andrew Cooper, linux-kernel,
	linuxppc-dev

On Thu, Dec 31, 2015 at 09:06:46PM +0200, Michael S. Tsirkin wrote:
> On arm smp_store_mb, read_barrier_depends, smp_read_barrier_depends,
> smp_store_release, smp_load_acquire, smp_mb__before_atomic and
> smp_mb__after_atomic match the asm-generic variants exactly. Drop the
> local definitions and pull in asm-generic/barrier.h instead.
> 
> This is in preparation for refactoring this code area.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Acked-by: Arnd Bergmann <arnd@arndb.de>

Thanks, the asm-generic versions look identical to me, so this should
result in no code generation difference.

Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>

-- 
RMK's Patch system: http://www.arm.linux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 17/32] arm: define __smp_xxx
  2015-12-31 19:07   ` Michael S. Tsirkin
                       ` (3 preceding siblings ...)
  (?)
@ 2016-01-02 11:24     ` Russell King - ARM Linux
  -1 siblings, 0 replies; 572+ messages in thread
From: Russell King - ARM Linux @ 2016-01-02 11:24 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-kernel, Peter Zijlstra, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Ingo Molnar, Tony Lindgren

On Thu, Dec 31, 2015 at 09:07:59PM +0200, Michael S. Tsirkin wrote:
> This defines __smp_xxx barriers for arm,
> for use by virtualization.
> 
> smp_xxx barriers are removed as they are
> defined correctly by asm-generic/barrier.h
> 
> This reduces the amount of arch-specific boiler-plate code.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Acked-by: Arnd Bergmann <arnd@arndb.de>

In combination with patch 14, this looks like it should result in no
change to the resulting code.

Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>

My only concern is that it gives people an additional handle onto a
"new" set of barriers - just because they're prefixed with __*
unfortunately doesn't stop anyone from using it (been there with
other arch stuff before.)

I wonder whether we should consider making the smp memory barriers
inline functions, so these __smp_xxx() variants can be undef'd
afterwards, thereby preventing drivers getting their hands on these
new macros?

-- 
RMK's Patch system: http://www.arm.linux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 17/32] arm: define __smp_xxx
@ 2016-01-02 11:24     ` Russell King - ARM Linux
  0 siblings, 0 replies; 572+ messages in thread
From: Russell King - ARM Linux @ 2016-01-02 11:24 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-kernel, Peter Zijlstra, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Ingo Molnar, Tony Lindgren

On Thu, Dec 31, 2015 at 09:07:59PM +0200, Michael S. Tsirkin wrote:
> This defines __smp_xxx barriers for arm,
> for use by virtualization.
> 
> smp_xxx barriers are removed as they are
> defined correctly by asm-generic/barriers.h
> 
> This reduces the amount of arch-specific boiler-plate code.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Acked-by: Arnd Bergmann <arnd@arndb.de>

In combination with patch 14, this looks like it should result in no
change to the resulting code.

Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>

My only concern is that it gives people an additional handle onto a
"new" set of barriers - just because they're prefixed with __*
unfortunately doesn't stop anyone from using it (been there with
other arch stuff before.)

I wonder whether we should consider making the smp memory barriers
inline functions, so these __smp_xxx() variants can be undef'd
afterwards, thereby preventing drivers getting their hands on these
new macros?

-- 
RMK's Patch system: http://www.arm.linux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 17/32] arm: define __smp_xxx
  2015-12-31 19:07   ` Michael S. Tsirkin
                     ` (4 preceding siblings ...)
  (?)
@ 2016-01-02 11:24   ` Russell King - ARM Linux
  -1 siblings, 0 replies; 572+ messages in thread
From: Russell King - ARM Linux @ 2016-01-02 11:24 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
	H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
	Arnd Bergmann, x86, Tony Lindgren, xen-devel, Ingo Molnar,
	linux-xtensa, user-mode-linux-devel, Stefano Stabellini,
	Andrey Konovalov, adi-buildroot-devel, Thomas Gleixner,
	linux-metag, linux-arm-kernel, Andrew Cooper, linux-kernel,
	linuxppc-dev

On Thu, Dec 31, 2015 at 09:07:59PM +0200, Michael S. Tsirkin wrote:
> This defines __smp_xxx barriers for arm,
> for use by virtualization.
> 
> smp_xxx barriers are removed as they are
> defined correctly by asm-generic/barriers.h
> 
> This reduces the amount of arch-specific boiler-plate code.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Acked-by: Arnd Bergmann <arnd@arndb.de>

In combination with patch 14, this looks like it should result in no
change to the resulting code.

Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>

My only concern is that it gives people an additional handle onto a
"new" set of barriers - just because they're prefixed with __*
unfortunately doesn't stop anyone from using it (been there with
other arch stuff before.)

I wonder whether we should consider making the smp memory barriers
inline functions, so these __smp_xxx() variants can be undef'd
afterwards, thereby preventing drivers getting their hands on these
new macros?

-- 
RMK's Patch system: http://www.arm.linux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 572+ messages in thread

* [PATCH v2 17/32] arm: define __smp_xxx
@ 2016-01-02 11:24     ` Russell King - ARM Linux
  0 siblings, 0 replies; 572+ messages in thread
From: Russell King - ARM Linux @ 2016-01-02 11:24 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Dec 31, 2015 at 09:07:59PM +0200, Michael S. Tsirkin wrote:
> This defines __smp_xxx barriers for arm,
> for use by virtualization.
> 
> smp_xxx barriers are removed as they are
> defined correctly by asm-generic/barriers.h
> 
> This reduces the amount of arch-specific boiler-plate code.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Acked-by: Arnd Bergmann <arnd@arndb.de>

In combination with patch 14, this looks like it should result in no
change to the resulting code.

Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>

My only concern is that it gives people an additional handle onto a
"new" set of barriers - just because they're prefixed with __*
unfortunately doesn't stop anyone from using it (been there with
other arch stuff before.)

I wonder whether we should consider making the smp memory barriers
inline functions, so these __smp_xxx() variants can be undef'd
afterwards, thereby preventing drivers getting their hands on these
new macros?

-- 
RMK's Patch system: http://www.arm.linux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 17/32] arm: define __smp_xxx
  2015-12-31 19:07   ` Michael S. Tsirkin
                     ` (2 preceding siblings ...)
  (?)
@ 2016-01-02 11:24   ` Russell King - ARM Linux
  -1 siblings, 0 replies; 572+ messages in thread
From: Russell King - ARM Linux @ 2016-01-02 11:24 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
	H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
	Arnd Bergmann, x86, Tony Lindgren, xen-devel, Ingo Molnar,
	linux-xtensa, user-mode-linux-devel, Stefano Stabellini,
	Andrey Konovalov, adi-buildroot-devel, Thomas Gleixner,
	linux-metag, linux-arm-kernel, Andrew Cooper, linux-kernel,
	linuxppc-dev

On Thu, Dec 31, 2015 at 09:07:59PM +0200, Michael S. Tsirkin wrote:
> This defines __smp_xxx barriers for arm,
> for use by virtualization.
> 
> smp_xxx barriers are removed as they are
> defined correctly by asm-generic/barriers.h
> 
> This reduces the amount of arch-specific boiler-plate code.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Acked-by: Arnd Bergmann <arnd@arndb.de>

In combination with patch 14, this looks like it should result in no
change to the resulting code.

Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>

My only concern is that it gives people an additional handle onto a
"new" set of barriers - just because they're prefixed with __*
unfortunately doesn't stop anyone from using it (been there with
other arch stuff before.)

I wonder whether we should consider making the smp memory barriers
inline functions, so these __smp_xxx() variants can be undef'd
afterwards, thereby preventing drivers getting their hands on these
new macros?

-- 
RMK's Patch system: http://www.arm.linux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 17/32] arm: define __smp_xxx
@ 2016-01-02 11:24     ` Russell King - ARM Linux
  0 siblings, 0 replies; 572+ messages in thread
From: Russell King - ARM Linux @ 2016-01-02 11:24 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-kernel, Peter Zijlstra, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Ingo Molnar, Tony Lindgren, Andre

On Thu, Dec 31, 2015 at 09:07:59PM +0200, Michael S. Tsirkin wrote:
> This defines __smp_xxx barriers for arm,
> for use by virtualization.
> 
> smp_xxx barriers are removed as they are
> defined correctly by asm-generic/barriers.h
> 
> This reduces the amount of arch-specific boiler-plate code.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Acked-by: Arnd Bergmann <arnd@arndb.de>

In combination with patch 14, this looks like it should result in no
change to the resulting code.

Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>

My only concern is that it gives people an additional handle onto a
"new" set of barriers - just because they're prefixed with __*
unfortunately doesn't stop anyone from using it (been there with
other arch stuff before.)

I wonder whether we should consider making the smp memory barriers
inline functions, so these __smp_xxx() variants can be undef'd
afterwards, thereby preventing drivers getting their hands on these
new macros?

-- 
RMK's Patch system: http://www.arm.linux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 17/32] arm: define __smp_xxx
@ 2016-01-02 11:24     ` Russell King - ARM Linux
  0 siblings, 0 replies; 572+ messages in thread
From: Russell King - ARM Linux @ 2016-01-02 11:24 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-kernel, Peter Zijlstra, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Ingo Molnar, Tony Lindgren, Andre

On Thu, Dec 31, 2015 at 09:07:59PM +0200, Michael S. Tsirkin wrote:
> This defines __smp_xxx barriers for arm,
> for use by virtualization.
> 
> smp_xxx barriers are removed as they are
> defined correctly by asm-generic/barriers.h
> 
> This reduces the amount of arch-specific boiler-plate code.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Acked-by: Arnd Bergmann <arnd@arndb.de>

In combination with patch 14, this looks like it should result in no
change to the resulting code.

Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>

My only concern is that it gives people an additional handle onto a
"new" set of barriers - just because they're prefixed with __*
unfortunately doesn't stop anyone from using it (been there with
other arch stuff before.)

I wonder whether we should consider making the smp memory barriers
inline functions, so these __smp_xxx() variants can be undef'd
afterwards, thereby preventing drivers getting their hands on these
new macros?

-- 
RMK's Patch system: http://www.arm.linux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 32/32] virtio_ring: use virt_store_mb
  2016-01-01 17:23     ` Sergei Shtylyov
  (?)
  (?)
@ 2016-01-03  9:01         ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-03  9:01 UTC (permalink / raw)
  To: Sergei Shtylyov
  Cc: linux-kernel, Peter Zijlstra, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel

On Fri, Jan 01, 2016 at 08:23:46PM +0300, Sergei Shtylyov wrote:
> Hello.
> 
> On 12/31/2015 10:09 PM, Michael S. Tsirkin wrote:
> 
> >We need a full barrier after writing out the event index; using
> >virt_store_mb there seems better than open-coding it.  As usual, we
> >need a wrapper to account for strong barriers.
> >
> >It's tempting to use this in vhost as well; for that, we'll
> >need a variant of smp_store_mb that works on __user pointers.
> >
> >Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> >---
> >  include/linux/virtio_ring.h  | 12 ++++++++++++
> >  drivers/virtio/virtio_ring.c | 15 +++++++++------
> >  2 files changed, 21 insertions(+), 6 deletions(-)
> >
> >diff --git a/include/linux/virtio_ring.h b/include/linux/virtio_ring.h
> >index f3fa55b..3a74d91 100644
> >--- a/include/linux/virtio_ring.h
> >+++ b/include/linux/virtio_ring.h
> >@@ -45,6 +45,18 @@ static inline void virtio_wmb(bool weak_barriers)
> >  		wmb();
> >  }
> >
> >+static inline void virtio_store_mb(bool weak_barriers,
> >+				   __virtio16 *p, __virtio16 v)
> >+{
> >+	if (weak_barriers)
> >+		virt_store_mb(*p, v);
> >+	else
> >+	{
> 
>    The kernel coding style dictates:
> 
> 	if (weak_barriers) {
> 		virt_store_mb(*p, v);
> 	} else {
> 
> >+		WRITE_ONCE(*p, v);
> >+		mb();
> >+	}
> >+}
> >+
> [...]
> 
> MBR, Sergei

Will fix, thanks!

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 17/32] arm: define __smp_xxx
  2016-01-02 11:24     ` Russell King - ARM Linux
  (?)
  (?)
@ 2016-01-03  9:12       ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-03  9:12 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
	H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
	Arnd Bergmann, x86, Tony Lindgren, xen-devel, Ingo Molnar,
	linux-xtensa, user-mode-linux-devel, Stefano Stabellini,
	Andrey Konovalov, adi-buildroot-devel, Thomas Gleixner,
	linux-metag, linux-arm-kernel, Andrew Cooper, linux-kernel,
	linuxppc-dev

On Sat, Jan 02, 2016 at 11:24:38AM +0000, Russell King - ARM Linux wrote:
> On Thu, Dec 31, 2015 at 09:07:59PM +0200, Michael S. Tsirkin wrote:
> > This defines __smp_xxx barriers for arm,
> > for use by virtualization.
> > 
> > smp_xxx barriers are removed as they are
> > defined correctly by asm-generic/barrier.h
> > 
> > This reduces the amount of arch-specific boilerplate code.
> > 
> > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > Acked-by: Arnd Bergmann <arnd@arndb.de>
> 
> In combination with patch 14, this looks like it should result in no
> change to the resulting code.
> 
> Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
> 
> My only concern is that it gives people an additional handle onto a
> "new" set of barriers - just because they're prefixed with __*
> unfortunately doesn't stop anyone from using it (been there with
> other arch stuff before.)
> 
> I wonder whether we should consider making the smp memory barriers
> inline functions, so these __smp_xxx() variants can be undef'd
> afterwards, thereby preventing drivers getting their hands on these
> new macros?

That'd be tricky to do cleanly since asm-generic depends on
ifndef to add generic variants where needed.

But it would be possible to add a checkpatch test for this.


> -- 
> RMK's Patch system: http://www.arm.linux.org.uk/developer/patches/
> FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
> according to speedtest.net.

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 17/32] arm: define __smp_xxx
  2016-01-02 11:24     ` Russell King - ARM Linux
                       ` (4 preceding siblings ...)
  (?)
@ 2016-01-03  9:12     ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-03  9:12 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
	H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
	Arnd Bergmann, x86, Tony Lindgren, xen-devel, Ingo Molnar,
	linux-xtensa, user-mode-linux-devel, Stefano Stabellini,
	Andrey Konovalov, adi-buildroot-devel, Thomas Gleixner,
	linux-metag, linux-arm-kernel, Andrew Cooper, linux-kernel,
	linuxppc-dev

On Sat, Jan 02, 2016 at 11:24:38AM +0000, Russell King - ARM Linux wrote:
> On Thu, Dec 31, 2015 at 09:07:59PM +0200, Michael S. Tsirkin wrote:
> > This defines __smp_xxx barriers for arm,
> > for use by virtualization.
> > 
> > smp_xxx barriers are removed as they are
> > defined correctly by asm-generic/barriers.h
> > 
> > This reduces the amount of arch-specific boiler-plate code.
> > 
> > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > Acked-by: Arnd Bergmann <arnd@arndb.de>
> 
> In combination with patch 14, this looks like it should result in no
> change to the resulting code.
> 
> Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
> 
> My only concern is that it gives people an additional handle onto a
> "new" set of barriers - just because they're prefixed with __*
> unfortunately doesn't stop anyone from using it (been there with
> other arch stuff before.)
> 
> I wonder whether we should consider making the smp memory barriers
> inline functions, so these __smp_xxx() variants can be undef'd
> afterwards, thereby preventing drivers getting their hands on these
> new macros?

That'd be tricky to do cleanly since asm-generic depends on
ifndef to add generic variants where needed.

But it would be possible to add a checkpatch test for this.


> -- 
> RMK's Patch system: http://www.arm.linux.org.uk/developer/patches/
> FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
> according to speedtest.net.

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [Xen-devel] [PATCH v2 33/34] xenbus: use virt_xxx barriers
  2015-12-31 19:10     ` Michael S. Tsirkin
                         ` (2 preceding siblings ...)
  (?)
@ 2016-01-04 11:32       ` David Vrabel
  -1 siblings, 0 replies; 572+ messages in thread
From: David Vrabel @ 2016-01-04 11:32 UTC (permalink / raw)
  To: Michael S. Tsirkin, linux-kernel
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
	H. Peter Anvin, sparclinux, Boris Ostrovsky, linux-arch,
	linux-s390, Arnd Bergmann, x86, xen-devel, Ingo Molnar,
	linux-xtensa, user-mode-linux-devel, Stefano Stabellini,
	adi-buildroot-devel, Thomas Gleixner, linux-metag,
	linux-arm-kernel, Andrew Cooper, David Vrabel, linuxppc-dev,
	David Miller

On 31/12/15 19:10, Michael S. Tsirkin wrote:
> drivers/xen/xenbus/xenbus_comms.c uses
> full memory barriers to communicate with the other side.
> 
> For guests compiled with CONFIG_SMP, smp_wmb and smp_mb
> would be sufficient, so mb() and wmb() here are only needed if
> a non-SMP guest runs on an SMP host.
> 
> Switch to virt_xxx barriers which serve this exact purpose.

Acked-by: David Vrabel <david.vrabel@citrix.com>

If you're feeling particularly keen, there's an rmb() in
consume_one_event() in drivers/xen/events/events_fifo.c that can be
converted to virt_rmb() as well.

David

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [Xen-devel] [PATCH v2 34/34] xen/io: use virt_xxx barriers
  2015-12-31 19:10   ` Michael S. Tsirkin
                       ` (2 preceding siblings ...)
  (?)
@ 2016-01-04 11:32     ` David Vrabel
  -1 siblings, 0 replies; 572+ messages in thread
From: David Vrabel @ 2016-01-04 11:32 UTC (permalink / raw)
  To: Michael S. Tsirkin, linux-kernel
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
	H. Peter Anvin, sparclinux, Boris Ostrovsky, linux-arch,
	linux-s390, Arnd Bergmann, x86, xen-devel, Ingo Molnar,
	linux-xtensa, user-mode-linux-devel, Stefano Stabellini,
	adi-buildroot-devel, Thomas Gleixner, linux-metag,
	linux-arm-kernel, Andrew Cooper, David Vrabel, linuxppc-dev,
	David Miller

On 31/12/15 19:10, Michael S. Tsirkin wrote:
> include/xen/interface/io/ring.h uses
> full memory barriers to communicate with the other side.
> 
> For guests compiled with CONFIG_SMP, smp_wmb and smp_mb
> would be sufficient, so mb() and wmb() here are only needed if
> a non-SMP guest runs on an SMP host.
> 
> Switch to virt_xxx barriers which serve this exact purpose.

Acked-by: David Vrabel <david.vrabel@citrix.com>

David

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 33/34] xenbus: use virt_xxx barriers
  2015-12-31 19:10     ` Michael S. Tsirkin
                         ` (2 preceding siblings ...)
  (?)
@ 2016-01-04 12:03       ` Stefano Stabellini
  -1 siblings, 0 replies; 572+ messages in thread
From: Stefano Stabellini @ 2016-01-04 12:03 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-kernel, Peter Zijlstra, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Konrad Rzeszutek Wilk

On Thu, 31 Dec 2015, Michael S. Tsirkin wrote:
> drivers/xen/xenbus/xenbus_comms.c uses
> full memory barriers to communicate with the other side.
> 
> For guests compiled with CONFIG_SMP, smp_wmb and smp_mb
> would be sufficient, so mb() and wmb() here are only needed if
> a non-SMP guest runs on an SMP host.
> 
> Switch to virt_xxx barriers which serve this exact purpose.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Are you also going to take care of

drivers/xen/grant-table.c
drivers/xen/evtchn.c
drivers/xen/events/events_fifo.c
drivers/xen/xen-scsiback.c
drivers/xen/tmem.c
drivers/xen/xen-pciback/pci_stub.c
drivers/xen/xen-pciback/pciback_ops.c

?


>  drivers/xen/xenbus/xenbus_comms.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/xen/xenbus/xenbus_comms.c b/drivers/xen/xenbus/xenbus_comms.c
> index fdb0f33..ecdecce 100644
> --- a/drivers/xen/xenbus/xenbus_comms.c
> +++ b/drivers/xen/xenbus/xenbus_comms.c
> @@ -123,14 +123,14 @@ int xb_write(const void *data, unsigned len)
>  			avail = len;
>  
>  		/* Must write data /after/ reading the consumer index. */
> -		mb();
> +		virt_mb();
>  
>  		memcpy(dst, data, avail);
>  		data += avail;
>  		len -= avail;
>  
>  		/* Other side must not see new producer until data is there. */
> -		wmb();
> +		virt_wmb();
>  		intf->req_prod += avail;
>  
>  		/* Implies mb(): other side will see the updated producer. */
> @@ -180,14 +180,14 @@ int xb_read(void *data, unsigned len)
>  			avail = len;
>  
>  		/* Must read data /after/ reading the producer index. */
> -		rmb();
> +		virt_rmb();
>  
>  		memcpy(data, src, avail);
>  		data += avail;
>  		len -= avail;
>  
>  		/* Other side must not see free space until we've copied out */
> -		mb();
> +		virt_mb();
>  		intf->rsp_cons += avail;
>  
>  		pr_debug("Finished read of %i bytes (%i to go)\n", avail, len);
> -- 
> MST
> 

^ permalink raw reply	[flat|nested] 572+ messages in thread


* Re: [PATCH v2 34/34] xen/io: use virt_xxx barriers
  2015-12-31 19:10   ` Michael S. Tsirkin
                         ` (2 preceding siblings ...)
  (?)
@ 2016-01-04 12:05       ` Stefano Stabellini
  -1 siblings, 0 replies; 572+ messages in thread
From: Stefano Stabellini @ 2016-01-04 12:05 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-kernel, Peter Zijlstra, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Konrad Rzeszutek Wilk

On Thu, 31 Dec 2015, Michael S. Tsirkin wrote:
> include/xen/interface/io/ring.h uses
> full memory barriers to communicate with the other side.
> 
> For guests compiled with CONFIG_SMP, smp_wmb and smp_mb
> would be sufficient, so mb() and wmb() here are only needed if
> a non-SMP guest runs on an SMP host.
> 
> Switch to virt_xxx barriers which serve this exact purpose.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  include/xen/interface/io/ring.h | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/include/xen/interface/io/ring.h b/include/xen/interface/io/ring.h
> index 7dc685b..21f4fbd 100644
> --- a/include/xen/interface/io/ring.h
> +++ b/include/xen/interface/io/ring.h
> @@ -208,12 +208,12 @@ struct __name##_back_ring {						\
>  
>  
>  #define RING_PUSH_REQUESTS(_r) do {					\
> -    wmb(); /* back sees requests /before/ updated producer index */	\
> +    virt_wmb(); /* back sees requests /before/ updated producer index */	\
>      (_r)->sring->req_prod = (_r)->req_prod_pvt;				\
>  } while (0)
>  
>  #define RING_PUSH_RESPONSES(_r) do {					\
> -    wmb(); /* front sees responses /before/ updated producer index */	\
> +    virt_wmb(); /* front sees responses /before/ updated producer index */	\
>      (_r)->sring->rsp_prod = (_r)->rsp_prod_pvt;				\
>  } while (0)
>  
> @@ -250,9 +250,9 @@ struct __name##_back_ring {						\
>  #define RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(_r, _notify) do {		\
>      RING_IDX __old = (_r)->sring->req_prod;				\
>      RING_IDX __new = (_r)->req_prod_pvt;				\
> -    wmb(); /* back sees requests /before/ updated producer index */	\
> +    virt_wmb(); /* back sees requests /before/ updated producer index */	\
>      (_r)->sring->req_prod = __new;					\
> -    mb(); /* back sees new requests /before/ we check req_event */	\
> +    virt_mb(); /* back sees new requests /before/ we check req_event */	\
>      (_notify) = ((RING_IDX)(__new - (_r)->sring->req_event) <		\
>  		 (RING_IDX)(__new - __old));				\
>  } while (0)
> @@ -260,9 +260,9 @@ struct __name##_back_ring {						\
>  #define RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(_r, _notify) do {		\
>      RING_IDX __old = (_r)->sring->rsp_prod;				\
>      RING_IDX __new = (_r)->rsp_prod_pvt;				\
> -    wmb(); /* front sees responses /before/ updated producer index */	\
> +    virt_wmb(); /* front sees responses /before/ updated producer index */	\
>      (_r)->sring->rsp_prod = __new;					\
> -    mb(); /* front sees new responses /before/ we check rsp_event */	\
> +    virt_mb(); /* front sees new responses /before/ we check rsp_event */	\
>      (_notify) = ((RING_IDX)(__new - (_r)->sring->rsp_event) <		\
>  		 (RING_IDX)(__new - __old));				\
>  } while (0)
> @@ -271,7 +271,7 @@ struct __name##_back_ring {						\
>      (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
>      if (_work_to_do) break;						\
>      (_r)->sring->req_event = (_r)->req_cons + 1;			\
> -    mb();								\
> +    virt_mb();								\
>      (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
>  } while (0)
>  
> @@ -279,7 +279,7 @@ struct __name##_back_ring {						\
>      (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
>      if (_work_to_do) break;						\
>      (_r)->sring->rsp_event = (_r)->rsp_cons + 1;			\
> -    mb();								\
> +    virt_mb();								\
>      (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
>  } while (0)
>  
> -- 
> MST
> 

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 34/34] xen/io: use virt_xxx barriers
@ 2016-01-04 12:05       ` Stefano Stabellini
  0 siblings, 0 replies; 572+ messages in thread
From: Stefano Stabellini @ 2016-01-04 12:05 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-kernel, Peter Zijlstra, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Konrad Rzeszutek Wilk, Boris Ostrovsky,
	David Vrabel

On Thu, 31 Dec 2015, Michael S. Tsirkin wrote:
> include/xen/interface/io/ring.h uses
> full memory barriers to communicate with the other side.
> 
> For guests compiled with CONFIG_SMP, smp_wmb and smp_mb
> would be sufficient, so mb() and wmb() here are only needed if
> a non-SMP guest runs on an SMP host.
> 
> Switch to virt_xxx barriers which serve this exact purpose.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  include/xen/interface/io/ring.h | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/include/xen/interface/io/ring.h b/include/xen/interface/io/ring.h
> index 7dc685b..21f4fbd 100644
> --- a/include/xen/interface/io/ring.h
> +++ b/include/xen/interface/io/ring.h
> @@ -208,12 +208,12 @@ struct __name##_back_ring {						\
>  
>  
>  #define RING_PUSH_REQUESTS(_r) do {					\
> -    wmb(); /* back sees requests /before/ updated producer index */	\
> +    virt_wmb(); /* back sees requests /before/ updated producer index */	\
>      (_r)->sring->req_prod = (_r)->req_prod_pvt;				\
>  } while (0)
>  
>  #define RING_PUSH_RESPONSES(_r) do {					\
> -    wmb(); /* front sees responses /before/ updated producer index */	\
> +    virt_wmb(); /* front sees responses /before/ updated producer index */	\
>      (_r)->sring->rsp_prod = (_r)->rsp_prod_pvt;				\
>  } while (0)
>  
> @@ -250,9 +250,9 @@ struct __name##_back_ring {						\
>  #define RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(_r, _notify) do {		\
>      RING_IDX __old = (_r)->sring->req_prod;				\
>      RING_IDX __new = (_r)->req_prod_pvt;				\
> -    wmb(); /* back sees requests /before/ updated producer index */	\
> +    virt_wmb(); /* back sees requests /before/ updated producer index */	\
>      (_r)->sring->req_prod = __new;					\
> -    mb(); /* back sees new requests /before/ we check req_event */	\
> +    virt_mb(); /* back sees new requests /before/ we check req_event */	\
>      (_notify) = ((RING_IDX)(__new - (_r)->sring->req_event) <		\
>  		 (RING_IDX)(__new - __old));				\
>  } while (0)
> @@ -260,9 +260,9 @@ struct __name##_back_ring {						\
>  #define RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(_r, _notify) do {		\
>      RING_IDX __old = (_r)->sring->rsp_prod;				\
>      RING_IDX __new = (_r)->rsp_prod_pvt;				\
> -    wmb(); /* front sees responses /before/ updated producer index */	\
> +    virt_wmb(); /* front sees responses /before/ updated producer index */	\
>      (_r)->sring->rsp_prod = __new;					\
> -    mb(); /* front sees new responses /before/ we check rsp_event */	\
> +    virt_mb(); /* front sees new responses /before/ we check rsp_event */	\
>      (_notify) = ((RING_IDX)(__new - (_r)->sring->rsp_event) <		\
>  		 (RING_IDX)(__new - __old));				\
>  } while (0)
> @@ -271,7 +271,7 @@ struct __name##_back_ring {						\
>      (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
>      if (_work_to_do) break;						\
>      (_r)->sring->req_event = (_r)->req_cons + 1;			\
> -    mb();								\
> +    virt_mb();								\
>      (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
>  } while (0)
>  
> @@ -279,7 +279,7 @@ struct __name##_back_ring {						\
>      (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
>      if (_work_to_do) break;						\
>      (_r)->sring->rsp_event = (_r)->rsp_cons + 1;			\
> -    mb();								\
> +    virt_mb();								\
>      (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
>  } while (0)
>  
> -- 
> MST
> 

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 34/34] xen/io: use virt_xxx barriers
@ 2016-01-04 12:05       ` Stefano Stabellini
  0 siblings, 0 replies; 572+ messages in thread
From: Stefano Stabellini @ 2016-01-04 12:05 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-kernel, Peter Zijlstra, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Konrad Rzeszutek Wilk, Boris Ostrovsky,
	David Vrabel

On Thu, 31 Dec 2015, Michael S. Tsirkin wrote:
> include/xen/interface/io/ring.h uses
> full memory barriers to communicate with the other side.
> 
> For guests compiled with CONFIG_SMP, smp_wmb and smp_mb
> would be sufficient, so mb() and wmb() here are only needed if
> a non-SMP guest runs on an SMP host.
> 
> Switch to virt_xxx barriers which serve this exact purpose.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  include/xen/interface/io/ring.h | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/include/xen/interface/io/ring.h b/include/xen/interface/io/ring.h
> index 7dc685b..21f4fbd 100644
> --- a/include/xen/interface/io/ring.h
> +++ b/include/xen/interface/io/ring.h
> @@ -208,12 +208,12 @@ struct __name##_back_ring {						\
>  
>  
>  #define RING_PUSH_REQUESTS(_r) do {					\
> -    wmb(); /* back sees requests /before/ updated producer index */	\
> +    virt_wmb(); /* back sees requests /before/ updated producer index */	\
>      (_r)->sring->req_prod = (_r)->req_prod_pvt;				\
>  } while (0)
>  
>  #define RING_PUSH_RESPONSES(_r) do {					\
> -    wmb(); /* front sees responses /before/ updated producer index */	\
> +    virt_wmb(); /* front sees responses /before/ updated producer index */	\
>      (_r)->sring->rsp_prod = (_r)->rsp_prod_pvt;				\
>  } while (0)
>  
> @@ -250,9 +250,9 @@ struct __name##_back_ring {						\
>  #define RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(_r, _notify) do {		\
>      RING_IDX __old = (_r)->sring->req_prod;				\
>      RING_IDX __new = (_r)->req_prod_pvt;				\
> -    wmb(); /* back sees requests /before/ updated producer index */	\
> +    virt_wmb(); /* back sees requests /before/ updated producer index */	\
>      (_r)->sring->req_prod = __new;					\
> -    mb(); /* back sees new requests /before/ we check req_event */	\
> +    virt_mb(); /* back sees new requests /before/ we check req_event */	\
>      (_notify) = ((RING_IDX)(__new - (_r)->sring->req_event) <		\
>  		 (RING_IDX)(__new - __old));				\
>  } while (0)
> @@ -260,9 +260,9 @@ struct __name##_back_ring {						\
>  #define RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(_r, _notify) do {		\
>      RING_IDX __old = (_r)->sring->rsp_prod;				\
>      RING_IDX __new = (_r)->rsp_prod_pvt;				\
> -    wmb(); /* front sees responses /before/ updated producer index */	\
> +    virt_wmb(); /* front sees responses /before/ updated producer index */	\
>      (_r)->sring->rsp_prod = __new;					\
> -    mb(); /* front sees new responses /before/ we check rsp_event */	\
> +    virt_mb(); /* front sees new responses /before/ we check rsp_event */	\
>      (_notify) = ((RING_IDX)(__new - (_r)->sring->rsp_event) <		\
>  		 (RING_IDX)(__new - __old));				\
>  } while (0)
> @@ -271,7 +271,7 @@ struct __name##_back_ring {						\
>      (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
>      if (_work_to_do) break;						\
>      (_r)->sring->req_event = (_r)->req_cons + 1;			\
> -    mb();								\
> +    virt_mb();								\
>      (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
>  } while (0)
>  
> @@ -279,7 +279,7 @@ struct __name##_back_ring {						\
>      (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
>      if (_work_to_do) break;						\
>      (_r)->sring->rsp_event = (_r)->rsp_cons + 1;			\
> -    mb();								\
> +    virt_mb();								\
>      (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
>  } while (0)
>  
> -- 
> MST
> 

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 34/34] xen/io: use virt_xxx barriers
@ 2016-01-04 12:05       ` Stefano Stabellini
  0 siblings, 0 replies; 572+ messages in thread
From: Stefano Stabellini @ 2016-01-04 12:05 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-kernel-u79uwXL29TY76Z2rM5mHXA, Peter Zijlstra,
	Arnd Bergmann, linux-arch-u79uwXL29TY76Z2rM5mHXA, Andrew Cooper,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	Stefano Stabellini, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	David Miller, linux-ia64-u79uwXL29TY76Z2rM5mHXA,
	linuxppc-dev-uLR06cmDAlY/bJ5BZ2RsiQ,
	linux-s390-u79uwXL29TY76Z2rM5mHXA,
	sparclinux-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	linux-metag-u79uwXL29TY76Z2rM5mHXA,
	linux-mips-6z/3iImG2C8G8FEW9MqTrA, x86-DgEjT+Ai2ygdnm+yROfE0A,
	user-mode-linux-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
	adi-buildroot-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
	linux-sh-u79uwXL29TY76Z2rM5mHXA,
	linux-xtensa-PjhNF2WwrV/0Sa2dR60CXw,
	xen-devel-GuqFBffKawtpuQazS67q72D2FQJk+8+b,
	Konrad Rzeszutek Wilk

On Thu, 31 Dec 2015, Michael S. Tsirkin wrote:
> include/xen/interface/io/ring.h uses
> full memory barriers to communicate with the other side.
> 
> For guests compiled with CONFIG_SMP, smp_wmb and smp_mb
> would be sufficient, so mb() and wmb() here are only needed if
> a non-SMP guest runs on an SMP host.
> 
> Switch to virt_xxx barriers which serve this exact purpose.
> 
> Signed-off-by: Michael S. Tsirkin <mst-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>

Reviewed-by: Stefano Stabellini <stefano.stabellini-mvvWK6WmYclDPfheJLI6IQ@public.gmane.org>


>  include/xen/interface/io/ring.h | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/include/xen/interface/io/ring.h b/include/xen/interface/io/ring.h
> index 7dc685b..21f4fbd 100644
> --- a/include/xen/interface/io/ring.h
> +++ b/include/xen/interface/io/ring.h
> @@ -208,12 +208,12 @@ struct __name##_back_ring {						\
>  
>  
>  #define RING_PUSH_REQUESTS(_r) do {					\
> -    wmb(); /* back sees requests /before/ updated producer index */	\
> +    virt_wmb(); /* back sees requests /before/ updated producer index */	\
>      (_r)->sring->req_prod = (_r)->req_prod_pvt;				\
>  } while (0)
>  
>  #define RING_PUSH_RESPONSES(_r) do {					\
> -    wmb(); /* front sees responses /before/ updated producer index */	\
> +    virt_wmb(); /* front sees responses /before/ updated producer index */	\
>      (_r)->sring->rsp_prod = (_r)->rsp_prod_pvt;				\
>  } while (0)
>  
> @@ -250,9 +250,9 @@ struct __name##_back_ring {						\
>  #define RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(_r, _notify) do {		\
>      RING_IDX __old = (_r)->sring->req_prod;				\
>      RING_IDX __new = (_r)->req_prod_pvt;				\
> -    wmb(); /* back sees requests /before/ updated producer index */	\
> +    virt_wmb(); /* back sees requests /before/ updated producer index */	\
>      (_r)->sring->req_prod = __new;					\
> -    mb(); /* back sees new requests /before/ we check req_event */	\
> +    virt_mb(); /* back sees new requests /before/ we check req_event */	\
>      (_notify) = ((RING_IDX)(__new - (_r)->sring->req_event) <		\
>  		 (RING_IDX)(__new - __old));				\
>  } while (0)
> @@ -260,9 +260,9 @@ struct __name##_back_ring {						\
>  #define RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(_r, _notify) do {		\
>      RING_IDX __old = (_r)->sring->rsp_prod;				\
>      RING_IDX __new = (_r)->rsp_prod_pvt;				\
> -    wmb(); /* front sees responses /before/ updated producer index */	\
> +    virt_wmb(); /* front sees responses /before/ updated producer index */	\
>      (_r)->sring->rsp_prod = __new;					\
> -    mb(); /* front sees new responses /before/ we check rsp_event */	\
> +    virt_mb(); /* front sees new responses /before/ we check rsp_event */	\
>      (_notify) = ((RING_IDX)(__new - (_r)->sring->rsp_event) <		\
>  		 (RING_IDX)(__new - __old));				\
>  } while (0)
> @@ -271,7 +271,7 @@ struct __name##_back_ring {						\
>      (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
>      if (_work_to_do) break;						\
>      (_r)->sring->req_event = (_r)->req_cons + 1;			\
> -    mb();								\
> +    virt_mb();								\
>      (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
>  } while (0)
>  
> @@ -279,7 +279,7 @@ struct __name##_back_ring {						\
>      (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
>      if (_work_to_do) break;						\
>      (_r)->sring->rsp_event = (_r)->rsp_cons + 1;			\
> -    mb();								\
> +    virt_mb();								\
>      (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
>  } while (0)
>  
> -- 
> MST
> 

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 34/34] xen/io: use virt_xxx barriers
  2015-12-31 19:10   ` Michael S. Tsirkin
                     ` (8 preceding siblings ...)
  (?)
@ 2016-01-04 12:05   ` Stefano Stabellini
  -1 siblings, 0 replies; 572+ messages in thread
From: Stefano Stabellini @ 2016-01-04 12:05 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
	H. Peter Anvin, sparclinux, Boris Ostrovsky, linux-arch,
	linux-s390, Arnd Bergmann, x86, xen-devel, Ingo Molnar,
	linux-xtensa, user-mode-linux-devel, Stefano Stabellini,
	adi-buildroot-devel, Thomas Gleixner, linux-metag,
	linux-arm-kernel, Konrad Rzeszutek Wilk, Andrew Cooper,
	linux-kernel, David Vrabel, linuxp

On Thu, 31 Dec 2015, Michael S. Tsirkin wrote:
> include/xen/interface/io/ring.h uses
> full memory barriers to communicate with the other side.
> 
> For guests compiled with CONFIG_SMP, smp_wmb and smp_mb
> would be sufficient, so mb() and wmb() here are only needed if
> a non-SMP guest runs on an SMP host.
> 
> Switch to virt_xxx barriers which serve this exact purpose.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  include/xen/interface/io/ring.h | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/include/xen/interface/io/ring.h b/include/xen/interface/io/ring.h
> index 7dc685b..21f4fbd 100644
> --- a/include/xen/interface/io/ring.h
> +++ b/include/xen/interface/io/ring.h
> @@ -208,12 +208,12 @@ struct __name##_back_ring {						\
>  
>  
>  #define RING_PUSH_REQUESTS(_r) do {					\
> -    wmb(); /* back sees requests /before/ updated producer index */	\
> +    virt_wmb(); /* back sees requests /before/ updated producer index */	\
>      (_r)->sring->req_prod = (_r)->req_prod_pvt;				\
>  } while (0)
>  
>  #define RING_PUSH_RESPONSES(_r) do {					\
> -    wmb(); /* front sees responses /before/ updated producer index */	\
> +    virt_wmb(); /* front sees responses /before/ updated producer index */	\
>      (_r)->sring->rsp_prod = (_r)->rsp_prod_pvt;				\
>  } while (0)
>  
> @@ -250,9 +250,9 @@ struct __name##_back_ring {						\
>  #define RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(_r, _notify) do {		\
>      RING_IDX __old = (_r)->sring->req_prod;				\
>      RING_IDX __new = (_r)->req_prod_pvt;				\
> -    wmb(); /* back sees requests /before/ updated producer index */	\
> +    virt_wmb(); /* back sees requests /before/ updated producer index */	\
>      (_r)->sring->req_prod = __new;					\
> -    mb(); /* back sees new requests /before/ we check req_event */	\
> +    virt_mb(); /* back sees new requests /before/ we check req_event */	\
>      (_notify) = ((RING_IDX)(__new - (_r)->sring->req_event) <		\
>  		 (RING_IDX)(__new - __old));				\
>  } while (0)
> @@ -260,9 +260,9 @@ struct __name##_back_ring {						\
>  #define RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(_r, _notify) do {		\
>      RING_IDX __old = (_r)->sring->rsp_prod;				\
>      RING_IDX __new = (_r)->rsp_prod_pvt;				\
> -    wmb(); /* front sees responses /before/ updated producer index */	\
> +    virt_wmb(); /* front sees responses /before/ updated producer index */	\
>      (_r)->sring->rsp_prod = __new;					\
> -    mb(); /* front sees new responses /before/ we check rsp_event */	\
> +    virt_mb(); /* front sees new responses /before/ we check rsp_event */	\
>      (_notify) = ((RING_IDX)(__new - (_r)->sring->rsp_event) <		\
>  		 (RING_IDX)(__new - __old));				\
>  } while (0)
> @@ -271,7 +271,7 @@ struct __name##_back_ring {						\
>      (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
>      if (_work_to_do) break;						\
>      (_r)->sring->req_event = (_r)->req_cons + 1;			\
> -    mb();								\
> +    virt_mb();								\
>      (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
>  } while (0)
>  
> @@ -279,7 +279,7 @@ struct __name##_back_ring {						\
>      (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
>      if (_work_to_do) break;						\
>      (_r)->sring->rsp_event = (_r)->rsp_cons + 1;			\
> -    mb();								\
> +    virt_mb();								\
>      (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
>  } while (0)
>  
> -- 
> MST
> 

^ permalink raw reply	[flat|nested] 572+ messages in thread

* [PATCH v2 34/34] xen/io: use virt_xxx barriers
@ 2016-01-04 12:05       ` Stefano Stabellini
  0 siblings, 0 replies; 572+ messages in thread
From: Stefano Stabellini @ 2016-01-04 12:05 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, 31 Dec 2015, Michael S. Tsirkin wrote:
> include/xen/interface/io/ring.h uses
> full memory barriers to communicate with the other side.
> 
> For guests compiled with CONFIG_SMP, smp_wmb and smp_mb
> would be sufficient, so mb() and wmb() here are only needed if
> a non-SMP guest runs on an SMP host.
> 
> Switch to virt_xxx barriers which serve this exact purpose.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  include/xen/interface/io/ring.h | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/include/xen/interface/io/ring.h b/include/xen/interface/io/ring.h
> index 7dc685b..21f4fbd 100644
> --- a/include/xen/interface/io/ring.h
> +++ b/include/xen/interface/io/ring.h
> @@ -208,12 +208,12 @@ struct __name##_back_ring {						\
>  
>  
>  #define RING_PUSH_REQUESTS(_r) do {					\
> -    wmb(); /* back sees requests /before/ updated producer index */	\
> +    virt_wmb(); /* back sees requests /before/ updated producer index */	\
>      (_r)->sring->req_prod = (_r)->req_prod_pvt;				\
>  } while (0)
>  
>  #define RING_PUSH_RESPONSES(_r) do {					\
> -    wmb(); /* front sees responses /before/ updated producer index */	\
> +    virt_wmb(); /* front sees responses /before/ updated producer index */	\
>      (_r)->sring->rsp_prod = (_r)->rsp_prod_pvt;				\
>  } while (0)
>  
> @@ -250,9 +250,9 @@ struct __name##_back_ring {						\
>  #define RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(_r, _notify) do {		\
>      RING_IDX __old = (_r)->sring->req_prod;				\
>      RING_IDX __new = (_r)->req_prod_pvt;				\
> -    wmb(); /* back sees requests /before/ updated producer index */	\
> +    virt_wmb(); /* back sees requests /before/ updated producer index */	\
>      (_r)->sring->req_prod = __new;					\
> -    mb(); /* back sees new requests /before/ we check req_event */	\
> +    virt_mb(); /* back sees new requests /before/ we check req_event */	\
>      (_notify) = ((RING_IDX)(__new - (_r)->sring->req_event) <		\
>  		 (RING_IDX)(__new - __old));				\
>  } while (0)
> @@ -260,9 +260,9 @@ struct __name##_back_ring {						\
>  #define RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(_r, _notify) do {		\
>      RING_IDX __old = (_r)->sring->rsp_prod;				\
>      RING_IDX __new = (_r)->rsp_prod_pvt;				\
> -    wmb(); /* front sees responses /before/ updated producer index */	\
> +    virt_wmb(); /* front sees responses /before/ updated producer index */	\
>      (_r)->sring->rsp_prod = __new;					\
> -    mb(); /* front sees new responses /before/ we check rsp_event */	\
> +    virt_mb(); /* front sees new responses /before/ we check rsp_event */	\
>      (_notify) = ((RING_IDX)(__new - (_r)->sring->rsp_event) <		\
>  		 (RING_IDX)(__new - __old));				\
>  } while (0)
> @@ -271,7 +271,7 @@ struct __name##_back_ring {						\
>      (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
>      if (_work_to_do) break;						\
>      (_r)->sring->req_event = (_r)->req_cons + 1;			\
> -    mb();								\
> +    virt_mb();								\
>      (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
>  } while (0)
>  
> @@ -279,7 +279,7 @@ struct __name##_back_ring {						\
>      (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
>      if (_work_to_do) break;						\
>      (_r)->sring->rsp_event = (_r)->rsp_cons + 1;			\
> -    mb();								\
> +    virt_mb();								\
>      (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
>  } while (0)
>  
> -- 
> MST
> 

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 34/34] xen/io: use virt_xxx barriers
  2015-12-31 19:10   ` Michael S. Tsirkin
                     ` (7 preceding siblings ...)
  (?)
@ 2016-01-04 12:05   ` Stefano Stabellini
  -1 siblings, 0 replies; 572+ messages in thread
From: Stefano Stabellini @ 2016-01-04 12:05 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
	H. Peter Anvin, sparclinux, Boris Ostrovsky, linux-arch,
	linux-s390, Arnd Bergmann, x86, xen-devel, Ingo Molnar,
	linux-xtensa, user-mode-linux-devel, Stefano Stabellini,
	adi-buildroot-devel, Thomas Gleixner, linux-metag,
	linux-arm-kernel, Andrew Cooper, linux-kernel, David Vrabel,
	linuxppc-dev, David Miller

On Thu, 31 Dec 2015, Michael S. Tsirkin wrote:
> include/xen/interface/io/ring.h uses
> full memory barriers to communicate with the other side.
> 
> For guests compiled with CONFIG_SMP, smp_wmb and smp_mb
> would be sufficient, so mb() and wmb() here are only needed if
> a non-SMP guest runs on an SMP host.
> 
> Switch to the virt_xxx barriers, which serve this exact purpose.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  include/xen/interface/io/ring.h | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/include/xen/interface/io/ring.h b/include/xen/interface/io/ring.h
> index 7dc685b..21f4fbd 100644
> --- a/include/xen/interface/io/ring.h
> +++ b/include/xen/interface/io/ring.h
> @@ -208,12 +208,12 @@ struct __name##_back_ring {						\
>  
>  
>  #define RING_PUSH_REQUESTS(_r) do {					\
> -    wmb(); /* back sees requests /before/ updated producer index */	\
> +    virt_wmb(); /* back sees requests /before/ updated producer index */	\
>      (_r)->sring->req_prod = (_r)->req_prod_pvt;				\
>  } while (0)
>  
>  #define RING_PUSH_RESPONSES(_r) do {					\
> -    wmb(); /* front sees responses /before/ updated producer index */	\
> +    virt_wmb(); /* front sees responses /before/ updated producer index */	\
>      (_r)->sring->rsp_prod = (_r)->rsp_prod_pvt;				\
>  } while (0)
>  
> @@ -250,9 +250,9 @@ struct __name##_back_ring {						\
>  #define RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(_r, _notify) do {		\
>      RING_IDX __old = (_r)->sring->req_prod;				\
>      RING_IDX __new = (_r)->req_prod_pvt;				\
> -    wmb(); /* back sees requests /before/ updated producer index */	\
> +    virt_wmb(); /* back sees requests /before/ updated producer index */	\
>      (_r)->sring->req_prod = __new;					\
> -    mb(); /* back sees new requests /before/ we check req_event */	\
> +    virt_mb(); /* back sees new requests /before/ we check req_event */	\
>      (_notify) = ((RING_IDX)(__new - (_r)->sring->req_event) <		\
>  		 (RING_IDX)(__new - __old));				\
>  } while (0)
> @@ -260,9 +260,9 @@ struct __name##_back_ring {						\
>  #define RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(_r, _notify) do {		\
>      RING_IDX __old = (_r)->sring->rsp_prod;				\
>      RING_IDX __new = (_r)->rsp_prod_pvt;				\
> -    wmb(); /* front sees responses /before/ updated producer index */	\
> +    virt_wmb(); /* front sees responses /before/ updated producer index */	\
>      (_r)->sring->rsp_prod = __new;					\
> -    mb(); /* front sees new responses /before/ we check rsp_event */	\
> +    virt_mb(); /* front sees new responses /before/ we check rsp_event */	\
>      (_notify) = ((RING_IDX)(__new - (_r)->sring->rsp_event) <		\
>  		 (RING_IDX)(__new - __old));				\
>  } while (0)
> @@ -271,7 +271,7 @@ struct __name##_back_ring {						\
>      (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
>      if (_work_to_do) break;						\
>      (_r)->sring->req_event = (_r)->req_cons + 1;			\
> -    mb();								\
> +    virt_mb();								\
>      (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
>  } while (0)
>  
> @@ -279,7 +279,7 @@ struct __name##_back_ring {						\
>      (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
>      if (_work_to_do) break;						\
>      (_r)->sring->rsp_event = (_r)->rsp_cons + 1;			\
> -    mb();								\
> +    virt_mb();								\
>      (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
>  } while (0)
>  
> -- 
> MST
> 

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 06/32] s390: reuse asm-generic/barrier.h
  2015-12-31 19:06   ` Michael S. Tsirkin
                       ` (3 preceding siblings ...)
  (?)
@ 2016-01-04 13:20     ` Peter Zijlstra
  -1 siblings, 0 replies; 572+ messages in thread
From: Peter Zijlstra @ 2016-01-04 13:20 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-mips, linux-ia64, linux-sh, Heiko Carstens, virtualization,
	H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
	Davidlohr Bueso, Arnd Bergmann, x86, Christian Borntraeger,
	xen-devel, Ingo Molnar, linux-xtensa, user-mode-linux-devel,
	Stefano Stabellini, Andrey Konovalov, adi-buildroot-devel,
	Thomas Gleixner, linux-metag, linux-arm-kernel, Andrew Cooper

On Thu, Dec 31, 2015 at 09:06:30PM +0200, Michael S. Tsirkin wrote:
> On s390, read_barrier_depends(), smp_read_barrier_depends(),
> smp_store_mb(), smp_mb__before_atomic() and smp_mb__after_atomic() match the
> asm-generic variants exactly. Drop the local definitions and pull in
> asm-generic/barrier.h instead.
> 
> This is in preparation for refactoring this code area.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Acked-by: Arnd Bergmann <arnd@arndb.de>
> ---
>  arch/s390/include/asm/barrier.h | 10 ++--------
>  1 file changed, 2 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
> index 7ffd0b1..c358c31 100644
> --- a/arch/s390/include/asm/barrier.h
> +++ b/arch/s390/include/asm/barrier.h
> @@ -30,14 +30,6 @@
>  #define smp_rmb()			rmb()
>  #define smp_wmb()			wmb()
>  
> -#define read_barrier_depends()		do { } while (0)
> -#define smp_read_barrier_depends()	do { } while (0)
> -
> -#define smp_mb__before_atomic()		smp_mb()
> -#define smp_mb__after_atomic()		smp_mb()

As per:

  lkml.kernel.org/r/20150921112252.3c2937e1@mschwide

s390 should change this to barrier() instead of smp_mb() and hence
should not use the generic versions.

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 11/32] mips: reuse asm-generic/barrier.h
  2015-12-31 19:07   ` Michael S. Tsirkin
                         ` (3 preceding siblings ...)
  (?)
@ 2016-01-04 13:26       ` Peter Zijlstra
  -1 siblings, 0 replies; 572+ messages in thread
From: Peter Zijlstra @ 2016-01-04 13:26 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-kernel, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Ralf Baechle,
	Ingo Molnar, Michael Ellerman

On Thu, Dec 31, 2015 at 09:07:10PM +0200, Michael S. Tsirkin wrote:
> -#define smp_store_release(p, v)						\
> -do {									\
> -	compiletime_assert_atomic_type(*p);				\
> -	smp_mb();							\
> -	WRITE_ONCE(*p, v);						\
> -} while (0)
> -
> -#define smp_load_acquire(p)						\
> -({									\
> -	typeof(*p) ___p1 = READ_ONCE(*p);				\
> -	compiletime_assert_atomic_type(*p);				\
> -	smp_mb();							\
> -	___p1;								\
> -})

David Daney wanted to use fancy new MIPS barriers to provide better
implementations of this.

This patch isn't in the way of that, just a FYI.

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 17/32] arm: define __smp_xxx
  2016-01-03  9:12       ` Michael S. Tsirkin
  (?)
  (?)
@ 2016-01-04 13:36         ` Peter Zijlstra
  -1 siblings, 0 replies; 572+ messages in thread
From: Peter Zijlstra @ 2016-01-04 13:36 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Russell King - ARM Linux, linux-kernel, Arnd Bergmann,
	linux-arch, Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Ingo Molnar, Tony Lindgren

On Sun, Jan 03, 2016 at 11:12:44AM +0200, Michael S. Tsirkin wrote:
> On Sat, Jan 02, 2016 at 11:24:38AM +0000, Russell King - ARM Linux wrote:

> > My only concern is that it gives people an additional handle onto a
> > "new" set of barriers - just because they're prefixed with __*
> > unfortunately doesn't stop anyone from using it (been there with
> > other arch stuff before.)
> > 
> > I wonder whether we should consider making the smp memory barriers
> > inline functions, so these __smp_xxx() variants can be undef'd
> > afterwards, thereby preventing drivers getting their hands on these
> > new macros?
> 
> That'd be tricky to do cleanly since asm-generic depends on
> ifndef to add generic variants where needed.
> 
> But it would be possible to add a checkpatch test for this.

Wasn't the whole purpose of these things for 'drivers' (namely
virtio/xen hypervisor interaction) to use these?

And I suppose most of virtio would actually be modules, so you cannot do
what I did with preempt_enable_no_resched() either.

But yes, it would be good to limit the use of these things.

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 20/32] metag: define __smp_xxx
  2015-12-31 19:08     ` Michael S. Tsirkin
                         ` (2 preceding siblings ...)
  (?)
@ 2016-01-04 13:41       ` Peter Zijlstra
  -1 siblings, 0 replies; 572+ messages in thread
From: Peter Zijlstra @ 2016-01-04 13:41 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-mips, linux-ia64, linux-sh, virtualization, H. Peter Anvin,
	sparclinux, Ingo Molnar, linux-arch, linux-s390, Arnd Bergmann,
	Davidlohr Bueso, x86, xen-devel, Ingo Molnar, linux-xtensa,
	James Hogan, user-mode-linux-devel, Stefano Stabellini,
	Andrey Konovalov, adi-buildroot-devel, Thomas Gleixner,
	linux-metag, linux-arm-kernel, Andrew Cooper, linux-kernel,
	linuxppc-dev

On Thu, Dec 31, 2015 at 09:08:22PM +0200, Michael S. Tsirkin wrote:
> +#ifdef CONFIG_SMP
> +#define fence() metag_fence()
> +#else
> +#define fence()		do { } while (0)
>  #endif

James, it strikes me as odd that fence() is a no-op instead of a
barrier() for UP, can you verify/explain?


* Re: [PATCH v2 22/32] s390: define __smp_xxx
  2015-12-31 19:08   ` Michael S. Tsirkin
                       ` (3 preceding siblings ...)
  (?)
@ 2016-01-04 13:45     ` Peter Zijlstra
  -1 siblings, 0 replies; 572+ messages in thread
From: Peter Zijlstra @ 2016-01-04 13:45 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-mips, linux-ia64, linux-sh, Heiko Carstens, virtualization,
	H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
	Davidlohr Bueso, Arnd Bergmann, x86, Christian Borntraeger,
	xen-devel, Ingo Molnar, linux-xtensa, user-mode-linux-devel,
	Stefano Stabellini, Andrey Konovalov, adi-buildroot-devel,
	Thomas Gleixner, linux-metag, linux-arm-kernel, Andrew Cooper

On Thu, Dec 31, 2015 at 09:08:38PM +0200, Michael S. Tsirkin wrote:
> This defines __smp_xxx barriers for s390,
> for use by virtualization.
> 
> Some smp_xxx barriers are removed as they are
> defined correctly by asm-generic/barrier.h
> 
> Note: smp_mb, smp_rmb and smp_wmb are defined as full barriers
> unconditionally on this architecture.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Acked-by: Arnd Bergmann <arnd@arndb.de>
> ---
>  arch/s390/include/asm/barrier.h | 15 +++++++++------
>  1 file changed, 9 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
> index c358c31..fbd25b2 100644
> --- a/arch/s390/include/asm/barrier.h
> +++ b/arch/s390/include/asm/barrier.h
> @@ -26,18 +26,21 @@
>  #define wmb()				barrier()
>  #define dma_rmb()			mb()
>  #define dma_wmb()			mb()
> -#define smp_mb()			mb()
> -#define smp_rmb()			rmb()
> -#define smp_wmb()			wmb()
> -
> -#define smp_store_release(p, v)						\
> +#define __smp_mb()			mb()
> +#define __smp_rmb()			rmb()
> +#define __smp_wmb()			wmb()
> +#define smp_mb()			__smp_mb()
> +#define smp_rmb()			__smp_rmb()
> +#define smp_wmb()			__smp_wmb()

Why define the smp_*mb() primitives here? Would not the inclusion of
asm-generic/barrier.h do this?


* Re: [PATCH v2 17/32] arm: define __smp_xxx
  2016-01-04 13:36         ` Peter Zijlstra
  (?)
  (?)
@ 2016-01-04 13:54           ` Peter Zijlstra
  -1 siblings, 0 replies; 572+ messages in thread
From: Peter Zijlstra @ 2016-01-04 13:54 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-mips, linux-ia64, linux-sh, Tony Lindgren, virtualization,
	H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
	Russell King - ARM Linux, Arnd Bergmann, x86, xen-devel,
	Ingo Molnar, linux-xtensa, user-mode-linux-devel,
	Stefano Stabellini, Andrey Konovalov, adi-buildroot-devel,
	Thomas Gleixner, linux-metag, linux-arm-kernel, Andrew Cooper,
	linux-kernel, linuxppc-dev

On Mon, Jan 04, 2016 at 02:36:58PM +0100, Peter Zijlstra wrote:
> On Sun, Jan 03, 2016 at 11:12:44AM +0200, Michael S. Tsirkin wrote:
> > On Sat, Jan 02, 2016 at 11:24:38AM +0000, Russell King - ARM Linux wrote:
> 
> > > My only concern is that it gives people an additional handle onto a
> > > "new" set of barriers - just because they're prefixed with __*
> > > unfortunately doesn't stop anyone from using it (been there with
> > > other arch stuff before.)
> > > 
> > > I wonder whether we should consider making the smp memory barriers
> > > inline functions, so these __smp_xxx() variants can be undef'd
> > > afterwards, thereby preventing drivers getting their hands on these
> > > new macros?
> > 
> > That'd be tricky to do cleanly since asm-generic depends on
> > ifndef to add generic variants where needed.
> > 
> > But it would be possible to add a checkpatch test for this.
> 
> Wasn't the whole purpose of these things for 'drivers' (namely
> virtio/xen hypervisor interaction) to use these?

Ah, I see, you add virt_*mb() stuff later on for that use case.

So, assuming everybody does include asm-generic/barrier.h, you could
simply #undef the __smp version at the end of that, once we've generated
all the regular primitives from it, no?


* [PATCH v2 17/32] arm: define __smp_xxx
@ 2016-01-04 13:54           ` Peter Zijlstra
  0 siblings, 0 replies; 572+ messages in thread
From: Peter Zijlstra @ 2016-01-04 13:54 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Jan 04, 2016 at 02:36:58PM +0100, Peter Zijlstra wrote:
> On Sun, Jan 03, 2016 at 11:12:44AM +0200, Michael S. Tsirkin wrote:
> > On Sat, Jan 02, 2016 at 11:24:38AM +0000, Russell King - ARM Linux wrote:
> 
> > > My only concern is that it gives people an additional handle onto a
> > > "new" set of barriers - just because they're prefixed with __*
> > > unfortunately doesn't stop anyone from using it (been there with
> > > other arch stuff before.)
> > > 
> > > I wonder whether we should consider making the smp memory barriers
> > > inline functions, so these __smp_xxx() variants can be undef'd
> > > afterwards, thereby preventing drivers getting their hands on these
> > > new macros?
> > 
> > That'd be tricky to do cleanly since asm-generic depends on
> > ifndef to add generic variants where needed.
> > 
> > But it would be possible to add a checkpatch test for this.
> 
> Wasn't the whole purpose of these things for 'drivers' (namely
> virtio/xen hypervisor interaction) to use these?

Ah, I see, you add virt_*mb() stuff later on for that use case.

So, assuming everybody does include asm-generic/barrier.h, you could
simply #undef the __smp version at the end of that, once we've generated
all the regular primitives from it, no?

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 17/32] arm: define __smp_xxx
  2016-01-04 13:36         ` Peter Zijlstra
                           ` (3 preceding siblings ...)
  (?)
@ 2016-01-04 13:54         ` Peter Zijlstra
  -1 siblings, 0 replies; 572+ messages in thread
From: Peter Zijlstra @ 2016-01-04 13:54 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-mips, linux-ia64, linux-sh, Tony Lindgren, virtualization,
	H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
	Russell King - ARM Linux, Arnd Bergmann, x86, xen-devel,
	Ingo Molnar, linux-xtensa, user-mode-linux-devel,
	Stefano Stabellini, Andrey Konovalov, adi-buildroot-devel,
	Thomas Gleixner, linux-metag, linux-arm-kernel, Andrew Cooper,
	linux-kernel, linuxppc-dev

On Mon, Jan 04, 2016 at 02:36:58PM +0100, Peter Zijlstra wrote:
> On Sun, Jan 03, 2016 at 11:12:44AM +0200, Michael S. Tsirkin wrote:
> > On Sat, Jan 02, 2016 at 11:24:38AM +0000, Russell King - ARM Linux wrote:
> 
> > > My only concern is that it gives people an additional handle onto a
> > > "new" set of barriers - just because they're prefixed with __*
> > > unfortunately doesn't stop anyone from using it (been there with
> > > other arch stuff before.)
> > > 
> > > I wonder whether we should consider making the smp memory barriers
> > > inline functions, so these __smp_xxx() variants can be undef'd
> > > afterwards, thereby preventing drivers getting their hands on these
> > > new macros?
> > 
> > That'd be tricky to do cleanly since asm-generic depends on
> > ifndef to add generic variants where needed.
> > 
> > But it would be possible to add a checkpatch test for this.
> 
> Wasn't the whole purpose of these things for 'drivers' (namely
> virtio/xen hypervisor interaction) to use these?

Ah, I see, you add virt_*mb() stuff later on for that use case.

So, assuming everybody does include asm-generic/barrier.h, you could
simply #undef the __smp version at the end of that, once we've generated
all the regular primitives from it, no?

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 17/32] arm: define __smp_xxx
  2016-01-04 13:54           ` Peter Zijlstra
                               ` (3 preceding siblings ...)
  (?)
@ 2016-01-04 13:59             ` Russell King - ARM Linux
  -1 siblings, 0 replies; 572+ messages in thread
From: Russell King - ARM Linux @ 2016-01-04 13:59 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Michael S. Tsirkin, linux-kernel, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Ingo Molnar, Tony Lindgren

On Mon, Jan 04, 2016 at 02:54:20PM +0100, Peter Zijlstra wrote:
> On Mon, Jan 04, 2016 at 02:36:58PM +0100, Peter Zijlstra wrote:
> > On Sun, Jan 03, 2016 at 11:12:44AM +0200, Michael S. Tsirkin wrote:
> > > On Sat, Jan 02, 2016 at 11:24:38AM +0000, Russell King - ARM Linux wrote:
> > 
> > > > My only concern is that it gives people an additional handle onto a
> > > > "new" set of barriers - just because they're prefixed with __*
> > > > unfortunately doesn't stop anyone from using it (been there with
> > > > other arch stuff before.)
> > > > 
> > > > I wonder whether we should consider making the smp memory barriers
> > > > inline functions, so these __smp_xxx() variants can be undef'd
> > > > afterwards, thereby preventing drivers getting their hands on these
> > > > new macros?
> > > 
> > > That'd be tricky to do cleanly since asm-generic depends on
> > > ifndef to add generic variants where needed.
> > > 
> > > But it would be possible to add a checkpatch test for this.
> > 
> > Wasn't the whole purpose of these things for 'drivers' (namely
> > virtio/xen hypervisor interaction) to use these?
> 
> Ah, I see, you add virt_*mb() stuff later on for that use case.
> 
> So, assuming everybody does include asm-generic/barrier.h, you could
> simply #undef the __smp version at the end of that, once we've generated
> all the regular primitives from it, no?

Not so simple - that's why I mentioned using inline functions.

The new smp_* _macros_ are:

+#define smp_mb()       __smp_mb()

which means if we simply #undef __smp_mb(), smp_mb() then points at
something which is no longer available, and we'll end up with errors
saying that __smp_mb() doesn't exist.

My suggestion was to change:

#ifndef smp_mb
#define smp_mb()	__smp_mb()
#endif

to:

#ifndef smp_mb
static inline void smp_mb(void)
{
	__smp_mb();
}
#endif

which then means __smp_mb() and friends can be #undef'd afterwards.
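
[Editorial note: the mechanism described above can be exercised as a hypothetical stand-alone sketch — this is not the real asm-generic/barrier.h; `barrier_calls` is an instrumented stand-in so the effect is observable in user space.]

```c
/* Sketch of the inline-function idea: wrapping the arch macro in a
 * static inline captures it at definition time, so the macro name can
 * be #undef'd afterwards without breaking smp_mb(). */
static int barrier_calls;

#define __smp_mb() ((void)barrier_calls++)	/* stand-in for the arch barrier */

#ifndef smp_mb
static inline void smp_mb(void)
{
	__smp_mb();	/* macro expands here, at definition time */
}
#endif

#undef __smp_mb		/* no longer reachable by later code (drivers)... */
			/* ...but smp_mb() keeps working, as it was
			 * expanded inside the inline function above */
```

With the plain `#define smp_mb() __smp_mb()` form, the expansion would instead happen at each call site, after the `#undef`, and fail to compile.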

-- 
RMK's Patch system: http://www.arm.linux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 31/32] sh: support a 2-byte smp_store_mb
  2015-12-31 19:09     ` Michael S. Tsirkin
  (?)
@ 2016-01-04 14:05       ` Peter Zijlstra
  -1 siblings, 0 replies; 572+ messages in thread
From: Peter Zijlstra @ 2016-01-04 14:05 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-kernel, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Ingo Molnar

On Thu, Dec 31, 2015 at 09:09:47PM +0200, Michael S. Tsirkin wrote:
> At the moment, xchg on sh only supports 4 and 1 byte values, so using it
> from smp_store_mb means attempts to store a 2 byte value using this
> macro fail.
> 
> And happens to be exactly what virtio drivers want to do.
> 
> Check size and fall back to a slower, but safe, WRITE_ONCE+smp_mb.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> ---
>  arch/sh/include/asm/barrier.h | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/sh/include/asm/barrier.h b/arch/sh/include/asm/barrier.h
> index f887c64..0cc5735 100644
> --- a/arch/sh/include/asm/barrier.h
> +++ b/arch/sh/include/asm/barrier.h
> @@ -32,7 +32,15 @@
>  #define ctrl_barrier()	__asm__ __volatile__ ("nop;nop;nop;nop;nop;nop;nop;nop")
>  #endif
>  
> -#define __smp_store_mb(var, value) do { (void)xchg(&var, value); } while (0)
> +#define __smp_store_mb(var, value) do { \
> +	if (sizeof(var) != 4 && sizeof(var) != 1) { \
> +		 WRITE_ONCE(var, value); \
> +		__smp_mb(); \
> +	} else { \
> +		(void)xchg(&var, value);  \
> +	} \
> +} while (0)

So SH is an orphaned arch, which is also why I did not comment on using
xchg() for the UP smp_store_mb() thing.

But I really think we should try fixing the xchg() implementation
instead of this duct-tape.
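
[Editorial note: the quoted macro can be exercised in plain C with stand-in primitives — a minimal sketch, assuming GCC `__sync` builtins as substitutes for the real sh barriers and `xchg`.]

```c
/* User-space sketch of the fallback in the quoted patch; WRITE_ONCE,
 * __smp_mb and xchg below are GCC-builtin stand-ins, not the sh
 * implementations.  The sizeof() test is resolved at compile time,
 * so only one branch survives in the generated code. */
#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))
#define __smp_mb()       __sync_synchronize()
#define xchg(ptr, v)     __sync_lock_test_and_set((ptr), (v))

#define __smp_store_mb(var, value) do {			\
	if (sizeof(var) != 4 && sizeof(var) != 1) {	\
		/* 2- and 8-byte cases: plain store + full barrier */ \
		WRITE_ONCE(var, value);			\
		__smp_mb();				\
	} else {					\
		/* sizes xchg() supports on sh */	\
		(void)xchg(&(var), (value));		\
	}						\
} while (0)
```

The point of contention in this subthread is whether the 2-byte case should instead be handled by extending the arch's `xchg()` itself.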

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 31/32] sh: support a 2-byte smp_store_mb
@ 2016-01-04 14:05       ` Peter Zijlstra
  0 siblings, 0 replies; 572+ messages in thread
From: Peter Zijlstra @ 2016-01-04 14:05 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-kernel, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Ingo Molnar

On Thu, Dec 31, 2015 at 09:09:47PM +0200, Michael S. Tsirkin wrote:
> At the moment, xchg on sh only supports 4 and 1 byte values, so using it
> from smp_store_mb means attempts to store a 2 byte value using this
> macro fail.
> 
> And happens to be exactly what virtio drivers want to do.
> 
> Check size and fall back to a slower, but safe, WRITE_ONCE+smp_mb.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> ---
>  arch/sh/include/asm/barrier.h | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/sh/include/asm/barrier.h b/arch/sh/include/asm/barrier.h
> index f887c64..0cc5735 100644
> --- a/arch/sh/include/asm/barrier.h
> +++ b/arch/sh/include/asm/barrier.h
> @@ -32,7 +32,15 @@
>  #define ctrl_barrier()	__asm__ __volatile__ ("nop;nop;nop;nop;nop;nop;nop;nop")
>  #endif
>  
> -#define __smp_store_mb(var, value) do { (void)xchg(&var, value); } while (0)
> +#define __smp_store_mb(var, value) do { \
> +	if (sizeof(var) != 4 && sizeof(var) != 1) { \
> +		 WRITE_ONCE(var, value); \
> +		__smp_mb(); \
> +	} else { \
> +		(void)xchg(&var, value);  \
> +	} \
> +} while (0)

So SH is an orphaned arch, which is also why I did not comment on using
xchg() for the UP smp_store_mb() thing.

But I really think we should try fixing the xchg() implementation
instead of this duct-tape.

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 33/34] xenbus: use virt_xxx barriers
  2015-12-31 19:10     ` Michael S. Tsirkin
                           ` (3 preceding siblings ...)
  (?)
@ 2016-01-04 14:09         ` Peter Zijlstra
  -1 siblings, 0 replies; 572+ messages in thread
From: Peter Zijlstra @ 2016-01-04 14:09 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-kernel, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Konrad Rzeszutek Wilk,
	Boris Ostrovsky

On Thu, Dec 31, 2015 at 09:10:01PM +0200, Michael S. Tsirkin wrote:
> drivers/xen/xenbus/xenbus_comms.c uses
> full memory barriers to communicate with the other side.
> 
> For guests compiled with CONFIG_SMP, smp_wmb and smp_mb
> would be sufficient, so mb() and wmb() here are only needed if
> a non-SMP guest runs on an SMP host.
> 
> Switch to virt_xxx barriers which serve this exact purpose.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> ---
>  drivers/xen/xenbus/xenbus_comms.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/xen/xenbus/xenbus_comms.c b/drivers/xen/xenbus/xenbus_comms.c
> index fdb0f33..ecdecce 100644
> --- a/drivers/xen/xenbus/xenbus_comms.c
> +++ b/drivers/xen/xenbus/xenbus_comms.c
> @@ -123,14 +123,14 @@ int xb_write(const void *data, unsigned len)
>  			avail = len;
>  
>  		/* Must write data /after/ reading the consumer index. */
> -		mb();
> +		virt_mb();
>  

So it's possible to remove this barrier entirely; see the "CONTROL
DEPENDENCIES" section of memory-barriers.txt.

But do that in a separate patch series and only if you really really
need the performance.

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 06/32] s390: reuse asm-generic/barrier.h
  2016-01-04 13:20     ` Peter Zijlstra
  (?)
  (?)
@ 2016-01-04 15:03       ` Martin Schwidefsky
  -1 siblings, 0 replies; 572+ messages in thread
From: Martin Schwidefsky @ 2016-01-04 15:03 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-mips, linux-ia64, Michael S. Tsirkin, Heiko Carstens,
	virtualization, H. Peter Anvin, sparclinux, Ingo Molnar,
	linux-arch, linux-s390, Davidlohr Bueso, Arnd Bergmann, linux-sh,
	x86, Christian Borntraeger, xen-devel, Ingo Molnar, linux-xtensa,
	user-mode-linux-devel, Stefano Stabellini, Andrey Konovalov,
	adi-buildroot-devel, Thomas Gleixner, linux-metag,
	linux-arm-kernel, Andrew

On Mon, 4 Jan 2016 14:20:42 +0100
Peter Zijlstra <peterz@infradead.org> wrote:

> On Thu, Dec 31, 2015 at 09:06:30PM +0200, Michael S. Tsirkin wrote:
> > On s390 read_barrier_depends, smp_read_barrier_depends
> > smp_store_mb(), smp_mb__before_atomic and smp_mb__after_atomic match the
> > asm-generic variants exactly. Drop the local definitions and pull in
> > asm-generic/barrier.h instead.
> > 
> > This is in preparation for refactoring this code area.
> > 
> > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > ---
> >  arch/s390/include/asm/barrier.h | 10 ++--------
> >  1 file changed, 2 insertions(+), 8 deletions(-)
> > 
> > diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
> > index 7ffd0b1..c358c31 100644
> > --- a/arch/s390/include/asm/barrier.h
> > +++ b/arch/s390/include/asm/barrier.h
> > @@ -30,14 +30,6 @@
> >  #define smp_rmb()			rmb()
> >  #define smp_wmb()			wmb()
> >  
> > -#define read_barrier_depends()		do { } while (0)
> > -#define smp_read_barrier_depends()	do { } while (0)
> > -
> > -#define smp_mb__before_atomic()		smp_mb()
> > -#define smp_mb__after_atomic()		smp_mb()
> 
> As per:
> 
>   lkml.kernel.org/r/20150921112252.3c2937e1@mschwide
> 
> s390 should change this to barrier() instead of smp_mb() and hence
> should not use the generic versions.
 
Yes, we wanted to simplify this. Thanks for the reminder, I'll queue
a patch.

-- 
blue skies,
   Martin.

"Reality continues to ruin my life." - Calvin.


^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 20/32] metag: define __smp_xxx
  2016-01-04 13:41       ` Peter Zijlstra
                           ` (4 preceding siblings ...)
  (?)
@ 2016-01-04 15:25         ` James Hogan
  -1 siblings, 0 replies; 572+ messages in thread
From: James Hogan @ 2016-01-04 15:25 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Michael S. Tsirkin, linux-kernel, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Ingo Molnar

[-- Attachment #1: Type: text/plain, Size: 1789 bytes --]

Hi Peter,

On Mon, Jan 04, 2016 at 02:41:28PM +0100, Peter Zijlstra wrote:
> On Thu, Dec 31, 2015 at 09:08:22PM +0200, Michael S. Tsirkin wrote:
> > +#ifdef CONFIG_SMP
> > +#define fence() metag_fence()
> > +#else
> > +#define fence()		do { } while (0)
> >  #endif
> 
> James, it strikes me as odd that fence() is a no-op instead of a
> barrier() for UP, can you verify/explain?

fence() is an unfortunate workaround for a specific issue on a certain
SoC, where writes from different hw threads get reordered outside of the
core, resulting in incoherency between RAM and cache. It has slightly
different semantics to the normal SMP barriers, since I was assured it
is required before a write rather than after it.

Here's the comment:

> This is needed before a write to shared memory in a critical section,
> to prevent external reordering of writes before the fence on other
> threads with writes after the fence on this thread (and to prevent the
> ensuing cache-memory incoherence). It is therefore ineffective if used
> after and on the same thread as a write.

It is used along with the metag specific __global_lock1() (global
voluntary lock between hw threads) whenever a write is performed, and by
smp_mb/smp_rmb to try to catch other cases, but I've never been
confident this fixes every single corner case, since there could be
other places where multiple CPUs perform unsynchronised writes to the
same memory location, and expect cache not to become incoherent at that
location.

It seemed to be sufficient to achieve stability however, and SMP on Meta
Linux never made it into a product anyway, since the other hw thread
tended to be used for RTOS stuff, so it didn't seem worth extending the
generic barrier API for it.

Cheers
James

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 20/32] metag: define __smp_xxx
  2016-01-04 15:25         ` James Hogan
  (?)
  (?)
@ 2016-01-04 15:30           ` Peter Zijlstra
  -1 siblings, 0 replies; 572+ messages in thread
From: Peter Zijlstra @ 2016-01-04 15:30 UTC (permalink / raw)
  To: James Hogan
  Cc: linux-mips, linux-ia64, Michael S. Tsirkin, virtualization,
	H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
	Arnd Bergmann, Davidlohr Bueso, linux-sh, x86, xen-devel,
	Ingo Molnar, linux-xtensa, user-mode-linux-devel,
	Stefano Stabellini, Andrey Konovalov, adi-buildroot-devel,
	Thomas Gleixner, linux-metag, linux-arm-kernel, Andrew Cooper,
	linux-kernel, linuxppc-dev

On Mon, Jan 04, 2016 at 03:25:58PM +0000, James Hogan wrote:
> It is used along with the metag specific __global_lock1() (global
> voluntary lock between hw threads) whenever a write is performed, and by
> smp_mb/smp_rmb to try to catch other cases, but I've never been
> confident this fixes every single corner case, since there could be
> other places where multiple CPUs perform unsynchronised writes to the
> same memory location, and expect cache not to become incoherent at that
> location.

Ah, yuck, I thought blackfin was the only one attempting !coherent SMP.
And yes, this is bound to break in lots of places in subtle ways. We
very much assume cache coherency for SMP in generic code.

> It seemed to be sufficient to achieve stability however, and SMP on Meta
> Linux never made it into a product anyway, since the other hw thread
> tended to be used for RTOS stuff, so it didn't seem worth extending the
> generic barrier API for it.

*phew*, should we take it out then, just to be sure nobody accidentally
tries to use it?

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 20/32] metag: define __smp_xxx
@ 2016-01-04 15:30           ` Peter Zijlstra
  0 siblings, 0 replies; 572+ messages in thread
From: Peter Zijlstra @ 2016-01-04 15:30 UTC (permalink / raw)
  To: James Hogan
  Cc: Michael S. Tsirkin, linux-kernel, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Ingo Molnar, Davidlohr Bueso,
	Andrey Konovalov

On Mon, Jan 04, 2016 at 03:25:58PM +0000, James Hogan wrote:
> It is used along with the metag specific __global_lock1() (global
> voluntary lock between hw threads) whenever a write is performed, and by
> smp_mb/smp_rmb to try to catch other cases, but I've never been
> confident this fixes every single corner case, since there could be
> other places where multiple CPUs perform unsynchronised writes to the
> same memory location, and expect cache not to become incoherent at that
> location.

Ah, yuck, I thought blackfin was the only one attempting !coherent SMP.
And yes, this is bound to break in lots of places in subtle ways. We
very much assume cache coherency for SMP in generic code.

> It seemed to be sufficient to achieve stability however, and SMP on Meta
> Linux never made it into a product anyway, since the other hw thread
> tended to be used for RTOS stuff, so it didn't seem worth extending the
> generic barrier API for it.

*phew*, should we take it out then, just to be sure nobody accidentally
tries to use it then?

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 20/32] metag: define __smp_xxx
@ 2016-01-04 15:30           ` Peter Zijlstra
  0 siblings, 0 replies; 572+ messages in thread
From: Peter Zijlstra @ 2016-01-04 15:30 UTC (permalink / raw)
  To: James Hogan
  Cc: linux-mips, linux-ia64, Michael S. Tsirkin, virtualization,
	H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
	Arnd Bergmann, Davidlohr Bueso, linux-sh, x86, xen-devel,
	Ingo Molnar, linux-xtensa, user-mode-linux-devel,
	Stefano Stabellini, Andrey Konovalov, adi-buildroot-devel,
	Thomas Gleixner, linux-metag, linux-arm-kernel, Andrew Cooper,
	linux-kernel, linuxppc-dev

On Mon, Jan 04, 2016 at 03:25:58PM +0000, James Hogan wrote:
> It is used along with the metag specific __global_lock1() (global
> voluntary lock between hw threads) whenever a write is performed, and by
> smp_mb/smp_rmb to try to catch other cases, but I've never been
> confident this fixes every single corner case, since there could be
> other places where multiple CPUs perform unsynchronised writes to the
> same memory location, and expect cache not to become incoherent at that
> location.

Ah, yuck, I thought blackfin was the only one attempting !coherent SMP.
And yes, this is bound to break in lots of places in subtle ways. We
very much assume cache coherency for SMP in generic code.

> It seemed to be sufficient to achieve stability however, and SMP on Meta
> Linux never made it into a product anyway, since the other hw thread
> tended to be used for RTOS stuff, so it didn't seem worth extending the
> generic barrier API for it.

*phew*, should we take it out then, just to be sure nobody accidentally
tries to use it then?

^ permalink raw reply	[flat|nested] 572+ messages in thread

* [PATCH v2 20/32] metag: define __smp_xxx
@ 2016-01-04 15:30           ` Peter Zijlstra
  0 siblings, 0 replies; 572+ messages in thread
From: Peter Zijlstra @ 2016-01-04 15:30 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Jan 04, 2016 at 03:25:58PM +0000, James Hogan wrote:
> It is used along with the metag specific __global_lock1() (global
> voluntary lock between hw threads) whenever a write is performed, and by
> smp_mb/smp_rmb to try to catch other cases, but I've never been
> confident this fixes every single corner case, since there could be
> other places where multiple CPUs perform unsynchronised writes to the
> same memory location, and expect cache not to become incoherent at that
> location.

Ah, yuck, I thought blackfin was the only one attempting !coherent SMP.
And yes, this is bound to break in lots of places in subtle ways. We
very much assume cache coherency for SMP in generic code.

> It seemed to be sufficient to achieve stability however, and SMP on Meta
> Linux never made it into a product anyway, since the other hw thread
> tended to be used for RTOS stuff, so it didn't seem worth extending the
> generic barrier API for it.

*phew*, should we take it out then, just to be sure nobody accidentally
tries to use it then?

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 20/32] metag: define __smp_xxx
  2016-01-04 15:25         ` James Hogan
                           ` (5 preceding siblings ...)
  (?)
@ 2016-01-04 15:30         ` Peter Zijlstra
  -1 siblings, 0 replies; 572+ messages in thread
From: Peter Zijlstra @ 2016-01-04 15:30 UTC (permalink / raw)
  To: James Hogan
  Cc: linux-mips, linux-ia64, Michael S. Tsirkin, virtualization,
	H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
	Arnd Bergmann, Davidlohr Bueso, linux-sh, x86, xen-devel,
	Ingo Molnar, linux-xtensa, user-mode-linux-devel,
	Stefano Stabellini, Andrey Konovalov, adi-buildroot-devel,
	Thomas Gleixner, linux-metag, linux-arm-kernel, Andrew Cooper,
	linux-kernel, linuxppc-dev

On Mon, Jan 04, 2016 at 03:25:58PM +0000, James Hogan wrote:
> It is used along with the metag specific __global_lock1() (global
> voluntary lock between hw threads) whenever a write is performed, and by
> smp_mb/smp_rmb to try to catch other cases, but I've never been
> confident this fixes every single corner case, since there could be
> other places where multiple CPUs perform unsynchronised writes to the
> same memory location, and expect cache not to become incoherent at that
> location.

Ah, yuck, I thought blackfin was the only one attempting !coherent SMP.
And yes, this is bound to break in lots of places in subtle ways. We
very much assume cache coherency for SMP in generic code.

> It seemed to be sufficient to achieve stability however, and SMP on Meta
> Linux never made it into a product anyway, since the other hw thread
> tended to be used for RTOS stuff, so it didn't seem worth extending the
> generic barrier API for it.

*phew*, should we take it out then, just to be sure nobody accidentally
tries to use it?

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 20/32] metag: define __smp_xxx
  2016-01-04 15:30           ` Peter Zijlstra
                               ` (4 preceding siblings ...)
  (?)
@ 2016-01-04 16:04             ` James Hogan
  -1 siblings, 0 replies; 572+ messages in thread
From: James Hogan @ 2016-01-04 16:04 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Michael S. Tsirkin, linux-kernel, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Ingo Molnar

[-- Attachment #1: Type: text/plain, Size: 1505 bytes --]

On Mon, Jan 04, 2016 at 04:30:36PM +0100, Peter Zijlstra wrote:
> On Mon, Jan 04, 2016 at 03:25:58PM +0000, James Hogan wrote:
> > It is used along with the metag specific __global_lock1() (global
> > voluntary lock between hw threads) whenever a write is performed, and by
> > smp_mb/smp_rmb to try to catch other cases, but I've never been
> > confident this fixes every single corner case, since there could be
> > other places where multiple CPUs perform unsynchronised writes to the
> > same memory location, and expect cache not to become incoherent at that
> > location.
> 
> Ah, yuck, I thought blackfin was the only one attempting !coherent SMP.
> And yes, this is bound to break in lots of places in subtle ways. We
> very much assume cache coherency for SMP in generic code.

Well, it's usually completely coherent, it's just a bit dodgy in a
particular hardware corner case, which was pretty hard to hit, even
without these workarounds.

> 
> > It seemed to be sufficient to achieve stability however, and SMP on Meta
> > Linux never made it into a product anyway, since the other hw thread
> > tended to be used for RTOS stuff, so it didn't seem worth extending the
> > generic barrier API for it.
> 
> *phew*, should we take it out then, just to be sure nobody accidentally
> tries to use it?

SMP support on this SoC you mean? I doubt it'll be a problem tbh, and
it'd work fine in QEMU when emulating this SoC, so I'd prefer to keep it
in.

Cheers
James

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 17/32] arm: define __smp_xxx
  2016-01-04 13:36         ` Peter Zijlstra
  (?)
  (?)
@ 2016-01-04 20:12           ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-04 20:12 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-mips, linux-ia64, linux-sh, Tony Lindgren, virtualization,
	H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
	Russell King - ARM Linux, Arnd Bergmann, x86, xen-devel,
	Ingo Molnar, linux-xtensa, user-mode-linux-devel,
	Stefano Stabellini, Andrey Konovalov, adi-buildroot-devel,
	Thomas Gleixner, linux-metag, linux-arm-kernel, Andrew Cooper,
	linux-kernel, linuxppc-dev

On Mon, Jan 04, 2016 at 02:36:58PM +0100, Peter Zijlstra wrote:
> On Sun, Jan 03, 2016 at 11:12:44AM +0200, Michael S. Tsirkin wrote:
> > On Sat, Jan 02, 2016 at 11:24:38AM +0000, Russell King - ARM Linux wrote:
> 
> > > My only concern is that it gives people an additional handle onto a
> > > "new" set of barriers - just because they're prefixed with __*
> > > unfortunately doesn't stop anyone from using it (been there with
> > > other arch stuff before.)
> > > 
> > > I wonder whether we should consider making the smp memory barriers
> > > inline functions, so these __smp_xxx() variants can be undef'd
> > > afterwards, thereby preventing drivers getting their hands on these
> > > new macros?
> > 
> > That'd be tricky to do cleanly since asm-generic depends on
> > ifndef to add generic variants where needed.
> > 
> > But it would be possible to add a checkpatch test for this.
> 
> Wasn't the whole purpose of these things for 'drivers' (namely
> virtio/xen hypervisor interaction) to use these?

My takeaway from the discussion with you was that virtualization is
probably the only valid use-case.  So at David Miller's suggestion
there's a patch later in the series that adds virt_xxx wrappers; these
are then used by virtio and Xen, and later maybe others.

> And I suppose most of virtio would actually be modules, so you cannot do
> what I did with preempt_enable_no_resched() either.
> 
> But yes, it would be good to limit the use of these things.

Right, so the trick is that checkpatch warns about use of
__smp_xxx, and hopefully people are not crazy enough
to use the virt_xxx variants for non-virtual drivers.

-- 
MST

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 22/32] s390: define __smp_xxx
  2016-01-04 13:45     ` Peter Zijlstra
  (?)
  (?)
@ 2016-01-04 20:18       ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-04 20:18 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Martin Schwidefsky,
	Heiko Carstens, Ingo Molnar

On Mon, Jan 04, 2016 at 02:45:25PM +0100, Peter Zijlstra wrote:
> On Thu, Dec 31, 2015 at 09:08:38PM +0200, Michael S. Tsirkin wrote:
> > This defines __smp_xxx barriers for s390,
> > for use by virtualization.
> > 
> > Some smp_xxx barriers are removed as they are
> > defined correctly by asm-generic/barrier.h
> > 
> > Note: smp_mb, smp_rmb and smp_wmb are defined as full barriers
> > unconditionally on this architecture.
> > 
> > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > ---
> >  arch/s390/include/asm/barrier.h | 15 +++++++++------
> >  1 file changed, 9 insertions(+), 6 deletions(-)
> > 
> > diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
> > index c358c31..fbd25b2 100644
> > --- a/arch/s390/include/asm/barrier.h
> > +++ b/arch/s390/include/asm/barrier.h
> > @@ -26,18 +26,21 @@
> >  #define wmb()				barrier()
> >  #define dma_rmb()			mb()
> >  #define dma_wmb()			mb()
> > -#define smp_mb()			mb()
> > -#define smp_rmb()			rmb()
> > -#define smp_wmb()			wmb()
> > -
> > -#define smp_store_release(p, v)						\
> > +#define __smp_mb()			mb()
> > +#define __smp_rmb()			rmb()
> > +#define __smp_wmb()			wmb()
> > +#define smp_mb()			__smp_mb()
> > +#define smp_rmb()			__smp_rmb()
> > +#define smp_wmb()			__smp_wmb()
> 
> Why define the smp_*mb() primitives here? Would not the inclusion of
> asm-generic/barrier.h do this?

No, because the generic one is a no-op on !SMP, while this one isn't.

Please note this patch just reorders code without making any
functional changes; at the moment, the smp_xxx barriers on s390
are always non-empty.

Some of this could be sub-optimal, but since on s390 Linux always
runs under a hypervisor, I am not sure it is safe to use the
generic version - in other words, it may well be that on s390 the
smp_ and virt_ barriers must be equivalent.

If in fact this turns out to be wrong, I can pick up
a patch to change this, but I'd rather make this
a patch on top so that my patches are testable
just by compiling and comparing the binary.

-- 
MST

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 06/32] s390: reuse asm-generic/barrier.h
  2016-01-04 13:20     ` Peter Zijlstra
  (?)
  (?)
@ 2016-01-04 20:34       ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-04 20:34 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Martin Schwidefsky,
	Heiko Carstens, Ingo Molnar

On Mon, Jan 04, 2016 at 02:20:42PM +0100, Peter Zijlstra wrote:
> On Thu, Dec 31, 2015 at 09:06:30PM +0200, Michael S. Tsirkin wrote:
> > On s390 read_barrier_depends, smp_read_barrier_depends
> > smp_store_mb(), smp_mb__before_atomic and smp_mb__after_atomic match the
> > asm-generic variants exactly. Drop the local definitions and pull in
> > asm-generic/barrier.h instead.
> > 
> > This is in preparation for refactoring this code area.
> > 
> > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > ---
> >  arch/s390/include/asm/barrier.h | 10 ++--------
> >  1 file changed, 2 insertions(+), 8 deletions(-)
> > 
> > diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
> > index 7ffd0b1..c358c31 100644
> > --- a/arch/s390/include/asm/barrier.h
> > +++ b/arch/s390/include/asm/barrier.h
> > @@ -30,14 +30,6 @@
> >  #define smp_rmb()			rmb()
> >  #define smp_wmb()			wmb()
> >  
> > -#define read_barrier_depends()		do { } while (0)
> > -#define smp_read_barrier_depends()	do { } while (0)
> > -
> > -#define smp_mb__before_atomic()		smp_mb()
> > -#define smp_mb__after_atomic()		smp_mb()
> 
> As per:
> 
>   lkml.kernel.org/r/20150921112252.3c2937e1@mschwide
> 
> s390 should change this to barrier() instead of smp_mb() and hence
> should not use the generic versions.

Thanks Peter!

OK, so I will just rename these to __smp_mb__before_atomic and
__smp_mb__after_atomic but keep them around.

I'm not changing these - that's best left to s390 maintainers.

Should I add a TODO comment about changing them to barrier() so
we don't forget?

-- 
MST

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 17/32] arm: define __smp_xxx
  2016-01-04 13:54           ` Peter Zijlstra
  (?)
  (?)
@ 2016-01-04 20:39             ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-04 20:39 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Russell King - ARM Linux, linux-kernel, Arnd Bergmann,
	linux-arch, Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Ingo Molnar, Tony Lindgren

On Mon, Jan 04, 2016 at 02:54:20PM +0100, Peter Zijlstra wrote:
> On Mon, Jan 04, 2016 at 02:36:58PM +0100, Peter Zijlstra wrote:
> > On Sun, Jan 03, 2016 at 11:12:44AM +0200, Michael S. Tsirkin wrote:
> > > On Sat, Jan 02, 2016 at 11:24:38AM +0000, Russell King - ARM Linux wrote:
> > 
> > > > My only concern is that it gives people an additional handle onto a
> > > > "new" set of barriers - just because they're prefixed with __*
> > > > unfortunately doesn't stop anyone from using it (been there with
> > > > other arch stuff before.)
> > > > 
> > > > I wonder whether we should consider making the smp memory barriers
> > > > inline functions, so these __smp_xxx() variants can be undef'd
> > > > afterwards, thereby preventing drivers getting their hands on these
> > > > new macros?
> > > 
> > > That'd be tricky to do cleanly, since asm-generic depends on
> > > #ifndef to add generic variants where needed.
> > > 
> > > But it would be possible to add a checkpatch test for this.
> > 
> > Wasn't the whole purpose of these things for 'drivers' (namely
> > virtio/xen hypervisor interaction) to use these?
> 
> Ah, I see, you add virt_*mb() stuff later on for that use case.
> 
> So, assuming everybody does include asm-generic/barrier.h, you could
> simply #undef the __smp version at the end of that, once we've generated
> all the regular primitives from it, no?

Maybe I misunderstand, but I don't think so:

------>
#define __smp_xxx FOO
#define smp_xxx __smp_xxx
#undef __smp_xxx

smp_xxx
<------

resolves to __smp_xxx, not FOO.

That's why I went the checkpatch way.


-- 
MST

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 06/32] s390: reuse asm-generic/barrier.h
  2016-01-04 15:03       ` Martin Schwidefsky
  (?)
  (?)
@ 2016-01-04 20:42         ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-04 20:42 UTC (permalink / raw)
  To: Martin Schwidefsky
  Cc: Peter Zijlstra, linux-kernel, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Heiko Carstens, Ingo Molnar

On Mon, Jan 04, 2016 at 04:03:39PM +0100, Martin Schwidefsky wrote:
> On Mon, 4 Jan 2016 14:20:42 +0100
> Peter Zijlstra <peterz@infradead.org> wrote:
> 
> > On Thu, Dec 31, 2015 at 09:06:30PM +0200, Michael S. Tsirkin wrote:
> > > On s390, read_barrier_depends, smp_read_barrier_depends,
> > > smp_store_mb(), smp_mb__before_atomic and smp_mb__after_atomic match the
> > > asm-generic variants exactly. Drop the local definitions and pull in
> > > asm-generic/barrier.h instead.
> > > 
> > > This is in preparation for refactoring this code area.
> > > 
> > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > > ---
> > >  arch/s390/include/asm/barrier.h | 10 ++--------
> > >  1 file changed, 2 insertions(+), 8 deletions(-)
> > > 
> > > diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
> > > index 7ffd0b1..c358c31 100644
> > > --- a/arch/s390/include/asm/barrier.h
> > > +++ b/arch/s390/include/asm/barrier.h
> > > @@ -30,14 +30,6 @@
> > >  #define smp_rmb()			rmb()
> > >  #define smp_wmb()			wmb()
> > >  
> > > -#define read_barrier_depends()		do { } while (0)
> > > -#define smp_read_barrier_depends()	do { } while (0)
> > > -
> > > -#define smp_mb__before_atomic()		smp_mb()
> > > -#define smp_mb__after_atomic()		smp_mb()
> > 
> > As per:
> > 
> >   lkml.kernel.org/r/20150921112252.3c2937e1@mschwide
> > 
> > s390 should change this to barrier() instead of smp_mb() and hence
> > should not use the generic versions.
>  
> Yes, we wanted to simplify this. Thanks for the reminder, I'll queue
> a patch.

Could you base it on my patchset, maybe, to avoid conflicts,
and I'll merge it?
Or, if it's just replacing these two with barrier(), I can do this
myself easily.

> -- 
> blue skies,
>    Martin.
> 
> "Reality continues to ruin my life." - Calvin.

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 10/32] metag: reuse asm-generic/barrier.h
  2015-12-31 19:07     ` Michael S. Tsirkin
                         ` (3 preceding siblings ...)
  (?)
@ 2016-01-04 23:24       ` James Hogan
  -1 siblings, 0 replies; 572+ messages in thread
From: James Hogan @ 2016-01-04 23:24 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-kernel, Peter Zijlstra, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Ingo Molnar

[-- Attachment #1: Type: text/plain, Size: 2103 bytes --]

On Thu, Dec 31, 2015 at 09:07:02PM +0200, Michael S. Tsirkin wrote:
> On metag, dma_rmb, dma_wmb, smp_store_mb, read_barrier_depends,
> smp_read_barrier_depends, smp_store_release and smp_load_acquire match
> the asm-generic variants exactly. Drop the local definitions and pull in
> asm-generic/barrier.h instead.
> 
> This is in preparation for refactoring this code area.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Acked-by: Arnd Bergmann <arnd@arndb.de>

Looks good, and I've confirmed no text change (once patch 1 is applied,
that is).

Acked-by: James Hogan <james.hogan@imgtec.com>

Thanks
James

> ---
>  arch/metag/include/asm/barrier.h | 25 ++-----------------------
>  1 file changed, 2 insertions(+), 23 deletions(-)
> 
> diff --git a/arch/metag/include/asm/barrier.h b/arch/metag/include/asm/barrier.h
> index 172b7e5..b5b778b 100644
> --- a/arch/metag/include/asm/barrier.h
> +++ b/arch/metag/include/asm/barrier.h
> @@ -44,9 +44,6 @@ static inline void wr_fence(void)
>  #define rmb()		barrier()
>  #define wmb()		mb()
>  
> -#define dma_rmb()	rmb()
> -#define dma_wmb()	wmb()
> -
>  #ifndef CONFIG_SMP
>  #define fence()		do { } while (0)
>  #define smp_mb()        barrier()
> @@ -81,27 +78,9 @@ static inline void fence(void)
>  #endif
>  #endif
>  
> -#define read_barrier_depends()		do { } while (0)
> -#define smp_read_barrier_depends()	do { } while (0)
> -
> -#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); smp_mb(); } while (0)
> -
> -#define smp_store_release(p, v)						\
> -do {									\
> -	compiletime_assert_atomic_type(*p);				\
> -	smp_mb();							\
> -	WRITE_ONCE(*p, v);						\
> -} while (0)
> -
> -#define smp_load_acquire(p)						\
> -({									\
> -	typeof(*p) ___p1 = READ_ONCE(*p);				\
> -	compiletime_assert_atomic_type(*p);				\
> -	smp_mb();							\
> -	___p1;								\
> -})
> -
>  #define smp_mb__before_atomic()	barrier()
>  #define smp_mb__after_atomic()	barrier()
>  
> +#include <asm-generic/barrier.h>
> +
>  #endif /* _ASM_METAG_BARRIER_H */
> -- 
> MST
> 

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 10/32] metag: reuse asm-generic/barrier.h
@ 2016-01-04 23:24       ` James Hogan
  0 siblings, 0 replies; 572+ messages in thread
From: James Hogan @ 2016-01-04 23:24 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-kernel, Peter Zijlstra, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Ingo Molnar, Michael Ellerman,
	Andrey Konovalov

[-- Attachment #1: Type: text/plain, Size: 2103 bytes --]

On Thu, Dec 31, 2015 at 09:07:02PM +0200, Michael S. Tsirkin wrote:
> On metag dma_rmb, dma_wmb, smp_store_mb, read_barrier_depends,
> smp_read_barrier_depends, smp_store_release and smp_load_acquire  match
> the asm-generic variants exactly. Drop the local definitions and pull in
> asm-generic/barrier.h instead.
> 
> This is in preparation to refactoring this code area.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Acked-by: Arnd Bergmann <arnd@arndb.de>

Looks good, and confirmed no text change (once patch 1 is applied that
is).

Acked-by: James Hogan <james.hogan@imgtec.com>

Thanks
James

> ---
>  arch/metag/include/asm/barrier.h | 25 ++-----------------------
>  1 file changed, 2 insertions(+), 23 deletions(-)
> 
> diff --git a/arch/metag/include/asm/barrier.h b/arch/metag/include/asm/barrier.h
> index 172b7e5..b5b778b 100644
> --- a/arch/metag/include/asm/barrier.h
> +++ b/arch/metag/include/asm/barrier.h
> @@ -44,9 +44,6 @@ static inline void wr_fence(void)
>  #define rmb()		barrier()
>  #define wmb()		mb()
>  
> -#define dma_rmb()	rmb()
> -#define dma_wmb()	wmb()
> -
>  #ifndef CONFIG_SMP
>  #define fence()		do { } while (0)
>  #define smp_mb()        barrier()
> @@ -81,27 +78,9 @@ static inline void fence(void)
>  #endif
>  #endif
>  
> -#define read_barrier_depends()		do { } while (0)
> -#define smp_read_barrier_depends()	do { } while (0)
> -
> -#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); smp_mb(); } while (0)
> -
> -#define smp_store_release(p, v)						\
> -do {									\
> -	compiletime_assert_atomic_type(*p);				\
> -	smp_mb();							\
> -	WRITE_ONCE(*p, v);						\
> -} while (0)
> -
> -#define smp_load_acquire(p)						\
> -({									\
> -	typeof(*p) ___p1 = READ_ONCE(*p);				\
> -	compiletime_assert_atomic_type(*p);				\
> -	smp_mb();							\
> -	___p1;								\
> -})
> -
>  #define smp_mb__before_atomic()	barrier()
>  #define smp_mb__after_atomic()	barrier()
>  
> +#include <asm-generic/barrier.h>
> +
>  #endif /* _ASM_METAG_BARRIER_H */
> -- 
> MST
> 

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 20/32] metag: define __smp_xxx
  2015-12-31 19:08     ` Michael S. Tsirkin
                         ` (4 preceding siblings ...)
  (?)
@ 2016-01-05  0:09       ` James Hogan
  -1 siblings, 0 replies; 572+ messages in thread
From: James Hogan @ 2016-01-05  0:09 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-kernel, Peter Zijlstra, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Ingo Molnar

[-- Attachment #1: Type: text/plain, Size: 3506 bytes --]

Hi Michael,

On Thu, Dec 31, 2015 at 09:08:22PM +0200, Michael S. Tsirkin wrote:
> This defines __smp_xxx barriers for metag,
> for use by virtualization.
> 
> smp_xxx barriers are removed as they are
> defined correctly by asm-generic/barrier.h
> 
> Note: as __smp_XX macros should not depend on CONFIG_SMP, they can not
> use the existing fence() macro since that is defined differently between
> SMP and !SMP.  For this reason, this patch introduces a wrapper
> metag_fence() that doesn't depend on CONFIG_SMP.
> fence() is then defined using that, depending on CONFIG_SMP.

I'm not a fan of the inconsistent commit message wrapping. I wrap to 72
columns (although I now notice SubmittingPatches says to use 75...).

> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Acked-by: Arnd Bergmann <arnd@arndb.de>
> ---
>  arch/metag/include/asm/barrier.h | 32 +++++++++++++++-----------------
>  1 file changed, 15 insertions(+), 17 deletions(-)
> 
> diff --git a/arch/metag/include/asm/barrier.h b/arch/metag/include/asm/barrier.h
> index b5b778b..84880c9 100644
> --- a/arch/metag/include/asm/barrier.h
> +++ b/arch/metag/include/asm/barrier.h
> @@ -44,13 +44,6 @@ static inline void wr_fence(void)
>  #define rmb()		barrier()
>  #define wmb()		mb()
>  
> -#ifndef CONFIG_SMP
> -#define fence()		do { } while (0)
> -#define smp_mb()        barrier()
> -#define smp_rmb()       barrier()
> -#define smp_wmb()       barrier()
> -#else

!SMP kernel text differs, but only because of the new presence of the
unused metag_fence() inline function. If I #if 0 that out, then it
matches, so that's fine.

> -
>  #ifdef CONFIG_METAG_SMP_WRITE_REORDERING
>  /*
>   * Write to the atomic memory unlock system event register (command 0). This is
> @@ -60,26 +53,31 @@ static inline void wr_fence(void)
>   * incoherence). It is therefore ineffective if used after and on the same
>   * thread as a write.
>   */
> -static inline void fence(void)
> +static inline void metag_fence(void)
>  {
>  	volatile int *flushptr = (volatile int *) LINSYSEVENT_WR_ATOMIC_UNLOCK;
>  	barrier();
>  	*flushptr = 0;
>  	barrier();
>  }
> -#define smp_mb()        fence()
> -#define smp_rmb()       fence()
> -#define smp_wmb()       barrier()
> +#define __smp_mb()        metag_fence()
> +#define __smp_rmb()       metag_fence()
> +#define __smp_wmb()       barrier()
>  #else
> -#define fence()		do { } while (0)
> -#define smp_mb()        barrier()
> -#define smp_rmb()       barrier()
> -#define smp_wmb()       barrier()
> +#define metag_fence()		do { } while (0)
> +#define __smp_mb()        barrier()
> +#define __smp_rmb()       barrier()
> +#define __smp_wmb()       barrier()

Whitespace is now messed up. Admittedly it's already inconsistent
tabs/spaces, but it'd be nice if the definitions at least still all
lined up. You're touching all the definitions which use spaces anyway,
so feel free to convert them to tabs while you're at it.

Other than those niggles, it looks sensible to me:
Acked-by: James Hogan <james.hogan@imgtec.com>

Cheers
James

>  #endif
> +
> +#ifdef CONFIG_SMP
> +#define fence() metag_fence()
> +#else
> +#define fence()		do { } while (0)
>  #endif
>  
> -#define smp_mb__before_atomic()	barrier()
> -#define smp_mb__after_atomic()	barrier()
> +#define __smp_mb__before_atomic()	barrier()
> +#define __smp_mb__after_atomic()	barrier()
>  
>  #include <asm-generic/barrier.h>
>  
> -- 
> MST
> 

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 20/32] metag: define __smp_xxx
@ 2016-01-05  0:09       ` James Hogan
  0 siblings, 0 replies; 572+ messages in thread
From: James Hogan @ 2016-01-05  0:09 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-kernel, Peter Zijlstra, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Ingo Molnar, Davidlohr Bueso,
	Andrey Konovalov

[-- Attachment #1: Type: text/plain, Size: 3506 bytes --]

Hi Michael,

On Thu, Dec 31, 2015 at 09:08:22PM +0200, Michael S. Tsirkin wrote:
> This defines __smp_xxx barriers for metag,
> for use by virtualization.
> 
> smp_xxx barriers are removed as they are
> defined correctly by asm-generic/barrier.h
> 
> Note: as __smp_XX macros should not depend on CONFIG_SMP, they can not
> use the existing fence() macro since that is defined differently between
> SMP and !SMP.  For this reason, this patch introduces a wrapper
> metag_fence() that doesn't depend on CONFIG_SMP.
> fence() is then defined using that, depending on CONFIG_SMP.

I'm not a fan of the inconsistent commit message wrapping. I wrap to 72
columns (although I now notice SubmittingPatches says to use 75...).

> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Acked-by: Arnd Bergmann <arnd@arndb.de>
> ---
>  arch/metag/include/asm/barrier.h | 32 +++++++++++++++-----------------
>  1 file changed, 15 insertions(+), 17 deletions(-)
> 
> diff --git a/arch/metag/include/asm/barrier.h b/arch/metag/include/asm/barrier.h
> index b5b778b..84880c9 100644
> --- a/arch/metag/include/asm/barrier.h
> +++ b/arch/metag/include/asm/barrier.h
> @@ -44,13 +44,6 @@ static inline void wr_fence(void)
>  #define rmb()		barrier()
>  #define wmb()		mb()
>  
> -#ifndef CONFIG_SMP
> -#define fence()		do { } while (0)
> -#define smp_mb()        barrier()
> -#define smp_rmb()       barrier()
> -#define smp_wmb()       barrier()
> -#else

!SMP kernel text differs, but only because of the new presence of the
unused metag_fence() inline function. If I #if 0 that out, then it
matches, so that's fine.

> -
>  #ifdef CONFIG_METAG_SMP_WRITE_REORDERING
>  /*
>   * Write to the atomic memory unlock system event register (command 0). This is
> @@ -60,26 +53,31 @@ static inline void wr_fence(void)
>   * incoherence). It is therefore ineffective if used after and on the same
>   * thread as a write.
>   */
> -static inline void fence(void)
> +static inline void metag_fence(void)
>  {
>  	volatile int *flushptr = (volatile int *) LINSYSEVENT_WR_ATOMIC_UNLOCK;
>  	barrier();
>  	*flushptr = 0;
>  	barrier();
>  }
> -#define smp_mb()        fence()
> -#define smp_rmb()       fence()
> -#define smp_wmb()       barrier()
> +#define __smp_mb()        metag_fence()
> +#define __smp_rmb()       metag_fence()
> +#define __smp_wmb()       barrier()
>  #else
> -#define fence()		do { } while (0)
> -#define smp_mb()        barrier()
> -#define smp_rmb()       barrier()
> -#define smp_wmb()       barrier()
> +#define metag_fence()		do { } while (0)
> +#define __smp_mb()        barrier()
> +#define __smp_rmb()       barrier()
> +#define __smp_wmb()       barrier()

Whitespace is now messed up. Admittedly it's already inconsistent
tabs/spaces, but it'd be nice if the definitions at least still all
lined up. You're touching all the definitions which use spaces anyway,
so feel free to convert them to tabs while you're at it.

Other than those niggles, it looks sensible to me:
Acked-by: James Hogan <james.hogan@imgtec.com>

Cheers
James

>  #endif
> +
> +#ifdef CONFIG_SMP
> +#define fence() metag_fence()
> +#else
> +#define fence()		do { } while (0)
>  #endif
>  
> -#define smp_mb__before_atomic()	barrier()
> -#define smp_mb__after_atomic()	barrier()
> +#define __smp_mb__before_atomic()	barrier()
> +#define __smp_mb__after_atomic()	barrier()
>  
>  #include <asm-generic/barrier.h>
>  
> -- 
> MST
> 

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 15/32] powerpc: define __smp_xxx
  2015-12-31 19:07   ` Michael S. Tsirkin
  (?)
  (?)
@ 2016-01-05  1:36     ` Boqun Feng
  -1 siblings, 0 replies; 572+ messages in thread
From: Boqun Feng @ 2016-01-05  1:36 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-kernel, Peter Zijlstra, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Benjamin Herrenschmidt, Paul Mackerras

Hi Michael,

On Thu, Dec 31, 2015 at 09:07:42PM +0200, Michael S. Tsirkin wrote:
> This defines __smp_xxx barriers for powerpc
> for use by virtualization.
> 
> smp_xxx barriers are removed as they are
> defined correctly by asm-generic/barrier.h
> 
> This reduces the amount of arch-specific boilerplate code.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Acked-by: Arnd Bergmann <arnd@arndb.de>
> ---
>  arch/powerpc/include/asm/barrier.h | 24 ++++++++----------------
>  1 file changed, 8 insertions(+), 16 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
> index 980ad0c..c0deafc 100644
> --- a/arch/powerpc/include/asm/barrier.h
> +++ b/arch/powerpc/include/asm/barrier.h
> @@ -44,19 +44,11 @@
>  #define dma_rmb()	__lwsync()
>  #define dma_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
>  
> -#ifdef CONFIG_SMP
> -#define smp_lwsync()	__lwsync()
> +#define __smp_lwsync()	__lwsync()
>  

so __smp_lwsync() is always mapped to lwsync, right?

> -#define smp_mb()	mb()
> -#define smp_rmb()	__lwsync()
> -#define smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> -#else
> -#define smp_lwsync()	barrier()
> -
> -#define smp_mb()	barrier()
> -#define smp_rmb()	barrier()
> -#define smp_wmb()	barrier()
> -#endif /* CONFIG_SMP */
> +#define __smp_mb()	mb()
> +#define __smp_rmb()	__lwsync()
> +#define __smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
>  
>  /*
>   * This is a barrier which prevents following instructions from being
> @@ -67,18 +59,18 @@
>  #define data_barrier(x)	\
>  	asm volatile("twi 0,%0,0; isync" : : "r" (x) : "memory");
>  
> -#define smp_store_release(p, v)						\
> +#define __smp_store_release(p, v)						\
>  do {									\
>  	compiletime_assert_atomic_type(*p);				\
> -	smp_lwsync();							\
> +	__smp_lwsync();							\

, therefore this will emit an lwsync no matter SMP or UP.

Another thing is that smp_lwsync() may have a third user (other than
smp_load_acquire() and smp_store_release()):

http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877

I'm OK to change my patch accordingly, but do we really want
smp_lwsync() to get involved in this cleanup? If I understand you
correctly, this cleanup focuses on external API like smp_{r,w,}mb(),
while smp_lwsync() is internal to PPC.

Regards,
Boqun

>  	WRITE_ONCE(*p, v);						\
>  } while (0)
>  
> -#define smp_load_acquire(p)						\
> +#define __smp_load_acquire(p)						\
>  ({									\
>  	typeof(*p) ___p1 = READ_ONCE(*p);				\
>  	compiletime_assert_atomic_type(*p);				\
> -	smp_lwsync();							\
> +	__smp_lwsync();							\
>  	___p1;								\
>  })
>  
> -- 
> MST
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 06/32] s390: reuse asm-generic/barrier.h
  2016-01-04 20:42         ` Michael S. Tsirkin
@ 2016-01-05  8:03           ` Martin Schwidefsky
  -1 siblings, 0 replies; 572+ messages in thread
From: Martin Schwidefsky @ 2016-01-05  8:03 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, Heiko Carstens,
	virtualization, H. Peter Anvin, sparclinux, Ingo Molnar,
	linux-arch, linux-s390, Davidlohr Bueso, Arnd Bergmann, x86,
	Christian Borntraeger, xen-devel, Ingo Molnar, linux-xtensa,
	user-mode-linux-devel, Stefano Stabellini, Andrey Konovalov,
	adi-buildroot-devel, Thomas Gleixner, linux-metag,
	linux-arm-kernel, Andrew

On Mon, 4 Jan 2016 22:42:44 +0200
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Mon, Jan 04, 2016 at 04:03:39PM +0100, Martin Schwidefsky wrote:
> > On Mon, 4 Jan 2016 14:20:42 +0100
> > Peter Zijlstra <peterz@infradead.org> wrote:
> > 
> > > On Thu, Dec 31, 2015 at 09:06:30PM +0200, Michael S. Tsirkin wrote:
> > > > On s390 read_barrier_depends, smp_read_barrier_depends
> > > > smp_store_mb(), smp_mb__before_atomic and smp_mb__after_atomic match the
> > > > asm-generic variants exactly. Drop the local definitions and pull in
> > > > asm-generic/barrier.h instead.
> > > > 
> > > > This is in preparation to refactoring this code area.
> > > > 
> > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > > > ---
> > > >  arch/s390/include/asm/barrier.h | 10 ++--------
> > > >  1 file changed, 2 insertions(+), 8 deletions(-)
> > > > 
> > > > diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
> > > > index 7ffd0b1..c358c31 100644
> > > > --- a/arch/s390/include/asm/barrier.h
> > > > +++ b/arch/s390/include/asm/barrier.h
> > > > @@ -30,14 +30,6 @@
> > > >  #define smp_rmb()			rmb()
> > > >  #define smp_wmb()			wmb()
> > > >  
> > > > -#define read_barrier_depends()		do { } while (0)
> > > > -#define smp_read_barrier_depends()	do { } while (0)
> > > > -
> > > > -#define smp_mb__before_atomic()		smp_mb()
> > > > -#define smp_mb__after_atomic()		smp_mb()
> > > 
> > > As per:
> > > 
> > >   lkml.kernel.org/r/20150921112252.3c2937e1@mschwide
> > > 
> > > s390 should change this to barrier() instead of smp_mb() and hence
> > > should not use the generic versions.
> >  
> > Yes, we wanted to simplify this. Thanks for the reminder, I'll queue
> > a patch.
> 
> Could you base on my patchset maybe, to avoid conflicts,
> and I'll merge it?
> Or if it's just replacing these 2 with barrier() I can do this
> myself easily.

Probably the easiest solution is if you do the patch yourself and
include it in your patch set.

-- 
blue skies,
   Martin.

"Reality continues to ruin my life." - Calvin.


^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 22/32] s390: define __smp_xxx
  2016-01-04 20:18       ` Michael S. Tsirkin
@ 2016-01-05  8:13         ` Martin Schwidefsky
  -1 siblings, 0 replies; 572+ messages in thread
From: Martin Schwidefsky @ 2016-01-05  8:13 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Peter Zijlstra, linux-kernel, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Heiko Carstens, Ingo Molnar

On Mon, 4 Jan 2016 22:18:58 +0200
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Mon, Jan 04, 2016 at 02:45:25PM +0100, Peter Zijlstra wrote:
> > On Thu, Dec 31, 2015 at 09:08:38PM +0200, Michael S. Tsirkin wrote:
> > > This defines __smp_xxx barriers for s390,
> > > for use by virtualization.
> > > 
> > > Some smp_xxx barriers are removed as they are
> > > defined correctly by asm-generic/barriers.h
> > > 
> > > Note: smp_mb, smp_rmb and smp_wmb are defined as full barriers
> > > unconditionally on this architecture.
> > > 
> > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > > ---
> > >  arch/s390/include/asm/barrier.h | 15 +++++++++------
> > >  1 file changed, 9 insertions(+), 6 deletions(-)
> > > 
> > > diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
> > > index c358c31..fbd25b2 100644
> > > --- a/arch/s390/include/asm/barrier.h
> > > +++ b/arch/s390/include/asm/barrier.h
> > > @@ -26,18 +26,21 @@
> > >  #define wmb()				barrier()
> > >  #define dma_rmb()			mb()
> > >  #define dma_wmb()			mb()
> > > -#define smp_mb()			mb()
> > > -#define smp_rmb()			rmb()
> > > -#define smp_wmb()			wmb()
> > > -
> > > -#define smp_store_release(p, v)						\
> > > +#define __smp_mb()			mb()
> > > +#define __smp_rmb()			rmb()
> > > +#define __smp_wmb()			wmb()
> > > +#define smp_mb()			__smp_mb()
> > > +#define smp_rmb()			__smp_rmb()
> > > +#define smp_wmb()			__smp_wmb()
> > 
> > Why define the smp_*mb() primitives here? Would not the inclusion of
> > asm-generic/barrier.h do this?
> 
> No, because the generic one is a nop on !SMP, while this one isn't.
> 
> Please note this patch is just reordering code without making
> functional changes.
> And at the moment, on s390 the smp_xxx barriers are always non-empty.

The s390 kernel is SMP to 99.99%; we just didn't bother with a
non-SMP variant of the memory barriers. If the generic header
were used we'd get the non-SMP version for free. It would save a
small amount of text space for CONFIG_SMP=n. 
 
> Some of this could be sub-optimal, but
> since on s390 Linux always runs on a hypervisor,
> I am not sure it's safe to use the generic version -
> in other words, it just might be that for s390 smp_ and virt_
> barriers must be equivalent.

The definition of the memory barriers is independent of whether the
system is running on a hypervisor. Is there really an architecture
where you need special virt_xxx barriers?!? 

-- 
blue skies,
   Martin.

"Reality continues to ruin my life." - Calvin.


^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 22/32] s390: define __smp_xxx
@ 2016-01-05  8:13         ` Martin Schwidefsky
  0 siblings, 0 replies; 572+ messages in thread
From: Martin Schwidefsky @ 2016-01-05  8:13 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Peter Zijlstra, linux-kernel, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Heiko Carstens, Ingo Molnar,
	Davidlohr Bueso, Christian Borntraeger, Andrey Konovalov

On Mon, 4 Jan 2016 22:18:58 +0200
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Mon, Jan 04, 2016 at 02:45:25PM +0100, Peter Zijlstra wrote:
> > On Thu, Dec 31, 2015 at 09:08:38PM +0200, Michael S. Tsirkin wrote:
> > > This defines __smp_xxx barriers for s390,
> > > for use by virtualization.
> > > 
> > > Some smp_xxx barriers are removed as they are
> > > defined correctly by asm-generic/barrier.h
> > > 
> > > Note: smp_mb, smp_rmb and smp_wmb are defined as full barriers
> > > unconditionally on this architecture.
> > > 
> > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > > ---
> > >  arch/s390/include/asm/barrier.h | 15 +++++++++------
> > >  1 file changed, 9 insertions(+), 6 deletions(-)
> > > 
> > > diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
> > > index c358c31..fbd25b2 100644
> > > --- a/arch/s390/include/asm/barrier.h
> > > +++ b/arch/s390/include/asm/barrier.h
> > > @@ -26,18 +26,21 @@
> > >  #define wmb()				barrier()
> > >  #define dma_rmb()			mb()
> > >  #define dma_wmb()			mb()
> > > -#define smp_mb()			mb()
> > > -#define smp_rmb()			rmb()
> > > -#define smp_wmb()			wmb()
> > > -
> > > -#define smp_store_release(p, v)						\
> > > +#define __smp_mb()			mb()
> > > +#define __smp_rmb()			rmb()
> > > +#define __smp_wmb()			wmb()
> > > +#define smp_mb()			__smp_mb()
> > > +#define smp_rmb()			__smp_rmb()
> > > +#define smp_wmb()			__smp_wmb()
> > 
> > Why define the smp_*mb() primitives here? Would not the inclusion of
> > asm-generic/barrier.h do this?
> 
> No because the generic one is a nop on !SMP, this one isn't.
> 
> Please note this patch just reorders code without making
> functional changes.
> And at the moment, on s390 the smp_xxx barriers are always non-empty.

The s390 kernel is SMP in 99.99% of installations; we just didn't
bother with a non-SMP variant of the memory barriers. If the generic
header is used, we'd get the non-SMP version for free. It would save
a small amount of text space for CONFIG_SMP=n.
 
> Some of this could be sub-optimal, but
> since on s390 Linux always runs on a hypervisor,
> I am not sure it's safe to use the generic version -
> in other words, it just might be that for s390 smp_ and virt_
> barriers must be equivalent.

The definition of the memory barriers is independent of whether the
system is running on a hypervisor. Is there really an architecture
where you need special virt_xxx barriers?!?

-- 
blue skies,
   Martin.

"Reality continues to ruin my life." - Calvin.


^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 15/32] powerpc: define __smp_xxx
@ 2016-01-05  8:51         ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-05  8:51 UTC (permalink / raw)
  To: Boqun Feng
  Cc: linux-kernel, Peter Zijlstra, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Ingo Molnar, Davidlohr Bueso, Andrey Konovalov,
	Paul E. McKenney

On Tue, Jan 05, 2016 at 09:36:55AM +0800, Boqun Feng wrote:
> Hi Michael,
> 
> On Thu, Dec 31, 2015 at 09:07:42PM +0200, Michael S. Tsirkin wrote:
> > This defines __smp_xxx barriers for powerpc
> > for use by virtualization.
> > 
> > smp_xxx barriers are removed as they are
> > defined correctly by asm-generic/barrier.h

I think this is the part that was missed in review.

> > This reduces the amount of arch-specific boiler-plate code.
> > 
> > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > ---
> >  arch/powerpc/include/asm/barrier.h | 24 ++++++++----------------
> >  1 file changed, 8 insertions(+), 16 deletions(-)
> > 
> > diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
> > index 980ad0c..c0deafc 100644
> > --- a/arch/powerpc/include/asm/barrier.h
> > +++ b/arch/powerpc/include/asm/barrier.h
> > @@ -44,19 +44,11 @@
> >  #define dma_rmb()	__lwsync()
> >  #define dma_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> >  
> > -#ifdef CONFIG_SMP
> > -#define smp_lwsync()	__lwsync()
> > +#define __smp_lwsync()	__lwsync()
> >  
> 
> so __smp_lwsync() is always mapped to lwsync, right?

Yes.

> > -#define smp_mb()	mb()
> > -#define smp_rmb()	__lwsync()
> > -#define smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > -#else
> > -#define smp_lwsync()	barrier()
> > -
> > -#define smp_mb()	barrier()
> > -#define smp_rmb()	barrier()
> > -#define smp_wmb()	barrier()
> > -#endif /* CONFIG_SMP */
> > +#define __smp_mb()	mb()
> > +#define __smp_rmb()	__lwsync()
> > +#define __smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> >  
> >  /*
> >   * This is a barrier which prevents following instructions from being
> > @@ -67,18 +59,18 @@
> >  #define data_barrier(x)	\
> >  	asm volatile("twi 0,%0,0; isync" : : "r" (x) : "memory");
> >  
> > -#define smp_store_release(p, v)						\
> > +#define __smp_store_release(p, v)						\
> >  do {									\
> >  	compiletime_assert_atomic_type(*p);				\
> > -	smp_lwsync();							\
> > +	__smp_lwsync();							\
> 
> , therefore this will emit an lwsync no matter SMP or UP.

Absolutely. But smp_store_release (without __) will not.

Please note I did test this: for ppc code before and after
this patch generates exactly the same binary on SMP and UP.


> Another thing is that smp_lwsync() may have a third user(other than
> smp_load_acquire() and smp_store_release()):
> 
> http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> 
> I'm OK to change my patch accordingly, but do we really want
> smp_lwsync() to get involved in this cleanup? If I understand you
> correctly, this cleanup focuses on external APIs like smp_{r,w,}mb(),
> while smp_lwsync() is internal to PPC.
> 
> Regards,
> Boqun

I think you missed the leading __ :)

smp_store_release is external and it needs __smp_lwsync as
defined here.

I can duplicate some code and have smp_lwsync *not* call __smp_lwsync
but why do this? Still, if you prefer it this way,
please let me know.

> >  	WRITE_ONCE(*p, v);						\
> >  } while (0)
> >  
> > -#define smp_load_acquire(p)						\
> > +#define __smp_load_acquire(p)						\
> >  ({									\
> >  	typeof(*p) ___p1 = READ_ONCE(*p);				\
> >  	compiletime_assert_atomic_type(*p);				\
> > -	smp_lwsync();							\
> > +	__smp_lwsync();							\
> >  	___p1;								\
> >  })
> >  
> > -- 
> > MST
> > 
> > --
> > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> > the body of a message to majordomo@vger.kernel.org
> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> > Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 15/32] powerpc: define __smp_xxx
@ 2016-01-05  8:51         ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-05  8:51 UTC (permalink / raw)
  To: Boqun Feng
  Cc: linux-kernel-u79uwXL29TY76Z2rM5mHXA, Peter Zijlstra,
	Arnd Bergmann, linux-arch-u79uwXL29TY76Z2rM5mHXA, Andrew Cooper,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	Stefano Stabellini, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	David Miller, linux-ia64-u79uwXL29TY76Z2rM5mHXA,
	linuxppc-dev-uLR06cmDAlY/bJ5BZ2RsiQ,
	linux-s390-u79uwXL29TY76Z2rM5mHXA,
	sparclinux-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	linux-metag-u79uwXL29TY76Z2rM5mHXA,
	linux-mips-6z/3iImG2C8G8FEW9MqTrA, x86-DgEjT+Ai2ygdnm+yROfE0A,
	user-mode-linux-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
	adi-buildroot-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
	linux-sh-u79uwXL29TY76Z2rM5mHXA,
	linux-xtensa-PjhNF2WwrV/0Sa2dR60CXw,
	xen-devel-GuqFBffKawtpuQazS67q72D2FQJk+8+b,
	Benjamin Herrenschmidt, Paul Mackerras

On Tue, Jan 05, 2016 at 09:36:55AM +0800, Boqun Feng wrote:
> Hi Michael,
> 
> On Thu, Dec 31, 2015 at 09:07:42PM +0200, Michael S. Tsirkin wrote:
> > This defines __smp_xxx barriers for powerpc
> > for use by virtualization.
> > 
> > smp_xxx barriers are removed as they are
> > defined correctly by asm-generic/barriers.h

I think this is the part that was missed in review.

> > This reduces the amount of arch-specific boiler-plate code.
> > 
> > Signed-off-by: Michael S. Tsirkin <mst-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> > Acked-by: Arnd Bergmann <arnd-r2nGTMty4D4@public.gmane.org>
> > ---
> >  arch/powerpc/include/asm/barrier.h | 24 ++++++++----------------
> >  1 file changed, 8 insertions(+), 16 deletions(-)
> > 
> > diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
> > index 980ad0c..c0deafc 100644
> > --- a/arch/powerpc/include/asm/barrier.h
> > +++ b/arch/powerpc/include/asm/barrier.h
> > @@ -44,19 +44,11 @@
> >  #define dma_rmb()	__lwsync()
> >  #define dma_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> >  
> > -#ifdef CONFIG_SMP
> > -#define smp_lwsync()	__lwsync()
> > +#define __smp_lwsync()	__lwsync()
> >  
> 
> so __smp_lwsync() is always mapped to lwsync, right?

Yes.

> > -#define smp_mb()	mb()
> > -#define smp_rmb()	__lwsync()
> > -#define smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > -#else
> > -#define smp_lwsync()	barrier()
> > -
> > -#define smp_mb()	barrier()
> > -#define smp_rmb()	barrier()
> > -#define smp_wmb()	barrier()
> > -#endif /* CONFIG_SMP */
> > +#define __smp_mb()	mb()
> > +#define __smp_rmb()	__lwsync()
> > +#define __smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> >  
> >  /*
> >   * This is a barrier which prevents following instructions from being
> > @@ -67,18 +59,18 @@
> >  #define data_barrier(x)	\
> >  	asm volatile("twi 0,%0,0; isync" : : "r" (x) : "memory");
> >  
> > -#define smp_store_release(p, v)						\
> > +#define __smp_store_release(p, v)						\
> >  do {									\
> >  	compiletime_assert_atomic_type(*p);				\
> > -	smp_lwsync();							\
> > +	__smp_lwsync();							\
> 
> , therefore this will emit an lwsync no matter SMP or UP.

Absolutely. But smp_store_release (without __) will not.

Please note I did test this: for ppc code before and after
this patch generates exactly the same binary on SMP and UP.


> Another thing is that smp_lwsync() may have a third user(other than
> smp_load_acquire() and smp_store_release()):
> 
> http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> 
> I'm OK to change my patch accordingly, but do we really want
> smp_lwsync() get involved in this cleanup? If I understand you
> correctly, this cleanup focuses on external API like smp_{r,w,}mb(),
> while smp_lwsync() is internal to PPC.
> 
> Regards,
> Boqun

I think you missed the leading ___ :)

smp_store_release is external and it needs __smp_lwsync as
defined here.

I can duplicate some code and have smp_lwsync *not* call __smp_lwsync
but why do this? Still, if you prefer it this way,
please let me know.

> >  	WRITE_ONCE(*p, v);						\
> >  } while (0)
> >  
> > -#define smp_load_acquire(p)						\
> > +#define __smp_load_acquire(p)						\
> >  ({									\
> >  	typeof(*p) ___p1 = READ_ONCE(*p);				\
> >  	compiletime_assert_atomic_type(*p);				\
> > -	smp_lwsync();							\
> > +	__smp_lwsync();							\
> >  	___p1;								\
> >  })
> >  
> > -- 
> > MST
> > 
> > --
> > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> > the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> > Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 15/32] powerpc: define __smp_xxx
  2016-01-05  1:36     ` Boqun Feng
                       ` (2 preceding siblings ...)
  (?)
@ 2016-01-05  8:51     ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-05  8:51 UTC (permalink / raw)
  To: Boqun Feng
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra,
	Benjamin Herrenschmidt, virtualization, Paul Mackerras,
	H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
	Davidlohr Bueso, Arnd Bergmann, Michael Ellerman, x86, xen-devel,
	Ingo Molnar, Paul E. McKenney, linux-xtensa,
	user-mode-linux-devel, Stefano Stabellini, Andrey Konovalov,
	adi-buildroot-devel, Thomas Gleixner

On Tue, Jan 05, 2016 at 09:36:55AM +0800, Boqun Feng wrote:
> Hi Michael,
> 
> On Thu, Dec 31, 2015 at 09:07:42PM +0200, Michael S. Tsirkin wrote:
> > This defines __smp_xxx barriers for powerpc
> > for use by virtualization.
> > 
> > smp_xxx barriers are removed as they are
> > defined correctly by asm-generic/barriers.h

I think this is the part that was missed in review.

> > This reduces the amount of arch-specific boiler-plate code.
> > 
> > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > ---
> >  arch/powerpc/include/asm/barrier.h | 24 ++++++++----------------
> >  1 file changed, 8 insertions(+), 16 deletions(-)
> > 
> > diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
> > index 980ad0c..c0deafc 100644
> > --- a/arch/powerpc/include/asm/barrier.h
> > +++ b/arch/powerpc/include/asm/barrier.h
> > @@ -44,19 +44,11 @@
> >  #define dma_rmb()	__lwsync()
> >  #define dma_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> >  
> > -#ifdef CONFIG_SMP
> > -#define smp_lwsync()	__lwsync()
> > +#define __smp_lwsync()	__lwsync()
> >  
> 
> so __smp_lwsync() is always mapped to lwsync, right?

Yes.

> > -#define smp_mb()	mb()
> > -#define smp_rmb()	__lwsync()
> > -#define smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > -#else
> > -#define smp_lwsync()	barrier()
> > -
> > -#define smp_mb()	barrier()
> > -#define smp_rmb()	barrier()
> > -#define smp_wmb()	barrier()
> > -#endif /* CONFIG_SMP */
> > +#define __smp_mb()	mb()
> > +#define __smp_rmb()	__lwsync()
> > +#define __smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> >  
> >  /*
> >   * This is a barrier which prevents following instructions from being
> > @@ -67,18 +59,18 @@
> >  #define data_barrier(x)	\
> >  	asm volatile("twi 0,%0,0; isync" : : "r" (x) : "memory");
> >  
> > -#define smp_store_release(p, v)						\
> > +#define __smp_store_release(p, v)						\
> >  do {									\
> >  	compiletime_assert_atomic_type(*p);				\
> > -	smp_lwsync();							\
> > +	__smp_lwsync();							\
> 
> , therefore this will emit an lwsync regardless of SMP or UP.

Absolutely. But smp_store_release (without __) will not.

Please note I did test this: on ppc, the code before and after
this patch generates exactly the same binary on both SMP and UP.


> Another thing is that smp_lwsync() may have a third user (other than
> smp_load_acquire() and smp_store_release()):
> 
> http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> 
> I'm OK to change my patch accordingly, but do we really want
> smp_lwsync() to get involved in this cleanup? If I understand you
> correctly, this cleanup focuses on external APIs like smp_{r,w,}mb(),
> while smp_lwsync() is internal to PPC.
> 
> Regards,
> Boqun

I think you missed the leading __ :)

smp_store_release is external and it needs __smp_lwsync as
defined here.

I can duplicate some code and have smp_lwsync *not* call __smp_lwsync
but why do this? Still, if you prefer it this way,
please let me know.

> >  	WRITE_ONCE(*p, v);						\
> >  } while (0)
> >  
> > -#define smp_load_acquire(p)						\
> > +#define __smp_load_acquire(p)						\
> >  ({									\
> >  	typeof(*p) ___p1 = READ_ONCE(*p);				\
> >  	compiletime_assert_atomic_type(*p);				\
> > -	smp_lwsync();							\
> > +	__smp_lwsync();							\
> >  	___p1;								\
> >  })
> >  
> > -- 
> > MST
> > 
> > --
> > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> > the body of a message to majordomo@vger.kernel.org
> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> > Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 22/32] s390: define __smp_xxx
  2016-01-05  8:13         ` Martin Schwidefsky
@ 2016-01-05  9:30           ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-05  9:30 UTC (permalink / raw)
  To: Martin Schwidefsky
  Cc: Peter Zijlstra, linux-kernel, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Heiko Carstens, Ingo Molnar

On Tue, Jan 05, 2016 at 09:13:19AM +0100, Martin Schwidefsky wrote:
> On Mon, 4 Jan 2016 22:18:58 +0200
> "Michael S. Tsirkin" <mst@redhat.com> wrote:
> 
> > On Mon, Jan 04, 2016 at 02:45:25PM +0100, Peter Zijlstra wrote:
> > > On Thu, Dec 31, 2015 at 09:08:38PM +0200, Michael S. Tsirkin wrote:
> > > > This defines __smp_xxx barriers for s390,
> > > > for use by virtualization.
> > > > 
> > > > Some smp_xxx barriers are removed as they are
> > > > defined correctly by asm-generic/barrier.h
> > > > 
> > > > Note: smp_mb, smp_rmb and smp_wmb are defined as full barriers
> > > > unconditionally on this architecture.
> > > > 
> > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > > > ---
> > > >  arch/s390/include/asm/barrier.h | 15 +++++++++------
> > > >  1 file changed, 9 insertions(+), 6 deletions(-)
> > > > 
> > > > diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
> > > > index c358c31..fbd25b2 100644
> > > > --- a/arch/s390/include/asm/barrier.h
> > > > +++ b/arch/s390/include/asm/barrier.h
> > > > @@ -26,18 +26,21 @@
> > > >  #define wmb()				barrier()
> > > >  #define dma_rmb()			mb()
> > > >  #define dma_wmb()			mb()
> > > > -#define smp_mb()			mb()
> > > > -#define smp_rmb()			rmb()
> > > > -#define smp_wmb()			wmb()
> > > > -
> > > > -#define smp_store_release(p, v)						\
> > > > +#define __smp_mb()			mb()
> > > > +#define __smp_rmb()			rmb()
> > > > +#define __smp_wmb()			wmb()
> > > > +#define smp_mb()			__smp_mb()
> > > > +#define smp_rmb()			__smp_rmb()
> > > > +#define smp_wmb()			__smp_wmb()
> > > 
> > > Why define the smp_*mb() primitives here? Would not the inclusion of
> > > asm-generic/barrier.h do this?
> > 
> > No because the generic one is a nop on !SMP, this one isn't.
> > 
> > Pls note this patch is just reordering code without making
> > functional changes.
> > And at the moment, on s390 smp_xxx barriers are always non empty.
> 
> The s390 kernel is SMP to 99.99%, we just didn't bother with a
> non-smp variant for the memory-barriers. If the generic header
> is used we'd get the non-smp version for free. It will save a
> small amount of text space for CONFIG_SMP=n. 

OK, so I'll queue a patch to do this then?

Just to make sure: the question would be, are smp_xxx barriers ever used
in s390 arch specific code to flush in/out memory accesses for
synchronization with the hypervisor?

I went over s390 arch code and it seems to me the answer is no
(except of course for virtio).

But I also see a lot of weirdness on this architecture.

I found these calls:

arch/s390/include/asm/bitops.h: smp_mb__before_atomic();
arch/s390/include/asm/bitops.h: smp_mb();

Not used in arch-specific code, so this is likely OK.

arch/s390/kernel/vdso.c:        smp_mb();

Looking at
	Author: Christian Borntraeger <borntraeger@de.ibm.com>
	Date:   Fri Sep 11 16:23:06 2015 +0200

	    s390/vdso: use correct memory barrier

	    By definition smp_wmb only orders writes against writes. (Finish all
	    previous writes, and do not start any future write). To protect the
	    vdso init code against early reads on other CPUs, let's use a full
	    smp_mb at the end of vdso init. As right now smp_wmb is implemented
	    as full serialization, this needs no stable backport, but this change
	    will be necessary if we reimplement smp_wmb.

OK from a hypervisor point of view, but it's also strange:
1. why isn't this paired with another mb somewhere?
   this seems to violate barrier pairing rules.
2. how does smp_mb protect against early reads on other CPUs?
   It normally does not: it orders reads from this CPU versus writes
   from same CPU. But init code does not appear to read anything.
   Maybe this is some s390 specific trick?

I could not figure out the above commit.


arch/s390/kvm/kvm-s390.c:       smp_mb();

Does not appear to be paired with anything.


arch/s390/lib/spinlock.c:               smp_mb();
arch/s390/lib/spinlock.c:                       smp_mb();

Seems ok, and appears paired properly.
Just to make sure - spinlock is not paravirtualized on s390, is it?

arch/s390/kernel/time.c:        smp_wmb();
arch/s390/kernel/time.c:        smp_wmb();
arch/s390/kernel/time.c:        smp_wmb();
arch/s390/kernel/time.c:        smp_wmb();

It's all around vdso, so I'm guessing userspace is using this,
this is why there's no pairing.



> > Some of this could be sub-optimal, but
> > since on s390 Linux always runs on a hypervisor,
> > I am not sure it's safe to use the generic version -
> > in other words, it just might be that for s390 smp_ and virt_
> > barriers must be equivalent.
> 
> The definition of the memory barriers is independent of whether
> the system is running on a hypervisor or not.
> Is there really
> an architecture where you need special virt_xxx barriers?!? 

It is whenever host and guest or two guests access memory at
the same time.

The optimization where smp_xxx barriers are compiled out when
CONFIG_SMP is cleared means that two UP guests running
on an SMP host cannot use smp_xxx barriers for communication.

See explanation here:
http://thread.gmane.org/gmane.linux.kernel.virtualization/26555

> -- 
> blue skies,
>    Martin.
> 
> "Reality continues to ruin my life." - Calvin.

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 22/32] s390: define __smp_xxx
@ 2016-01-05  9:30           ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-05  9:30 UTC (permalink / raw)
  To: Martin Schwidefsky
  Cc: Peter Zijlstra, linux-kernel, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Heiko Carstens, Ingo Molnar,
	Davidlohr Bueso, Christian Borntraeger, Andrey Konovalov,
	Carsten Otte, Christian Ehrhardt

On Tue, Jan 05, 2016 at 09:13:19AM +0100, Martin Schwidefsky wrote:
> On Mon, 4 Jan 2016 22:18:58 +0200
> "Michael S. Tsirkin" <mst@redhat.com> wrote:
> 
> > On Mon, Jan 04, 2016 at 02:45:25PM +0100, Peter Zijlstra wrote:
> > > On Thu, Dec 31, 2015 at 09:08:38PM +0200, Michael S. Tsirkin wrote:
> > > > This defines __smp_xxx barriers for s390,
> > > > for use by virtualization.
> > > > 
> > > > Some smp_xxx barriers are removed as they are
> > > > defined correctly by asm-generic/barriers.h
> > > > 
> > > > Note: smp_mb, smp_rmb and smp_wmb are defined as full barriers
> > > > unconditionally on this architecture.
> > > > 
> > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > > > ---
> > > >  arch/s390/include/asm/barrier.h | 15 +++++++++------
> > > >  1 file changed, 9 insertions(+), 6 deletions(-)
> > > > 
> > > > diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
> > > > index c358c31..fbd25b2 100644
> > > > --- a/arch/s390/include/asm/barrier.h
> > > > +++ b/arch/s390/include/asm/barrier.h
> > > > @@ -26,18 +26,21 @@
> > > >  #define wmb()				barrier()
> > > >  #define dma_rmb()			mb()
> > > >  #define dma_wmb()			mb()
> > > > -#define smp_mb()			mb()
> > > > -#define smp_rmb()			rmb()
> > > > -#define smp_wmb()			wmb()
> > > > -
> > > > -#define smp_store_release(p, v)						\
> > > > +#define __smp_mb()			mb()
> > > > +#define __smp_rmb()			rmb()
> > > > +#define __smp_wmb()			wmb()
> > > > +#define smp_mb()			__smp_mb()
> > > > +#define smp_rmb()			__smp_rmb()
> > > > +#define smp_wmb()			__smp_wmb()
> > > 
> > > Why define the smp_*mb() primitives here? Would not the inclusion of
> > > asm-generic/barrier.h do this?
> > 
> > No because the generic one is a nop on !SMP, this one isn't.
> > 
> > Pls note this patch is just reordering code without making
> > functional changes.
> > And at the moment, on s390 smp_xxx barriers are always non empty.
> 
> The s390 kernel is SMP to 99.99%, we just didn't bother with a
> non-smp variant for the memory-barriers. If the generic header
> is used we'd get the non-smp version for free. It will save a
> small amount of text space for CONFIG_SMP=n. 

OK, so I'll queue a patch to do this then?

Just to make sure: the question would be, are smp_xxx barriers ever used
in s390 arch specific code to flush in/out memory accesses for
synchronization with the hypervisor?

I went over s390 arch code and it seems to me the answer is no
(except of course for virtio).

But I also see a lot of weirdness on this architecture.

I found these calls:

arch/s390/include/asm/bitops.h: smp_mb__before_atomic();
arch/s390/include/asm/bitops.h: smp_mb();

Not used in arch specific code so this is likely OK.

arch/s390/kernel/vdso.c:        smp_mb();

Looking at
	Author: Christian Borntraeger <borntraeger@de.ibm.com>
	Date:   Fri Sep 11 16:23:06 2015 +0200

	    s390/vdso: use correct memory barrier

	    By definition smp_wmb only orders writes against writes. (Finish all
	    previous writes, and do not start any future write). To protect the
	    vdso init code against early reads on other CPUs, let's use a full
	    smp_mb at the end of vdso init. As right now smp_wmb is implemented
	    as full serialization, this needs no stable backport, but this change
	    will be necessary if we reimplement smp_wmb.

ok from hypervisor point of view, but it's also strange:
1. why isn't this paired with another mb somewhere?
   this seems to violate barrier pairing rules.
2. how does smp_mb protect against early reads on other CPUs?
   It normally does not: it orders reads from this CPU versus writes
   from same CPU. But init code does not appear to read anything.
   Maybe this is some s390 specific trick?

I could not figure out the above commit.


arch/s390/kvm/kvm-s390.c:       smp_mb();

Does not appear to be paired with anything.


arch/s390/lib/spinlock.c:               smp_mb();
arch/s390/lib/spinlock.c:                       smp_mb();

Seems ok, and appears paired properly.
Just to make sure - spinlock is not paravirtualized on s390, is it?

rch/s390/kernel/time.c:        smp_wmb();
arch/s390/kernel/time.c:        smp_wmb();
arch/s390/kernel/time.c:        smp_wmb();
arch/s390/kernel/time.c:        smp_wmb();

It's all around vdso, so I'm guessing userspace is using this,
this is why there's no pairing.



> > Some of this could be sub-optimal, but
> > since on s390 Linux always runs on a hypervisor,
> > I am not sure it's safe to use the generic version -
> > in other words, it just might be that for s390 smp_ and virt_
> > barriers must be equivalent.
> 
> The definition of the memory barriers is independent from the fact
> if the system is running on an hypervisor or not.
> Is there really
> an architecture where you need special virt_xxx barriers?!? 

It is whenever host and guest or two guests access memory at
the same time.

The optimization where smp_xxx barriers are compiled out when
CONFIG_SMP is cleared means that two UP guests running
on an SMP host can not use smp_xxx barriers for communication.

See explanation here:
http://thread.gmane.org/gmane.linux.kernel.virtualization/26555

> -- 
> blue skies,
>    Martin.
> 
> "Reality continues to ruin my life." - Calvin.

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 22/32] s390: define __smp_xxx
@ 2016-01-05  9:30           ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-05  9:30 UTC (permalink / raw)
  To: Martin Schwidefsky
  Cc: Peter Zijlstra, linux-kernel, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Heiko Carstens, Ingo Molnar

On Tue, Jan 05, 2016 at 09:13:19AM +0100, Martin Schwidefsky wrote:
> On Mon, 4 Jan 2016 22:18:58 +0200
> "Michael S. Tsirkin" <mst@redhat.com> wrote:
> 
> > On Mon, Jan 04, 2016 at 02:45:25PM +0100, Peter Zijlstra wrote:
> > > On Thu, Dec 31, 2015 at 09:08:38PM +0200, Michael S. Tsirkin wrote:
> > > > This defines __smp_xxx barriers for s390,
> > > > for use by virtualization.
> > > > 
> > > > Some smp_xxx barriers are removed as they are
> > > > defined correctly by asm-generic/barriers.h
> > > > 
> > > > Note: smp_mb, smp_rmb and smp_wmb are defined as full barriers
> > > > unconditionally on this architecture.
> > > > 
> > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > > > ---
> > > >  arch/s390/include/asm/barrier.h | 15 +++++++++------
> > > >  1 file changed, 9 insertions(+), 6 deletions(-)
> > > > 
> > > > diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
> > > > index c358c31..fbd25b2 100644
> > > > --- a/arch/s390/include/asm/barrier.h
> > > > +++ b/arch/s390/include/asm/barrier.h
> > > > @@ -26,18 +26,21 @@
> > > >  #define wmb()				barrier()
> > > >  #define dma_rmb()			mb()
> > > >  #define dma_wmb()			mb()
> > > > -#define smp_mb()			mb()
> > > > -#define smp_rmb()			rmb()
> > > > -#define smp_wmb()			wmb()
> > > > -
> > > > -#define smp_store_release(p, v)						\
> > > > +#define __smp_mb()			mb()
> > > > +#define __smp_rmb()			rmb()
> > > > +#define __smp_wmb()			wmb()
> > > > +#define smp_mb()			__smp_mb()
> > > > +#define smp_rmb()			__smp_rmb()
> > > > +#define smp_wmb()			__smp_wmb()
> > > 
> > > Why define the smp_*mb() primitives here? Would not the inclusion of
> > > asm-generic/barrier.h do this?
> > 
> > No because the generic one is a nop on !SMP, this one isn't.
> > 
> > Pls note this patch is just reordering code without making
> > functional changes.
> > And at the moment, on s390 smp_xxx barriers are always non empty.
> 
> The s390 kernel is SMP to 99.99%, we just didn't bother with a
> non-smp variant for the memory-barriers. If the generic header
> is used we'd get the non-smp version for free. It will save a
> small amount of text space for CONFIG_SMP=n. 

OK, so I'll queue a patch to do this then?

Just to make sure: the question would be, are smp_xxx barriers ever used
in s390 arch specific code to flush in/out memory accesses for
synchronization with the hypervisor?

I went over s390 arch code and it seems to me the answer is no
(except of course for virtio).

But I also see a lot of weirdness on this architecture.

I found these calls:

arch/s390/include/asm/bitops.h: smp_mb__before_atomic();
arch/s390/include/asm/bitops.h: smp_mb();

Not used in arch specific code so this is likely OK.

arch/s390/kernel/vdso.c:        smp_mb();

Looking at
	Author: Christian Borntraeger <borntraeger@de.ibm.com>
	Date:   Fri Sep 11 16:23:06 2015 +0200

	    s390/vdso: use correct memory barrier

	    By definition smp_wmb only orders writes against writes. (Finish all
	    previous writes, and do not start any future write). To protect the
	    vdso init code against early reads on other CPUs, let's use a full
	    smp_mb at the end of vdso init. As right now smp_wmb is implemented
	    as full serialization, this needs no stable backport, but this change
	    will be necessary if we reimplement smp_wmb.

ok from hypervisor point of view, but it's also strange:
1. why isn't this paired with another mb somewhere?
   this seems to violate barrier pairing rules.
2. how does smp_mb protect against early reads on other CPUs?
   It normally does not: it orders reads from this CPU versus writes
   from same CPU. But init code does not appear to read anything.
   Maybe this is some s390 specific trick?

I could not figure out the above commit.


arch/s390/kvm/kvm-s390.c:       smp_mb();

Does not appear to be paired with anything.


arch/s390/lib/spinlock.c:               smp_mb();
arch/s390/lib/spinlock.c:                       smp_mb();

Seems ok, and appears paired properly.
Just to make sure - spinlock is not paravirtualized on s390, is it?

rch/s390/kernel/time.c:        smp_wmb();
arch/s390/kernel/time.c:        smp_wmb();
arch/s390/kernel/time.c:        smp_wmb();
arch/s390/kernel/time.c:        smp_wmb();

It's all around vdso, so I'm guessing userspace is using this,
this is why there's no pairing.



> > Some of this could be sub-optimal, but
> > since on s390 Linux always runs on a hypervisor,
> > I am not sure it's safe to use the generic version -
> > in other words, it just might be that for s390 smp_ and virt_
> > barriers must be equivalent.
> 
> The definition of the memory barriers is independent from the fact
> if the system is running on an hypervisor or not.
> Is there really
> an architecture where you need special virt_xxx barriers?!? 

It is whenever host and guest or two guests access memory at
the same time.

The optimization where smp_xxx barriers are compiled out when
CONFIG_SMP is cleared means that two UP guests running
on an SMP host can not use smp_xxx barriers for communication.

See explanation here:
http://thread.gmane.org/gmane.linux.kernel.virtualization/26555

> -- 
> blue skies,
>    Martin.
> 
> "Reality continues to ruin my life." - Calvin.

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 22/32] s390: define __smp_xxx
  2016-01-05  8:13         ` Martin Schwidefsky
                           ` (3 preceding siblings ...)
  (?)
@ 2016-01-05  9:30         ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-05  9:30 UTC (permalink / raw)
  To: Martin Schwidefsky
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, Heiko Carstens,
	virtualization, H. Peter Anvin, sparclinux, Carsten Otte,
	Ingo Molnar, linux-arch, linux-s390, Davidlohr Bueso,
	Arnd Bergmann, x86, Christian Borntraeger, xen-devel,
	Ingo Molnar, linux-xtensa, Christian Ehrhardt,
	user-mode-linux-devel, Stefano Stabellini, Andrey Konovalov,
	adi-buildroot-devel, Thomas Gleixner

On Tue, Jan 05, 2016 at 09:13:19AM +0100, Martin Schwidefsky wrote:
> On Mon, 4 Jan 2016 22:18:58 +0200
> "Michael S. Tsirkin" <mst@redhat.com> wrote:
> 
> > On Mon, Jan 04, 2016 at 02:45:25PM +0100, Peter Zijlstra wrote:
> > > On Thu, Dec 31, 2015 at 09:08:38PM +0200, Michael S. Tsirkin wrote:
> > > > This defines __smp_xxx barriers for s390,
> > > > for use by virtualization.
> > > > 
> > > > Some smp_xxx barriers are removed as they are
> > > > defined correctly by asm-generic/barriers.h
> > > > 
> > > > Note: smp_mb, smp_rmb and smp_wmb are defined as full barriers
> > > > unconditionally on this architecture.
> > > > 
> > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > > > ---
> > > >  arch/s390/include/asm/barrier.h | 15 +++++++++------
> > > >  1 file changed, 9 insertions(+), 6 deletions(-)
> > > > 
> > > > diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
> > > > index c358c31..fbd25b2 100644
> > > > --- a/arch/s390/include/asm/barrier.h
> > > > +++ b/arch/s390/include/asm/barrier.h
> > > > @@ -26,18 +26,21 @@
> > > >  #define wmb()				barrier()
> > > >  #define dma_rmb()			mb()
> > > >  #define dma_wmb()			mb()
> > > > -#define smp_mb()			mb()
> > > > -#define smp_rmb()			rmb()
> > > > -#define smp_wmb()			wmb()
> > > > -
> > > > -#define smp_store_release(p, v)						\
> > > > +#define __smp_mb()			mb()
> > > > +#define __smp_rmb()			rmb()
> > > > +#define __smp_wmb()			wmb()
> > > > +#define smp_mb()			__smp_mb()
> > > > +#define smp_rmb()			__smp_rmb()
> > > > +#define smp_wmb()			__smp_wmb()
> > > 
> > > Why define the smp_*mb() primitives here? Would not the inclusion of
> > > asm-generic/barrier.h do this?
> > 
> > No because the generic one is a nop on !SMP, this one isn't.
> > 
> > Pls note this patch is just reordering code without making
> > functional changes.
> > And at the moment, on s390 smp_xxx barriers are always non empty.
> 
> The s390 kernel is SMP to 99.99%, we just didn't bother with a
> non-smp variant for the memory-barriers. If the generic header
> is used we'd get the non-smp version for free. It will save a
> small amount of text space for CONFIG_SMP=n. 

OK, so I'll queue a patch to do this then?

Just to make sure: the question would be, are smp_xxx barriers ever used
in s390 arch specific code to flush in/out memory accesses for
synchronization with the hypervisor?

I went over s390 arch code and it seems to me the answer is no
(except of course for virtio).

But I also see a lot of weirdness on this architecture.

I found these calls:

arch/s390/include/asm/bitops.h: smp_mb__before_atomic();
arch/s390/include/asm/bitops.h: smp_mb();

Not used in arch specific code so this is likely OK.

arch/s390/kernel/vdso.c:        smp_mb();

Looking at
	Author: Christian Borntraeger <borntraeger@de.ibm.com>
	Date:   Fri Sep 11 16:23:06 2015 +0200

	    s390/vdso: use correct memory barrier

	    By definition smp_wmb only orders writes against writes. (Finish all
	    previous writes, and do not start any future write). To protect the
	    vdso init code against early reads on other CPUs, let's use a full
	    smp_mb at the end of vdso init. As right now smp_wmb is implemented
	    as full serialization, this needs no stable backport, but this change
	    will be necessary if we reimplement smp_wmb.

ok from hypervisor point of view, but it's also strange:
1. why isn't this paired with another mb somewhere?
   this seems to violate barrier pairing rules.
2. how does smp_mb protect against early reads on other CPUs?
   It normally does not: it orders reads from this CPU versus writes
   from same CPU. But init code does not appear to read anything.
   Maybe this is some s390 specific trick?

I could not figure out the above commit.
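The pairing rule in question can be sketched with portable C11 fences standing in for the kernel's smp_wmb()/smp_rmb(); this is a hypothetical illustration, not kernel code:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical stand-in for the barrier pairing rule: the producer's
 * write barrier (~ smp_wmb()) must pair with the consumer's read
 * barrier (~ smp_rmb()) on the other CPU, or neither ordering holds. */
static int payload;
static atomic_bool ready;

static void producer(int value)
{
	payload = value;                           /* plain data store */
	atomic_thread_fence(memory_order_release); /* ~ smp_wmb()      */
	atomic_store_explicit(&ready, true, memory_order_relaxed);
}

static bool consumer(int *out)
{
	if (!atomic_load_explicit(&ready, memory_order_relaxed))
		return false;
	atomic_thread_fence(memory_order_acquire); /* ~ smp_rmb()      */
	*out = payload;                            /* plain data load  */
	return true;
}
```

An unpaired barrier, as in the vdso commit quoted above, orders nothing by itself: the guarantee only materializes when the other side executes the matching barrier.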


arch/s390/kvm/kvm-s390.c:       smp_mb();

Does not appear to be paired with anything.


arch/s390/lib/spinlock.c:               smp_mb();
arch/s390/lib/spinlock.c:                       smp_mb();

Seems ok, and appears paired properly.
Just to make sure - spinlock is not paravirtualized on s390, is it?

arch/s390/kernel/time.c:        smp_wmb();
arch/s390/kernel/time.c:        smp_wmb();
arch/s390/kernel/time.c:        smp_wmb();
arch/s390/kernel/time.c:        smp_wmb();

It's all around vdso, so I'm guessing userspace is using this,
which is why there's no pairing.



> > Some of this could be sub-optimal, but
> > since on s390 Linux always runs on a hypervisor,
> > I am not sure it's safe to use the generic version -
> > in other words, it just might be that for s390 smp_ and virt_
> > barriers must be equivalent.
> 
> The definition of the memory barriers is independent from the fact
> if the system is running on an hypervisor or not.
> Is there really
> an architecture where you need special virt_xxx barriers?!? 

It is whenever host and guest or two guests access memory at
the same time.

The optimization where smp_xxx barriers are compiled out when
CONFIG_SMP is cleared means that two UP guests running
on an SMP host can not use smp_xxx barriers for communication.

See explanation here:
http://thread.gmane.org/gmane.linux.kernel.virtualization/26555
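To make the distinction concrete, here is a hypothetical sketch (not the actual kernel headers) of the layering this series introduces: the arch supplies __smp_mb(), the generic header maps smp_mb() to it only under CONFIG_SMP, and virt_mb() always uses it, so a UP guest still gets a real fence against an SMP host:

```c
#include <assert.h>

/* Hypothetical macro layering modelled on this series; the names
 * match the patches, but the fence implementations are illustrative. */
#define __smp_mb()	__atomic_thread_fence(__ATOMIC_SEQ_CST)

#ifdef CONFIG_SMP
#define smp_mb()	__smp_mb()
#else
#define smp_mb()	__asm__ __volatile__("" ::: "memory") /* barrier() */
#endif

#define virt_mb()	__smp_mb()	/* never compiled out on !SMP */

/* Publish data, then a flag, for a reader outside this guest. */
static int publish_flag(int *data, int *flag)
{
	*data = 1;
	virt_mb();	/* a real fence even with CONFIG_SMP=n */
	*flag = 1;
	return *flag;
}
```

With CONFIG_SMP=n the smp_mb() in such a path would collapse to a compiler barrier, which is exactly why guest/host communication needs the virt_ variants.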

> -- 
> blue skies,
>    Martin.
> 
> "Reality continues to ruin my life." - Calvin.

^ permalink raw reply	[flat|nested] 572+ messages in thread


* Re: [PATCH v2 15/32] powerpc: define __smp_xxx
  2016-01-05  8:51         ` Michael S. Tsirkin
  (?)
  (?)
@ 2016-01-05  9:53           ` Boqun Feng
  -1 siblings, 0 replies; 572+ messages in thread
From: Boqun Feng @ 2016-01-05  9:53 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-kernel, Peter Zijlstra, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Benjamin Herrenschmidt, Paul Mackerras

On Tue, Jan 05, 2016 at 10:51:17AM +0200, Michael S. Tsirkin wrote:
> On Tue, Jan 05, 2016 at 09:36:55AM +0800, Boqun Feng wrote:
> > Hi Michael,
> > 
> > On Thu, Dec 31, 2015 at 09:07:42PM +0200, Michael S. Tsirkin wrote:
> > > This defines __smp_xxx barriers for powerpc
> > > for use by virtualization.
> > > 
> > > smp_xxx barriers are removed as they are
> > > defined correctly by asm-generic/barriers.h
> 
> I think this is the part that was missed in review.
> 

Yes, I realized my mistake after rereading the series. But smp_lwsync() is
not defined in asm-generic/barriers.h, right?

> > > This reduces the amount of arch-specific boiler-plate code.
> > > 
> > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > > ---
> > >  arch/powerpc/include/asm/barrier.h | 24 ++++++++----------------
> > >  1 file changed, 8 insertions(+), 16 deletions(-)
> > > 
> > > diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
> > > index 980ad0c..c0deafc 100644
> > > --- a/arch/powerpc/include/asm/barrier.h
> > > +++ b/arch/powerpc/include/asm/barrier.h
> > > @@ -44,19 +44,11 @@
> > >  #define dma_rmb()	__lwsync()
> > >  #define dma_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > >  
> > > -#ifdef CONFIG_SMP
> > > -#define smp_lwsync()	__lwsync()
> > > +#define __smp_lwsync()	__lwsync()
> > >  
> > 
> > so __smp_lwsync() is always mapped to lwsync, right?
> 
> Yes.
> 
> > > -#define smp_mb()	mb()
> > > -#define smp_rmb()	__lwsync()
> > > -#define smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > > -#else
> > > -#define smp_lwsync()	barrier()
> > > -
> > > -#define smp_mb()	barrier()
> > > -#define smp_rmb()	barrier()
> > > -#define smp_wmb()	barrier()
> > > -#endif /* CONFIG_SMP */
> > > +#define __smp_mb()	mb()
> > > +#define __smp_rmb()	__lwsync()
> > > +#define __smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > >  
> > >  /*
> > >   * This is a barrier which prevents following instructions from being
> > > @@ -67,18 +59,18 @@
> > >  #define data_barrier(x)	\
> > >  	asm volatile("twi 0,%0,0; isync" : : "r" (x) : "memory");
> > >  
> > > -#define smp_store_release(p, v)						\
> > > +#define __smp_store_release(p, v)						\
> > >  do {									\
> > >  	compiletime_assert_atomic_type(*p);				\
> > > -	smp_lwsync();							\
> > > +	__smp_lwsync();							\
> > 
> > , therefore this will emit an lwsync no matter SMP or UP.
> 
> Absolutely. But smp_store_release (without __) will not.
> 
> Please note I did test this: for ppc code before and after
> this patch generates exactly the same binary on SMP and UP.
> 

Yes, you're right, sorry for my mistake...

> 
> > Another thing is that smp_lwsync() may have a third user(other than
> > smp_load_acquire() and smp_store_release()):
> > 
> > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> > 
> > I'm OK to change my patch accordingly, but do we really want
> > smp_lwsync() get involved in this cleanup? If I understand you
> > correctly, this cleanup focuses on external API like smp_{r,w,}mb(),
> > while smp_lwsync() is internal to PPC.
> > 
> > Regards,
> > Boqun
> 
> I think you missed the leading ___ :)
> 

What I mean here was smp_lwsync() was originally internal to PPC, but
never mind ;-)

> smp_store_release is external and it needs __smp_lwsync as
> defined here.
> 
> I can duplicate some code and have smp_lwsync *not* call __smp_lwsync

You mean bringing smp_lwsync() back? Because I haven't seen you define it
in asm-generic/barrier.h in previous patches, and you just delete it in
this patch.

> but why do this? Still, if you prefer it this way,
> please let me know.
> 

I think deleting smp_lwsync() is fine, though I need to change atomic
variants patches on PPC because of it ;-/

Regards,
Boqun

> > >  	WRITE_ONCE(*p, v);						\
> > >  } while (0)
> > >  
> > > -#define smp_load_acquire(p)						\
> > > +#define __smp_load_acquire(p)						\
> > >  ({									\
> > >  	typeof(*p) ___p1 = READ_ONCE(*p);				\
> > >  	compiletime_assert_atomic_type(*p);				\
> > > -	smp_lwsync();							\
> > > +	__smp_lwsync();							\
> > >  	___p1;								\
> > >  })
> > >  
> > > -- 
> > > MST
> > > 
> > > --
> > > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> > > the body of a message to majordomo@vger.kernel.org
> > > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> > > Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 15/32] powerpc: define __smp_xxx
@ 2016-01-05  9:53           ` Boqun Feng
  0 siblings, 0 replies; 572+ messages in thread
From: Boqun Feng @ 2016-01-05  9:53 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-kernel, Peter Zijlstra, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Ingo Molnar, Davidlohr Bueso, Andrey Konovalov,
	Paul E. McKenney

On Tue, Jan 05, 2016 at 10:51:17AM +0200, Michael S. Tsirkin wrote:
> On Tue, Jan 05, 2016 at 09:36:55AM +0800, Boqun Feng wrote:
> > Hi Michael,
> > 
> > On Thu, Dec 31, 2015 at 09:07:42PM +0200, Michael S. Tsirkin wrote:
> > > This defines __smp_xxx barriers for powerpc
> > > for use by virtualization.
> > > 
> > > smp_xxx barriers are removed as they are
> > > defined correctly by asm-generic/barriers.h
> 
> I think this is the part that was missed in review.
> 

Yes, I realized my mistake after reread the series. But smp_lwsync() is
not defined in asm-generic/barriers.h, right?

> > > This reduces the amount of arch-specific boiler-plate code.
> > > 
> > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > > ---
> > >  arch/powerpc/include/asm/barrier.h | 24 ++++++++----------------
> > >  1 file changed, 8 insertions(+), 16 deletions(-)
> > > 
> > > diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
> > > index 980ad0c..c0deafc 100644
> > > --- a/arch/powerpc/include/asm/barrier.h
> > > +++ b/arch/powerpc/include/asm/barrier.h
> > > @@ -44,19 +44,11 @@
> > >  #define dma_rmb()	__lwsync()
> > >  #define dma_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > >  
> > > -#ifdef CONFIG_SMP
> > > -#define smp_lwsync()	__lwsync()
> > > +#define __smp_lwsync()	__lwsync()
> > >  
> > 
> > so __smp_lwsync() is always mapped to lwsync, right?
> 
> Yes.
> 
> > > -#define smp_mb()	mb()
> > > -#define smp_rmb()	__lwsync()
> > > -#define smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > > -#else
> > > -#define smp_lwsync()	barrier()
> > > -
> > > -#define smp_mb()	barrier()
> > > -#define smp_rmb()	barrier()
> > > -#define smp_wmb()	barrier()
> > > -#endif /* CONFIG_SMP */
> > > +#define __smp_mb()	mb()
> > > +#define __smp_rmb()	__lwsync()
> > > +#define __smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > >  
> > >  /*
> > >   * This is a barrier which prevents following instructions from being
> > > @@ -67,18 +59,18 @@
> > >  #define data_barrier(x)	\
> > >  	asm volatile("twi 0,%0,0; isync" : : "r" (x) : "memory");
> > >  
> > > -#define smp_store_release(p, v)						\
> > > +#define __smp_store_release(p, v)						\
> > >  do {									\
> > >  	compiletime_assert_atomic_type(*p);				\
> > > -	smp_lwsync();							\
> > > +	__smp_lwsync();							\
> > 
> > , therefore this will emit an lwsync no matter SMP or UP.
> 
> Absolutely. But smp_store_release (without __) will not.
> 
> Please note I did test this: for ppc code before and after
> this patch generates exactly the same binary on SMP and UP.
> 

Yes, you're right, sorry for my mistake...

> 
> > Another thing is that smp_lwsync() may have a third user(other than
> > smp_load_acquire() and smp_store_release()):
> > 
> > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> > 
> > I'm OK to change my patch accordingly, but do we really want
> > smp_lwsync() get involved in this cleanup? If I understand you
> > correctly, this cleanup focuses on external API like smp_{r,w,}mb(),
> > while smp_lwsync() is internal to PPC.
> > 
> > Regards,
> > Boqun
> 
> I think you missed the leading ___ :)
> 

What I mean here was smp_lwsync() was originally internal to PPC, but
never mind ;-)

> smp_store_release is external and it needs __smp_lwsync as
> defined here.
> 
> I can duplicate some code and have smp_lwsync *not* call __smp_lwsync

You mean bringing smp_lwsync() back? because I haven't seen you defining
in asm-generic/barriers.h in previous patches and you just delete it in
this patch.

> but why do this? Still, if you prefer it this way,
> please let me know.
> 

I think deleting smp_lwsync() is fine, though I need to change atomic
variants patches on PPC because of it ;-/

Regards,
Boqun

> > >  	WRITE_ONCE(*p, v);						\
> > >  } while (0)
> > >  
> > > -#define smp_load_acquire(p)						\
> > > +#define __smp_load_acquire(p)						\
> > >  ({									\
> > >  	typeof(*p) ___p1 = READ_ONCE(*p);				\
> > >  	compiletime_assert_atomic_type(*p);				\
> > > -	smp_lwsync();							\
> > > +	__smp_lwsync();							\
> > >  	___p1;								\
> > >  })
> > >  
> > > -- 
> > > MST
> > > 
> > > --
> > > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> > > the body of a message to majordomo@vger.kernel.org
> > > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> > > Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 15/32] powerpc: define __smp_xxx
@ 2016-01-05  9:53           ` Boqun Feng
  0 siblings, 0 replies; 572+ messages in thread
From: Boqun Feng @ 2016-01-05  9:53 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-kernel, Peter Zijlstra, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Benjamin Herrenschmidt, Paul Mackerras

On Tue, Jan 05, 2016 at 10:51:17AM +0200, Michael S. Tsirkin wrote:
> On Tue, Jan 05, 2016 at 09:36:55AM +0800, Boqun Feng wrote:
> > Hi Michael,
> > 
> > On Thu, Dec 31, 2015 at 09:07:42PM +0200, Michael S. Tsirkin wrote:
> > > This defines __smp_xxx barriers for powerpc
> > > for use by virtualization.
> > > 
> > > smp_xxx barriers are removed as they are
> > > defined correctly by asm-generic/barriers.h
> 
> I think this is the part that was missed in review.
> 

Yes, I realized my mistake after reread the series. But smp_lwsync() is
not defined in asm-generic/barriers.h, right?

> > > This reduces the amount of arch-specific boiler-plate code.
> > > 
> > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > > ---
> > >  arch/powerpc/include/asm/barrier.h | 24 ++++++++----------------
> > >  1 file changed, 8 insertions(+), 16 deletions(-)
> > > 
> > > diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
> > > index 980ad0c..c0deafc 100644
> > > --- a/arch/powerpc/include/asm/barrier.h
> > > +++ b/arch/powerpc/include/asm/barrier.h
> > > @@ -44,19 +44,11 @@
> > >  #define dma_rmb()	__lwsync()
> > >  #define dma_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > >  
> > > -#ifdef CONFIG_SMP
> > > -#define smp_lwsync()	__lwsync()
> > > +#define __smp_lwsync()	__lwsync()
> > >  
> > 
> > so __smp_lwsync() is always mapped to lwsync, right?
> 
> Yes.
> 
> > > -#define smp_mb()	mb()
> > > -#define smp_rmb()	__lwsync()
> > > -#define smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > > -#else
> > > -#define smp_lwsync()	barrier()
> > > -
> > > -#define smp_mb()	barrier()
> > > -#define smp_rmb()	barrier()
> > > -#define smp_wmb()	barrier()
> > > -#endif /* CONFIG_SMP */
> > > +#define __smp_mb()	mb()
> > > +#define __smp_rmb()	__lwsync()
> > > +#define __smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > >  
> > >  /*
> > >   * This is a barrier which prevents following instructions from being
> > > @@ -67,18 +59,18 @@
> > >  #define data_barrier(x)	\
> > >  	asm volatile("twi 0,%0,0; isync" : : "r" (x) : "memory");
> > >  
> > > -#define smp_store_release(p, v)						\
> > > +#define __smp_store_release(p, v)						\
> > >  do {									\
> > >  	compiletime_assert_atomic_type(*p);				\
> > > -	smp_lwsync();							\
> > > +	__smp_lwsync();							\
> > 
> > , therefore this will emit an lwsync no matter SMP or UP.
> 
> Absolutely. But smp_store_release (without __) will not.
> 
> Please note I did test this: for ppc code before and after
> this patch generates exactly the same binary on SMP and UP.
> 

Yes, you're right, sorry for my mistake...

> 
> > Another thing is that smp_lwsync() may have a third user(other than
> > smp_load_acquire() and smp_store_release()):
> > 
> > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> > 
> > I'm OK to change my patch accordingly, but do we really want
> > smp_lwsync() get involved in this cleanup? If I understand you
> > correctly, this cleanup focuses on external API like smp_{r,w,}mb(),
> > while smp_lwsync() is internal to PPC.
> > 
> > Regards,
> > Boqun
> 
> I think you missed the leading ___ :)
> 

What I mean here was smp_lwsync() was originally internal to PPC, but
never mind ;-)

> smp_store_release is external and it needs __smp_lwsync as
> defined here.
> 
> I can duplicate some code and have smp_lwsync *not* call __smp_lwsync

You mean bringing smp_lwsync() back? because I haven't seen you defining
in asm-generic/barriers.h in previous patches and you just delete it in
this patch.

> but why do this? Still, if you prefer it this way,
> please let me know.
> 

I think deleting smp_lwsync() is fine, though I need to change atomic
variants patches on PPC because of it ;-/

Regards,
Boqun

> > >  	WRITE_ONCE(*p, v);						\
> > >  } while (0)
> > >  
> > > -#define smp_load_acquire(p)						\
> > > +#define __smp_load_acquire(p)						\
> > >  ({									\
> > >  	typeof(*p) ___p1 = READ_ONCE(*p);				\
> > >  	compiletime_assert_atomic_type(*p);				\
> > > -	smp_lwsync();							\
> > > +	__smp_lwsync();							\
> > >  	___p1;								\
> > >  })
> > >  
> > > -- 
> > > MST
> > > 
> > > --
> > > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> > > the body of a message to majordomo@vger.kernel.org
> > > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> > > Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 15/32] powerpc: define __smp_xxx
  2016-01-05  8:51         ` Michael S. Tsirkin
                           ` (2 preceding siblings ...)
  (?)
@ 2016-01-05  9:53         ` Boqun Feng
  -1 siblings, 0 replies; 572+ messages in thread
From: Boqun Feng @ 2016-01-05  9:53 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra,
	Benjamin Herrenschmidt, virtualization, Paul Mackerras,
	H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
	Davidlohr Bueso, Arnd Bergmann, Michael Ellerman, x86, xen-devel,
	Ingo Molnar, Paul E. McKenney, linux-xtensa,
	user-mode-linux-devel, Stefano Stabellini, Andrey Konovalov,
	adi-buildroot-devel, Thomas Gleixner

On Tue, Jan 05, 2016 at 10:51:17AM +0200, Michael S. Tsirkin wrote:
> On Tue, Jan 05, 2016 at 09:36:55AM +0800, Boqun Feng wrote:
> > Hi Michael,
> > 
> > On Thu, Dec 31, 2015 at 09:07:42PM +0200, Michael S. Tsirkin wrote:
> > > This defines __smp_xxx barriers for powerpc
> > > for use by virtualization.
> > > 
> > > smp_xxx barriers are removed as they are
> > > defined correctly by asm-generic/barriers.h
> 
> I think this is the part that was missed in review.
> 

Yes, I realized my mistake after rereading the series. But smp_lwsync() is
not defined in asm-generic/barriers.h, right?

> > > This reduces the amount of arch-specific boiler-plate code.
> > > 
> > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > > ---
> > >  arch/powerpc/include/asm/barrier.h | 24 ++++++++----------------
> > >  1 file changed, 8 insertions(+), 16 deletions(-)
> > > 
> > > diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
> > > index 980ad0c..c0deafc 100644
> > > --- a/arch/powerpc/include/asm/barrier.h
> > > +++ b/arch/powerpc/include/asm/barrier.h
> > > @@ -44,19 +44,11 @@
> > >  #define dma_rmb()	__lwsync()
> > >  #define dma_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > >  
> > > -#ifdef CONFIG_SMP
> > > -#define smp_lwsync()	__lwsync()
> > > +#define __smp_lwsync()	__lwsync()
> > >  
> > 
> > so __smp_lwsync() is always mapped to lwsync, right?
> 
> Yes.
> 
> > > -#define smp_mb()	mb()
> > > -#define smp_rmb()	__lwsync()
> > > -#define smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > > -#else
> > > -#define smp_lwsync()	barrier()
> > > -
> > > -#define smp_mb()	barrier()
> > > -#define smp_rmb()	barrier()
> > > -#define smp_wmb()	barrier()
> > > -#endif /* CONFIG_SMP */
> > > +#define __smp_mb()	mb()
> > > +#define __smp_rmb()	__lwsync()
> > > +#define __smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > >  
> > >  /*
> > >   * This is a barrier which prevents following instructions from being
> > > @@ -67,18 +59,18 @@
> > >  #define data_barrier(x)	\
> > >  	asm volatile("twi 0,%0,0; isync" : : "r" (x) : "memory");
> > >  
> > > -#define smp_store_release(p, v)						\
> > > +#define __smp_store_release(p, v)						\
> > >  do {									\
> > >  	compiletime_assert_atomic_type(*p);				\
> > > -	smp_lwsync();							\
> > > +	__smp_lwsync();							\
> > 
> > , therefore this will emit an lwsync no matter SMP or UP.
> 
> Absolutely. But smp_store_release (without __) will not.
> 
> Please note I did test this: for ppc code before and after
> this patch generates exactly the same binary on SMP and UP.
> 

Yes, you're right, sorry for my mistake...

> 
> > Another thing is that smp_lwsync() may have a third user (other than
> > smp_load_acquire() and smp_store_release()):
> > 
> > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> > 
> > I'm OK to change my patch accordingly, but do we really want
> > smp_lwsync() to get involved in this cleanup? If I understand you
> > correctly, this cleanup focuses on external APIs like smp_{r,w,}mb(),
> > while smp_lwsync() is internal to PPC.
> > 
> > Regards,
> > Boqun
> 
> I think you missed the leading ___ :)
> 

What I meant here was that smp_lwsync() was originally internal to PPC, but
never mind ;-)

> smp_store_release is external and it needs __smp_lwsync as
> defined here.
> 
> I can duplicate some code and have smp_lwsync *not* call __smp_lwsync

You mean bringing smp_lwsync() back? Because I haven't seen you define it
in asm-generic/barriers.h in previous patches, and you just delete it in
this patch.

> but why do this? Still, if you prefer it this way,
> please let me know.
> 

I think deleting smp_lwsync() is fine, though I need to change the atomic
variants patches on PPC because of it ;-/

Regards,
Boqun

> > >  	WRITE_ONCE(*p, v);						\
> > >  } while (0)
> > >  
> > > -#define smp_load_acquire(p)						\
> > > +#define __smp_load_acquire(p)						\
> > >  ({									\
> > >  	typeof(*p) ___p1 = READ_ONCE(*p);				\
> > >  	compiletime_assert_atomic_type(*p);				\
> > > -	smp_lwsync();							\
> > > +	__smp_lwsync();							\
> > >  	___p1;								\
> > >  })
> > >  
> > > -- 
> > > MST
> > > 
> > > --
> > > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> > > the body of a message to majordomo@vger.kernel.org
> > > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> > > Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 22/32] s390: define __smp_xxx
  2016-01-05  9:30           ` Michael S. Tsirkin
  (?)
  (?)
@ 2016-01-05 12:08             ` Martin Schwidefsky
  -1 siblings, 0 replies; 572+ messages in thread
From: Martin Schwidefsky @ 2016-01-05 12:08 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Peter Zijlstra, linux-kernel, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Heiko Carstens, Ingo Molnar

On Tue, 5 Jan 2016 11:30:19 +0200
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Tue, Jan 05, 2016 at 09:13:19AM +0100, Martin Schwidefsky wrote:
> > On Mon, 4 Jan 2016 22:18:58 +0200
> > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > 
> > > On Mon, Jan 04, 2016 at 02:45:25PM +0100, Peter Zijlstra wrote:
> > > > On Thu, Dec 31, 2015 at 09:08:38PM +0200, Michael S. Tsirkin wrote:
> > > > > This defines __smp_xxx barriers for s390,
> > > > > for use by virtualization.
> > > > > 
> > > > > Some smp_xxx barriers are removed as they are
> > > > > defined correctly by asm-generic/barriers.h
> > > > > 
> > > > > Note: smp_mb, smp_rmb and smp_wmb are defined as full barriers
> > > > > unconditionally on this architecture.
> > > > > 
> > > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > > > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > > > > ---
> > > > >  arch/s390/include/asm/barrier.h | 15 +++++++++------
> > > > >  1 file changed, 9 insertions(+), 6 deletions(-)
> > > > > 
> > > > > diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
> > > > > index c358c31..fbd25b2 100644
> > > > > --- a/arch/s390/include/asm/barrier.h
> > > > > +++ b/arch/s390/include/asm/barrier.h
> > > > > @@ -26,18 +26,21 @@
> > > > >  #define wmb()				barrier()
> > > > >  #define dma_rmb()			mb()
> > > > >  #define dma_wmb()			mb()
> > > > > -#define smp_mb()			mb()
> > > > > -#define smp_rmb()			rmb()
> > > > > -#define smp_wmb()			wmb()
> > > > > -
> > > > > -#define smp_store_release(p, v)						\
> > > > > +#define __smp_mb()			mb()
> > > > > +#define __smp_rmb()			rmb()
> > > > > +#define __smp_wmb()			wmb()
> > > > > +#define smp_mb()			__smp_mb()
> > > > > +#define smp_rmb()			__smp_rmb()
> > > > > +#define smp_wmb()			__smp_wmb()
> > > > 
> > > > Why define the smp_*mb() primitives here? Would not the inclusion of
> > > > asm-generic/barrier.h do this?
> > > 
> > > No because the generic one is a nop on !SMP, this one isn't.
> > > 
> > > Pls note this patch is just reordering code without making
> > > functional changes.
> > > And at the moment, on s390 smp_xxx barriers are always non empty.
> > 
> > The s390 kernel is SMP in 99.99% of cases; we just didn't bother with a
> > non-SMP variant of the memory barriers. If the generic header
> > is used, we'd get the non-SMP version for free. It will save a
> > small amount of text space for CONFIG_SMP=n.
> 
> OK, so I'll queue a patch to do this then?

Yes please.
 
> Just to make sure: the question would be, are smp_xxx barriers ever used
> in s390 arch specific code to flush in/out memory accesses for
> synchronization with the hypervisor?
> 
> I went over s390 arch code and it seems to me the answer is no
> (except of course for virtio).

Correct. Guest to host communication either uses instructions which
imply a memory barrier or QDIO which uses atomics.

> But I also see a lot of weirdness on this architecture.

Mostly historical; s390 is actually one of the easiest architectures with
regard to memory barriers.

> I found these calls:
> 
> arch/s390/include/asm/bitops.h: smp_mb__before_atomic();
> arch/s390/include/asm/bitops.h: smp_mb();
> 
> Not used in arch specific code so this is likely OK.

This has been introduced with git commit 5402ea6af11dc5a9, the smp_mb
and smp_mb__before_atomic are used in clear_bit_unlock and
__clear_bit_unlock which are 1:1 copies from the code in
include/asm-generic/bitops/lock.h. Only test_and_set_bit_lock differs
from the generic implementation.

> arch/s390/kernel/vdso.c:        smp_mb();
> 
> Looking at
> 	Author: Christian Borntraeger <borntraeger@de.ibm.com>
> 	Date:   Fri Sep 11 16:23:06 2015 +0200
> 
> 	    s390/vdso: use correct memory barrier
> 
> 	    By definition smp_wmb only orders writes against writes. (Finish all
> 	    previous writes, and do not start any future write). To protect the
> 	    vdso init code against early reads on other CPUs, let's use a full
> 	    smp_mb at the end of vdso init. As right now smp_wmb is implemented
> 	    as full serialization, this needs no stable backport, but this change
> 	    will be necessary if we reimplement smp_wmb.
> 
> ok from hypervisor point of view, but it's also strange:
> 1. why isn't this paired with another mb somewhere?
>    this seems to violate barrier pairing rules.
> 2. how does smp_mb protect against early reads on other CPUs?
>    It normally does not: it orders reads from this CPU versus writes
>    from same CPU. But init code does not appear to read anything.
>    Maybe this is some s390 specific trick?
> 
> I could not figure out the above commit.

That smp_mb can be removed. The initial s390 vdso code is heavily influenced
by the powerpc version which does have a smp_wmb in vdso_init right before
the vdso_ready=1 assignment. s390 has no need for that.
 
> 
> arch/s390/kvm/kvm-s390.c:       smp_mb();
> 
> Does not appear to be paired with anything.

This one does not make sense to me. IMHO it can be removed as well.
 
> arch/s390/lib/spinlock.c:               smp_mb();
> arch/s390/lib/spinlock.c:                       smp_mb();
> 
> Seems ok, and appears paired properly.
> Just to make sure - spinlock is not paravirtualized on s390, is it?

s390 just uses the compare-and-swap instruction for the basic lock/unlock
operation, which implies the memory barrier. We do call the hypervisor for
contended locks if the lock cannot be acquired after a number of retries.

A while ago we did play with ticket spinlocks, but they behaved badly in
our usual virtualized environments. If we find the time, we might take a
closer look at the para-virtualized queued spinlocks.

> arch/s390/kernel/time.c:        smp_wmb();
> arch/s390/kernel/time.c:        smp_wmb();
> arch/s390/kernel/time.c:        smp_wmb();
> arch/s390/kernel/time.c:        smp_wmb();
> 
> It's all around vdso, so I'm guessing userspace is using this,
> this is why there's no pairing.

Correct, this is the update-count mechanism used by the vdso user space code.

> > > Some of this could be sub-optimal, but
> > > since on s390 Linux always runs on a hypervisor,
> > > I am not sure it's safe to use the generic version -
> > > in other words, it just might be that for s390 smp_ and virt_
> > > barriers must be equivalent.
> > 
> > The definition of the memory barriers is independent from the fact
> > if the system is running on an hypervisor or not.
> > Is there really
> > an architecture where you need special virt_xxx barriers?!? 
> 
> It is whenever host and guest or two guests access memory at
> the same time.
> 
> The optimization where smp_xxx barriers are compiled out when
> CONFIG_SMP is cleared means that two UP guests running
> on an SMP host can not use smp_xxx barriers for communication.
> 
> See explanation here:
> http://thread.gmane.org/gmane.linux.kernel.virtualization/26555

Got it, makes sense.

-- 
blue skies,
   Martin.

"Reality continues to ruin my life." - Calvin.


^ permalink raw reply	[flat|nested] 572+ messages in thread

> 
> 	    By definition smp_wmb only orders writes against writes. (Finish all
> 	    previous writes, and do not start any future write). To protect the
> 	    vdso init code against early reads on other CPUs, let's use a full
> 	    smp_mb at the end of vdso init. As right now smp_wmb is implemented
> 	    as full serialization, this needs no stable backport, but this change
> 	    will be necessary if we reimplement smp_wmb.
> 
> ok from hypervisor point of view, but it's also strange:
> 1. why isn't this paired with another mb somewhere?
>    this seems to violate barrier pairing rules.
> 2. how does smp_mb protect against early reads on other CPUs?
>    It normally does not: it orders reads from this CPU versus writes
>    from same CPU. But init code does not appear to read anything.
>    Maybe this is some s390 specific trick?
> 
> I could not figure out the above commit.

That smp_mb can be removed. The initial s390 vdso code is heavily influenced
by the powerpc version which does have a smp_wmb in vdso_init right before
the vdso_ready=1 assignment. s390 has no need for that.
 
> 
> arch/s390/kvm/kvm-s390.c:       smp_mb();
> 
> Does not appear to be paired with anything.

This one does not make sense to me. Imho can be removed as well. 
 
> arch/s390/lib/spinlock.c:               smp_mb();
> arch/s390/lib/spinlock.c:                       smp_mb();
> 
> Seems ok, and appears paired properly.
> Just to make sure - spinlock is not paravirtualized on s390, is it?

s390 just uses the compare-and-swap instruction for the basic lock/unlock
operation, this implies the memory barrier. We do call the hypervisor for
contended locks if the lock can not be acquired after a number of retries.

A while ago we did play with ticket spinlocks but they behaved badly in
out usual virtualized environments. If we find the time we might take a
closer look at the para-virtualized queued spinlocks.

> arch/s390/kernel/time.c:        smp_wmb();
> arch/s390/kernel/time.c:        smp_wmb();
> arch/s390/kernel/time.c:        smp_wmb();
> arch/s390/kernel/time.c:        smp_wmb();
> 
> It's all around vdso, so I'm guessing userspace is using this,
> this is why there's no pairing.

Correct, this is the update count mechanics with the vdso user space code.

> > > Some of this could be sub-optimal, but
> > > since on s390 Linux always runs on a hypervisor,
> > > I am not sure it's safe to use the generic version -
> > > in other words, it just might be that for s390 smp_ and virt_
> > > barriers must be equivalent.
> > 
> > The definition of the memory barriers is independent from the fact
> > if the system is running on an hypervisor or not.
> > Is there really
> > an architecture where you need special virt_xxx barriers?!? 
> 
> It is whenever host and guest or two guests access memory at
> the same time.
> 
> The optimization where smp_xxx barriers are compiled out when
> CONFIG_SMP is cleared means that two UP guests running
> on an SMP host can not use smp_xxx barriers for communication.
> 
> See explanation here:
> http://thread.gmane.org/gmane.linux.kernel.virtualization/26555

Got it, makes sense.

-- 
blue skies,
   Martin.

"Reality continues to ruin my life." - Calvin.

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 22/32] s390: define __smp_xxx
  2016-01-05 12:08             ` Martin Schwidefsky
  (?)
  (?)
@ 2016-01-05 13:04               ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-05 13:04 UTC (permalink / raw)
  To: Martin Schwidefsky
  Cc: Peter Zijlstra, linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	Arnd Bergmann, linux-arch-u79uwXL29TY76Z2rM5mHXA, Andrew Cooper,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	Stefano Stabellini, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	David Miller, linux-ia64-u79uwXL29TY76Z2rM5mHXA,
	linuxppc-dev-uLR06cmDAlY/bJ5BZ2RsiQ,
	linux-s390-u79uwXL29TY76Z2rM5mHXA,
	sparclinux-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	linux-metag-u79uwXL29TY76Z2rM5mHXA,
	linux-mips-6z/3iImG2C8G8FEW9MqTrA, x86-DgEjT+Ai2ygdnm+yROfE0A,
	user-mode-linux-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
	adi-buildroot-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
	linux-sh-u79uwXL29TY76Z2rM5mHXA,
	linux-xtensa-PjhNF2WwrV/0Sa2dR60CXw,
	xen-devel-GuqFBffKawtpuQazS67q72D2FQJk+8+b, Heiko Carstens,
	Ingo Molnar

On Tue, Jan 05, 2016 at 01:08:52PM +0100, Martin Schwidefsky wrote:
> On Tue, 5 Jan 2016 11:30:19 +0200
> "Michael S. Tsirkin" <mst@redhat.com> wrote:
> 
> > On Tue, Jan 05, 2016 at 09:13:19AM +0100, Martin Schwidefsky wrote:
> > > On Mon, 4 Jan 2016 22:18:58 +0200
> > > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > > 
> > > > On Mon, Jan 04, 2016 at 02:45:25PM +0100, Peter Zijlstra wrote:
> > > > > On Thu, Dec 31, 2015 at 09:08:38PM +0200, Michael S. Tsirkin wrote:
> > > > > > This defines __smp_xxx barriers for s390,
> > > > > > for use by virtualization.
> > > > > > 
> > > > > > Some smp_xxx barriers are removed as they are
> > > > > > defined correctly by asm-generic/barrier.h
> > > > > > 
> > > > > > Note: smp_mb, smp_rmb and smp_wmb are defined as full barriers
> > > > > > unconditionally on this architecture.
> > > > > > 
> > > > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > > > > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > > > > > ---
> > > > > >  arch/s390/include/asm/barrier.h | 15 +++++++++------
> > > > > >  1 file changed, 9 insertions(+), 6 deletions(-)
> > > > > > 
> > > > > > diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
> > > > > > index c358c31..fbd25b2 100644
> > > > > > --- a/arch/s390/include/asm/barrier.h
> > > > > > +++ b/arch/s390/include/asm/barrier.h
> > > > > > @@ -26,18 +26,21 @@
> > > > > >  #define wmb()				barrier()
> > > > > >  #define dma_rmb()			mb()
> > > > > >  #define dma_wmb()			mb()
> > > > > > -#define smp_mb()			mb()
> > > > > > -#define smp_rmb()			rmb()
> > > > > > -#define smp_wmb()			wmb()
> > > > > > -
> > > > > > -#define smp_store_release(p, v)						\
> > > > > > +#define __smp_mb()			mb()
> > > > > > +#define __smp_rmb()			rmb()
> > > > > > +#define __smp_wmb()			wmb()
> > > > > > +#define smp_mb()			__smp_mb()
> > > > > > +#define smp_rmb()			__smp_rmb()
> > > > > > +#define smp_wmb()			__smp_wmb()
> > > > > 
> > > > > Why define the smp_*mb() primitives here? Would not the inclusion of
> > > > > asm-generic/barrier.h do this?
> > > > 
> > > > No because the generic one is a nop on !SMP, this one isn't.
> > > > 
> > > > Pls note this patch is just reordering code without making
> > > > functional changes.
> > > > And at the moment, on s390 smp_xxx barriers are always non empty.
> > > 
> > > The s390 kernel is SMP to 99.99%, we just didn't bother with a
> > > non-smp variant for the memory-barriers. If the generic header
> > > is used we'd get the non-smp version for free. It will save a
> > > small amount of text space for CONFIG_SMP=n. 
> > 
> > OK, so I'll queue a patch to do this then?
> 
> Yes please.

OK, I'll add a patch on top in v3.

> > Just to make sure: the question would be, are smp_xxx barriers ever used
> > in s390 arch specific code to flush in/out memory accesses for
> > synchronization with the hypervisor?
> > 
> > I went over s390 arch code and it seems to me the answer is no
> > (except of course for virtio).
> 
> Correct. Guest to host communication either uses instructions which
> imply a memory barrier or QDIO which uses atomics.

And atomics imply a barrier on s390, right?

> > But I also see a lot of weirdness on this architecture.
> 
> Mostly historical, s390 actually is one of the easiest architectures in
> regard to memory barriers.
> 
> > I found these calls:
> > 
> > arch/s390/include/asm/bitops.h: smp_mb__before_atomic();
> > arch/s390/include/asm/bitops.h: smp_mb();
> > 
> > Not used in arch specific code so this is likely OK.
> 
> This has been introduced with git commit 5402ea6af11dc5a9, the smp_mb
> and smp_mb__before_atomic are used in clear_bit_unlock and
> __clear_bit_unlock which are 1:1 copies from the code in
> include/asm-generic/bitops/lock.h. Only test_and_set_bit_lock differs
> from the generic implementation.

Something to keep in mind, but I'd rather not touch bitops at the
moment - this patchset is already too big.
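
For reference, the shape those helpers mirror is roughly the following.
This is only an illustrative sketch of the semantics of
include/asm-generic/bitops/lock.h, not a copy of it; the release-ordered
atomic stands in for the smp_mb__before_atomic() + clear_bit() sequence,
and the _sketch name is made up:

```c
/* Sketch of a generic clear_bit_unlock(): release a bit-lock by
 * clearing the bit with release ordering. Illustrative only. */
#include <assert.h>

static unsigned long bitmap_word;

static void clear_bit_unlock_sketch(int nr, unsigned long *addr)
{
	/* The generic code does smp_mb__before_atomic() followed by
	 * clear_bit(); a release-ordered fetch-and plays the same
	 * role here. */
	__atomic_fetch_and(addr, ~(1UL << nr), __ATOMIC_RELEASE);
}
```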

> > arch/s390/kernel/vdso.c:        smp_mb();
> > 
> > Looking at
> > 	Author: Christian Borntraeger <borntraeger@de.ibm.com>
> > 	Date:   Fri Sep 11 16:23:06 2015 +0200
> > 
> > 	    s390/vdso: use correct memory barrier
> > 
> > 	    By definition smp_wmb only orders writes against writes. (Finish all
> > 	    previous writes, and do not start any future write). To protect the
> > 	    vdso init code against early reads on other CPUs, let's use a full
> > 	    smp_mb at the end of vdso init. As right now smp_wmb is implemented
> > 	    as full serialization, this needs no stable backport, but this change
> > 	    will be necessary if we reimplement smp_wmb.
> > 
> > ok from hypervisor point of view, but it's also strange:
> > 1. why isn't this paired with another mb somewhere?
> >    this seems to violate barrier pairing rules.
> > 2. how does smp_mb protect against early reads on other CPUs?
> >    It normally does not: it orders reads from this CPU versus writes
> >    from same CPU. But init code does not appear to read anything.
> >    Maybe this is some s390 specific trick?
> > 
> > I could not figure out the above commit.
> 
> That smp_mb can be removed. The initial s390 vdso code is heavily influenced
> by the powerpc version which does have a smp_wmb in vdso_init right before
> the vdso_ready=1 assignment. s390 has no need for that.
>  
> > 
> > arch/s390/kvm/kvm-s390.c:       smp_mb();
> > 
> > Does not appear to be paired with anything.
> 
> This one does not make sense to me. Imho can be removed as well. 
>  
> > arch/s390/lib/spinlock.c:               smp_mb();
> > arch/s390/lib/spinlock.c:                       smp_mb();
> > 
> > Seems ok, and appears paired properly.
> > Just to make sure - spinlock is not paravirtualized on s390, is it?
> 
> s390 just uses the compare-and-swap instruction for the basic lock/unlock
> operation, this implies the memory barrier. We do call the hypervisor for
> contended locks if the lock can not be acquired after a number of retries.
> 
> A while ago we did play with ticket spinlocks but they behaved badly in
> our usual virtualized environments. If we find the time we might take a
> closer look at the para-virtualized queued spinlocks.
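
To make the lock scheme concrete for readers of the archive: a minimal
user-space sketch of such a compare-and-swap lock looks like the
following. This is illustrative only, not arch/s390/lib/spinlock.c; on
s390 the compare-and-swap instruction itself implies the barrier, and
here the seq_cst ordering of the __atomic builtins plays that role:

```c
/* Minimal sketch of a compare-and-swap based lock. The lock word
 * holds 0 when free and the (nonzero) owner id when held. */
#include <assert.h>

static int lock_word;

static void cs_lock(int owner)
{
	int expected = 0;

	/* The real code calls the hypervisor after a number of
	 * failed retries instead of spinning forever. */
	while (!__atomic_compare_exchange_n(&lock_word, &expected,
					    owner, 0,
					    __ATOMIC_SEQ_CST,
					    __ATOMIC_SEQ_CST))
		expected = 0;	/* CAS updates 'expected' on failure */
}

static void cs_unlock(void)
{
	__atomic_store_n(&lock_word, 0, __ATOMIC_SEQ_CST);
}
```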
> 
> > arch/s390/kernel/time.c:        smp_wmb();
> > arch/s390/kernel/time.c:        smp_wmb();
> > arch/s390/kernel/time.c:        smp_wmb();
> > arch/s390/kernel/time.c:        smp_wmb();
> > 
> > It's all around vdso, so I'm guessing userspace is using this,
> > this is why there's no pairing.
> 
> Correct, this is the update count mechanics with the vdso user space code.
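
The mechanics look roughly like this sketch (names are illustrative,
not the actual s390 time.c/vdso code; __sync_synchronize() stands in
for the barriers). The writer moves the counter through an odd value
while updating; readers retry until they see the same even value
before and after reading the data, so the pairing barrier lives in
user space rather than in another kernel path:

```c
/* Sketch of the update-count ("seqcount") scheme between the
 * timekeeping code and the vdso reader. Illustrative only. */
#include <assert.h>
#include <stdint.h>

#define smp_wmb()  __sync_synchronize()
#define smp_rmb()  __sync_synchronize()

static uint32_t tb_update_count;
static uint64_t xtime_nsec;

static void vdso_update(uint64_t now)
{
	tb_update_count++;		/* odd: update in progress */
	smp_wmb();
	xtime_nsec = now;
	smp_wmb();
	tb_update_count++;		/* even: update complete */
}

static uint64_t vdso_read(void)
{
	uint32_t seq;
	uint64_t v;

	do {
		seq = tb_update_count;
		smp_rmb();
		v = xtime_nsec;
		smp_rmb();
	} while ((seq & 1) || seq != tb_update_count);
	return v;
}
```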
> 
> > > > Some of this could be sub-optimal, but
> > > > since on s390 Linux always runs on a hypervisor,
> > > > I am not sure it's safe to use the generic version -
> > > > in other words, it just might be that for s390 smp_ and virt_
> > > > barriers must be equivalent.
> > > 
> > > The definition of the memory barriers is independent of whether
> > > the system is running on a hypervisor or not.
> > > Is there really
> > > an architecture where you need special virt_xxx barriers?!? 
> > 
> > It is whenever host and guest or two guests access memory at
> > the same time.
> > 
> > The optimization where smp_xxx barriers are compiled out when
> > CONFIG_SMP is cleared means that two UP guests running
> > on an SMP host can not use smp_xxx barriers for communication.
> > 
> > See explanation here:
> > http://thread.gmane.org/gmane.linux.kernel.virtualization/26555
> 
> Got it, makes sense.

An ack would be appreciated.
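
For the record, the virt_ wrappers this series adds boil down to the
following shape (a sketch of the idea, not the final patch; the vring_
names and guest_publish are made up for illustration):

```c
/* Sketch of the virt_ wrapper idea: virt_mb() expands to the __smp_*
 * form unconditionally, so even a CONFIG_SMP=n guest emits a real
 * barrier when synchronizing with the hypervisor or another guest,
 * while smp_mb() is still free to collapse on UP builds. */
#include <assert.h>

#define barrier()   __asm__ __volatile__("" ::: "memory")
#define __smp_mb()  __sync_synchronize()

#ifdef CONFIG_SMP
#define smp_mb()    __smp_mb()
#else
#define smp_mb()    barrier()		/* may be empty on UP */
#endif

#define virt_mb()   __smp_mb()		/* never empty, even on UP */

static int vring_data, vring_flag;

/* Guest-side publish to hypervisor-visible memory: must use virt_mb()
 * so the ordering survives a CONFIG_SMP=n build. */
static void guest_publish(int v)
{
	vring_data = v;
	virt_mb();
	vring_flag = 1;
}
```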

> -- 
> blue skies,
>    Martin.
> 
> "Reality continues to ruin my life." - Calvin.

^ permalink raw reply	[flat|nested] 572+ messages in thread

> and smp_mb__before_atomic are used in clear_bit_unlock and
> __clear_bit_unlock which are 1:1 copies from the code in
> include/asm-generic/bitops/lock.h. Only test_and_set_bit_lock differs
> from the generic implementation.

something to keep in mind, but
I'd rather not touch bitops at the moment - this patchset is already too big.

> > arch/s390/kernel/vdso.c:        smp_mb();
> > 
> > Looking at
> > 	Author: Christian Borntraeger <borntraeger@de.ibm.com>
> > 	Date:   Fri Sep 11 16:23:06 2015 +0200
> > 
> > 	    s390/vdso: use correct memory barrier
> > 
> > 	    By definition smp_wmb only orders writes against writes. (Finish all
> > 	    previous writes, and do not start any future write). To protect the
> > 	    vdso init code against early reads on other CPUs, let's use a full
> > 	    smp_mb at the end of vdso init. As right now smp_wmb is implemented
> > 	    as full serialization, this needs no stable backport, but this change
> > 	    will be necessary if we reimplement smp_wmb.
> > 
> > ok from hypervisor point of view, but it's also strange:
> > 1. why isn't this paired with another mb somewhere?
> >    this seems to violate barrier pairing rules.
> > 2. how does smp_mb protect against early reads on other CPUs?
> >    It normally does not: it orders reads from this CPU versus writes
> >    from same CPU. But init code does not appear to read anything.
> >    Maybe this is some s390 specific trick?
> > 
> > I could not figure out the above commit.
> 
> That smp_mb can be removed. The initial s390 vdso code is heavily influenced
> by the powerpc version which does have a smp_wmb in vdso_init right before
> the vdso_ready=1 assignment. s390 has no need for that.
>  
> > 
> > arch/s390/kvm/kvm-s390.c:       smp_mb();
> > 
> > Does not appear to be paired with anything.
> 
> This one does not make sense to me. Imho can be removed as well. 
>  
> > arch/s390/lib/spinlock.c:               smp_mb();
> > arch/s390/lib/spinlock.c:                       smp_mb();
> > 
> > Seems ok, and appears paired properly.
> > Just to make sure - spinlock is not paravirtualized on s390, is it?
> 
> s390 just uses the compare-and-swap instruction for the basic lock/unlock
> operation, this implies the memory barrier. We do call the hypervisor for
> contended locks if the lock can not be acquired after a number of retries.
> 
> A while ago we did play with ticket spinlocks but they behaved badly in
> our usual virtualized environments. If we find the time we might take a
> closer look at the para-virtualized queued spinlocks.
> 
> > arch/s390/kernel/time.c:        smp_wmb();
> > arch/s390/kernel/time.c:        smp_wmb();
> > arch/s390/kernel/time.c:        smp_wmb();
> > arch/s390/kernel/time.c:        smp_wmb();
> > 
> > It's all around vdso, so I'm guessing userspace is using this,
> > this is why there's no pairing.
> 
> Correct, this is the update count mechanics with the vdso user space code.
> 
> > > > Some of this could be sub-optimal, but
> > > > since on s390 Linux always runs on a hypervisor,
> > > > I am not sure it's safe to use the generic version -
> > > > in other words, it just might be that for s390 smp_ and virt_
> > > > barriers must be equivalent.
> > > 
> > > The definition of the memory barriers is independent from the fact
> > > if the system is running on an hypervisor or not.
> > > Is there really
> > > an architecture where you need special virt_xxx barriers?!? 
> > 
> > It is whenever host and guest or two guests access memory at
> > the same time.
> > 
> > The optimization where smp_xxx barriers are compiled out when
> > CONFIG_SMP is cleared means that two UP guests running
> > on an SMP host can not use smp_xxx barriers for communication.
> > 
> > See explanation here:
> > http://thread.gmane.org/gmane.linux.kernel.virtualization/26555
> 
> Got it, makes sense.

An ack would be appreciated.

> -- 
> blue skies,
>    Martin.
> 
> "Reality continues to ruin my life." - Calvin.

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 22/32] s390: define __smp_xxx
  2016-01-05 13:04               ` Michael S. Tsirkin
  (?)
  (?)
@ 2016-01-05 14:21                 ` Martin Schwidefsky
  -1 siblings, 0 replies; 572+ messages in thread
From: Martin Schwidefsky @ 2016-01-05 14:21 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Peter Zijlstra, linux-kernel, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Heiko Carstens, Ingo Molnar

On Tue, 5 Jan 2016 15:04:43 +0200
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Tue, Jan 05, 2016 at 01:08:52PM +0100, Martin Schwidefsky wrote:
> > On Tue, 5 Jan 2016 11:30:19 +0200
> > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > 
> > > On Tue, Jan 05, 2016 at 09:13:19AM +0100, Martin Schwidefsky wrote:
> > > > On Mon, 4 Jan 2016 22:18:58 +0200
> > > > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > > > 
> > > > > On Mon, Jan 04, 2016 at 02:45:25PM +0100, Peter Zijlstra wrote:
> > > > > > On Thu, Dec 31, 2015 at 09:08:38PM +0200, Michael S. Tsirkin wrote:
> > > > > > > This defines __smp_xxx barriers for s390,
> > > > > > > for use by virtualization.
> > > > > > > 
> > > > > > > Some smp_xxx barriers are removed as they are
> > > > > > > defined correctly by asm-generic/barrier.h
> > > > > > > 
> > > > > > > Note: smp_mb, smp_rmb and smp_wmb are defined as full barriers
> > > > > > > unconditionally on this architecture.
> > > > > > > 
> > > > > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > > > > > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > > > > > > ---
> > > > > > >  arch/s390/include/asm/barrier.h | 15 +++++++++------
> > > > > > >  1 file changed, 9 insertions(+), 6 deletions(-)
> > > > > > > 
> > > > > > > diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
> > > > > > > index c358c31..fbd25b2 100644
> > > > > > > --- a/arch/s390/include/asm/barrier.h
> > > > > > > +++ b/arch/s390/include/asm/barrier.h
> > > > > > > @@ -26,18 +26,21 @@
> > > > > > >  #define wmb()				barrier()
> > > > > > >  #define dma_rmb()			mb()
> > > > > > >  #define dma_wmb()			mb()
> > > > > > > -#define smp_mb()			mb()
> > > > > > > -#define smp_rmb()			rmb()
> > > > > > > -#define smp_wmb()			wmb()
> > > > > > > -
> > > > > > > -#define smp_store_release(p, v)						\
> > > > > > > +#define __smp_mb()			mb()
> > > > > > > +#define __smp_rmb()			rmb()
> > > > > > > +#define __smp_wmb()			wmb()
> > > > > > > +#define smp_mb()			__smp_mb()
> > > > > > > +#define smp_rmb()			__smp_rmb()
> > > > > > > +#define smp_wmb()			__smp_wmb()
> > > > > > 
> > > > > > Why define the smp_*mb() primitives here? Would not the inclusion of
> > > > > > asm-generic/barrier.h do this?
> > > > > 
> > > > > No because the generic one is a nop on !SMP, this one isn't.
> > > > > 
> > > > > Pls note this patch is just reordering code without making
> > > > > functional changes.
> > > > > And at the moment, on s390 smp_xxx barriers are always non empty.
> > > > 
> > > > The s390 kernel is SMP to 99.99%, we just didn't bother with a
> > > > non-smp variant for the memory-barriers. If the generic header
> > > > is used we'd get the non-smp version for free. It will save a
> > > > small amount of text space for CONFIG_SMP=n. 
> > > 
> > > OK, so I'll queue a patch to do this then?
> > 
> > Yes please.
> 
> OK, I'll add a patch on top in v3.

Good, with this addition:

Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

> > > Just to make sure: the question would be, are smp_xxx barriers ever used
> > > in s390 arch specific code to flush in/out memory accesses for
> > > synchronization with the hypervisor?
> > > 
> > > I went over s390 arch code and it seems to me the answer is no
> > > (except of course for virtio).
> > 
> > Correct. Guest to host communication either uses instructions which
> > imply a memory barrier or QDIO which uses atomics.
> 
> And atomics imply a barrier on s390, right?

Yes they do.

> > > But I also see a lot of weirdness on this architecture.
> > 
> > Mostly historical, s390 actually is one of the easiest architectures in
> > regard to memory barriers.
> > 
> > > I found these calls:
> > > 
> > > arch/s390/include/asm/bitops.h: smp_mb__before_atomic();
> > > arch/s390/include/asm/bitops.h: smp_mb();
> > > 
> > > Not used in arch specific code so this is likely OK.
> > 
> > This has been introduced with git commit 5402ea6af11dc5a9, the smp_mb
> > and smp_mb__before_atomic are used in clear_bit_unlock and
> > __clear_bit_unlock which are 1:1 copies from the code in
> > include/asm-generic/bitops/lock.h. Only test_and_set_bit_lock differs
> > from the generic implementation.
> 
> something to keep in mind, but
> I'd rather not touch bitops at the moment - this patchset is already too big.

With the conversion of smp_mb__before_atomic to a barrier() it does the
correct thing. I don't think that any change is necessary.

-- 
blue skies,
   Martin.

"Reality continues to ruin my life." - Calvin.


^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 22/32] s390: define __smp_xxx
@ 2016-01-05 14:21                 ` Martin Schwidefsky
  0 siblings, 0 replies; 572+ messages in thread
From: Martin Schwidefsky @ 2016-01-05 14:21 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Peter Zijlstra, linux-kernel, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Heiko Carstens, Ingo Molnar,
	Davidlohr Bueso, Christian Borntraeger, Andrey Konovalov,
	Carsten Otte

On Tue, 5 Jan 2016 15:04:43 +0200
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Tue, Jan 05, 2016 at 01:08:52PM +0100, Martin Schwidefsky wrote:
> > On Tue, 5 Jan 2016 11:30:19 +0200
> > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > 
> > > On Tue, Jan 05, 2016 at 09:13:19AM +0100, Martin Schwidefsky wrote:
> > > > On Mon, 4 Jan 2016 22:18:58 +0200
> > > > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > > > 
> > > > > On Mon, Jan 04, 2016 at 02:45:25PM +0100, Peter Zijlstra wrote:
> > > > > > On Thu, Dec 31, 2015 at 09:08:38PM +0200, Michael S. Tsirkin wrote:
> > > > > > > This defines __smp_xxx barriers for s390,
> > > > > > > for use by virtualization.
> > > > > > > 
> > > > > > > Some smp_xxx barriers are removed as they are
> > > > > > > defined correctly by asm-generic/barriers.h
> > > > > > > 
> > > > > > > Note: smp_mb, smp_rmb and smp_wmb are defined as full barriers
> > > > > > > unconditionally on this architecture.
> > > > > > > 
> > > > > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > > > > > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > > > > > > ---
> > > > > > >  arch/s390/include/asm/barrier.h | 15 +++++++++------
> > > > > > >  1 file changed, 9 insertions(+), 6 deletions(-)
> > > > > > > 
> > > > > > > diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
> > > > > > > index c358c31..fbd25b2 100644
> > > > > > > --- a/arch/s390/include/asm/barrier.h
> > > > > > > +++ b/arch/s390/include/asm/barrier.h
> > > > > > > @@ -26,18 +26,21 @@
> > > > > > >  #define wmb()				barrier()
> > > > > > >  #define dma_rmb()			mb()
> > > > > > >  #define dma_wmb()			mb()
> > > > > > > -#define smp_mb()			mb()
> > > > > > > -#define smp_rmb()			rmb()
> > > > > > > -#define smp_wmb()			wmb()
> > > > > > > -
> > > > > > > -#define smp_store_release(p, v)						\
> > > > > > > +#define __smp_mb()			mb()
> > > > > > > +#define __smp_rmb()			rmb()
> > > > > > > +#define __smp_wmb()			wmb()
> > > > > > > +#define smp_mb()			__smp_mb()
> > > > > > > +#define smp_rmb()			__smp_rmb()
> > > > > > > +#define smp_wmb()			__smp_wmb()
> > > > > > 
> > > > > > Why define the smp_*mb() primitives here? Would not the inclusion of
> > > > > > asm-generic/barrier.h do this?
> > > > > 
> > > > > No because the generic one is a nop on !SMP, this one isn't.
> > > > > 
> > > > > Pls note this patch is just reordering code without making
> > > > > functional changes.
> > > > > And at the moment, on s390 smp_xxx barriers are always non empty.
> > > > 
> > > > The s390 kernel is SMP to 99.99%, we just didn't bother with a
> > > > non-smp variant for the memory-barriers. If the generic header
> > > > is used we'd get the non-smp version for free. It will save a
> > > > small amount of text space for CONFIG_SMP=n. 
> > > 
> > > OK, so I'll queue a patch to do this then?
> > 
> > Yes please.
> 
> OK, I'll add a patch on top in v3.

Good, with this addition:

Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

> > > Just to make sure: the question would be, are smp_xxx barriers ever used
> > > in s390 arch specific code to flush in/out memory accesses for
> > > synchronization with the hypervisor?
> > > 
> > > I went over s390 arch code and it seems to me the answer is no
> > > (except of course for virtio).
> > 
> > Correct. Guest to host communication either uses instructions which
> > imply a memory barrier or QDIO which uses atomics.
> 
> And atomics imply a barrier on s390, right?

Yes they do.

> > > But I also see a lot of weirdness on this architecture.
> > 
> > Mostly historical, s390 actually is one of the easiest architectures in
> > regard to memory barriers.
> > 
> > > I found these calls:
> > > 
> > > arch/s390/include/asm/bitops.h: smp_mb__before_atomic();
> > > arch/s390/include/asm/bitops.h: smp_mb();
> > > 
> > > Not used in arch specific code so this is likely OK.
> > 
> > This has been introduced with git commit 5402ea6af11dc5a9, the smp_mb
> > and smp_mb__before_atomic are used in clear_bit_unlock and
> > __clear_bit_unlock which are 1:1 copies from the code in
> > include/asm-generic/bitops/lock.h. Only test_and_set_bit_lock differs
> > from the generic implementation.
> 
> something to keep in mind, but
> I'd rather not touch bitops at the moment - this patchset is already too big.

With the conversion smp_mb__before_atomic to a barrier() it does the
correct thing. I don't think that any chance is necessary.

-- 
blue skies,
   Martin.

"Reality continues to ruin my life." - Calvin.


^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 22/32] s390: define __smp_xxx
@ 2016-01-05 14:21                 ` Martin Schwidefsky
  0 siblings, 0 replies; 572+ messages in thread
From: Martin Schwidefsky @ 2016-01-05 14:21 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Peter Zijlstra, linux-kernel, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Heiko Carstens, Ingo Molnar

On Tue, 5 Jan 2016 15:04:43 +0200
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Tue, Jan 05, 2016 at 01:08:52PM +0100, Martin Schwidefsky wrote:
> > On Tue, 5 Jan 2016 11:30:19 +0200
> > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > 
> > > On Tue, Jan 05, 2016 at 09:13:19AM +0100, Martin Schwidefsky wrote:
> > > > On Mon, 4 Jan 2016 22:18:58 +0200
> > > > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > > > 
> > > > > On Mon, Jan 04, 2016 at 02:45:25PM +0100, Peter Zijlstra wrote:
> > > > > > On Thu, Dec 31, 2015 at 09:08:38PM +0200, Michael S. Tsirkin wrote:
> > > > > > > This defines __smp_xxx barriers for s390,
> > > > > > > for use by virtualization.
> > > > > > > 
> > > > > > > Some smp_xxx barriers are removed as they are
> > > > > > > defined correctly by asm-generic/barriers.h
> > > > > > > 
> > > > > > > Note: smp_mb, smp_rmb and smp_wmb are defined as full barriers
> > > > > > > unconditionally on this architecture.
> > > > > > > 
> > > > > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > > > > > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > > > > > > ---
> > > > > > >  arch/s390/include/asm/barrier.h | 15 +++++++++------
> > > > > > >  1 file changed, 9 insertions(+), 6 deletions(-)
> > > > > > > 
> > > > > > > diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
> > > > > > > index c358c31..fbd25b2 100644
> > > > > > > --- a/arch/s390/include/asm/barrier.h
> > > > > > > +++ b/arch/s390/include/asm/barrier.h
> > > > > > > @@ -26,18 +26,21 @@
> > > > > > >  #define wmb()				barrier()
> > > > > > >  #define dma_rmb()			mb()
> > > > > > >  #define dma_wmb()			mb()
> > > > > > > -#define smp_mb()			mb()
> > > > > > > -#define smp_rmb()			rmb()
> > > > > > > -#define smp_wmb()			wmb()
> > > > > > > -
> > > > > > > -#define smp_store_release(p, v)						\
> > > > > > > +#define __smp_mb()			mb()
> > > > > > > +#define __smp_rmb()			rmb()
> > > > > > > +#define __smp_wmb()			wmb()
> > > > > > > +#define smp_mb()			__smp_mb()
> > > > > > > +#define smp_rmb()			__smp_rmb()
> > > > > > > +#define smp_wmb()			__smp_wmb()
> > > > > > 
> > > > > > Why define the smp_*mb() primitives here? Would not the inclusion of
> > > > > > asm-generic/barrier.h do this?
> > > > > 
> > > > > No because the generic one is a nop on !SMP, this one isn't.
> > > > > 
> > > > > Pls note this patch is just reordering code without making
> > > > > functional changes.
> > > > > And at the moment, on s390 smp_xxx barriers are always non empty.
> > > > 
> > > > The s390 kernel is SMP to 99.99%, we just didn't bother with a
> > > > non-smp variant for the memory-barriers. If the generic header
> > > > is used we'd get the non-smp version for free. It will save a
> > > > small amount of text space for CONFIG_SMP=n. 
> > > 
> > > OK, so I'll queue a patch to do this then?
> > 
> > Yes please.
> 
> OK, I'll add a patch on top in v3.

Good, with this addition:

Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

> > > Just to make sure: the question would be, are smp_xxx barriers ever used
> > > in s390 arch specific code to flush in/out memory accesses for
> > > synchronization with the hypervisor?
> > > 
> > > I went over s390 arch code and it seems to me the answer is no
> > > (except of course for virtio).
> > 
> > Correct. Guest to host communication either uses instructions which
> > imply a memory barrier or QDIO which uses atomics.
> 
> And atomics imply a barrier on s390, right?

Yes they do.

> > > But I also see a lot of weirdness on this architecture.
> > 
> > Mostly historical, s390 actually is one of the easiest architectures in
> > regard to memory barriers.
> > 
> > > I found these calls:
> > > 
> > > arch/s390/include/asm/bitops.h: smp_mb__before_atomic();
> > > arch/s390/include/asm/bitops.h: smp_mb();
> > > 
> > > Not used in arch specific code so this is likely OK.
> > 
> > This has been introduced with git commit 5402ea6af11dc5a9, the smp_mb
> > and smp_mb__before_atomic are used in clear_bit_unlock and
> > __clear_bit_unlock which are 1:1 copies from the code in
> > include/asm-generic/bitops/lock.h. Only test_and_set_bit_lock differs
> > from the generic implementation.
> 
> something to keep in mind, but
> I'd rather not touch bitops at the moment - this patchset is already too big.

With the conversion smp_mb__before_atomic to a barrier() it does the
correct thing. I don't think that any chance is necessary.

-- 
blue skies,
   Martin.

"Reality continues to ruin my life." - Calvin.

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 22/32] s390: define __smp_xxx
  2016-01-05 13:04               ` Michael S. Tsirkin
                                 ` (4 preceding siblings ...)
  (?)
@ 2016-01-05 14:21               ` Martin Schwidefsky
  -1 siblings, 0 replies; 572+ messages in thread
From: Martin Schwidefsky @ 2016-01-05 14:21 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, Heiko Carstens,
	virtualization, H. Peter Anvin, sparclinux, Carsten Otte,
	Ingo Molnar, linux-arch, linux-s390, Davidlohr Bueso,
	Arnd Bergmann, x86, Christian Borntraeger, xen-devel,
	Ingo Molnar, linux-xtensa, user-mode-linux-devel,
	Stefano Stabellini, Andrey Konovalov, adi-buildroot-devel,
	Thomas Gleixner, linux-metag, linux-arm-kern

On Tue, 5 Jan 2016 15:04:43 +0200
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Tue, Jan 05, 2016 at 01:08:52PM +0100, Martin Schwidefsky wrote:
> > On Tue, 5 Jan 2016 11:30:19 +0200
> > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > 
> > > On Tue, Jan 05, 2016 at 09:13:19AM +0100, Martin Schwidefsky wrote:
> > > > On Mon, 4 Jan 2016 22:18:58 +0200
> > > > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > > > 
> > > > > On Mon, Jan 04, 2016 at 02:45:25PM +0100, Peter Zijlstra wrote:
> > > > > > On Thu, Dec 31, 2015 at 09:08:38PM +0200, Michael S. Tsirkin wrote:
> > > > > > > This defines __smp_xxx barriers for s390,
> > > > > > > for use by virtualization.
> > > > > > > 
> > > > > > > Some smp_xxx barriers are removed as they are
> > > > > > > defined correctly by asm-generic/barriers.h
> > > > > > > 
> > > > > > > Note: smp_mb, smp_rmb and smp_wmb are defined as full barriers
> > > > > > > unconditionally on this architecture.
> > > > > > > 
> > > > > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > > > > > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > > > > > > ---
> > > > > > >  arch/s390/include/asm/barrier.h | 15 +++++++++------
> > > > > > >  1 file changed, 9 insertions(+), 6 deletions(-)
> > > > > > > 
> > > > > > > diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
> > > > > > > index c358c31..fbd25b2 100644
> > > > > > > --- a/arch/s390/include/asm/barrier.h
> > > > > > > +++ b/arch/s390/include/asm/barrier.h
> > > > > > > @@ -26,18 +26,21 @@
> > > > > > >  #define wmb()				barrier()
> > > > > > >  #define dma_rmb()			mb()
> > > > > > >  #define dma_wmb()			mb()
> > > > > > > -#define smp_mb()			mb()
> > > > > > > -#define smp_rmb()			rmb()
> > > > > > > -#define smp_wmb()			wmb()
> > > > > > > -
> > > > > > > -#define smp_store_release(p, v)						\
> > > > > > > +#define __smp_mb()			mb()
> > > > > > > +#define __smp_rmb()			rmb()
> > > > > > > +#define __smp_wmb()			wmb()
> > > > > > > +#define smp_mb()			__smp_mb()
> > > > > > > +#define smp_rmb()			__smp_rmb()
> > > > > > > +#define smp_wmb()			__smp_wmb()
> > > > > > 
> > > > > > Why define the smp_*mb() primitives here? Would not the inclusion of
> > > > > > asm-generic/barrier.h do this?
> > > > > 
> > > > > No, because the generic one is a no-op on !SMP; this one isn't.
> > > > > 
> > > > > Pls note this patch is just reordering code without making
> > > > > functional changes.
> > > > > And at the moment, on s390 smp_xxx barriers are always non empty.
> > > > 
> > > > The s390 kernel is SMP to 99.99%, we just didn't bother with a
> > > > non-smp variant for the memory-barriers. If the generic header
> > > > is used we'd get the non-smp version for free. It will save a
> > > > small amount of text space for CONFIG_SMP=n. 
> > > 
> > > OK, so I'll queue a patch to do this then?
> > 
> > Yes please.
> 
> OK, I'll add a patch on top in v3.

Good, with this addition:

Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

> > > Just to make sure: the question would be, are smp_xxx barriers ever used
> > > in s390 arch specific code to flush in/out memory accesses for
> > > synchronization with the hypervisor?
> > > 
> > > I went over s390 arch code and it seems to me the answer is no
> > > (except of course for virtio).
> > 
> > Correct. Guest to host communication either uses instructions which
> > imply a memory barrier or QDIO which uses atomics.
> 
> And atomics imply a barrier on s390, right?

Yes they do.

> > > But I also see a lot of weirdness on this architecture.
> > 
> > Mostly historical, s390 actually is one of the easiest architectures in
> > regard to memory barriers.
> > 
> > > I found these calls:
> > > 
> > > arch/s390/include/asm/bitops.h: smp_mb__before_atomic();
> > > arch/s390/include/asm/bitops.h: smp_mb();
> > > 
> > > Not used in arch specific code so this is likely OK.
> > 
> > This has been introduced with git commit 5402ea6af11dc5a9, the smp_mb
> > and smp_mb__before_atomic are used in clear_bit_unlock and
> > __clear_bit_unlock which are 1:1 copies from the code in
> > include/asm-generic/bitops/lock.h. Only test_and_set_bit_lock differs
> > from the generic implementation.
> 
> something to keep in mind, but
> I'd rather not touch bitops at the moment - this patchset is already too big.

With the conversion of smp_mb__before_atomic to a barrier() it does the
correct thing. I don't think that any change is necessary.

-- 
blue skies,
   Martin.

"Reality continues to ruin my life." - Calvin.

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 17/32] arm: define __smp_xxx
  2016-01-04 13:59             ` Russell King - ARM Linux
  (?)
  (?)
@ 2016-01-05 14:38               ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-05 14:38 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
	H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
	Arnd Bergmann, x86, Tony Lindgren, xen-devel, Ingo Molnar,
	linux-xtensa, user-mode-linux-devel, Stefano Stabellini,
	Andrey Konovalov, adi-buildroot-devel, Thomas Gleixner,
	linux-metag, linux-arm-kernel, Andrew Cooper, linux-kernel,
	linuxppc-dev

On Mon, Jan 04, 2016 at 01:59:34PM +0000, Russell King - ARM Linux wrote:
> On Mon, Jan 04, 2016 at 02:54:20PM +0100, Peter Zijlstra wrote:
> > On Mon, Jan 04, 2016 at 02:36:58PM +0100, Peter Zijlstra wrote:
> > > On Sun, Jan 03, 2016 at 11:12:44AM +0200, Michael S. Tsirkin wrote:
> > > > On Sat, Jan 02, 2016 at 11:24:38AM +0000, Russell King - ARM Linux wrote:
> > > 
> > > > > My only concern is that it gives people an additional handle onto a
> > > > > "new" set of barriers - just because they're prefixed with __*
> > > > > unfortunately doesn't stop anyone from using it (been there with
> > > > > other arch stuff before.)
> > > > > 
> > > > > I wonder whether we should consider making the smp memory barriers
> > > > > inline functions, so these __smp_xxx() variants can be undef'd
> > > > > afterwards, thereby preventing drivers getting their hands on these
> > > > > new macros?
> > > > 
> > > > That'd be tricky to do cleanly since asm-generic depends on
> > > > ifndef to add generic variants where needed.
> > > > 
> > > > But it would be possible to add a checkpatch test for this.
> > > 
> > > Wasn't the whole purpose of these things for 'drivers' (namely
> > > virtio/xen hypervisor interaction) to use these?
> > 
> > Ah, I see, you add virt_*mb() stuff later on for that use case.
> > 
> > So, assuming everybody does include asm-generic/barrier.h, you could
> > simply #undef the __smp version at the end of that, once we've generated
> > all the regular primitives from it, no?
> 
> Not so simple - that's why I mentioned using inline functions.
> 
> The new smp_* _macros_ are:
> 
> +#define smp_mb()       __smp_mb()
> 
> which means if we simply #undef __smp_mb(), smp_mb() then points at
> something which is no longer available, and we'll end up with errors
> saying that __smp_mb() doesn't exist.
> 
> My suggestion was to change:
> 
> #ifndef smp_mb
> #define smp_mb()	__smp_mb()
> #endif
> 
> to:
> 
> #ifndef smp_mb
> static inline void smp_mb(void)
> {
> 	__smp_mb();
> }
> #endif
> 
> which then means __smp_mb() and friends can be #undef'd afterwards.

Absolutely, I got it.
The issue is that e.g. tile has:
#define __smp_mb__after_atomic()        do { } while (0)

and this is cheaper than barrier().

For this reason I left
#define smp_mb__after_atomic()  __smp_mb__after_atomic()
in place there.

Now, of course I can do (in asm-generic):

#ifndef smp_mb__after_atomic
static inline void smp_mb__after_atomic(void)
{
...
}
#endif

but this seems ugly: architectures do defines, generic
version does inline.


And that is not all: APIs like smp_store_mb can take
a variety of types as arguments so they pretty much
must be implemented as macros.

Teaching checkpatch.pl to complain about it seems like the cleanest
approach.

> -- 
> RMK's Patch system: http://www.arm.linux.org.uk/developer/patches/
> FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
> according to speedtest.net.

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 22/32] s390: define __smp_xxx
  2016-01-05  9:30           ` Michael S. Tsirkin
  (?)
  (?)
@ 2016-01-05 15:39               ` Christian Borntraeger
  -1 siblings, 0 replies; 572+ messages in thread
From: Christian Borntraeger @ 2016-01-05 15:39 UTC (permalink / raw)
  To: Michael S. Tsirkin, Martin Schwidefsky
  Cc: Peter Zijlstra, linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	Arnd Bergmann, linux-arch-u79uwXL29TY76Z2rM5mHXA, Andrew Cooper,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	Stefano Stabellini, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	David Miller, linux-ia64-u79uwXL29TY76Z2rM5mHXA,
	linuxppc-dev-uLR06cmDAlY/bJ5BZ2RsiQ,
	linux-s390-u79uwXL29TY76Z2rM5mHXA,
	sparclinux-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	linux-metag-u79uwXL29TY76Z2rM5mHXA,
	linux-mips-6z/3iImG2C8G8FEW9MqTrA, x86-DgEjT+Ai2ygdnm+yROfE0A,
	user-mode-linux-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
	adi-buildroot-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
	linux-sh-u79uwXL29TY76Z2rM5mHXA,
	linux-xtensa-PjhNF2WwrV/0Sa2dR60CXw,
	xen-devel-GuqFBffKawtpuQazS67q72D2FQJk+8+b, Heiko Carstens,
	Ingo Molnar

On 01/05/2016 10:30 AM, Michael S. Tsirkin wrote:

> 
> arch/s390/kernel/vdso.c:        smp_mb();
> 
> Looking at
> 	Author: Christian Borntraeger <borntraeger@de.ibm.com>
> 	Date:   Fri Sep 11 16:23:06 2015 +0200
> 
> 	    s390/vdso: use correct memory barrier
> 
> 	    By definition smp_wmb only orders writes against writes. (Finish all
> 	    previous writes, and do not start any future write). To protect the
> 	    vdso init code against early reads on other CPUs, let's use a full
> 	    smp_mb at the end of vdso init. As right now smp_wmb is implemented
> 	    as full serialization, this needs no stable backport, but this change
> 	    will be necessary if we reimplement smp_wmb.
> 
> ok from hypervisor point of view, but it's also strange:
> 1. why isn't this paired with another mb somewhere?
>    this seems to violate barrier pairing rules.
> 2. how does smp_mb protect against early reads on other CPUs?
>    It normally does not: it orders reads from this CPU versus writes
>    from same CPU. But init code does not appear to read anything.
>    Maybe this is some s390 specific trick?
> 
> I could not figure out the above commit.

It was probably me misreading the code. I changed a wmb into a full mb here
since I was changing the definition of wmb to a compiler barrier. I tried to
fix up all users of wmb that really pair with other code. I assumed that there
must be some reader (as there was a wmb before) but I could not figure out
which. So I just played safe here.

But it probably can be removed.

> arch/s390/kvm/kvm-s390.c:       smp_mb();

This can go. If you have a patch, I can carry that via the kvms390 tree,
or I will spin a new patch with you as suggested-by.

Christian


^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 22/32] s390: define __smp_xxx
  2016-01-05 15:39               ` Christian Borntraeger
  (?)
  (?)
@ 2016-01-05 16:04                 ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-05 16:04 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: Martin Schwidefsky, Peter Zijlstra, linux-kernel, Arnd Bergmann,
	linux-arch, Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Heiko Carstens

On Tue, Jan 05, 2016 at 04:39:37PM +0100, Christian Borntraeger wrote:
> On 01/05/2016 10:30 AM, Michael S. Tsirkin wrote:
> 
> > 
> > arch/s390/kernel/vdso.c:        smp_mb();
> > 
> > Looking at
> > 	Author: Christian Borntraeger <borntraeger@de.ibm.com>
> > 	Date:   Fri Sep 11 16:23:06 2015 +0200
> > 
> > 	    s390/vdso: use correct memory barrier
> > 
> > 	    By definition smp_wmb only orders writes against writes. (Finish all
> > 	    previous writes, and do not start any future write). To protect the
> > 	    vdso init code against early reads on other CPUs, let's use a full
> > 	    smp_mb at the end of vdso init. As right now smp_wmb is implemented
> > 	    as full serialization, this needs no stable backport, but this change
> > 	    will be necessary if we reimplement smp_wmb.
> > 
> > ok from hypervisor point of view, but it's also strange:
> > 1. why isn't this paired with another mb somewhere?
> >    this seems to violate barrier pairing rules.
> > 2. how does smp_mb protect against early reads on other CPUs?
> >    It normally does not: it orders reads from this CPU versus writes
> >    from same CPU. But init code does not appear to read anything.
> >    Maybe this is some s390 specific trick?
> > 
> > I could not figure out the above commit.
> 
> It was probably me misreading the code. I changed a wmb into a full mb here
> since I was changing the definition of wmb to a compiler barrier. I tried to
> fix up all users of wmb that really pair with other code. I assumed that there
> must be some reader (as there was a wmb before) but I could not figure out
> which. So I just played safe here.
> 
> But it probably can be removed.
> 
> > arch/s390/kvm/kvm-s390.c:       smp_mb();
> 
> This can go. If you have a patch, I can carry that via the kvms390 tree,
> or I will spin a new patch with you as suggested-by.
> 
> Christian

I have both, will post shortly.

-- 
MST

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 15/32] powerpc: define __smp_xxx
  2016-01-05  9:53           ` Boqun Feng
  (?)
  (?)
@ 2016-01-05 16:16               ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-05 16:16 UTC (permalink / raw)
  To: Boqun Feng
  Cc: linux-kernel-u79uwXL29TY76Z2rM5mHXA, Peter Zijlstra,
	Arnd Bergmann, linux-arch-u79uwXL29TY76Z2rM5mHXA, Andrew Cooper,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	Stefano Stabellini, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	David Miller, linux-ia64-u79uwXL29TY76Z2rM5mHXA,
	linuxppc-dev-uLR06cmDAlY/bJ5BZ2RsiQ,
	linux-s390-u79uwXL29TY76Z2rM5mHXA,
	sparclinux-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	linux-metag-u79uwXL29TY76Z2rM5mHXA,
	linux-mips-6z/3iImG2C8G8FEW9MqTrA, x86-DgEjT+Ai2ygdnm+yROfE0A,
	user-mode-linux-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
	adi-buildroot-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
	linux-sh-u79uwXL29TY76Z2rM5mHXA,
	linux-xtensa-PjhNF2WwrV/0Sa2dR60CXw,
	xen-devel-GuqFBffKawtpuQazS67q72D2FQJk+8+b,
	Benjamin Herrenschmidt, Paul Mackerras

On Tue, Jan 05, 2016 at 05:53:41PM +0800, Boqun Feng wrote:
> On Tue, Jan 05, 2016 at 10:51:17AM +0200, Michael S. Tsirkin wrote:
> > On Tue, Jan 05, 2016 at 09:36:55AM +0800, Boqun Feng wrote:
> > > Hi Michael,
> > > 
> > > On Thu, Dec 31, 2015 at 09:07:42PM +0200, Michael S. Tsirkin wrote:
> > > > This defines __smp_xxx barriers for powerpc
> > > > for use by virtualization.
> > > > 
> > > > smp_xxx barriers are removed as they are
> > > > defined correctly by asm-generic/barrier.h
> > 
> > I think this is the part that was missed in review.
> > 
> 
> Yes, I realized my mistake after rereading the series. But smp_lwsync() is
> not defined in asm-generic/barrier.h, right?

It isn't, because as far as I could tell it is not used
outside arch/powerpc/include/asm/barrier.h, except by
smp_store_release and smp_load_acquire.

And these are now gone.

Instead there are __smp_store_release and __smp_load_acquire
which call __smp_lwsync.
These are only used for virt and on SMP.
UP variants are generic - they just call barrier().


> > > > This reduces the amount of arch-specific boiler-plate code.
> > > > 
> > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > > > ---
> > > >  arch/powerpc/include/asm/barrier.h | 24 ++++++++----------------
> > > >  1 file changed, 8 insertions(+), 16 deletions(-)
> > > > 
> > > > diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
> > > > index 980ad0c..c0deafc 100644
> > > > --- a/arch/powerpc/include/asm/barrier.h
> > > > +++ b/arch/powerpc/include/asm/barrier.h
> > > > @@ -44,19 +44,11 @@
> > > >  #define dma_rmb()	__lwsync()
> > > >  #define dma_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > > >  
> > > > -#ifdef CONFIG_SMP
> > > > -#define smp_lwsync()	__lwsync()
> > > > +#define __smp_lwsync()	__lwsync()
> > > >  
> > > 
> > > so __smp_lwsync() is always mapped to lwsync, right?
> > 
> > Yes.
> > 
> > > > -#define smp_mb()	mb()
> > > > -#define smp_rmb()	__lwsync()
> > > > -#define smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > > > -#else
> > > > -#define smp_lwsync()	barrier()
> > > > -
> > > > -#define smp_mb()	barrier()
> > > > -#define smp_rmb()	barrier()
> > > > -#define smp_wmb()	barrier()
> > > > -#endif /* CONFIG_SMP */
> > > > +#define __smp_mb()	mb()
> > > > +#define __smp_rmb()	__lwsync()
> > > > +#define __smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > > >  
> > > >  /*
> > > >   * This is a barrier which prevents following instructions from being
> > > > @@ -67,18 +59,18 @@
> > > >  #define data_barrier(x)	\
> > > >  	asm volatile("twi 0,%0,0; isync" : : "r" (x) : "memory");
> > > >  
> > > > -#define smp_store_release(p, v)						\
> > > > +#define __smp_store_release(p, v)						\
> > > >  do {									\
> > > >  	compiletime_assert_atomic_type(*p);				\
> > > > -	smp_lwsync();							\
> > > > +	__smp_lwsync();							\
> > > 
> > > , therefore this will emit an lwsync no matter SMP or UP.
> > 
> > Absolutely. But smp_store_release (without __) will not.
> > 
> > Please note I did test this: for ppc, code before and after
> > this patch generates exactly the same binary on SMP and UP.
> > 
> 
> Yes, you're right, sorry for my mistake...
> 
> > 
> > > Another thing is that smp_lwsync() may have a third user(other than
> > > smp_load_acquire() and smp_store_release()):
> > > 
> > > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> > > 
> > > I'm OK to change my patch accordingly, but do we really want
> > > smp_lwsync() to get involved in this cleanup? If I understand you
> > > correctly, this cleanup focuses on external API like smp_{r,w,}mb(),
> > > while smp_lwsync() is internal to PPC.
> > > 
> > > Regards,
> > > Boqun
> > 
> > I think you missed the leading __ :)
> > 
> 
> What I mean here was smp_lwsync() was originally internal to PPC, but
> never mind ;-)
> 
> > smp_store_release is external and it needs __smp_lwsync as
> > defined here.
> > 
> > I can duplicate some code and have smp_lwsync *not* call __smp_lwsync
> 
> You mean bringing smp_lwsync() back? Because I haven't seen you define it
> in asm-generic/barrier.h in previous patches, and you just delete it in
> this patch.
> 
> > but why do this? Still, if you prefer it this way,
> > please let me know.
> > 
> 
> I think deleting smp_lwsync() is fine, though I need to change atomic
> variants patches on PPC because of it ;-/
> 
> Regards,
> Boqun

Sorry, I don't understand - why do you have to do anything?
I changed all users of smp_lwsync so they
use __smp_lwsync on SMP and barrier() on !SMP.

This is exactly the current behaviour, and I also verified that the
generated code does not change at all.

Is there a patch in your tree that conflicts with this?


> > > >  	WRITE_ONCE(*p, v);						\
> > > >  } while (0)
> > > >  
> > > > -#define smp_load_acquire(p)						\
> > > > +#define __smp_load_acquire(p)						\
> > > >  ({									\
> > > >  	typeof(*p) ___p1 = READ_ONCE(*p);				\
> > > >  	compiletime_assert_atomic_type(*p);				\
> > > > -	smp_lwsync();							\
> > > > +	__smp_lwsync();							\
> > > >  	___p1;								\
> > > >  })
> > > >  
> > > > -- 
> > > > MST
> > > > 
> > > > --
> > > > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> > > > the body of a message to majordomo@vger.kernel.org
> > > > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> > > > Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 15/32] powerpc: define __smp_xxx
@ 2016-01-05 16:16               ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-05 16:16 UTC (permalink / raw)
  To: Boqun Feng
  Cc: linux-kernel, Peter Zijlstra, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Ingo Molnar, Davidlohr Bueso, Andrey Konovalov,
	Paul E. McKenney

On Tue, Jan 05, 2016 at 05:53:41PM +0800, Boqun Feng wrote:
> On Tue, Jan 05, 2016 at 10:51:17AM +0200, Michael S. Tsirkin wrote:
> > On Tue, Jan 05, 2016 at 09:36:55AM +0800, Boqun Feng wrote:
> > > Hi Michael,
> > > 
> > > On Thu, Dec 31, 2015 at 09:07:42PM +0200, Michael S. Tsirkin wrote:
> > > > This defines __smp_xxx barriers for powerpc
> > > > for use by virtualization.
> > > > 
> > > > smp_xxx barriers are removed as they are
> > > > defined correctly by asm-generic/barriers.h
> > 
> > I think this is the part that was missed in review.
> > 
> 
> Yes, I realized my mistake after reread the series. But smp_lwsync() is
> not defined in asm-generic/barriers.h, right?

It isn't because as far as I could tell it is not used
outside arch/powerpc/include/asm/barrier.h
smp_store_release and smp_load_acquire.

And these are now gone.

Instead there are __smp_store_release and __smp_load_acquire
which call __smp_lwsync.
These are only used for virt and on SMP.
UP variants are generic - they just call barrier().


> > > > This reduces the amount of arch-specific boiler-plate code.
> > > > 
> > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > > > ---
> > > >  arch/powerpc/include/asm/barrier.h | 24 ++++++++----------------
> > > >  1 file changed, 8 insertions(+), 16 deletions(-)
> > > > 
> > > > diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
> > > > index 980ad0c..c0deafc 100644
> > > > --- a/arch/powerpc/include/asm/barrier.h
> > > > +++ b/arch/powerpc/include/asm/barrier.h
> > > > @@ -44,19 +44,11 @@
> > > >  #define dma_rmb()	__lwsync()
> > > >  #define dma_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > > >  
> > > > -#ifdef CONFIG_SMP
> > > > -#define smp_lwsync()	__lwsync()
> > > > +#define __smp_lwsync()	__lwsync()
> > > >  
> > > 
> > > so __smp_lwsync() is always mapped to lwsync, right?
> > 
> > Yes.
> > 
> > > > -#define smp_mb()	mb()
> > > > -#define smp_rmb()	__lwsync()
> > > > -#define smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > > > -#else
> > > > -#define smp_lwsync()	barrier()
> > > > -
> > > > -#define smp_mb()	barrier()
> > > > -#define smp_rmb()	barrier()
> > > > -#define smp_wmb()	barrier()
> > > > -#endif /* CONFIG_SMP */
> > > > +#define __smp_mb()	mb()
> > > > +#define __smp_rmb()	__lwsync()
> > > > +#define __smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > > >  
> > > >  /*
> > > >   * This is a barrier which prevents following instructions from being
> > > > @@ -67,18 +59,18 @@
> > > >  #define data_barrier(x)	\
> > > >  	asm volatile("twi 0,%0,0; isync" : : "r" (x) : "memory");
> > > >  
> > > > -#define smp_store_release(p, v)						\
> > > > +#define __smp_store_release(p, v)						\
> > > >  do {									\
> > > >  	compiletime_assert_atomic_type(*p);				\
> > > > -	smp_lwsync();							\
> > > > +	__smp_lwsync();							\
> > > 
> > > , therefore this will emit an lwsync no matter SMP or UP.
> > 
> > Absolutely. But smp_store_release (without __) will not.
> > 
> > Please note I did test this: for ppc code before and after
> > this patch generates exactly the same binary on SMP and UP.
> > 
> 
> Yes, you're right, sorry for my mistake...
> 
> > 
> > > Another thing is that smp_lwsync() may have a third user(other than
> > > smp_load_acquire() and smp_store_release()):
> > > 
> > > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> > > 
> > > I'm OK to change my patch accordingly, but do we really want
> > > smp_lwsync() get involved in this cleanup? If I understand you
> > > correctly, this cleanup focuses on external API like smp_{r,w,}mb(),
> > > while smp_lwsync() is internal to PPC.
> > > 
> > > Regards,
> > > Boqun
> > 
> > I think you missed the leading ___ :)
> > 
> 
> What I mean here was smp_lwsync() was originally internal to PPC, but
> never mind ;-)
> 
> > smp_store_release is external and it needs __smp_lwsync as
> > defined here.
> > 
> > I can duplicate some code and have smp_lwsync *not* call __smp_lwsync
> 
> You mean bringing smp_lwsync() back? because I haven't seen you defining
> in asm-generic/barriers.h in previous patches and you just delete it in
> this patch.
> 
> > but why do this? Still, if you prefer it this way,
> > please let me know.
> > 
> 
> I think deleting smp_lwsync() is fine, though I need to change atomic
> variants patches on PPC because of it ;-/
> 
> Regards,
> Boqun

Sorry, I don't understand - why do you have to do anything?
I changed all users of smp_lwsync so they
use __smp_lwsync on SMP and barrier() on !SMP.

This is exactly the current behaviour, I also tested that
generated code does not change at all.

Is there a patch in your tree that conflicts with this?


> > > >  	WRITE_ONCE(*p, v);						\
> > > >  } while (0)
> > > >  
> > > > -#define smp_load_acquire(p)						\
> > > > +#define __smp_load_acquire(p)						\
> > > >  ({									\
> > > >  	typeof(*p) ___p1 = READ_ONCE(*p);				\
> > > >  	compiletime_assert_atomic_type(*p);				\
> > > > -	smp_lwsync();							\
> > > > +	__smp_lwsync();							\
> > > >  	___p1;								\
> > > >  })
> > > >  
> > > > -- 
> > > > MST
> > > > 
> > > > --
> > > > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> > > > the body of a message to majordomo@vger.kernel.org
> > > > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> > > > Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 15/32] powerpc: define __smp_xxx
@ 2016-01-05 16:16               ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-05 16:16 UTC (permalink / raw)
  To: Boqun Feng
  Cc: linux-kernel-u79uwXL29TY76Z2rM5mHXA, Peter Zijlstra,
	Arnd Bergmann, linux-arch-u79uwXL29TY76Z2rM5mHXA, Andrew Cooper,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	Stefano Stabellini, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	David Miller, linux-ia64-u79uwXL29TY76Z2rM5mHXA,
	linuxppc-dev-uLR06cmDAlY/bJ5BZ2RsiQ,
	linux-s390-u79uwXL29TY76Z2rM5mHXA,
	sparclinux-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	linux-metag-u79uwXL29TY76Z2rM5mHXA,
	linux-mips-6z/3iImG2C8G8FEW9MqTrA, x86-DgEjT+Ai2ygdnm+yROfE0A,
	user-mode-linux-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
	adi-buildroot-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
	linux-sh-u79uwXL29TY76Z2rM5mHXA,
	linux-xtensa-PjhNF2WwrV/0Sa2dR60CXw,
	xen-devel-GuqFBffKawtpuQazS67q72D2FQJk+8+b,
	Benjamin Herrenschmidt, Paul Mackerras

On Tue, Jan 05, 2016 at 05:53:41PM +0800, Boqun Feng wrote:
> On Tue, Jan 05, 2016 at 10:51:17AM +0200, Michael S. Tsirkin wrote:
> > On Tue, Jan 05, 2016 at 09:36:55AM +0800, Boqun Feng wrote:
> > > Hi Michael,
> > > 
> > > On Thu, Dec 31, 2015 at 09:07:42PM +0200, Michael S. Tsirkin wrote:
> > > > This defines __smp_xxx barriers for powerpc
> > > > for use by virtualization.
> > > > 
> > > > smp_xxx barriers are removed as they are
> > > > defined correctly by asm-generic/barriers.h
> > 
> > I think this is the part that was missed in review.
> > 
> 
> Yes, I realized my mistake after reread the series. But smp_lwsync() is
> not defined in asm-generic/barriers.h, right?

It isn't because as far as I could tell it is not used
outside arch/powerpc/include/asm/barrier.h
smp_store_release and smp_load_acquire.

And these are now gone.

Instead there are __smp_store_release and __smp_load_acquire
which call __smp_lwsync.
These are only used for virt and on SMP.
UP variants are generic - they just call barrier().


> > > > This reduces the amount of arch-specific boiler-plate code.
> > > > 
> > > > Signed-off-by: Michael S. Tsirkin <mst-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> > > > Acked-by: Arnd Bergmann <arnd-r2nGTMty4D4@public.gmane.org>
> > > > ---
> > > >  arch/powerpc/include/asm/barrier.h | 24 ++++++++----------------
> > > >  1 file changed, 8 insertions(+), 16 deletions(-)
> > > > 
> > > > diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
> > > > index 980ad0c..c0deafc 100644
> > > > --- a/arch/powerpc/include/asm/barrier.h
> > > > +++ b/arch/powerpc/include/asm/barrier.h
> > > > @@ -44,19 +44,11 @@
> > > >  #define dma_rmb()	__lwsync()
> > > >  #define dma_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > > >  
> > > > -#ifdef CONFIG_SMP
> > > > -#define smp_lwsync()	__lwsync()
> > > > +#define __smp_lwsync()	__lwsync()
> > > >  
> > > 
> > > so __smp_lwsync() is always mapped to lwsync, right?
> > 
> > Yes.
> > 
> > > > -#define smp_mb()	mb()
> > > > -#define smp_rmb()	__lwsync()
> > > > -#define smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > > > -#else
> > > > -#define smp_lwsync()	barrier()
> > > > -
> > > > -#define smp_mb()	barrier()
> > > > -#define smp_rmb()	barrier()
> > > > -#define smp_wmb()	barrier()
> > > > -#endif /* CONFIG_SMP */
> > > > +#define __smp_mb()	mb()
> > > > +#define __smp_rmb()	__lwsync()
> > > > +#define __smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > > >  
> > > >  /*
> > > >   * This is a barrier which prevents following instructions from being
> > > > @@ -67,18 +59,18 @@
> > > >  #define data_barrier(x)	\
> > > >  	asm volatile("twi 0,%0,0; isync" : : "r" (x) : "memory");
> > > >  
> > > > -#define smp_store_release(p, v)						\
> > > > +#define __smp_store_release(p, v)						\
> > > >  do {									\
> > > >  	compiletime_assert_atomic_type(*p);				\
> > > > -	smp_lwsync();							\
> > > > +	__smp_lwsync();							\
> > > 
> > > , therefore this will emit an lwsync no matter SMP or UP.
> > 
> > Absolutely. But smp_store_release (without __) will not.
> > 
> > Please note I did test this: for ppc code before and after
> > this patch generates exactly the same binary on SMP and UP.
> > 
> 
> Yes, you're right, sorry for my mistake...
> 
> > 
> > > Another thing is that smp_lwsync() may have a third user(other than
> > > smp_load_acquire() and smp_store_release()):
> > > 
> > > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> > > 
> > > I'm OK to change my patch accordingly, but do we really want
> > > smp_lwsync() get involved in this cleanup? If I understand you
> > > correctly, this cleanup focuses on external API like smp_{r,w,}mb(),
> > > while smp_lwsync() is internal to PPC.
> > > 
> > > Regards,
> > > Boqun
> > 
> > I think you missed the leading ___ :)
> > 
> 
> What I mean here was smp_lwsync() was originally internal to PPC, but
> never mind ;-)
> 
> > smp_store_release is external and it needs __smp_lwsync as
> > defined here.
> > 
> > I can duplicate some code and have smp_lwsync *not* call __smp_lwsync
> 
> You mean bringing smp_lwsync() back? because I haven't seen you defining
> in asm-generic/barriers.h in previous patches and you just delete it in
> this patch.
> 
> > but why do this? Still, if you prefer it this way,
> > please let me know.
> > 
> 
> I think deleting smp_lwsync() is fine, though I need to change atomic
> variants patches on PPC because of it ;-/
> 
> Regards,
> Boqun

Sorry, I don't understand - why do you have to do anything?
I changed all users of smp_lwsync so they
use __smp_lwsync on SMP and barrier() on !SMP.

This is exactly the current behaviour, I also tested that
generated code does not change at all.

Is there a patch in your tree that conflicts with this?


> > > >  	WRITE_ONCE(*p, v);						\
> > > >  } while (0)
> > > >  
> > > > -#define smp_load_acquire(p)						\
> > > > +#define __smp_load_acquire(p)						\
> > > >  ({									\
> > > >  	typeof(*p) ___p1 = READ_ONCE(*p);				\
> > > >  	compiletime_assert_atomic_type(*p);				\
> > > > -	smp_lwsync();							\
> > > > +	__smp_lwsync();							\
> > > >  	___p1;								\
> > > >  })
> > > >  
> > > > -- 
> > > > MST
> > > > 
> > > > --
> > > > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> > > > the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> > > > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> > > > Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 15/32] powerpc: define __smp_xxx
  2016-01-05  9:53           ` Boqun Feng
                             ` (4 preceding siblings ...)
  (?)
@ 2016-01-05 16:16           ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-05 16:16 UTC (permalink / raw)
  To: Boqun Feng
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra,
	Benjamin Herrenschmidt, virtualization, Paul Mackerras,
	H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
	Davidlohr Bueso, Arnd Bergmann, Michael Ellerman, x86, xen-devel,
	Ingo Molnar, Paul E. McKenney, linux-xtensa,
	user-mode-linux-devel, Stefano Stabellini, Andrey Konovalov,
	adi-buildroot-devel, Thomas Gleixner

On Tue, Jan 05, 2016 at 05:53:41PM +0800, Boqun Feng wrote:
> On Tue, Jan 05, 2016 at 10:51:17AM +0200, Michael S. Tsirkin wrote:
> > On Tue, Jan 05, 2016 at 09:36:55AM +0800, Boqun Feng wrote:
> > > Hi Michael,
> > > 
> > > On Thu, Dec 31, 2015 at 09:07:42PM +0200, Michael S. Tsirkin wrote:
> > > > This defines __smp_xxx barriers for powerpc
> > > > for use by virtualization.
> > > > 
> > > > smp_xxx barriers are removed as they are
> > > > defined correctly by asm-generic/barriers.h
> > 
> > I think this is the part that was missed in review.
> > 
> 
> Yes, I realized my mistake after reread the series. But smp_lwsync() is
> not defined in asm-generic/barriers.h, right?

It isn't because as far as I could tell it is not used
outside arch/powerpc/include/asm/barrier.h
smp_store_release and smp_load_acquire.

And these are now gone.

Instead there are __smp_store_release and __smp_load_acquire
which call __smp_lwsync.
These are only used for virt and on SMP.
UP variants are generic - they just call barrier().


> > > > This reduces the amount of arch-specific boiler-plate code.
> > > > 
> > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > > > ---
> > > >  arch/powerpc/include/asm/barrier.h | 24 ++++++++----------------
> > > >  1 file changed, 8 insertions(+), 16 deletions(-)
> > > > 
> > > > diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
> > > > index 980ad0c..c0deafc 100644
> > > > --- a/arch/powerpc/include/asm/barrier.h
> > > > +++ b/arch/powerpc/include/asm/barrier.h
> > > > @@ -44,19 +44,11 @@
> > > >  #define dma_rmb()	__lwsync()
> > > >  #define dma_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > > >  
> > > > -#ifdef CONFIG_SMP
> > > > -#define smp_lwsync()	__lwsync()
> > > > +#define __smp_lwsync()	__lwsync()
> > > >  
> > > 
> > > so __smp_lwsync() is always mapped to lwsync, right?
> > 
> > Yes.
> > 
> > > > -#define smp_mb()	mb()
> > > > -#define smp_rmb()	__lwsync()
> > > > -#define smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > > > -#else
> > > > -#define smp_lwsync()	barrier()
> > > > -
> > > > -#define smp_mb()	barrier()
> > > > -#define smp_rmb()	barrier()
> > > > -#define smp_wmb()	barrier()
> > > > -#endif /* CONFIG_SMP */
> > > > +#define __smp_mb()	mb()
> > > > +#define __smp_rmb()	__lwsync()
> > > > +#define __smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > > >  
> > > >  /*
> > > >   * This is a barrier which prevents following instructions from being
> > > > @@ -67,18 +59,18 @@
> > > >  #define data_barrier(x)	\
> > > >  	asm volatile("twi 0,%0,0; isync" : : "r" (x) : "memory");
> > > >  
> > > > -#define smp_store_release(p, v)						\
> > > > +#define __smp_store_release(p, v)						\
> > > >  do {									\
> > > >  	compiletime_assert_atomic_type(*p);				\
> > > > -	smp_lwsync();							\
> > > > +	__smp_lwsync();							\
> > > 
> > > , therefore this will emit an lwsync no matter SMP or UP.
> > 
> > Absolutely. But smp_store_release (without __) will not.
> > 
> > Please note I did test this: for ppc code before and after
> > this patch generates exactly the same binary on SMP and UP.
> > 
> 
> Yes, you're right, sorry for my mistake...
> 
> > 
> > > Another thing is that smp_lwsync() may have a third user(other than
> > > smp_load_acquire() and smp_store_release()):
> > > 
> > > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> > > 
> > > I'm OK to change my patch accordingly, but do we really want
> > > smp_lwsync() get involved in this cleanup? If I understand you
> > > correctly, this cleanup focuses on external API like smp_{r,w,}mb(),
> > > while smp_lwsync() is internal to PPC.
> > > 
> > > Regards,
> > > Boqun
> > 
> > I think you missed the leading ___ :)
> > 
> 
> What I mean here was smp_lwsync() was originally internal to PPC, but
> never mind ;-)
> 
> > smp_store_release is external and it needs __smp_lwsync as
> > defined here.
> > 
> > I can duplicate some code and have smp_lwsync *not* call __smp_lwsync
> 
> You mean bringing smp_lwsync() back? because I haven't seen you defining
> in asm-generic/barriers.h in previous patches and you just delete it in
> this patch.
> 
> > but why do this? Still, if you prefer it this way,
> > please let me know.
> > 
> 
> I think deleting smp_lwsync() is fine, though I need to change atomic
> variants patches on PPC because of it ;-/
> 
> Regards,
> Boqun

Sorry, I don't understand - why do you have to do anything?
I changed all users of smp_lwsync so they
use __smp_lwsync on SMP and barrier() on !SMP.

This is exactly the current behaviour, I also tested that
generated code does not change at all.

Is there a patch in your tree that conflicts with this?


> > > >  	WRITE_ONCE(*p, v);						\
> > > >  } while (0)
> > > >  
> > > > -#define smp_load_acquire(p)						\
> > > > +#define __smp_load_acquire(p)						\
> > > >  ({									\
> > > >  	typeof(*p) ___p1 = READ_ONCE(*p);				\
> > > >  	compiletime_assert_atomic_type(*p);				\
> > > > -	smp_lwsync();							\
> > > > +	__smp_lwsync();							\
> > > >  	___p1;								\
> > > >  })
> > > >  
> > > > -- 
> > > > MST
> > > > 
> > > > --
> > > > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> > > > the body of a message to majordomo@vger.kernel.org
> > > > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> > > > Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 572+ messages in thread

* [PATCH v2 15/32] powerpc: define __smp_xxx
@ 2016-01-05 16:16               ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-05 16:16 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Jan 05, 2016 at 05:53:41PM +0800, Boqun Feng wrote:
> On Tue, Jan 05, 2016 at 10:51:17AM +0200, Michael S. Tsirkin wrote:
> > On Tue, Jan 05, 2016 at 09:36:55AM +0800, Boqun Feng wrote:
> > > Hi Michael,
> > > 
> > > On Thu, Dec 31, 2015 at 09:07:42PM +0200, Michael S. Tsirkin wrote:
> > > > This defines __smp_xxx barriers for powerpc
> > > > for use by virtualization.
> > > > 
> > > > smp_xxx barriers are removed as they are
> > > > defined correctly by asm-generic/barriers.h
> > 
> > I think this is the part that was missed in review.
> > 
> 
> Yes, I realized my mistake after reread the series. But smp_lwsync() is
> not defined in asm-generic/barriers.h, right?

It isn't because as far as I could tell it is not used
outside arch/powerpc/include/asm/barrier.h
smp_store_release and smp_load_acquire.

And these are now gone.

Instead there are __smp_store_release and __smp_load_acquire
which call __smp_lwsync.
These are only used for virt and on SMP.
UP variants are generic - they just call barrier().


> > > > This reduces the amount of arch-specific boiler-plate code.
> > > > 
> > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > > > ---
> > > >  arch/powerpc/include/asm/barrier.h | 24 ++++++++----------------
> > > >  1 file changed, 8 insertions(+), 16 deletions(-)
> > > > 
> > > > diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
> > > > index 980ad0c..c0deafc 100644
> > > > --- a/arch/powerpc/include/asm/barrier.h
> > > > +++ b/arch/powerpc/include/asm/barrier.h
> > > > @@ -44,19 +44,11 @@
> > > >  #define dma_rmb()	__lwsync()
> > > >  #define dma_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > > >  
> > > > -#ifdef CONFIG_SMP
> > > > -#define smp_lwsync()	__lwsync()
> > > > +#define __smp_lwsync()	__lwsync()
> > > >  
> > > 
> > > so __smp_lwsync() is always mapped to lwsync, right?
> > 
> > Yes.
> > 
> > > > -#define smp_mb()	mb()
> > > > -#define smp_rmb()	__lwsync()
> > > > -#define smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > > > -#else
> > > > -#define smp_lwsync()	barrier()
> > > > -
> > > > -#define smp_mb()	barrier()
> > > > -#define smp_rmb()	barrier()
> > > > -#define smp_wmb()	barrier()
> > > > -#endif /* CONFIG_SMP */
> > > > +#define __smp_mb()	mb()
> > > > +#define __smp_rmb()	__lwsync()
> > > > +#define __smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > > >  
> > > >  /*
> > > >   * This is a barrier which prevents following instructions from being
> > > > @@ -67,18 +59,18 @@
> > > >  #define data_barrier(x)	\
> > > >  	asm volatile("twi 0,%0,0; isync" : : "r" (x) : "memory");
> > > >  
> > > > -#define smp_store_release(p, v)						\
> > > > +#define __smp_store_release(p, v)						\
> > > >  do {									\
> > > >  	compiletime_assert_atomic_type(*p);				\
> > > > -	smp_lwsync();							\
> > > > +	__smp_lwsync();							\
> > > 
> > > , therefore this will emit an lwsync no matter SMP or UP.
> > 
> > Absolutely. But smp_store_release (without __) will not.
> > 
> > Please note I did test this: for ppc code before and after
> > this patch generates exactly the same binary on SMP and UP.
> > 
> 
> Yes, you're right, sorry for my mistake...
> 
> > 
> > > Another thing is that smp_lwsync() may have a third user(other than
> > > smp_load_acquire() and smp_store_release()):
> > > 
> > > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> > > 
> > > I'm OK to change my patch accordingly, but do we really want
> > > smp_lwsync() get involved in this cleanup? If I understand you
> > > correctly, this cleanup focuses on external API like smp_{r,w,}mb(),
> > > while smp_lwsync() is internal to PPC.
> > > 
> > > Regards,
> > > Boqun
> > 
> > I think you missed the leading ___ :)
> > 
> 
> What I mean here was smp_lwsync() was originally internal to PPC, but
> never mind ;-)
> 
> > smp_store_release is external and it needs __smp_lwsync as
> > defined here.
> > 
> > I can duplicate some code and have smp_lwsync *not* call __smp_lwsync
> 
> You mean bringing smp_lwsync() back? because I haven't seen you defining
> in asm-generic/barriers.h in previous patches and you just delete it in
> this patch.
> 
> > but why do this? Still, if you prefer it this way,
> > please let me know.
> > 
> 
> I think deleting smp_lwsync() is fine, though I need to change atomic
> variants patches on PPC because of it ;-/
> 
> Regards,
> Boqun

Sorry, I don't understand - why do you have to do anything?
I changed all users of smp_lwsync so they
use __smp_lwsync on SMP and barrier() on !SMP.

This is exactly the current behaviour, I also tested that
generated code does not change at all.

Is there a patch in your tree that conflicts with this?


> > > >  	WRITE_ONCE(*p, v);						\
> > > >  } while (0)
> > > >  
> > > > -#define smp_load_acquire(p)						\
> > > > +#define __smp_load_acquire(p)						\
> > > >  ({									\
> > > >  	typeof(*p) ___p1 = READ_ONCE(*p);				\
> > > >  	compiletime_assert_atomic_type(*p);				\
> > > > -	smp_lwsync();							\
> > > > +	__smp_lwsync();							\
> > > >  	___p1;								\
> > > >  })
> > > >  
> > > > -- 
> > > > MST
> > > > 
> > > > --
> > > > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> > > > the body of a message to majordomo at vger.kernel.org
> > > > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> > > > Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 15/32] powerpc: define __smp_xxx
  2016-01-05  9:53           ` Boqun Feng
                             ` (2 preceding siblings ...)
  (?)
@ 2016-01-05 16:16           ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-05 16:16 UTC (permalink / raw)
  To: Boqun Feng
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra,
	Benjamin Herrenschmidt, virtualization, Paul Mackerras,
	H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
	Davidlohr Bueso, Arnd Bergmann, Michael Ellerman, x86, xen-devel,
	Ingo Molnar, Paul E. McKenney, linux-xtensa,
	user-mode-linux-devel, Stefano Stabellini, Andrey Konovalov,
	adi-buildroot-devel, Thomas Gleixner

On Tue, Jan 05, 2016 at 05:53:41PM +0800, Boqun Feng wrote:
> On Tue, Jan 05, 2016 at 10:51:17AM +0200, Michael S. Tsirkin wrote:
> > On Tue, Jan 05, 2016 at 09:36:55AM +0800, Boqun Feng wrote:
> > > Hi Michael,
> > > 
> > > On Thu, Dec 31, 2015 at 09:07:42PM +0200, Michael S. Tsirkin wrote:
> > > > This defines __smp_xxx barriers for powerpc
> > > > for use by virtualization.
> > > > 
> > > > smp_xxx barriers are removed as they are
> > > > defined correctly by asm-generic/barriers.h
> > 
> > I think this is the part that was missed in review.
> > 
> 
> Yes, I realized my mistake after reread the series. But smp_lwsync() is
> not defined in asm-generic/barriers.h, right?

It isn't because as far as I could tell it is not used
outside arch/powerpc/include/asm/barrier.h
smp_store_release and smp_load_acquire.

And these are now gone.

Instead there are __smp_store_release and __smp_load_acquire
which call __smp_lwsync.
These are only used for virt and on SMP.
UP variants are generic - they just call barrier().


> > > > This reduces the amount of arch-specific boilerplate code.
> > > > 
> > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > > > ---
> > > >  arch/powerpc/include/asm/barrier.h | 24 ++++++++----------------
> > > >  1 file changed, 8 insertions(+), 16 deletions(-)
> > > > 
> > > > diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
> > > > index 980ad0c..c0deafc 100644
> > > > --- a/arch/powerpc/include/asm/barrier.h
> > > > +++ b/arch/powerpc/include/asm/barrier.h
> > > > @@ -44,19 +44,11 @@
> > > >  #define dma_rmb()	__lwsync()
> > > >  #define dma_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > > >  
> > > > -#ifdef CONFIG_SMP
> > > > -#define smp_lwsync()	__lwsync()
> > > > +#define __smp_lwsync()	__lwsync()
> > > >  
> > > 
> > > so __smp_lwsync() is always mapped to lwsync, right?
> > 
> > Yes.
> > 
> > > > -#define smp_mb()	mb()
> > > > -#define smp_rmb()	__lwsync()
> > > > -#define smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > > > -#else
> > > > -#define smp_lwsync()	barrier()
> > > > -
> > > > -#define smp_mb()	barrier()
> > > > -#define smp_rmb()	barrier()
> > > > -#define smp_wmb()	barrier()
> > > > -#endif /* CONFIG_SMP */
> > > > +#define __smp_mb()	mb()
> > > > +#define __smp_rmb()	__lwsync()
> > > > +#define __smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > > >  
> > > >  /*
> > > >   * This is a barrier which prevents following instructions from being
> > > > @@ -67,18 +59,18 @@
> > > >  #define data_barrier(x)	\
> > > >  	asm volatile("twi 0,%0,0; isync" : : "r" (x) : "memory");
> > > >  
> > > > -#define smp_store_release(p, v)						\
> > > > +#define __smp_store_release(p, v)						\
> > > >  do {									\
> > > >  	compiletime_assert_atomic_type(*p);				\
> > > > -	smp_lwsync();							\
> > > > +	__smp_lwsync();							\
> > > 
> > > , therefore this will emit an lwsync regardless of SMP or UP.
> > 
> > Absolutely. But smp_store_release (without __) will not.
> > 
> > Please note I did test this: on ppc, the code before and after
> > this patch generates exactly the same binary on SMP and UP.
> > 
> 
> Yes, you're right, sorry for my mistake...
> 
> > 
> > > Another thing is that smp_lwsync() may have a third user (other than
> > > smp_load_acquire() and smp_store_release()):
> > > 
> > > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> > > 
> > > I'm OK to change my patch accordingly, but do we really want
> > > smp_lwsync() to get involved in this cleanup? If I understand you
> > > correctly, this cleanup focuses on external APIs like smp_{r,w,}mb(),
> > > while smp_lwsync() is internal to PPC.
> > > 
> > > Regards,
> > > Boqun
> > 
> > I think you missed the leading __ :)
> > 
> 
> What I mean here was smp_lwsync() was originally internal to PPC, but
> never mind ;-)
> 
> > smp_store_release is external and it needs __smp_lwsync as
> > defined here.
> > 
> > I can duplicate some code and have smp_lwsync *not* call __smp_lwsync
> 
> You mean bringing smp_lwsync() back? Because I haven't seen you define it
> in asm-generic/barrier.h in previous patches, and you just delete it in
> this patch.
> 
> > but why do this? Still, if you prefer it this way,
> > please let me know.
> > 
> 
> I think deleting smp_lwsync() is fine, though I need to change atomic
> variants patches on PPC because of it ;-/
> 
> Regards,
> Boqun

Sorry, I don't understand - why do you have to do anything?
I changed all users of smp_lwsync so they
use __smp_lwsync on SMP and barrier() on !SMP.

This is exactly the current behaviour; I also tested that the
generated code does not change at all.

Is there a patch in your tree that conflicts with this?


> > > >  	WRITE_ONCE(*p, v);						\
> > > >  } while (0)
> > > >  
> > > > -#define smp_load_acquire(p)						\
> > > > +#define __smp_load_acquire(p)						\
> > > >  ({									\
> > > >  	typeof(*p) ___p1 = READ_ONCE(*p);				\
> > > >  	compiletime_assert_atomic_type(*p);				\
> > > > -	smp_lwsync();							\
> > > > +	__smp_lwsync();							\
> > > >  	___p1;								\
> > > >  })
> > > >  
> > > > -- 
> > > > MST
> > > > 

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 12/32] x86/um: reuse asm-generic/barrier.h
@ 2016-01-05 23:12         ` Richard Weinberger
  0 siblings, 0 replies; 572+ messages in thread
From: Richard Weinberger @ 2016-01-05 23:12 UTC (permalink / raw)
  To: Michael S. Tsirkin, linux-kernel
  Cc: Peter Zijlstra, Arnd Bergmann, linux-arch, Andrew Cooper,
	virtualization, Stefano Stabellini, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, David Miller, linux-ia64, linuxppc-dev,
	linux-s390, sparclinux, linux-arm-kernel, linux-metag,
	linux-mips, x86, user-mode-linux-devel, adi-buildroot-devel,
	linux-sh, linux-xtensa, xen-devel, Jeff Dike, Ingo Molnar,
	Borislav Petkov, Andy Lutomirski, user-mode-linux-user

Am 31.12.2015 um 20:07 schrieb Michael S. Tsirkin:
> On x86/um CONFIG_SMP is never defined.  As a result, several macros
> match the asm-generic variant exactly. Drop the local definitions and
> pull in asm-generic/barrier.h instead.
> 
> This is in preparation for refactoring this code area.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Acked-by: Arnd Bergmann <arnd@arndb.de>

Acked-by: Richard Weinberger <richard@nod.at>

Thanks,
//richard

^ permalink raw reply	[flat|nested] 572+ messages in thread


* Re: [PATCH v2 31/32] sh: support a 2-byte smp_store_mb
  2015-12-31 19:09     ` Michael S. Tsirkin
@ 2016-01-05 23:27       ` Rich Felker
  -1 siblings, 0 replies; 572+ messages in thread
From: Rich Felker @ 2016-01-05 23:27 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: linux-kernel, linux-sh

On Thu, Dec 31, 2015 at 09:09:47PM +0200, Michael S. Tsirkin wrote:
> At the moment, xchg on sh only supports 4 and 1 byte values, so using it
> from smp_store_mb means attempts to store a 2 byte value using this
> macro fail.
> 
> And that happens to be exactly what virtio drivers want to do.
> 
> Check size and fall back to a slower, but safe, WRITE_ONCE+smp_mb.

Can you please do this for size 1 as well (i.e. all sizes != 4)? If
you check the source, the code for size-1 xchg in sh cmpxchg-llsc.h is
completely wrong and operates on a 32-bit object at the address passed
to it. This code is presently unused anyway and I plan to submit a
patch to remove the size 1 case.

Rich

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 15/32] powerpc: define __smp_xxx
  2016-01-05 16:16               ` Michael S. Tsirkin
  (?)
  (?)
@ 2016-01-06  1:51                 ` Boqun Feng
  -1 siblings, 0 replies; 572+ messages in thread
From: Boqun Feng @ 2016-01-06  1:51 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-kernel, Peter Zijlstra, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Benjamin Herrenschmidt, Paul Mackerras

On Tue, Jan 05, 2016 at 06:16:48PM +0200, Michael S. Tsirkin wrote:
[snip]
> > > > Another thing is that smp_lwsync() may have a third user (other than
> > > > smp_load_acquire() and smp_store_release()):
> > > > 
> > > > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> > > > 
> > > > I'm OK to change my patch accordingly, but do we really want
> > > > smp_lwsync() to get involved in this cleanup? If I understand you
> > > > correctly, this cleanup focuses on external APIs like smp_{r,w,}mb(),
> > > > while smp_lwsync() is internal to PPC.
> > > > 
> > > > Regards,
> > > > Boqun
> > > 
> > > I think you missed the leading __ :)
> > > 
> > 
> > What I mean here was smp_lwsync() was originally internal to PPC, but
> > never mind ;-)
> > 
> > > smp_store_release is external and it needs __smp_lwsync as
> > > defined here.
> > > 
> > > I can duplicate some code and have smp_lwsync *not* call __smp_lwsync
> > 
> > You mean bringing smp_lwsync() back? Because I haven't seen you define it
> > in asm-generic/barrier.h in previous patches, and you just delete it in
> > this patch.
> > 
> > > but why do this? Still, if you prefer it this way,
> > > please let me know.
> > > 
> > 
> > I think deleting smp_lwsync() is fine, though I need to change atomic
> > variants patches on PPC because of it ;-/
> > 
> > Regards,
> > Boqun
> 
> Sorry, I don't understand - why do you have to do anything?
> I changed all users of smp_lwsync so they
> use __smp_lwsync on SMP and barrier() on !SMP.
> 
> This is exactly the current behaviour; I also tested that the
> generated code does not change at all.
> 
> Is there a patch in your tree that conflicts with this?
> 

Because in a patchset which implements atomic relaxed/acquire/release
variants on PPC I use smp_lwsync(), which gives it another user;
please see this mail:

http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877

in the definition of PPC's __atomic_op_release().


But I think removing smp_lwsync() is a good idea; actually, I think we can
go further and remove __smp_lwsync() too, letting __smp_load_acquire and
__smp_store_release call __lwsync() directly. But that is another thing.

Anyway, I will modify my patch.

Regards,
Boqun

> 
> > > > >  	WRITE_ONCE(*p, v);						\
> > > > >  } while (0)
> > > > >  
> > > > > -#define smp_load_acquire(p)						\
> > > > > +#define __smp_load_acquire(p)						\
> > > > >  ({									\
> > > > >  	typeof(*p) ___p1 = READ_ONCE(*p);				\
> > > > >  	compiletime_assert_atomic_type(*p);				\
> > > > > -	smp_lwsync();							\
> > > > > +	__smp_lwsync();							\
> > > > >  	___p1;								\
> > > > >  })
> > > > >  
> > > > > -- 
> > > > > MST
> > > > > 

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 15/32] powerpc: define __smp_xxx
@ 2016-01-06  1:51                 ` Boqun Feng
  0 siblings, 0 replies; 572+ messages in thread
From: Boqun Feng @ 2016-01-06  1:51 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-kernel, Peter Zijlstra, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Ingo Molnar, Davidlohr Bueso, Andrey Konovalov,
	Paul E. McKenney

On Tue, Jan 05, 2016 at 06:16:48PM +0200, Michael S. Tsirkin wrote:
[snip]
> > > > Another thing is that smp_lwsync() may have a third user(other than
> > > > smp_load_acquire() and smp_store_release()):
> > > > 
> > > > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> > > > 
> > > > I'm OK to change my patch accordingly, but do we really want
> > > > smp_lwsync() get involved in this cleanup? If I understand you
> > > > correctly, this cleanup focuses on external API like smp_{r,w,}mb(),
> > > > while smp_lwsync() is internal to PPC.
> > > > 
> > > > Regards,
> > > > Boqun
> > > 
> > > I think you missed the leading ___ :)
> > > 
> > 
> > What I mean here was smp_lwsync() was originally internal to PPC, but
> > never mind ;-)
> > 
> > > smp_store_release is external and it needs __smp_lwsync as
> > > defined here.
> > > 
> > > I can duplicate some code and have smp_lwsync *not* call __smp_lwsync
> > 
> > You mean bringing smp_lwsync() back? because I haven't seen you defining
> > in asm-generic/barriers.h in previous patches and you just delete it in
> > this patch.
> > 
> > > but why do this? Still, if you prefer it this way,
> > > please let me know.
> > > 
> > 
> > I think deleting smp_lwsync() is fine, though I need to change atomic
> > variants patches on PPC because of it ;-/
> > 
> > Regards,
> > Boqun
> 
> Sorry, I don't understand - why do you have to do anything?
> I changed all users of smp_lwsync so they
> use __smp_lwsync on SMP and barrier() on !SMP.
> 
> This is exactly the current behaviour, I also tested that
> generated code does not change at all.
> 
> Is there a patch in your tree that conflicts with this?
> 

Because in a patchset which implements atomic relaxed/acquire/release
variants on PPC I use smp_lwsync(), this makes it have another user,
please see this mail:

http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877

in definition of PPC's __atomic_op_release().


But I think removing smp_lwsync() is a good idea and actually I think we
can go further to remove __smp_lwsync() and let __smp_load_acquire and
__smp_store_release call __lwsync() directly, but that is another thing.

Anyway, I will modify my patch.

Regards,
Boqun

> 
> > > > >  	WRITE_ONCE(*p, v);						\
> > > > >  } while (0)
> > > > >  
> > > > > -#define smp_load_acquire(p)						\
> > > > > +#define __smp_load_acquire(p)						\
> > > > >  ({									\
> > > > >  	typeof(*p) ___p1 = READ_ONCE(*p);				\
> > > > >  	compiletime_assert_atomic_type(*p);				\
> > > > > -	smp_lwsync();							\
> > > > > +	__smp_lwsync();							\
> > > > >  	___p1;								\
> > > > >  })
> > > > >  
> > > > > -- 
> > > > > MST
> > > > > 
> > > > > --
> > > > > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> > > > > the body of a message to majordomo@vger.kernel.org
> > > > > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> > > > > Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 15/32] powerpc: define __smp_xxx
@ 2016-01-06  1:51                 ` Boqun Feng
  0 siblings, 0 replies; 572+ messages in thread
From: Boqun Feng @ 2016-01-06  1:51 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-kernel, Peter Zijlstra, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Benjamin Herrenschmidt, Paul Mackerras

On Tue, Jan 05, 2016 at 06:16:48PM +0200, Michael S. Tsirkin wrote:
[snip]
> > > > Another thing is that smp_lwsync() may have a third user(other than
> > > > smp_load_acquire() and smp_store_release()):
> > > > 
> > > > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> > > > 
> > > > I'm OK to change my patch accordingly, but do we really want
> > > > smp_lwsync() get involved in this cleanup? If I understand you
> > > > correctly, this cleanup focuses on external API like smp_{r,w,}mb(),
> > > > while smp_lwsync() is internal to PPC.
> > > > 
> > > > Regards,
> > > > Boqun
> > > 
> > > I think you missed the leading ___ :)
> > > 
> > 
> > What I mean here was smp_lwsync() was originally internal to PPC, but
> > never mind ;-)
> > 
> > > smp_store_release is external and it needs __smp_lwsync as
> > > defined here.
> > > 
> > > I can duplicate some code and have smp_lwsync *not* call __smp_lwsync
> > 
> > You mean bringing smp_lwsync() back? because I haven't seen you defining
> > in asm-generic/barriers.h in previous patches and you just delete it in
> > this patch.
> > 
> > > but why do this? Still, if you prefer it this way,
> > > please let me know.
> > > 
> > 
> > I think deleting smp_lwsync() is fine, though I need to change atomic
> > variants patches on PPC because of it ;-/
> > 
> > Regards,
> > Boqun
> 
> Sorry, I don't understand - why do you have to do anything?
> I changed all users of smp_lwsync so they
> use __smp_lwsync on SMP and barrier() on !SMP.
> 
> This is exactly the current behaviour, I also tested that
> generated code does not change at all.
> 
> Is there a patch in your tree that conflicts with this?
> 

Because in a patchset which implements atomic relaxed/acquire/release
variants on PPC I use smp_lwsync(), this makes it have another user,
please see this mail:

http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877

in definition of PPC's __atomic_op_release().


But I think removing smp_lwsync() is a good idea and actually I think we
can go further to remove __smp_lwsync() and let __smp_load_acquire and
__smp_store_release call __lwsync() directly, but that is another thing.

Anyway, I will modify my patch.

Regards,
Boqun

> 
> > > > >  	WRITE_ONCE(*p, v);						\
> > > > >  } while (0)
> > > > >  
> > > > > -#define smp_load_acquire(p)						\
> > > > > +#define __smp_load_acquire(p)						\
> > > > >  ({									\
> > > > >  	typeof(*p) ___p1 = READ_ONCE(*p);				\
> > > > >  	compiletime_assert_atomic_type(*p);				\
> > > > > -	smp_lwsync();							\
> > > > > +	__smp_lwsync();							\
> > > > >  	___p1;								\
> > > > >  })
> > > > >  
> > > > > -- 
> > > > > MST
> > > > > 
> > > > > --
> > > > > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> > > > > the body of a message to majordomo@vger.kernel.org
> > > > > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> > > > > Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 15/32] powerpc: define __smp_xxx
  2016-01-05 16:16               ` Michael S. Tsirkin
                                 ` (4 preceding siblings ...)
  (?)
@ 2016-01-06  1:51               ` Boqun Feng
  -1 siblings, 0 replies; 572+ messages in thread
From: Boqun Feng @ 2016-01-06  1:51 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra,
	Benjamin Herrenschmidt, virtualization, Paul Mackerras,
	H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
	Davidlohr Bueso, Arnd Bergmann, Michael Ellerman, x86, xen-devel,
	Ingo Molnar, Paul E. McKenney, linux-xtensa,
	user-mode-linux-devel, Stefano Stabellini, Andrey Konovalov,
	adi-buildroot-devel, Thomas Gleixner

On Tue, Jan 05, 2016 at 06:16:48PM +0200, Michael S. Tsirkin wrote:
[snip]
> > > > Another thing is that smp_lwsync() may have a third user(other than
> > > > smp_load_acquire() and smp_store_release()):
> > > > 
> > > > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> > > > 
> > > > I'm OK to change my patch accordingly, but do we really want
> > > > smp_lwsync() get involved in this cleanup? If I understand you
> > > > correctly, this cleanup focuses on external API like smp_{r,w,}mb(),
> > > > while smp_lwsync() is internal to PPC.
> > > > 
> > > > Regards,
> > > > Boqun
> > > 
> > > I think you missed the leading ___ :)
> > > 
> > 
> > What I mean here was smp_lwsync() was originally internal to PPC, but
> > never mind ;-)
> > 
> > > smp_store_release is external and it needs __smp_lwsync as
> > > defined here.
> > > 
> > > I can duplicate some code and have smp_lwsync *not* call __smp_lwsync
> > 
> > You mean bringing smp_lwsync() back? because I haven't seen you defining
> > in asm-generic/barriers.h in previous patches and you just delete it in
> > this patch.
> > 
> > > but why do this? Still, if you prefer it this way,
> > > please let me know.
> > > 
> > 
> > I think deleting smp_lwsync() is fine, though I need to change atomic
> > variants patches on PPC because of it ;-/
> > 
> > Regards,
> > Boqun
> 
> Sorry, I don't understand - why do you have to do anything?
> I changed all users of smp_lwsync so they
> use __smp_lwsync on SMP and barrier() on !SMP.
> 
> This is exactly the current behaviour, I also tested that
> generated code does not change at all.
> 
> Is there a patch in your tree that conflicts with this?
> 

Because in a patchset which implements atomic relaxed/acquire/release
variants on PPC I use smp_lwsync(), this makes it have another user,
please see this mail:

http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877

in definition of PPC's __atomic_op_release().


But I think removing smp_lwsync() is a good idea and actually I think we
can go further to remove __smp_lwsync() and let __smp_load_acquire and
__smp_store_release call __lwsync() directly, but that is another thing.

Anyway, I will modify my patch.

Regards,
Boqun

> 
> > > > >  	WRITE_ONCE(*p, v);						\
> > > > >  } while (0)
> > > > >  
> > > > > -#define smp_load_acquire(p)						\
> > > > > +#define __smp_load_acquire(p)						\
> > > > >  ({									\
> > > > >  	typeof(*p) ___p1 = READ_ONCE(*p);				\
> > > > >  	compiletime_assert_atomic_type(*p);				\
> > > > > -	smp_lwsync();							\
> > > > > +	__smp_lwsync();							\
> > > > >  	___p1;								\
> > > > >  })
> > > > >  
> > > > > -- 
> > > > > MST
> > > > > 
> > > > > --
> > > > > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> > > > > the body of a message to majordomo@vger.kernel.org
> > > > > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> > > > > Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 572+ messages in thread

* [PATCH v2 15/32] powerpc: define __smp_xxx
@ 2016-01-06  1:51                 ` Boqun Feng
  0 siblings, 0 replies; 572+ messages in thread
From: Boqun Feng @ 2016-01-06  1:51 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Jan 05, 2016 at 06:16:48PM +0200, Michael S. Tsirkin wrote:
[snip]
> > > > Another thing is that smp_lwsync() may have a third user(other than
> > > > smp_load_acquire() and smp_store_release()):
> > > > 
> > > > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> > > > 
> > > > I'm OK to change my patch accordingly, but do we really want
> > > > smp_lwsync() get involved in this cleanup? If I understand you
> > > > correctly, this cleanup focuses on external API like smp_{r,w,}mb(),
> > > > while smp_lwsync() is internal to PPC.
> > > > 
> > > > Regards,
> > > > Boqun
> > > 
> > > I think you missed the leading ___ :)
> > > 
> > 
> > What I mean here was smp_lwsync() was originally internal to PPC, but
> > never mind ;-)
> > 
> > > smp_store_release is external and it needs __smp_lwsync as
> > > defined here.
> > > 
> > > I can duplicate some code and have smp_lwsync *not* call __smp_lwsync
> > 
> > You mean bringing smp_lwsync() back? because I haven't seen you defining
> > in asm-generic/barriers.h in previous patches and you just delete it in
> > this patch.
> > 
> > > but why do this? Still, if you prefer it this way,
> > > please let me know.
> > > 
> > 
> > I think deleting smp_lwsync() is fine, though I need to change atomic
> > variants patches on PPC because of it ;-/
> > 
> > Regards,
> > Boqun
> 
> Sorry, I don't understand - why do you have to do anything?
> I changed all users of smp_lwsync so they
> use __smp_lwsync on SMP and barrier() on !SMP.
> 
> This is exactly the current behaviour, I also tested that
> generated code does not change at all.
> 
> Is there a patch in your tree that conflicts with this?
> 

Because in a patchset which implements atomic relaxed/acquire/release
variants on PPC I use smp_lwsync(), this makes it have another user,
please see this mail:

http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877

in definition of PPC's __atomic_op_release().


But I think removing smp_lwsync() is a good idea and actually I think we
can go further to remove __smp_lwsync() and let __smp_load_acquire and
__smp_store_release call __lwsync() directly, but that is another thing.

Anyway, I will modify my patch.

Regards,
Boqun

> 
> > > > >  	WRITE_ONCE(*p, v);						\
> > > > >  } while (0)
> > > > >  
> > > > > -#define smp_load_acquire(p)						\
> > > > > +#define __smp_load_acquire(p)						\
> > > > >  ({									\
> > > > >  	typeof(*p) ___p1 = READ_ONCE(*p);				\
> > > > >  	compiletime_assert_atomic_type(*p);				\
> > > > > -	smp_lwsync();							\
> > > > > +	__smp_lwsync();							\
> > > > >  	___p1;								\
> > > > >  })
> > > > >  
> > > > > -- 
> > > > > MST
> > > > > 
> > > > > --
> > > > > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> > > > > the body of a message to majordomo@vger.kernel.org
> > > > > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> > > > > Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 31/32] sh: support a 2-byte smp_store_mb
  2016-01-05 23:27       ` Rich Felker
@ 2016-01-06 11:19         ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-06 11:19 UTC (permalink / raw)
  To: Rich Felker; +Cc: linux-kernel, linux-sh, peterz

On Tue, Jan 05, 2016 at 06:27:35PM -0500, Rich Felker wrote:
> On Thu, Dec 31, 2015 at 09:09:47PM +0200, Michael S. Tsirkin wrote:
> > At the moment, xchg on sh only supports 4 and 1 byte values, so using it
> > from smp_store_mb means attempts to store a 2 byte value using this
> > macro fail.
> > 
> > And happens to be exactly what virtio drivers want to do.
> > 
> > Check size and fall back to a slower, but safe, WRITE_ONCE+smp_mb.
> 
> Can you please do this for size 1 as well (i.e. all sizes != 4)? If
> you check the source, the code for size-1 xchg in sh cmpxchg-llsc.h is
> completely wrong and operates on a 32-bit object at the address passed
> to it. This code is presently unused anyway and I plan to submit a
> patch to remove the size 1 case.
> 
> Rich

Ouch. And PeterZ says I should write a 2-byte xchg in asm instead,
and Fedora can't even build a full kernel for this arch at the moment :(

Peter, what do you think? How about I leave this patch as is for now?

-- 
MST

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 31/32] sh: support a 2-byte smp_store_mb
  2016-01-06 11:19         ` Michael S. Tsirkin
@ 2016-01-06 11:40           ` Peter Zijlstra
  -1 siblings, 0 replies; 572+ messages in thread
From: Peter Zijlstra @ 2016-01-06 11:40 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: Rich Felker, linux-kernel, linux-sh

On Wed, Jan 06, 2016 at 01:19:44PM +0200, Michael S. Tsirkin wrote:
> On Tue, Jan 05, 2016 at 06:27:35PM -0500, Rich Felker wrote:
> > On Thu, Dec 31, 2015 at 09:09:47PM +0200, Michael S. Tsirkin wrote:
> > > At the moment, xchg on sh only supports 4 and 1 byte values, so using it
> > > from smp_store_mb means attempts to store a 2 byte value using this
> > > macro fail.
> > > 
> > > And happens to be exactly what virtio drivers want to do.
> > > 
> > > Check size and fall back to a slower, but safe, WRITE_ONCE+smp_mb.
> > 
> > Can you please do this for size 1 as well (i.e. all sizes != 4)? If
> > you check the source, the code for size-1 xchg in sh cmpxchg-llsc.h is
> > completely wrong and operates on a 32-bit object at the address passed
> > to it. This code is presently unused anyway and I plan to submit a
> > patch to remove the size 1 case.
> > 
> > Rich
> 
> Ouch. And PeterZ says I should write a 2-byte xchg in asm instead,
> and Fedora can't even build a full kernel for this arch at the moment :(

Does the kernel.org hosted cross compiler work?

> Peter, what do you think? How about I leave this patch as is for now?

No, and I object to removing the single byte implementation too. Either
remove the full arch or fix xchg() to conform. xchg() should work on all
native word sizes, for SH that would be 1,2 and 4 bytes.

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 31/32] sh: support a 2-byte smp_store_mb
  2016-01-06 11:40           ` Peter Zijlstra
@ 2016-01-06 11:52             ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-06 11:52 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Rich Felker, linux-kernel, linux-sh

On Wed, Jan 06, 2016 at 12:40:23PM +0100, Peter Zijlstra wrote:
> On Wed, Jan 06, 2016 at 01:19:44PM +0200, Michael S. Tsirkin wrote:
> > On Tue, Jan 05, 2016 at 06:27:35PM -0500, Rich Felker wrote:
> > > On Thu, Dec 31, 2015 at 09:09:47PM +0200, Michael S. Tsirkin wrote:
> > > > At the moment, xchg on sh only supports 4 and 1 byte values, so using it
> > > > from smp_store_mb means attempts to store a 2 byte value using this
> > > > macro fail.
> > > > 
> > > > And happens to be exactly what virtio drivers want to do.
> > > > 
> > > > Check size and fall back to a slower, but safe, WRITE_ONCE+smp_mb.
> > > 
> > > Can you please do this for size 1 as well (i.e. all sizes != 4)? If
> > > you check the source, the code for size-1 xchg in sh cmpxchg-llsc.h is
> > > completely wrong and operates on a 32-bit object at the address passed
> > > to it. This code is presently unused anyway and I plan to submit a
> > > patch to remove the size 1 case.
> > > 
> > > Rich
> > 
> > Ouch. And PeterZ says I should write a 2-byte xchg in asm instead,
> > and Fedora can't even build a full kernel for this arch at the moment :(
> 
> Does the kernel.org hosted cross compiler work?

I'll test, thanks for the hint.

> > Peter, what do you think? How about I leave this patch as is for now?
> 
> No, and I object to removing the single byte implementation too. Either
> remove the full arch or fix xchg() to conform. xchg() should work on all
> native word sizes, for SH that would be 1,2 and 4 bytes.

Rich, maybe you could explain how the current 1-byte xchg on llsc is wrong?

It does use 4 byte accesses but IIUC that is all that exists on
this architecture.


-- 
MST

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 31/32] sh: support a 2-byte smp_store_mb
  2016-01-06 11:52             ` Michael S. Tsirkin
@ 2016-01-06 14:32               ` Peter Zijlstra
  -1 siblings, 0 replies; 572+ messages in thread
From: Peter Zijlstra @ 2016-01-06 14:32 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Rich Felker, linux-kernel, linux-sh, Rob Landley, Jeff Dionne,
	Yoshinori Sato

On Wed, Jan 06, 2016 at 01:52:17PM +0200, Michael S. Tsirkin wrote:
> > > Peter, what do you think? How about I leave this patch as is for now?
> > 
> > No, and I object to removing the single byte implementation too. Either
> > remove the full arch or fix xchg() to conform. xchg() should work on all
> > native word sizes, for SH that would be 1,2 and 4 bytes.
> 
> Rich, maybe you could explain how the current 1-byte xchg on llsc is wrong?

It doesn't seem to preserve the 3 other bytes in the word.

> It does use 4 byte accesses but IIUC that is all that exists on
> this architecture.

Right, that's not a problem, look at arch/alpha/include/asm/xchg.h for
example. A store to another portion of the word should make the
store-conditional fail and we'll retry the loop.

The short versions should however preserve the other bytes in the word.

SH's cmpxchg() is equally incomplete and does not provide 1 and 2 byte
versions.

In any case, I'm all for rm -rf arch/sh/, one less arch to worry about
is always good, but ISTR some people wanting to resurrect SH:

  http://old.lwn.net/Articles/647636/

Rob, Jeff, Sato-san, might I suggest you send a MAINTAINERS patch and
take up an active interest in SH lest someone 'accidentally' nukes it?

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 31/32] sh: support a 2-byte smp_store_mb
  2016-01-06 14:32               ` Peter Zijlstra
@ 2016-01-06 15:42                 ` Rob Landley
  -1 siblings, 0 replies; 572+ messages in thread
From: Rob Landley @ 2016-01-06 15:42 UTC (permalink / raw)
  To: Peter Zijlstra, Michael S. Tsirkin
  Cc: Rich Felker, linux-kernel, linux-sh, Jeff Dionne, Yoshinori Sato



On 01/06/2016 08:32 AM, Peter Zijlstra wrote:
> On Wed, Jan 06, 2016 at 01:52:17PM +0200, Michael S. Tsirkin wrote:
> SH's cmpxchg() is equally incomplete and does not provide 1 and 2 byte
> versions.

We added a new cmpxchg() in j-core (smp on sh2 was not previously a
thing), but still need to support the old ones.

> In any case, I'm all for rm -rf arch/sh/, one less arch to worry about
> is always good, but ISTR some people wanting to resurrect SH:
> 
>   http://old.lwn.net/Articles/647636/

I note that old architectures in general become interesting again as
their patents expire and they're available for reimplementation as open
hardware. This tends to be about when the original manufacturer loses
interest, and thus people try to delete existing (working) project
support before the clones can get up to speed.

(I would have thought the presence of working QEMU support would tide us
over by providing an easy basic regression testing environment, but people
keep insisting that's not real and doesn't count. But if we can keep it
99% working until the sh4 patents expire later this year, we can add mmu
and have full sh4 in hardware again with BSD VHDL.)

> Rob, Jeff, Sato-san, might I suggest you send a MAINTAINERS patch and
> take up an active interest in SH lest someone 'accidentally' nukes it?

We have an active interest, we just didn't think anybody would want a
MAINTAINERS patch until we had skin in the game, and as you saw from our
previous patch the kernel code wasn't remotely acceptable upstream yet.
(Rich has been redoing it as device tree.)

We've been talking about this offline. (The superh list is cc'd on a
bunch of Renesas arm drivers because of Renesas, so it has a high
noise-to-signal ratio.)

Sato-san agreed to co-maintain but needs somebody else to take point. (I
similarly want to assist but don't have the expertise to be the main
guy.) That leaves Rich, who is ok with doing it but was trying to finish
the device tree port of the jcore board first.

That said, if you'd ack a submission, Rich already has my Acked-by line
on a maintainers patch (AND one to remove the extra cc's from the sh
kernel list, and I acked Chen Gang's syscall addition patch back in
https://lkml.org/lkml/2015/6/20/193 but nobody noticed...)

Rich? Maintain please.

Rob

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 31/32] sh: support a 2-byte smp_store_mb
  2016-01-06 15:42                 ` Rob Landley
@ 2016-01-06 16:57                   ` Peter Zijlstra
  -1 siblings, 0 replies; 572+ messages in thread
From: Peter Zijlstra @ 2016-01-06 16:57 UTC (permalink / raw)
  To: Rob Landley
  Cc: Michael S. Tsirkin, Rich Felker, linux-kernel, linux-sh,
	Jeff Dionne, Yoshinori Sato

On Wed, Jan 06, 2016 at 09:42:35AM -0600, Rob Landley wrote:
> (I would have thought the presence of working QEMU support would tide us
> over providing an easy basic regression testing environment, but people
> keep insisting that's not real and doesn't count. But if we can keep it
> 99% working until the sh4 patents expire later this year, we can add mmu
> and have full sh4 in hardware again with BSD VHDL.)

I didn't know there was a 'working' qemu for SH.

My personal 'complaint' with SH is its lack of maintainer feedback. I do
full arch sweeps on a semi-regular basis, and while I know in very broad
terms how a fair number of archs work, it's impossible to know everything
about all 25+ we support.



^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 31/32] sh: support a 2-byte smp_store_mb
  2016-01-06 14:32               ` Peter Zijlstra
@ 2016-01-06 18:23                 ` Rich Felker
  -1 siblings, 0 replies; 572+ messages in thread
From: Rich Felker @ 2016-01-06 18:23 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Michael S. Tsirkin, linux-kernel, linux-sh, Rob Landley,
	Jeff Dionne, Yoshinori Sato

On Wed, Jan 06, 2016 at 03:32:18PM +0100, Peter Zijlstra wrote:
> On Wed, Jan 06, 2016 at 01:52:17PM +0200, Michael S. Tsirkin wrote:
> > > > Peter, what do you think? How about I leave this patch as is for now?
> > > 
> > > No, and I object to removing the single byte implementation too. Either
> > > remove the full arch or fix xchg() to conform. xchg() should work on all
> > > native word sizes, for SH that would be 1,2 and 4 bytes.
> > 
> > Rich, maybe you could explain how the current 1-byte xchg on llsc is wrong?
> 
> It doesn't seem to preserve the 3 other bytes in the word.
> 
> > It does use 4 byte accesses but IIUC that is all that exists on
> > this architecture.
> 
> Right, that's not a problem, look at arch/alpha/include/asm/xchg.h for
> example. A store to another portion of the word should make the
> store-conditional fail and we'll retry the loop.
> 
> The short versions should however preserve the other bytes in the word.

Indeed. Also, accesses must be aligned, so the asm needs to round down
to an aligned address and perform a correct read-modify-write on it,
placing the new byte at the correct offset in the word.

Alternatively (my preference) this logic can be implemented in C as a
wrapper around the 32-bit cmpxchg. I think this is less error-prone,
and it can be shared between the multiple sh cmpxchg back-ends,
including the new cas.l one we need for J2.

> SH's cmpxchg() is equally incomplete and does not provide 1 and 2 byte
> versions.
> 
> In any case, I'm all for rm -rf arch/sh/, one less arch to worry about
> is always good, but ISTR some people wanting to resurrect SH:
> 
>   http://old.lwn.net/Articles/647636/
> 
> Rob, Jeff, Sato-san, might I suggest you send a MAINTAINERS patch and
> take up an active interest in SH lest someone 'accidentally' nukes it?

We're in the process of preparing such a proposal right now. The
current intent is that Sato-san and I will co-maintain arch/sh. We'll
include more details about motivation, proposed development direction,
existing work to be merged, etc. in that proposal.

Rich

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 31/32] sh: support a 2-byte smp_store_mb
  2016-01-06 15:42                 ` Rob Landley
@ 2016-01-06 18:57                   ` Geert Uytterhoeven
  -1 siblings, 0 replies; 572+ messages in thread
From: Geert Uytterhoeven @ 2016-01-06 18:57 UTC (permalink / raw)
  To: Rob Landley
  Cc: Peter Zijlstra, Michael S. Tsirkin, Rich Felker, linux-kernel,
	Linux-sh list, Jeff Dionne, Yoshinori Sato

On Wed, Jan 6, 2016 at 4:42 PM, Rob Landley <rob@landley.net> wrote:
> That said, if you'd ack a submission, Rich already has my Acked-by line
> on a maintainers patch (AND one to remove the extra cc's from the sh
> kernel list, and I acked Chen Gang's syscall addition patch back in
> https://lkml.org/lkml/2015/6/20/193 but nobody noticed...)

If you want patches for orphan architectures to be picked up, you should
CC Andrew Morton...

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 31/32] sh: support a 2-byte smp_store_mb
  2016-01-06 16:57                   ` Peter Zijlstra
@ 2016-01-06 20:21                     ` Rob Landley
  -1 siblings, 0 replies; 572+ messages in thread
From: Rob Landley @ 2016-01-06 20:21 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Michael S. Tsirkin, Rich Felker, linux-kernel, linux-sh,
	Jeff Dionne, Yoshinori Sato



On 01/06/2016 10:57 AM, Peter Zijlstra wrote:
> On Wed, Jan 06, 2016 at 09:42:35AM -0600, Rob Landley wrote:
>> (I would have thought the presence of working QEMU support would tide us
>> over by providing an easy basic regression testing environment, but people
>> keep insisting that's not real and doesn't count. But if we can keep it
>> 99% working until the sh4 patents expire later this year, we can add mmu
>> and have full sh4 in hardware again with BSD VHDL.)
> 
> I didn't know there was a 'working' qemu for SH.

Yes, for several years now?

  https://lists.gnu.org/archive/html/qemu-devel/2010-03/msg00976.html

I try to build bootable images with each new kernel, although I'm a few
versions behind at the moment (this is 4.1 I think?):

  wget http://landley.net/aboriginal/bin/system-image-sh4.tar.gz
  tar xvzf system-image-sh4.tar.gz
  cd system-image-sh4
  ./run-emulator.sh

There's an sh2eb one in there too, but you need a $50 FPGA board to run
it (Numato Mimas v2, setup walkthrough is at http://nommu.org/jcore).

I keep meaning to poke at qemu and get their r2d board emulation to give
me more than 64 megs of memory so I can do native compiles. (I have a
native toolchain but building much more than "hello world" requires
setting up a swap file because the board emulation only gives me one
virtual disk. None of the other architectures need that...)

I'd _also_ like to get proper sh2 support into qemu (sh2 code runs under
sh4 but still), but the sh4 patents expire later this year and sometime
after that we want to add an MMU to the VHDL, so...

(We still want the nommu version because hard realtime is actually
easier to verify without page faults, and the big product needs
nanosecond accurate timestamps on stuff...)

> My personal 'complaint' with SH is its lack of maintainer feedback. I do
> full arch sweeps on a semi-regular basis, and while I know in very broad
> terms how a fair number of archs work, it's impossible to know everything
> about all 25+ we support.

We (the j-core guys) have wanted to take over arch/sh maintainership for
a while, we've just been trying to get the board we're working on in
position to be upstreamed first. The feedback on my craptacular first
effort to chip off a chunk that other people could at least reproduce
against a then-current kernel was "ew" and "redo it all as device tree".
So we went away again to work on that...

Meanwhile all $DAYJOB's in-house resources (at se-instruments.com) have
been tied up making SMP work for a product. (Yes, sh2 SMP. A nommu SMP
system. There were some teething troubles, but it's working now. Alas,
not on the above $50 FPGA, because that's only got an LX9 FPGA, of which
one SoC instance uses about 2/3 of the gates. We're doing the SMP stuff
on LX45 boards, which are crazy expensive.)

I note I just sent a Numato board to the buildroot maintainer so I can
walk him through adding jcore support to buildroot. And I got toybox
working nommu, and Rich added sh2 support to musl-libc... There has been
activity on this arch, just not on this list due to the noise from a
bunch of different Renesas arm systems and whatever "arm/shmobile" is.
("Not superh or jcore compatible", that's all I know...)

Rob

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 31/32] sh: support a 2-byte smp_store_mb
  2016-01-06 18:23                 ` Rich Felker
@ 2016-01-06 20:23                   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-06 20:23 UTC (permalink / raw)
  To: Rich Felker
  Cc: Peter Zijlstra, linux-kernel, linux-sh, Rob Landley, Jeff Dionne,
	Yoshinori Sato

On Wed, Jan 06, 2016 at 01:23:50PM -0500, Rich Felker wrote:
> On Wed, Jan 06, 2016 at 03:32:18PM +0100, Peter Zijlstra wrote:
> > On Wed, Jan 06, 2016 at 01:52:17PM +0200, Michael S. Tsirkin wrote:
> > > > > Peter, what do you think? How about I leave this patch as is for now?
> > > > 
> > > > No, and I object to removing the single byte implementation too. Either
> > > > remove the full arch or fix xchg() to conform. xchg() should work on all
> > > > native word sizes, for SH that would be 1,2 and 4 bytes.
> > > 
> > > Rich, maybe you could explain how the current 1-byte xchg on llsc is wrong?
> > 
> > It doesn't seem to preserve the 3 other bytes in the word.
> > 
> > > It does use 4 byte accesses but IIUC that is all that exists on
> > > this architecture.
> > 
> > Right, that's not a problem, look at arch/alpha/include/asm/xchg.h for
> > example. A store to another portion of the word should make the
> > store-conditional fail and we'll retry the loop.
> > 
> > The short versions should however preserve the other bytes in the word.
> 
> Indeed. Also, accesses must be aligned, so the asm needs to round down
> to an aligned address and perform a correct read-modify-write on it,
> placing the new byte in the correct offset in the word.
> 
> Alternatively (my preference) this logic can be implemented in C as a
> wrapper around the 32-bit cmpxchg. I think this is less error-prone
> and it can be shared between the multiple sh cmpxchg back-ends,
> including the new cas.l one we need for J2.
> 
> > SH's cmpxchg() is equally incomplete and does not provide 1 and 2 byte
> > versions.
> > 
> > In any case, I'm all for rm -rf arch/sh/, one less arch to worry about
> > is always good, but ISTR some people wanting to resurrect SH:
> > 
> >   http://old.lwn.net/Articles/647636/
> > 
> > Rob, Jeff, Sato-san, might I suggest you send a MAINTAINERS patch and
> > take up an active interest in SH lest someone 'accidentally' nukes it?
> 
> We're in the process of preparing such a proposal right now. The
> current intent is that Sato-san and I will co-maintain arch/sh. We'll
> include more details about motivation, proposed development direction,
> existing work to be merged, etc. in that proposal.
> 
> Rich

Well I'd like to be able to make progress with generic
arch cleanups meanwhile.

Could you quickly write versions of the 1- and 2-byte xchg that work
so I can include them?

Thanks!
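
A minimal C sketch of what such a wrapper could look like, built around a
32-bit compare-and-swap as Rich suggests above. This is illustrative only,
not the actual arch/sh code: the arch-level cmpxchg is stood in for by a
GCC/Clang builtin, and the byte-lane arithmetic assumes a little-endian
layout (a big-endian port would flip the shift).

```c
#include <stdint.h>

/* Hypothetical 1-byte xchg built on a 32-bit CAS. A store to any other
 * byte of the word between the load and the CAS makes the CAS fail and
 * the loop retry, so the other three bytes are preserved. */
static uint8_t xchg_u8(volatile uint8_t *p, uint8_t val)
{
	/* Round down to the enclosing aligned 32-bit word. */
	volatile uint32_t *wp =
		(volatile uint32_t *)((uintptr_t)p & ~(uintptr_t)3);
	unsigned int shift = ((uintptr_t)p & 3) * 8;	/* little-endian lane */
	uint32_t mask = (uint32_t)0xff << shift;
	uint32_t old, new;

	do {
		old = *wp;
		/* Keep the other three bytes, replace only ours. */
		new = (old & ~mask) | ((uint32_t)val << shift);
	} while (__sync_val_compare_and_swap(wp, old, new) != old);

	return (uint8_t)((old & mask) >> shift);
}
```

A 2-byte variant is the same pattern with a 0xffff mask on a 2-byte-aligned
offset.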

-- 
MST

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 15/32] powerpc: define __smp_xxx
  2016-01-06  1:51                 ` Boqun Feng
@ 2016-01-06 20:23                     ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-06 20:23 UTC (permalink / raw)
  To: Boqun Feng
  Cc: linux-kernel, Peter Zijlstra, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Benjamin Herrenschmidt, Paul Mackerras

On Wed, Jan 06, 2016 at 09:51:52AM +0800, Boqun Feng wrote:
> On Tue, Jan 05, 2016 at 06:16:48PM +0200, Michael S. Tsirkin wrote:
> [snip]
> > > > > Another thing is that smp_lwsync() may have a third user (other than
> > > > > smp_load_acquire() and smp_store_release()):
> > > > > 
> > > > > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> > > > > 
> > > > > I'm OK to change my patch accordingly, but do we really want
> > > > > smp_lwsync() to get involved in this cleanup? If I understand you
> > > > > correctly, this cleanup focuses on external API like smp_{r,w,}mb(),
> > > > > while smp_lwsync() is internal to PPC.
> > > > > 
> > > > > Regards,
> > > > > Boqun
> > > > 
> > > > I think you missed the leading ___ :)
> > > > 
> > > 
> > > What I mean here was smp_lwsync() was originally internal to PPC, but
> > > never mind ;-)
> > > 
> > > > smp_store_release is external and it needs __smp_lwsync as
> > > > defined here.
> > > > 
> > > > I can duplicate some code and have smp_lwsync *not* call __smp_lwsync
> > > 
> > > You mean bringing smp_lwsync() back? Because I haven't seen you
> > > define it in asm-generic/barrier.h in previous patches, and you just
> > > delete it in this patch.
> > > 
> > > > but why do this? Still, if you prefer it this way,
> > > > please let me know.
> > > > 
> > > 
> > > I think deleting smp_lwsync() is fine, though I need to change atomic
> > > variants patches on PPC because of it ;-/
> > > 
> > > Regards,
> > > Boqun
> > 
> > Sorry, I don't understand - why do you have to do anything?
> > I changed all users of smp_lwsync so they
> > use __smp_lwsync on SMP and barrier() on !SMP.
> > 
> > This is exactly the current behaviour, I also tested that
> > generated code does not change at all.
> > 
> > Is there a patch in your tree that conflicts with this?
> > 
> 
> Because a patchset which implements atomic relaxed/acquire/release
> variants on PPC uses smp_lwsync(), that gives it another user;
> please see this mail:
> 
> http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> 
> in definition of PPC's __atomic_op_release().
> 
> 
> But I think removing smp_lwsync() is a good idea and actually I think we
> can go further to remove __smp_lwsync() and let __smp_load_acquire and
> __smp_store_release call __lwsync() directly, but that is another thing.
> 
> Anyway, I will modify my patch.
> 
> Regards,
> Boqun


Thanks!
Could you send an ack then please?

> > 
> > > > > >  	WRITE_ONCE(*p, v);						\
> > > > > >  } while (0)
> > > > > >  
> > > > > > -#define smp_load_acquire(p)						\
> > > > > > +#define __smp_load_acquire(p)						\
> > > > > >  ({									\
> > > > > >  	typeof(*p) ___p1 = READ_ONCE(*p);				\
> > > > > >  	compiletime_assert_atomic_type(*p);				\
> > > > > > -	smp_lwsync();							\
> > > > > > +	__smp_lwsync();							\
> > > > > >  	___p1;								\
> > > > > >  })
> > > > > >  
> > > > > > -- 
> > > > > > MST
> > > > > > 
> > > > > > --
> > > > > > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> > > > > > the body of a message to majordomo@vger.kernel.org
> > > > > > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> > > > > > Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 15/32] powerpc: define __smp_xxx
@ 2016-01-06 20:23                     ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-06 20:23 UTC (permalink / raw)
  To: Boqun Feng
  Cc: linux-kernel, Peter Zijlstra, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Ingo Molnar, Davidlohr Bueso, Andrey Konovalov,
	Paul E. McKenney

On Wed, Jan 06, 2016 at 09:51:52AM +0800, Boqun Feng wrote:
> On Tue, Jan 05, 2016 at 06:16:48PM +0200, Michael S. Tsirkin wrote:
> [snip]
> > > > > Another thing is that smp_lwsync() may have a third user(other than
> > > > > smp_load_acquire() and smp_store_release()):
> > > > > 
> > > > > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> > > > > 
> > > > > I'm OK to change my patch accordingly, but do we really want
> > > > > smp_lwsync() get involved in this cleanup? If I understand you
> > > > > correctly, this cleanup focuses on external API like smp_{r,w,}mb(),
> > > > > while smp_lwsync() is internal to PPC.
> > > > > 
> > > > > Regards,
> > > > > Boqun
> > > > 
> > > > I think you missed the leading ___ :)
> > > > 
> > > 
> > > What I mean here was smp_lwsync() was originally internal to PPC, but
> > > never mind ;-)
> > > 
> > > > smp_store_release is external and it needs __smp_lwsync as
> > > > defined here.
> > > > 
> > > > I can duplicate some code and have smp_lwsync *not* call __smp_lwsync
> > > 
> > > You mean bringing smp_lwsync() back? because I haven't seen you defining
> > > in asm-generic/barriers.h in previous patches and you just delete it in
> > > this patch.
> > > 
> > > > but why do this? Still, if you prefer it this way,
> > > > please let me know.
> > > > 
> > > 
> > > I think deleting smp_lwsync() is fine, though I need to change atomic
> > > variants patches on PPC because of it ;-/
> > > 
> > > Regards,
> > > Boqun
> > 
> > Sorry, I don't understand - why do you have to do anything?
> > I changed all users of smp_lwsync so they
> > use __smp_lwsync on SMP and barrier() on !SMP.
> > 
> > This is exactly the current behaviour, I also tested that
> > generated code does not change at all.
> > 
> > Is there a patch in your tree that conflicts with this?
> > 
> 
> Because in a patchset which implements atomic relaxed/acquire/release
> variants on PPC I use smp_lwsync(), this makes it have another user,
> please see this mail:
> 
> http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> 
> in definition of PPC's __atomic_op_release().
> 
> 
> But I think removing smp_lwsync() is a good idea and actually I think we
> can go further to remove __smp_lwsync() and let __smp_load_acquire and
> __smp_store_release call __lwsync() directly, but that is another thing.
> 
> Anyway, I will modify my patch.
> 
> Regards,
> Boqun


Thanks!
Could you send an ack then please?

> > 
> > > > > >  	WRITE_ONCE(*p, v);						\
> > > > > >  } while (0)
> > > > > >  
> > > > > > -#define smp_load_acquire(p)						\
> > > > > > +#define __smp_load_acquire(p)						\
> > > > > >  ({									\
> > > > > >  	typeof(*p) ___p1 = READ_ONCE(*p);				\
> > > > > >  	compiletime_assert_atomic_type(*p);				\
> > > > > > -	smp_lwsync();							\
> > > > > > +	__smp_lwsync();							\
> > > > > >  	___p1;								\
> > > > > >  })
> > > > > >  
> > > > > > -- 
> > > > > > MST
> > > > > > 
> > > > > > --
> > > > > > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> > > > > > the body of a message to majordomo@vger.kernel.org
> > > > > > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> > > > > > Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 15/32] powerpc: define __smp_xxx
@ 2016-01-06 20:23                     ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-06 20:23 UTC (permalink / raw)
  To: Boqun Feng
  Cc: linux-kernel-u79uwXL29TY76Z2rM5mHXA, Peter Zijlstra,
	Arnd Bergmann, linux-arch-u79uwXL29TY76Z2rM5mHXA, Andrew Cooper,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	Stefano Stabellini, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	David Miller, linux-ia64-u79uwXL29TY76Z2rM5mHXA,
	linuxppc-dev-uLR06cmDAlY/bJ5BZ2RsiQ,
	linux-s390-u79uwXL29TY76Z2rM5mHXA,
	sparclinux-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	linux-metag-u79uwXL29TY76Z2rM5mHXA,
	linux-mips-6z/3iImG2C8G8FEW9MqTrA, x86-DgEjT+Ai2ygdnm+yROfE0A,
	user-mode-linux-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
	adi-buildroot-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
	linux-sh-u79uwXL29TY76Z2rM5mHXA,
	linux-xtensa-PjhNF2WwrV/0Sa2dR60CXw,
	xen-devel-GuqFBffKawtpuQazS67q72D2FQJk+8+b,
	Benjamin Herrenschmidt, Paul Mackerras

On Wed, Jan 06, 2016 at 09:51:52AM +0800, Boqun Feng wrote:
> On Tue, Jan 05, 2016 at 06:16:48PM +0200, Michael S. Tsirkin wrote:
> [snip]
> > > > > Another thing is that smp_lwsync() may have a third user(other than
> > > > > smp_load_acquire() and smp_store_release()):
> > > > > 
> > > > > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> > > > > 
> > > > > I'm OK to change my patch accordingly, but do we really want
> > > > > smp_lwsync() get involved in this cleanup? If I understand you
> > > > > correctly, this cleanup focuses on external API like smp_{r,w,}mb(),
> > > > > while smp_lwsync() is internal to PPC.
> > > > > 
> > > > > Regards,
> > > > > Boqun
> > > > 
> > > > I think you missed the leading ___ :)
> > > > 
> > > 
> > > What I mean here was smp_lwsync() was originally internal to PPC, but
> > > never mind ;-)
> > > 
> > > > smp_store_release is external and it needs __smp_lwsync as
> > > > defined here.
> > > > 
> > > > I can duplicate some code and have smp_lwsync *not* call __smp_lwsync
> > > 
> > > You mean bringing smp_lwsync() back? because I haven't seen you defining
> > > in asm-generic/barriers.h in previous patches and you just delete it in
> > > this patch.
> > > 
> > > > but why do this? Still, if you prefer it this way,
> > > > please let me know.
> > > > 
> > > 
> > > I think deleting smp_lwsync() is fine, though I need to change atomic
> > > variants patches on PPC because of it ;-/
> > > 
> > > Regards,
> > > Boqun
> > 
> > Sorry, I don't understand - why do you have to do anything?
> > I changed all users of smp_lwsync so they
> > use __smp_lwsync on SMP and barrier() on !SMP.
> > 
> > This is exactly the current behaviour, I also tested that
> > generated code does not change at all.
> > 
> > Is there a patch in your tree that conflicts with this?
> > 
> 
> Because in a patchset which implements atomic relaxed/acquire/release
> variants on PPC I use smp_lwsync(), this makes it have another user,
> please see this mail:
> 
> http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> 
> in definition of PPC's __atomic_op_release().
> 
> 
> But I think removing smp_lwsync() is a good idea and actually I think we
> can go further to remove __smp_lwsync() and let __smp_load_acquire and
> __smp_store_release call __lwsync() directly, but that is another thing.
> 
> Anyway, I will modify my patch.
> 
> Regards,
> Boqun


Thanks!
Could you send an ack then please?

> > 
> > > > > >  	WRITE_ONCE(*p, v);						\
> > > > > >  } while (0)
> > > > > >  
> > > > > > -#define smp_load_acquire(p)						\
> > > > > > +#define __smp_load_acquire(p)						\
> > > > > >  ({									\
> > > > > >  	typeof(*p) ___p1 = READ_ONCE(*p);				\
> > > > > >  	compiletime_assert_atomic_type(*p);				\
> > > > > > -	smp_lwsync();							\
> > > > > > +	__smp_lwsync();							\
> > > > > >  	___p1;								\
> > > > > >  })
> > > > > >  
> > > > > > -- 
> > > > > > MST
> > > > > > 
> > > > > > --
> > > > > > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> > > > > > the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> > > > > > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> > > > > > Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 15/32] powerpc: define __smp_xxx
  2016-01-06  1:51                 ` Boqun Feng
                                   ` (2 preceding siblings ...)
  (?)
@ 2016-01-06 20:23                 ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-06 20:23 UTC (permalink / raw)
  To: Boqun Feng
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra,
	Benjamin Herrenschmidt, virtualization, Paul Mackerras,
	H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
	Davidlohr Bueso, Arnd Bergmann, Michael Ellerman, x86, xen-devel,
	Ingo Molnar, Paul E. McKenney, linux-xtensa,
	user-mode-linux-devel, Stefano Stabellini, Andrey Konovalov,
	adi-buildroot-devel, Thomas Gleixner

On Wed, Jan 06, 2016 at 09:51:52AM +0800, Boqun Feng wrote:
> On Tue, Jan 05, 2016 at 06:16:48PM +0200, Michael S. Tsirkin wrote:
> [snip]
> > > > > Another thing is that smp_lwsync() may have a third user(other than
> > > > > smp_load_acquire() and smp_store_release()):
> > > > > 
> > > > > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> > > > > 
> > > > > I'm OK to change my patch accordingly, but do we really want
> > > > > smp_lwsync() get involved in this cleanup? If I understand you
> > > > > correctly, this cleanup focuses on external API like smp_{r,w,}mb(),
> > > > > while smp_lwsync() is internal to PPC.
> > > > > 
> > > > > Regards,
> > > > > Boqun
> > > > 
> > > > I think you missed the leading ___ :)
> > > > 
> > > 
> > > What I mean here was smp_lwsync() was originally internal to PPC, but
> > > never mind ;-)
> > > 
> > > > smp_store_release is external and it needs __smp_lwsync as
> > > > defined here.
> > > > 
> > > > I can duplicate some code and have smp_lwsync *not* call __smp_lwsync
> > > 
> > > You mean bringing smp_lwsync() back? because I haven't seen you defining
> > > in asm-generic/barriers.h in previous patches and you just delete it in
> > > this patch.
> > > 
> > > > but why do this? Still, if you prefer it this way,
> > > > please let me know.
> > > > 
> > > 
> > > I think deleting smp_lwsync() is fine, though I need to change atomic
> > > variants patches on PPC because of it ;-/
> > > 
> > > Regards,
> > > Boqun
> > 
> > Sorry, I don't understand - why do you have to do anything?
> > I changed all users of smp_lwsync so they
> > use __smp_lwsync on SMP and barrier() on !SMP.
> > 
> > This is exactly the current behaviour, I also tested that
> > generated code does not change at all.
> > 
> > Is there a patch in your tree that conflicts with this?
> > 
> 
> Because in a patchset which implements atomic relaxed/acquire/release
> variants on PPC I use smp_lwsync(), this makes it have another user,
> please see this mail:
> 
> http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> 
> in definition of PPC's __atomic_op_release().
> 
> 
> But I think removing smp_lwsync() is a good idea and actually I think we
> can go further to remove __smp_lwsync() and let __smp_load_acquire and
> __smp_store_release call __lwsync() directly, but that is another thing.
> 
> Anyway, I will modify my patch.
> 
> Regards,
> Boqun


Thanks!
Could you send an ack then please?

> > 
> > > > > >  	WRITE_ONCE(*p, v);						\
> > > > > >  } while (0)
> > > > > >  
> > > > > > -#define smp_load_acquire(p)						\
> > > > > > +#define __smp_load_acquire(p)						\
> > > > > >  ({									\
> > > > > >  	typeof(*p) ___p1 = READ_ONCE(*p);				\
> > > > > >  	compiletime_assert_atomic_type(*p);				\
> > > > > > -	smp_lwsync();							\
> > > > > > +	__smp_lwsync();							\
> > > > > >  	___p1;								\
> > > > > >  })
> > > > > >  
> > > > > > -- 
> > > > > > MST
> > > > > > 
> > > > > > --
> > > > > > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> > > > > > the body of a message to majordomo@vger.kernel.org
> > > > > > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> > > > > > Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 572+ messages in thread



* Re: [PATCH v2 31/32] sh: support a 2-byte smp_store_mb
  2016-01-06 18:23                 ` Rich Felker
@ 2016-01-06 22:14                   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-06 22:14 UTC (permalink / raw)
  To: Rich Felker
  Cc: Peter Zijlstra, linux-kernel, linux-sh, Rob Landley, Jeff Dionne,
	Yoshinori Sato

On Wed, Jan 06, 2016 at 01:23:50PM -0500, Rich Felker wrote:
> On Wed, Jan 06, 2016 at 03:32:18PM +0100, Peter Zijlstra wrote:
> > On Wed, Jan 06, 2016 at 01:52:17PM +0200, Michael S. Tsirkin wrote:
> > > > > Peter, what do you think? How about I leave this patch as is for now?
> > > > 
> > > > No, and I object to removing the single byte implementation too. Either
> > > > remove the full arch or fix xchg() to conform. xchg() should work on all
> > > > native word sizes, for SH that would be 1,2 and 4 bytes.
> > > 
> > > Rich, maybe you could explain how the current 1 byte xchg on llsc is wrong?
> > 
> > It doesn't seem to preserve the 3 other bytes in the word.
> > 
> > > It does use 4 byte accesses but IIUC that is all that exists on
> > > this architecture.
> > 
> > Right, that's not a problem, look at arch/alpha/include/asm/xchg.h for
> > example. A store to another portion of the word should make the
> > store-conditional fail and we'll retry the loop.
> > 
> > The short versions should however preserve the other bytes in the word.
> 
> Indeed. Also, accesses must be aligned, so the asm needs to round down
> to an aligned address and perform a correct read-modify-write on it,
> placing the new byte in the correct offset in the word.
> 
> > Alternatively (my preference) this logic can be implemented in C as a
> wrapper around the 32-bit cmpxchg. I think this is less error-prone
> and it can be shared between the multiple sh cmpxchg back-ends,
> including the new cas.l one we need for J2.

Sounds much more reasonable.

> > SH's cmpxchg() is equally incomplete and does not provide 1 and 2 byte
> > versions.
> > 
> > In any case, I'm all for rm -rf arch/sh/, one less arch to worry about
> > is always good, but ISTR some people wanting to resurrect SH:
> > 
> >   http://old.lwn.net/Articles/647636/
> > 
> > Rob, Jeff, Sato-san, might I suggest you send a MAINTAINERS patch and
> > take up an active interest in SH lest someone 'accidentally' nukes it?
> 
> We're in the process of preparing such a proposal right now. The
> current intent is that Sato-san and I will co-maintain arch/sh. We'll
> include more details about motivation, proposed development direction,
> existing work to be merged, etc. in that proposal.
> 
> Rich

^ permalink raw reply	[flat|nested] 572+ messages in thread


* Re: [PATCH v2 31/32] sh: support a 2-byte smp_store_mb
  2016-01-06 20:23                   ` Michael S. Tsirkin
@ 2016-01-06 23:53                     ` Rich Felker
  -1 siblings, 0 replies; 572+ messages in thread
From: Rich Felker @ 2016-01-06 23:53 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Peter Zijlstra, linux-kernel, linux-sh, Rob Landley, Jeff Dionne,
	Yoshinori Sato

On Wed, Jan 06, 2016 at 10:23:12PM +0200, Michael S. Tsirkin wrote:
> On Wed, Jan 06, 2016 at 01:23:50PM -0500, Rich Felker wrote:
> > On Wed, Jan 06, 2016 at 03:32:18PM +0100, Peter Zijlstra wrote:
> > > On Wed, Jan 06, 2016 at 01:52:17PM +0200, Michael S. Tsirkin wrote:
> > > > > > Peter, what do you think? How about I leave this patch as is for now?
> > > > > 
> > > > > No, and I object to removing the single byte implementation too. Either
> > > > > remove the full arch or fix xchg() to conform. xchg() should work on all
> > > > > native word sizes, for SH that would be 1,2 and 4 bytes.
> > > > 
> > > > Rich, maybe you could explain how the current 1 byte xchg on llsc is wrong?
> > > 
> > > It doesn't seem to preserve the 3 other bytes in the word.
> > > 
> > > > It does use 4 byte accesses but IIUC that is all that exists on
> > > > this architecture.
> > > 
> > > Right, that's not a problem, look at arch/alpha/include/asm/xchg.h for
> > > example. A store to another portion of the word should make the
> > > store-conditional fail and we'll retry the loop.
> > > 
> > > The short versions should however preserve the other bytes in the word.
> > 
> > Indeed. Also, accesses must be aligned, so the asm needs to round down
> > to an aligned address and perform a correct read-modify-write on it,
> > placing the new byte in the correct offset in the word.
> > 
> > Alternatively (my preference) this logic can be implemented in C as a
> > wrapper around the 32-bit cmpxchg. I think this is less error-prone
> > and it can be shared between the multiple sh cmpxchg back-ends,
> > including the new cas.l one we need for J2.
> > 
> > > SH's cmpxchg() is equally incomplete and does not provide 1 and 2 byte
> > > versions.
> > > 
> > > In any case, I'm all for rm -rf arch/sh/, one less arch to worry about
> > > is always good, but ISTR some people wanting to resurrect SH:
> > > 
> > >   http://old.lwn.net/Articles/647636/
> > > 
> > > Rob, Jeff, Sato-san, might I suggest you send a MAINTAINERS patch and
> > > take up an active interest in SH lest someone 'accidentally' nukes it?
> > 
> > We're in the process of preparing such a proposal right now. The
> > current intent is that Sato-san and I will co-maintain arch/sh. We'll
> > include more details about motivation, proposed development direction,
> > existing work to be merged, etc. in that proposal.
> 
> Well I'd like to be able to make progress with generic
> arch cleanups meanwhile.
> 
> Could you quickly write a version of 1 and 2 byte xchg that
> works so I can include it?

Here are quick, untested generic ones:

static inline unsigned long xchg_u8(volatile u8 *m, unsigned long val)
{
	u32 old;
	unsigned long result;
	unsigned long offset = (unsigned long)m & 3;
	/* round down to the containing aligned 32-bit word */
	volatile u32 *w = (volatile u32 *)(m - offset);
	union { u32 w; u8 b[4]; } u;
	do {
		old = u.w = *w;
		result = u.b[offset];
		u.b[offset] = val;
	} while (cmpxchg(w, old, u.w) != old);
	return result;
}

static inline unsigned long xchg_u16(volatile u16 *m, unsigned long val)
{
	u32 old;
	unsigned long result;
	unsigned long offset = ((unsigned long)m & 3) >> 1;
	/* round down to the containing aligned 32-bit word */
	volatile u32 *w = (volatile u32 *)(m - offset);
	union { u32 w; u16 h[2]; } u;
	do {
		old = u.w = *w;
		result = u.h[offset];
		u.h[offset] = val;
	} while (cmpxchg(w, old, u.w) != old);
	return result;
}

It would be nice to have these in asm-generic for archs which don't
define their own versions rather than having cruft like this repeated
per-arch. Strictly speaking, the volatile u32 used to access the
32-bit word containing the u8 or u16 should be
__attribute__((__may_alias__)) too. Is there an existing kernel type
for a "may_alias u32" or should it perhaps be added?

Rich

^ permalink raw reply	[flat|nested] 572+ messages in thread


* Re: [PATCH v2 15/32] powerpc: define __smp_xxx
  2016-01-06 20:23                     ` Michael S. Tsirkin
  (?)
  (?)
@ 2016-01-07  0:43                       ` Boqun Feng
  -1 siblings, 0 replies; 572+ messages in thread
From: Boqun Feng @ 2016-01-07  0:43 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-kernel, Peter Zijlstra, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Benjamin Herrenschmidt, Paul Mackerras

[-- Attachment #1: Type: text/plain, Size: 1283 bytes --]

On Wed, Jan 06, 2016 at 10:23:51PM +0200, Michael S. Tsirkin wrote:
[...]
> > > 
> > > Sorry, I don't understand - why do you have to do anything?
> > > I changed all users of smp_lwsync so they
> > > use __smp_lwsync on SMP and barrier() on !SMP.
> > > 
> > > This is exactly the current behaviour, I also tested that
> > > generated code does not change at all.
> > > 
> > > Is there a patch in your tree that conflicts with this?
> > > 
> > 
> > Because in a patchset which implements atomic relaxed/acquire/release
> > variants on PPC I use smp_lwsync(), this makes it have another user,
> > please see this mail:
> > 
> > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> > 
> > in definition of PPC's __atomic_op_release().
> > 
> > 
> > But I think removing smp_lwsync() is a good idea and actually I think we
> > can go further to remove __smp_lwsync() and let __smp_load_acquire and
> > __smp_store_release call __lwsync() directly, but that is another thing.
> > 
> > Anyway, I will modify my patch.
> > 
> > Regards,
> > Boqun
> 
> 
> Thanks!
> Could you send an ack then please?
> 

Sure, if you need one from me, feel free to add my ack for this patch:

Acked-by: Boqun Feng <boqun.feng@gmail.com>

Regards,
Boqun

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 15/32] powerpc: define __smp_xxx
@ 2016-01-07  0:43                       ` Boqun Feng
  0 siblings, 0 replies; 572+ messages in thread
From: Boqun Feng @ 2016-01-07  0:43 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-kernel, Peter Zijlstra, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Ingo Molnar, Davidlohr Bueso, Andrey Konovalov,
	Paul E. McKenney

[-- Attachment #1: Type: text/plain, Size: 1283 bytes --]

On Wed, Jan 06, 2016 at 10:23:51PM +0200, Michael S. Tsirkin wrote:
[...]
> > > 
> > > Sorry, I don't understand - why do you have to do anything?
> > > I changed all users of smp_lwsync so they
> > > use __smp_lwsync on SMP and barrier() on !SMP.
> > > 
> > > This is exactly the current behaviour, I also tested that
> > > generated code does not change at all.
> > > 
> > > Is there a patch in your tree that conflicts with this?
> > > 
> > 
> > Because in a patchset which implements atomic relaxed/acquire/release
> > variants on PPC I use smp_lwsync(), this makes it have another user,
> > please see this mail:
> > 
> > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> > 
> > in definition of PPC's __atomic_op_release().
> > 
> > 
> > But I think removing smp_lwsync() is a good idea and actually I think we
> > can go further to remove __smp_lwsync() and let __smp_load_acquire and
> > __smp_store_release call __lwsync() directly, but that is another thing.
> > 
> > Anyway, I will modify my patch.
> > 
> > Regards,
> > Boqun
> 
> 
> Thanks!
> Could you send an ack then please?
> 

Sure, if you need one from me, feel free to add my ack for this patch:

Acked-by: Boqun Feng <boqun.feng@gmail.com>

Regards,
Boqun

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 15/32] powerpc: define __smp_xxx
@ 2016-01-07  0:43                       ` Boqun Feng
  0 siblings, 0 replies; 572+ messages in thread
From: Boqun Feng @ 2016-01-07  0:43 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-kernel, Peter Zijlstra, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Benjamin Herrenschmidt, Paul Mackerras

[-- Attachment #1: Type: text/plain, Size: 1283 bytes --]

On Wed, Jan 06, 2016 at 10:23:51PM +0200, Michael S. Tsirkin wrote:
[...]
> > > 
> > > Sorry, I don't understand - why do you have to do anything?
> > > I changed all users of smp_lwsync so they
> > > use __smp_lwsync on SMP and barrier() on !SMP.
> > > 
> > > This is exactly the current behaviour, I also tested that
> > > generated code does not change at all.
> > > 
> > > Is there a patch in your tree that conflicts with this?
> > > 
> > 
> > Because in a patchset which implements atomic relaxed/acquire/release
> > variants on PPC I use smp_lwsync(), this makes it have another user,
> > please see this mail:
> > 
> > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> > 
> > in definition of PPC's __atomic_op_release().
> > 
> > 
> > But I think removing smp_lwsync() is a good idea and actually I think we
> > can go further to remove __smp_lwsync() and let __smp_load_acquire and
> > __smp_store_release call __lwsync() directly, but that is another thing.
> > 
> > Anyway, I will modify my patch.
> > 
> > Regards,
> > Boqun
> 
> 
> Thanks!
> Could you send an ack then please?
> 

Sure, if you need one from me, feel free to add my ack for this patch:

Acked-by: Boqun Feng <boqun.feng@gmail.com>

Regards,
Boqun

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 15/32] powerpc: define __smp_xxx
  2016-01-06 20:23                     ` Michael S. Tsirkin
                                       ` (3 preceding siblings ...)
  (?)
@ 2016-01-07  0:43                     ` Boqun Feng
  -1 siblings, 0 replies; 572+ messages in thread
From: Boqun Feng @ 2016-01-07  0:43 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra,
	Benjamin Herrenschmidt, virtualization, Paul Mackerras,
	H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
	Davidlohr Bueso, Arnd Bergmann, Michael Ellerman, x86, xen-devel,
	Ingo Molnar, Paul E. McKenney, linux-xtensa,
	user-mode-linux-devel, Stefano Stabellini, Andrey Konovalov,
	adi-buildroot-devel, Thomas Gleixner


[-- Attachment #1.1: Type: text/plain, Size: 1283 bytes --]

On Wed, Jan 06, 2016 at 10:23:51PM +0200, Michael S. Tsirkin wrote:
[...]
> > > 
> > > Sorry, I don't understand - why do you have to do anything?
> > > I changed all users of smp_lwsync so they
> > > use __smp_lwsync on SMP and barrier() on !SMP.
> > > 
> > > This is exactly the current behaviour, I also tested that
> > > generated code does not change at all.
> > > 
> > > Is there a patch in your tree that conflicts with this?
> > > 
> > 
> > Because in a patchset which implements atomic relaxed/acquire/release
> > variants on PPC I use smp_lwsync(), this makes it have another user,
> > please see this mail:
> > 
> > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> > 
> > in definition of PPC's __atomic_op_release().
> > 
> > 
> > But I think removing smp_lwsync() is a good idea and actually I think we
> > can go further to remove __smp_lwsync() and let __smp_load_acquire and
> > __smp_store_release call __lwsync() directly, but that is another thing.
> > 
> > Anyway, I will modify my patch.
> > 
> > Regards,
> > Boqun
> 
> 
> Thanks!
> Could you send an ack then please?
> 

Sure, if you need one from me, feel free to add my ack for this patch:

Acked-by: Boqun Feng <boqun.feng@gmail.com>

Regards,
Boqun

[-- Attachment #1.2: signature.asc --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

[-- Attachment #2: Type: text/plain, Size: 183 bytes --]

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 572+ messages in thread

* [PATCH v2 15/32] powerpc: define __smp_xxx
@ 2016-01-07  0:43                       ` Boqun Feng
  0 siblings, 0 replies; 572+ messages in thread
From: Boqun Feng @ 2016-01-07  0:43 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jan 06, 2016 at 10:23:51PM +0200, Michael S. Tsirkin wrote:
[...]
> > > 
> > > Sorry, I don't understand - why do you have to do anything?
> > > I changed all users of smp_lwsync so they
> > > use __smp_lwsync on SMP and barrier() on !SMP.
> > > 
> > > This is exactly the current behaviour, I also tested that
> > > generated code does not change at all.
> > > 
> > > Is there a patch in your tree that conflicts with this?
> > > 
> > 
> > Because in a patchset which implements atomic relaxed/acquire/release
> > variants on PPC I use smp_lwsync(), this makes it have another user,
> > please see this mail:
> > 
> > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> > 
> > in definition of PPC's __atomic_op_release().
> > 
> > 
> > But I think removing smp_lwsync() is a good idea and actually I think we
> > can go further to remove __smp_lwsync() and let __smp_load_acquire and
> > __smp_store_release call __lwsync() directly, but that is another thing.
> > 
> > Anyway, I will modify my patch.
> > 
> > Regards,
> > Boqun
> 
> 
> Thanks!
> Could you send an ack then please?
> 

Sure, if you need one from me, feel free to add my ack for this patch:

Acked-by: Boqun Feng <boqun.feng@gmail.com>

Regards,
Boqun
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: not available
URL: <http://lists.infradead.org/pipermail/linux-arm-kernel/attachments/20160107/b3ca90ab/attachment-0001.sig>

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 15/32] powerpc: define __smp_xxx
  2016-01-06 20:23                     ` Michael S. Tsirkin
                                       ` (2 preceding siblings ...)
  (?)
@ 2016-01-07  0:43                     ` Boqun Feng
  -1 siblings, 0 replies; 572+ messages in thread
From: Boqun Feng @ 2016-01-07  0:43 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra,
	Benjamin Herrenschmidt, virtualization, Paul Mackerras,
	H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
	Davidlohr Bueso, Arnd Bergmann, Michael Ellerman, x86, xen-devel,
	Ingo Molnar, Paul E. McKenney, linux-xtensa,
	user-mode-linux-devel, Stefano Stabellini, Andrey Konovalov,
	adi-buildroot-devel, Thomas Gleixner


[-- Attachment #1.1: Type: text/plain, Size: 1283 bytes --]

On Wed, Jan 06, 2016 at 10:23:51PM +0200, Michael S. Tsirkin wrote:
[...]
> > > 
> > > Sorry, I don't understand - why do you have to do anything?
> > > I changed all users of smp_lwsync so they
> > > use __smp_lwsync on SMP and barrier() on !SMP.
> > > 
> > > This is exactly the current behaviour, I also tested that
> > > generated code does not change at all.
> > > 
> > > Is there a patch in your tree that conflicts with this?
> > > 
> > 
> > Because in a patchset which implements atomic relaxed/acquire/release
> > variants on PPC I use smp_lwsync(), this makes it have another user,
> > please see this mail:
> > 
> > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> > 
> > in definition of PPC's __atomic_op_release().
> > 
> > 
> > But I think removing smp_lwsync() is a good idea and actually I think we
> > can go further to remove __smp_lwsync() and let __smp_load_acquire and
> > __smp_store_release call __lwsync() directly, but that is another thing.
> > 
> > Anyway, I will modify my patch.
> > 
> > Regards,
> > Boqun
> 
> 
> Thanks!
> Could you send an ack then please?
> 

Sure, if you need one from me, feel free to add my ack for this patch:

Acked-by: Boqun Feng <boqun.feng@gmail.com>

Regards,
Boqun

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 31/32] sh: support a 2-byte smp_store_mb
  2016-01-06 23:53                     ` Rich Felker
@ 2016-01-07 13:37                       ` Peter Zijlstra
  -1 siblings, 0 replies; 572+ messages in thread
From: Peter Zijlstra @ 2016-01-07 13:37 UTC (permalink / raw)
  To: Rich Felker
  Cc: Michael S. Tsirkin, linux-kernel, linux-sh, Rob Landley,
	Jeff Dionne, Yoshinori Sato

On Wed, Jan 06, 2016 at 06:53:01PM -0500, Rich Felker wrote:
> It would be nice to have these in asm-generic for archs which don't
> define their own versions rather than having cruft like this repeated
> per-arch. 

Maybe, but I'm not sure how many archs would indeed suffer this problem,
so far I'm only aware of Alpha and SH that do not have short atomic ops.

> Strictly speaking, the volatile u32 used to access the
> 32-bit word containing the u8 or u16 should be
> __attribute__((__may_alias__)) too. Is there an existing kernel type
> for a "may_alias u32" or should it perhaps be added?

The kernel does -fno-strict-aliasing because the C aliasing rules are
crap (TM) :-), so I suspect we do not need the alias attribute here.


* Re: [PATCH v2 31/32] sh: support a 2-byte smp_store_mb
  2016-01-06 23:53                     ` Rich Felker
@ 2016-01-07 15:50                       ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-07 15:50 UTC (permalink / raw)
  To: Rich Felker
  Cc: Peter Zijlstra, linux-kernel, linux-sh, Rob Landley, Jeff Dionne,
	Yoshinori Sato

On Wed, Jan 06, 2016 at 06:53:01PM -0500, Rich Felker wrote:
> On Wed, Jan 06, 2016 at 10:23:12PM +0200, Michael S. Tsirkin wrote:
> > On Wed, Jan 06, 2016 at 01:23:50PM -0500, Rich Felker wrote:
> > > On Wed, Jan 06, 2016 at 03:32:18PM +0100, Peter Zijlstra wrote:
> > > > On Wed, Jan 06, 2016 at 01:52:17PM +0200, Michael S. Tsirkin wrote:
> > > > > > > Peter, what do you think? How about I leave this patch as is for now?
> > > > > > 
> > > > > > No, and I object to removing the single byte implementation too. Either
> > > > > > remove the full arch or fix xchg() to conform. xchg() should work on all
> > > > > > native word sizes, for SH that would be 1,2 and 4 bytes.
> > > > > 
> > > > > Rich, maybe you could explain how the current 1 byte xchg on llsc is wrong?
> > > > 
> > > > It doesn't seem to preserve the 3 other bytes in the word.
> > > > 
> > > > > It does use 4 byte accesses but IIUC that is all that exists on
> > > > > this architecture.
> > > > 
> > > > Right, that's not a problem, look at arch/alpha/include/asm/xchg.h for
> > > > example. A store to another portion of the word should make the
> > > > store-conditional fail and we'll retry the loop.
> > > > 
> > > > The short versions should however preserve the other bytes in the word.
> > > 
> > > Indeed. Also, accesses must be aligned, so the asm needs to round down
> > > to an aligned address and perform a correct read-modify-write on it,
> > > placing the new byte in the correct offset in the word.
> > > 
> > > Alternatively (my preference) this logic can be implemented in C as a
> > > wrapper around the 32-bit cmpxchg. I think this is less error-prone
> > > and it can be shared between the multiple sh cmpxchg back-ends,
> > > including the new cas.l one we need for J2.
> > > 
> > > > SH's cmpxchg() is equally incomplete and does not provide 1 and 2 byte
> > > > versions.
> > > > 
> > > > In any case, I'm all for rm -rf arch/sh/, one less arch to worry about
> > > > is always good, but ISTR some people wanting to resurrect SH:
> > > > 
> > > >   http://old.lwn.net/Articles/647636/
> > > > 
> > > > Rob, Jeff, Sato-san, might I suggest you send a MAINTAINERS patch and
> > > > take up an active interest in SH lest someone 'accidentally' nukes it?
> > > 
> > > We're in the process of preparing such a proposal right now. The
> > > current intent is that Sato-san and I will co-maintain arch/sh. We'll
> > > include more details about motivation, proposed development direction,
> > > existing work to be merged, etc. in that proposal.
> > 
> > Well I'd like to be able to make progress with generic
> > arch cleanups meanwhile.
> > 
> > Could you quickly write a version of 1 and 2 byte xchg that
> > works so I can include it?
> 
> Here are quick, untested generic ones:
> 
> static inline unsigned long xchg_u8(volatile u8 *m, unsigned long val)
> {
> 	u32 old;
> 	unsigned long result;
> 	unsigned long offset = (unsigned long)m & 3;
> 	volatile u32 *w = (volatile u32 *)(m - offset);
> 	union { u32 w; u8 b[4]; } u;
> 	do {
> 		old = u.w = *w;
> 		result = u.b[offset];
> 		u.b[offset] = val;
> 	} while (cmpxchg(w, old, u.w) != old);
> 	return result;
> }
> 
> static inline unsigned long xchg_u16(volatile u16 *m, unsigned long val)
> {
> 	u32 old;
> 	unsigned long result;
> 	unsigned long offset = ((unsigned long)m & 3) >> 1;
> 	volatile u32 *w = (volatile u32 *)(m - offset);
> 	union { u32 w; u16 h[2]; } u;
> 	do {
> 		old = u.w = *w;
> 		result = u.h[offset];
> 		u.h[offset] = val;
> 	} while (cmpxchg(w, old, u.w) != old);
> 	return result;
> }
> 
> It would be nice to have these in asm-generic for archs which don't
> define their own versions rather than having cruft like this repeated
> per-arch. Strictly speaking, the volatile u32 used to access the
> 32-bit word containing the u8 or u16 should be
> __attribute__((__may_alias__)) too.
> Is there an existing kernel type
> for a "may_alias u32" or should it perhaps be added?
> 
> Rich

I'm inclined to write this using shifts; this way there's
no duplication between the 1 and 2 byte variants.
And READ_ONCE is better than volatile IMHO.


diff --git a/arch/sh/include/asm/cmpxchg.h b/arch/sh/include/asm/cmpxchg.h
index 85c97b18..a858879 100644
--- a/arch/sh/include/asm/cmpxchg.h
+++ b/arch/sh/include/asm/cmpxchg.h
@@ -8,6 +8,7 @@
 
 #include <linux/compiler.h>
 #include <linux/types.h>
+#include <asm/byteorder.h>
 
 #if defined(CONFIG_GUSA_RB)
 #include <asm/cmpxchg-grb.h>
@@ -19,6 +20,26 @@
 
 extern void __xchg_called_with_bad_pointer(void);
 
+static inline u32 __xchg_cmpxchg(void *ptr, u32 x, int size)
+{
+	int off = (unsigned long)ptr % sizeof(u32);
+	u32 *p = ptr - off;
+	int bitoff = __BYTE_ORDER == __BIG_ENDIAN ?
+		((sizeof(u32) - size - off) * BITS_PER_BYTE) :
+		(off * BITS_PER_BYTE);
+	u32 bitmask = ((0x1 << size * BITS_PER_BYTE) - 1) << bitoff;
+	u32 oldv, newv;
+	u32 ret;
+
+	do {
+		oldv = READ_ONCE(*p);
+		ret = (oldv & bitmask) >> bitoff;
+		newv = (oldv & ~bitmask) | (x << bitoff);
+	} while (cmpxchg(p, oldv, newv) != oldv);
+
+	return ret;
+}
+
 #define __xchg(ptr, x, size)				\
 ({							\
 	unsigned long __xchg__res;			\
@@ -27,8 +48,10 @@ extern void __xchg_called_with_bad_pointer(void);
 	case 4:						\
 		__xchg__res = xchg_u32(__xchg_ptr, x);	\
 		break;					\
+	case 2:						\
 	case 1:						\
-		__xchg__res = xchg_u8(__xchg_ptr, x);	\
+		__xchg__res = __xchg_cmpxchg(__xchg_ptr,\
+					     x, size);	\
 		break;					\
 	default:					\
 		__xchg_called_with_bad_pointer();	\


Testing the above now.
-- 
MST


* Re: [PATCH v2 31/32] sh: support a 2-byte smp_store_mb
  2016-01-06 23:53                     ` Rich Felker
@ 2016-01-07 17:48                       ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-07 17:48 UTC (permalink / raw)
  To: Rich Felker
  Cc: Peter Zijlstra, linux-kernel, linux-sh, Rob Landley, Jeff Dionne,
	Yoshinori Sato

On Wed, Jan 06, 2016 at 06:53:01PM -0500, Rich Felker wrote:
> On Wed, Jan 06, 2016 at 10:23:12PM +0200, Michael S. Tsirkin wrote:
> > On Wed, Jan 06, 2016 at 01:23:50PM -0500, Rich Felker wrote:
> > > On Wed, Jan 06, 2016 at 03:32:18PM +0100, Peter Zijlstra wrote:
> > > > On Wed, Jan 06, 2016 at 01:52:17PM +0200, Michael S. Tsirkin wrote:
> > > > > > > Peter, what do you think? How about I leave this patch as is for now?
> > > > > > 
> > > > > > No, and I object to removing the single byte implementation too. Either
> > > > > > remove the full arch or fix xchg() to conform. xchg() should work on all
> > > > > > native word sizes, for SH that would be 1,2 and 4 bytes.
> > > > > 
> > > > > Rich, maybe you could explain how the current 1 byte xchg on llsc is wrong?
> > > > 
> > > > It doesn't seem to preserve the 3 other bytes in the word.
> > > > 
> > > > > It does use 4 byte accesses but IIUC that is all that exists on
> > > > > this architecture.
> > > > 
> > > > Right, that's not a problem, look at arch/alpha/include/asm/xchg.h for
> > > > example. A store to another portion of the word should make the
> > > > store-conditional fail and we'll retry the loop.
> > > > 
> > > > The short versions should however preserve the other bytes in the word.
> > > 
> > > Indeed. Also, accesses must be aligned, so the asm needs to round down
> > > to an aligned address and perform a correct read-modify-write on it,
> > > placing the new byte in the correct offset in the word.
> > > 
> > > Alternatively (my preference) this logic can be implemented in C as a
> > > wrapper around the 32-bit cmpxchg. I think this is less error-prone
> > > and it can be shared between the multiple sh cmpxchg back-ends,
> > > including the new cas.l one we need for J2.
> > > 
> > > > SH's cmpxchg() is equally incomplete and does not provide 1 and 2 byte
> > > > versions.
> > > > 
> > > > In any case, I'm all for rm -rf arch/sh/, one less arch to worry about
> > > > is always good, but ISTR some people wanting to resurrect SH:
> > > > 
> > > >   http://old.lwn.net/Articles/647636/
> > > > 
> > > > Rob, Jeff, Sato-san, might I suggest you send a MAINTAINERS patch and
> > > > take up an active interest in SH lest someone 'accidentally' nukes it?
> > > 
> > > We're in the process of preparing such a proposal right now. The
> > > current intent is that Sato-san and I will co-maintain arch/sh. We'll
> > > include more details about motivation, proposed development direction,
> > > existing work to be merged, etc. in that proposal.
> > 
> > Well I'd like to be able to make progress with generic
> > arch cleanups meanwhile.
> > 
> > Could you quickly write a version of 1 and 2 byte xchg that
> > works so I can include it?
> 
> Here are quick, untested generic ones:
> 
> static inline unsigned long xchg_u8(volatile u8 *m, unsigned long val)
> {
> 	u32 old;
> 	unsigned long result;
> 	unsigned long offset = (unsigned long)m & 3;
> 	volatile u32 *w = (volatile u32 *)(m - offset);
> 	union { u32 w; u8 b[4]; } u;
> 	do {
> 		old = u.w = *w;
> 		result = u.b[offset];
> 		u.b[offset] = val;
> 	} while (cmpxchg(w, old, u.w) != old);
> 	return result;
> }
> 
> static inline unsigned long xchg_u16(volatile u16 *m, unsigned long val)
> {
> 	u32 old;
> 	unsigned long result;
> 	unsigned long offset = ((unsigned long)m & 3) >> 1;
> 	volatile u32 *w = (volatile u32 *)(m - offset);
> 	union { u32 w; u16 h[2]; } u;
> 	do {
> 		old = u.w = *w;
> 		result = u.h[offset];
> 		u.h[offset] = val;
> 	} while (cmpxchg(w, old, u.w) != old);
> 	return result;
> }
> 
> It would be nice to have these in asm-generic for archs which don't
> define their own versions rather than having cruft like this repeated
> per-arch. Strictly speaking, the volatile u32 used to access the
> 32-bit word containing the u8 or u16 should be
> __attribute__((__may_alias__)) too. Is there an existing kernel type
> for a "may_alias u32" or should it perhaps be added?
> 
> Rich

BTW this seems suboptimal for grb and irq variants which apparently
can do things correctly.



* Re: [PATCH v2 31/32] sh: support a 2-byte smp_store_mb
  2016-01-07 13:37                       ` Peter Zijlstra
@ 2016-01-07 19:05                         ` Rich Felker
  -1 siblings, 0 replies; 572+ messages in thread
From: Rich Felker @ 2016-01-07 19:05 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Michael S. Tsirkin, linux-kernel, linux-sh, Rob Landley,
	Jeff Dionne, Yoshinori Sato

On Thu, Jan 07, 2016 at 02:37:49PM +0100, Peter Zijlstra wrote:
> On Wed, Jan 06, 2016 at 06:53:01PM -0500, Rich Felker wrote:
> > It would be nice to have these in asm-generic for archs which don't
> > define their own versions rather than having cruft like this repeated
> > per-arch. 
> 
> Maybe, but I'm not sure how many archs would indeed suffer this problem,
> so far I'm only aware of Alpha and SH that do not have short atomic ops.

Apparently original armv6 (non-k) lacked u8 and u16 variants of
ldrex/strex. I'm pretty sure or1k also lacks them, and mips,
microblaze, and some powerpc versions might too. Not sure about
risc-v.

> > Strictly speaking, the volatile u32 used to access the
> > 32-bit word containing the u8 or u16 should be
> > __attribute__((__may_alias__)) too. Is there an existing kernel type
> > for a "may_alias u32" or should it perhaps be added?
> 
> The kernel does -fno-strict-aliasing because the C aliasing rules are
> crap (TM) :-), so I suspect we do not need the alias attribute here.

I suspect working on the kernel I'm going to have to get used to
getting "corrected" for writing proper C... ;-)

Rich


* Re: [PATCH v2 31/32] sh: support a 2-byte smp_store_mb
  2016-01-07 17:48                       ` Michael S. Tsirkin
@ 2016-01-07 19:10                         ` Rich Felker
  -1 siblings, 0 replies; 572+ messages in thread
From: Rich Felker @ 2016-01-07 19:10 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Peter Zijlstra, linux-kernel, linux-sh, Rob Landley, Jeff Dionne,
	Yoshinori Sato

On Thu, Jan 07, 2016 at 07:48:08PM +0200, Michael S. Tsirkin wrote:
> On Wed, Jan 06, 2016 at 06:53:01PM -0500, Rich Felker wrote:
> > On Wed, Jan 06, 2016 at 10:23:12PM +0200, Michael S. Tsirkin wrote:
> > > On Wed, Jan 06, 2016 at 01:23:50PM -0500, Rich Felker wrote:
> > > > On Wed, Jan 06, 2016 at 03:32:18PM +0100, Peter Zijlstra wrote:
> > > > > On Wed, Jan 06, 2016 at 01:52:17PM +0200, Michael S. Tsirkin wrote:
> > > > > > > > Peter, what do you think? How about I leave this patch as is for now?
> > > > > > > 
> > > > > > > No, and I object to removing the single byte implementation too. Either
> > > > > > > remove the full arch or fix xchg() to conform. xchg() should work on all
> > > > > > > native word sizes, for SH that would be 1,2 and 4 bytes.
> > > > > > 
> > > > > > Rich, maybe you could explain how the current 1 byte xchg on llsc is wrong?
> > > > > 
> > > > > It doesn't seem to preserve the 3 other bytes in the word.
> > > > > 
> > > > > > It does use 4 byte accesses but IIUC that is all that exists on
> > > > > > this architecture.
> > > > > 
> > > > > Right, that's not a problem, look at arch/alpha/include/asm/xchg.h for
> > > > > example. A store to another portion of the word should make the
> > > > > store-conditional fail and we'll retry the loop.
> > > > > 
> > > > > The short versions should however preserve the other bytes in the word.
> > > > 
> > > > Indeed. Also, accesses must be aligned, so the asm needs to round down
> > > > to an aligned address and perform a correct read-modify-write on it,
> > > > placing the new byte in the correct offset in the word.
> > > > 
> > > > Alternatively (my preference) this logic can be implemented in C as a
> > > > wrapper around the 32-bit cmpxchg. I think this is less error-prone
> > > > and it can be shared between the multiple sh cmpxchg back-ends,
> > > > including the new cas.l one we need for J2.
> > > > 
> > > > > SH's cmpxchg() is equally incomplete and does not provide 1 and 2 byte
> > > > > versions.
> > > > > 
> > > > > In any case, I'm all for rm -rf arch/sh/, one less arch to worry about
> > > > > is always good, but ISTR some people wanting to resurrect SH:
> > > > > 
> > > > >   http://old.lwn.net/Articles/647636/
> > > > > 
> > > > > Rob, Jeff, Sato-san, might I suggest you send a MAINTAINERS patch and
> > > > > take up an active interest in SH lest someone 'accidentally' nukes it?
> > > > 
> > > > We're in the process of preparing such a proposal right now. The
> > > > current intent is that Sato-san and I will co-maintain arch/sh. We'll
> > > > include more details about motivation, proposed development direction,
> > > > existing work to be merged, etc. in that proposal.
> > > 
> > > Well I'd like to be able to make progress with generic
> > > arch cleanups meanwhile.
> > > 
> > > Could you quickly write a version of 1 and 2 byte xchg that
> > > works so I can include it?
> > 
> > Here are quick, untested generic ones:
> > 
> > static inline unsigned long xchg_u8(volatile u8 *m, unsigned long val)
> > {
> > 	u32 old;
> > 	unsigned long result;
> > 	unsigned long offset = (unsigned long)m & 3;
> > 	volatile u32 *w = (volatile u32 *)(m - offset);
> > 	union { u32 w; u8 b[4]; } u;
> > 	do {
> > 		old = u.w = *w;
> > 		result = u.b[offset];
> > 		u.b[offset] = val;
> > 	} while (cmpxchg(w, old, u.w) != old);
> > 	return result;
> > }
> > 
> > static inline unsigned long xchg_u16(volatile u16 *m, unsigned long val)
> > {
> > 	u32 old;
> > 	unsigned long result;
> > 	unsigned long offset = ((unsigned long)m & 3) >> 1;
> > 	volatile u32 *w = (volatile u32 *)(m - offset);
> > 	union { u32 w; u16 h[2]; } u;
> > 	do {
> > 		old = u.w = *w;
> > 		result = u.h[offset];
> > 		u.h[offset] = val;
> > 	} while (cmpxchg(w, old, u.w) != old);
> > 	return result;
> > }
> > 
> > It would be nice to have these in asm-generic for archs which don't
> > define their own versions rather than having cruft like this repeated
> > per-arch. Strictly speaking, the volatile u32 used to access the
> > 32-bit word containing the u8 or u16 should be
> > __attribute__((__may_alias__)) too. Is there an existing kernel type
> > for a "may_alias u32" or should it perhaps be added?
> > 
> > Rich
> 
> BTW this seems suboptimal for grb and irq variants which apparently
> can do things correctly.

In principle I agree, but u8/u16 xchg is mostly unused (completely
unused in my builds) and unlikely to matter to performance. Also, the
irq variant is only for the original sh2 which is not even produced
anymore afaik. Our reimplementation of the sh2 ISA, the J2, has a
cas.l instruction that will be used instead because it supports SMP
where interrupt masking is insufficient to achieve atomicity.

Rich

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 31/32] sh: support a 2-byte smp_store_mb
  2016-01-07 19:10                         ` Rich Felker
@ 2016-01-07 22:41                           ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-07 22:41 UTC (permalink / raw)
  To: Rich Felker
  Cc: Peter Zijlstra, linux-kernel, linux-sh, Rob Landley, Jeff Dionne,
	Yoshinori Sato

> > > It would be nice to have these in asm-generic for archs which don't
> > > define their own versions rather than having cruft like this repeated
> > > per-arch. Strictly speaking, the volatile u32 used to access the
> > > 32-bit word containing the u8 or u16 should be
> > > __attribute__((__may_alias__)) too. Is there an existing kernel type
> > > for a "may_alias u32" or should it perhaps be added?
> > > 
> > > Rich
> > 
> > BTW this seems suboptimal for grb and irq variants which apparently
> > can do things correctly.
> 
> In principle I agree, but u8/u16 xchg is mostly unused (completely
> unused in my builds) and unlikely to matter to performance. Also, the
> irq variant is only for the original sh2 which is not even produced
> anymore afaik. Our reimplementation of the sh2 ISA, the J2, has a
> cas.l instruction that will be used instead because it supports SMP
> where interrupt masking is insufficient to achieve atomicity.
> 
> Rich

Since it looks like there will soon be active maintainers
for this arch, I think it's best if I make the minimal possible
changes and then you guys can rewrite it any way you like,
drop irq variant or whatever.

The minimal change is probably the below code but
the grb variant is just copy paste from xchg_u8
with a minor tweak -
can you pls confirm it looks right?

I tested the llsc code on ppc and x86 and since it's
portable I know the logic is correct there.

Will post v3 with this included but would appreciate
your input first.

---->
sh: support 1 and 2 byte xchg

This completes the xchg implementation for the sh architecture.  Note:
the llsc variant is tricky since the ISA only supports 4-byte atomics;
the existing 1-byte xchg implementation is wrong: we need to do a 4-byte
cmpxchg and retry if any of the other bytes changed meanwhile.

Write this in C for clarity.

Suggested-by: Rich Felker <dalias@libc.org>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

---->

diff --git a/arch/sh/include/asm/cmpxchg-grb.h b/arch/sh/include/asm/cmpxchg-grb.h
index f848dec..2ed557b 100644
--- a/arch/sh/include/asm/cmpxchg-grb.h
+++ b/arch/sh/include/asm/cmpxchg-grb.h
@@ -23,6 +23,28 @@ static inline unsigned long xchg_u32(volatile u32 *m, unsigned long val)
 	return retval;
 }
 
+static inline unsigned long xchg_u16(volatile u16 *m, unsigned long val)
+{
+	unsigned long retval;
+
+	__asm__ __volatile__ (
+		"   .align  2             \n\t"
+		"   mova    1f,   r0      \n\t" /* r0 = end point */
+		"   mov    r15,   r1      \n\t" /* r1 = saved sp */
+		"   mov    #-6,   r15     \n\t" /* LOGIN */
+		"   mov.w  @%1,   %0      \n\t" /* load  old value */
+		"   extu.w  %0,   %0      \n\t" /* extend as unsigned */
+		"   mov.w   %2,   @%1     \n\t" /* store new value */
+		"1: mov     r1,   r15     \n\t" /* LOGOUT */
+		: "=&r" (retval),
+		  "+r"  (m),
+		  "+r"  (val)		/* inhibit r15 overloading */
+		:
+		: "memory" , "r0", "r1");
+
+	return retval;
+}
+
 static inline unsigned long xchg_u8(volatile u8 *m, unsigned long val)
 {
 	unsigned long retval;
diff --git a/arch/sh/include/asm/cmpxchg-irq.h b/arch/sh/include/asm/cmpxchg-irq.h
index bd11f63..f888772 100644
--- a/arch/sh/include/asm/cmpxchg-irq.h
+++ b/arch/sh/include/asm/cmpxchg-irq.h
@@ -14,6 +14,17 @@ static inline unsigned long xchg_u32(volatile u32 *m, unsigned long val)
 	return retval;
 }
 
+static inline unsigned long xchg_u16(volatile u16 *m, unsigned long val)
+{
+	unsigned long flags, retval;
+
+	local_irq_save(flags);
+	retval = *m;
+	*m = val;
+	local_irq_restore(flags);
+	return retval;
+}
+
 static inline unsigned long xchg_u8(volatile u8 *m, unsigned long val)
 {
 	unsigned long flags, retval;
diff --git a/arch/sh/include/asm/cmpxchg-llsc.h b/arch/sh/include/asm/cmpxchg-llsc.h
index 4713666..5dfdb06 100644
--- a/arch/sh/include/asm/cmpxchg-llsc.h
+++ b/arch/sh/include/asm/cmpxchg-llsc.h
@@ -1,6 +1,8 @@
 #ifndef __ASM_SH_CMPXCHG_LLSC_H
 #define __ASM_SH_CMPXCHG_LLSC_H
 
+#include <asm/byteorder.h>
+
 static inline unsigned long xchg_u32(volatile u32 *m, unsigned long val)
 {
 	unsigned long retval;
@@ -22,29 +24,8 @@ static inline unsigned long xchg_u32(volatile u32 *m, unsigned long val)
 	return retval;
 }
 
-static inline unsigned long xchg_u8(volatile u8 *m, unsigned long val)
-{
-	unsigned long retval;
-	unsigned long tmp;
-
-	__asm__ __volatile__ (
-		"1:					\n\t"
-		"movli.l	@%2, %0	! xchg_u8	\n\t"
-		"mov		%0, %1			\n\t"
-		"mov		%3, %0			\n\t"
-		"movco.l	%0, @%2			\n\t"
-		"bf		1b			\n\t"
-		"synco					\n\t"
-		: "=&z"(tmp), "=&r" (retval)
-		: "r" (m), "r" (val & 0xff)
-		: "t", "memory"
-	);
-
-	return retval;
-}
-
 static inline unsigned long
-__cmpxchg_u32(volatile int *m, unsigned long old, unsigned long new)
+__cmpxchg_u32(volatile u32 *m, unsigned long old, unsigned long new)
 {
 	unsigned long retval;
 	unsigned long tmp;
@@ -68,4 +49,34 @@ __cmpxchg_u32(volatile int *m, unsigned long old, unsigned long new)
 	return retval;
 }
 
+static inline u32 __xchg_cmpxchg(volatile void *ptr, u32 x, int size)
+{
+	int off = (unsigned long)ptr % sizeof(u32);
+	volatile u32 *p = ptr - off;
+	int bitoff = __BYTE_ORDER == __BIG_ENDIAN ?
+		((sizeof(u32) - 1 - off) * BITS_PER_BYTE) :
+		(off * BITS_PER_BYTE);
+	u32 bitmask = ((0x1 << size * BITS_PER_BYTE) - 1) << bitoff;
+	u32 oldv, newv;
+	u32 ret;
+
+	do {
+		oldv = READ_ONCE(*p);
+		ret = (oldv & bitmask) >> bitoff;
+		newv = (oldv & ~bitmask) | (x << bitoff);
+	} while (__cmpxchg_u32(p, oldv, newv) != oldv);
+
+	return ret;
+}
+
+static inline unsigned long xchg_u16(volatile u16 *m, unsigned long val)
+{
+	return __xchg_cmpxchg(m, val, sizeof *m);
+}
+
+static inline unsigned long xchg_u8(volatile u8 *m, unsigned long val)
+{
+	return __xchg_cmpxchg(m, val, sizeof *m);
+}
+
 #endif /* __ASM_SH_CMPXCHG_LLSC_H */
diff --git a/arch/sh/include/asm/cmpxchg.h b/arch/sh/include/asm/cmpxchg.h
index 85c97b18..5225916 100644
--- a/arch/sh/include/asm/cmpxchg.h
+++ b/arch/sh/include/asm/cmpxchg.h
@@ -27,6 +27,9 @@ extern void __xchg_called_with_bad_pointer(void);
 	case 4:						\
 		__xchg__res = xchg_u32(__xchg_ptr, x);	\
 		break;					\
+	case 2:						\
+		__xchg__res = xchg_u16(__xchg_ptr, x);	\
+		break;					\
 	case 1:						\
 		__xchg__res = xchg_u8(__xchg_ptr, x);	\
 		break;					\

^ permalink raw reply related	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 31/32] sh: support a 2-byte smp_store_mb
  2016-01-07 22:41                           ` Michael S. Tsirkin
@ 2016-01-08  4:25                             ` Rich Felker
  -1 siblings, 0 replies; 572+ messages in thread
From: Rich Felker @ 2016-01-08  4:25 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Peter Zijlstra, linux-kernel, linux-sh, Rob Landley, Jeff Dionne,
	Yoshinori Sato

On Fri, Jan 08, 2016 at 12:41:35AM +0200, Michael S. Tsirkin wrote:
> > > > It would be nice to have these in asm-generic for archs which don't
> > > > define their own versions rather than having cruft like this repeated
> > > > per-arch. Strictly speaking, the volatile u32 used to access the
> > > > 32-bit word containing the u8 or u16 should be
> > > > __attribute__((__may_alias__)) too. Is there an existing kernel type
> > > > for a "may_alias u32" or should it perhaps be added?
> > > > 
> > > > Rich
> > > 
> > > BTW this seems suboptimal for grb and irq variants which apparently
> > > can do things correctly.
> > 
> > In principle I agree, but u8/u16 xchg is mostly unused (completely
> > unused in my builds) and unlikely to matter to performance. Also, the
> > irq variant is only for the original sh2 which is not even produced
> > anymore afaik. Our reimplementation of the sh2 ISA, the J2, has a
> > cas.l instruction that will be used instead because it supports SMP
> > where interrupt masking is insufficient to achieve atomicity.
> > 
> > Rich
> 
> Since it looks like there will soon be active maintainers
> for this arch, I think it's best if I make the minimal possible
> changes and then you guys can rewrite it any way you like,
> drop irq variant or whatever.
> 
> The minimal change is probably the below code but
> the grb variant is just copy paste from xchg_u8
> with a minor tweak -
> can you pls confirm it looks right?

I haven't had a chance to test it, but I don't see anything obviously
wrong with it.

> I tested the llsc code on ppc and x86 and since it's
> portable I know the logic is correct there.

Sounds good. Since it will also be needed for the cas.l variant I'd
rather have this in the main asm/cmpxchg.h where it can be shared if
you see an easy way to do that now, but if not I can take care of it
later when merging cmpxchg-cas.h. Perhaps just putting __xchg_cmpxchg
in the main asm/cmpxchg.h would suffice so that only the thin wrappers
need to be duplicated. Ideally it could even be moved outside of the
arch asm headers, but then there might be annoying header ordering
issues to deal with.

> Will post v3 with this included but would appreciate
> your input first.

Go ahead. Thanks!

Rich

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 31/32] sh: support a 2-byte smp_store_mb
  2016-01-08  4:25                             ` Rich Felker
@ 2016-01-08  7:23                               ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-08  7:23 UTC (permalink / raw)
  To: Rich Felker
  Cc: Peter Zijlstra, linux-kernel, linux-sh, Rob Landley, Jeff Dionne,
	Yoshinori Sato

On Thu, Jan 07, 2016 at 11:25:05PM -0500, Rich Felker wrote:
> On Fri, Jan 08, 2016 at 12:41:35AM +0200, Michael S. Tsirkin wrote:
> > > > > It would be nice to have these in asm-generic for archs which don't
> > > > > define their own versions rather than having cruft like this repeated
> > > > > per-arch. Strictly speaking, the volatile u32 used to access the
> > > > > 32-bit word containing the u8 or u16 should be
> > > > > __attribute__((__may_alias__)) too. Is there an existing kernel type
> > > > > for a "may_alias u32" or should it perhaps be added?
> > > > > 
> > > > > Rich
> > > > 
> > > > BTW this seems suboptimal for grb and irq variants which apparently
> > > > can do things correctly.
> > > 
> > > In principle I agree, but u8/u16 xchg is mostly unused (completely
> > > unused in my builds) and unlikely to matter to performance. Also, the
> > > irq variant is only for the original sh2 which is not even produced
> > > anymore afaik. Our reimplementation of the sh2 ISA, the J2, has a
> > > cas.l instruction that will be used instead because it supports SMP
> > > where interrupt masking is insufficient to achieve atomicity.
> > > 
> > > Rich
> > 
> > Since it looks like there will soon be active maintainers
> > for this arch, I think it's best if I make the minimal possible
> > changes and then you guys can rewrite it any way you like,
> > drop irq variant or whatever.
> > 
> > The minimal change is probably the below code but
> > the grb variant is just copy paste from xchg_u8
> > with a minor tweak -
> > can you pls confirm it looks right?
> 
> I haven't had a chance to test it, but I don't see anything obviously
> wrong with it.
> 
> > I tested the llsc code on ppc and x86 and since it's
> > portable I know the logic is correct there.
> 
> Sounds good. Since it will also be needed for the cas.l variant I'd
> rather have this in the main asm/cmpxchg.h where it can be shared if
> you see an easy way to do that now, but if not I can take care of it
> later when merging cmpxchg-cas.h. Perhaps just putting __xchg_cmpxchg
> in the main asm/cmpxchg.h would suffice so that only the thin wrappers
> need to be duplicated.
> Ideally it could even be moved outside of the
> arch asm headers, but then there might be annoying header ordering
> issues to deal with.

Well it isn't possible to put it in cmpxchg.h because you get into
annoying ordering issues: __cmpxchg_u32 is needed
so it has to come after the headers, but the wrappers must
come before the headers.

I put it in a header by itself. This way it's easy to reuse,
and even the thin wrappers won't have to be duplicated.

> 
> > Will post v3 with this included but would appreciate
> > your input first.
> 
> Go ahead. Thanks!
> 
> Rich

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 20/32] metag: define __smp_xxx
  2016-01-05  0:09       ` James Hogan
                             ` (3 preceding siblings ...)
  (?)
@ 2016-01-11 11:10           ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-11 11:10 UTC (permalink / raw)
  To: James Hogan
  Cc: linux-kernel-u79uwXL29TY76Z2rM5mHXA, Peter Zijlstra,
	Arnd Bergmann, linux-arch-u79uwXL29TY76Z2rM5mHXA, Andrew Cooper,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	Stefano Stabellini, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	David Miller, linux-ia64-u79uwXL29TY76Z2rM5mHXA,
	linuxppc-dev-uLR06cmDAlY/bJ5BZ2RsiQ,
	linux-s390-u79uwXL29TY76Z2rM5mHXA,
	sparclinux-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	linux-metag-u79uwXL29TY76Z2rM5mHXA,
	linux-mips-6z/3iImG2C8G8FEW9MqTrA, x86-DgEjT+Ai2ygdnm+yROfE0A,
	user-mode-linux-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
	adi-buildroot-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
	linux-sh-u79uwXL29TY76Z2rM5mHXA,
	linux-xtensa-PjhNF2WwrV/0Sa2dR60CXw,
	xen-devel-GuqFBffKawtpuQazS67q72D2FQJk+8+b, Ingo Molnar,
	Davidlohr Bueso

On Tue, Jan 05, 2016 at 12:09:30AM +0000, James Hogan wrote:
> Hi Michael,
> 
> On Thu, Dec 31, 2015 at 09:08:22PM +0200, Michael S. Tsirkin wrote:
> > This defines __smp_xxx barriers for metag,
> > for use by virtualization.
> > 
> > smp_xxx barriers are removed as they are
> > defined correctly by asm-generic/barrier.h
> > 
> > Note: as __smp_xxx macros should not depend on CONFIG_SMP, they cannot
> > use the existing fence() macro since that is defined differently between
> > SMP and !SMP.  For this reason, this patch introduces a wrapper
> > metag_fence() that doesn't depend on CONFIG_SMP.
> > fence() is then defined using that, depending on CONFIG_SMP.
> 
> I'm not a fan of the inconsistent commit message wrapping. I wrap to 72
> columns (although I now notice SubmittingPatches says to use 75...).
> 
> > 
> > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > ---
> >  arch/metag/include/asm/barrier.h | 32 +++++++++++++++-----------------
> >  1 file changed, 15 insertions(+), 17 deletions(-)
> > 
> > diff --git a/arch/metag/include/asm/barrier.h b/arch/metag/include/asm/barrier.h
> > index b5b778b..84880c9 100644
> > --- a/arch/metag/include/asm/barrier.h
> > +++ b/arch/metag/include/asm/barrier.h
> > @@ -44,13 +44,6 @@ static inline void wr_fence(void)
> >  #define rmb()		barrier()
> >  #define wmb()		mb()
> >  
> > -#ifndef CONFIG_SMP
> > -#define fence()		do { } while (0)
> > -#define smp_mb()        barrier()
> > -#define smp_rmb()       barrier()
> > -#define smp_wmb()       barrier()
> > -#else
> 
> !SMP kernel text differs, but only because of new presence of unused
> metag_fence() inline function. If I #if 0 that out, then it matches, so
> that's fine.
> 
> > -
> >  #ifdef CONFIG_METAG_SMP_WRITE_REORDERING
> >  /*
> >   * Write to the atomic memory unlock system event register (command 0). This is
> > @@ -60,26 +53,31 @@ static inline void wr_fence(void)
> >   * incoherence). It is therefore ineffective if used after and on the same
> >   * thread as a write.
> >   */
> > -static inline void fence(void)
> > +static inline void metag_fence(void)
> >  {
> >  	volatile int *flushptr = (volatile int *) LINSYSEVENT_WR_ATOMIC_UNLOCK;
> >  	barrier();
> >  	*flushptr = 0;
> >  	barrier();
> >  }
> > -#define smp_mb()        fence()
> > -#define smp_rmb()       fence()
> > -#define smp_wmb()       barrier()
> > +#define __smp_mb()        metag_fence()
> > +#define __smp_rmb()       metag_fence()
> > +#define __smp_wmb()       barrier()
> >  #else
> > -#define fence()		do { } while (0)
> > -#define smp_mb()        barrier()
> > -#define smp_rmb()       barrier()
> > -#define smp_wmb()       barrier()
> > +#define metag_fence()		do { } while (0)
> > +#define __smp_mb()        barrier()
> > +#define __smp_rmb()       barrier()
> > +#define __smp_wmb()       barrier()
> 
> Whitespace is now messed up. Admittedly it's already inconsistent
> tabs/spaces, but it'd be nice if the definitions at least still all
> lined up. You're touching all the definitions which use spaces anyway,
> so feel free to convert them to tabs while you're at it.
> 
> Other than those niggles, it looks sensible to me:
> Acked-by: James Hogan <james.hogan@imgtec.com>
> 
> Cheers
> James

Thanks!

I did this in my tree (replaced spaces with tabs in the new
definitions); not reposting just because of this change.

> >  #endif
> > +
> > +#ifdef CONFIG_SMP
> > +#define fence() metag_fence()
> > +#else
> > +#define fence()		do { } while (0)
> >  #endif
> >  
> > -#define smp_mb__before_atomic()	barrier()
> > -#define smp_mb__after_atomic()	barrier()
> > +#define __smp_mb__before_atomic()	barrier()
> > +#define __smp_mb__after_atomic()	barrier()
> >  
> >  #include <asm-generic/barrier.h>
> >  
> > -- 
> > MST
> > 



^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 20/32] metag: define __smp_xxx
@ 2016-01-11 11:10           ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-11 11:10 UTC (permalink / raw)
  To: James Hogan
  Cc: linux-kernel, Peter Zijlstra, Arnd Bergmann, linux-arch,
	Andrew Cooper, virtualization, Stefano Stabellini,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, David Miller,
	linux-ia64, linuxppc-dev, linux-s390, sparclinux,
	linux-arm-kernel, linux-metag, linux-mips, x86,
	user-mode-linux-devel, adi-buildroot-devel, linux-sh,
	linux-xtensa, xen-devel, Ingo Molnar, Davidlohr Bueso,
	Andrey Konovalov

On Tue, Jan 05, 2016 at 12:09:30AM +0000, James Hogan wrote:
> Hi Michael,
> 
> On Thu, Dec 31, 2015 at 09:08:22PM +0200, Michael S. Tsirkin wrote:
> > This defines __smp_xxx barriers for metag,
> > for use by virtualization.
> > 
> > smp_xxx barriers are removed as they are
> > defined correctly by asm-generic/barriers.h
> > 
> > Note: as __smp_XX macros should not depend on CONFIG_SMP, they can not
> > use the existing fence() macro since that is defined differently between
> > SMP and !SMP.  For this reason, this patch introduces a wrapper
> > metag_fence() that doesn't depend on CONFIG_SMP.
> > fence() is then defined using that, depending on CONFIG_SMP.
> 
> I'm not a fan of the inconsistent commit message wrapping. I wrap to 72
> columns (although I now notice SubmittingPatches says to use 75...).
> 
> > 
> > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > ---
> >  arch/metag/include/asm/barrier.h | 32 +++++++++++++++-----------------
> >  1 file changed, 15 insertions(+), 17 deletions(-)
> > 
> > diff --git a/arch/metag/include/asm/barrier.h b/arch/metag/include/asm/barrier.h
> > index b5b778b..84880c9 100644
> > --- a/arch/metag/include/asm/barrier.h
> > +++ b/arch/metag/include/asm/barrier.h
> > @@ -44,13 +44,6 @@ static inline void wr_fence(void)
> >  #define rmb()		barrier()
> >  #define wmb()		mb()
> >  
> > -#ifndef CONFIG_SMP
> > -#define fence()		do { } while (0)
> > -#define smp_mb()        barrier()
> > -#define smp_rmb()       barrier()
> > -#define smp_wmb()       barrier()
> > -#else
> 
> !SMP kernel text differs, but only because of new presence of unused
> metag_fence() inline function. If I #if 0 that out, then it matches, so
> thats fine.
> 
> > -
> >  #ifdef CONFIG_METAG_SMP_WRITE_REORDERING
> >  /*
> >   * Write to the atomic memory unlock system event register (command 0). This is
> > @@ -60,26 +53,31 @@ static inline void wr_fence(void)
> >   * incoherence). It is therefore ineffective if used after and on the same
> >   * thread as a write.
> >   */
> > -static inline void fence(void)
> > +static inline void metag_fence(void)
> >  {
> >  	volatile int *flushptr = (volatile int *) LINSYSEVENT_WR_ATOMIC_UNLOCK;
> >  	barrier();
> >  	*flushptr = 0;
> >  	barrier();
> >  }
> > -#define smp_mb()        fence()
> > -#define smp_rmb()       fence()
> > -#define smp_wmb()       barrier()
> > +#define __smp_mb()        metag_fence()
> > +#define __smp_rmb()       metag_fence()
> > +#define __smp_wmb()       barrier()
> >  #else
> > -#define fence()		do { } while (0)
> > -#define smp_mb()        barrier()
> > -#define smp_rmb()       barrier()
> > -#define smp_wmb()       barrier()
> > +#define metag_fence()		do { } while (0)
> > +#define __smp_mb()        barrier()
> > +#define __smp_rmb()       barrier()
> > +#define __smp_wmb()       barrier()
> 
> Whitespace is now messed up. Admitedly its already inconsistent
> tabs/spaces, but it'd be nice if the definitions at least still all
> lined up. You're touching all the definitions which use spaces anyway,
> so feel free to convert them to tabs while you're at it.
> 
> Other than those niggles, it looks sensible to me:
> Acked-by: James Hogan <james.hogan@imgtec.com>
> 
> Cheers
> James

Thanks!

I did this in my tree (replaced spaces with tabs in the new
definitions); not reposting just because of this change.

> >  #endif
> > +
> > +#ifdef CONFIG_SMP
> > +#define fence() metag_fence()
> > +#else
> > +#define fence()		do { } while (0)
> >  #endif
> >  
> > -#define smp_mb__before_atomic()	barrier()
> > -#define smp_mb__after_atomic()	barrier()
> > +#define __smp_mb__before_atomic()	barrier()
> > +#define __smp_mb__after_atomic()	barrier()
> >  
> >  #include <asm-generic/barrier.h>
> >  
> > -- 
> > MST
> > 

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 20/32] metag: define __smp_xxx
@ 2016-01-11 11:10           ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-11 11:10 UTC (permalink / raw)
  To: James Hogan
  Cc: linux-kernel-u79uwXL29TY76Z2rM5mHXA, Peter Zijlstra,
	Arnd Bergmann, linux-arch-u79uwXL29TY76Z2rM5mHXA, Andrew Cooper,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	Stefano Stabellini, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	David Miller, linux-ia64-u79uwXL29TY76Z2rM5mHXA,
	linuxppc-dev-uLR06cmDAlY/bJ5BZ2RsiQ,
	linux-s390-u79uwXL29TY76Z2rM5mHXA,
	sparclinux-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	linux-metag-u79uwXL29TY76Z2rM5mHXA,
	linux-mips-6z/3iImG2C8G8FEW9MqTrA, x86-DgEjT+Ai2ygdnm+yROfE0A,
	user-mode-linux-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
	adi-buildroot-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
	linux-sh-u79uwXL29TY76Z2rM5mHXA,
	linux-xtensa-PjhNF2WwrV/0Sa2dR60CXw,
	xen-devel-GuqFBffKawtpuQazS67q72D2FQJk+8+b, Ingo Molnar,
	Davidlohr Bueso

On Tue, Jan 05, 2016 at 12:09:30AM +0000, James Hogan wrote:
> Hi Michael,
> 
> On Thu, Dec 31, 2015 at 09:08:22PM +0200, Michael S. Tsirkin wrote:
> > This defines __smp_xxx barriers for metag,
> > for use by virtualization.
> > 
> > smp_xxx barriers are removed as they are
> > defined correctly by asm-generic/barriers.h
> > 
> > Note: as __smp_XX macros should not depend on CONFIG_SMP, they can not
> > use the existing fence() macro since that is defined differently between
> > SMP and !SMP.  For this reason, this patch introduces a wrapper
> > metag_fence() that doesn't depend on CONFIG_SMP.
> > fence() is then defined using that, depending on CONFIG_SMP.
> 
> I'm not a fan of the inconsistent commit message wrapping. I wrap to 72
> columns (although I now notice SubmittingPatches says to use 75...).
> 
> > 
> > Signed-off-by: Michael S. Tsirkin <mst-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> > Acked-by: Arnd Bergmann <arnd-r2nGTMty4D4@public.gmane.org>
> > ---
> >  arch/metag/include/asm/barrier.h | 32 +++++++++++++++-----------------
> >  1 file changed, 15 insertions(+), 17 deletions(-)
> > 
> > diff --git a/arch/metag/include/asm/barrier.h b/arch/metag/include/asm/barrier.h
> > index b5b778b..84880c9 100644
> > --- a/arch/metag/include/asm/barrier.h
> > +++ b/arch/metag/include/asm/barrier.h
> > @@ -44,13 +44,6 @@ static inline void wr_fence(void)
> >  #define rmb()		barrier()
> >  #define wmb()		mb()
> >  
> > -#ifndef CONFIG_SMP
> > -#define fence()		do { } while (0)
> > -#define smp_mb()        barrier()
> > -#define smp_rmb()       barrier()
> > -#define smp_wmb()       barrier()
> > -#else
> 
> !SMP kernel text differs, but only because of new presence of unused
> metag_fence() inline function. If I #if 0 that out, then it matches, so
> thats fine.
> 
> > -
> >  #ifdef CONFIG_METAG_SMP_WRITE_REORDERING
> >  /*
> >   * Write to the atomic memory unlock system event register (command 0). This is
> > @@ -60,26 +53,31 @@ static inline void wr_fence(void)
> >   * incoherence). It is therefore ineffective if used after and on the same
> >   * thread as a write.
> >   */
> > -static inline void fence(void)
> > +static inline void metag_fence(void)
> >  {
> >  	volatile int *flushptr = (volatile int *) LINSYSEVENT_WR_ATOMIC_UNLOCK;
> >  	barrier();
> >  	*flushptr = 0;
> >  	barrier();
> >  }
> > -#define smp_mb()        fence()
> > -#define smp_rmb()       fence()
> > -#define smp_wmb()       barrier()
> > +#define __smp_mb()        metag_fence()
> > +#define __smp_rmb()       metag_fence()
> > +#define __smp_wmb()       barrier()
> >  #else
> > -#define fence()		do { } while (0)
> > -#define smp_mb()        barrier()
> > -#define smp_rmb()       barrier()
> > -#define smp_wmb()       barrier()
> > +#define metag_fence()		do { } while (0)
> > +#define __smp_mb()        barrier()
> > +#define __smp_rmb()       barrier()
> > +#define __smp_wmb()       barrier()
> 
> Whitespace is now messed up. Admitedly its already inconsistent
> tabs/spaces, but it'd be nice if the definitions at least still all
> lined up. You're touching all the definitions which use spaces anyway,
> so feel free to convert them to tabs while you're at it.
> 
> Other than those niggles, it looks sensible to me:
> Acked-by: James Hogan <james.hogan-1AXoQHu6uovQT0dZR+AlfA@public.gmane.org>
> 
> Cheers
> James

Thanks!

I did this in my tree (replaced spaces with tabs in the new
definitions); not reposting just because of this change.

> >  #endif
> > +
> > +#ifdef CONFIG_SMP
> > +#define fence() metag_fence()
> > +#else
> > +#define fence()		do { } while (0)
> >  #endif
> >  
> > -#define smp_mb__before_atomic()	barrier()
> > -#define smp_mb__after_atomic()	barrier()
> > +#define __smp_mb__before_atomic()	barrier()
> > +#define __smp_mb__after_atomic()	barrier()
> >  
> >  #include <asm-generic/barrier.h>
> >  
> > -- 
> > MST
> > 

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 20/32] metag: define __smp_xxx
  2016-01-05  0:09       ` James Hogan
                         ` (5 preceding siblings ...)
  (?)
@ 2016-01-11 11:10       ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-11 11:10 UTC (permalink / raw)
  To: James Hogan
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
	H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
	Arnd Bergmann, Davidlohr Bueso, x86, xen-devel, Ingo Molnar,
	linux-xtensa, user-mode-linux-devel, Stefano Stabellini,
	Andrey Konovalov, adi-buildroot-devel, Thomas Gleixner,
	linux-metag, linux-arm-kernel, Andrew Cooper, linux-kernel,
	linuxppc-dev

On Tue, Jan 05, 2016 at 12:09:30AM +0000, James Hogan wrote:
> Hi Michael,
> 
> On Thu, Dec 31, 2015 at 09:08:22PM +0200, Michael S. Tsirkin wrote:
> > This defines __smp_xxx barriers for metag,
> > for use by virtualization.
> > 
> > smp_xxx barriers are removed as they are
> > defined correctly by asm-generic/barriers.h
> > 
> > Note: as __smp_XX macros should not depend on CONFIG_SMP, they can not
> > use the existing fence() macro since that is defined differently between
> > SMP and !SMP.  For this reason, this patch introduces a wrapper
> > metag_fence() that doesn't depend on CONFIG_SMP.
> > fence() is then defined using that, depending on CONFIG_SMP.
> 
> I'm not a fan of the inconsistent commit message wrapping. I wrap to 72
> columns (although I now notice SubmittingPatches says to use 75...).
> 
> > 
> > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > ---
> >  arch/metag/include/asm/barrier.h | 32 +++++++++++++++-----------------
> >  1 file changed, 15 insertions(+), 17 deletions(-)
> > 
> > diff --git a/arch/metag/include/asm/barrier.h b/arch/metag/include/asm/barrier.h
> > index b5b778b..84880c9 100644
> > --- a/arch/metag/include/asm/barrier.h
> > +++ b/arch/metag/include/asm/barrier.h
> > @@ -44,13 +44,6 @@ static inline void wr_fence(void)
> >  #define rmb()		barrier()
> >  #define wmb()		mb()
> >  
> > -#ifndef CONFIG_SMP
> > -#define fence()		do { } while (0)
> > -#define smp_mb()        barrier()
> > -#define smp_rmb()       barrier()
> > -#define smp_wmb()       barrier()
> > -#else
> 
> !SMP kernel text differs, but only because of new presence of unused
> metag_fence() inline function. If I #if 0 that out, then it matches, so
> thats fine.
> 
> > -
> >  #ifdef CONFIG_METAG_SMP_WRITE_REORDERING
> >  /*
> >   * Write to the atomic memory unlock system event register (command 0). This is
> > @@ -60,26 +53,31 @@ static inline void wr_fence(void)
> >   * incoherence). It is therefore ineffective if used after and on the same
> >   * thread as a write.
> >   */
> > -static inline void fence(void)
> > +static inline void metag_fence(void)
> >  {
> >  	volatile int *flushptr = (volatile int *) LINSYSEVENT_WR_ATOMIC_UNLOCK;
> >  	barrier();
> >  	*flushptr = 0;
> >  	barrier();
> >  }
> > -#define smp_mb()        fence()
> > -#define smp_rmb()       fence()
> > -#define smp_wmb()       barrier()
> > +#define __smp_mb()        metag_fence()
> > +#define __smp_rmb()       metag_fence()
> > +#define __smp_wmb()       barrier()
> >  #else
> > -#define fence()		do { } while (0)
> > -#define smp_mb()        barrier()
> > -#define smp_rmb()       barrier()
> > -#define smp_wmb()       barrier()
> > +#define metag_fence()		do { } while (0)
> > +#define __smp_mb()        barrier()
> > +#define __smp_rmb()       barrier()
> > +#define __smp_wmb()       barrier()
> 
> Whitespace is now messed up. Admitedly its already inconsistent
> tabs/spaces, but it'd be nice if the definitions at least still all
> lined up. You're touching all the definitions which use spaces anyway,
> so feel free to convert them to tabs while you're at it.
> 
> Other than those niggles, it looks sensible to me:
> Acked-by: James Hogan <james.hogan@imgtec.com>
> 
> Cheers
> James

Thanks!

I did this in my tree (replaced spaces with tabs in the new
definitions); not reposting just because of this change.

> >  #endif
> > +
> > +#ifdef CONFIG_SMP
> > +#define fence() metag_fence()
> > +#else
> > +#define fence()		do { } while (0)
> >  #endif
> >  
> > -#define smp_mb__before_atomic()	barrier()
> > -#define smp_mb__after_atomic()	barrier()
> > +#define __smp_mb__before_atomic()	barrier()
> > +#define __smp_mb__after_atomic()	barrier()
> >  
> >  #include <asm-generic/barrier.h>
> >  
> > -- 
> > MST
> > 

^ permalink raw reply	[flat|nested] 572+ messages in thread

* [PATCH v2 20/32] metag: define __smp_xxx
@ 2016-01-11 11:10           ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-11 11:10 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Jan 05, 2016 at 12:09:30AM +0000, James Hogan wrote:
> Hi Michael,
> 
> On Thu, Dec 31, 2015 at 09:08:22PM +0200, Michael S. Tsirkin wrote:
> > This defines __smp_xxx barriers for metag,
> > for use by virtualization.
> > 
> > smp_xxx barriers are removed as they are
> > defined correctly by asm-generic/barriers.h
> > 
> > Note: as __smp_XX macros should not depend on CONFIG_SMP, they can not
> > use the existing fence() macro since that is defined differently between
> > SMP and !SMP.  For this reason, this patch introduces a wrapper
> > metag_fence() that doesn't depend on CONFIG_SMP.
> > fence() is then defined using that, depending on CONFIG_SMP.
> 
> I'm not a fan of the inconsistent commit message wrapping. I wrap to 72
> columns (although I now notice SubmittingPatches says to use 75...).
> 
> > 
> > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > ---
> >  arch/metag/include/asm/barrier.h | 32 +++++++++++++++-----------------
> >  1 file changed, 15 insertions(+), 17 deletions(-)
> > 
> > diff --git a/arch/metag/include/asm/barrier.h b/arch/metag/include/asm/barrier.h
> > index b5b778b..84880c9 100644
> > --- a/arch/metag/include/asm/barrier.h
> > +++ b/arch/metag/include/asm/barrier.h
> > @@ -44,13 +44,6 @@ static inline void wr_fence(void)
> >  #define rmb()		barrier()
> >  #define wmb()		mb()
> >  
> > -#ifndef CONFIG_SMP
> > -#define fence()		do { } while (0)
> > -#define smp_mb()        barrier()
> > -#define smp_rmb()       barrier()
> > -#define smp_wmb()       barrier()
> > -#else
> 
> !SMP kernel text differs, but only because of new presence of unused
> metag_fence() inline function. If I #if 0 that out, then it matches, so
> thats fine.
> 
> > -
> >  #ifdef CONFIG_METAG_SMP_WRITE_REORDERING
> >  /*
> >   * Write to the atomic memory unlock system event register (command 0). This is
> > @@ -60,26 +53,31 @@ static inline void wr_fence(void)
> >   * incoherence). It is therefore ineffective if used after and on the same
> >   * thread as a write.
> >   */
> > -static inline void fence(void)
> > +static inline void metag_fence(void)
> >  {
> >  	volatile int *flushptr = (volatile int *) LINSYSEVENT_WR_ATOMIC_UNLOCK;
> >  	barrier();
> >  	*flushptr = 0;
> >  	barrier();
> >  }
> > -#define smp_mb()        fence()
> > -#define smp_rmb()       fence()
> > -#define smp_wmb()       barrier()
> > +#define __smp_mb()        metag_fence()
> > +#define __smp_rmb()       metag_fence()
> > +#define __smp_wmb()       barrier()
> >  #else
> > -#define fence()		do { } while (0)
> > -#define smp_mb()        barrier()
> > -#define smp_rmb()       barrier()
> > -#define smp_wmb()       barrier()
> > +#define metag_fence()		do { } while (0)
> > +#define __smp_mb()        barrier()
> > +#define __smp_rmb()       barrier()
> > +#define __smp_wmb()       barrier()
> 
> Whitespace is now messed up. Admitedly its already inconsistent
> tabs/spaces, but it'd be nice if the definitions at least still all
> lined up. You're touching all the definitions which use spaces anyway,
> so feel free to convert them to tabs while you're at it.
> 
> Other than those niggles, it looks sensible to me:
> Acked-by: James Hogan <james.hogan@imgtec.com>
> 
> Cheers
> James

Thanks!

I did this in my tree (replaced spaces with tabs in the new
definitions); not reposting just because of this change.

> >  #endif
> > +
> > +#ifdef CONFIG_SMP
> > +#define fence() metag_fence()
> > +#else
> > +#define fence()		do { } while (0)
> >  #endif
> >  
> > -#define smp_mb__before_atomic()	barrier()
> > -#define smp_mb__after_atomic()	barrier()
> > +#define __smp_mb__before_atomic()	barrier()
> > +#define __smp_mb__after_atomic()	barrier()
> >  
> >  #include <asm-generic/barrier.h>
> >  
> > -- 
> > MST
> > 

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 20/32] metag: define __smp_xxx
  2016-01-05  0:09       ` James Hogan
                         ` (6 preceding siblings ...)
  (?)
@ 2016-01-11 11:10       ` Michael S. Tsirkin
  -1 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-11 11:10 UTC (permalink / raw)
  To: James Hogan
  Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
	H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
	Arnd Bergmann, Davidlohr Bueso, x86, xen-devel, Ingo Molnar,
	linux-xtensa, user-mode-linux-devel, Stefano Stabellini,
	Andrey Konovalov, adi-buildroot-devel, Thomas Gleixner,
	linux-metag, linux-arm-kernel, Andrew Cooper, linux-kernel,
	linuxppc-dev

On Tue, Jan 05, 2016 at 12:09:30AM +0000, James Hogan wrote:
> Hi Michael,
> 
> On Thu, Dec 31, 2015 at 09:08:22PM +0200, Michael S. Tsirkin wrote:
> > This defines __smp_xxx barriers for metag,
> > for use by virtualization.
> > 
> > smp_xxx barriers are removed as they are
> > defined correctly by asm-generic/barriers.h
> > 
> > Note: as __smp_XX macros should not depend on CONFIG_SMP, they can not
> > use the existing fence() macro since that is defined differently between
> > SMP and !SMP.  For this reason, this patch introduces a wrapper
> > metag_fence() that doesn't depend on CONFIG_SMP.
> > fence() is then defined using that, depending on CONFIG_SMP.
> 
> I'm not a fan of the inconsistent commit message wrapping. I wrap to 72
> columns (although I now notice SubmittingPatches says to use 75...).
> 
> > 
> > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > ---
> >  arch/metag/include/asm/barrier.h | 32 +++++++++++++++-----------------
> >  1 file changed, 15 insertions(+), 17 deletions(-)
> > 
> > diff --git a/arch/metag/include/asm/barrier.h b/arch/metag/include/asm/barrier.h
> > index b5b778b..84880c9 100644
> > --- a/arch/metag/include/asm/barrier.h
> > +++ b/arch/metag/include/asm/barrier.h
> > @@ -44,13 +44,6 @@ static inline void wr_fence(void)
> >  #define rmb()		barrier()
> >  #define wmb()		mb()
> >  
> > -#ifndef CONFIG_SMP
> > -#define fence()		do { } while (0)
> > -#define smp_mb()        barrier()
> > -#define smp_rmb()       barrier()
> > -#define smp_wmb()       barrier()
> > -#else
> 
> !SMP kernel text differs, but only because of new presence of unused
> metag_fence() inline function. If I #if 0 that out, then it matches, so
> thats fine.
> 
> > -
> >  #ifdef CONFIG_METAG_SMP_WRITE_REORDERING
> >  /*
> >   * Write to the atomic memory unlock system event register (command 0). This is
> > @@ -60,26 +53,31 @@ static inline void wr_fence(void)
> >   * incoherence). It is therefore ineffective if used after and on the same
> >   * thread as a write.
> >   */
> > -static inline void fence(void)
> > +static inline void metag_fence(void)
> >  {
> >  	volatile int *flushptr = (volatile int *) LINSYSEVENT_WR_ATOMIC_UNLOCK;
> >  	barrier();
> >  	*flushptr = 0;
> >  	barrier();
> >  }
> > -#define smp_mb()        fence()
> > -#define smp_rmb()       fence()
> > -#define smp_wmb()       barrier()
> > +#define __smp_mb()        metag_fence()
> > +#define __smp_rmb()       metag_fence()
> > +#define __smp_wmb()       barrier()
> >  #else
> > -#define fence()		do { } while (0)
> > -#define smp_mb()        barrier()
> > -#define smp_rmb()       barrier()
> > -#define smp_wmb()       barrier()
> > +#define metag_fence()		do { } while (0)
> > +#define __smp_mb()        barrier()
> > +#define __smp_rmb()       barrier()
> > +#define __smp_wmb()       barrier()
> 
> Whitespace is now messed up. Admitedly its already inconsistent
> tabs/spaces, but it'd be nice if the definitions at least still all
> lined up. You're touching all the definitions which use spaces anyway,
> so feel free to convert them to tabs while you're at it.
> 
> Other than those niggles, it looks sensible to me:
> Acked-by: James Hogan <james.hogan@imgtec.com>
> 
> Cheers
> James

Thanks!

I did this in my tree (replaced spaces with tabs in the new
definitions); not reposting just because of this change.

> >  #endif
> > +
> > +#ifdef CONFIG_SMP
> > +#define fence() metag_fence()
> > +#else
> > +#define fence()		do { } while (0)
> >  #endif
> >  
> > -#define smp_mb__before_atomic()	barrier()
> > -#define smp_mb__after_atomic()	barrier()
> > +#define __smp_mb__before_atomic()	barrier()
> > +#define __smp_mb__after_atomic()	barrier()
> >  
> >  #include <asm-generic/barrier.h>
> >  
> > -- 
> > MST
> > 

^ permalink raw reply	[flat|nested] 572+ messages in thread

* Re: [PATCH v2 20/32] metag: define __smp_xxx
@ 2016-01-11 11:10           ` Michael S. Tsirkin
  0 siblings, 0 replies; 572+ messages in thread
From: Michael S. Tsirkin @ 2016-01-11 11:10 UTC (permalink / raw)
  To: James Hogan
  Cc: linux-kernel-u79uwXL29TY76Z2rM5mHXA, Peter Zijlstra,
	Arnd Bergmann, linux-arch-u79uwXL29TY76Z2rM5mHXA, Andrew Cooper,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	Stefano Stabellini, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	David Miller, linux-ia64-u79uwXL29TY76Z2rM5mHXA,
	linuxppc-dev-uLR06cmDAlY/bJ5BZ2RsiQ,
	linux-s390-u79uwXL29TY76Z2rM5mHXA,
	sparclinux-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	linux-metag-u79uwXL29TY76Z2rM5mHXA,
	linux-mips-6z/3iImG2C8G8FEW9MqTrA, x86-DgEjT+Ai2ygdnm+yROfE0A,
	user-mode-linux-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
	adi-buildroot-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
	linux-sh-u79uwXL29TY76Z2rM5mHXA,
	linux-xtensa-PjhNF2WwrV/0Sa2dR60CXw,
	xen-devel-GuqFBffKawtpuQazS67q72D2FQJk+8+b, Ingo Molnar,
	Davidlohr Bueso, Andre

On Tue, Jan 05, 2016 at 12:09:30AM +0000, James Hogan wrote:
> Hi Michael,
> 
> On Thu, Dec 31, 2015 at 09:08:22PM +0200, Michael S. Tsirkin wrote:
> > This defines __smp_xxx barriers for metag,
> > for use by virtualization.
> > 
> > smp_xxx barriers are removed as they are
> > defined correctly by asm-generic/barriers.h
> > 
> > Note: as __smp_XX macros should not depend on CONFIG_SMP, they can not
> > use the existing fence() macro since that is defined differently between
> > SMP and !SMP.  For this reason, this patch introduces a wrapper
> > metag_fence() that doesn't depend on CONFIG_SMP.
> > fence() is then defined using that, depending on CONFIG_SMP.
> 
> I'm not a fan of the inconsistent commit message wrapping. I wrap to 72
> columns (although I now notice SubmittingPatches says to use 75...).
> 
> > 
> > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > Acked-by: Arnd Bergmann <arnd@arndb.de>
> > ---
> >  arch/metag/include/asm/barrier.h | 32 +++++++++++++++-----------------
> >  1 file changed, 15 insertions(+), 17 deletions(-)
> > 
> > diff --git a/arch/metag/include/asm/barrier.h b/arch/metag/include/asm/barrier.h
> > index b5b778b..84880c9 100644
> > --- a/arch/metag/include/asm/barrier.h
> > +++ b/arch/metag/include/asm/barrier.h
> > @@ -44,13 +44,6 @@ static inline void wr_fence(void)
> >  #define rmb()		barrier()
> >  #define wmb()		mb()
> >  
> > -#ifndef CONFIG_SMP
> > -#define fence()		do { } while (0)
> > -#define smp_mb()        barrier()
> > -#define smp_rmb()       barrier()
> > -#define smp_wmb()       barrier()
> > -#else
> 
> !SMP kernel text differs, but only because of new presence of unused
> metag_fence() inline function. If I #if 0 that out, then it matches, so
> thats fine.
> 
> > -
> >  #ifdef CONFIG_METAG_SMP_WRITE_REORDERING
> >  /*
> >   * Write to the atomic memory unlock system event register (command 0). This is
> > @@ -60,26 +53,31 @@ static inline void wr_fence(void)
> >   * incoherence). It is therefore ineffective if used after and on the same
> >   * thread as a write.
> >   */
> > -static inline void fence(void)
> > +static inline void metag_fence(void)
> >  {
> >  	volatile int *flushptr = (volatile int *) LINSYSEVENT_WR_ATOMIC_UNLOCK;
> >  	barrier();
> >  	*flushptr = 0;
> >  	barrier();
> >  }
> > -#define smp_mb()        fence()
> > -#define smp_rmb()       fence()
> > -#define smp_wmb()       barrier()
> > +#define __smp_mb()        metag_fence()
> > +#define __smp_rmb()       metag_fence()
> > +#define __smp_wmb()       barrier()
> >  #else
> > -#define fence()		do { } while (0)
> > -#define smp_mb()        barrier()
> > -#define smp_rmb()       barrier()
> > -#define smp_wmb()       barrier()
> > +#define metag_fence()		do { } while (0)
> > +#define __smp_mb()        barrier()
> > +#define __smp_rmb()       barrier()
> > +#define __smp_wmb()       barrier()
> 
> Whitespace is now messed up. Admitedly its already inconsistent
> tabs/spaces, but it'd be nice if the definitions at least still all
> lined up. You're touching all the definitions which use spaces anyway,
> so feel free to convert them to tabs while you're at it.
> 
> Other than those niggles, it looks sensible to me:
> Acked-by: James Hogan <james.hogan@imgtec.com>
> 
> Cheers
> James

Thanks!

I did this in my tree (replaced spaces with tabs in the new
definitions); not reposting just because of this change.

> >  #endif
> > +
> > +#ifdef CONFIG_SMP
> > +#define fence() metag_fence()
> > +#else
> > +#define fence()		do { } while (0)
> >  #endif
> >  
> > -#define smp_mb__before_atomic()	barrier()
> > -#define smp_mb__after_atomic()	barrier()
> > +#define __smp_mb__before_atomic()	barrier()
> > +#define __smp_mb__after_atomic()	barrier()
> >  
> >  #include <asm-generic/barrier.h>
> >  
> > -- 
> > MST
> > 



^ permalink raw reply	[flat|nested] 572+ messages in thread


end of thread, other threads:[~2016-01-11 11:11 UTC | newest]

Thread overview: 572+ messages
2015-12-31 19:05 [PATCH v2 00/34] arch: barrier cleanup + barriers for virt Michael S. Tsirkin
2015-12-31 19:05 ` [PATCH v2 01/32] lcoking/barriers, arch: Use smp barriers in smp_store_release() Michael S. Tsirkin
2015-12-31 19:05 ` [PATCH v2 02/32] asm-generic: guard smp_store_release/load_acquire Michael S. Tsirkin
2015-12-31 19:06 ` [PATCH v2 03/32] ia64: rename nop->iosapic_nop Michael S. Tsirkin
2015-12-31 19:06 ` [PATCH v2 04/32] ia64: reuse asm-generic/barrier.h Michael S. Tsirkin
2015-12-31 19:06 ` [PATCH v2 05/32] powerpc: " Michael S. Tsirkin
2015-12-31 19:06 ` [PATCH v2 06/32] s390: " Michael S. Tsirkin
2016-01-04 13:20   ` Peter Zijlstra
2016-01-04 15:03     ` Martin Schwidefsky
2016-01-04 20:42       ` Michael S. Tsirkin
2016-01-05  8:03         ` Martin Schwidefsky
2016-01-04 20:34     ` Michael S. Tsirkin
2015-12-31 19:06 ` [PATCH v2 07/32] sparc: " Michael S. Tsirkin
2015-12-31 19:43   ` David Miller
2015-12-31 19:06 ` [PATCH v2 08/32] arm: " Michael S. Tsirkin
2016-01-02 11:20   ` Russell King - ARM Linux
2015-12-31 19:06 ` [PATCH v2 09/32] arm64: " Michael S. Tsirkin
2015-12-31 19:07 ` [PATCH v2 10/32] metag: " Michael S. Tsirkin
2016-01-04 23:24   ` James Hogan
2015-12-31 19:07 ` [PATCH v2 11/32] mips: " Michael S. Tsirkin
2016-01-04 13:26   ` Peter Zijlstra
2015-12-31 19:07 ` [PATCH v2 12/32] x86/um: " Michael S. Tsirkin
2016-01-05 23:12   ` Richard Weinberger
2015-12-31 19:07 ` [PATCH v2 13/32] x86: " Michael S. Tsirkin
2015-12-31 19:07 ` [PATCH v2 14/32] asm-generic: add __smp_xxx wrappers Michael S. Tsirkin
2015-12-31 19:07 ` [PATCH v2 15/32] powerpc: define __smp_xxx Michael S. Tsirkin
2016-01-05  1:36   ` Boqun Feng
2016-01-05  8:51     ` Michael S. Tsirkin
2016-01-05  9:53       ` Boqun Feng
2016-01-05 16:16         ` Michael S. Tsirkin
2016-01-06  1:51           ` Boqun Feng
2016-01-06 20:23             ` Michael S. Tsirkin
2016-01-07  0:43               ` Boqun Feng
2015-12-31 19:07 ` [PATCH v2 16/32] arm64: " Michael S. Tsirkin
2015-12-31 19:07 ` [PATCH v2 17/32] arm: " Michael S. Tsirkin
2016-01-02 11:24   ` Russell King - ARM Linux
2016-01-03  9:12     ` Michael S. Tsirkin
2016-01-04 13:36       ` Peter Zijlstra
2016-01-04 13:54         ` Peter Zijlstra
2016-01-04 13:59           ` Russell King - ARM Linux
2016-01-05 14:38             ` Michael S. Tsirkin
2016-01-04 20:39           ` Michael S. Tsirkin
2016-01-04 20:12         ` Michael S. Tsirkin
2015-12-31 19:08 ` [PATCH v2 18/32] blackfin: " Michael S. Tsirkin
2015-12-31 19:08 ` [PATCH v2 19/32] ia64: " Michael S. Tsirkin
2015-12-31 19:08 ` [PATCH v2 20/32] metag: " Michael S. Tsirkin
2016-01-04 13:41   ` Peter Zijlstra
2016-01-04 15:25     ` James Hogan
2016-01-04 15:30       ` Peter Zijlstra
2016-01-04 16:04         ` James Hogan
2016-01-05  0:09   ` James Hogan
2016-01-11 11:10     ` Michael S. Tsirkin
2015-12-31 19:08 ` [PATCH v2 21/32] mips: " Michael S. Tsirkin
2015-12-31 19:08 ` [PATCH v2 22/32] s390: " Michael S. Tsirkin
2016-01-04 13:45   ` Peter Zijlstra
2016-01-04 20:18     ` Michael S. Tsirkin
2016-01-05  8:13       ` Martin Schwidefsky
2016-01-05  9:30         ` Michael S. Tsirkin
2016-01-05 12:08           ` Martin Schwidefsky
2016-01-05 13:04             ` Michael S. Tsirkin
2016-01-05 14:21               ` Martin Schwidefsky
2016-01-05 15:39           ` Christian Borntraeger
2016-01-05 16:04             ` Michael S. Tsirkin
2015-12-31 19:08 ` [PATCH v2 23/32] sh: define __smp_xxx, fix smp_store_mb for !SMP Michael S. Tsirkin
2015-12-31 19:08 ` [PATCH v2 24/32] sparc: define __smp_xxx Michael S. Tsirkin
2015-12-31 19:44   ` David Miller
2015-12-31 19:09 ` [PATCH v2 25/32] tile: " Michael S. Tsirkin
2015-12-31 19:09 ` [PATCH v2 26/32] xtensa: " Michael S. Tsirkin
2015-12-31 19:09 ` [PATCH v2 27/32] x86: " Michael S. Tsirkin
2015-12-31 19:09 ` [PATCH v2 28/32] asm-generic: implement virt_xxx memory barriers Michael S. Tsirkin
2015-12-31 19:09 ` [PATCH v2 29/32] Revert "virtio_ring: Update weak barriers to use dma_wmb/rmb" Michael S. Tsirkin
2015-12-31 19:09 ` [PATCH v2 30/32] virtio_ring: update weak barriers to use __smp_XXX Michael S. Tsirkin
2016-01-01  9:39   ` [PATCH v2 30/32] virtio_ring: update weak barriers to use __smp_xxx Michael S. Tsirkin
2016-01-01 10:21   ` Michael S. Tsirkin
2015-12-31 19:09 ` [PATCH v2 31/32] sh: support a 2-byte smp_store_mb Michael S. Tsirkin
2016-01-04 14:05   ` Peter Zijlstra
2016-01-05 23:27   ` Rich Felker
2016-01-06 11:19     ` Michael S. Tsirkin
2016-01-06 11:40       ` Peter Zijlstra
2016-01-06 11:52         ` Michael S. Tsirkin
2016-01-06 14:32           ` Peter Zijlstra
2016-01-06 15:42             ` Rob Landley
2016-01-06 16:57               ` Peter Zijlstra
2016-01-06 20:21                 ` Rob Landley
2016-01-06 18:57               ` Geert Uytterhoeven
2016-01-06 18:23             ` Rich Felker
2016-01-06 20:23               ` Michael S. Tsirkin
2016-01-06 23:53                 ` Rich Felker
2016-01-07 13:37                   ` Peter Zijlstra
2016-01-07 19:05                     ` Rich Felker
2016-01-07 15:50                   ` Michael S. Tsirkin
2016-01-07 17:48                   ` Michael S. Tsirkin
2016-01-07 19:10                     ` Rich Felker
2016-01-07 22:41                       ` Michael S. Tsirkin
2016-01-08  4:25                         ` Rich Felker
2016-01-08  7:23                           ` Michael S. Tsirkin
2016-01-06 22:14               ` Michael S. Tsirkin
2015-12-31 19:09 ` [PATCH v2 32/32] virtio_ring: use virt_store_mb Michael S. Tsirkin
2016-01-01 17:23   ` Sergei Shtylyov
2016-01-03  9:01     ` Michael S. Tsirkin
2015-12-31 19:10 ` [PATCH v2 33/34] xenbus: use virt_xxx barriers Michael S. Tsirkin
2016-01-04 11:32   ` [Xen-devel] " David Vrabel
2016-01-04 12:03   ` Stefano Stabellini
2016-01-04 14:09   ` Peter Zijlstra
2015-12-31 19:10 ` [PATCH v2 34/34] xen/io: " Michael S. Tsirkin
2016-01-04 11:32   ` [Xen-devel] " David Vrabel
2016-01-04 12:05   ` Stefano Stabellini
