* Xen Security Advisory 314 v3 (CVE-2020-11739) - Missing memory barriers in read-write unlock paths
@ 2020-04-14 12:00 Xen.org security team
From: Xen.org security team @ 2020-04-14 12:00 UTC (permalink / raw)
  To: xen-announce, xen-devel, xen-users, oss-security; +Cc: Xen.org security team

[-- Attachment #1: Type: text/plain, Size: 3808 bytes --]

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2020-11739 / XSA-314
                               version 3

          Missing memory barriers in read-write unlock paths

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

The read-write unlock paths don't contain a memory barrier.  On Arm, this
means a processor is allowed to re-order the memory access with the
preceding ones.

In other words, the unlock may be seen by another processor before all the
memory accesses within the "critical" section.

As a consequence, it may be possible to have a writer executing a critical
section at the same time as readers or another writer. In other words,
many of the assumptions (e.g. a variable cannot be modified after a check)
in the critical sections are not safe anymore.

The read-write locks are used in hypercalls (such as grant-table ones), so
a malicious guest could exploit the race.  For instance, there is a small
window where Xen can leak memory if XENMAPSPACE_grant_table is used
concurrently.

IMPACT
======

A malicious guest may be able to leak memory, or cause a hypervisor crash
resulting in a Denial of Service (DoS). Information leak and privilege
escalation cannot be excluded.

VULNERABLE SYSTEMS
==================

Systems running all versions of Xen are affected.

Whether an individual Arm-based CPU is vulnerable depends on its memory
re-ordering properties.  Consult your CPU vendor.

x86 systems are not vulnerable.

MITIGATION
==========

There is no known mitigation.

CREDITS
=======

This issue was discovered by Julien Grall of Amazon.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa314.patch           xen-unstable
xsa314-4.13.patch      Xen 4.13 - Xen 4.9

$ sha256sum xsa314*
ff6e03780766d0358699ed0c5b0154be9ccbbc80796650f7568c295c5451ba0a  xsa314.meta
7c507e7b46568e94aa9595a549bd3020b16d1eca97b8bfc3bb1f5d96eb338cc1  xsa314.patch
a13e6a9cd531859882d1b0ef38245441d363d1ead1fa2a5ae5da7a0fce27e072  xsa314-4.13.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl6VpdcMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZNSoH/2TB+nP1KWB0LUkP5yD1tlSC6Q58k3ReUw7uVfLh
OOBhyOZz5jQOO9r6HDQtqxZBtihbmCDD9Ckl3V4au81TFxz8My24nMR+X1dqDcPi
0MQ2+Tu3z6S/NMw9DwLsN9b0MtHlmalOBrhbhif3/U0QDgLFhN2H8GtvFQ1imWmm
JHoTdBHDUwxCvThIHZCui/T69U/csdfyV6f/HgMVTzpNIOBkiwUuOVuMEO25KqVk
tO0z0CyK19K86VJu7k4q16RzCllUoe5bSU+7UVYOS1PxZ5XCvKTCYcZDz1HZMW/8
FOA8yNMzHV3b+0WvCnMpq9qHmmJXGx+vRSoeBF7YeU0wUkE=
=oA9H
-----END PGP SIGNATURE-----

[-- Attachment #2: xsa314.meta --]
[-- Type: application/octet-stream, Size: 1565 bytes --]

{
  "XSA": 314,
  "SupportedVersions": [
    "master",
    "4.13",
    "4.12",
    "4.11",
    "4.10",
    "4.9"
  ],
  "Trees": [
    "xen"
  ],
  "Recipes": {
    "4.10": {
      "Recipes": {
        "xen": {
          "StableRef": "49a5d6e92317a7d9acbf0bdbd25b2809dfd84260",
          "Prereqs": [],
          "Patches": [
            "xsa314-4.13.patch"
          ]
        }
      }
    },
    "4.11": {
      "Recipes": {
        "xen": {
          "StableRef": "ddffc4d8a072f146320f4ca58c768c4b563ab571",
          "Prereqs": [],
          "Patches": [
            "xsa314-4.13.patch"
          ]
        }
      }
    },
    "4.12": {
      "Recipes": {
        "xen": {
          "StableRef": "a5fcafbfbee55261853fba07149c1c795f2baf58",
          "Prereqs": [],
          "Patches": [
            "xsa314-4.13.patch"
          ]
        }
      }
    },
    "4.13": {
      "Recipes": {
        "xen": {
          "StableRef": "721f2c323ca55c77857c93e7275b4a93a0e15e1f",
          "Prereqs": [],
          "Patches": [
            "xsa314-4.13.patch"
          ]
        }
      }
    },
    "4.9": {
      "Recipes": {
        "xen": {
          "StableRef": "cf2e9cc0ba0432f05cdca36dcd46be5fdfd7ca0c",
          "Prereqs": [],
          "Patches": [
            "xsa314-4.13.patch"
          ]
        }
      }
    },
    "master": {
      "Recipes": {
        "xen": {
          "StableRef": "0d99c909d7e1cbe69329a00f7772946f10a7865b",
          "Prereqs": [],
          "Patches": [
            "xsa314.patch"
          ]
        }
      }
    }
  }
}

[-- Attachment #3: xsa314.patch --]
[-- Type: application/octet-stream, Size: 3929 bytes --]

From 242955ac56f175322af040df29ab8cfb54a3b4f7 Mon Sep 17 00:00:00 2001
From: Julien Grall <jgrall@amazon.com>
Date: Thu, 20 Feb 2020 20:54:40 +0000
Subject: [PATCH] xen/rwlock: Add missing memory barrier in the unlock path of
 rwlock

The rwlock unlock paths are using atomic_sub() to release the lock.
However the implementation of atomic_sub() rightfully doesn't contain a
memory barrier. On Arm, this means a processor is allowed to re-order
the memory access with the preceding access.

In other words, the unlock may be seen by another processor before all
the memory accesses within the "critical" section.

The rwlock lock paths already contain barriers indirectly, but they are
not very useful without their counterparts in the unlock paths.

The memory barriers are not necessary on x86 because loads/stores are
not re-ordered with lock instructions.

So add arch_lock_release_barrier() in the unlock paths; it will only
add a memory barrier on Arm.

Take the opportunity to document each lock path, explaining why a
barrier is not necessary.

This is XSA-314.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/include/xen/rwlock.h | 29 ++++++++++++++++++++++++++++-
 1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/xen/include/xen/rwlock.h b/xen/include/xen/rwlock.h
index 4d1b48c722..49ea9c010a 100644
--- a/xen/include/xen/rwlock.h
+++ b/xen/include/xen/rwlock.h
@@ -60,6 +60,10 @@ static inline int _read_trylock(rwlock_t *lock)
     if ( likely(_can_read_lock(cnts)) )
     {
         cnts = (u32)atomic_add_return(_QR_BIAS, &lock->cnts);
+        /*
+         * atomic_add_return() is a full barrier so no need for an
+         * arch_lock_acquire_barrier().
+         */
         if ( likely(_can_read_lock(cnts)) )
             return 1;
         atomic_sub(_QR_BIAS, &lock->cnts);
@@ -78,11 +82,19 @@ static inline void _read_lock(rwlock_t *lock)
 
     preempt_disable();
     cnts = atomic_add_return(_QR_BIAS, &lock->cnts);
+    /*
+     * atomic_add_return() is a full barrier so no need for an
+     * arch_lock_acquire_barrier().
+     */
     if ( likely(_can_read_lock(cnts)) )
         return;
 
     /* The slowpath will decrement the reader count, if necessary. */
     queue_read_lock_slowpath(lock);
+    /*
+     * queue_read_lock_slowpath() is using spinlock and therefore is a
+     * full barrier. So no need for an arch_lock_acquire_barrier().
+     */
 }
 
 static inline void _read_lock_irq(rwlock_t *lock)
@@ -106,6 +118,7 @@ static inline unsigned long _read_lock_irqsave(rwlock_t *lock)
  */
 static inline void _read_unlock(rwlock_t *lock)
 {
+    arch_lock_release_barrier();
     /*
      * Atomically decrement the reader count
      */
@@ -141,12 +154,21 @@ static inline unsigned int _write_lock_val(void)
  */
 static inline void _write_lock(rwlock_t *lock)
 {
-    /* Optimize for the unfair lock case where the fair flag is 0. */
     preempt_disable();
+    /*
+     * Optimize for the unfair lock case where the fair flag is 0.
+     *
+     * atomic_cmpxchg() is a full barrier so no need for an
+     * arch_lock_acquire_barrier().
+     */
     if ( atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) == 0 )
         return;
 
     queue_write_lock_slowpath(lock);
+    /*
+     * queue_write_lock_slowpath() is using spinlock and therefore is a
+     * full barrier. So no need for an arch_lock_acquire_barrier().
+     */
 }
 
 static inline void _write_lock_irq(rwlock_t *lock)
@@ -183,12 +205,17 @@ static inline int _write_trylock(rwlock_t *lock)
         return 0;
     }
 
+    /*
+     * atomic_cmpxchg() is a full barrier so no need for an
+     * arch_lock_acquire_barrier().
+     */
     return 1;
 }
 
 static inline void _write_unlock(rwlock_t *lock)
 {
     ASSERT(_is_write_locked_by_me(atomic_read(&lock->cnts)));
+    arch_lock_release_barrier();
     atomic_and(~(_QW_CPUMASK | _QW_WMASK), &lock->cnts);
     preempt_enable();
 }
-- 
2.17.1


[-- Attachment #4: xsa314-4.13.patch --]
[-- Type: application/octet-stream, Size: 4056 bytes --]

From ab49f005f7d01d4004d76f2e295d31aca7d4f93a Mon Sep 17 00:00:00 2001
From: Julien Grall <jgrall@amazon.com>
Date: Thu, 20 Feb 2020 20:54:40 +0000
Subject: [PATCH] xen/rwlock: Add missing memory barrier in the unlock path of
 rwlock

The rwlock unlock paths are using atomic_sub() to release the lock.
However the implementation of atomic_sub() rightfully doesn't contain a
memory barrier. On Arm, this means a processor is allowed to re-order
the memory access with the preceding access.

In other words, the unlock may be seen by another processor before all
the memory accesses within the "critical" section.

The rwlock lock paths already contain barriers indirectly, but they are
not very useful without their counterparts in the unlock paths.

The memory barriers are not necessary on x86 because loads/stores are
not re-ordered with lock instructions.

So add arch_lock_release_barrier() in the unlock paths; it will only
add a memory barrier on Arm.

Take the opportunity to document each lock path, explaining why a
barrier is not necessary.

This is XSA-314.

Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

---
 xen/include/xen/rwlock.h | 29 ++++++++++++++++++++++++++++-
 1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/xen/include/xen/rwlock.h b/xen/include/xen/rwlock.h
index 3dfea1ac2a..516486306f 100644
--- a/xen/include/xen/rwlock.h
+++ b/xen/include/xen/rwlock.h
@@ -48,6 +48,10 @@ static inline int _read_trylock(rwlock_t *lock)
     if ( likely(!(cnts & _QW_WMASK)) )
     {
         cnts = (u32)atomic_add_return(_QR_BIAS, &lock->cnts);
+        /*
+         * atomic_add_return() is a full barrier so no need for an
+         * arch_lock_acquire_barrier().
+         */
         if ( likely(!(cnts & _QW_WMASK)) )
             return 1;
         atomic_sub(_QR_BIAS, &lock->cnts);
@@ -64,11 +68,19 @@ static inline void _read_lock(rwlock_t *lock)
     u32 cnts;
 
     cnts = atomic_add_return(_QR_BIAS, &lock->cnts);
+    /*
+     * atomic_add_return() is a full barrier so no need for an
+     * arch_lock_acquire_barrier().
+     */
     if ( likely(!(cnts & _QW_WMASK)) )
         return;
 
     /* The slowpath will decrement the reader count, if necessary. */
     queue_read_lock_slowpath(lock);
+    /*
+     * queue_read_lock_slowpath() is using spinlock and therefore is a
+     * full barrier. So no need for an arch_lock_acquire_barrier().
+     */
 }
 
 static inline void _read_lock_irq(rwlock_t *lock)
@@ -92,6 +104,7 @@ static inline unsigned long _read_lock_irqsave(rwlock_t *lock)
  */
 static inline void _read_unlock(rwlock_t *lock)
 {
+    arch_lock_release_barrier();
     /*
      * Atomically decrement the reader count
      */
@@ -121,11 +134,20 @@ static inline int _rw_is_locked(rwlock_t *lock)
  */
 static inline void _write_lock(rwlock_t *lock)
 {
-    /* Optimize for the unfair lock case where the fair flag is 0. */
+    /*
+     * Optimize for the unfair lock case where the fair flag is 0.
+     *
+     * atomic_cmpxchg() is a full barrier so no need for an
+     * arch_lock_acquire_barrier().
+     */
     if ( atomic_cmpxchg(&lock->cnts, 0, _QW_LOCKED) == 0 )
         return;
 
     queue_write_lock_slowpath(lock);
+    /*
+     * queue_write_lock_slowpath() is using spinlock and therefore is a
+     * full barrier. So no need for an arch_lock_acquire_barrier().
+     */
 }
 
 static inline void _write_lock_irq(rwlock_t *lock)
@@ -157,11 +179,16 @@ static inline int _write_trylock(rwlock_t *lock)
     if ( unlikely(cnts) )
         return 0;
 
+    /*
+     * atomic_cmpxchg() is a full barrier so no need for an
+     * arch_lock_acquire_barrier().
+     */
     return likely(atomic_cmpxchg(&lock->cnts, 0, _QW_LOCKED) == 0);
 }
 
 static inline void _write_unlock(rwlock_t *lock)
 {
+    arch_lock_release_barrier();
     /*
      * If the writer field is atomic, it can be cleared directly.
      * Otherwise, an atomic subtraction will be used to clear it.
-- 
2.17.1

