* Xen Security Advisory 379 v2 (CVE-2021-28697) - grant table v2 status pages may remain accessible after de-allocation
@ 2021-08-25 12:01 Xen.org security team
From: Xen.org security team @ 2021-08-25 12:01 UTC
  To: xen-announce, xen-devel, xen-users, oss-security; +Cc: Xen.org security team

[-- Attachment #1: Type: text/plain, Size: 4514 bytes --]

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2021-28697 / XSA-379
                               version 2

 grant table v2 status pages may remain accessible after de-allocation

UPDATES IN VERSION 2
====================

Patches updated to fix a typo in a comment.

Public release.

ISSUE DESCRIPTION
=================

Guests are permitted access to certain Xen-owned pages of memory.  The
majority of such pages remain allocated / associated with a guest for
its entire lifetime.  Grant table v2 status pages, however, get
de-allocated when a guest switches (back) from v2 to v1.  The freeing
of such pages requires that the hypervisor know where in the guest
these pages were mapped.  The hypervisor tracks only one use within
guest space, but racing requests from the guest to insert mappings of
these pages may result in any of them becoming mapped in multiple
locations.  Upon switching back from v2 to v1, the guest would then
retain access to a page that was freed and perhaps re-used for other
purposes.

IMPACT
======

A malicious guest may be able to elevate its privileges to those of the
host, cause host or guest Denial of Service (DoS), or cause information
leaks.

VULNERABLE SYSTEMS
==================

All Xen versions from 4.0 onwards are affected.  Xen versions 3.4 and
older are not affected.

Only x86 HVM and PVH guests permitted to use grant table version 2
interfaces can leverage this vulnerability.  x86 PV guests cannot
leverage this vulnerability.  On Arm, grant table v2 use is explicitly
unsupported.
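
As a quick, illustrative check (xl info is a standard toolstack
command; exact output formatting may vary by version), the running
hypervisor version can be read from dom0 to see whether the host falls
into the affected 4.0-onwards range:

$ xl info | grep -E '^xen_(major|minor|extra)'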

MITIGATION
==========

Running only PV guests will avoid this vulnerability.

Suppressing use of grant table v2 interfaces for HVM or PVH guests will
also avoid this vulnerability.
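
As an illustrative sketch only (this advisory does not name a specific
knob; the spelling "gnttab=max-ver:1" is an assumption based on the
hypervisor command line documentation and should be verified against
the documentation for the version in use), grant table v2 could be
suppressed host-wide by restricting the maximum grant table version on
the Xen command line, e.g. on a GRUB2-based dom0:

# in /etc/default/grub (location may vary by distribution), append the
# assumed option to the Xen command line, e.g.:
#   GRUB_CMDLINE_XEN_DEFAULT="... gnttab=max-ver:1"
# then regenerate the GRUB configuration and reboot the host:
$ sudo grub-mkconfig -o /boot/grub/grub.cfg
$ sudo reboot

Note that, per DEPLOYMENT DURING EMBARGO below, this mitigation was
not permitted on public-facing systems during the embargo, as the
change is recognizable by affected guests.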

CREDITS
=======

This issue was discovered by Jan Beulich of SUSE.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa379.patch           xen-unstable
xsa379-4.15.patch      Xen 4.15.x
xsa379-4.14.patch      Xen 4.14.x - 4.13.x
xsa379-4.12.patch      Xen 4.12.x - 4.11.x

$ sha256sum xsa379*
bdda4cb431301551336388ff7300a6ae95bb75af8fcae09cfb12c22a91d399d9  xsa379.meta
508dbfcac7420ec780df39402116bf7d3f497c4a9d883a369df7bf5340778e6c  xsa379.patch
2a1db918f1fa387a97d7bcb525eaa928fd71a9967e6ced4e7ac6e39a79ab5b80  xsa379-4.12.patch
c57b72078460f45a5e003db5c4c3669f27310420e04eb16e4413318dfee54fa1  xsa379-4.14.patch
3154869b12fcde70ce845df723aae4bbb2eb9576d90267c1be01eb6d3c5196e9  xsa379-4.15.patch
$
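
As a worked example of the note above (repository URL, branch name and
tool choice are illustrative assumptions, not taken from this
advisory; xsa379.meta additionally lists the XSA-378 patch and, on
most branches, the XSA-383 patch as prerequisites), applying the 4.15
patch on top of the current stable branch might look like:

$ echo "3154869b12fcde70ce845df723aae4bbb2eb9576d90267c1be01eb6d3c5196e9  xsa379-4.15.patch" | sha256sum -c -
$ git clone https://xenbits.xen.org/git-http/xen.git
$ cd xen
$ git checkout stable-4.15
$ git apply --check ../xsa379-4.15.patch    # dry run
$ git apply ../xsa379-4.15.patch            # or: patch -p1 < ../xsa379-4.15.patch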

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or PV-guest-only mitigations described
above (or others which are substantially similar) is permitted during
the embargo, even on public-facing systems with untrusted guest users
and administrators.

HOWEVER, deployment of the grant table v2 disabling mitigation described
above is NOT permitted during the embargo on public-facing systems with
untrusted guest users and administrators.  This is because such a
configuration change is recognizable by the affected guests.

AND: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAmEmMPYMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZLogH/icXkFdjXfxIFXLbvcX98qFYFcFbNgyitfngfR4d
VUbiuglViFtSxVY+LytV1RZHEqFiwYCLYcy7lf/EcDyzZT+BLA+S/z5r45F9rQcv
2gwMiQu+xoy1pDTSqVvGb+29NGT/btPRDfRlpaenqjQnGuOX2ymR9zGSmba/PDjp
QVIsSsvEbldlkVzwx9B3C7n+27mUPU6iVnU7j3s60mDfkjz/gIdnuzl8Tv/n6QR0
iZ8URQLn6wobbMBZM1+znfWNeT9dv4UiES1QuUrq1fT7LltQK8mZjAHP+VhXc3XZ
EA9H1LMp5G9Rw+IfgCquQXR5O1usACxnjHM1b9iG2ZP/jZU=
=eASl
-----END PGP SIGNATURE-----

[-- Attachment #2: xsa379.meta --]
[-- Type: application/octet-stream, Size: 1814 bytes --]

{
  "XSA": 379,
  "SupportedVersions": [
    "master",
    "4.15",
    "4.14",
    "4.13",
    "4.12",
    "4.11"
  ],
  "Trees": [
    "xen"
  ],
  "Recipes": {
    "4.11": {
      "Recipes": {
        "xen": {
          "StableRef": "ef32c7afa2731b758226d6e10a1e489b1a15fc41",
          "Prereqs": [
            378
          ],
          "Patches": [
            "xsa379-4.12.patch"
          ]
        }
      }
    },
    "4.12": {
      "Recipes": {
        "xen": {
          "StableRef": "ea20eee97e9e0861127a8070cc7b9ae3557b09fb",
          "Prereqs": [
            378,
            383
          ],
          "Patches": [
            "xsa379-4.12.patch"
          ]
        }
      }
    },
    "4.13": {
      "Recipes": {
        "xen": {
          "StableRef": "32d580902b959000d79d51dff03a3560653c4fcb",
          "Prereqs": [
            378,
            383
          ],
          "Patches": [
            "xsa379-4.14.patch"
          ]
        }
      }
    },
    "4.14": {
      "Recipes": {
        "xen": {
          "StableRef": "49299c4813b7847d29df07bf790f5489060f2a9c",
          "Prereqs": [
            378,
            383
          ],
          "Patches": [
            "xsa379-4.14.patch"
          ]
        }
      }
    },
    "4.15": {
      "Recipes": {
        "xen": {
          "StableRef": "dba774896f7dd74773c14d537643b7d7477fefcd",
          "Prereqs": [
            378,
            383
          ],
          "Patches": [
            "xsa379-4.15.patch"
          ]
        }
      }
    },
    "master": {
      "Recipes": {
        "xen": {
          "StableRef": "25da9455f1bb8a6d33039575a7b28bdfc4e3fcfe",
          "Prereqs": [
            378,
            383
          ],
          "Patches": [
            "xsa379.patch"
          ]
        }
      }
    }
  }
}

[-- Attachment #3: xsa379.patch --]
[-- Type: application/octet-stream, Size: 3208 bytes --]

From: Jan Beulich <jbeulich@suse.com>
Subject: x86/mm: widen locked region in xenmem_add_to_physmap_one()

For pages which can be made part of the P2M by the guest, but which can
also later be de-allocated (grant table v2 status pages being the
present example), it is imperative that they be mapped at no more than a
single GFN. We therefore need to make sure that of two parallel
XENMAPSPACE_grant_table requests for the same status page one completes
before the second checks at which other GFN the underlying MFN is
presently mapped.

Pull ahead the respective get_gfn() and push down the respective
put_gfn(). This leverages that gfn_lock() really aliases p2m_lock(), but
the function makes this assumption already anyway: In the
XENMAPSPACE_gmfn case lock nesting constraints for both involved GFNs
would otherwise need to be enforced to avoid ABBA deadlocks.

This is CVE-2021-28697 / XSA-379.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
v2: Add code comment.

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2756,17 +2756,29 @@ int xenmem_add_to_physmap_one(
         goto put_both;
     }
 
+    /*
+     * Note that we're (ab)using GFN locking (to really be locking of the
+     * entire P2M) here in (at least) two ways: Finer grained locking would
+     * expose lock order violations in the XENMAPSPACE_gmfn case (due to the
+     * earlier get_gfn_unshare() above). Plus at the very least for the grant
+     * table v2 status page case we need to guarantee that the same page can
+     * only appear at a single GFN. While this is a property we want in
+     * general, for pages which can subsequently be freed this is imperative:
+     * Upon freeing we wouldn't be able to find other mappings in the P2M
+     * (unless we did a brute force search).
+     */
+    prev_mfn = get_gfn(d, gfn_x(gpfn), &p2mt);
+
     /* XENMAPSPACE_gmfn: Check if the MFN is associated with another GFN. */
     old_gpfn = get_gpfn_from_mfn(mfn_x(mfn));
     ASSERT(!SHARED_M2P(old_gpfn));
     if ( space == XENMAPSPACE_gmfn && old_gpfn != gfn )
     {
         rc = -EXDEV;
-        goto put_both;
+        goto put_all;
     }
 
     /* Remove previously mapped page if it was present. */
-    prev_mfn = get_gfn(d, gfn_x(gpfn), &p2mt);
     if ( p2mt == p2m_mmio_direct )
         rc = -EPERM;
     else if ( mfn_valid(prev_mfn) )
@@ -2778,20 +2790,18 @@ int xenmem_add_to_physmap_one(
             /* Normal domain memory is freed, to avoid leaking memory. */
             rc = guest_remove_page(d, gfn_x(gpfn));
     }
-    /* In the XENMAPSPACE_gmfn case we still hold a ref on the old page. */
-    put_gfn(d, gfn_x(gpfn));
-
-    if ( rc )
-        goto put_both;
 
     /* Unmap from old location, if any. */
-    if ( old_gpfn != INVALID_M2P_ENTRY )
+    if ( !rc && old_gpfn != INVALID_M2P_ENTRY )
         rc = guest_physmap_remove_page(d, _gfn(old_gpfn), mfn, PAGE_ORDER_4K);
 
     /* Map at new location. */
     if ( !rc )
         rc = guest_physmap_add_page(d, gpfn, mfn, PAGE_ORDER_4K);
 
+ put_all:
+    put_gfn(d, gfn_x(gpfn));
+
  put_both:
     /*
      * In the XENMAPSPACE_gmfn case, we took a ref of the gfn at the top.

[-- Attachment #4: xsa379-4.12.patch --]
[-- Type: application/octet-stream, Size: 3049 bytes --]

From: Jan Beulich <jbeulich@suse.com>
Subject: x86/mm: widen locked region in xenmem_add_to_physmap_one()

For pages which can be made part of the P2M by the guest, but which can
also later be de-allocated (grant table v2 status pages being the
present example), it is imperative that they be mapped at no more than a
single GFN. We therefore need to make sure that of two parallel
XENMAPSPACE_grant_table requests for the same status page one completes
before the second checks at which other GFN the underlying MFN is
presently mapped.

Push down the respective put_gfn(). This leverages that gfn_lock()
really aliases p2m_lock(), but the function makes this assumption
already anyway: In the XENMAPSPACE_gmfn case lock nesting constraints
for both involved GFNs would otherwise need to be enforced to avoid ABBA
deadlocks.

This is CVE-2021-28697 / XSA-379.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4807,8 +4807,20 @@ int xenmem_add_to_physmap_one(
         goto put_both;
     }
 
-    /* Remove previously mapped page if it was present. */
+    /*
+     * Note that we're (ab)using GFN locking (to really be locking of the
+     * entire P2M) here in (at least) two ways: Finer grained locking would
+     * expose lock order violations in the XENMAPSPACE_gmfn case (due to the
+     * earlier get_gfn_unshare() above). Plus at the very least for the grant
+     * table v2 status page case we need to guarantee that the same page can
+     * only appear at a single GFN. While this is a property we want in
+     * general, for pages which can subsequently be freed this is imperative:
+     * Upon freeing we wouldn't be able to find other mappings in the P2M
+     * (unless we did a brute force search).
+     */
     prev_mfn = mfn_x(get_gfn(d, gfn_x(gpfn), &p2mt));
+
+    /* Remove previously mapped page if it was present. */
     if ( p2mt == p2m_mmio_direct )
         rc = -EPERM;
     else if ( mfn_valid(_mfn(prev_mfn)) )
@@ -4820,27 +4832,21 @@ int xenmem_add_to_physmap_one(
             /* Normal domain memory is freed, to avoid leaking memory. */
             rc = guest_remove_page(d, gfn_x(gpfn));
     }
-    /* In the XENMAPSPACE_gmfn case we still hold a ref on the old page. */
-    put_gfn(d, gfn_x(gpfn));
-
-    if ( rc )
-        goto put_both;
 
     /* Unmap from old location, if any. */
     old_gpfn = get_gpfn_from_mfn(mfn_x(mfn));
     ASSERT(!SHARED_M2P(old_gpfn));
     if ( space == XENMAPSPACE_gmfn && old_gpfn != gfn )
-    {
         rc = -EXDEV;
-        goto put_both;
-    }
-    if ( old_gpfn != INVALID_M2P_ENTRY )
+    else if ( !rc && old_gpfn != INVALID_M2P_ENTRY )
         rc = guest_physmap_remove_page(d, _gfn(old_gpfn), mfn, PAGE_ORDER_4K);
 
     /* Map at new location. */
     if ( !rc )
         rc = guest_physmap_add_page(d, gpfn, mfn, PAGE_ORDER_4K);
 
+    put_gfn(d, gfn_x(gpfn));
+
  put_both:
     /*
      * In the XENMAPSPACE_gmfn case, we took a ref of the gfn at the top.

[-- Attachment #5: xsa379-4.14.patch --]
[-- Type: application/octet-stream, Size: 3036 bytes --]

From: Jan Beulich <jbeulich@suse.com>
Subject: x86/mm: widen locked region in xenmem_add_to_physmap_one()

For pages which can be made part of the P2M by the guest, but which can
also later be de-allocated (grant table v2 status pages being the
present example), it is imperative that they be mapped at no more than a
single GFN. We therefore need to make sure that of two parallel
XENMAPSPACE_grant_table requests for the same status page one completes
before the second checks at which other GFN the underlying MFN is
presently mapped.

Push down the respective put_gfn(). This leverages that gfn_lock()
really aliases p2m_lock(), but the function makes this assumption
already anyway: In the XENMAPSPACE_gmfn case lock nesting constraints
for both involved GFNs would otherwise need to be enforced to avoid ABBA
deadlocks.

This is CVE-2021-28697 / XSA-379.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4650,8 +4650,20 @@ int xenmem_add_to_physmap_one(
         goto put_both;
     }
 
-    /* Remove previously mapped page if it was present. */
+    /*
+     * Note that we're (ab)using GFN locking (to really be locking of the
+     * entire P2M) here in (at least) two ways: Finer grained locking would
+     * expose lock order violations in the XENMAPSPACE_gmfn case (due to the
+     * earlier get_gfn_unshare() above). Plus at the very least for the grant
+     * table v2 status page case we need to guarantee that the same page can
+     * only appear at a single GFN. While this is a property we want in
+     * general, for pages which can subsequently be freed this is imperative:
+     * Upon freeing we wouldn't be able to find other mappings in the P2M
+     * (unless we did a brute force search).
+     */
     prev_mfn = get_gfn(d, gfn_x(gpfn), &p2mt);
+
+    /* Remove previously mapped page if it was present. */
     if ( p2mt == p2m_mmio_direct )
         rc = -EPERM;
     else if ( mfn_valid(prev_mfn) )
@@ -4663,27 +4675,21 @@ int xenmem_add_to_physmap_one(
             /* Normal domain memory is freed, to avoid leaking memory. */
             rc = guest_remove_page(d, gfn_x(gpfn));
     }
-    /* In the XENMAPSPACE_gmfn case we still hold a ref on the old page. */
-    put_gfn(d, gfn_x(gpfn));
-
-    if ( rc )
-        goto put_both;
 
     /* Unmap from old location, if any. */
     old_gpfn = get_gpfn_from_mfn(mfn_x(mfn));
     ASSERT(!SHARED_M2P(old_gpfn));
     if ( space == XENMAPSPACE_gmfn && old_gpfn != gfn )
-    {
         rc = -EXDEV;
-        goto put_both;
-    }
-    if ( old_gpfn != INVALID_M2P_ENTRY )
+    else if ( !rc && old_gpfn != INVALID_M2P_ENTRY )
         rc = guest_physmap_remove_page(d, _gfn(old_gpfn), mfn, PAGE_ORDER_4K);
 
     /* Map at new location. */
     if ( !rc )
         rc = guest_physmap_add_page(d, gpfn, mfn, PAGE_ORDER_4K);
 
+    put_gfn(d, gfn_x(gpfn));
+
  put_both:
     /*
      * In the XENMAPSPACE_gmfn case, we took a ref of the gfn at the top.

[-- Attachment #6: xsa379-4.15.patch --]
[-- Type: application/octet-stream, Size: 3265 bytes --]

From: Jan Beulich <jbeulich@suse.com>
Subject: x86/mm: widen locked region in xenmem_add_to_physmap_one()

For pages which can be made part of the P2M by the guest, but which can
also later be de-allocated (grant table v2 status pages being the
present example), it is imperative that they be mapped at no more than a
single GFN. We therefore need to make sure that of two parallel
XENMAPSPACE_grant_table requests for the same status page one completes
before the second checks at which other GFN the underlying MFN is
presently mapped.

Push down the respective put_gfn(). This leverages that gfn_lock()
really aliases p2m_lock(), but the function makes this assumption
already anyway: In the XENMAPSPACE_gmfn case lock nesting constraints
for both involved GFNs would otherwise need to be enforced to avoid ABBA
deadlocks.

This is CVE-2021-28697 / XSA-379.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
Since there was some re-ordering of the checks in staging/master (the
-EXDEV now sitting earlier there), I deemed it better to drop the
earlier "if ( rc )" and allow an earlier error to be overwritten by
-EXDEV here.

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2730,8 +2730,20 @@ int xenmem_add_to_physmap_one(
         goto put_both;
     }
 
-    /* Remove previously mapped page if it was present. */
+    /*
+     * Note that we're (ab)using GFN locking (to really be locking of the
+     * entire P2M) here in (at least) two ways: Finer grained locking would
+     * expose lock order violations in the XENMAPSPACE_gmfn case (due to the
+     * earlier get_gfn_unshare() above). Plus at the very least for the grant
+     * table v2 status page case we need to guarantee that the same page can
+     * only appear at a single GFN. While this is a property we want in
+     * general, for pages which can subsequently be freed this is imperative:
+     * Upon freeing we wouldn't be able to find other mappings in the P2M
+     * (unless we did a brute force search).
+     */
     prev_mfn = get_gfn(d, gfn_x(gpfn), &p2mt);
+
+    /* Remove previously mapped page if it was present. */
     if ( p2mt == p2m_mmio_direct )
         rc = -EPERM;
     else if ( mfn_valid(prev_mfn) )
@@ -2743,27 +2755,21 @@ int xenmem_add_to_physmap_one(
             /* Normal domain memory is freed, to avoid leaking memory. */
             rc = guest_remove_page(d, gfn_x(gpfn));
     }
-    /* In the XENMAPSPACE_gmfn case we still hold a ref on the old page. */
-    put_gfn(d, gfn_x(gpfn));
-
-    if ( rc )
-        goto put_both;
 
     /* Unmap from old location, if any. */
     old_gpfn = get_gpfn_from_mfn(mfn_x(mfn));
     ASSERT(!SHARED_M2P(old_gpfn));
     if ( space == XENMAPSPACE_gmfn && old_gpfn != gfn )
-    {
         rc = -EXDEV;
-        goto put_both;
-    }
-    if ( old_gpfn != INVALID_M2P_ENTRY )
+    else if ( !rc && old_gpfn != INVALID_M2P_ENTRY )
         rc = guest_physmap_remove_page(d, _gfn(old_gpfn), mfn, PAGE_ORDER_4K);
 
     /* Map at new location. */
     if ( !rc )
         rc = guest_physmap_add_page(d, gpfn, mfn, PAGE_ORDER_4K);
 
+    put_gfn(d, gfn_x(gpfn));
+
  put_both:
     /*
      * In the XENMAPSPACE_gmfn case, we took a ref of the gfn at the top.
