* [PATCH v2 0/4] stricter ioreq server permissions checks
@ 2018-03-16 16:58 Paul Durrant
  2018-03-16 16:58 ` [PATCH v2 1/4] x86/hvm: stop passing explicit domid to hvm_create_ioreq_server() Paul Durrant
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Paul Durrant @ 2018-03-16 16:58 UTC (permalink / raw)
  To: xen-devel; +Cc: Paul Durrant

This series tightens up permissions checking on ioreq server control plane
operations.

Paul Durrant (4):
  x86/hvm: stop passing explicit domid to hvm_create_ioreq_server()
  x86/hvm: take a reference on ioreq server emulating domain
  x86/hvm: re-structure some of the ioreq server look-up loops
  x86/hvm: add stricter permissions checks to ioreq server control plane

 xen/arch/x86/hvm/dm.c            |   5 +-
 xen/arch/x86/hvm/hvm.c           |  11 +-
 xen/arch/x86/hvm/ioreq.c         | 237 ++++++++++++++++-----------------------
 xen/include/asm-x86/hvm/domain.h |   4 +-
 xen/include/asm-x86/hvm/ioreq.h  |   7 +-
 5 files changed, 108 insertions(+), 156 deletions(-)

-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH v2 1/4] x86/hvm: stop passing explicit domid to hvm_create_ioreq_server()
  2018-03-16 16:58 [PATCH v2 0/4] stricter ioreq server permissions checks Paul Durrant
@ 2018-03-16 16:58 ` Paul Durrant
  2018-03-20 16:26   ` Jan Beulich
  2018-03-16 16:58 ` [PATCH v2 2/4] x86/hvm: take a reference on ioreq server emulating domain Paul Durrant
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 10+ messages in thread
From: Paul Durrant @ 2018-03-16 16:58 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Paul Durrant, Jan Beulich

Only in the legacy 'default server' case do we pass anything other than
current->domain->domain_id, and in that case we pass the value of
HVM_PARAM_DM_DOMAIN.

The only known user of HVM_PARAM_DM_DOMAIN is qemu-trad, which always sets
it to DOMID_SELF (ignoring the return value of xc_set_hvm_param) [1] and
never reads it.

This patch:

- Disallows setting HVM_PARAM_DM_DOMAIN to anything other than DOMID_SELF
  and removes the call to hvm_set_dm_domain().
- Stops passing a domid to hvm_create_ioreq_server()
- Changes hvm_create_ioreq_server() to always set
  current->domain->domain_id as the domid of the emulating domain
- Removes the hvm_set_dm_domain() implementation since it is no longer
  needed.

[1] http://xenbits.xen.org/gitweb/?p=qemu-xen-traditional.git;a=blob;f=hw/xen_machine_fv.c;#l299
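The resulting parameter handling can be sketched as follows. This is an illustrative stand-alone model, not the Xen code itself: `demo_set_dm_domain`, `DEMO_DOMID_SELF` and `DEMO_EINVAL` are hypothetical names standing in for the logic the hunk in hvmop_set_param() implements.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hedged sketch of HVM_PARAM_DM_DOMAIN handling after this patch: the
 * only accepted input is the self alias, which is then resolved to the
 * caller's actual domain id. Names and constants are illustrative.
 */
#define DEMO_DOMID_SELF 0x7FF0U      /* stand-in for Xen's DOMID_SELF */
#define DEMO_EINVAL     22

static int demo_set_dm_domain(uint16_t value, uint16_t caller_domid,
                              uint16_t *stored)
{
    /* Anything other than the self alias is now rejected outright. */
    if ( value != DEMO_DOMID_SELF )
        return -DEMO_EINVAL;

    /* The stored parameter always records the caller's own domid. */
    *stored = caller_domid;
    return 0;
}
```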

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>

v2:
 - Remove use of HVM_PARAM_DM_DOMAIN from the ioreq server code
---
 xen/arch/x86/hvm/dm.c           |  5 +--
 xen/arch/x86/hvm/hvm.c          | 11 +++---
 xen/arch/x86/hvm/ioreq.c        | 88 +++--------------------------------------
 xen/include/asm-x86/hvm/ioreq.h |  7 +---
 4 files changed, 15 insertions(+), 96 deletions(-)

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 7788577a73..96b0d13f2f 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -402,7 +402,6 @@ static int dm_op(const struct dmop_args *op_args)
     {
     case XEN_DMOP_create_ioreq_server:
     {
-        struct domain *curr_d = current->domain;
         struct xen_dm_op_create_ioreq_server *data =
             &op.u.create_ioreq_server;
 
@@ -412,8 +411,8 @@ static int dm_op(const struct dmop_args *op_args)
         if ( data->pad[0] || data->pad[1] || data->pad[2] )
             break;
 
-        rc = hvm_create_ioreq_server(d, curr_d->domain_id, false,
-                                     data->handle_bufioreq, &data->id);
+        rc = hvm_create_ioreq_server(d, false, data->handle_bufioreq,
+                                     &data->id);
         break;
     }
 
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 91bc3e8b27..a162d59e26 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4199,10 +4199,11 @@ static int hvmop_set_param(
         domctl_lock_release();
         break;
     case HVM_PARAM_DM_DOMAIN:
-        if ( a.value == DOMID_SELF )
-            a.value = curr_d->domain_id;
+        /* The only value this should ever be set to is DOMID_SELF */
+        if ( a.value != DOMID_SELF )
+            rc = -EINVAL;
 
-        rc = hvm_set_dm_domain(d, a.value);
+        a.value = curr_d->domain_id;
         break;
     case HVM_PARAM_ACPI_S_STATE:
         rc = 0;
@@ -4443,9 +4444,7 @@ static int hvmop_get_param(
          */
         if ( !d->creation_finished )
         {
-            domid_t domid = d->arch.hvm_domain.params[HVM_PARAM_DM_DOMAIN];
-
-            rc = hvm_create_ioreq_server(d, domid, true,
+            rc = hvm_create_ioreq_server(d, true,
                                          HVM_IOREQSRV_BUFIOREQ_LEGACY, NULL);
             if ( rc != 0 && rc != -EEXIST )
                 goto out;
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 7e66965bcd..2b9e5562dd 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -599,16 +599,15 @@ static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s,
 }
 
 static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
-                                 struct domain *d, domid_t domid,
-                                 bool is_default, int bufioreq_handling,
-                                 ioservid_t id)
+                                 struct domain *d, bool is_default,
+                                 int bufioreq_handling, ioservid_t id)
 {
     struct vcpu *v;
     int rc;
 
     s->id = id;
     s->domain = d;
-    s->domid = domid;
+    s->domid = current->domain->domain_id;
 
     spin_lock_init(&s->lock);
     INIT_LIST_HEAD(&s->ioreq_vcpu_list);
@@ -680,9 +679,8 @@ static ioservid_t next_ioservid(struct domain *d)
     return id;
 }
 
-int hvm_create_ioreq_server(struct domain *d, domid_t domid,
-                            bool is_default, int bufioreq_handling,
-                            ioservid_t *id)
+int hvm_create_ioreq_server(struct domain *d, bool is_default,
+                            int bufioreq_handling, ioservid_t *id)
 {
     struct hvm_ioreq_server *s;
     int rc;
@@ -702,7 +700,7 @@ int hvm_create_ioreq_server(struct domain *d, domid_t domid,
     if ( is_default && d->arch.hvm_domain.default_ioreq_server != NULL )
         goto fail2;
 
-    rc = hvm_ioreq_server_init(s, d, domid, is_default, bufioreq_handling,
+    rc = hvm_ioreq_server_init(s, d, is_default, bufioreq_handling,
                                next_ioservid(d));
     if ( rc )
         goto fail3;
@@ -1089,80 +1087,6 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
     spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
 }
 
-static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
-                                     evtchn_port_t *p_port)
-{
-    int old_port, new_port;
-
-    new_port = alloc_unbound_xen_event_channel(v->domain, v->vcpu_id,
-                                               remote_domid, NULL);
-    if ( new_port < 0 )
-        return new_port;
-
-    /* xchg() ensures that only we call free_xen_event_channel(). */
-    old_port = xchg(p_port, new_port);
-    free_xen_event_channel(v->domain, old_port);
-    return 0;
-}
-
-int hvm_set_dm_domain(struct domain *d, domid_t domid)
-{
-    struct hvm_ioreq_server *s;
-    int rc = 0;
-
-    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
-
-    /*
-     * Lack of ioreq server is not a failure. HVM_PARAM_DM_DOMAIN will
-     * still be set and thus, when the server is created, it will have
-     * the correct domid.
-     */
-    s = d->arch.hvm_domain.default_ioreq_server;
-    if ( !s )
-        goto done;
-
-    domain_pause(d);
-    spin_lock(&s->lock);
-
-    if ( s->domid != domid )
-    {
-        struct hvm_ioreq_vcpu *sv;
-
-        list_for_each_entry ( sv,
-                              &s->ioreq_vcpu_list,
-                              list_entry )
-        {
-            struct vcpu *v = sv->vcpu;
-
-            if ( v->vcpu_id == 0 )
-            {
-                rc = hvm_replace_event_channel(v, domid,
-                                               &s->bufioreq_evtchn);
-                if ( rc )
-                    break;
-
-                d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN] =
-                    s->bufioreq_evtchn;
-            }
-
-            rc = hvm_replace_event_channel(v, domid, &sv->ioreq_evtchn);
-            if ( rc )
-                break;
-
-            hvm_update_ioreq_evtchn(s, sv);
-        }
-
-        s->domid = domid;
-    }
-
-    spin_unlock(&s->lock);
-    domain_unpause(d);
-
- done:
-    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
-    return rc;
-}
-
 struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
                                                  ioreq_t *p)
 {
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index 1829fcf43e..1bd1a02f23 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -23,9 +23,8 @@ bool hvm_io_pending(struct vcpu *v);
 bool handle_hvm_io_completion(struct vcpu *v);
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
 
-int hvm_create_ioreq_server(struct domain *d, domid_t domid,
-                            bool is_default, int bufioreq_handling,
-                            ioservid_t *id);
+int hvm_create_ioreq_server(struct domain *d, bool is_default,
+                            int bufioreq_handling, ioservid_t *id);
 int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id);
 int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
                               unsigned long *ioreq_gfn,
@@ -46,8 +45,6 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v);
 void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v);
 void hvm_destroy_all_ioreq_servers(struct domain *d);
 
-int hvm_set_dm_domain(struct domain *d, domid_t domid);
-
 struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
                                                  ioreq_t *p);
 int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
-- 
2.11.0



* [PATCH v2 2/4] x86/hvm: take a reference on ioreq server emulating domain
  2018-03-16 16:58 [PATCH v2 0/4] stricter ioreq server permissions checks Paul Durrant
  2018-03-16 16:58 ` [PATCH v2 1/4] x86/hvm: stop passing explicit domid to hvm_create_ioreq_server() Paul Durrant
@ 2018-03-16 16:58 ` Paul Durrant
  2018-03-20 16:37   ` Jan Beulich
  2018-03-16 16:58 ` [PATCH v2 3/4] x86/hvm: re-structure some of the ioreq server look-up loops Paul Durrant
  2018-03-16 16:58 ` [PATCH v2 4/4] x86/hvm: add stricter permissions checks to ioreq server control plane Paul Durrant
  3 siblings, 1 reply; 10+ messages in thread
From: Paul Durrant @ 2018-03-16 16:58 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Paul Durrant, Jan Beulich

When an ioreq server is created the code currently stores the id
of the emulating domain, but does not take a reference on that domain.

This patch modifies the code to hold a reference for the lifetime of the
ioreq server.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/arch/x86/hvm/ioreq.c         | 33 ++++++++++++++++++++++-----------
 xen/include/asm-x86/hvm/domain.h |  4 +---
 2 files changed, 23 insertions(+), 14 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 2b9e5562dd..066c997c93 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -218,7 +218,7 @@ static void hvm_unmap_ioreq_page(struct hvm_ioreq_server *s, bool buf)
 static int hvm_map_ioreq_page(
     struct hvm_ioreq_server *s, bool buf, unsigned long gfn)
 {
-    struct domain *d = s->domain;
+    struct domain *d = s->target;
     struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     struct page_info *page;
     void *va;
@@ -305,6 +305,7 @@ static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
                                      bool is_default, struct vcpu *v)
 {
     struct hvm_ioreq_vcpu *sv;
+    domid_t domid;
     int rc;
 
     sv = xzalloc(struct hvm_ioreq_vcpu);
@@ -315,7 +316,9 @@ static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
 
     spin_lock(&s->lock);
 
-    rc = alloc_unbound_xen_event_channel(v->domain, v->vcpu_id, s->domid,
+    domid = s->emulator->domain_id;
+
+    rc = alloc_unbound_xen_event_channel(v->domain, v->vcpu_id, domid,
                                          NULL);
     if ( rc < 0 )
         goto fail2;
@@ -324,9 +327,9 @@ static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
 
     if ( v->vcpu_id == 0 && s->bufioreq.va != NULL )
     {
-        struct domain *d = s->domain;
+        struct domain *d = s->target;
 
-        rc = alloc_unbound_xen_event_channel(v->domain, 0, s->domid, NULL);
+        rc = alloc_unbound_xen_event_channel(v->domain, 0, domid, NULL);
         if ( rc < 0 )
             goto fail3;
 
@@ -434,7 +437,7 @@ static int hvm_ioreq_server_setup_pages(struct hvm_ioreq_server *s,
                                         bool is_default,
                                         bool handle_bufioreq)
 {
-    struct domain *d = s->domain;
+    struct domain *d = s->target;
     unsigned long ioreq_gfn = gfn_x(INVALID_GFN);
     unsigned long bufioreq_gfn = gfn_x(INVALID_GFN);
     int rc;
@@ -471,7 +474,7 @@ static int hvm_ioreq_server_setup_pages(struct hvm_ioreq_server *s,
 static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s,
                                          bool is_default)
 {
-    struct domain *d = s->domain;
+    struct domain *d = s->target;
     bool handle_bufioreq = !!s->bufioreq.va;
 
     if ( handle_bufioreq )
@@ -521,7 +524,7 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
         if ( rc )
             goto fail;
 
-        s->range[i] = rangeset_new(s->domain, name,
+        s->range[i] = rangeset_new(s->target, name,
                                    RANGESETF_prettyprint_hex);
 
         xfree(name);
@@ -545,7 +548,7 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
 static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s,
                                     bool is_default)
 {
-    struct domain *d = s->domain;
+    struct domain *d = s->target;
     struct hvm_ioreq_vcpu *sv;
     bool handle_bufioreq = !!s->bufioreq.va;
 
@@ -576,7 +579,7 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s,
 static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s,
                                      bool is_default)
 {
-    struct domain *d = s->domain;
+    struct domain *d = s->target;
     bool handle_bufioreq = !!s->bufioreq.va;
 
     spin_lock(&s->lock);
@@ -602,12 +605,17 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
                                  struct domain *d, bool is_default,
                                  int bufioreq_handling, ioservid_t id)
 {
+    struct domain *currd = current->domain;
     struct vcpu *v;
     int rc;
 
     s->id = id;
-    s->domain = d;
-    s->domid = current->domain->domain_id;
+    s->target = d;
+
+    if ( !get_domain(currd) )
+        return -EACCES;
+
+    s->emulator = currd;
 
     spin_lock_init(&s->lock);
     INIT_LIST_HEAD(&s->ioreq_vcpu_list);
@@ -641,6 +649,7 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
  fail_map:
     hvm_ioreq_server_free_rangesets(s, is_default);
 
+    put_domain(s->emulator);
     return rc;
 }
 
@@ -651,6 +660,8 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s,
     hvm_ioreq_server_remove_all_vcpus(s);
     hvm_ioreq_server_unmap_pages(s, is_default);
     hvm_ioreq_server_free_rangesets(s, is_default);
+
+    put_domain(s->emulator);
 }
 
 static ioservid_t next_ioservid(struct domain *d)
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 7f128c05ff..6e03d024c8 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -53,13 +53,11 @@ struct hvm_ioreq_vcpu {
 
 struct hvm_ioreq_server {
     struct list_head       list_entry;
-    struct domain          *domain;
+    struct domain          *target, *emulator;
 
     /* Lock to serialize toolstack modifications */
     spinlock_t             lock;
 
-    /* Domain id of emulating domain */
-    domid_t                domid;
     ioservid_t             id;
     struct hvm_ioreq_page  ioreq;
     struct list_head       ioreq_vcpu_list;
-- 
2.11.0



* [PATCH v2 3/4] x86/hvm: re-structure some of the ioreq server look-up loops
  2018-03-16 16:58 [PATCH v2 0/4] stricter ioreq server permissions checks Paul Durrant
  2018-03-16 16:58 ` [PATCH v2 1/4] x86/hvm: stop passing explicit domid to hvm_create_ioreq_server() Paul Durrant
  2018-03-16 16:58 ` [PATCH v2 2/4] x86/hvm: take a reference on ioreq server emulating domain Paul Durrant
@ 2018-03-16 16:58 ` Paul Durrant
  2018-03-20 16:41   ` Jan Beulich
  2018-03-16 16:58 ` [PATCH v2 4/4] x86/hvm: add stricter permissions checks to ioreq server control plane Paul Durrant
  3 siblings, 1 reply; 10+ messages in thread
From: Paul Durrant @ 2018-03-16 16:58 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Paul Durrant, Jan Beulich

This patch is a cosmetic re-structuring of some of the loops that look up
an ioreq server based on target domain and server id.

The re-structuring is done separately here to ease review of a subsequent
patch.
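The shape of the change can be shown with a minimal lookup loop. This is an illustrative sketch, not the Xen code: instead of nesting the body inside `if ( s->id == id ) { ... }`, non-matching entries are skipped with an early `continue`, leaving the body at a single indentation level.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for an ioreq-server-like list entry. */
struct demo_server { int id; int value; };

static int demo_lookup(struct demo_server *list, size_t n, int id, int *out)
{
    size_t i;
    int rc = -1;                 /* stands in for -ENOENT */

    for ( i = 0; i < n; i++ )
    {
        struct demo_server *s = &list[i];

        if ( s->id != id )       /* guard clause replaces the nested block */
            continue;

        *out = s->value;
        rc = 0;
        break;                   /* ids are unique, as in the ioreq list */
    }

    return rc;
}
```

The guard-clause form makes it easy for a later patch to slot an extra check between the id match and the body, which is exactly what patch 4 of this series does.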

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/arch/x86/hvm/ioreq.c | 100 +++++++++++++++++++++++------------------------
 1 file changed, 50 insertions(+), 50 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 066c997c93..1675593ce3 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -839,37 +839,37 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
                           &d->arch.hvm_domain.ioreq_server.list,
                           list_entry )
     {
+        struct rangeset *r;
+
         if ( s == d->arch.hvm_domain.default_ioreq_server )
             continue;
 
-        if ( s->id == id )
-        {
-            struct rangeset *r;
-
-            switch ( type )
-            {
-            case XEN_DMOP_IO_RANGE_PORT:
-            case XEN_DMOP_IO_RANGE_MEMORY:
-            case XEN_DMOP_IO_RANGE_PCI:
-                r = s->range[type];
-                break;
+        if ( s->id != id )
+            continue;
 
-            default:
-                r = NULL;
-                break;
-            }
+        switch ( type )
+        {
+        case XEN_DMOP_IO_RANGE_PORT:
+        case XEN_DMOP_IO_RANGE_MEMORY:
+        case XEN_DMOP_IO_RANGE_PCI:
+            r = s->range[type];
+            break;
 
-            rc = -EINVAL;
-            if ( !r )
-                break;
+        default:
+            r = NULL;
+            break;
+        }
 
-            rc = -EEXIST;
-            if ( rangeset_overlaps_range(r, start, end) )
+        rc = -EINVAL;
+        if ( !r )
                 break;
 
-            rc = rangeset_add_range(r, start, end);
+        rc = -EEXIST;
+        if ( rangeset_overlaps_range(r, start, end) )
             break;
-        }
+
+        rc = rangeset_add_range(r, start, end);
+        break;
     }
 
     spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
@@ -894,37 +894,37 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
                           &d->arch.hvm_domain.ioreq_server.list,
                           list_entry )
     {
+        struct rangeset *r;
+
         if ( s == d->arch.hvm_domain.default_ioreq_server )
             continue;
 
-        if ( s->id == id )
-        {
-            struct rangeset *r;
-
-            switch ( type )
-            {
-            case XEN_DMOP_IO_RANGE_PORT:
-            case XEN_DMOP_IO_RANGE_MEMORY:
-            case XEN_DMOP_IO_RANGE_PCI:
-                r = s->range[type];
-                break;
+        if ( s->id != id )
+            continue;
 
-            default:
-                r = NULL;
-                break;
-            }
+        switch ( type )
+        {
+        case XEN_DMOP_IO_RANGE_PORT:
+        case XEN_DMOP_IO_RANGE_MEMORY:
+        case XEN_DMOP_IO_RANGE_PCI:
+            r = s->range[type];
+            break;
 
-            rc = -EINVAL;
-            if ( !r )
-                break;
+        default:
+            r = NULL;
+            break;
+        }
 
-            rc = -ENOENT;
-            if ( !rangeset_contains_range(r, start, end) )
-                break;
+        rc = -EINVAL;
+        if ( !r )
+            break;
 
-            rc = rangeset_remove_range(r, start, end);
+        rc = -ENOENT;
+        if ( !rangeset_contains_range(r, start, end) )
             break;
-        }
+
+        rc = rangeset_remove_range(r, start, end);
+        break;
     }
 
     spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
@@ -962,11 +962,11 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
         if ( s == d->arch.hvm_domain.default_ioreq_server )
             continue;
 
-        if ( s->id == id )
-        {
-            rc = p2m_set_ioreq_server(d, flags, s);
-            break;
-        }
+        if ( s->id != id )
+            continue;
+
+        rc = p2m_set_ioreq_server(d, flags, s);
+        break;
     }
 
     spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
-- 
2.11.0



* [PATCH v2 4/4] x86/hvm: add stricter permissions checks to ioreq server control plane
  2018-03-16 16:58 [PATCH v2 0/4] stricter ioreq server permissions checks Paul Durrant
                   ` (2 preceding siblings ...)
  2018-03-16 16:58 ` [PATCH v2 3/4] x86/hvm: re-structure some of the ioreq server look-up loops Paul Durrant
@ 2018-03-16 16:58 ` Paul Durrant
  2018-03-20 16:43   ` Jan Beulich
  3 siblings, 1 reply; 10+ messages in thread
From: Paul Durrant @ 2018-03-16 16:58 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Paul Durrant, Jan Beulich

There has always been an intention in the ioreq server API that only the
domain that creates an ioreq server should be able to manipulate it.
However, so far, nothing has enforced this. This means that two domains
with DM_PRIV over a target domain can currently manipulate each other's
ioreq servers.

A previous patch added code to take a reference and store a pointer to the
domain that creates an ioreq server. This patch now adds checks to the
functions that manipulate the ioreq server to make sure they are being
called by the same domain.
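The enforcement pattern can be sketched as a single control-plane operation. This is a hedged model, not Xen code: `demo_set_state` and its types are illustrative, showing the ownership check inserted between the id match and the operation body.

```c
#include <assert.h>
#include <stddef.h>

#define DEMO_EPERM  1
#define DEMO_ENOENT 2

/* Illustrative server that records the domain that created it. */
struct demo_server { int id; int owner_domid; int enabled; };

static int demo_set_state(struct demo_server *list, size_t n,
                          int id, int caller_domid, int enabled)
{
    size_t i;

    for ( i = 0; i < n; i++ )
    {
        struct demo_server *s = &list[i];

        if ( s->id != id )
            continue;

        /* Only the creating domain may manipulate the server. */
        if ( s->owner_domid != caller_domid )
            return -DEMO_EPERM;

        s->enabled = enabled;
        return 0;
    }

    return -DEMO_ENOENT;
}
```

With this check in every manipulation path, a second domain that also holds DM_PRIV over the target can no longer interfere with servers it did not create.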

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/arch/x86/hvm/ioreq.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 1675593ce3..8e2397858b 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -761,6 +761,10 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
         if ( s->id != id )
             continue;
 
+        rc = -EPERM;
+        if ( s->emulator != current->domain )
+            break;
+
         domain_pause(d);
 
         p2m_set_ioreq_server(d, 0, s);
@@ -805,6 +809,10 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
         if ( s->id != id )
             continue;
 
+        rc = -EPERM;
+        if ( s->emulator != current->domain )
+            break;
+
         *ioreq_gfn = s->ioreq.gfn;
 
         if ( s->bufioreq.va != NULL )
@@ -847,6 +855,10 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
         if ( s->id != id )
             continue;
 
+        rc = -EPERM;
+        if ( s->emulator != current->domain )
+            break;
+
         switch ( type )
         {
         case XEN_DMOP_IO_RANGE_PORT:
@@ -902,6 +914,10 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
         if ( s->id != id )
             continue;
 
+        rc = -EPERM;
+        if ( s->emulator != current->domain )
+            break;
+
         switch ( type )
         {
         case XEN_DMOP_IO_RANGE_PORT:
@@ -965,6 +981,10 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
         if ( s->id != id )
             continue;
 
+        rc = -EPERM;
+        if ( s->emulator != current->domain )
+            break;
+
         rc = p2m_set_ioreq_server(d, flags, s);
         break;
     }
@@ -1004,6 +1024,10 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
         if ( s->id != id )
             continue;
 
+        rc = -EPERM;
+        if ( s->emulator != current->domain )
+            break;
+
         domain_pause(d);
 
         if ( enabled )
-- 
2.11.0



* Re: [PATCH v2 1/4] x86/hvm: stop passing explicit domid to hvm_create_ioreq_server()
  2018-03-16 16:58 ` [PATCH v2 1/4] x86/hvm: stop passing explicit domid to hvm_create_ioreq_server() Paul Durrant
@ 2018-03-20 16:26   ` Jan Beulich
  0 siblings, 0 replies; 10+ messages in thread
From: Jan Beulich @ 2018-03-20 16:26 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, xen-devel

>>> On 16.03.18 at 17:58, <paul.durrant@citrix.com> wrote:
> Only in the legacy 'default server' case do we pass anything other than
> current->domain->domain_id, and in that case we pass the value of
> HVM_PARAM_DM_DOMAIN.
> 
> The only known user of HVM_PARAM_DM_DOMAIN is qemu-trad, which always sets
> it to DOMID_SELF (ignoring the return value of xc_set_hvm_param) [1] and
> never reads it.
> 
> This patch:
> 
> - Disallows setting HVM_PARAM_DM_DOMAIN to anything other than DOMID_SELF
>   and removes the call to hvm_set_dm_domain().
> - Stops passing a domid to hvm_create_ioreq_server()
> - Changes hvm_create_ioreq_server() to always set
>   current->domain->domain_id as the domid of the emulating domain
> - Removes the hvm_set_dm_domain() implementation since it is no longer
>   needed.
> 
> [1] 
> http://xenbits.xen.org/gitweb/?p=qemu-xen-traditional.git;a=blob;f=hw/xen_m 
> achine_fv.c;#l299
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH v2 2/4] x86/hvm: take a reference on ioreq server emulating domain
  2018-03-16 16:58 ` [PATCH v2 2/4] x86/hvm: take a reference on ioreq server emulating domain Paul Durrant
@ 2018-03-20 16:37   ` Jan Beulich
  2018-03-20 16:57     ` Paul Durrant
  0 siblings, 1 reply; 10+ messages in thread
From: Jan Beulich @ 2018-03-20 16:37 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, xen-devel

>>> On 16.03.18 at 17:58, <paul.durrant@citrix.com> wrote:
> @@ -305,6 +305,7 @@ static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
>                                       bool is_default, struct vcpu *v)
>  {
>      struct hvm_ioreq_vcpu *sv;
> +    domid_t domid;
>      int rc;
>  
>      sv = xzalloc(struct hvm_ioreq_vcpu);
> @@ -315,7 +316,9 @@ static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
>  
>      spin_lock(&s->lock);
>  
> -    rc = alloc_unbound_xen_event_channel(v->domain, v->vcpu_id, s->domid,
> +    domid = s->emulator->domain_id;
> +
> +    rc = alloc_unbound_xen_event_channel(v->domain, v->vcpu_id, domid,
>                                           NULL);
>      if ( rc < 0 )
>          goto fail2;
> @@ -324,9 +327,9 @@ static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
>  
>      if ( v->vcpu_id == 0 && s->bufioreq.va != NULL )
>      {
> -        struct domain *d = s->domain;
> +        struct domain *d = s->target;
>  
> -        rc = alloc_unbound_xen_event_channel(v->domain, 0, s->domid, NULL);
> +        rc = alloc_unbound_xen_event_channel(v->domain, 0, domid, NULL);

Especially for this second call site it would probably be more clear
at the first glance which domain is meant if you didn't latch
s->emulator->domain_id into a local variable.

> @@ -602,12 +605,17 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
>                                   struct domain *d, bool is_default,
>                                   int bufioreq_handling, ioservid_t id)
>  {
> +    struct domain *currd = current->domain;
>      struct vcpu *v;
>      int rc;
>  
>      s->id = id;
> -    s->domain = d;
> -    s->domid = current->domain->domain_id;
> +    s->target = d;
> +
> +    if ( !get_domain(currd) )
> +        return -EACCES;

Use get_knownalive_domain() here, which cannot fail?

> @@ -651,6 +660,8 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s,
>      hvm_ioreq_server_remove_all_vcpus(s);
>      hvm_ioreq_server_unmap_pages(s, is_default);
>      hvm_ioreq_server_free_rangesets(s, is_default);
> +
> +    put_domain(s->emulator);
>  }

Could you clarify in the description that this runs in the context of
XEN_DOMCTL_destroydomain and hence there's no risk of target
and emulator preventing each other from being cleaned up?

Jan



* Re: [PATCH v2 3/4] x86/hvm: re-structure some of the ioreq server look-up loops
  2018-03-16 16:58 ` [PATCH v2 3/4] x86/hvm: re-structure some of the ioreq server look-up loops Paul Durrant
@ 2018-03-20 16:41   ` Jan Beulich
  0 siblings, 0 replies; 10+ messages in thread
From: Jan Beulich @ 2018-03-20 16:41 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, xen-devel

>>> On 16.03.18 at 17:58, <paul.durrant@citrix.com> wrote:
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -839,37 +839,37 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
>                            &d->arch.hvm_domain.ioreq_server.list,
>                            list_entry )
>      {
> +        struct rangeset *r;
> +
>          if ( s == d->arch.hvm_domain.default_ioreq_server )
>              continue;
>  
> -        if ( s->id == id )
> -        {
> -            struct rangeset *r;
> -
> -            switch ( type )
> -            {
> -            case XEN_DMOP_IO_RANGE_PORT:
> -            case XEN_DMOP_IO_RANGE_MEMORY:
> -            case XEN_DMOP_IO_RANGE_PCI:
> -                r = s->range[type];
> -                break;
> +        if ( s->id != id )
> +            continue;
>  
> -            default:
> -                r = NULL;
> -                break;
> -            }
> +        switch ( type )
> +        {
> +        case XEN_DMOP_IO_RANGE_PORT:
> +        case XEN_DMOP_IO_RANGE_MEMORY:
> +        case XEN_DMOP_IO_RANGE_PCI:
> +            r = s->range[type];
> +            break;
>  
> -            rc = -EINVAL;
> -            if ( !r )
> -                break;
> +        default:
> +            r = NULL;
> +            break;
> +        }
>  
> -            rc = -EEXIST;
> -            if ( rangeset_overlaps_range(r, start, end) )
> +        rc = -EINVAL;
> +        if ( !r )
>                  break;

The break now looks to be mis-indented. With that cleaned up
(which probably could also be done while committing)
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan



* Re: [PATCH v2 4/4] x86/hvm: add stricter permissions checks to ioreq server control plane
  2018-03-16 16:58 ` [PATCH v2 4/4] x86/hvm: add stricter permissions checks to ioreq server control plane Paul Durrant
@ 2018-03-20 16:43   ` Jan Beulich
  0 siblings, 0 replies; 10+ messages in thread
From: Jan Beulich @ 2018-03-20 16:43 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, xen-devel

>>> On 16.03.18 at 17:58, <paul.durrant@citrix.com> wrote:
> There has always been an intention in the ioreq server API that only the
> domain that creates an ioreq server should be able to manipulate it.
> However, so far, nothing has enforced this. This means that two domains
> with DM_PRIV over a target domain can currently manipulate each other's
> ioreq servers.
> 
> A previous patch added code to take a reference and store a pointer to the
> domain that creates an ioreq server. This patch now adds checks to the
> functions that manipulate the ioreq server to make sure they are being
> called by the same domain.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH v2 2/4] x86/hvm: take a reference on ioreq server emulating domain
  2018-03-20 16:37   ` Jan Beulich
@ 2018-03-20 16:57     ` Paul Durrant
  0 siblings, 0 replies; 10+ messages in thread
From: Paul Durrant @ 2018-03-20 16:57 UTC (permalink / raw)
  To: 'Jan Beulich'; +Cc: Andrew Cooper, xen-devel

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 20 March 2018 16:37
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH v2 2/4] x86/hvm: take a reference on ioreq server emulating domain
> 
> >>> On 16.03.18 at 17:58, <paul.durrant@citrix.com> wrote:
> > @@ -305,6 +305,7 @@ static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
> >                                       bool is_default, struct vcpu *v)
> >  {
> >      struct hvm_ioreq_vcpu *sv;
> > +    domid_t domid;
> >      int rc;
> >
> >      sv = xzalloc(struct hvm_ioreq_vcpu);
> > @@ -315,7 +316,9 @@ static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
> >
> >      spin_lock(&s->lock);
> >
> > -    rc = alloc_unbound_xen_event_channel(v->domain, v->vcpu_id, s->domid,
> > +    domid = s->emulator->domain_id;
> > +
> > +    rc = alloc_unbound_xen_event_channel(v->domain, v->vcpu_id, domid,
> >                                           NULL);
> >      if ( rc < 0 )
> >          goto fail2;
> > @@ -324,9 +327,9 @@ static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
> >
> >      if ( v->vcpu_id == 0 && s->bufioreq.va != NULL )
> >      {
> > -        struct domain *d = s->domain;
> > +        struct domain *d = s->target;
> >
> > -        rc = alloc_unbound_xen_event_channel(v->domain, 0, s->domid, NULL);
> > +        rc = alloc_unbound_xen_event_channel(v->domain, 0, domid, NULL);
> 
> Especially for this second call site it would probably be clearer
> at first glance which domain is meant if you didn't latch
> s->emulator->domain_id into a local variable.

Ok, I can get rid of that.

> 
> > @@ -602,12 +605,17 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
> >                                   struct domain *d, bool is_default,
> >                                   int bufioreq_handling, ioservid_t id)
> >  {
> > +    struct domain *currd = current->domain;
> >      struct vcpu *v;
> >      int rc;
> >
> >      s->id = id;
> > -    s->domain = d;
> > -    s->domid = current->domain->domain_id;
> > +    s->target = d;
> > +
> > +    if ( !get_domain(currd) )
> > +        return -EACCES;
> 
> Use get_knownalive_domain() here, which cannot fail?
> 

Oh, didn't know about that... sounds better.

> > @@ -651,6 +660,8 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s,
> >      hvm_ioreq_server_remove_all_vcpus(s);
> >      hvm_ioreq_server_unmap_pages(s, is_default);
> >      hvm_ioreq_server_free_rangesets(s, is_default);
> > +
> > +    put_domain(s->emulator);
> >  }
> 
> Could you clarify in the description that this runs in the context of
> XEN_DOMCTL_destroydomain and hence there's no risk of target
> and emulator preventing each other from being cleaned up?
> 

Ok, sure.

  Paul

> Jan



Thread overview: 10+ messages
2018-03-16 16:58 [PATCH v2 0/4] stricter ioreq server permissions checks Paul Durrant
2018-03-16 16:58 ` [PATCH v2 1/4] x86/hvm: stop passing explicit domid to hvm_create_ioreq_server() Paul Durrant
2018-03-20 16:26   ` Jan Beulich
2018-03-16 16:58 ` [PATCH v2 2/4] x86/hvm: take a reference on ioreq server emulating domain Paul Durrant
2018-03-20 16:37   ` Jan Beulich
2018-03-20 16:57     ` Paul Durrant
2018-03-16 16:58 ` [PATCH v2 3/4] x86/hvm: re-structure some of the ioreq server look-up loops Paul Durrant
2018-03-20 16:41   ` Jan Beulich
2018-03-16 16:58 ` [PATCH v2 4/4] x86/hvm: add stricter permissions checks to ioreq server control plane Paul Durrant
2018-03-20 16:43   ` Jan Beulich
