* [PATCH 0/5] xen/vm-event: Cleanup
@ 2019-06-03 12:25 ` Andrew Cooper
  0 siblings, 0 replies; 24+ messages in thread
From: Andrew Cooper @ 2019-06-03 12:25 UTC (permalink / raw)
  To: Xen-devel
  Cc: Petre Pircalabu, Andrew Cooper, Tamas K Lengyel, Razvan Cojocaru

This came about from reviewing Petre's "Per vcpu vm_event channels" while
sitting in an airport with plenty of time to kill.  It started with patch 4
trying to get rid of the "k = i % d->max_vcpus;" expression, but see patch 4
for further details of why it has stayed.

Everything else is either ancillary cleanup I noticed while reviewing, or
issues which were copied/moved/extended during the series.

This series can be obtained in git form from:

  http://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen.git;a=shortlog;h=refs/heads/vmevent-cleanup-v1

There are no large functional changes, but it should be a better base to
refactor the interface from.

Andrew Cooper (5):
  xen/vm-event: Drop unused u_domctl parameter from vm_event_domctl()
  xen/vm-event: Expand vm_event_* spinlock macros and rename the lock
  xen/vm-event: Remove unnecessary vm_event_domain indirection
  xen/vm-event: Fix interactions with the vcpu list
  xen/vm-event: Misc fixups

 xen/common/domctl.c        |   6 +-
 xen/common/vm_event.c      | 199 +++++++++++++++++++++------------------------
 xen/include/xen/sched.h    |   3 +-
 xen/include/xen/vm_event.h |   3 +-
 4 files changed, 96 insertions(+), 115 deletions(-)

-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* [PATCH 1/5] xen/vm-event: Drop unused u_domctl parameter from vm_event_domctl()
@ 2019-06-03 12:25   ` Andrew Cooper
  0 siblings, 0 replies; 24+ messages in thread
From: Andrew Cooper @ 2019-06-03 12:25 UTC (permalink / raw)
  To: Xen-devel
  Cc: Petre Pircalabu, Andrew Cooper, Tamas K Lengyel, Razvan Cojocaru

This parameter isn't used at all.  Furthermore, elide the copyback in
failing cases, as it is only successful paths which generate data which
needs sending back to the caller.

Finally, drop a redundant d == NULL check, as that logic is all common
at the beginning of do_domctl().

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Razvan Cojocaru <rcojocaru@bitdefender.com>
CC: Tamas K Lengyel <tamas@tklengyel.com>
CC: Petre Pircalabu <ppircalabu@bitdefender.com>
---
 xen/common/domctl.c        | 6 +++---
 xen/common/vm_event.c      | 6 +-----
 xen/include/xen/vm_event.h | 3 +--
 3 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index bade9a6..72a4495 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -1018,9 +1018,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         break;
 
     case XEN_DOMCTL_vm_event_op:
-        ret = vm_event_domctl(d, &op->u.vm_event_op,
-                              guest_handle_cast(u_domctl, void));
-        copyback = 1;
+        ret = vm_event_domctl(d, &op->u.vm_event_op);
+        if ( ret == 0 )
+            copyback = true;
         break;
 
 #ifdef CONFIG_MEM_ACCESS
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 74a4755..902e152 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -583,8 +583,7 @@ void vm_event_cleanup(struct domain *d)
 #endif
 }
 
-int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec,
-                    XEN_GUEST_HANDLE_PARAM(void) u_domctl)
+int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec)
 {
     int rc;
 
@@ -594,9 +593,6 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec,
         return 0;
     }
 
-    if ( unlikely(d == NULL) )
-        return -ESRCH;
-
     rc = xsm_vm_event_control(XSM_PRIV, d, vec->mode, vec->op);
     if ( rc )
         return rc;
diff --git a/xen/include/xen/vm_event.h b/xen/include/xen/vm_event.h
index 7f6fb6d..3cc2b20 100644
--- a/xen/include/xen/vm_event.h
+++ b/xen/include/xen/vm_event.h
@@ -64,8 +64,7 @@ void vm_event_cancel_slot(struct domain *d, struct vm_event_domain *ved);
 void vm_event_put_request(struct domain *d, struct vm_event_domain *ved,
                           vm_event_request_t *req);
 
-int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec,
-                    XEN_GUEST_HANDLE_PARAM(void) u_domctl);
+int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec);
 
 void vm_event_vcpu_pause(struct vcpu *v);
 void vm_event_vcpu_unpause(struct vcpu *v);
-- 
2.1.4



* [PATCH 2/5] xen/vm-event: Expand vm_event_* spinlock macros and rename the lock
@ 2019-06-03 12:25   ` Andrew Cooper
  0 siblings, 0 replies; 24+ messages in thread
From: Andrew Cooper @ 2019-06-03 12:25 UTC (permalink / raw)
  To: Xen-devel
  Cc: Petre Pircalabu, Andrew Cooper, Tamas K Lengyel, Razvan Cojocaru

These serve no purpose but to add to the cognitive load of following
the code.  Remove the level of indirection.

Furthermore, the lock protects all data in vm_event_domain, making
ring_lock a poor choice of name.

For vm_event_get_response() and vm_event_grab_slot(), fold the exit
paths to have a single unlock, as the compiler can't make this
optimisation itself.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Razvan Cojocaru <rcojocaru@bitdefender.com>
CC: Tamas K Lengyel <tamas@tklengyel.com>
CC: Petre Pircalabu <ppircalabu@bitdefender.com>
---
 xen/common/vm_event.c   | 58 ++++++++++++++++++++++++-------------------------
 xen/include/xen/sched.h |  3 +--
 2 files changed, 30 insertions(+), 31 deletions(-)

diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 902e152..db975e9 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -35,10 +35,6 @@
 #define xen_rmb()  smp_rmb()
 #define xen_wmb()  smp_wmb()
 
-#define vm_event_ring_lock_init(_ved)  spin_lock_init(&(_ved)->ring_lock)
-#define vm_event_ring_lock(_ved)       spin_lock(&(_ved)->ring_lock)
-#define vm_event_ring_unlock(_ved)     spin_unlock(&(_ved)->ring_lock)
-
 static int vm_event_enable(
     struct domain *d,
     struct xen_domctl_vm_event_op *vec,
@@ -66,8 +62,8 @@ static int vm_event_enable(
     if ( ring_gfn == 0 )
         return -EOPNOTSUPP;
 
-    vm_event_ring_lock_init(*ved);
-    vm_event_ring_lock(*ved);
+    spin_lock_init(&(*ved)->lock);
+    spin_lock(&(*ved)->lock);
 
     rc = vm_event_init_domain(d);
 
@@ -101,13 +97,13 @@ static int vm_event_enable(
     /* Initialize the last-chance wait queue. */
     init_waitqueue_head(&(*ved)->wq);
 
-    vm_event_ring_unlock(*ved);
+    spin_unlock(&(*ved)->lock);
     return 0;
 
  err:
     destroy_ring_for_helper(&(*ved)->ring_page,
                             (*ved)->ring_pg_struct);
-    vm_event_ring_unlock(*ved);
+    spin_unlock(&(*ved)->lock);
     xfree(*ved);
     *ved = NULL;
 
@@ -200,11 +196,11 @@ static int vm_event_disable(struct domain *d, struct vm_event_domain **ved)
     {
         struct vcpu *v;
 
-        vm_event_ring_lock(*ved);
+        spin_lock(&(*ved)->lock);
 
         if ( !list_empty(&(*ved)->wq.list) )
         {
-            vm_event_ring_unlock(*ved);
+            spin_unlock(&(*ved)->lock);
             return -EBUSY;
         }
 
@@ -226,7 +222,7 @@ static int vm_event_disable(struct domain *d, struct vm_event_domain **ved)
 
         vm_event_cleanup_domain(d);
 
-        vm_event_ring_unlock(*ved);
+        spin_unlock(&(*ved)->lock);
     }
 
     xfree(*ved);
@@ -292,7 +288,7 @@ void vm_event_put_request(struct domain *d,
 
     req->version = VM_EVENT_INTERFACE_VERSION;
 
-    vm_event_ring_lock(ved);
+    spin_lock(&ved->lock);
 
     /* Due to the reservations, this step must succeed. */
     front_ring = &ved->front_ring;
@@ -319,7 +315,7 @@ void vm_event_put_request(struct domain *d,
         !atomic_read(&curr->vm_event_pause_count) )
         vm_event_mark_and_pause(curr, ved);
 
-    vm_event_ring_unlock(ved);
+    spin_unlock(&ved->lock);
 
     notify_via_xen_event_channel(d, ved->xen_port);
 }
@@ -329,17 +325,15 @@ static int vm_event_get_response(struct domain *d, struct vm_event_domain *ved,
 {
     vm_event_front_ring_t *front_ring;
     RING_IDX rsp_cons;
+    int rc = 0;
 
-    vm_event_ring_lock(ved);
+    spin_lock(&ved->lock);
 
     front_ring = &ved->front_ring;
     rsp_cons = front_ring->rsp_cons;
 
     if ( !RING_HAS_UNCONSUMED_RESPONSES(front_ring) )
-    {
-        vm_event_ring_unlock(ved);
-        return 0;
-    }
+        goto out;
 
     /* Copy response */
     memcpy(rsp, RING_GET_RESPONSE(front_ring, rsp_cons), sizeof(*rsp));
@@ -353,9 +347,12 @@ static int vm_event_get_response(struct domain *d, struct vm_event_domain *ved,
      * there may be additional space available in the ring. */
     vm_event_wake(d, ved);
 
-    vm_event_ring_unlock(ved);
+    rc = 1;
 
-    return 1;
+ out:
+    spin_unlock(&ved->lock);
+
+    return rc;
 }
 
 /*
@@ -455,35 +452,38 @@ void vm_event_cancel_slot(struct domain *d, struct vm_event_domain *ved)
     if( !vm_event_check_ring(ved) )
         return;
 
-    vm_event_ring_lock(ved);
+    spin_lock(&ved->lock);
     vm_event_release_slot(d, ved);
-    vm_event_ring_unlock(ved);
+    spin_unlock(&ved->lock);
 }
 
 static int vm_event_grab_slot(struct vm_event_domain *ved, int foreign)
 {
     unsigned int avail_req;
+    int rc;
 
     if ( !ved->ring_page )
         return -EOPNOTSUPP;
 
-    vm_event_ring_lock(ved);
+    spin_lock(&ved->lock);
 
     avail_req = vm_event_ring_available(ved);
+
+    rc = -EBUSY;
     if ( avail_req == 0 )
-    {
-        vm_event_ring_unlock(ved);
-        return -EBUSY;
-    }
+        goto out;
 
     if ( !foreign )
         ved->target_producers++;
     else
         ved->foreign_producers++;
 
-    vm_event_ring_unlock(ved);
+    rc = 0;
 
-    return 0;
+ out:
+    spin_unlock(&ved->lock);
+
+    return rc;
 }
 
 /* Simple try_grab wrapper for use in the wait_event() macro. */
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 2201fac..b9691fc 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -282,8 +282,7 @@ struct vcpu
 /* VM event */
 struct vm_event_domain
 {
-    /* ring lock */
-    spinlock_t ring_lock;
+    spinlock_t lock;
     /* The ring has 64 entries */
     unsigned char foreign_producers;
     unsigned char target_producers;
-- 
2.1.4



* [PATCH 3/5] xen/vm-event: Remove unnecessary vm_event_domain indirection
@ 2019-06-03 12:25   ` Andrew Cooper
  0 siblings, 0 replies; 24+ messages in thread
From: Andrew Cooper @ 2019-06-03 12:25 UTC (permalink / raw)
  To: Xen-devel
  Cc: Petre Pircalabu, Andrew Cooper, Tamas K Lengyel, Razvan Cojocaru

The use of (*ved)-> leads to poor code generation, as the compiler can't
assume the pointer hasn't changed, and results in hard-to-follow code.

For both vm_event_{en,dis}able(), rename the ved parameter to p_ved, and
work primarily with a local ved pointer.

This has a key advantage in vm_event_enable(), in that the partially
constructed vm_event_domain only becomes globally visible once it is
fully constructed.  As a consequence, the spinlock doesn't need holding.

Furthermore, rearrange the order of operations to be more sensible.
Check for repeated enables and a bad HVM_PARAM before allocating
memory, and gather the trivial setup into one place, dropping the
redundant zeroing.

No practical change that callers will notice.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Razvan Cojocaru <rcojocaru@bitdefender.com>
CC: Tamas K Lengyel <tamas@tklengyel.com>
CC: Petre Pircalabu <ppircalabu@bitdefender.com>
---
 xen/common/vm_event.c | 90 +++++++++++++++++++++++----------------------------
 1 file changed, 40 insertions(+), 50 deletions(-)

diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index db975e9..dcba98c 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -38,74 +38,63 @@
 static int vm_event_enable(
     struct domain *d,
     struct xen_domctl_vm_event_op *vec,
-    struct vm_event_domain **ved,
+    struct vm_event_domain **p_ved,
     int pause_flag,
     int param,
     xen_event_channel_notification_t notification_fn)
 {
     int rc;
     unsigned long ring_gfn = d->arch.hvm.params[param];
+    struct vm_event_domain *ved;
 
-    if ( !*ved )
-        *ved = xzalloc(struct vm_event_domain);
-    if ( !*ved )
-        return -ENOMEM;
-
-    /* Only one helper at a time. If the helper crashed,
-     * the ring is in an undefined state and so is the guest.
+    /*
+     * Only one connected agent at a time.  If the helper crashed, the ring is
+     * in an undefined state, and the guest is most likely unrecoverable.
      */
-    if ( (*ved)->ring_page )
-        return -EBUSY;;
+    if ( *p_ved != NULL )
+        return -EBUSY;
 
-    /* The parameter defaults to zero, and it should be
-     * set to something */
+    /* No chosen ring GFN?  Nothing we can do. */
     if ( ring_gfn == 0 )
         return -EOPNOTSUPP;
 
-    spin_lock_init(&(*ved)->lock);
-    spin_lock(&(*ved)->lock);
+    ved = xzalloc(struct vm_event_domain);
+    if ( !ved )
+        return -ENOMEM;
 
-    rc = vm_event_init_domain(d);
+    /* Trivial setup. */
+    spin_lock_init(&ved->lock);
+    init_waitqueue_head(&ved->wq);
+    ved->pause_flag = pause_flag;
 
+    rc = vm_event_init_domain(d);
     if ( rc < 0 )
         goto err;
 
-    rc = prepare_ring_for_helper(d, ring_gfn, &(*ved)->ring_pg_struct,
-                                 &(*ved)->ring_page);
+    rc = prepare_ring_for_helper(d, ring_gfn, &ved->ring_pg_struct,
+                                 &ved->ring_page);
     if ( rc < 0 )
         goto err;
 
-    /* Set the number of currently blocked vCPUs to 0. */
-    (*ved)->blocked = 0;
+    FRONT_RING_INIT(&ved->front_ring,
+                    (vm_event_sring_t *)ved->ring_page,
+                    PAGE_SIZE);
 
-    /* Allocate event channel */
     rc = alloc_unbound_xen_event_channel(d, 0, current->domain->domain_id,
                                          notification_fn);
     if ( rc < 0 )
         goto err;
 
-    (*ved)->xen_port = vec->u.enable.port = rc;
+    ved->xen_port = vec->u.enable.port = rc;
 
-    /* Prepare ring buffer */
-    FRONT_RING_INIT(&(*ved)->front_ring,
-                    (vm_event_sring_t *)(*ved)->ring_page,
-                    PAGE_SIZE);
-
-    /* Save the pause flag for this particular ring. */
-    (*ved)->pause_flag = pause_flag;
-
-    /* Initialize the last-chance wait queue. */
-    init_waitqueue_head(&(*ved)->wq);
+    /* Success.  Fill in the domain's appropriate ved. */
+    *p_ved = ved;
 
-    spin_unlock(&(*ved)->lock);
     return 0;
 
  err:
-    destroy_ring_for_helper(&(*ved)->ring_page,
-                            (*ved)->ring_pg_struct);
-    spin_unlock(&(*ved)->lock);
-    xfree(*ved);
-    *ved = NULL;
+    destroy_ring_for_helper(&ved->ring_page, ved->ring_pg_struct);
+    xfree(ved);
 
     return rc;
 }
@@ -190,43 +179,44 @@ void vm_event_wake(struct domain *d, struct vm_event_domain *ved)
         vm_event_wake_blocked(d, ved);
 }
 
-static int vm_event_disable(struct domain *d, struct vm_event_domain **ved)
+static int vm_event_disable(struct domain *d, struct vm_event_domain **p_ved)
 {
-    if ( vm_event_check_ring(*ved) )
+    struct vm_event_domain *ved = *p_ved;
+
+    if ( vm_event_check_ring(ved) )
     {
         struct vcpu *v;
 
-        spin_lock(&(*ved)->lock);
+        spin_lock(&ved->lock);
 
-        if ( !list_empty(&(*ved)->wq.list) )
+        if ( !list_empty(&ved->wq.list) )
         {
-            spin_unlock(&(*ved)->lock);
+            spin_unlock(&ved->lock);
             return -EBUSY;
         }
 
         /* Free domU's event channel and leave the other one unbound */
-        free_xen_event_channel(d, (*ved)->xen_port);
+        free_xen_event_channel(d, ved->xen_port);
 
         /* Unblock all vCPUs */
         for_each_vcpu ( d, v )
         {
-            if ( test_and_clear_bit((*ved)->pause_flag, &v->pause_flags) )
+            if ( test_and_clear_bit(ved->pause_flag, &v->pause_flags) )
             {
                 vcpu_unpause(v);
-                (*ved)->blocked--;
+                ved->blocked--;
             }
         }
 
-        destroy_ring_for_helper(&(*ved)->ring_page,
-                                (*ved)->ring_pg_struct);
+        destroy_ring_for_helper(&ved->ring_page, ved->ring_pg_struct);
 
         vm_event_cleanup_domain(d);
 
-        spin_unlock(&(*ved)->lock);
+        spin_unlock(&ved->lock);
     }
 
-    xfree(*ved);
-    *ved = NULL;
+    xfree(ved);
+    *p_ved = NULL;
 
     return 0;
 }
-- 
2.1.4


-        free_xen_event_channel(d, (*ved)->xen_port);
+        free_xen_event_channel(d, ved->xen_port);
 
         /* Unblock all vCPUs */
         for_each_vcpu ( d, v )
         {
-            if ( test_and_clear_bit((*ved)->pause_flag, &v->pause_flags) )
+            if ( test_and_clear_bit(ved->pause_flag, &v->pause_flags) )
             {
                 vcpu_unpause(v);
-                (*ved)->blocked--;
+                ved->blocked--;
             }
         }
 
-        destroy_ring_for_helper(&(*ved)->ring_page,
-                                (*ved)->ring_pg_struct);
+        destroy_ring_for_helper(&ved->ring_page, ved->ring_pg_struct);
 
         vm_event_cleanup_domain(d);
 
-        spin_unlock(&(*ved)->lock);
+        spin_unlock(&ved->lock);
     }
 
-    xfree(*ved);
-    *ved = NULL;
+    xfree(ved);
+    *p_ved = NULL;
 
     return 0;
 }
-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* [PATCH 4/5] xen/vm-event: Fix interactions with the vcpu list
@ 2019-06-03 12:25   ` Andrew Cooper
  0 siblings, 0 replies; 24+ messages in thread
From: Andrew Cooper @ 2019-06-03 12:25 UTC (permalink / raw)
  To: Xen-devel
  Cc: Petre Pircalabu, Andrew Cooper, Tamas K Lengyel, Razvan Cojocaru

vm_event_resume() should use domain_vcpu(), rather than open-coding it
without its Spectre v1 safety.

vm_event_wake_blocked() can't ever be invoked in a case where d->vcpu is
NULL, so drop the outer if() and reindent, fixing up style issues.

The comment, which is left alone, is false.  This algorithm still has
starvation issues when there is an asymmetric rate of generated events.

However, the existing logic is sufficiently complicated and fragile that
I don't think I've followed it fully, and because we're trying to
obsolete this interface, the safest course of action is to leave it
alone, rather than to end up making things subtly different.

Therefore, no practical change that callers would notice.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Razvan Cojocaru <rcojocaru@bitdefender.com>
CC: Tamas K Lengyel <tamas@tklengyel.com>
CC: Petre Pircalabu <ppircalabu@bitdefender.com>
---
 xen/common/vm_event.c | 38 ++++++++++++++++----------------------
 1 file changed, 16 insertions(+), 22 deletions(-)

diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index dcba98c..72f42b4 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -119,34 +119,29 @@ static unsigned int vm_event_ring_available(struct vm_event_domain *ved)
 static void vm_event_wake_blocked(struct domain *d, struct vm_event_domain *ved)
 {
     struct vcpu *v;
-    unsigned int avail_req = vm_event_ring_available(ved);
+    unsigned int i, j, k, avail_req = vm_event_ring_available(ved);
 
     if ( avail_req == 0 || ved->blocked == 0 )
         return;
 
     /* We remember which vcpu last woke up to avoid scanning always linearly
      * from zero and starving higher-numbered vcpus under high load */
-    if ( d->vcpu )
+    for ( i = ved->last_vcpu_wake_up + 1, j = 0; j < d->max_vcpus; i++, j++ )
     {
-        int i, j, k;
-
-        for (i = ved->last_vcpu_wake_up + 1, j = 0; j < d->max_vcpus; i++, j++)
-        {
-            k = i % d->max_vcpus;
-            v = d->vcpu[k];
-            if ( !v )
-                continue;
+        k = i % d->max_vcpus;
+        v = d->vcpu[k];
+        if ( !v )
+            continue;
 
-            if ( !(ved->blocked) || avail_req == 0 )
-               break;
+        if ( !ved->blocked || avail_req == 0 )
+            break;
 
-            if ( test_and_clear_bit(ved->pause_flag, &v->pause_flags) )
-            {
-                vcpu_unpause(v);
-                avail_req--;
-                ved->blocked--;
-                ved->last_vcpu_wake_up = k;
-            }
+        if ( test_and_clear_bit(ved->pause_flag, &v->pause_flags) )
+        {
+            vcpu_unpause(v);
+            avail_req--;
+            ved->blocked--;
+            ved->last_vcpu_wake_up = k;
         }
     }
 }
@@ -382,11 +377,10 @@ static int vm_event_resume(struct domain *d, struct vm_event_domain *ved)
         }
 
         /* Validate the vcpu_id in the response. */
-        if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
+        v = domain_vcpu(d, rsp.vcpu_id);
+        if ( !v )
             continue;
 
-        v = d->vcpu[rsp.vcpu_id];
-
         /*
          * In some cases the response type needs extra handling, so here
          * we call the appropriate handlers.
-- 
2.1.4



* [PATCH 5/5] xen/vm-event: Misc fixups
@ 2019-06-03 12:25   ` Andrew Cooper
  0 siblings, 0 replies; 24+ messages in thread
From: Andrew Cooper @ 2019-06-03 12:25 UTC (permalink / raw)
  To: Xen-devel
  Cc: Petre Pircalabu, Andrew Cooper, Tamas K Lengyel, Razvan Cojocaru

 * Drop redundant brackets, and inline qualifiers.
 * Insert newlines and spaces where appropriate.
 * Drop redundant NDEBUG - gdprintk() is already conditional.  Fix the
   logging level, as gdprintk() already prefixes the guest marker.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Razvan Cojocaru <rcojocaru@bitdefender.com>
CC: Tamas K Lengyel <tamas@tklengyel.com>
CC: Petre Pircalabu <ppircalabu@bitdefender.com>
---
 xen/common/vm_event.c | 21 ++++++++++++---------
 1 file changed, 12 insertions(+), 9 deletions(-)

diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 72f42b4..e872680 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -102,6 +102,7 @@ static int vm_event_enable(
 static unsigned int vm_event_ring_available(struct vm_event_domain *ved)
 {
     int avail_req = RING_FREE_REQUESTS(&ved->front_ring);
+
     avail_req -= ved->target_producers;
     avail_req -= ved->foreign_producers;
 
@@ -168,7 +169,7 @@ static void vm_event_wake_queued(struct domain *d, struct vm_event_domain *ved)
  */
 void vm_event_wake(struct domain *d, struct vm_event_domain *ved)
 {
-    if (!list_empty(&ved->wq.list))
+    if ( !list_empty(&ved->wq.list) )
         vm_event_wake_queued(d, ved);
     else
         vm_event_wake_blocked(d, ved);
@@ -216,8 +217,8 @@ static int vm_event_disable(struct domain *d, struct vm_event_domain **p_ved)
     return 0;
 }
 
-static inline void vm_event_release_slot(struct domain *d,
-                                         struct vm_event_domain *ved)
+static void vm_event_release_slot(struct domain *d,
+                                  struct vm_event_domain *ved)
 {
     /* Update the accounting */
     if ( current->domain == d )
@@ -258,17 +259,16 @@ void vm_event_put_request(struct domain *d,
     RING_IDX req_prod;
     struct vcpu *curr = current;
 
-    if( !vm_event_check_ring(ved))
+    if( !vm_event_check_ring(ved) )
         return;
 
     if ( curr->domain != d )
     {
         req->flags |= VM_EVENT_FLAG_FOREIGN;
-#ifndef NDEBUG
+
         if ( !(req->flags & VM_EVENT_FLAG_VCPU_PAUSED) )
-            gdprintk(XENLOG_G_WARNING, "d%dv%d was not paused.\n",
+            gdprintk(XENLOG_WARNING, "d%dv%d was not paused.\n",
                      d->domain_id, req->vcpu_id);
-#endif
     }
 
     req->version = VM_EVENT_INTERFACE_VERSION;
@@ -474,6 +474,7 @@ static int vm_event_grab_slot(struct vm_event_domain *ved, int foreign)
 static int vm_event_wait_try_grab(struct vm_event_domain *ved, int *rc)
 {
     *rc = vm_event_grab_slot(ved, 0);
+
     return *rc;
 }
 
@@ -481,13 +482,15 @@ static int vm_event_wait_try_grab(struct vm_event_domain *ved, int *rc)
 static int vm_event_wait_slot(struct vm_event_domain *ved)
 {
     int rc = -EBUSY;
+
     wait_event(ved->wq, vm_event_wait_try_grab(ved, &rc) != -EBUSY);
+
     return rc;
 }
 
 bool vm_event_check_ring(struct vm_event_domain *ved)
 {
-    return (ved && ved->ring_page);
+    return ved && ved->ring_page;
 }
 
 /*
@@ -511,7 +514,7 @@ int __vm_event_claim_slot(struct domain *d, struct vm_event_domain *ved,
     if ( (current->domain == d) && allow_sleep )
         return vm_event_wait_slot(ved);
     else
-        return vm_event_grab_slot(ved, (current->domain != d));
+        return vm_event_grab_slot(ved, current->domain != d);
 }
 
 #ifdef CONFIG_HAS_MEM_PAGING
-- 
2.1.4



* Re: [PATCH 1/5] xen/vm-event: Drop unused u_domctl parameter from vm_event_domctl()
@ 2019-06-03 13:52     ` Razvan Cojocaru
  0 siblings, 0 replies; 24+ messages in thread
From: Razvan Cojocaru @ 2019-06-03 13:52 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel; +Cc: Petre Pircalabu, Tamas K Lengyel

On 6/3/19 3:25 PM, Andrew Cooper wrote:
> This parameter isn't used at all.  Furthermore, elide the copyback in
> failing cases, as it is only successful paths which generate data which
> needs sending back to the caller.
> 
> Finally, drop a redundant d == NULL check, as that logic is all common
> at the beginning of do_domctl().
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com>


Thanks,
Razvan


* Re: [PATCH 1/5] xen/vm-event: Drop unused u_domctl parameter from vm_event_domctl()
@ 2019-06-03 13:52     ` Jan Beulich
  0 siblings, 0 replies; 24+ messages in thread
From: Jan Beulich @ 2019-06-03 13:52 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Petre Pircalabu, xen-devel, Tamas K Lengyel, Razvan Cojocaru

>>> On 03.06.19 at 14:25, <andrew.cooper3@citrix.com> wrote:
> This parameter isn't used at all.  Furthermore, elide the copyback in
> failing cases, as it is only successful paths which generate data which
> needs sending back to the caller.
> 
> Finally, drop a redundant d == NULL check, as that logic is all common
> at the beginning of do_domctl().
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Razvan Cojocaru <rcojocaru@bitdefender.com>
> CC: Tamas K Lengyel <tamas@tklengyel.com>
> CC: Petre Pircalabu <ppircalabu@bitdefender.com>
> ---
>  xen/common/domctl.c        | 6 +++---

Just in case it's wanted/needed:
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan




* Re: [PATCH 2/5] xen/vm-event: Expand vm_event_* spinlock macros and rename the lock
@ 2019-06-03 14:02     ` Razvan Cojocaru
  0 siblings, 0 replies; 24+ messages in thread
From: Razvan Cojocaru @ 2019-06-03 14:02 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel; +Cc: Petre Pircalabu, Tamas K Lengyel

On 6/3/19 3:25 PM, Andrew Cooper wrote:
> These serve no purpose, but to add to the cognitive load of following
> the code.  Remove the level of indirection.
> 
> Furthermore, the lock protects all data in vm_event_domain, making
> ring_lock a poor choice of name.
> 
> For vm_event_get_response() and vm_event_grab_slot(), fold the exit
> paths to have a single unlock, as the compiler can't make this
> optimisation itself.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com>


Thanks,
Razvan


* Re: [PATCH 5/5] xen/vm-event: Misc fixups
@ 2019-06-03 14:16     ` Razvan Cojocaru
  0 siblings, 0 replies; 24+ messages in thread
From: Razvan Cojocaru @ 2019-06-03 14:16 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel; +Cc: Petre Pircalabu, Tamas K Lengyel

On 6/3/19 3:25 PM, Andrew Cooper wrote:
>   * Drop redundant brackets, and inline qualifiers.
>   * Insert newlines and spaces where appropriate.
>   * Drop redundant NDEBUG - gdprintk() is already conditional.  Fix the
>     logging level, as gdprintk() already prefixes the guest marker.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Razvan Cojocaru <rcojocaru@bitdefender.com>
> CC: Tamas K Lengyel <tamas@tklengyel.com>
> CC: Petre Pircalabu <ppircalabu@bitdefender.com>
> ---
>   xen/common/vm_event.c | 21 ++++++++++++---------
>   1 file changed, 12 insertions(+), 9 deletions(-)
> 
> diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
> index 72f42b4..e872680 100644
> --- a/xen/common/vm_event.c
> +++ b/xen/common/vm_event.c
> @@ -102,6 +102,7 @@ static int vm_event_enable(
>   static unsigned int vm_event_ring_available(struct vm_event_domain *ved)
>   {
>       int avail_req = RING_FREE_REQUESTS(&ved->front_ring);
> +
>       avail_req -= ved->target_producers;
>       avail_req -= ved->foreign_producers;
>   
> @@ -168,7 +169,7 @@ static void vm_event_wake_queued(struct domain *d, struct vm_event_domain *ved)
>    */
>   void vm_event_wake(struct domain *d, struct vm_event_domain *ved)
>   {
> -    if (!list_empty(&ved->wq.list))
> +    if ( !list_empty(&ved->wq.list) )
>           vm_event_wake_queued(d, ved);
>       else
>           vm_event_wake_blocked(d, ved);
> @@ -216,8 +217,8 @@ static int vm_event_disable(struct domain *d, struct vm_event_domain **p_ved)
>       return 0;
>   }
>   
> -static inline void vm_event_release_slot(struct domain *d,
> -                                         struct vm_event_domain *ved)
> +static void vm_event_release_slot(struct domain *d,
> +                                  struct vm_event_domain *ved)

But inline is still asking the compiler to try and generate code that 
doesn't end up CALLing an actual function, so is it really redundant 
here? I do realize that for most cases the compiler will have its way 
with this code anyway - especially since the function is static - but 
"static" is not guaranteed to also mean "inline", is it?

In any case,
Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com>


Thanks,
Razvan


* Re: [PATCH 3/5] xen/vm-event: Remove unnecessary vm_event_domain indirection
@ 2019-06-03 14:31     ` Razvan Cojocaru
  0 siblings, 0 replies; 24+ messages in thread
From: Razvan Cojocaru @ 2019-06-03 14:31 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel; +Cc: Petre Pircalabu, Tamas K Lengyel

On 6/3/19 3:25 PM, Andrew Cooper wrote:
> The use of (*ved)-> leads to poor code generation, as the compiler can't
> assume the pointer hasn't changed, and results in hard-to-follow code.
> 
> For both vm_event_{en,dis}able(), rename the ved parameter to p_ved, and
> work primarily with a local ved pointer.
> 
> This has a key advantage in vm_event_enable(), in that the partially
> constructed vm_event_domain only becomes globally visible once it is
> fully constructed.  As a consequence, the spinlock doesn't need holding.
> 
> Furthermore, rearrange the order of operations to be more sensible.
> Check for repeated enables and a bad HVM_PARAM before allocating
> memory, and gather the trivial setup into one place, dropping the
> redundant zeroing.
> 
> No practical change that callers will notice.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com>


Thanks,
Razvan

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 4/5] xen/vm-event: Fix interactions with the vcpu list
@ 2019-06-03 14:56     ` Razvan Cojocaru
  0 siblings, 0 replies; 24+ messages in thread
From: Razvan Cojocaru @ 2019-06-03 14:56 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel; +Cc: Petre Pircalabu, Tamas K Lengyel

On 6/3/19 3:25 PM, Andrew Cooper wrote:
> vm_event_resume() should use domain_vcpu(), rather than opencoding it
> without its Spectre v1 safety.
> 
> vm_event_wake_blocked() can't ever be invoked in a case where d->vcpu is
> NULL, so drop the outer if() and reindent, fixing up style issues.
> 
> The comment, which is left alone, is false.  This algorithm still has
> starvation issues when there is an asymmetric rate of generated events.
> 
> However, the existing logic is sufficiently complicated and fragile that
> I don't think I've followed it fully, and because we're trying to
> obsolete this interface, the safest course of action is to leave it
> alone, rather than to end up making things subtly different.
> 
> Therefore, no practical change that callers would notice.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com>


Thanks,
Razvan

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2019-06-03 14:57 UTC | newest]

Thread overview: 24+ messages
2019-06-03 12:25 [PATCH 0/5] xen/vm-event: Cleanup Andrew Cooper
2019-06-03 12:25 ` [Xen-devel] " Andrew Cooper
2019-06-03 12:25 ` [PATCH 1/5] xen/vm-event: Drop unused u_domctl parameter from vm_event_domctl() Andrew Cooper
2019-06-03 12:25   ` [Xen-devel] " Andrew Cooper
2019-06-03 13:52   ` Razvan Cojocaru
2019-06-03 13:52     ` [Xen-devel] " Razvan Cojocaru
2019-06-03 13:52   ` Jan Beulich
2019-06-03 13:52     ` [Xen-devel] " Jan Beulich
2019-06-03 12:25 ` [PATCH 2/5] xen/vm-event: Expand vm_event_* spinlock macros and rename the lock Andrew Cooper
2019-06-03 12:25   ` [Xen-devel] " Andrew Cooper
2019-06-03 14:02   ` Razvan Cojocaru
2019-06-03 14:02     ` [Xen-devel] " Razvan Cojocaru
2019-06-03 12:25 ` [PATCH 3/5] xen/vm-event: Remove unnecessary vm_event_domain indirection Andrew Cooper
2019-06-03 12:25   ` [Xen-devel] " Andrew Cooper
2019-06-03 14:31   ` Razvan Cojocaru
2019-06-03 14:31     ` [Xen-devel] " Razvan Cojocaru
2019-06-03 12:25 ` [PATCH 4/5] xen/vm-event: Fix interactions with the vcpu list Andrew Cooper
2019-06-03 12:25   ` [Xen-devel] " Andrew Cooper
2019-06-03 14:56   ` Razvan Cojocaru
2019-06-03 14:56     ` [Xen-devel] " Razvan Cojocaru
2019-06-03 12:25 ` [PATCH 5/5] xen/vm-event: Misc fixups Andrew Cooper
2019-06-03 12:25   ` [Xen-devel] " Andrew Cooper
2019-06-03 14:16   ` Razvan Cojocaru
2019-06-03 14:16     ` [Xen-devel] " Razvan Cojocaru
