* [RFC PATCH 0/3] xen/spinlock: make recursive spinlocks a dedicated type
@ 2022-09-10 15:49 Juergen Gross
  2022-09-10 15:49 ` [RFC PATCH 1/3] xen/spinlock: add explicit non-recursive locking functions Juergen Gross
                   ` (3 more replies)
  0 siblings, 4 replies; 19+ messages in thread
From: Juergen Gross @ 2022-09-10 15:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk, Andrew Cooper,
	George Dunlap, Jan Beulich, Wei Liu, Roger Pau Monné,
	Tamas K Lengyel, Lukasz Hawrylko, Daniel P. Smith,
	Mateusz Mówka, Paul Durrant

Instead of allowing normal spinlocks to be used as recursive ones, too,
make recursive spinlocks a dedicated lock type.

This will make the spinlock structure smaller in production builds and
add type safety.
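
As a rough usage sketch (not itself part of the series, and using the
names introduced by the patches below), the intended end state
distinguishes the two lock types both at definition and at call sites:

    /* Plain, non-recursive lock: 4 bytes in production builds. */
    static DEFINE_SPINLOCK(info_lock);

    /* Lock which may legitimately nest on the same CPU. */
    static DEFINE_SPINLOCK_RECURSIVE(state_lock);

    static void example(void)
    {
        spin_lock(&info_lock);              /* plain lock, plain API */
        spin_unlock(&info_lock);

        spin_lock_recursive(&state_lock);   /* may be taken again ... */
        spin_lock_recursive(&state_lock);   /* ... on the same CPU */
        spin_unlock_recursive(&state_lock);
        spin_unlock_recursive(&state_lock);
    }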

Juergen Gross (3):
  xen/spinlock: add explicit non-recursive locking functions
  xen/spinlock: split recursive spinlocks from normal ones
  xen/spinlock: support higher number of cpus

 xen/arch/arm/mm.c             |  4 +--
 xen/arch/x86/domain.c         | 12 +++----
 xen/arch/x86/include/asm/mm.h |  2 +-
 xen/arch/x86/mm.c             | 12 +++----
 xen/arch/x86/mm/mem_sharing.c |  8 ++---
 xen/arch/x86/mm/mm-locks.h    |  2 +-
 xen/arch/x86/mm/p2m-pod.c     |  6 ++--
 xen/arch/x86/mm/p2m.c         |  4 +--
 xen/arch/x86/numa.c           |  4 +--
 xen/arch/x86/tboot.c          |  4 +--
 xen/common/domain.c           |  6 ++--
 xen/common/domctl.c           |  4 +--
 xen/common/grant_table.c      | 10 +++---
 xen/common/ioreq.c            |  2 +-
 xen/common/memory.c           |  4 +--
 xen/common/page_alloc.c       | 18 +++++-----
 xen/common/spinlock.c         | 22 +++++++-----
 xen/drivers/char/console.c    | 24 ++++++-------
 xen/drivers/passthrough/pci.c |  4 +--
 xen/include/xen/sched.h       |  6 ++--
 xen/include/xen/spinlock.h    | 68 +++++++++++++++++++++++++----------
 21 files changed, 131 insertions(+), 95 deletions(-)

-- 
2.35.3




* [RFC PATCH 1/3] xen/spinlock: add explicit non-recursive locking functions
  2022-09-10 15:49 [RFC PATCH 0/3] xen/spinlock: make recursive spinlocks a dedicated type Juergen Gross
@ 2022-09-10 15:49 ` Juergen Gross
  2022-12-14 10:21   ` Jan Beulich
  2022-09-10 15:49 ` [RFC PATCH 2/3] xen/spinlock: split recursive spinlocks from normal ones Juergen Gross
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 19+ messages in thread
From: Juergen Gross @ 2022-09-10 15:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk, Andrew Cooper,
	George Dunlap, Jan Beulich, Wei Liu, Roger Pau Monné,
	Tamas K Lengyel, Lukasz Hawrylko, Daniel P. Smith,
	Mateusz Mówka, Paul Durrant

In order to prepare a type-safe recursive spinlock structure, add
explicitly non-recursive locking functions to be used for non-recursive
locking of spinlocks which are also used recursively.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
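Illustrative sketch, not part of the commit message: the call-site split
this patch prepares, using hypothetical helper functions. Sites known
never to nest switch to the new *_nonrecursive() variants, while sites
which may nest keep the recursive forms.

    /* Hypothetical caller which is known never to nest: */
    static void never_nesting_site(struct domain *d)
    {
        spin_lock_nonrecursive(&d->page_alloc_lock);
        /* ... plain critical section, never re-entered ... */
        spin_unlock_nonrecursive(&d->page_alloc_lock);
    }

    /* Hypothetical caller which may be re-entered on the same CPU: */
    static void possibly_nesting_site(struct domain *d)
    {
        spin_lock_recursive(&d->page_alloc_lock);
        /* ... may call code taking the lock again on this CPU ... */
        spin_unlock_recursive(&d->page_alloc_lock);
    }
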
 xen/arch/arm/mm.c             |  4 ++--
 xen/arch/x86/domain.c         | 12 ++++++------
 xen/arch/x86/mm.c             | 12 ++++++------
 xen/arch/x86/mm/mem_sharing.c |  8 ++++----
 xen/arch/x86/mm/p2m-pod.c     |  4 ++--
 xen/arch/x86/mm/p2m.c         |  4 ++--
 xen/arch/x86/numa.c           |  4 ++--
 xen/arch/x86/tboot.c          |  4 ++--
 xen/common/domain.c           |  4 ++--
 xen/common/domctl.c           |  4 ++--
 xen/common/grant_table.c      | 10 +++++-----
 xen/common/ioreq.c            |  2 +-
 xen/common/memory.c           |  4 ++--
 xen/common/page_alloc.c       | 18 +++++++++---------
 xen/drivers/char/console.c    | 24 ++++++++++++------------
 xen/drivers/passthrough/pci.c |  4 ++--
 xen/include/xen/spinlock.h    | 17 ++++++++++++++++-
 17 files changed, 77 insertions(+), 62 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 11ee49598b..bf88d2cab8 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1284,7 +1284,7 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
     if ( page_get_owner(page) == d )
         return;
 
-    spin_lock(&d->page_alloc_lock);
+    spin_lock_nonrecursive(&d->page_alloc_lock);
 
     /*
      * The incremented type count pins as writable or read-only.
@@ -1315,7 +1315,7 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
         page_list_add_tail(page, &d->xenpage_list);
     }
 
-    spin_unlock(&d->page_alloc_lock);
+    spin_unlock_nonrecursive(&d->page_alloc_lock);
 }
 
 int xenmem_add_to_physmap_one(
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 41e1e3f272..a66846a6d1 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -213,7 +213,7 @@ void dump_pageframe_info(struct domain *d)
     {
         unsigned long total[MASK_EXTR(PGT_type_mask, PGT_type_mask) + 1] = {};
 
-        spin_lock(&d->page_alloc_lock);
+        spin_lock_nonrecursive(&d->page_alloc_lock);
         page_list_for_each ( page, &d->page_list )
         {
             unsigned int index = MASK_EXTR(page->u.inuse.type_info,
@@ -232,13 +232,13 @@ void dump_pageframe_info(struct domain *d)
                    _p(mfn_x(page_to_mfn(page))),
                    page->count_info, page->u.inuse.type_info);
         }
-        spin_unlock(&d->page_alloc_lock);
+        spin_unlock_nonrecursive(&d->page_alloc_lock);
     }
 
     if ( is_hvm_domain(d) )
         p2m_pod_dump_data(d);
 
-    spin_lock(&d->page_alloc_lock);
+    spin_lock_nonrecursive(&d->page_alloc_lock);
 
     page_list_for_each ( page, &d->xenpage_list )
     {
@@ -254,7 +254,7 @@ void dump_pageframe_info(struct domain *d)
                page->count_info, page->u.inuse.type_info);
     }
 
-    spin_unlock(&d->page_alloc_lock);
+    spin_unlock_nonrecursive(&d->page_alloc_lock);
 }
 
 void update_guest_memory_policy(struct vcpu *v,
@@ -2456,10 +2456,10 @@ int domain_relinquish_resources(struct domain *d)
         }
 #endif
 
-        spin_lock(&d->page_alloc_lock);
+        spin_lock_nonrecursive(&d->page_alloc_lock);
         page_list_splice(&d->arch.relmem_list, &d->page_list);
         INIT_PAGE_LIST_HEAD(&d->arch.relmem_list);
-        spin_unlock(&d->page_alloc_lock);
+        spin_unlock_nonrecursive(&d->page_alloc_lock);
 
     PROGRESS(xen):
 
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index db1817b691..e084ba04ad 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -499,7 +499,7 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
 
     set_gpfn_from_mfn(mfn_x(page_to_mfn(page)), INVALID_M2P_ENTRY);
 
-    spin_lock(&d->page_alloc_lock);
+    spin_lock_nonrecursive(&d->page_alloc_lock);
 
     /* The incremented type count pins as writable or read-only. */
     page->u.inuse.type_info =
@@ -519,7 +519,7 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
         page_list_add_tail(page, &d->xenpage_list);
     }
 
-    spin_unlock(&d->page_alloc_lock);
+    spin_unlock_nonrecursive(&d->page_alloc_lock);
 }
 
 void make_cr3(struct vcpu *v, mfn_t mfn)
@@ -3586,11 +3586,11 @@ long do_mmuext_op(
             {
                 bool drop_ref;
 
-                spin_lock(&pg_owner->page_alloc_lock);
+                spin_lock_nonrecursive(&pg_owner->page_alloc_lock);
                 drop_ref = (pg_owner->is_dying &&
                             test_and_clear_bit(_PGT_pinned,
                                                &page->u.inuse.type_info));
-                spin_unlock(&pg_owner->page_alloc_lock);
+                spin_unlock_nonrecursive(&pg_owner->page_alloc_lock);
                 if ( drop_ref )
                 {
         pin_drop:
@@ -4413,7 +4413,7 @@ int steal_page(
      * that it might be upon return from alloc_domheap_pages with
      * MEMF_no_owner set.
      */
-    spin_lock(&d->page_alloc_lock);
+    spin_lock_nonrecursive(&d->page_alloc_lock);
 
     BUG_ON(page->u.inuse.type_info & (PGT_count_mask | PGT_locked |
                                       PGT_pinned));
@@ -4425,7 +4425,7 @@ int steal_page(
     if ( !(memflags & MEMF_no_refcount) && !domain_adjust_tot_pages(d, -1) )
         drop_dom_ref = true;
 
-    spin_unlock(&d->page_alloc_lock);
+    spin_unlock_nonrecursive(&d->page_alloc_lock);
 
     if ( unlikely(drop_dom_ref) )
         put_domain(d);
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 649d93dc54..89817dc427 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -758,11 +758,11 @@ static int page_make_private(struct domain *d, struct page_info *page)
     if ( !get_page(page, dom_cow) )
         return -EINVAL;
 
-    spin_lock(&d->page_alloc_lock);
+    spin_lock_nonrecursive(&d->page_alloc_lock);
 
     if ( d->is_dying )
     {
-        spin_unlock(&d->page_alloc_lock);
+        spin_unlock_nonrecursive(&d->page_alloc_lock);
         put_page(page);
         return -EBUSY;
     }
@@ -770,7 +770,7 @@ static int page_make_private(struct domain *d, struct page_info *page)
     expected_type = (PGT_shared_page | PGT_validated | PGT_locked | 2);
     if ( page->u.inuse.type_info != expected_type )
     {
-        spin_unlock(&d->page_alloc_lock);
+        spin_unlock_nonrecursive(&d->page_alloc_lock);
         put_page(page);
         return -EEXIST;
     }
@@ -787,7 +787,7 @@ static int page_make_private(struct domain *d, struct page_info *page)
     if ( domain_adjust_tot_pages(d, 1) == 1 )
         get_knownalive_domain(d);
     page_list_add_tail(page, &d->page_list);
-    spin_unlock(&d->page_alloc_lock);
+    spin_unlock_nonrecursive(&d->page_alloc_lock);
 
     put_page(page);
 
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index fc110506dc..deab55648c 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -39,7 +39,7 @@
 static inline void lock_page_alloc(struct p2m_domain *p2m)
 {
     page_alloc_mm_pre_lock(p2m->domain);
-    spin_lock(&(p2m->domain->page_alloc_lock));
+    spin_lock_nonrecursive(&(p2m->domain->page_alloc_lock));
     page_alloc_mm_post_lock(p2m->domain,
                             p2m->domain->arch.page_alloc_unlock_level);
 }
@@ -47,7 +47,7 @@ static inline void lock_page_alloc(struct p2m_domain *p2m)
 static inline void unlock_page_alloc(struct p2m_domain *p2m)
 {
     page_alloc_mm_unlock(p2m->domain->arch.page_alloc_unlock_level);
-    spin_unlock(&(p2m->domain->page_alloc_lock));
+    spin_unlock_nonrecursive(&(p2m->domain->page_alloc_lock));
 }
 
 /*
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index a405ee5fde..30bc248f72 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2245,7 +2245,7 @@ void audit_p2m(struct domain *d,
 
     /* Audit part two: walk the domain's page allocation list, checking
      * the m2p entries. */
-    spin_lock(&d->page_alloc_lock);
+    spin_lock_nonrecursive(&d->page_alloc_lock);
     page_list_for_each ( page, &d->page_list )
     {
         mfn = mfn_x(page_to_mfn(page));
@@ -2297,7 +2297,7 @@ void audit_p2m(struct domain *d,
         P2M_PRINTK("OK: mfn=%#lx, gfn=%#lx, p2mfn=%#lx\n",
                        mfn, gfn, mfn_x(p2mfn));
     }
-    spin_unlock(&d->page_alloc_lock);
+    spin_unlock_nonrecursive(&d->page_alloc_lock);
 
     pod_unlock(p2m);
     p2m_unlock(p2m);
diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c
index 627ae8aa95..90fbfdcb31 100644
--- a/xen/arch/x86/numa.c
+++ b/xen/arch/x86/numa.c
@@ -425,13 +425,13 @@ static void cf_check dump_numa(unsigned char key)
         for_each_online_node ( i )
             page_num_node[i] = 0;
 
-        spin_lock(&d->page_alloc_lock);
+        spin_lock_nonrecursive(&d->page_alloc_lock);
         page_list_for_each(page, &d->page_list)
         {
             i = phys_to_nid(page_to_maddr(page));
             page_num_node[i]++;
         }
-        spin_unlock(&d->page_alloc_lock);
+        spin_unlock_nonrecursive(&d->page_alloc_lock);
 
         for_each_online_node ( i )
             printk("    Node %u: %u\n", i, page_num_node[i]);
diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
index fe1abfdf08..93e8e3e90f 100644
--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -215,14 +215,14 @@ static void tboot_gen_domain_integrity(const uint8_t key[TB_KEY_SIZE],
             continue;
         printk("MACing Domain %u\n", d->domain_id);
 
-        spin_lock(&d->page_alloc_lock);
+        spin_lock_nonrecursive(&d->page_alloc_lock);
         page_list_for_each(page, &d->page_list)
         {
             void *pg = __map_domain_page(page);
             vmac_update(pg, PAGE_SIZE, &ctx);
             unmap_domain_page(pg);
         }
-        spin_unlock(&d->page_alloc_lock);
+        spin_unlock_nonrecursive(&d->page_alloc_lock);
 
         if ( !is_idle_domain(d) )
         {
diff --git a/xen/common/domain.c b/xen/common/domain.c
index c23f449451..51160a4b5c 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -598,8 +598,8 @@ struct domain *domain_create(domid_t domid,
 
     atomic_set(&d->refcnt, 1);
     RCU_READ_LOCK_INIT(&d->rcu_lock);
-    spin_lock_init_prof(d, domain_lock);
-    spin_lock_init_prof(d, page_alloc_lock);
+    spin_lock_recursive_init_prof(d, domain_lock);
+    spin_lock_recursive_init_prof(d, page_alloc_lock);
     spin_lock_init(&d->hypercall_deadlock_mutex);
     INIT_PAGE_LIST_HEAD(&d->page_list);
     INIT_PAGE_LIST_HEAD(&d->extra_page_list);
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 452266710a..09870c87e0 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -651,14 +651,14 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
     {
         uint64_t new_max = op->u.max_mem.max_memkb >> (PAGE_SHIFT - 10);
 
-        spin_lock(&d->page_alloc_lock);
+        spin_lock_nonrecursive(&d->page_alloc_lock);
         /*
          * NB. We removed a check that new_max >= current tot_pages; this means
          * that the domain will now be allowed to "ratchet" down to new_max. In
          * the meantime, while tot > max, all new allocations are disallowed.
          */
         d->max_pages = min(new_max, (uint64_t)(typeof(d->max_pages))-1);
-        spin_unlock(&d->page_alloc_lock);
+        spin_unlock_nonrecursive(&d->page_alloc_lock);
         break;
     }
 
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index ad773a6996..7acf8a9f6c 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -2349,7 +2349,7 @@ gnttab_transfer(
             mfn = page_to_mfn(page);
         }
 
-        spin_lock(&e->page_alloc_lock);
+        spin_lock_nonrecursive(&e->page_alloc_lock);
 
         /*
          * Check that 'e' will accept the page and has reservation
@@ -2360,7 +2360,7 @@ gnttab_transfer(
              unlikely(domain_tot_pages(e) >= e->max_pages) ||
              unlikely(!(e->tot_pages + 1)) )
         {
-            spin_unlock(&e->page_alloc_lock);
+            spin_unlock_nonrecursive(&e->page_alloc_lock);
 
             if ( e->is_dying )
                 gdprintk(XENLOG_INFO, "Transferee d%d is dying\n",
@@ -2384,7 +2384,7 @@ gnttab_transfer(
          * safely drop the lock and re-aquire it later to add page to the
          * pagelist.
          */
-        spin_unlock(&e->page_alloc_lock);
+        spin_unlock_nonrecursive(&e->page_alloc_lock);
         okay = gnttab_prepare_for_transfer(e, d, gop.ref);
 
         /*
@@ -2400,9 +2400,9 @@ gnttab_transfer(
              * Need to grab this again to safely free our "reserved"
              * page in the page total
              */
-            spin_lock(&e->page_alloc_lock);
+            spin_lock_nonrecursive(&e->page_alloc_lock);
             drop_dom_ref = !domain_adjust_tot_pages(e, -1);
-            spin_unlock(&e->page_alloc_lock);
+            spin_unlock_nonrecursive(&e->page_alloc_lock);
 
             if ( okay /* i.e. e->is_dying due to the surrounding if() */ )
                 gdprintk(XENLOG_INFO, "Transferee d%d is now dying\n",
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 4617aef29b..c46a5d70e6 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -1329,7 +1329,7 @@ unsigned int ioreq_broadcast(ioreq_t *p, bool buffered)
 
 void ioreq_domain_init(struct domain *d)
 {
-    spin_lock_init(&d->ioreq_server.lock);
+    spin_lock_recursive_init(&d->ioreq_server.lock);
 
     arch_ioreq_domain_init(d);
 }
diff --git a/xen/common/memory.c b/xen/common/memory.c
index ae8163a738..0b4313832e 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -769,10 +769,10 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
                               (1UL << in_chunk_order)) -
                              (j * (1UL << exch.out.extent_order)));
 
-                spin_lock(&d->page_alloc_lock);
+                spin_lock_nonrecursive(&d->page_alloc_lock);
                 drop_dom_ref = (dec_count &&
                                 !domain_adjust_tot_pages(d, -dec_count));
-                spin_unlock(&d->page_alloc_lock);
+                spin_unlock_nonrecursive(&d->page_alloc_lock);
 
                 if ( drop_dom_ref )
                     put_domain(d);
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 62afb07bc6..35e6015ce2 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -469,7 +469,7 @@ unsigned long domain_adjust_tot_pages(struct domain *d, long pages)
 {
     long dom_before, dom_after, dom_claimed, sys_before, sys_after;
 
-    ASSERT(spin_is_locked(&d->page_alloc_lock));
+    ASSERT(spin_recursive_is_locked(&d->page_alloc_lock));
     d->tot_pages += pages;
 
     /*
@@ -508,7 +508,7 @@ int domain_set_outstanding_pages(struct domain *d, unsigned long pages)
      * must always take the global heap_lock rather than only in the much
      * rarer case that d->outstanding_pages is non-zero
      */
-    spin_lock(&d->page_alloc_lock);
+    spin_lock_nonrecursive(&d->page_alloc_lock);
     spin_lock(&heap_lock);
 
     /* pages==0 means "unset" the claim. */
@@ -554,7 +554,7 @@ int domain_set_outstanding_pages(struct domain *d, unsigned long pages)
 
 out:
     spin_unlock(&heap_lock);
-    spin_unlock(&d->page_alloc_lock);
+    spin_unlock_nonrecursive(&d->page_alloc_lock);
     return ret;
 }
 
@@ -2328,7 +2328,7 @@ int assign_pages(
     int rc = 0;
     unsigned int i;
 
-    spin_lock(&d->page_alloc_lock);
+    spin_lock_nonrecursive(&d->page_alloc_lock);
 
     if ( unlikely(d->is_dying) )
     {
@@ -2410,7 +2410,7 @@ int assign_pages(
     }
 
  out:
-    spin_unlock(&d->page_alloc_lock);
+    spin_unlock_nonrecursive(&d->page_alloc_lock);
     return rc;
 }
 
@@ -2891,9 +2891,9 @@ mfn_t acquire_reserved_page(struct domain *d, unsigned int memflags)
     ASSERT_ALLOC_CONTEXT();
 
     /* Acquire a page from reserved page list(resv_page_list). */
-    spin_lock(&d->page_alloc_lock);
+    spin_lock_nonrecursive(&d->page_alloc_lock);
     page = page_list_remove_head(&d->resv_page_list);
-    spin_unlock(&d->page_alloc_lock);
+    spin_unlock_nonrecursive(&d->page_alloc_lock);
     if ( unlikely(!page) )
         return INVALID_MFN;
 
@@ -2912,9 +2912,9 @@ mfn_t acquire_reserved_page(struct domain *d, unsigned int memflags)
      */
     unprepare_staticmem_pages(page, 1, false);
  fail:
-    spin_lock(&d->page_alloc_lock);
+    spin_lock_nonrecursive(&d->page_alloc_lock);
     page_list_add_tail(page, &d->resv_page_list);
-    spin_unlock(&d->page_alloc_lock);
+    spin_unlock_nonrecursive(&d->page_alloc_lock);
     return INVALID_MFN;
 }
 #endif
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index e8468c121a..2e861ad9d6 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -120,7 +120,7 @@ static int __read_mostly sercon_handle = -1;
 int8_t __read_mostly opt_console_xen; /* console=xen */
 #endif
 
-static DEFINE_SPINLOCK(console_lock);
+static DEFINE_SPINLOCK_RECURSIVE(console_lock);
 
 /*
  * To control the amount of printing, thresholds are added.
@@ -328,7 +328,7 @@ static void cf_check do_dec_thresh(unsigned char key, struct cpu_user_regs *regs
 
 static void conring_puts(const char *str, size_t len)
 {
-    ASSERT(spin_is_locked(&console_lock));
+    ASSERT(spin_recursive_is_locked(&console_lock));
 
     while ( len-- )
         conring[CONRING_IDX_MASK(conringp++)] = *str++;
@@ -369,9 +369,9 @@ long read_console_ring(struct xen_sysctl_readconsole *op)
 
     if ( op->clear )
     {
-        spin_lock_irq(&console_lock);
+        spin_lock_nonrecursive_irq(&console_lock);
         conringc = p - c > conring_size ? p - conring_size : c;
-        spin_unlock_irq(&console_lock);
+        spin_unlock_nonrecursive_irq(&console_lock);
     }
 
     op->count = sofar;
@@ -612,7 +612,7 @@ static long guest_console_write(XEN_GUEST_HANDLE_PARAM(char) buffer,
         if ( is_hardware_domain(cd) )
         {
             /* Use direct console output as it could be interactive */
-            spin_lock_irq(&console_lock);
+            spin_lock_nonrecursive_irq(&console_lock);
 
             console_serial_puts(kbuf, kcount);
             video_puts(kbuf, kcount);
@@ -633,7 +633,7 @@ static long guest_console_write(XEN_GUEST_HANDLE_PARAM(char) buffer,
                 tasklet_schedule(&notify_dom0_con_ring_tasklet);
             }
 
-            spin_unlock_irq(&console_lock);
+            spin_unlock_nonrecursive_irq(&console_lock);
         }
         else
         {
@@ -739,7 +739,7 @@ static void __putstr(const char *str)
 {
     size_t len = strlen(str);
 
-    ASSERT(spin_is_locked(&console_lock));
+    ASSERT(spin_recursive_is_locked(&console_lock));
 
     console_serial_puts(str, len);
     video_puts(str, len);
@@ -1000,9 +1000,9 @@ void __init console_init_preirq(void)
     pv_console_set_rx_handler(serial_rx);
 
     /* HELLO WORLD --- start-of-day banner text. */
-    spin_lock(&console_lock);
+    spin_lock_nonrecursive(&console_lock);
     __putstr(xen_banner());
-    spin_unlock(&console_lock);
+    spin_unlock_nonrecursive(&console_lock);
     printk("Xen version %d.%d%s (%s@%s) (%s) %s %s\n",
            xen_major_version(), xen_minor_version(), xen_extra_version(),
            xen_compile_by(), xen_compile_domain(), xen_compiler(),
@@ -1039,13 +1039,13 @@ void __init console_init_ring(void)
     }
     opt_conring_size = PAGE_SIZE << order;
 
-    spin_lock_irqsave(&console_lock, flags);
+    spin_lock_nonrecursive_irqsave(&console_lock, flags);
     for ( i = conringc ; i != conringp; i++ )
         ring[i & (opt_conring_size - 1)] = conring[i & (conring_size - 1)];
     conring = ring;
     smp_wmb(); /* Allow users of console_force_unlock() to see larger buffer. */
     conring_size = opt_conring_size;
-    spin_unlock_irqrestore(&console_lock, flags);
+    spin_unlock_nonrecursive_irqrestore(&console_lock, flags);
 
     printk("Allocated console ring of %u KiB.\n", opt_conring_size >> 10);
 }
@@ -1151,7 +1151,7 @@ void console_force_unlock(void)
 {
     watchdog_disable();
     spin_debug_disable();
-    spin_lock_init(&console_lock);
+    spin_lock_recursive_init(&console_lock);
     serial_force_unlock(sercon_handle);
     console_locks_busted = 1;
     console_start_sync();
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index cdaf5c247f..c86b11be10 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -50,7 +50,7 @@ struct pci_seg {
     } bus2bridge[MAX_BUSES];
 };
 
-static spinlock_t _pcidevs_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK_RECURSIVE(_pcidevs_lock);
 
 void pcidevs_lock(void)
 {
@@ -64,7 +64,7 @@ void pcidevs_unlock(void)
 
 bool_t pcidevs_locked(void)
 {
-    return !!spin_is_locked(&_pcidevs_lock);
+    return !!spin_recursive_is_locked(&_pcidevs_lock);
 }
 
 static struct radix_tree_root pci_segments;
diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h
index 961891bea4..20f64102c9 100644
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -40,7 +40,7 @@ union lock_debug { };
     lock profiling on:
 
     Global locks which should be subject to profiling must be declared via
-    DEFINE_SPINLOCK.
+    DEFINE_SPINLOCK[_RECURSIVE].
 
     For locks in structures further measures are necessary:
     - the structure definition must include a profile_head with exactly this
@@ -146,6 +146,8 @@ struct lock_profile_qhead { };
 
 #endif
 
+#define DEFINE_SPINLOCK_RECURSIVE(l) DEFINE_SPINLOCK(l)
+
 typedef union {
     u32 head_tail;
     struct {
@@ -171,6 +173,8 @@ typedef struct spinlock {
 
 
 #define spin_lock_init(l) (*(l) = (spinlock_t)SPIN_LOCK_UNLOCKED)
+#define spin_lock_recursive_init(l) (*(l) = (spinlock_t)SPIN_LOCK_UNLOCKED)
+#define spin_lock_recursive_init_prof(s, l) spin_lock_init_prof(s, l)
 
 void _spin_lock(spinlock_t *lock);
 void _spin_lock_cb(spinlock_t *lock, void (*cond)(void *), void *data);
@@ -223,9 +227,20 @@ void _spin_unlock_recursive(spinlock_t *lock);
  * part of a recursively-nested set must be protected by these forms. If there
  * are any critical regions that cannot form part of such a set, they can use
  * standard spin_[un]lock().
+ * The related spin_[un]lock_nonrecursive() variants should be used when no
+ * locking recursion is needed for locks which might also be taken recursively.
  */
 #define spin_trylock_recursive(l)     _spin_trylock_recursive(l)
 #define spin_lock_recursive(l)        _spin_lock_recursive(l)
 #define spin_unlock_recursive(l)      _spin_unlock_recursive(l)
+#define spin_recursive_is_locked(l)   spin_is_locked(l)
+
+#define spin_trylock_nonrecursive(l)     spin_trylock(l)
+#define spin_lock_nonrecursive(l)        spin_lock(l)
+#define spin_unlock_nonrecursive(l)      spin_unlock(l)
+#define spin_lock_nonrecursive_irq(l)    spin_lock_irq(l)
+#define spin_unlock_nonrecursive_irq(l)  spin_unlock_irq(l)
+#define spin_lock_nonrecursive_irqsave(l, f)      spin_lock_irqsave(l, f)
+#define spin_unlock_nonrecursive_irqrestore(l, f) spin_unlock_irqrestore(l, f)
 
 #endif /* __SPINLOCK_H__ */
-- 
2.35.3




* [RFC PATCH 2/3] xen/spinlock: split recursive spinlocks from normal ones
  2022-09-10 15:49 [RFC PATCH 0/3] xen/spinlock: make recursive spinlocks a dedicated type Juergen Gross
  2022-09-10 15:49 ` [RFC PATCH 1/3] xen/spinlock: add explicit non-recursive locking functions Juergen Gross
@ 2022-09-10 15:49 ` Juergen Gross
  2022-12-14 10:39   ` Jan Beulich
  2022-09-10 15:49 ` [RFC PATCH 3/3] xen/spinlock: support higher number of cpus Juergen Gross
  2022-12-14 15:03 ` [RFC PATCH 0/3] xen/spinlock: make recursive spinlocks a dedicated type Daniel P. Smith
  3 siblings, 1 reply; 19+ messages in thread
From: Juergen Gross @ 2022-09-10 15:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Jan Beulich, Andrew Cooper, Roger Pau Monné,
	Wei Liu, George Dunlap, Julien Grall, Stefano Stabellini

Recursive and normal spinlocks currently share the same data structure
for representing the lock. This has two major disadvantages:

- it is not clear from the definition of a lock whether it is intended
  to be used recursively or not, while a mixture of both usage variants
  needs to be supported

- in production builds (builds without CONFIG_DEBUG_LOCKS) the needed
  data size of an ordinary spinlock is 8 bytes instead of 4, due to the
  additional recursion data (and, associated with that, the rwlock data
  uses 12 instead of only 8 bytes)

Fix that by introducing a dedicated struct spinlock_recursive for
recursive spinlocks only, and switch the recursive spinlock functions to
require pointers to this new struct.

This allows correct usage to be checked at build time.

The sizes per lock will change:

lock type              debug build     non-debug build
                        old   new        old   new
spinlock                 8     8          8     4
recursive spinlock       8    12          8     8
rwlock                  12    12         12     8

So the only downside is a size increase for recursive spinlocks in debug
builds, while in non-debug builds normal spinlocks and rwlocks in
particular consume less memory.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
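Illustrative sketch, not part of the commit message: what the type split
buys. With recursive locks carrying their own type, mixing up the two
APIs no longer compiles.

    static DEFINE_SPINLOCK(plain_lock);
    static DEFINE_SPINLOCK_RECURSIVE(rec_lock);

    static void type_safety_sketch(void)
    {
        spin_lock(&plain_lock);                 /* fine */
        spin_unlock(&plain_lock);

        spin_lock_recursive(&rec_lock);         /* fine */
        spin_unlock_recursive(&rec_lock);

        spin_lock_nonrecursive(&rec_lock);      /* fine: acts on rec_lock.lock */
        spin_unlock_nonrecursive(&rec_lock);

        /* These no longer compile (wrong pointer type): */
        /* spin_lock_recursive(&plain_lock); */
        /* spin_lock(&rec_lock);             */
    }
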
 xen/arch/x86/include/asm/mm.h |  2 +-
 xen/arch/x86/mm/mm-locks.h    |  2 +-
 xen/arch/x86/mm/p2m-pod.c     |  2 +-
 xen/common/domain.c           |  2 +-
 xen/common/spinlock.c         | 21 ++++++-----
 xen/include/xen/sched.h       |  6 ++--
 xen/include/xen/spinlock.h    | 65 +++++++++++++++++++++--------------
 7 files changed, 60 insertions(+), 40 deletions(-)

diff --git a/xen/arch/x86/include/asm/mm.h b/xen/arch/x86/include/asm/mm.h
index 0fc826de46..8cf86b4796 100644
--- a/xen/arch/x86/include/asm/mm.h
+++ b/xen/arch/x86/include/asm/mm.h
@@ -610,7 +610,7 @@ unsigned long domain_get_maximum_gpfn(struct domain *d);
 
 /* Definition of an mm lock: spinlock with extra fields for debugging */
 typedef struct mm_lock {
-    spinlock_t         lock;
+    struct spinlock_recursive lock;
     int                unlock_level;
     int                locker;          /* processor which holds the lock */
     const char        *locker_function; /* func that took it */
diff --git a/xen/arch/x86/mm/mm-locks.h b/xen/arch/x86/mm/mm-locks.h
index c1523aeccf..7b54e6914b 100644
--- a/xen/arch/x86/mm/mm-locks.h
+++ b/xen/arch/x86/mm/mm-locks.h
@@ -32,7 +32,7 @@ DECLARE_PERCPU_RWLOCK_GLOBAL(p2m_percpu_rwlock);
 
 static inline void mm_lock_init(mm_lock_t *l)
 {
-    spin_lock_init(&l->lock);
+    spin_lock_recursive_init(&l->lock);
     l->locker = -1;
     l->locker_function = "nobody";
     l->unlock_level = 0;
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index deab55648c..02c149f839 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -397,7 +397,7 @@ int p2m_pod_empty_cache(struct domain *d)
 
     /* After this barrier no new PoD activities can happen. */
     BUG_ON(!d->is_dying);
-    spin_barrier(&p2m->pod.lock.lock);
+    spin_barrier(&p2m->pod.lock.lock.lock);
 
     lock_page_alloc(p2m);
 
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 51160a4b5c..5e5ac4e74b 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -929,7 +929,7 @@ int domain_kill(struct domain *d)
     case DOMDYING_alive:
         domain_pause(d);
         d->is_dying = DOMDYING_dying;
-        spin_barrier(&d->domain_lock);
+        spin_barrier(&d->domain_lock.lock);
         argo_destroy(d);
         vnuma_destroy(d->vnuma);
         domain_set_outstanding_pages(d, 0);
diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index 62c83aaa6a..a48ed17ac6 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -224,6 +224,11 @@ void _spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
 }
 
 int _spin_is_locked(spinlock_t *lock)
+{
+    return lock->tickets.head != lock->tickets.tail;
+}
+
+int _spin_recursive_is_locked(struct spinlock_recursive *lock)
 {
     /*
      * Recursive locks may be locked by another CPU, yet we return
@@ -231,7 +236,7 @@ int _spin_is_locked(spinlock_t *lock)
      * ASSERT()s and alike.
      */
     return lock->recurse_cpu == SPINLOCK_NO_CPU
-           ? lock->tickets.head != lock->tickets.tail
+           ? _spin_is_locked(&lock->lock)
            : lock->recurse_cpu == smp_processor_id();
 }
 
@@ -292,7 +297,7 @@ void _spin_barrier(spinlock_t *lock)
     smp_mb();
 }
 
-int _spin_trylock_recursive(spinlock_t *lock)
+int _spin_trylock_recursive(struct spinlock_recursive *lock)
 {
     unsigned int cpu = smp_processor_id();
 
@@ -300,11 +305,11 @@ int _spin_trylock_recursive(spinlock_t *lock)
     BUILD_BUG_ON(NR_CPUS > SPINLOCK_NO_CPU);
     BUILD_BUG_ON(SPINLOCK_RECURSE_BITS < 3);
 
-    check_lock(&lock->debug, true);
+    check_lock(&lock->lock.debug, true);
 
     if ( likely(lock->recurse_cpu != cpu) )
     {
-        if ( !spin_trylock(lock) )
+        if ( !spin_trylock(&lock->lock) )
             return 0;
         lock->recurse_cpu = cpu;
     }
@@ -316,13 +321,13 @@ int _spin_trylock_recursive(spinlock_t *lock)
     return 1;
 }
 
-void _spin_lock_recursive(spinlock_t *lock)
+void _spin_lock_recursive(struct spinlock_recursive *lock)
 {
     unsigned int cpu = smp_processor_id();
 
     if ( likely(lock->recurse_cpu != cpu) )
     {
-        _spin_lock(lock);
+        _spin_lock(&lock->lock);
         lock->recurse_cpu = cpu;
     }
 
@@ -331,12 +336,12 @@ void _spin_lock_recursive(spinlock_t *lock)
     lock->recurse_cnt++;
 }
 
-void _spin_unlock_recursive(spinlock_t *lock)
+void _spin_unlock_recursive(struct spinlock_recursive *lock)
 {
     if ( likely(--lock->recurse_cnt == 0) )
     {
         lock->recurse_cpu = SPINLOCK_NO_CPU;
-        spin_unlock(lock);
+        spin_unlock(&lock->lock);
     }
 }
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 557b3229f6..8d45f522d5 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -375,9 +375,9 @@ struct domain
 
     rcu_read_lock_t  rcu_lock;
 
-    spinlock_t       domain_lock;
+    struct spinlock_recursive domain_lock;
 
-    spinlock_t       page_alloc_lock; /* protects all the following fields  */
+    struct spinlock_recursive page_alloc_lock; /* protects following fields  */
     struct page_list_head page_list;  /* linked list */
     struct page_list_head extra_page_list; /* linked list (size extra_pages) */
     struct page_list_head xenpage_list; /* linked list (size xenheap_pages) */
@@ -595,7 +595,7 @@ struct domain
 #ifdef CONFIG_IOREQ_SERVER
     /* Lock protects all other values in the sub-struct */
     struct {
-        spinlock_t              lock;
+        struct spinlock_recursive lock;
         struct ioreq_server     *server[MAX_NR_IOREQ_SERVERS];
     } ioreq_server;
 #endif
diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h
index 20f64102c9..d0cfb4c524 100644
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -89,16 +89,21 @@ struct lock_profile_qhead {
     int32_t                   idx;     /* index for printout */
 };
 
-#define _LOCK_PROFILE(name) { 0, #name, &name, 0, 0, 0, 0, 0 }
+#define _LOCK_PROFILE(name, var) { 0, #name, &var, 0, 0, 0, 0, 0 }
 #define _LOCK_PROFILE_PTR(name)                                               \
     static struct lock_profile * const __lock_profile_##name                  \
     __used_section(".lockprofile.data") =                                     \
     &__lock_profile_data_##name
-#define _SPIN_LOCK_UNLOCKED(x) { { 0 }, SPINLOCK_NO_CPU, 0, _LOCK_DEBUG, x }
+#define _SPIN_LOCK_UNLOCKED(x) { { 0 }, _LOCK_DEBUG, x }
 #define SPIN_LOCK_UNLOCKED _SPIN_LOCK_UNLOCKED(NULL)
 #define DEFINE_SPINLOCK(l)                                                    \
     spinlock_t l = _SPIN_LOCK_UNLOCKED(NULL);                                 \
-    static struct lock_profile __lock_profile_data_##l = _LOCK_PROFILE(l);    \
+    static struct lock_profile __lock_profile_data_##l = _LOCK_PROFILE(l, l); \
+    _LOCK_PROFILE_PTR(l)
+#define DEFINE_SPINLOCK_RECURSIVE(l)                                          \
+    struct spinlock_recursive l = { .lock = _SPIN_LOCK_UNLOCKED(NULL) };      \
+    static struct lock_profile __lock_profile_data_##l =                      \
+                                  _LOCK_PROFILE(l, l.lock);                   \
     _LOCK_PROFILE_PTR(l)
 
 #define spin_lock_init_prof(s, l)                                             \
@@ -136,8 +141,10 @@ extern void cf_check spinlock_profile_reset(unsigned char key);
 
 struct lock_profile_qhead { };
 
-#define SPIN_LOCK_UNLOCKED { { 0 }, SPINLOCK_NO_CPU, 0, _LOCK_DEBUG }
+#define SPIN_LOCK_UNLOCKED { { 0 }, _LOCK_DEBUG }
 #define DEFINE_SPINLOCK(l) spinlock_t l = SPIN_LOCK_UNLOCKED
+#define DEFINE_SPINLOCK_RECURSIVE(l) \
+    struct spinlock_recursive l = { .lock = SPIN_LOCK_UNLOCKED }
 
 #define spin_lock_init_prof(s, l) spin_lock_init(&((s)->l))
 #define lock_profile_register_struct(type, ptr, idx)
@@ -146,8 +153,6 @@ struct lock_profile_qhead { };
 
 #endif
 
-#define DEFINE_SPINLOCK_RECURSIVE(l) DEFINE_SPINLOCK(l)
-
 typedef union {
     u32 head_tail;
     struct {
@@ -160,21 +165,30 @@ typedef union {
 
 typedef struct spinlock {
     spinlock_tickets_t tickets;
-    u16 recurse_cpu:SPINLOCK_CPU_BITS;
-#define SPINLOCK_NO_CPU        ((1u << SPINLOCK_CPU_BITS) - 1)
-#define SPINLOCK_RECURSE_BITS  (16 - SPINLOCK_CPU_BITS)
-    u16 recurse_cnt:SPINLOCK_RECURSE_BITS;
-#define SPINLOCK_MAX_RECURSE   ((1u << SPINLOCK_RECURSE_BITS) - 1)
     union lock_debug debug;
 #ifdef CONFIG_DEBUG_LOCK_PROFILE
     struct lock_profile *profile;
 #endif
 } spinlock_t;
 
+struct spinlock_recursive {
+    struct spinlock lock;
+    u16 recurse_cpu:SPINLOCK_CPU_BITS;
+#define SPINLOCK_NO_CPU        ((1u << SPINLOCK_CPU_BITS) - 1)
+#define SPINLOCK_RECURSE_BITS  (16 - SPINLOCK_CPU_BITS)
+    u16 recurse_cnt:SPINLOCK_RECURSE_BITS;
+#define SPINLOCK_MAX_RECURSE   ((1u << SPINLOCK_RECURSE_BITS) - 1)
+};
 
 #define spin_lock_init(l) (*(l) = (spinlock_t)SPIN_LOCK_UNLOCKED)
-#define spin_lock_recursive_init(l) (*(l) = (spinlock_t)SPIN_LOCK_UNLOCKED)
-#define spin_lock_recursive_init_prof(s, l) spin_lock_init_prof(s, l)
+#define spin_lock_recursive_init(l) (*(l) = (struct spinlock_recursive){ \
+    .lock = (spinlock_t)SPIN_LOCK_UNLOCKED,                              \
+    .recurse_cpu = SPINLOCK_NO_CPU })
+#define spin_lock_recursive_init_prof(s, l) do {  \
+        spin_lock_init_prof(s, l.lock);           \
+        (s)->l.recurse_cpu = SPINLOCK_NO_CPU;     \
+        (s)->l.recurse_cnt = 0;                   \
+    } while (0)
 
 void _spin_lock(spinlock_t *lock);
 void _spin_lock_cb(spinlock_t *lock, void (*cond)(void *), void *data);
@@ -189,9 +203,10 @@ int _spin_is_locked(spinlock_t *lock);
 int _spin_trylock(spinlock_t *lock);
 void _spin_barrier(spinlock_t *lock);
 
-int _spin_trylock_recursive(spinlock_t *lock);
-void _spin_lock_recursive(spinlock_t *lock);
-void _spin_unlock_recursive(spinlock_t *lock);
+int _spin_recursive_is_locked(struct spinlock_recursive *lock);
+int _spin_trylock_recursive(struct spinlock_recursive *lock);
+void _spin_lock_recursive(struct spinlock_recursive *lock);
+void _spin_unlock_recursive(struct spinlock_recursive *lock);
 
 #define spin_lock(l)                  _spin_lock(l)
 #define spin_lock_cb(l, c, d)         _spin_lock_cb(l, c, d)
@@ -233,14 +248,14 @@ void _spin_unlock_recursive(spinlock_t *lock);
 #define spin_trylock_recursive(l)     _spin_trylock_recursive(l)
 #define spin_lock_recursive(l)        _spin_lock_recursive(l)
 #define spin_unlock_recursive(l)      _spin_unlock_recursive(l)
-#define spin_recursive_is_locked(l)   spin_is_locked(l)
-
-#define spin_trylock_nonrecursive(l)     spin_trylock(l)
-#define spin_lock_nonrecursive(l)        spin_lock(l)
-#define spin_unlock_nonrecursive(l)      spin_unlock(l)
-#define spin_lock_nonrecursive_irq(l)    spin_lock_irq(l)
-#define spin_unlock_nonrecursive_irq(l)  spin_unlock_irq(l)
-#define spin_lock_nonrecursive_irqsave(l, f)      spin_lock_irqsave(l, f)
-#define spin_unlock_nonrecursive_irqrestore(l, f) spin_unlock_irqrestore(l, f)
+#define spin_recursive_is_locked(l)   _spin_recursive_is_locked(l)
+
+#define spin_trylock_nonrecursive(l)     spin_trylock(&(l)->lock)
+#define spin_lock_nonrecursive(l)        spin_lock(&(l)->lock)
+#define spin_unlock_nonrecursive(l)      spin_unlock(&(l)->lock)
+#define spin_lock_nonrecursive_irq(l)    spin_lock_irq(&(l)->lock)
+#define spin_unlock_nonrecursive_irq(l)  spin_unlock_irq(&(l)->lock)
+#define spin_lock_nonrecursive_irqsave(l, f)      spin_lock_irqsave(&(l)->lock, f)
+#define spin_unlock_nonrecursive_irqrestore(l, f) spin_unlock_irqrestore(&(l)->lock, f)
 
 #endif /* __SPINLOCK_H__ */
-- 
2.35.3




* [RFC PATCH 3/3] xen/spinlock: support higher number of cpus
  2022-09-10 15:49 [RFC PATCH 0/3] xen/spinlock: make recursive spinlocks a dedicated type Juergen Gross
  2022-09-10 15:49 ` [RFC PATCH 1/3] xen/spinlock: add explicit non-recursive locking functions Juergen Gross
  2022-09-10 15:49 ` [RFC PATCH 2/3] xen/spinlock: split recursive spinlocks from normal ones Juergen Gross
@ 2022-09-10 15:49 ` Juergen Gross
  2022-12-14 15:03 ` [RFC PATCH 0/3] xen/spinlock: make recursive spinlocks a dedicated type Daniel P. Smith
  3 siblings, 0 replies; 19+ messages in thread
From: Juergen Gross @ 2022-09-10 15:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Andrew Cooper, George Dunlap, Jan Beulich,
	Julien Grall, Stefano Stabellini, Wei Liu

There is no real reason why the cpu fields of struct spinlock should
be limited to 12 bits, now that there is a 2-byte padding hole after
those fields.

Make the related structures a little bit larger, allowing 16 bits per
cpu number, which is the limit imposed by spinlock_tickets_t.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
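Illustrative sketch, not part of the commit message: the resulting
layout of a recursive lock in a production (non-debug) build. The wider
fields fit into the former padding, so the overall size stays at 8
bytes.

    struct spinlock_recursive {
        struct spinlock lock;   /* 4 bytes: tickets only                  */
        uint16_t recurse_cpu;   /* full 16 bits; NR_CPUS limited to 65535 */
        uint8_t  recurse_cnt;   /* up to SPINLOCK_MAX_RECURSE == 255      */
        /* 1 byte of padding */
    };                          /* 8 bytes total, as before */
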
 xen/common/spinlock.c      |  1 +
 xen/include/xen/spinlock.h | 18 +++++++++---------
 2 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index a48ed17ac6..5509e4b79a 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -303,6 +303,7 @@ int _spin_trylock_recursive(struct spinlock_recursive *lock)
 
     /* Don't allow overflow of recurse_cpu field. */
     BUILD_BUG_ON(NR_CPUS > SPINLOCK_NO_CPU);
+    BUILD_BUG_ON(SPINLOCK_CPU_BITS > sizeof(lock->recurse_cpu) * 8);
     BUILD_BUG_ON(SPINLOCK_RECURSE_BITS < 3);
 
     check_lock(&lock->lock.debug, true);
diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h
index d0cfb4c524..e157b12f6e 100644
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -6,16 +6,16 @@
 #include <asm/spinlock.h>
 #include <asm/types.h>
 
-#define SPINLOCK_CPU_BITS  12
+#define SPINLOCK_CPU_BITS  16
 
 #ifdef CONFIG_DEBUG_LOCKS
 union lock_debug {
-    uint16_t val;
-#define LOCK_DEBUG_INITVAL 0xffff
+    uint32_t val;
+#define LOCK_DEBUG_INITVAL 0xffffffff
     struct {
-        uint16_t cpu:SPINLOCK_CPU_BITS;
-#define LOCK_DEBUG_PAD_BITS (14 - SPINLOCK_CPU_BITS)
-        uint16_t :LOCK_DEBUG_PAD_BITS;
+        uint32_t cpu:SPINLOCK_CPU_BITS;
+#define LOCK_DEBUG_PAD_BITS (30 - SPINLOCK_CPU_BITS)
+        uint32_t :LOCK_DEBUG_PAD_BITS;
         bool irq_safe:1;
         bool unseen:1;
     };
@@ -173,10 +173,10 @@ typedef struct spinlock {
 
 struct spinlock_recursive {
     struct spinlock lock;
-    u16 recurse_cpu:SPINLOCK_CPU_BITS;
+    uint16_t recurse_cpu;
 #define SPINLOCK_NO_CPU        ((1u << SPINLOCK_CPU_BITS) - 1)
-#define SPINLOCK_RECURSE_BITS  (16 - SPINLOCK_CPU_BITS)
-    u16 recurse_cnt:SPINLOCK_RECURSE_BITS;
+#define SPINLOCK_RECURSE_BITS  8
+    uint8_t recurse_cnt;
 #define SPINLOCK_MAX_RECURSE   ((1u << SPINLOCK_RECURSE_BITS) - 1)
 };
 
-- 
2.35.3




* Re: [RFC PATCH 1/3] xen/spinlock: add explicit non-recursive locking functions
  2022-09-10 15:49 ` [RFC PATCH 1/3] xen/spinlock: add explicit non-recursive locking functions Juergen Gross
@ 2022-12-14 10:21   ` Jan Beulich
  2022-12-14 10:32     ` Juergen Gross
  2022-12-14 10:32     ` Jan Beulich
  0 siblings, 2 replies; 19+ messages in thread
From: Jan Beulich @ 2022-12-14 10:21 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Wei Liu,
	Roger Pau Monné,
	Tamas K Lengyel, Lukasz Hawrylko, Daniel P. Smith,
	Mateusz Mówka, Paul Durrant, xen-devel

On 10.09.2022 17:49, Juergen Gross wrote:
> In order to prepare a type-safe recursive spinlock structure, add
> explicitly non-recursive locking functions to be used for non-recursive
> locking of spinlocks, which are use recursively, too.

While I can see that something needs doing, a function name like
spin_unlock_nonrecursive_irqrestore() is really unwieldy, imo.

Just one minor further remark:

> @@ -64,7 +64,7 @@ void pcidevs_unlock(void)
>  
>  bool_t pcidevs_locked(void)
>  {
> -    return !!spin_is_locked(&_pcidevs_lock);
> +    return !!spin_recursive_is_locked(&_pcidevs_lock);
>  }

While touching this line, could you please also get rid of the !! ?

Jan



* Re: [RFC PATCH 1/3] xen/spinlock: add explicit non-recursive locking functions
  2022-12-14 10:21   ` Jan Beulich
@ 2022-12-14 10:32     ` Juergen Gross
  2022-12-14 10:32     ` Jan Beulich
  1 sibling, 0 replies; 19+ messages in thread
From: Juergen Gross @ 2022-12-14 10:32 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Wei Liu,
	Roger Pau Monné,
	Tamas K Lengyel, Lukasz Hawrylko, Daniel P. Smith,
	Mateusz Mówka, Paul Durrant, xen-devel


On 14.12.22 11:21, Jan Beulich wrote:
> On 10.09.2022 17:49, Juergen Gross wrote:
>> In order to prepare a type-safe recursive spinlock structure, add
>> explicitly non-recursive locking functions to be used for non-recursive
>> locking of spinlocks, which are use recursively, too.
> 
> While I can see that something needs doing, a function name like
> spin_unlock_nonrecursive_irqrestore() is really unwieldy, imo.

Would you be fine with s/nonrecursive/nonrec/ in all the names?

> 
> Just one minor further remark:
> 
>> @@ -64,7 +64,7 @@ void pcidevs_unlock(void)
>>   
>>   bool_t pcidevs_locked(void)
>>   {
>> -    return !!spin_is_locked(&_pcidevs_lock);
>> +    return !!spin_recursive_is_locked(&_pcidevs_lock);
>>   }
> 
> While touching this line, could you please also get rid of the !! ?

Okay.


Juergen



* Re: [RFC PATCH 1/3] xen/spinlock: add explicit non-recursive locking functions
  2022-12-14 10:21   ` Jan Beulich
  2022-12-14 10:32     ` Juergen Gross
@ 2022-12-14 10:32     ` Jan Beulich
  2022-12-14 10:38       ` Juergen Gross
  1 sibling, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2022-12-14 10:32 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Wei Liu,
	Roger Pau Monné,
	Tamas K Lengyel, Lukasz Hawrylko, Daniel P. Smith,
	Mateusz Mówka, Paul Durrant, xen-devel

On 14.12.2022 11:21, Jan Beulich wrote:
> On 10.09.2022 17:49, Juergen Gross wrote:
>> In order to prepare a type-safe recursive spinlock structure, add
>> explicitly non-recursive locking functions to be used for non-recursive
>> locking of spinlocks, which are use recursively, too.
> 
> While I can see that something needs doing, a function name like
> spin_unlock_nonrecursive_irqrestore() is really unwieldy, imo.

While further looking at patch 2 - how about rspinlock_t and rspin_lock()
etc, and then nrspin_lock() etc here?

Jan



* Re: [RFC PATCH 1/3] xen/spinlock: add explicit non-recursive locking functions
  2022-12-14 10:32     ` Jan Beulich
@ 2022-12-14 10:38       ` Juergen Gross
  0 siblings, 0 replies; 19+ messages in thread
From: Juergen Gross @ 2022-12-14 10:38 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Wei Liu,
	Roger Pau Monné,
	Tamas K Lengyel, Lukasz Hawrylko, Daniel P. Smith,
	Mateusz Mówka, Paul Durrant, xen-devel


On 14.12.22 11:32, Jan Beulich wrote:
> On 14.12.2022 11:21, Jan Beulich wrote:
>> On 10.09.2022 17:49, Juergen Gross wrote:
>>> In order to prepare a type-safe recursive spinlock structure, add
>>> explicitly non-recursive locking functions to be used for non-recursive
>>> locking of spinlocks, which are use recursively, too.
>>
>> While I can see that something needs doing, a function name like
>> spin_unlock_nonrecursive_irqrestore() is really unwieldy, imo.
> 
> While further looking at patch 2 - how about rspinlock_t and rspin_lock()
> etc, and then nrspin_lock() etc here?

Would work for me.


Juergen




* Re: [RFC PATCH 2/3] xen/spinlock: split recursive spinlocks from normal ones
  2022-09-10 15:49 ` [RFC PATCH 2/3] xen/spinlock: split recursive spinlocks from normal ones Juergen Gross
@ 2022-12-14 10:39   ` Jan Beulich
  2022-12-14 11:10     ` Juergen Gross
  0 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2022-12-14 10:39 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Andrew Cooper, Roger Pau Monné,
	Wei Liu, George Dunlap, Julien Grall, Stefano Stabellini,
	xen-devel

On 10.09.2022 17:49, Juergen Gross wrote:
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -397,7 +397,7 @@ int p2m_pod_empty_cache(struct domain *d)
>  
>      /* After this barrier no new PoD activities can happen. */
>      BUG_ON(!d->is_dying);
> -    spin_barrier(&p2m->pod.lock.lock);
> +    spin_barrier(&p2m->pod.lock.lock.lock);

This is getting unwieldy as well, and ...

> @@ -160,21 +165,30 @@ typedef union {
>  
>  typedef struct spinlock {
>      spinlock_tickets_t tickets;
> -    u16 recurse_cpu:SPINLOCK_CPU_BITS;
> -#define SPINLOCK_NO_CPU        ((1u << SPINLOCK_CPU_BITS) - 1)
> -#define SPINLOCK_RECURSE_BITS  (16 - SPINLOCK_CPU_BITS)
> -    u16 recurse_cnt:SPINLOCK_RECURSE_BITS;
> -#define SPINLOCK_MAX_RECURSE   ((1u << SPINLOCK_RECURSE_BITS) - 1)
>      union lock_debug debug;
>  #ifdef CONFIG_DEBUG_LOCK_PROFILE
>      struct lock_profile *profile;
>  #endif
>  } spinlock_t;
>  
> +struct spinlock_recursive {
> +    struct spinlock lock;
> +    u16 recurse_cpu:SPINLOCK_CPU_BITS;
> +#define SPINLOCK_NO_CPU        ((1u << SPINLOCK_CPU_BITS) - 1)
> +#define SPINLOCK_RECURSE_BITS  (16 - SPINLOCK_CPU_BITS)
> +    u16 recurse_cnt:SPINLOCK_RECURSE_BITS;
> +#define SPINLOCK_MAX_RECURSE   ((1u << SPINLOCK_RECURSE_BITS) - 1)
> +};

... I'm not sure anyway it's a good idea to embed spinlock_t inside the
new struct. I'd prefer if non-optional fields were always at the same
position, and there's not going to be that much duplication if we went
with

typedef struct spinlock {
    spinlock_tickets_t tickets;
    union lock_debug debug;
#ifdef CONFIG_DEBUG_LOCK_PROFILE
    struct lock_profile *profile;
#endif
} spinlock_t;

typedef struct rspinlock {
    spinlock_tickets_t tickets;
    u16 recurse_cpu:SPINLOCK_CPU_BITS;
#define SPINLOCK_NO_CPU        ((1u << SPINLOCK_CPU_BITS) - 1)
#define SPINLOCK_RECURSE_BITS  (16 - SPINLOCK_CPU_BITS)
    u16 recurse_cnt:SPINLOCK_RECURSE_BITS;
#define SPINLOCK_MAX_RECURSE   ((1u << SPINLOCK_RECURSE_BITS) - 1)
    union lock_debug debug;
#ifdef CONFIG_DEBUG_LOCK_PROFILE
    struct lock_profile *profile;
#endif
} rspinlock_t;

This would also eliminate the size increase of recursive locks in
debug builds. And functions like spin_barrier() then also would
(have to) properly indicate what kind of lock they act on.

Jan



* Re: [RFC PATCH 2/3] xen/spinlock: split recursive spinlocks from normal ones
  2022-12-14 10:39   ` Jan Beulich
@ 2022-12-14 11:10     ` Juergen Gross
  2022-12-14 11:29       ` Jan Beulich
  0 siblings, 1 reply; 19+ messages in thread
From: Juergen Gross @ 2022-12-14 11:10 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Roger Pau Monné,
	Wei Liu, George Dunlap, Julien Grall, Stefano Stabellini,
	xen-devel


On 14.12.22 11:39, Jan Beulich wrote:
> On 10.09.2022 17:49, Juergen Gross wrote:
>> --- a/xen/arch/x86/mm/p2m-pod.c
>> +++ b/xen/arch/x86/mm/p2m-pod.c
>> @@ -397,7 +397,7 @@ int p2m_pod_empty_cache(struct domain *d)
>>   
>>       /* After this barrier no new PoD activities can happen. */
>>       BUG_ON(!d->is_dying);
>> -    spin_barrier(&p2m->pod.lock.lock);
>> +    spin_barrier(&p2m->pod.lock.lock.lock);
> 
> This is getting unwieldy as well, and ...
> 
>> @@ -160,21 +165,30 @@ typedef union {
>>   
>>   typedef struct spinlock {
>>       spinlock_tickets_t tickets;
>> -    u16 recurse_cpu:SPINLOCK_CPU_BITS;
>> -#define SPINLOCK_NO_CPU        ((1u << SPINLOCK_CPU_BITS) - 1)
>> -#define SPINLOCK_RECURSE_BITS  (16 - SPINLOCK_CPU_BITS)
>> -    u16 recurse_cnt:SPINLOCK_RECURSE_BITS;
>> -#define SPINLOCK_MAX_RECURSE   ((1u << SPINLOCK_RECURSE_BITS) - 1)
>>       union lock_debug debug;
>>   #ifdef CONFIG_DEBUG_LOCK_PROFILE
>>       struct lock_profile *profile;
>>   #endif
>>   } spinlock_t;
>>   
>> +struct spinlock_recursive {
>> +    struct spinlock lock;
>> +    u16 recurse_cpu:SPINLOCK_CPU_BITS;
>> +#define SPINLOCK_NO_CPU        ((1u << SPINLOCK_CPU_BITS) - 1)
>> +#define SPINLOCK_RECURSE_BITS  (16 - SPINLOCK_CPU_BITS)
>> +    u16 recurse_cnt:SPINLOCK_RECURSE_BITS;
>> +#define SPINLOCK_MAX_RECURSE   ((1u << SPINLOCK_RECURSE_BITS) - 1)
>> +};
> 
> ... I'm not sure anyway it's a good idea to embed spinlock_t inside the
> new struct. I'd prefer if non-optional fields were always at the same
> position, and there's not going to be that much duplication if we went
> with
> 
> typedef struct spinlock {
>      spinlock_tickets_t tickets;
>      union lock_debug debug;
> #ifdef CONFIG_DEBUG_LOCK_PROFILE
>      struct lock_profile *profile;
> #endif
> } spinlock_t;
> 
> typedef struct rspinlock {
>      spinlock_tickets_t tickets;
>      u16 recurse_cpu:SPINLOCK_CPU_BITS;
> #define SPINLOCK_NO_CPU        ((1u << SPINLOCK_CPU_BITS) - 1)
> #define SPINLOCK_RECURSE_BITS  (16 - SPINLOCK_CPU_BITS)
>      u16 recurse_cnt:SPINLOCK_RECURSE_BITS;
> #define SPINLOCK_MAX_RECURSE   ((1u << SPINLOCK_RECURSE_BITS) - 1)
>      union lock_debug debug;
> #ifdef CONFIG_DEBUG_LOCK_PROFILE
>      struct lock_profile *profile;
> #endif
> } rspinlock_t;
> 
> This would also eliminate the size increase of recursive locks in
> debug builds. And functions like spin_barrier() then also would
> (have to) properly indicate what kind of lock they act on.

You are aware that this would require to duplicate all the spinlock
functions for the recursive spinlocks?

I'm not strictly against this, but before going this route I think I
should mention the implications.


Juergen



* Re: [RFC PATCH 2/3] xen/spinlock: split recursive spinlocks from normal ones
  2022-12-14 11:10     ` Juergen Gross
@ 2022-12-14 11:29       ` Jan Beulich
  2022-12-14 11:38         ` Juergen Gross
  0 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2022-12-14 11:29 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Andrew Cooper, Roger Pau Monné,
	Wei Liu, George Dunlap, Julien Grall, Stefano Stabellini,
	xen-devel

On 14.12.2022 12:10, Juergen Gross wrote:
> On 14.12.22 11:39, Jan Beulich wrote:
>> On 10.09.2022 17:49, Juergen Gross wrote:
>>> --- a/xen/arch/x86/mm/p2m-pod.c
>>> +++ b/xen/arch/x86/mm/p2m-pod.c
>>> @@ -397,7 +397,7 @@ int p2m_pod_empty_cache(struct domain *d)
>>>   
>>>       /* After this barrier no new PoD activities can happen. */
>>>       BUG_ON(!d->is_dying);
>>> -    spin_barrier(&p2m->pod.lock.lock);
>>> +    spin_barrier(&p2m->pod.lock.lock.lock);
>>
>> This is getting unwieldy as well, and ...
>>
>>> @@ -160,21 +165,30 @@ typedef union {
>>>   
>>>   typedef struct spinlock {
>>>       spinlock_tickets_t tickets;
>>> -    u16 recurse_cpu:SPINLOCK_CPU_BITS;
>>> -#define SPINLOCK_NO_CPU        ((1u << SPINLOCK_CPU_BITS) - 1)
>>> -#define SPINLOCK_RECURSE_BITS  (16 - SPINLOCK_CPU_BITS)
>>> -    u16 recurse_cnt:SPINLOCK_RECURSE_BITS;
>>> -#define SPINLOCK_MAX_RECURSE   ((1u << SPINLOCK_RECURSE_BITS) - 1)
>>>       union lock_debug debug;
>>>   #ifdef CONFIG_DEBUG_LOCK_PROFILE
>>>       struct lock_profile *profile;
>>>   #endif
>>>   } spinlock_t;
>>>   
>>> +struct spinlock_recursive {
>>> +    struct spinlock lock;
>>> +    u16 recurse_cpu:SPINLOCK_CPU_BITS;
>>> +#define SPINLOCK_NO_CPU        ((1u << SPINLOCK_CPU_BITS) - 1)
>>> +#define SPINLOCK_RECURSE_BITS  (16 - SPINLOCK_CPU_BITS)
>>> +    u16 recurse_cnt:SPINLOCK_RECURSE_BITS;
>>> +#define SPINLOCK_MAX_RECURSE   ((1u << SPINLOCK_RECURSE_BITS) - 1)
>>> +};
>>
>> ... I'm not sure anyway it's a good idea to embed spinlock_t inside the
>> new struct. I'd prefer if non-optional fields were always at the same
>> position, and there's not going to be that much duplication if we went
>> with
>>
>> typedef struct spinlock {
>>      spinlock_tickets_t tickets;
>>      union lock_debug debug;
>> #ifdef CONFIG_DEBUG_LOCK_PROFILE
>>      struct lock_profile *profile;
>> #endif
>> } spinlock_t;
>>
>> typedef struct rspinlock {
>>      spinlock_tickets_t tickets;
>>      u16 recurse_cpu:SPINLOCK_CPU_BITS;
>> #define SPINLOCK_NO_CPU        ((1u << SPINLOCK_CPU_BITS) - 1)
>> #define SPINLOCK_RECURSE_BITS  (16 - SPINLOCK_CPU_BITS)
>>      u16 recurse_cnt:SPINLOCK_RECURSE_BITS;
>> #define SPINLOCK_MAX_RECURSE   ((1u << SPINLOCK_RECURSE_BITS) - 1)
>>      union lock_debug debug;
>> #ifdef CONFIG_DEBUG_LOCK_PROFILE
>>      struct lock_profile *profile;
>> #endif
>> } rspinlock_t;
>>
>> This would also eliminate the size increase of recursive locks in
>> debug builds. And functions like spin_barrier() then also would
>> (have to) properly indicate what kind of lock they act on.
> 
> You are aware that this would require to duplicate all the spinlock
> functions for the recursive spinlocks?

Well, to be honest I didn't really consider this aspect, but I think
that's a reasonable price to pay (with some potential for de-duplication
if we wanted it), provided we want to go this route in the first place.
Especially with this aspect in mind, though, I wonder whether we aren't
better off with the current state (all the more so since, IIRC, Andrew
thinks that we should get rid of recursive locking altogether).

Jan

> I'm not strictly against this, but before going this route I think I
> should mention the implications.
> 
> 
> Juergen



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH 2/3] xen/spinlock: split recursive spinlocks from normal ones
  2022-12-14 11:29       ` Jan Beulich
@ 2022-12-14 11:38         ` Juergen Gross
  0 siblings, 0 replies; 19+ messages in thread
From: Juergen Gross @ 2022-12-14 11:38 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Roger Pau Monné,
	Wei Liu, George Dunlap, Julien Grall, Stefano Stabellini,
	xen-devel



On 14.12.22 12:29, Jan Beulich wrote:
> On 14.12.2022 12:10, Juergen Gross wrote:
>> On 14.12.22 11:39, Jan Beulich wrote:
>>> On 10.09.2022 17:49, Juergen Gross wrote:
>>>> --- a/xen/arch/x86/mm/p2m-pod.c
>>>> +++ b/xen/arch/x86/mm/p2m-pod.c
>>>> @@ -397,7 +397,7 @@ int p2m_pod_empty_cache(struct domain *d)
>>>>    
>>>>        /* After this barrier no new PoD activities can happen. */
>>>>        BUG_ON(!d->is_dying);
>>>> -    spin_barrier(&p2m->pod.lock.lock);
>>>> +    spin_barrier(&p2m->pod.lock.lock.lock);
>>>
>>> This is getting unwieldy as well, and ...
>>>
>>>> @@ -160,21 +165,30 @@ typedef union {
>>>>    
>>>>    typedef struct spinlock {
>>>>        spinlock_tickets_t tickets;
>>>> -    u16 recurse_cpu:SPINLOCK_CPU_BITS;
>>>> -#define SPINLOCK_NO_CPU        ((1u << SPINLOCK_CPU_BITS) - 1)
>>>> -#define SPINLOCK_RECURSE_BITS  (16 - SPINLOCK_CPU_BITS)
>>>> -    u16 recurse_cnt:SPINLOCK_RECURSE_BITS;
>>>> -#define SPINLOCK_MAX_RECURSE   ((1u << SPINLOCK_RECURSE_BITS) - 1)
>>>>        union lock_debug debug;
>>>>    #ifdef CONFIG_DEBUG_LOCK_PROFILE
>>>>        struct lock_profile *profile;
>>>>    #endif
>>>>    } spinlock_t;
>>>>    
>>>> +struct spinlock_recursive {
>>>> +    struct spinlock lock;
>>>> +    u16 recurse_cpu:SPINLOCK_CPU_BITS;
>>>> +#define SPINLOCK_NO_CPU        ((1u << SPINLOCK_CPU_BITS) - 1)
>>>> +#define SPINLOCK_RECURSE_BITS  (16 - SPINLOCK_CPU_BITS)
>>>> +    u16 recurse_cnt:SPINLOCK_RECURSE_BITS;
>>>> +#define SPINLOCK_MAX_RECURSE   ((1u << SPINLOCK_RECURSE_BITS) - 1)
>>>> +};
>>>
>>> ... I'm not sure anyway it's a good idea to embed spinlock_t inside the
>>> new struct. I'd prefer if non-optional fields were always at the same
>>> position, and there's not going to be that much duplication if we went
>>> with
>>>
>>> typedef struct spinlock {
>>>       spinlock_tickets_t tickets;
>>>       union lock_debug debug;
>>> #ifdef CONFIG_DEBUG_LOCK_PROFILE
>>>       struct lock_profile *profile;
>>> #endif
>>> } spinlock_t;
>>>
>>> typedef struct rspinlock {
>>>       spinlock_tickets_t tickets;
>>>       u16 recurse_cpu:SPINLOCK_CPU_BITS;
>>> #define SPINLOCK_NO_CPU        ((1u << SPINLOCK_CPU_BITS) - 1)
>>> #define SPINLOCK_RECURSE_BITS  (16 - SPINLOCK_CPU_BITS)
>>>       u16 recurse_cnt:SPINLOCK_RECURSE_BITS;
>>> #define SPINLOCK_MAX_RECURSE   ((1u << SPINLOCK_RECURSE_BITS) - 1)
>>>       union lock_debug debug;
>>> #ifdef CONFIG_DEBUG_LOCK_PROFILE
>>>       struct lock_profile *profile;
>>> #endif
>>> } rspinlock_t;
>>>
>>> This would also eliminate the size increase of recursive locks in
>>> debug builds. And functions like spin_barrier() then also would
>>> (have to) properly indicate what kind of lock they act on.
>>
>> You are aware that this would require to duplicate all the spinlock
>> functions for the recursive spinlocks?
> 
> Well, to be honest I didn't really consider this aspect, but I think
> that's a reasonable price to pay (with some de-duplication potential
> if we wanted to), provided we want to go this route in the first place.

Okay.

> The latest with this aspect in mind I wonder whether we aren't better
> off with the current state (the more that iirc Andrew thinks that we
> should get rid of recursive locking altogether).

The question is how (and when) to reach that goal.

In the end this was the reason for sending the series as an RFC first.

I'm holding off on addressing your comments until there is consensus
that the whole idea is really worth pursuing.


Juergen


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH 0/3] xen/spinlock: make recursive spinlocks a dedicated type
  2022-09-10 15:49 [RFC PATCH 0/3] xen/spinlock: make recursive spinlocks a dedicated type Juergen Gross
                   ` (2 preceding siblings ...)
  2022-09-10 15:49 ` [RFC PATCH 3/3] xen/spinlock: support higher number of cpus Juergen Gross
@ 2022-12-14 15:03 ` Daniel P. Smith
  2022-12-14 15:31   ` Juergen Gross
  3 siblings, 1 reply; 19+ messages in thread
From: Daniel P. Smith @ 2022-12-14 15:03 UTC (permalink / raw)
  To: Juergen Gross, xen-devel
  Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich,
	Wei Liu, Roger Pau Monné,
	Tamas K Lengyel, Lukasz Hawrylko, Mateusz Mówka,
	Paul Durrant


On 9/10/22 11:49, Juergen Gross wrote:
> Instead of being able to use normal spinlocks as recursive ones, too,
> make recursive spinlocks a special lock type.
> 
> This will make the spinlock structure smaller in production builds and
> add type-safety.

Just a little yak shaving: IMHO a spinlock is normally not expected to 
be recursive, so explicitly naming a spinlock as non-recursive strikes 
me as redundant, as well as expensive to type. A recursive spinlock, on 
the other hand, is the special case and should carry the distinguishing 
name. Only codifying the recursive type would significantly cut down on 
the size of the series and still provide the same type safety and visual 
clarity.

v/r,
dps


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH 0/3] xen/spinlock: make recursive spinlocks a dedicated type
  2022-12-14 15:03 ` [RFC PATCH 0/3] xen/spinlock: make recursive spinlocks a dedicated type Daniel P. Smith
@ 2022-12-14 15:31   ` Juergen Gross
  2022-12-14 16:25     ` Daniel P. Smith
  0 siblings, 1 reply; 19+ messages in thread
From: Juergen Gross @ 2022-12-14 15:31 UTC (permalink / raw)
  To: Daniel P. Smith, xen-devel
  Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich,
	Wei Liu, Roger Pau Monné,
	Tamas K Lengyel, Lukasz Hawrylko, Mateusz Mówka,
	Paul Durrant



On 14.12.22 16:03, Daniel P. Smith wrote:
> 
> On 9/10/22 11:49, Juergen Gross wrote:
>> Instead of being able to use normal spinlocks as recursive ones, too,
>> make recursive spinlocks a special lock type.
>>
>> This will make the spinlock structure smaller in production builds and
>> add type-safety.
> 
> Just a little yak shaving, IMHO a spinlock is normally not expected to be 
> recursive. Thus explicitly naming a spinlock as non-recursive I find to be 
> redundant along with being expensive for typing. Whereas a recursive spinlock is 
> the special instance and should have a declarative distinction. Only codifying 
> the recursive type would significantly cut down on the size of the series and 
> still provide equal type and visual clarification.

A "normal" spinlock is non-recursive.

A recursive spinlock in Xen can either be taken recursively, or it can be
taken non-recursively, causing further recursive attempts to spin.

So the explicit non-recursive locking applies to that special treatment of
recursive spinlocks.


Juergen
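
In terms of the current (pre-series) interface, that distinction looks
roughly like the sketch below; the lock and the wrapper function are
placeholders, only the spin_*() calls are the existing ones.

    /* A lock that is used both recursively and non-recursively today. */
    static DEFINE_SPINLOCK(lock);

    static void example(void)
    {
        spin_lock_recursive(&lock);    /* takes the lock, records this CPU   */
        spin_lock_recursive(&lock);    /* same CPU again: only recurse_cnt++ */
        spin_unlock_recursive(&lock);
        spin_unlock_recursive(&lock);

        spin_lock(&lock);              /* plain, non-recursive take          */
        /* A spin_lock_recursive() on this CPU would now spin, not nest. */
        spin_unlock(&lock);
    }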


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH 0/3] xen/spinlock: make recursive spinlocks a dedicated type
  2022-12-14 15:31   ` Juergen Gross
@ 2022-12-14 16:25     ` Daniel P. Smith
  2022-12-14 16:36       ` Juergen Gross
  0 siblings, 1 reply; 19+ messages in thread
From: Daniel P. Smith @ 2022-12-14 16:25 UTC (permalink / raw)
  To: Juergen Gross, xen-devel
  Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich,
	Wei Liu, Roger Pau Monné,
	Tamas K Lengyel, Lukasz Hawrylko, Mateusz Mówka,
	Paul Durrant

On 12/14/22 10:31, Juergen Gross wrote:
> On 14.12.22 16:03, Daniel P. Smith wrote:
>>
>> On 9/10/22 11:49, Juergen Gross wrote:
>>> Instead of being able to use normal spinlocks as recursive ones, too,
>>> make recursive spinlocks a special lock type.
>>>
>>> This will make the spinlock structure smaller in production builds and
>>> add type-safety.
>>
>> Just a little yak shaving, IMHO a spinlock is normally not expected to 
>> be recursive. Thus explicitly naming a spinlock as non-recursive I 
>> find to be redundant along with being expensive for typing. Whereas a 
>> recursive spinlock is the special instance and should have a 
>> declarative distinction. Only codifying the recursive type would 
>> significantly cut down on the size of the series and still provide 
>> equal type and visual clarification.
> 
> A "normal" spinlock is non-recursive.
> 
> A recursive spinlock in Xen can be either taken recursive, or it can be 
> taken
> non-recursive, causing further recursive attempts to spin.

Yes, I understand the current situation.

> So the explicit non-recursive locking applies to that special treatment of
> recursive spinlocks.

I understand this, but to clarify what I am saying: individuals coming 
from outside Xen would expect the spinlock family of calls to behave as 
non-recursive spinlocks, and the recursive spinlock family of calls to 
provide the recursive behavior. Currently Xen's behavior is the reverse 
of this, which this series continues, and that is a valid approach. Here 
the spinlock and recursive spinlock families of calls are recursive, and 
the non-recursive spinlock family must be used to get "normal" spinlock 
behavior. IMHO it would greatly simplify the code and align with the 
"normal" understanding of spinlocks if instead the 
spin_{lock,locked,unlock} macros were the non-recursive calls and the 
spin_{lock,locked,unlock}_recursive macros were the recursive calls. 
Then there would only be two suites of calls for spinlocks and a lot 
fewer keystrokes needed for most development.

v/r,
dps


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH 0/3] xen/spinlock: make recursive spinlocks a dedicated type
  2022-12-14 16:25     ` Daniel P. Smith
@ 2022-12-14 16:36       ` Juergen Gross
  2022-12-15  7:48         ` Jan Beulich
  0 siblings, 1 reply; 19+ messages in thread
From: Juergen Gross @ 2022-12-14 16:36 UTC (permalink / raw)
  To: Daniel P. Smith, xen-devel
  Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich,
	Wei Liu, Roger Pau Monné,
	Tamas K Lengyel, Lukasz Hawrylko, Mateusz Mówka,
	Paul Durrant



On 14.12.22 17:25, Daniel P. Smith wrote:
> On 12/14/22 10:31, Juergen Gross wrote:
>> On 14.12.22 16:03, Daniel P. Smith wrote:
>>>
>>> On 9/10/22 11:49, Juergen Gross wrote:
>>>> Instead of being able to use normal spinlocks as recursive ones, too,
>>>> make recursive spinlocks a special lock type.
>>>>
>>>> This will make the spinlock structure smaller in production builds and
>>>> add type-safety.
>>>
>>> Just a little yak shaving, IMHO a spinlock is normally not expected to be 
>>> recursive. Thus explicitly naming a spinlock as non-recursive I find to be 
>>> redundant along with being expensive for typing. Whereas a recursive spinlock 
>>> is the special instance and should have a declarative distinction. Only 
>>> codifying the recursive type would significantly cut down on the size of the 
>>> series and still provide equal type and visual clarification.
>>
>> A "normal" spinlock is non-recursive.
>>
>> A recursive spinlock in Xen can be either taken recursive, or it can be taken
>> non-recursive, causing further recursive attempts to spin.
> 
> Yes, I understand the current situation.
> 
>> So the explicit non-recursive locking applies to that special treatment of
>> recursive spinlocks.
> 
> I understand this, but to help clarify what I am saying is that individuals 
> coming from outside Xen would expect is the spinlock family of calls to behave 
> as a non-recursive spinlocks and recursive spinlock family of calls would 
> provide the recursive behavior. Currently Xen's behavior is backwards to this, 
> which this series continues and is a valid approach. Here spinlock and recursive 
> spinlock family of calls are recursive and must use non-recursive spinlock 
> family to have "normal" spinlock behavior. IMHO it would greatly simplify the 

My series doesn't change treatment of normal spinlocks. They are still used via
spin_{lock,locked,unlock}.

> code and align with the "normal" understanding of spinlocks if instead 
> spin_{lock,locked,unlock} macros were the non-recursive calls and 
> spin_{lock,locked,unlock}_recursive macros were the recursive calls. Then there 
> would only be two suites of calls for spinlocks and a lot less keystrokes for 
> need for most development.

Only for the recursive spinlock type must the user specify whether a given
locking attempt is meant to be recursive or non-recursive. This is similar to
an rwlock, where the user must specify whether to lock as a reader or a
writer.


Juergen
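
A minimal sketch of that analogy at the call sites: rwlock_t with
read_lock()/write_lock() exists today, while rspinlock_t and the
rspin_*()/nrspin_*() names are purely hypothetical stand-ins for a
dedicated recursive lock type.

    static rwlock_t rwl;       /* initialisation omitted */
    static rspinlock_t rsl;    /* hypothetical dedicated type */

    static void example(void)
    {
        read_lock(&rwl);          /* caller states: shared access           */
        read_unlock(&rwl);
        write_lock(&rwl);         /* caller states: exclusive access        */
        write_unlock(&rwl);

        rspin_lock(&rsl);         /* recursive take: may nest on this CPU   */
        rspin_unlock(&rsl);
        nrspin_lock(&rsl);        /* non-recursive take: nesting would spin */
        nrspin_unlock(&rsl);
    }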


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH 0/3] xen/spinlock: make recursive spinlocks a dedicated type
  2022-12-14 16:36       ` Juergen Gross
@ 2022-12-15  7:48         ` Jan Beulich
  2022-12-15  8:03           ` Juergen Gross
  0 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2022-12-15  7:48 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Wei Liu,
	Roger Pau Monné,
	Tamas K Lengyel, Lukasz Hawrylko, Mateusz Mówka,
	Paul Durrant, Daniel P. Smith, xen-devel

On 14.12.2022 17:36, Juergen Gross wrote:
> On 14.12.22 17:25, Daniel P. Smith wrote:
>> On 12/14/22 10:31, Juergen Gross wrote:
>>> On 14.12.22 16:03, Daniel P. Smith wrote:
>>>>
>>>> On 9/10/22 11:49, Juergen Gross wrote:
>>>>> Instead of being able to use normal spinlocks as recursive ones, too,
>>>>> make recursive spinlocks a special lock type.
>>>>>
>>>>> This will make the spinlock structure smaller in production builds and
>>>>> add type-safety.
>>>>
>>>> Just a little yak shaving, IMHO a spinlock is normally not expected to be 
>>>> recursive. Thus explicitly naming a spinlock as non-recursive I find to be 
>>>> redundant along with being expensive for typing. Whereas a recursive spinlock 
>>>> is the special instance and should have a declarative distinction. Only 
>>>> codifying the recursive type would significantly cut down on the size of the 
>>>> series and still provide equal type and visual clarification.
>>>
>>> A "normal" spinlock is non-recursive.
>>>
>>> A recursive spinlock in Xen can be either taken recursive, or it can be taken
>>> non-recursive, causing further recursive attempts to spin.
>>
>> Yes, I understand the current situation.
>>
>>> So the explicit non-recursive locking applies to that special treatment of
>>> recursive spinlocks.
>>
>> I understand this, but to help clarify what I am saying is that individuals 
>> coming from outside Xen would expect is the spinlock family of calls to behave 
>> as a non-recursive spinlocks and recursive spinlock family of calls would 
>> provide the recursive behavior. Currently Xen's behavior is backwards to this, 
>> which this series continues and is a valid approach. Here spinlock and recursive 
>> spinlock family of calls are recursive and must use non-recursive spinlock 
>> family to have "normal" spinlock behavior. IMHO it would greatly simplify the 
> 
> My series doesn't change treatment of normal spinlocks. They are still used via
> spin_{lock,locked,unlock}.
> 
>> code and align with the "normal" understanding of spinlocks if instead 
>> spin_{lock,locked,unlock} macros were the non-recursive calls and 
>> spin_{lock,locked,unlock}_recursive macros were the recursive calls. Then there 
>> would only be two suites of calls for spinlocks and a lot less keystrokes for 
>> need for most development.
> 
> Only the recursive spinlock type user must specify, whether a lock is meant to
> be handled as a recursive or a non-recursive lock attempt. This is similar to
> a rwlock, where the user must specify whether to lock as a reader or a writer.

While I can't come up with anything neat right away, it feels like it should be
possible to come up with some trickery to make spin_lock() usable on both lock
types, eliminating the need for ..._nonrecursive() variants visible at use sites
(they may still be necessary as helpers then). At least if a spinlock_t instance
wasn't embedded in the recursive lock struct (as I did suggest), then something
along the lines of what tgmath.h does may be possible.

Jan
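
One conceivable shape for such trickery, assuming fully separate
spinlock_t and rspinlock_t types, is a tgmath.h-style _Generic dispatch;
the helper names below are made up purely for illustration.

    /* Sketch: route spin_lock() by lock type via C11 _Generic. */
    void _spin_lock(spinlock_t *lock);
    void _rspin_lock_nonrecursive(rspinlock_t *lock);

    #define spin_lock(l)                                         \
        _Generic((l),                                            \
                 spinlock_t *:  _spin_lock,                      \
                 rspinlock_t *: _rspin_lock_nonrecursive)(l)

Whether hiding the distinction at the use sites like this is desirable
is exactly the concern raised in the reply that follows.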


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH 0/3] xen/spinlock: make recursive spinlocks a dedicated type
  2022-12-15  7:48         ` Jan Beulich
@ 2022-12-15  8:03           ` Juergen Gross
  2022-12-15  8:29             ` Julien Grall
  0 siblings, 1 reply; 19+ messages in thread
From: Juergen Gross @ 2022-12-15  8:03 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Wei Liu,
	Roger Pau Monné,
	Tamas K Lengyel, Lukasz Hawrylko, Mateusz Mówka,
	Paul Durrant, Daniel P. Smith, xen-devel



On 15.12.22 08:48, Jan Beulich wrote:
> On 14.12.2022 17:36, Juergen Gross wrote:
>> On 14.12.22 17:25, Daniel P. Smith wrote:
>>> On 12/14/22 10:31, Juergen Gross wrote:
>>>> On 14.12.22 16:03, Daniel P. Smith wrote:
>>>>>
>>>>> On 9/10/22 11:49, Juergen Gross wrote:
>>>>>> Instead of being able to use normal spinlocks as recursive ones, too,
>>>>>> make recursive spinlocks a special lock type.
>>>>>>
>>>>>> This will make the spinlock structure smaller in production builds and
>>>>>> add type-safety.
>>>>>
>>>>> Just a little yak shaving, IMHO a spinlock is normally not expected to be
>>>>> recursive. Thus explicitly naming a spinlock as non-recursive I find to be
>>>>> redundant along with being expensive for typing. Whereas a recursive spinlock
>>>>> is the special instance and should have a declarative distinction. Only
>>>>> codifying the recursive type would significantly cut down on the size of the
>>>>> series and still provide equal type and visual clarification.
>>>>
>>>> A "normal" spinlock is non-recursive.
>>>>
>>>> A recursive spinlock in Xen can be either taken recursive, or it can be taken
>>>> non-recursive, causing further recursive attempts to spin.
>>>
>>> Yes, I understand the current situation.
>>>
>>>> So the explicit non-recursive locking applies to that special treatment of
>>>> recursive spinlocks.
>>>
>>> I understand this, but to help clarify what I am saying is that individuals
>>> coming from outside Xen would expect is the spinlock family of calls to behave
>>> as a non-recursive spinlocks and recursive spinlock family of calls would
>>> provide the recursive behavior. Currently Xen's behavior is backwards to this,
>>> which this series continues and is a valid approach. Here spinlock and recursive
>>> spinlock family of calls are recursive and must use non-recursive spinlock
>>> family to have "normal" spinlock behavior. IMHO it would greatly simplify the
>>
>> My series doesn't change treatment of normal spinlocks. They are still used via
>> spin_{lock,locked,unlock}.
>>
>>> code and align with the "normal" understanding of spinlocks if instead
>>> spin_{lock,locked,unlock} macros were the non-recursive calls and
>>> spin_{lock,locked,unlock}_recursive macros were the recursive calls. Then there
>>> would only be two suites of calls for spinlocks and a lot less keystrokes for
>>> need for most development.
>>
>> Only the recursive spinlock type user must specify, whether a lock is meant to
>> be handled as a recursive or a non-recursive lock attempt. This is similar to
>> a rwlock, where the user must specify whether to lock as a reader or a writer.
> 
> While I can't come up with anything neat right away, it feels like it should be
> possible to come up with some trickery to make spin_lock() usable on both lock
> types, eliminating the need to ..._nonrecursive() variants visible at use sites
> (they may still be necessary as helpers then). At least if a spinlock_t instance
> wasn't embedded in the recursive lock struct (as I did suggest), then something
> along the lines of what tgmath.h does may be possible.

Might be, but do we really want that?

Wouldn't it make more sense to let the user explicitly say that he wants a lock
to be taken non-recursively? Allowing "spin_lock()" would add some more risk to
use it by accident e.g. because of copy/paste without noticing that it is a
recursive lock that is taken non-recursively. Same applies for patch reviews.

I'd prefer to make this easily visible.


Juergen


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH 0/3] xen/spinlock: make recursive spinlocks a dedicated type
  2022-12-15  8:03           ` Juergen Gross
@ 2022-12-15  8:29             ` Julien Grall
  0 siblings, 0 replies; 19+ messages in thread
From: Julien Grall @ 2022-12-15  8:29 UTC (permalink / raw)
  To: Juergen Gross, Jan Beulich
  Cc: Stefano Stabellini, Bertrand Marquis, Volodymyr Babchuk,
	Andrew Cooper, George Dunlap, Wei Liu, Roger Pau Monné,
	Tamas K Lengyel, Lukasz Hawrylko, Mateusz Mówka,
	Paul Durrant, Daniel P. Smith, xen-devel



On 15/12/2022 08:03, Juergen Gross wrote:
> Might be, but do we really want that?
> 
> Wouldn't it make more sense to let the user explicitly say that he wants 
> a lock
> to be taken non-recursively? Allowing "spin_lock()" would add some more 
> risk to
> use it by accident e.g. because of copy/paste without noticing that it is a
> recursive lock that is taken non-recursively. Same applies for patch 
> reviews.
> 
> I'd prefer to make this easily visible.

FWIW, +1.

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2022-12-15  8:30 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-09-10 15:49 [RFC PATCH 0/3] xen/spinlock: make recursive spinlocks a dedicated type Juergen Gross
2022-09-10 15:49 ` [RFC PATCH 1/3] xen/spinlock: add explicit non-recursive locking functions Juergen Gross
2022-12-14 10:21   ` Jan Beulich
2022-12-14 10:32     ` Juergen Gross
2022-12-14 10:32     ` Jan Beulich
2022-12-14 10:38       ` Juergen Gross
2022-09-10 15:49 ` [RFC PATCH 2/3] xen/spinlock: split recursive spinlocks from normal ones Juergen Gross
2022-12-14 10:39   ` Jan Beulich
2022-12-14 11:10     ` Juergen Gross
2022-12-14 11:29       ` Jan Beulich
2022-12-14 11:38         ` Juergen Gross
2022-09-10 15:49 ` [RFC PATCH 3/3] xen/spinlock: support higher number of cpus Juergen Gross
2022-12-14 15:03 ` [RFC PATCH 0/3] xen/spinlock: make recursive spinlocks a dedicated type Daniel P. Smith
2022-12-14 15:31   ` Juergen Gross
2022-12-14 16:25     ` Daniel P. Smith
2022-12-14 16:36       ` Juergen Gross
2022-12-15  7:48         ` Jan Beulich
2022-12-15  8:03           ` Juergen Gross
2022-12-15  8:29             ` Julien Grall
