* [PATCHv2] efi/unaccepted: Fix soft lockups caused by parallel memory acceptance
@ 2023-10-16 16:31 Kirill A. Shutemov
  2023-10-16 16:41 ` Vlastimil Babka
                   ` (2 more replies)
  0 siblings, 3 replies; 15+ messages in thread
From: Kirill A. Shutemov @ 2023-10-16 16:31 UTC
  To: Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Sean Christopherson, Andrew Morton, Joerg Roedel, Ard Biesheuvel
  Cc: Andi Kleen, Kuppuswamy Sathyanarayanan, David Rientjes,
	Vlastimil Babka, Tom Lendacky, Thomas Gleixner, Peter Zijlstra,
	Paolo Bonzini, Ingo Molnar, Dario Faggioli, Mike Rapoport,
	David Hildenbrand, Mel Gorman, marcelo.cerri, tim.gardner,
	philip.cox, aarcange, peterx, x86, linux-mm, linux-coco,
	linux-efi, linux-kernel, Kirill A. Shutemov, stable,
	Nikolay Borisov

Michael reported soft lockups on a system that has unaccepted memory.
This occurs when a user attempts to allocate and accept memory on
multiple CPUs simultaneously.

The root cause of the issue is that memory acceptance is serialized with
a spinlock, allowing only one CPU to accept memory at a time. The other
CPUs spin and wait for their turn, leading to starvation and soft lockup
reports.
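
For reference, a simplified sketch of the pre-patch flow (not the
verbatim code): the slow arch_accept_memory() call runs with the
spinlock held, so every other CPU that needs acceptance spins for the
whole duration:

	spin_lock_irqsave(&unaccepted_memory_lock, flags);
	for_each_set_bitrange_from(range_start, range_end, unaccepted->bitmap,
				   DIV_ROUND_UP(end, unit_size)) {
		phys_start = range_start * unit_size + unaccepted->phys_base;
		phys_end = range_end * unit_size + unaccepted->phys_base;

		/* Slow operation, executed with the lock still held */
		arch_accept_memory(phys_start, phys_end);
		bitmap_clear(unaccepted->bitmap, range_start, len);
	}
	spin_unlock_irqrestore(&unaccepted_memory_lock, flags);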

To address this, the code has been modified to release the spinlock
while accepting memory. This allows for parallel memory acceptance on
multiple CPUs.

A newly introduced "accepting_list" keeps track of which memory is
currently being accepted. This is necessary to prevent parallel
acceptance of the same memory block. If a collision occurs, the lock is
released and the process is retried.
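
For illustration, one possible interleaving of two CPUs racing to
accept overlapping ranges under the new scheme (hypothetical timing):

	CPU0					CPU1
	----					----
	lock; no overlap in accepting_list
	list_add(&range.list, ...)
	spin_unlock() (IRQs stay off)
						lock; sees CPU0's entry overlap
	arch_accept_memory(...)			unlock; cpu_relax(); retry
	spin_lock(); bitmap_clear()
	list_del(&range.list); unlock
						lock; no overlap anymore; the
						bits are already clear, so the
						accept loop finds nothing to do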

Such collisions should rarely occur. The main path for memory acceptance
is the page allocator, which accepts memory in MAX_ORDER chunks. As long
as the MAX_ORDER chunk size is equal to or larger than unit_size,
collisions will never occur on that path because the caller fully owns
the memory block being accepted.
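
For example, assuming typical values (4KiB base pages, hence MAX_ORDER
chunks of 4MiB, and a firmware-provided unit_size of 2MiB), every
MAX_ORDER chunk covers exactly two whole unit_size blocks, and no other
CPU can be accepting either of them at the same time.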

Aside from the page allocator, only memblock and deferred_free_range()
accept memory, but this only happens during boot.

The code has been tested with unit_size == 128MiB to trigger collisions
and validate the retry codepath.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reported-by: Michael Roth <michael.roth@amd.com>
Fixes: 2053bc57f367 ("efi: Add unaccepted memory support")
Cc: <stable@kernel.org>
Reviewed-by: Nikolay Borisov <nik.borisov@suse.com>
---

  v2:
   - Fix deadlock (Vlastimil);
   - Fix comments (Vlastimil);
   - s/cond_resched()/cpu_relax()/ -- cond_resched() cannot be called
     from atomic context;

---
 drivers/firmware/efi/unaccepted_memory.c | 71 ++++++++++++++++++++++--
 1 file changed, 67 insertions(+), 4 deletions(-)

diff --git a/drivers/firmware/efi/unaccepted_memory.c b/drivers/firmware/efi/unaccepted_memory.c
index 853f7dc3c21d..fa3363889224 100644
--- a/drivers/firmware/efi/unaccepted_memory.c
+++ b/drivers/firmware/efi/unaccepted_memory.c
@@ -5,9 +5,17 @@
 #include <linux/spinlock.h>
 #include <asm/unaccepted_memory.h>
 
-/* Protects unaccepted memory bitmap */
+/* Protects unaccepted memory bitmap and accepting_list */
 static DEFINE_SPINLOCK(unaccepted_memory_lock);
 
+struct accept_range {
+	struct list_head list;
+	unsigned long start;
+	unsigned long end;
+};
+
+static LIST_HEAD(accepting_list);
+
 /*
  * accept_memory() -- Consult bitmap and accept the memory if needed.
  *
@@ -24,6 +32,7 @@ void accept_memory(phys_addr_t start, phys_addr_t end)
 {
 	struct efi_unaccepted_memory *unaccepted;
 	unsigned long range_start, range_end;
+	struct accept_range range, *entry;
 	unsigned long flags;
 	u64 unit_size;
 
@@ -78,20 +87,74 @@ void accept_memory(phys_addr_t start, phys_addr_t end)
 	if (end > unaccepted->size * unit_size * BITS_PER_BYTE)
 		end = unaccepted->size * unit_size * BITS_PER_BYTE;
 
-	range_start = start / unit_size;
-
+	range.start = start / unit_size;
+	range.end = DIV_ROUND_UP(end, unit_size);
+retry:
 	spin_lock_irqsave(&unaccepted_memory_lock, flags);
+
+	/*
+	 * Check if anybody else is working on accepting the same range of memory.
+	 *
+	 * The check is done with unit_size granularity. It is crucial to catch
+	 * all accept requests to the same unit_size block, even if they don't
+	 * overlap on physical address level.
+	 */
+	list_for_each_entry(entry, &accepting_list, list) {
+		if (entry->end < range.start)
+			continue;
+		if (entry->start >= range.end)
+			continue;
+
+		/*
+		 * Somebody else is accepting the range, or at least part of it.
+		 *
+		 * Drop the lock and retry until it is complete.
+		 */
+		spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
+
+		/*
+		 * The code is reachable from atomic context.
+		 * cond_resched() cannot be used.
+		 */
+		cpu_relax();
+
+		goto retry;
+	}
+
+	/*
+	 * Register that the range is about to be accepted.
+	 * Make sure nobody else will accept it.
+	 */
+	list_add(&range.list, &accepting_list);
+
+	range_start = range.start;
 	for_each_set_bitrange_from(range_start, range_end, unaccepted->bitmap,
-				   DIV_ROUND_UP(end, unit_size)) {
+				   range.end) {
 		unsigned long phys_start, phys_end;
 		unsigned long len = range_end - range_start;
 
 		phys_start = range_start * unit_size + unaccepted->phys_base;
 		phys_end = range_end * unit_size + unaccepted->phys_base;
 
+		/*
+		 * Keep interrupts disabled until the accept operation is
+		 * complete in order to prevent deadlocks.
+		 *
+		 * Enabling interrupts before calling arch_accept_memory()
+		 * creates an opportunity for an interrupt handler to request
+		 * acceptance for the same memory. The handler will continuously
+		 * spin with interrupts disabled, preventing other tasks from
+		 * making progress with the acceptance process.
+		 */
+		spin_unlock(&unaccepted_memory_lock);
+
 		arch_accept_memory(phys_start, phys_end);
+
+		spin_lock(&unaccepted_memory_lock);
 		bitmap_clear(unaccepted->bitmap, range_start, len);
 	}
+
+	list_del(&range.list);
 	spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
 }
 
-- 
2.41.0


* Re: [PATCHv2] efi/unaccepted: Fix soft lockups caused by parallel memory acceptance
@ 2023-10-18 18:56 Jianxiong Gao
  0 siblings, 0 replies; 15+ messages in thread
From: Jianxiong Gao @ 2023-10-18 18:56 UTC
  To: michael.roth
  Cc: aarcange, ak, akpm, ardb, bp, dave.hansen, david, dfaggioli,
	jroedel, kirill.shutemov, linux-coco, linux-efi, linux-kernel,
	linux-mm, luto, Marcelo Henrique Cerri, mgorman, mingo,
	nik.borisov, pbonzini, peterx, peterz, philip.cox,
	David Rientjes, rppt, sathyanarayanan.kuppuswamy,
	Sean Christopherson, stable, tglx, thomas.lendacky, tim.gardner,
	vbabka, x86

The patch improves stability in our testing. We have not been able to
reproduce the soft lockup issue in over 20 runs with 176 vCPUs so far.

Thanks!
-- 
Jianxiong Gao


Thread overview: 15+ messages
2023-10-16 16:31 [PATCHv2] efi/unaccepted: Fix soft lockups caused by parallel memory acceptance Kirill A. Shutemov
2023-10-16 16:41 ` Vlastimil Babka
2023-10-16 17:55 ` Matthew Wilcox
2023-10-16 21:39   ` Kirill A. Shutemov
2023-10-17  7:42     ` Ard Biesheuvel
2023-10-17  9:44       ` Kirill A. Shutemov
2023-10-17  9:57         ` Ard Biesheuvel
2023-10-17 10:17       ` Peter Zijlstra
2023-10-17 15:36         ` Ard Biesheuvel
2023-10-16 20:54 ` Michael Roth
2023-10-17  7:02   ` Vlastimil Babka
2023-11-01  0:45     ` Michael Roth
2023-11-02 13:56       ` Kirill A. Shutemov
2023-11-03  0:01         ` Michael Roth
2023-10-18 18:56 Jianxiong Gao
