linux-bcache.vger.kernel.org archive mirror
* [PATCH 00/13] treewide: Refactor heap related implementation
@ 2024-03-19 17:59 Kuan-Wei Chiu
  2024-03-19 17:59 ` [PATCH 01/13] perf/core: Fix several typos Kuan-Wei Chiu
                   ` (13 more replies)
  0 siblings, 14 replies; 33+ messages in thread
From: Kuan-Wei Chiu @ 2024-03-19 17:59 UTC (permalink / raw)
  To: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung, akpm
  Cc: bfoster, mark.rutland, alexander.shishkin, jolsa, irogers,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel, Kuan-Wei Chiu

Hello,

This patch series makes several adjustments to the kernel's heap code.
First, it adds a type-safe interface to min_heap and introduces several
new helper functions. It then replaces the open-coded heap
implementations in bcache and bcachefs with the generic min_heap
implementation from include/linux. Finally, it corrects several typos.

Previous discussion with Kent Overstreet:
https://lkml.kernel.org/ioyfizrzq7w7mjrqcadtzsfgpuntowtjdw5pgn4qhvsdp4mqqg@nrlek5vmisbu

Regards,
Kuan-Wei

Kuan-Wei Chiu (13):
  perf/core: Fix several typos
  bcache: Fix typo
  bcachefs: Fix typo
  lib min_heap: Add type safe interface
  lib min_heap: Add min_heap_init()
  lib min_heap: Add min_heap_peek()
  lib min_heap: Add min_heap_full()
  lib min_heap: Add args for min_heap_callbacks
  lib min_heap: Update min_heap_push() and min_heap_pop() to return bool
    values
  bcache: Remove heap-related macros and switch to generic min_heap
  lib min_heap: Add min_heap_del()
  lib min_heap: Add min_heap_sift_up()
  bcachefs: Remove heap-related macros and switch to generic min_heap

 drivers/md/bcache/alloc.c      |  66 ++++++++----
 drivers/md/bcache/bcache.h     |   2 +-
 drivers/md/bcache/bset.c       |  73 ++++++++-----
 drivers/md/bcache/bset.h       |  38 ++++---
 drivers/md/bcache/btree.c      |  27 ++++-
 drivers/md/bcache/extents.c    |  44 ++++----
 drivers/md/bcache/movinggc.c   |  40 ++++++--
 drivers/md/bcache/super.c      |  16 +++
 drivers/md/bcache/sysfs.c      |   3 +
 drivers/md/bcache/util.c       |   2 +-
 drivers/md/bcache/util.h       |  81 +--------------
 drivers/md/dm-vdo/repair.c     |  29 +++---
 drivers/md/dm-vdo/slab-depot.c |  21 ++--
 fs/bcachefs/clock.c            |  53 +++++++---
 fs/bcachefs/clock_types.h      |   2 +-
 fs/bcachefs/ec.c               |  99 +++++++++++-------
 fs/bcachefs/ec_types.h         |   2 +-
 fs/bcachefs/util.c             |   2 +-
 fs/bcachefs/util.h             | 127 ++---------------------
 include/linux/min_heap.h       | 180 ++++++++++++++++++++++++++-------
 kernel/events/core.c           |  53 +++++-----
 lib/test_min_heap.c            |  75 +++++++-------
 22 files changed, 565 insertions(+), 470 deletions(-)

-- 
2.34.1


^ permalink raw reply	[flat|nested] 33+ messages in thread

* [PATCH 01/13] perf/core: Fix several typos
  2024-03-19 17:59 [PATCH 00/13] treewide: Refactor heap related implementation Kuan-Wei Chiu
@ 2024-03-19 17:59 ` Kuan-Wei Chiu
  2024-03-19 19:31   ` Ian Rogers
  2024-03-19 17:59 ` [PATCH 02/13] bcache: Fix typo Kuan-Wei Chiu
                   ` (12 subsequent siblings)
  13 siblings, 1 reply; 33+ messages in thread
From: Kuan-Wei Chiu @ 2024-03-19 17:59 UTC (permalink / raw)
  To: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung, akpm
  Cc: bfoster, mark.rutland, alexander.shishkin, jolsa, irogers,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel, Kuan-Wei Chiu

Replace 'artifically' with 'artificially'.
Replace 'irrespecive' with 'irrespective'.
Replace 'futher' with 'further'.
Replace 'sufficent' with 'sufficient'.

Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
---
 kernel/events/core.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 724e6d7e128f..10ac2db83f14 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -534,7 +534,7 @@ void perf_sample_event_took(u64 sample_len_ns)
 	__this_cpu_write(running_sample_length, running_len);
 
 	/*
-	 * Note: this will be biased artifically low until we have
+	 * Note: this will be biased artificially low until we have
 	 * seen NR_ACCUMULATED_SAMPLES. Doing it this way keeps us
 	 * from having to maintain a count.
 	 */
@@ -596,10 +596,10 @@ static inline u64 perf_event_clock(struct perf_event *event)
  *
  * Event groups make things a little more complicated, but not terribly so. The
  * rules for a group are that if the group leader is OFF the entire group is
- * OFF, irrespecive of what the group member states are. This results in
+ * OFF, irrespective of what the group member states are. This results in
  * __perf_effective_state().
  *
- * A futher ramification is that when a group leader flips between OFF and
+ * A further ramification is that when a group leader flips between OFF and
  * !OFF, we need to update all group member times.
  *
  *
@@ -891,7 +891,7 @@ static int perf_cgroup_ensure_storage(struct perf_event *event,
 	int cpu, heap_size, ret = 0;
 
 	/*
-	 * Allow storage to have sufficent space for an iterator for each
+	 * Allow storage to have sufficient space for an iterator for each
 	 * possibly nested cgroup plus an iterator for events with no cgroup.
 	 */
 	for (heap_size = 1; css; css = css->parent)
-- 
2.34.1



* [PATCH 02/13] bcache: Fix typo
  2024-03-19 17:59 [PATCH 00/13] treewide: Refactor heap related implementation Kuan-Wei Chiu
  2024-03-19 17:59 ` [PATCH 01/13] perf/core: Fix several typos Kuan-Wei Chiu
@ 2024-03-19 17:59 ` Kuan-Wei Chiu
  2024-03-19 20:13   ` Ian Rogers
  2024-03-19 17:59 ` [PATCH 03/13] bcachefs: " Kuan-Wei Chiu
                   ` (11 subsequent siblings)
  13 siblings, 1 reply; 33+ messages in thread
From: Kuan-Wei Chiu @ 2024-03-19 17:59 UTC (permalink / raw)
  To: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung, akpm
  Cc: bfoster, mark.rutland, alexander.shishkin, jolsa, irogers,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel, Kuan-Wei Chiu

Replace 'utiility' with 'utility'.

Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
---
 drivers/md/bcache/util.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/md/bcache/util.c b/drivers/md/bcache/util.c
index ae380bc3992e..410d8cb49e50 100644
--- a/drivers/md/bcache/util.c
+++ b/drivers/md/bcache/util.c
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0
 /*
- * random utiility code, for bcache but in theory not specific to bcache
+ * random utility code, for bcache but in theory not specific to bcache
  *
  * Copyright 2010, 2011 Kent Overstreet <kent.overstreet@gmail.com>
  * Copyright 2012 Google, Inc.
-- 
2.34.1



* [PATCH 03/13] bcachefs: Fix typo
  2024-03-19 17:59 [PATCH 00/13] treewide: Refactor heap related implementation Kuan-Wei Chiu
  2024-03-19 17:59 ` [PATCH 01/13] perf/core: Fix several typos Kuan-Wei Chiu
  2024-03-19 17:59 ` [PATCH 02/13] bcache: Fix typo Kuan-Wei Chiu
@ 2024-03-19 17:59 ` Kuan-Wei Chiu
  2024-03-19 20:13   ` Ian Rogers
  2024-03-19 17:59 ` [PATCH 04/13] lib min_heap: Add type safe interface Kuan-Wei Chiu
                   ` (10 subsequent siblings)
  13 siblings, 1 reply; 33+ messages in thread
From: Kuan-Wei Chiu @ 2024-03-19 17:59 UTC (permalink / raw)
  To: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung, akpm
  Cc: bfoster, mark.rutland, alexander.shishkin, jolsa, irogers,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel, Kuan-Wei Chiu

Replace 'utiility' with 'utility'.

Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
---
 fs/bcachefs/util.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/bcachefs/util.c b/fs/bcachefs/util.c
index 216fadf16928..f5a16ad65424 100644
--- a/fs/bcachefs/util.c
+++ b/fs/bcachefs/util.c
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0
 /*
- * random utiility code, for bcache but in theory not specific to bcache
+ * random utility code, for bcache but in theory not specific to bcache
  *
  * Copyright 2010, 2011 Kent Overstreet <kent.overstreet@gmail.com>
  * Copyright 2012 Google, Inc.
-- 
2.34.1



* [PATCH 04/13] lib min_heap: Add type safe interface
  2024-03-19 17:59 [PATCH 00/13] treewide: Refactor heap related implementation Kuan-Wei Chiu
                   ` (2 preceding siblings ...)
  2024-03-19 17:59 ` [PATCH 03/13] bcachefs: " Kuan-Wei Chiu
@ 2024-03-19 17:59 ` Kuan-Wei Chiu
  2024-03-19 19:49   ` Ian Rogers
  2024-03-19 17:59 ` [PATCH 05/13] lib min_heap: Add min_heap_init() Kuan-Wei Chiu
                   ` (9 subsequent siblings)
  13 siblings, 1 reply; 33+ messages in thread
From: Kuan-Wei Chiu @ 2024-03-19 17:59 UTC (permalink / raw)
  To: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung, akpm
  Cc: bfoster, mark.rutland, alexander.shishkin, jolsa, irogers,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel, Kuan-Wei Chiu

Introduce a type-safe interface for min_heap by adding small macro
wrappers around functions and using a 0-size array to store type
information. This enables the use of __minheap_cast and
__minheap_obj_size macros for type casting and obtaining element size.
The implementation draws inspiration from generic-radix-tree.h,
eliminating the need to pass element size in min_heap_callbacks.

Link: https://lkml.kernel.org/ioyfizrzq7w7mjrqcadtzsfgpuntowtjdw5pgn4qhvsdp4mqqg@nrlek5vmisbu
Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
---
 drivers/md/dm-vdo/repair.c     | 21 +++++-----
 drivers/md/dm-vdo/slab-depot.c | 13 +++---
 include/linux/min_heap.h       | 75 +++++++++++++++++++++++-----------
 kernel/events/core.c           | 35 ++++++++--------
 lib/test_min_heap.c            | 49 +++++++++++-----------
 5 files changed, 107 insertions(+), 86 deletions(-)

diff --git a/drivers/md/dm-vdo/repair.c b/drivers/md/dm-vdo/repair.c
index defc9359f10e..7663fa2098f4 100644
--- a/drivers/md/dm-vdo/repair.c
+++ b/drivers/md/dm-vdo/repair.c
@@ -51,6 +51,8 @@ struct recovery_point {
 	bool increment_applied;
 };
 
+MIN_HEAP(struct numbered_block_mapping *, replay_heap);
+
 struct repair_completion {
 	/* The completion header */
 	struct vdo_completion completion;
@@ -97,7 +99,7 @@ struct repair_completion {
 	 * order, then original journal order. This permits efficient iteration over the journal
 	 * entries in order.
 	 */
-	struct min_heap replay_heap;
+	struct replay_heap replay_heap;
 	/* Fields tracking progress through the journal entries. */
 	struct numbered_block_mapping *current_entry;
 	struct numbered_block_mapping *current_unfetched_entry;
@@ -163,25 +165,24 @@ static void swap_mappings(void *item1, void *item2)
 }
 
 static const struct min_heap_callbacks repair_min_heap = {
-	.elem_size = sizeof(struct numbered_block_mapping),
 	.less = mapping_is_less_than,
 	.swp = swap_mappings,
 };
 
 static struct numbered_block_mapping *sort_next_heap_element(struct repair_completion *repair)
 {
-	struct min_heap *heap = &repair->replay_heap;
+	struct replay_heap *heap = &repair->replay_heap;
 	struct numbered_block_mapping *last;
 
-	if (heap->nr == 0)
+	if (heap->heap.nr == 0)
 		return NULL;
 
 	/*
 	 * Swap the next heap element with the last one on the heap, popping it off the heap,
 	 * restore the heap invariant, and return a pointer to the popped element.
 	 */
-	last = &repair->entries[--heap->nr];
-	swap_mappings(heap->data, last);
+	last = &repair->entries[--heap->heap.nr];
+	swap_mappings(heap->heap.data, last);
 	min_heapify(heap, 0, &repair_min_heap);
 	return last;
 }
@@ -1117,11 +1118,9 @@ static void recover_block_map(struct vdo_completion *completion)
 	 * Organize the journal entries into a binary heap so we can iterate over them in sorted
 	 * order incrementally, avoiding an expensive sort call.
 	 */
-	repair->replay_heap = (struct min_heap) {
-		.data = repair->entries,
-		.nr = repair->block_map_entry_count,
-		.size = repair->block_map_entry_count,
-	};
+	repair->replay_heap.heap.data = repair->entries;
+	repair->replay_heap.heap.nr = repair->block_map_entry_count;
+	repair->replay_heap.heap.size = repair->block_map_entry_count;
 	min_heapify_all(&repair->replay_heap, &repair_min_heap);
 
 	vdo_log_info("Replaying %zu recovery entries into block map",
diff --git a/drivers/md/dm-vdo/slab-depot.c b/drivers/md/dm-vdo/slab-depot.c
index 46e4721e5b4f..3309480170c3 100644
--- a/drivers/md/dm-vdo/slab-depot.c
+++ b/drivers/md/dm-vdo/slab-depot.c
@@ -3309,7 +3309,6 @@ static void swap_slab_statuses(void *item1, void *item2)
 }
 
 static const struct min_heap_callbacks slab_status_min_heap = {
-	.elem_size = sizeof(struct slab_status),
 	.less = slab_status_is_less_than,
 	.swp = swap_slab_statuses,
 };
@@ -3509,7 +3508,7 @@ static int get_slab_statuses(struct block_allocator *allocator,
 static int __must_check vdo_prepare_slabs_for_allocation(struct block_allocator *allocator)
 {
 	struct slab_status current_slab_status;
-	struct min_heap heap;
+	MIN_HEAP(struct slab_status *, heap) heap;
 	int result;
 	struct slab_status *slab_statuses;
 	struct slab_depot *depot = allocator->depot;
@@ -3521,14 +3520,12 @@ static int __must_check vdo_prepare_slabs_for_allocation(struct block_allocator
 		return result;
 
 	/* Sort the slabs by cleanliness, then by emptiness hint. */
-	heap = (struct min_heap) {
-		.data = slab_statuses,
-		.nr = allocator->slab_count,
-		.size = allocator->slab_count,
-	};
+	heap.heap.data = slab_statuses;
+	heap.heap.nr = allocator->slab_count;
+	heap.heap.size = allocator->slab_count;
 	min_heapify_all(&heap, &slab_status_min_heap);
 
-	while (heap.nr > 0) {
+	while (heap.heap.nr > 0) {
 		bool high_priority;
 		struct vdo_slab *slab;
 		struct slab_journal *journal;
diff --git a/include/linux/min_heap.h b/include/linux/min_heap.h
index d52daf45861b..c3635a7fdb88 100644
--- a/include/linux/min_heap.h
+++ b/include/linux/min_heap.h
@@ -7,45 +7,59 @@
 #include <linux/types.h>
 
 /**
- * struct min_heap - Data structure to hold a min-heap.
+ * struct __min_heap - Data structure to hold a min-heap.
  * @data: Start of array holding the heap elements.
  * @nr: Number of elements currently in the heap.
  * @size: Maximum number of elements that can be held in current storage.
  */
-struct min_heap {
+struct __min_heap {
 	void *data;
 	int nr;
 	int size;
 };
 
+/*
+ * We use a 0 size array to stash the type we're storing without taking any
+ * space at runtime - then the various accessor macros can use typeof() to get
+ * to it for casts/sizeof - we also force the alignment so that storing a type
+ * with a ridiculous alignment doesn't blow up the alignment or size of the
+ * min_heap.
+ */
+#define MIN_HEAP(_type, _name)			\
+struct _name {				\
+	struct __min_heap	heap;		\
+	_type type[0] __aligned(1);	\
+}
+
+#define __minheap_cast(_heap)		(typeof((_heap)->type[0]) *)
+#define __minheap_obj_size(_heap)	sizeof((_heap)->type[0])
+
 /**
  * struct min_heap_callbacks - Data/functions to customise the min_heap.
- * @elem_size: The nr of each element in bytes.
  * @less: Partial order function for this heap.
  * @swp: Swap elements function.
  */
 struct min_heap_callbacks {
-	int elem_size;
 	bool (*less)(const void *lhs, const void *rhs);
 	void (*swp)(void *lhs, void *rhs);
 };
 
 /* Sift the element at pos down the heap. */
 static __always_inline
-void min_heapify(struct min_heap *heap, int pos,
+void __min_heapify(struct __min_heap *heap, int pos, size_t elem_size,
 		const struct min_heap_callbacks *func)
 {
 	void *left, *right;
 	void *data = heap->data;
-	void *root = data + pos * func->elem_size;
+	void *root = data + pos * elem_size;
 	int i = pos, j;
 
 	/* Find the sift-down path all the way to the leaves. */
 	for (;;) {
 		if (i * 2 + 2 >= heap->nr)
 			break;
-		left = data + (i * 2 + 1) * func->elem_size;
-		right = data + (i * 2 + 2) * func->elem_size;
+		left = data + (i * 2 + 1) * elem_size;
+		right = data + (i * 2 + 2) * elem_size;
 		i = func->less(left, right) ? i * 2 + 1 : i * 2 + 2;
 	}
 
@@ -54,31 +68,37 @@ void min_heapify(struct min_heap *heap, int pos,
 		i = i * 2 + 1;
 
 	/* Backtrack to the correct location. */
-	while (i != pos && func->less(root, data + i * func->elem_size))
+	while (i != pos && func->less(root, data + i * elem_size))
 		i = (i - 1) / 2;
 
 	/* Shift the element into its correct place. */
 	j = i;
 	while (i != pos) {
 		i = (i - 1) / 2;
-		func->swp(data + i * func->elem_size, data + j * func->elem_size);
+		func->swp(data + i * elem_size, data + j * elem_size);
 	}
 }
 
+#define min_heapify(_heap, _pos, _func)	\
+	__min_heapify(&(_heap)->heap, _pos, __minheap_obj_size(_heap), _func)
+
 /* Floyd's approach to heapification that is O(nr). */
 static __always_inline
-void min_heapify_all(struct min_heap *heap,
+void __min_heapify_all(struct __min_heap *heap, size_t elem_size,
 		const struct min_heap_callbacks *func)
 {
 	int i;
 
 	for (i = heap->nr / 2 - 1; i >= 0; i--)
-		min_heapify(heap, i, func);
+		__min_heapify(heap, i, elem_size, func);
 }
 
+#define min_heapify_all(_heap, _func)	\
+	__min_heapify_all(&(_heap)->heap, __minheap_obj_size(_heap), _func)
+
 /* Remove minimum element from the heap, O(log2(nr)). */
 static __always_inline
-void min_heap_pop(struct min_heap *heap,
+void __min_heap_pop(struct __min_heap *heap, size_t elem_size,
 		const struct min_heap_callbacks *func)
 {
 	void *data = heap->data;
@@ -88,27 +108,33 @@ void min_heap_pop(struct min_heap *heap,
 
 	/* Place last element at the root (position 0) and then sift down. */
 	heap->nr--;
-	memcpy(data, data + (heap->nr * func->elem_size), func->elem_size);
-	min_heapify(heap, 0, func);
+	memcpy(data, data + (heap->nr * elem_size), elem_size);
+	__min_heapify(heap, 0, elem_size, func);
 }
 
+#define min_heap_pop(_heap, _func)	\
+	__min_heap_pop(&(_heap)->heap, __minheap_obj_size(_heap), _func)
+
 /*
  * Remove the minimum element and then push the given element. The
  * implementation performs 1 sift (O(log2(nr))) and is therefore more
  * efficient than a pop followed by a push that does 2.
  */
 static __always_inline
-void min_heap_pop_push(struct min_heap *heap,
-		const void *element,
+void __min_heap_pop_push(struct __min_heap *heap,
+		const void *element, size_t elem_size,
 		const struct min_heap_callbacks *func)
 {
-	memcpy(heap->data, element, func->elem_size);
-	min_heapify(heap, 0, func);
+	memcpy(heap->data, element, elem_size);
+	__min_heapify(heap, 0, elem_size, func);
 }
 
+#define min_heap_pop_push(_heap, _element, _func)	\
+	__min_heap_pop_push(&(_heap)->heap, _element, __minheap_obj_size(_heap), _func)
+
 /* Push an element on to the heap, O(log2(nr)). */
 static __always_inline
-void min_heap_push(struct min_heap *heap, const void *element,
+void __min_heap_push(struct __min_heap *heap, const void *element, size_t elem_size,
 		const struct min_heap_callbacks *func)
 {
 	void *data = heap->data;
@@ -120,17 +146,20 @@ void min_heap_push(struct min_heap *heap, const void *element,
 
 	/* Place at the end of data. */
 	pos = heap->nr;
-	memcpy(data + (pos * func->elem_size), element, func->elem_size);
+	memcpy(data + (pos * elem_size), element, elem_size);
 	heap->nr++;
 
 	/* Sift child at pos up. */
 	for (; pos > 0; pos = (pos - 1) / 2) {
-		child = data + (pos * func->elem_size);
-		parent = data + ((pos - 1) / 2) * func->elem_size;
+		child = data + (pos * elem_size);
+		parent = data + ((pos - 1) / 2) * elem_size;
 		if (func->less(parent, child))
 			break;
 		func->swp(parent, child);
 	}
 }
 
+#define min_heap_push(_heap, _element, _func)	\
+	__min_heap_push(&(_heap)->heap, _element, __minheap_obj_size(_heap), _func)
+
 #endif /* _LINUX_MIN_HEAP_H */
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 10ac2db83f14..065dfaa8b009 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3698,19 +3698,20 @@ static void swap_ptr(void *l, void *r)
 	swap(*lp, *rp);
 }
 
+MIN_HEAP(struct perf_event *, perf_event_min_heap);
+
 static const struct min_heap_callbacks perf_min_heap = {
-	.elem_size = sizeof(struct perf_event *),
 	.less = perf_less_group_idx,
 	.swp = swap_ptr,
 };
 
-static void __heap_add(struct min_heap *heap, struct perf_event *event)
+static void __heap_add(struct perf_event_min_heap *heap, struct perf_event *event)
 {
-	struct perf_event **itrs = heap->data;
+	struct perf_event **itrs = heap->heap.data;
 
 	if (event) {
-		itrs[heap->nr] = event;
-		heap->nr++;
+		itrs[heap->heap.nr] = event;
+		heap->heap.nr++;
 	}
 }
 
@@ -3738,7 +3739,7 @@ static noinline int visit_groups_merge(struct perf_event_context *ctx,
 	struct perf_cpu_context *cpuctx = NULL;
 	/* Space for per CPU and/or any CPU event iterators. */
 	struct perf_event *itrs[2];
-	struct min_heap event_heap;
+	struct perf_event_min_heap event_heap;
 	struct perf_event **evt;
 	int ret;
 
@@ -3747,11 +3748,9 @@ static noinline int visit_groups_merge(struct perf_event_context *ctx,
 
 	if (!ctx->task) {
 		cpuctx = this_cpu_ptr(&perf_cpu_context);
-		event_heap = (struct min_heap){
-			.data = cpuctx->heap,
-			.nr = 0,
-			.size = cpuctx->heap_size,
-		};
+		event_heap.heap.data = cpuctx->heap;
+		event_heap.heap.nr = 0;
+		event_heap.heap.size = cpuctx->heap_size;
 
 		lockdep_assert_held(&cpuctx->ctx.lock);
 
@@ -3760,15 +3759,13 @@ static noinline int visit_groups_merge(struct perf_event_context *ctx,
 			css = &cpuctx->cgrp->css;
 #endif
 	} else {
-		event_heap = (struct min_heap){
-			.data = itrs,
-			.nr = 0,
-			.size = ARRAY_SIZE(itrs),
-		};
+		event_heap.heap.data = itrs;
+		event_heap.heap.nr = 0;
+		event_heap.heap.size = ARRAY_SIZE(itrs);
 		/* Events not within a CPU context may be on any CPU. */
 		__heap_add(&event_heap, perf_event_groups_first(groups, -1, pmu, NULL));
 	}
-	evt = event_heap.data;
+	evt = event_heap.heap.data;
 
 	__heap_add(&event_heap, perf_event_groups_first(groups, cpu, pmu, NULL));
 
@@ -3777,14 +3774,14 @@ static noinline int visit_groups_merge(struct perf_event_context *ctx,
 		__heap_add(&event_heap, perf_event_groups_first(groups, cpu, pmu, css->cgroup));
 #endif
 
-	if (event_heap.nr) {
+	if (event_heap.heap.nr) {
 		__link_epc((*evt)->pmu_ctx);
 		perf_assert_pmu_disabled((*evt)->pmu_ctx->pmu);
 	}
 
 	min_heapify_all(&event_heap, &perf_min_heap);
 
-	while (event_heap.nr) {
+	while (event_heap.heap.nr) {
 		ret = func(*evt, data);
 		if (ret)
 			return ret;
diff --git a/lib/test_min_heap.c b/lib/test_min_heap.c
index 7b01b4387cfb..af2e446034d8 100644
--- a/lib/test_min_heap.c
+++ b/lib/test_min_heap.c
@@ -11,6 +11,8 @@
 #include <linux/printk.h>
 #include <linux/random.h>
 
+MIN_HEAP(int, min_heap_test);
+
 static __init bool less_than(const void *lhs, const void *rhs)
 {
 	return *(int *)lhs < *(int *)rhs;
@@ -30,16 +32,16 @@ static __init void swap_ints(void *lhs, void *rhs)
 }
 
 static __init int pop_verify_heap(bool min_heap,
-				struct min_heap *heap,
+				struct min_heap_test *heap,
 				const struct min_heap_callbacks *funcs)
 {
-	int *values = heap->data;
+	int *values = heap->heap.data;
 	int err = 0;
 	int last;
 
 	last = values[0];
 	min_heap_pop(heap, funcs);
-	while (heap->nr > 0) {
+	while (heap->heap.nr > 0) {
 		if (min_heap) {
 			if (last > values[0]) {
 				pr_err("error: expected %d <= %d\n", last,
@@ -63,13 +65,12 @@ static __init int test_heapify_all(bool min_heap)
 {
 	int values[] = { 3, 1, 2, 4, 0x8000000, 0x7FFFFFF, 0,
 			 -3, -1, -2, -4, 0x8000000, 0x7FFFFFF };
-	struct min_heap heap = {
-		.data = values,
-		.nr = ARRAY_SIZE(values),
-		.size =  ARRAY_SIZE(values),
-	};
+	struct min_heap_test heap;
+
+	heap.heap.data = values;
+	heap.heap.nr = ARRAY_SIZE(values);
+	heap.heap.size =  ARRAY_SIZE(values);
 	struct min_heap_callbacks funcs = {
-		.elem_size = sizeof(int),
 		.less = min_heap ? less_than : greater_than,
 		.swp = swap_ints,
 	};
@@ -81,8 +82,8 @@ static __init int test_heapify_all(bool min_heap)
 
 
 	/* Test with randomly generated values. */
-	heap.nr = ARRAY_SIZE(values);
-	for (i = 0; i < heap.nr; i++)
+	heap.heap.nr = ARRAY_SIZE(values);
+	for (i = 0; i < heap.heap.nr; i++)
 		values[i] = get_random_u32();
 
 	min_heapify_all(&heap, &funcs);
@@ -96,13 +97,12 @@ static __init int test_heap_push(bool min_heap)
 	const int data[] = { 3, 1, 2, 4, 0x80000000, 0x7FFFFFFF, 0,
 			     -3, -1, -2, -4, 0x80000000, 0x7FFFFFFF };
 	int values[ARRAY_SIZE(data)];
-	struct min_heap heap = {
-		.data = values,
-		.nr = 0,
-		.size =  ARRAY_SIZE(values),
-	};
+	struct min_heap_test heap;
+
+	heap.heap.data = values;
+	heap.heap.nr = 0;
+	heap.heap.size =  ARRAY_SIZE(values);
 	struct min_heap_callbacks funcs = {
-		.elem_size = sizeof(int),
 		.less = min_heap ? less_than : greater_than,
 		.swp = swap_ints,
 	};
@@ -115,7 +115,7 @@ static __init int test_heap_push(bool min_heap)
 	err = pop_verify_heap(min_heap, &heap, &funcs);
 
 	/* Test with randomly generated values. */
-	while (heap.nr < heap.size) {
+	while (heap.heap.nr < heap.heap.size) {
 		temp = get_random_u32();
 		min_heap_push(&heap, &temp, &funcs);
 	}
@@ -129,13 +129,12 @@ static __init int test_heap_pop_push(bool min_heap)
 	const int data[] = { 3, 1, 2, 4, 0x80000000, 0x7FFFFFFF, 0,
 			     -3, -1, -2, -4, 0x80000000, 0x7FFFFFFF };
 	int values[ARRAY_SIZE(data)];
-	struct min_heap heap = {
-		.data = values,
-		.nr = 0,
-		.size =  ARRAY_SIZE(values),
-	};
+	struct min_heap_test heap;
+
+	heap.heap.data = values;
+	heap.heap.nr = 0;
+	heap.heap.size =  ARRAY_SIZE(values);
 	struct min_heap_callbacks funcs = {
-		.elem_size = sizeof(int),
 		.less = min_heap ? less_than : greater_than,
 		.swp = swap_ints,
 	};
@@ -152,7 +151,7 @@ static __init int test_heap_pop_push(bool min_heap)
 
 	err = pop_verify_heap(min_heap, &heap, &funcs);
 
-	heap.nr = 0;
+	heap.heap.nr = 0;
 	for (i = 0; i < ARRAY_SIZE(data); i++)
 		min_heap_push(&heap, &temp, &funcs);
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH 05/13] lib min_heap: Add min_heap_init()
  2024-03-19 17:59 [PATCH 00/13] treewide: Refactor heap related implementation Kuan-Wei Chiu
                   ` (3 preceding siblings ...)
  2024-03-19 17:59 ` [PATCH 04/13] lib min_heap: Add type safe interface Kuan-Wei Chiu
@ 2024-03-19 17:59 ` Kuan-Wei Chiu
  2024-03-19 19:51   ` Ian Rogers
  2024-03-19 17:59 ` [PATCH 06/13] lib min_heap: Add min_heap_peek() Kuan-Wei Chiu
                   ` (8 subsequent siblings)
  13 siblings, 1 reply; 33+ messages in thread
From: Kuan-Wei Chiu @ 2024-03-19 17:59 UTC (permalink / raw)
  To: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung, akpm
  Cc: bfoster, mark.rutland, alexander.shishkin, jolsa, irogers,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel, Kuan-Wei Chiu

Add min_heap_init() for initializing a heap with a backing array and its
capacity; the element count (nr) starts at zero.

Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
---
 include/linux/min_heap.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/include/linux/min_heap.h b/include/linux/min_heap.h
index c3635a7fdb88..ed462f194b88 100644
--- a/include/linux/min_heap.h
+++ b/include/linux/min_heap.h
@@ -44,6 +44,18 @@ struct min_heap_callbacks {
 	void (*swp)(void *lhs, void *rhs);
 };
 
+/* Initialize a min-heap. */
+static __always_inline
+void __min_heap_init(struct __min_heap *heap, void *data, int size)
+{
+	heap->data = data;
+	heap->nr = 0;
+	heap->size = size;
+}
+
+#define min_heap_init(_heap, _data, _size)	\
+	__min_heap_init(&(_heap)->heap, _data, _size)
+
 /* Sift the element at pos down the heap. */
 static __always_inline
 void __min_heapify(struct __min_heap *heap, int pos, size_t elem_size,
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH 06/13] lib min_heap: Add min_heap_peek()
  2024-03-19 17:59 [PATCH 00/13] treewide: Refactor heap related implementation Kuan-Wei Chiu
                   ` (4 preceding siblings ...)
  2024-03-19 17:59 ` [PATCH 05/13] lib min_heap: Add min_heap_init() Kuan-Wei Chiu
@ 2024-03-19 17:59 ` Kuan-Wei Chiu
  2024-03-19 19:55   ` Ian Rogers
  2024-03-19 17:59 ` [PATCH 07/13] lib min_heap: Add min_heap_full() Kuan-Wei Chiu
                   ` (7 subsequent siblings)
  13 siblings, 1 reply; 33+ messages in thread
From: Kuan-Wei Chiu @ 2024-03-19 17:59 UTC (permalink / raw)
  To: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung, akpm
  Cc: bfoster, mark.rutland, alexander.shishkin, jolsa, irogers,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel, Kuan-Wei Chiu

Add min_heap_peek() to retrieve a pointer to the smallest element, or
NULL if the heap is empty. The returned pointer is cast to the heap's
element type.

Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
---
 include/linux/min_heap.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/include/linux/min_heap.h b/include/linux/min_heap.h
index ed462f194b88..7c1fd1ddc71a 100644
--- a/include/linux/min_heap.h
+++ b/include/linux/min_heap.h
@@ -56,6 +56,16 @@ void __min_heap_init(struct __min_heap *heap, void *data, int size)
 #define min_heap_init(_heap, _data, _size)	\
 	__min_heap_init(&(_heap)->heap, _data, _size)
 
+/* Get the minimum element from the heap. */
+static __always_inline
+void *__min_heap_peek(struct __min_heap *heap)
+{
+	return heap->nr ? heap->data : NULL;
+}
+
+#define min_heap_peek(_heap)	\
+	(__minheap_cast(_heap) __min_heap_peek(&(_heap)->heap))
+
 /* Sift the element at pos down the heap. */
 static __always_inline
 void __min_heapify(struct __min_heap *heap, int pos, size_t elem_size,
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH 07/13] lib min_heap: Add min_heap_full()
  2024-03-19 17:59 [PATCH 00/13] treewide: Refactor heap related implementation Kuan-Wei Chiu
                   ` (5 preceding siblings ...)
  2024-03-19 17:59 ` [PATCH 06/13] lib min_heap: Add min_heap_peek() Kuan-Wei Chiu
@ 2024-03-19 17:59 ` Kuan-Wei Chiu
  2024-03-19 19:55   ` Ian Rogers
  2024-03-19 18:00 ` [PATCH 08/13] lib min_heap: Add args for min_heap_callbacks Kuan-Wei Chiu
                   ` (6 subsequent siblings)
  13 siblings, 1 reply; 33+ messages in thread
From: Kuan-Wei Chiu @ 2024-03-19 17:59 UTC (permalink / raw)
  To: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung, akpm
  Cc: bfoster, mark.rutland, alexander.shishkin, jolsa, irogers,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel, Kuan-Wei Chiu

Add min_heap_full(), which returns true when the heap has reached its
maximum capacity.

Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
---
 include/linux/min_heap.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/include/linux/min_heap.h b/include/linux/min_heap.h
index 7c1fd1ddc71a..b1d874f4d536 100644
--- a/include/linux/min_heap.h
+++ b/include/linux/min_heap.h
@@ -66,6 +66,16 @@ void *__min_heap_peek(struct __min_heap *heap)
 #define min_heap_peek(_heap)	\
 	(__minheap_cast(_heap) __min_heap_peek(&(_heap)->heap))
 
+/* Check if the heap is full. */
+static __always_inline
+bool __min_heap_full(struct __min_heap *heap)
+{
+	return heap->nr == heap->size;
+}
+
+#define min_heap_full(_heap)	\
+	__min_heap_full(&(_heap)->heap)
+
 /* Sift the element at pos down the heap. */
 static __always_inline
 void __min_heapify(struct __min_heap *heap, int pos, size_t elem_size,
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH 08/13] lib min_heap: Add args for min_heap_callbacks
  2024-03-19 17:59 [PATCH 00/13] treewide: Refactor heap related implementation Kuan-Wei Chiu
                   ` (6 preceding siblings ...)
  2024-03-19 17:59 ` [PATCH 07/13] lib min_heap: Add min_heap_full() Kuan-Wei Chiu
@ 2024-03-19 18:00 ` Kuan-Wei Chiu
  2024-03-19 20:05   ` Ian Rogers
  2024-03-19 18:00 ` [PATCH 09/13] lib min_heap: Update min_heap_push() and min_heap_pop() to return bool values Kuan-Wei Chiu
                   ` (5 subsequent siblings)
  13 siblings, 1 reply; 33+ messages in thread
From: Kuan-Wei Chiu @ 2024-03-19 18:00 UTC (permalink / raw)
  To: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung, akpm
  Cc: bfoster, mark.rutland, alexander.shishkin, jolsa, irogers,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel, Kuan-Wei Chiu

Add a third parameter 'args' to the 'less' and 'swp' functions in
'struct min_heap_callbacks'. This additional parameter allows the
comparison and swap callbacks to receive extra context when necessary.
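
The new callback shape can be sketched in plain userspace C. The
demo_* names below are hypothetical stand-ins; in the bcache patch
later in this series, the value carried through 'args' is a
struct cache pointer:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical context object threaded through the new third
 * parameter instead of being captured via a global variable. */
struct demo_ctx {
	int base;	/* priorities are compared relative to this offset */
};

/* A 'less' callback in the new three-argument shape of
 * struct min_heap_callbacks. */
static bool demo_less(const void *lhs, const void *rhs, void *args)
{
	const struct demo_ctx *ctx = args;

	return *(const int *)lhs - ctx->base < *(const int *)rhs - ctx->base;
}

static bool demo_less_check(void)
{
	struct demo_ctx ctx = { .base = 10 };
	int a = 12, b = 15;

	/* 2 < 5 relative to base, so a orders before b and not vice versa. */
	return demo_less(&a, &b, &ctx) && !demo_less(&b, &a, &ctx);
}
```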

Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
---
 drivers/md/dm-vdo/repair.c     | 10 +++----
 drivers/md/dm-vdo/slab-depot.c |  8 +++---
 include/linux/min_heap.h       | 51 +++++++++++++++++-----------------
 kernel/events/core.c           | 10 +++----
 lib/test_min_heap.c            | 26 ++++++++---------
 5 files changed, 53 insertions(+), 52 deletions(-)

diff --git a/drivers/md/dm-vdo/repair.c b/drivers/md/dm-vdo/repair.c
index 7663fa2098f4..528fa100b410 100644
--- a/drivers/md/dm-vdo/repair.c
+++ b/drivers/md/dm-vdo/repair.c
@@ -137,7 +137,7 @@ struct repair_completion {
  * to sort by slot while still ensuring we replay all entries with the same slot in the exact order
  * as they appeared in the journal.
  */
-static bool mapping_is_less_than(const void *item1, const void *item2)
+static bool mapping_is_less_than(const void *item1, const void *item2, void *args)
 {
 	const struct numbered_block_mapping *mapping1 =
 		(const struct numbered_block_mapping *) item1;
@@ -156,7 +156,7 @@ static bool mapping_is_less_than(const void *item1, const void *item2)
 	return 0;
 }
 
-static void swap_mappings(void *item1, void *item2)
+static void swap_mappings(void *item1, void *item2, void *args)
 {
 	struct numbered_block_mapping *mapping1 = item1;
 	struct numbered_block_mapping *mapping2 = item2;
@@ -182,8 +182,8 @@ static struct numbered_block_mapping *sort_next_heap_element(struct repair_compl
 	 * restore the heap invariant, and return a pointer to the popped element.
 	 */
 	last = &repair->entries[--heap->heap.nr];
-	swap_mappings(heap->heap.data, last);
-	min_heapify(heap, 0, &repair_min_heap);
+	swap_mappings(heap->heap.data, last, NULL);
+	min_heapify(heap, 0, &repair_min_heap, NULL);
 	return last;
 }
 
@@ -1121,7 +1121,7 @@ static void recover_block_map(struct vdo_completion *completion)
 	repair->replay_heap.heap.data = repair->entries;
 	repair->replay_heap.heap.nr = repair->block_map_entry_count;
 	repair->replay_heap.heap.size = repair->block_map_entry_count;
-	min_heapify_all(&repair->replay_heap, &repair_min_heap);
+	min_heapify_all(&repair->replay_heap, &repair_min_heap, NULL);
 
 	vdo_log_info("Replaying %zu recovery entries into block map",
 		     repair->block_map_entry_count);
diff --git a/drivers/md/dm-vdo/slab-depot.c b/drivers/md/dm-vdo/slab-depot.c
index 3309480170c3..b8c41d7ccde0 100644
--- a/drivers/md/dm-vdo/slab-depot.c
+++ b/drivers/md/dm-vdo/slab-depot.c
@@ -3288,7 +3288,7 @@ int vdo_release_block_reference(struct block_allocator *allocator,
  * Thus, the ordering is reversed from the usual sense since min_heap returns smaller elements
  * before larger ones.
  */
-static bool slab_status_is_less_than(const void *item1, const void *item2)
+static bool slab_status_is_less_than(const void *item1, const void *item2, void *args)
 {
 	const struct slab_status *info1 = item1;
 	const struct slab_status *info2 = item2;
@@ -3300,7 +3300,7 @@ static bool slab_status_is_less_than(const void *item1, const void *item2)
 	return info1->slab_number < info2->slab_number;
 }
 
-static void swap_slab_statuses(void *item1, void *item2)
+static void swap_slab_statuses(void *item1, void *item2, void *args)
 {
 	struct slab_status *info1 = item1;
 	struct slab_status *info2 = item2;
@@ -3523,7 +3523,7 @@ static int __must_check vdo_prepare_slabs_for_allocation(struct block_allocator
 	heap.heap.data = slab_statuses;
 	heap.heap.nr = allocator->slab_count;
 	heap.heap.size = allocator->slab_count;
-	min_heapify_all(&heap, &slab_status_min_heap);
+	min_heapify_all(&heap, &slab_status_min_heap, NULL);
 
 	while (heap.heap.nr > 0) {
 		bool high_priority;
@@ -3531,7 +3531,7 @@ static int __must_check vdo_prepare_slabs_for_allocation(struct block_allocator
 		struct slab_journal *journal;
 
 		current_slab_status = slab_statuses[0];
-		min_heap_pop(&heap, &slab_status_min_heap);
+		min_heap_pop(&heap, &slab_status_min_heap, NULL);
 		slab = depot->slabs[current_slab_status.slab_number];
 
 		if ((depot->load_type == VDO_SLAB_DEPOT_REBUILD_LOAD) ||
diff --git a/include/linux/min_heap.h b/include/linux/min_heap.h
index b1d874f4d536..97d8ba5c32e6 100644
--- a/include/linux/min_heap.h
+++ b/include/linux/min_heap.h
@@ -40,8 +40,8 @@ struct _name {				\
  * @swp: Swap elements function.
  */
 struct min_heap_callbacks {
-	bool (*less)(const void *lhs, const void *rhs);
-	void (*swp)(void *lhs, void *rhs);
+	bool (*less)(const void *lhs, const void *rhs, void *args);
+	void (*swp)(void *lhs, void *rhs, void *args);
 };
 
 /* Initialize a min-heap. */
@@ -79,7 +79,7 @@ bool __min_heap_full(struct __min_heap *heap)
 /* Sift the element at pos down the heap. */
 static __always_inline
 void __min_heapify(struct __min_heap *heap, int pos, size_t elem_size,
-		const struct min_heap_callbacks *func)
+		const struct min_heap_callbacks *func, void *args)
 {
 	void *left, *right;
 	void *data = heap->data;
@@ -92,7 +92,7 @@ void __min_heapify(struct __min_heap *heap, int pos, size_t elem_size,
 			break;
 		left = data + (i * 2 + 1) * elem_size;
 		right = data + (i * 2 + 2) * elem_size;
-		i = func->less(left, right) ? i * 2 + 1 : i * 2 + 2;
+		i = func->less(left, right, args) ? i * 2 + 1 : i * 2 + 2;
 	}
 
 	/* Special case for the last leaf with no sibling. */
@@ -100,38 +100,38 @@ void __min_heapify(struct __min_heap *heap, int pos, size_t elem_size,
 		i = i * 2 + 1;
 
 	/* Backtrack to the correct location. */
-	while (i != pos && func->less(root, data + i * elem_size))
+	while (i != pos && func->less(root, data + i * elem_size, args))
 		i = (i - 1) / 2;
 
 	/* Shift the element into its correct place. */
 	j = i;
 	while (i != pos) {
 		i = (i - 1) / 2;
-		func->swp(data + i * elem_size, data + j * elem_size);
+		func->swp(data + i * elem_size, data + j * elem_size, args);
 	}
 }
 
-#define min_heapify(_heap, _pos, _func)	\
-	__min_heapify(&(_heap)->heap, _pos, __minheap_obj_size(_heap), _func)
+#define min_heapify(_heap, _pos, _func, _args)	\
+	__min_heapify(&(_heap)->heap, _pos, __minheap_obj_size(_heap), _func, _args)
 
 /* Floyd's approach to heapification that is O(nr). */
 static __always_inline
 void __min_heapify_all(struct __min_heap *heap, size_t elem_size,
-		const struct min_heap_callbacks *func)
+		const struct min_heap_callbacks *func, void *args)
 {
 	int i;
 
 	for (i = heap->nr / 2 - 1; i >= 0; i--)
-		__min_heapify(heap, i, elem_size, func);
+		__min_heapify(heap, i, elem_size, func, args);
 }
 
-#define min_heapify_all(_heap, _func)	\
-	__min_heapify_all(&(_heap)->heap, __minheap_obj_size(_heap), _func)
+#define min_heapify_all(_heap, _func, _args)	\
+	__min_heapify_all(&(_heap)->heap, __minheap_obj_size(_heap), _func, _args)
 
 /* Remove minimum element from the heap, O(log2(nr)). */
 static __always_inline
 void __min_heap_pop(struct __min_heap *heap, size_t elem_size,
-		const struct min_heap_callbacks *func)
+		const struct min_heap_callbacks *func, void *args)
 {
 	void *data = heap->data;
 
@@ -141,11 +141,11 @@ void __min_heap_pop(struct __min_heap *heap, size_t elem_size,
 	/* Place last element at the root (position 0) and then sift down. */
 	heap->nr--;
 	memcpy(data, data + (heap->nr * elem_size), elem_size);
-	__min_heapify(heap, 0, elem_size, func);
+	__min_heapify(heap, 0, elem_size, func, args);
 }
 
-#define min_heap_pop(_heap, _func)	\
-	__min_heap_pop(&(_heap)->heap, __minheap_obj_size(_heap), _func)
+#define min_heap_pop(_heap, _func, _args)	\
+	__min_heap_pop(&(_heap)->heap, __minheap_obj_size(_heap), _func, _args)
 
 /*
  * Remove the minimum element and then push the given element. The
@@ -155,19 +155,20 @@ void __min_heap_pop(struct __min_heap *heap, size_t elem_size,
 static __always_inline
 void __min_heap_pop_push(struct __min_heap *heap,
 		const void *element, size_t elem_size,
-		const struct min_heap_callbacks *func)
+		const struct min_heap_callbacks *func,
+		void *args)
 {
 	memcpy(heap->data, element, elem_size);
-	__min_heapify(heap, 0, elem_size, func);
+	__min_heapify(heap, 0, elem_size, func, args);
 }
 
-#define min_heap_pop_push(_heap, _element, _func)	\
-	__min_heap_pop_push(&(_heap)->heap, _element, __minheap_obj_size(_heap), _func)
+#define min_heap_pop_push(_heap, _element, _func, _args)	\
+	__min_heap_pop_push(&(_heap)->heap, _element, __minheap_obj_size(_heap), _func, _args)
 
 /* Push an element on to the heap, O(log2(nr)). */
 static __always_inline
 void __min_heap_push(struct __min_heap *heap, const void *element, size_t elem_size,
-		const struct min_heap_callbacks *func)
+		const struct min_heap_callbacks *func, void *args)
 {
 	void *data = heap->data;
 	void *child, *parent;
@@ -185,13 +186,13 @@ void __min_heap_push(struct __min_heap *heap, const void *element, size_t elem_s
 	for (; pos > 0; pos = (pos - 1) / 2) {
 		child = data + (pos * elem_size);
 		parent = data + ((pos - 1) / 2) * elem_size;
-		if (func->less(parent, child))
+		if (func->less(parent, child, args))
 			break;
-		func->swp(parent, child);
+		func->swp(parent, child, args);
 	}
 }
 
-#define min_heap_push(_heap, _element, _func)	\
-	__min_heap_push(&(_heap)->heap, _element, __minheap_obj_size(_heap), _func)
+#define min_heap_push(_heap, _element, _func, _args)	\
+	__min_heap_push(&(_heap)->heap, _element, __minheap_obj_size(_heap), _func, _args)
 
 #endif /* _LINUX_MIN_HEAP_H */
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 065dfaa8b009..f2a9044058ee 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3683,7 +3683,7 @@ void __perf_event_task_sched_out(struct task_struct *task,
 	perf_cgroup_switch(next);
 }
 
-static bool perf_less_group_idx(const void *l, const void *r)
+static bool perf_less_group_idx(const void *l, const void *r, void *args)
 {
 	const struct perf_event *le = *(const struct perf_event **)l;
 	const struct perf_event *re = *(const struct perf_event **)r;
@@ -3691,7 +3691,7 @@ static bool perf_less_group_idx(const void *l, const void *r)
 	return le->group_index < re->group_index;
 }
 
-static void swap_ptr(void *l, void *r)
+static void swap_ptr(void *l, void *r, void *args)
 {
 	void **lp = l, **rp = r;
 
@@ -3779,7 +3779,7 @@ static noinline int visit_groups_merge(struct perf_event_context *ctx,
 		perf_assert_pmu_disabled((*evt)->pmu_ctx->pmu);
 	}
 
-	min_heapify_all(&event_heap, &perf_min_heap);
+	min_heapify_all(&event_heap, &perf_min_heap, NULL);
 
 	while (event_heap.heap.nr) {
 		ret = func(*evt, data);
@@ -3788,9 +3788,9 @@ static noinline int visit_groups_merge(struct perf_event_context *ctx,
 
 		*evt = perf_event_groups_next(*evt, pmu);
 		if (*evt)
-			min_heapify(&event_heap, 0, &perf_min_heap);
+			min_heapify(&event_heap, 0, &perf_min_heap, NULL);
 		else
-			min_heap_pop(&event_heap, &perf_min_heap);
+			min_heap_pop(&event_heap, &perf_min_heap, NULL);
 	}
 
 	return 0;
diff --git a/lib/test_min_heap.c b/lib/test_min_heap.c
index af2e446034d8..b8859d17a19c 100644
--- a/lib/test_min_heap.c
+++ b/lib/test_min_heap.c
@@ -13,17 +13,17 @@
 
 MIN_HEAP(int, min_heap_test);
 
-static __init bool less_than(const void *lhs, const void *rhs)
+static __init bool less_than(const void *lhs, const void *rhs, void *args)
 {
 	return *(int *)lhs < *(int *)rhs;
 }
 
-static __init bool greater_than(const void *lhs, const void *rhs)
+static __init bool greater_than(const void *lhs, const void *rhs, void *args)
 {
 	return *(int *)lhs > *(int *)rhs;
 }
 
-static __init void swap_ints(void *lhs, void *rhs)
+static __init void swap_ints(void *lhs, void *rhs, void *args)
 {
 	int temp = *(int *)lhs;
 
@@ -40,7 +40,7 @@ static __init int pop_verify_heap(bool min_heap,
 	int last;
 
 	last = values[0];
-	min_heap_pop(heap, funcs);
+	min_heap_pop(heap, funcs, NULL);
 	while (heap->heap.nr > 0) {
 		if (min_heap) {
 			if (last > values[0]) {
@@ -56,7 +56,7 @@ static __init int pop_verify_heap(bool min_heap,
 			}
 		}
 		last = values[0];
-		min_heap_pop(heap, funcs);
+		min_heap_pop(heap, funcs, NULL);
 	}
 	return err;
 }
@@ -77,7 +77,7 @@ static __init int test_heapify_all(bool min_heap)
 	int i, err;
 
 	/* Test with known set of values. */
-	min_heapify_all(&heap, &funcs);
+	min_heapify_all(&heap, &funcs, NULL);
 	err = pop_verify_heap(min_heap, &heap, &funcs);
 
 
@@ -86,7 +86,7 @@ static __init int test_heapify_all(bool min_heap)
 	for (i = 0; i < heap.heap.nr; i++)
 		values[i] = get_random_u32();
 
-	min_heapify_all(&heap, &funcs);
+	min_heapify_all(&heap, &funcs, NULL);
 	err += pop_verify_heap(min_heap, &heap, &funcs);
 
 	return err;
@@ -110,14 +110,14 @@ static __init int test_heap_push(bool min_heap)
 
 	/* Test with known set of values copied from data. */
 	for (i = 0; i < ARRAY_SIZE(data); i++)
-		min_heap_push(&heap, &data[i], &funcs);
+		min_heap_push(&heap, &data[i], &funcs, NULL);
 
 	err = pop_verify_heap(min_heap, &heap, &funcs);
 
 	/* Test with randomly generated values. */
 	while (heap.heap.nr < heap.heap.size) {
 		temp = get_random_u32();
-		min_heap_push(&heap, &temp, &funcs);
+		min_heap_push(&heap, &temp, &funcs, NULL);
 	}
 	err += pop_verify_heap(min_heap, &heap, &funcs);
 
@@ -143,22 +143,22 @@ static __init int test_heap_pop_push(bool min_heap)
 	/* Fill values with data to pop and replace. */
 	temp = min_heap ? 0x80000000 : 0x7FFFFFFF;
 	for (i = 0; i < ARRAY_SIZE(data); i++)
-		min_heap_push(&heap, &temp, &funcs);
+		min_heap_push(&heap, &temp, &funcs, NULL);
 
 	/* Test with known set of values copied from data. */
 	for (i = 0; i < ARRAY_SIZE(data); i++)
-		min_heap_pop_push(&heap, &data[i], &funcs);
+		min_heap_pop_push(&heap, &data[i], &funcs, NULL);
 
 	err = pop_verify_heap(min_heap, &heap, &funcs);
 
 	heap.heap.nr = 0;
 	for (i = 0; i < ARRAY_SIZE(data); i++)
-		min_heap_push(&heap, &temp, &funcs);
+		min_heap_push(&heap, &temp, &funcs, NULL);
 
 	/* Test with randomly generated values. */
 	for (i = 0; i < ARRAY_SIZE(data); i++) {
 		temp = get_random_u32();
-		min_heap_pop_push(&heap, &temp, &funcs);
+		min_heap_pop_push(&heap, &temp, &funcs, NULL);
 	}
 	err += pop_verify_heap(min_heap, &heap, &funcs);
 
-- 
2.34.1



* [PATCH 09/13] lib min_heap: Update min_heap_push() and min_heap_pop() to return bool values
  2024-03-19 17:59 [PATCH 00/13] treewide: Refactor heap related implementation Kuan-Wei Chiu
                   ` (7 preceding siblings ...)
  2024-03-19 18:00 ` [PATCH 08/13] lib min_heap: Add args for min_heap_callbacks Kuan-Wei Chiu
@ 2024-03-19 18:00 ` Kuan-Wei Chiu
  2024-03-19 19:57   ` Ian Rogers
  2024-03-19 18:00 ` [PATCH 10/13] bcache: Remove heap-related macros and switch to generic min_heap Kuan-Wei Chiu
                   ` (4 subsequent siblings)
  13 siblings, 1 reply; 33+ messages in thread
From: Kuan-Wei Chiu @ 2024-03-19 18:00 UTC (permalink / raw)
  To: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung, akpm
  Cc: bfoster, mark.rutland, alexander.shishkin, jolsa, irogers,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel, Kuan-Wei Chiu

Modify min_heap_push() and min_heap_pop() to return a boolean value:
they now return false when the operation fails and true when it
succeeds.
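
A standalone sketch of the new contract (heap ordering is omitted;
only the success/failure signalling added by this patch is modelled,
and the demo_* names are hypothetical):

```c
#include <assert.h>
#include <stdbool.h>

struct demo_heap {
	int data[2];
	int nr, size;
};

/* Like the patched __min_heap_push(): refuse a full heap. */
static bool demo_push(struct demo_heap *heap, int value)
{
	if (heap->nr >= heap->size)
		return false;	/* matches the WARN_ONCE() path returning false */
	heap->data[heap->nr++] = value;
	return true;
}

/* Like the patched __min_heap_pop(): refuse an empty heap. */
static bool demo_pop(struct demo_heap *heap)
{
	if (heap->nr <= 0)
		return false;
	heap->nr--;
	return true;
}

static bool demo_contract(void)
{
	struct demo_heap h = { .nr = 0, .size = 2 };

	return demo_push(&h, 1) && demo_push(&h, 2) &&
	       !demo_push(&h, 3) &&	/* full: push reports failure */
	       demo_pop(&h) && demo_pop(&h) &&
	       !demo_pop(&h);		/* empty: pop reports failure */
}
```

Callers such as bch_btree_iter_push() can then BUG_ON() or otherwise
handle a false return instead of silently warning.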

Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
---
 include/linux/min_heap.h | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/include/linux/min_heap.h b/include/linux/min_heap.h
index 97d8ba5c32e6..154ac2102114 100644
--- a/include/linux/min_heap.h
+++ b/include/linux/min_heap.h
@@ -130,18 +130,20 @@ void __min_heapify_all(struct __min_heap *heap, size_t elem_size,
 
 /* Remove minimum element from the heap, O(log2(nr)). */
 static __always_inline
-void __min_heap_pop(struct __min_heap *heap, size_t elem_size,
+bool __min_heap_pop(struct __min_heap *heap, size_t elem_size,
 		const struct min_heap_callbacks *func, void *args)
 {
 	void *data = heap->data;
 
 	if (WARN_ONCE(heap->nr <= 0, "Popping an empty heap"))
-		return;
+		return false;
 
 	/* Place last element at the root (position 0) and then sift down. */
 	heap->nr--;
 	memcpy(data, data + (heap->nr * elem_size), elem_size);
 	__min_heapify(heap, 0, elem_size, func, args);
+
+	return true;
 }
 
 #define min_heap_pop(_heap, _func, _args)	\
@@ -167,7 +169,7 @@ void __min_heap_pop_push(struct __min_heap *heap,
 
 /* Push an element on to the heap, O(log2(nr)). */
 static __always_inline
-void __min_heap_push(struct __min_heap *heap, const void *element, size_t elem_size,
+bool __min_heap_push(struct __min_heap *heap, const void *element, size_t elem_size,
 		const struct min_heap_callbacks *func, void *args)
 {
 	void *data = heap->data;
@@ -175,7 +177,7 @@ void __min_heap_push(struct __min_heap *heap, const void *element, size_t elem_s
 	int pos;
 
 	if (WARN_ONCE(heap->nr >= heap->size, "Pushing on a full heap"))
-		return;
+		return false;
 
 	/* Place at the end of data. */
 	pos = heap->nr;
@@ -190,6 +192,8 @@ void __min_heap_push(struct __min_heap *heap, const void *element, size_t elem_s
 			break;
 		func->swp(parent, child, args);
 	}
+
+	return true;
 }
 
 #define min_heap_push(_heap, _element, _func, _args)	\
-- 
2.34.1



* [PATCH 10/13] bcache: Remove heap-related macros and switch to generic min_heap
  2024-03-19 17:59 [PATCH 00/13] treewide: Refactor heap related implementation Kuan-Wei Chiu
                   ` (8 preceding siblings ...)
  2024-03-19 18:00 ` [PATCH 09/13] lib min_heap: Update min_heap_push() and min_heap_pop() to return bool values Kuan-Wei Chiu
@ 2024-03-19 18:00 ` Kuan-Wei Chiu
  2024-03-19 18:00 ` [PATCH 11/13] lib min_heap: Add min_heap_del() Kuan-Wei Chiu
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 33+ messages in thread
From: Kuan-Wei Chiu @ 2024-03-19 18:00 UTC (permalink / raw)
  To: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung, akpm
  Cc: bfoster, mark.rutland, alexander.shishkin, jolsa, irogers,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel, Kuan-Wei Chiu

Drop the heap-related macros from bcache and replace them with the
generic min_heap implementation from include/linux. This change
improves code readability by using functions instead of macros.
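
One readability and safety gain can be sketched outside the kernel: a
macro such as the old bucket_prio() accepts arguments of any type,
while the replacement inline function is type-checked by the compiler.
The demo_* names and fields below are hypothetical stand-ins, not the
bcache structures:

```c
#include <assert.h>

struct demo_bucket {
	unsigned int prio;
	unsigned int sectors_used;
};

/* Mirrors the shape of the new inline bucket_prio(): offset the
 * priority, then weight it by the bucket's used sectors.  Passing
 * anything other than a struct demo_bucket pointer is now a compile
 * error, unlike with the macro version. */
static inline unsigned int demo_bucket_prio(unsigned int min_prio,
					    const struct demo_bucket *b)
{
	return (b->prio - min_prio) * b->sectors_used;
}

static int demo_prio_check(void)
{
	struct demo_bucket b = { .prio = 8, .sectors_used = 2 };

	/* (8 - 3) * 2 */
	return demo_bucket_prio(3, &b) == 10;
}
```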

Link: https://lkml.kernel.org/ioyfizrzq7w7mjrqcadtzsfgpuntowtjdw5pgn4qhvsdp4mqqg@nrlek5vmisbu
Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
---
 drivers/md/bcache/alloc.c    | 66 +++++++++++++++++++++--------
 drivers/md/bcache/bcache.h   |  2 +-
 drivers/md/bcache/bset.c     | 73 ++++++++++++++++++++++----------
 drivers/md/bcache/bset.h     | 38 ++++++++++-------
 drivers/md/bcache/btree.c    | 27 +++++++++++-
 drivers/md/bcache/extents.c  | 44 ++++++++++++--------
 drivers/md/bcache/movinggc.c | 40 +++++++++++++-----
 drivers/md/bcache/super.c    | 16 +++++++
 drivers/md/bcache/sysfs.c    |  3 ++
 drivers/md/bcache/util.h     | 81 +-----------------------------------
 10 files changed, 224 insertions(+), 166 deletions(-)

diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c
index ce13c272c387..e0459826788e 100644
--- a/drivers/md/bcache/alloc.c
+++ b/drivers/md/bcache/alloc.c
@@ -166,40 +166,70 @@ static void bch_invalidate_one_bucket(struct cache *ca, struct bucket *b)
  * prio is worth 1/8th of what INITIAL_PRIO is worth.
  */
 
-#define bucket_prio(b)							\
-({									\
-	unsigned int min_prio = (INITIAL_PRIO - ca->set->min_prio) / 8;	\
-									\
-	(b->prio - ca->set->min_prio + min_prio) * GC_SECTORS_USED(b);	\
-})
+static inline unsigned int bucket_prio(struct cache *ca, struct bucket *b)
+{
+	unsigned int min_prio = (INITIAL_PRIO - ca->set->min_prio) / 8;
+
+	return (b->prio - ca->set->min_prio + min_prio) * GC_SECTORS_USED(b);
+
+}
+
+static inline bool bucket_max_cmp(const void *l, const void *r, void *args)
+{
+	struct bucket *lhs = (struct bucket *)l;
+	struct bucket *rhs = (struct bucket *)r;
+	struct cache *ca = args;
+
+	return bucket_prio(ca, lhs) > bucket_prio(ca, rhs);
+}
+
+static inline bool bucket_min_cmp(const void *l, const void *r, void *args)
+{
+	struct bucket *lhs = (struct bucket *)l;
+	struct bucket *rhs = (struct bucket *)r;
+	struct cache *ca = args;
+
+	return bucket_prio(ca, lhs) < bucket_prio(ca, rhs);
+}
+
+static inline void bucket_swap(void *l, void *r, void *args)
+{
+	struct bucket *lhs = l, *rhs = r;
 
-#define bucket_max_cmp(l, r)	(bucket_prio(l) < bucket_prio(r))
-#define bucket_min_cmp(l, r)	(bucket_prio(l) > bucket_prio(r))
+	swap(*lhs, *rhs);
+}
 
 static void invalidate_buckets_lru(struct cache *ca)
 {
 	struct bucket *b;
-	ssize_t i;
+	const struct min_heap_callbacks bucket_max_cmp_callback = {
+		.less = bucket_max_cmp,
+		.swp = bucket_swap,
+	};
+	const struct min_heap_callbacks bucket_min_cmp_callback = {
+		.less = bucket_min_cmp,
+		.swp = bucket_swap,
+	};
 
-	ca->heap.used = 0;
+	ca->heap.heap.nr = 0;
 
 	for_each_bucket(b, ca) {
 		if (!bch_can_invalidate_bucket(ca, b))
 			continue;
 
-		if (!heap_full(&ca->heap))
-			heap_add(&ca->heap, b, bucket_max_cmp);
-		else if (bucket_max_cmp(b, heap_peek(&ca->heap))) {
-			ca->heap.data[0] = b;
-			heap_sift(&ca->heap, 0, bucket_max_cmp);
+		if (!min_heap_full(&ca->heap))
+			min_heap_push(&ca->heap, b, &bucket_max_cmp_callback, ca);
+		else if (!bucket_max_cmp(b, min_heap_peek(&ca->heap), ca)) {
+			*min_heap_peek(&ca->heap) = b;
+			min_heapify(&ca->heap, 0, &bucket_max_cmp_callback, ca);
 		}
 	}
 
-	for (i = ca->heap.used / 2 - 1; i >= 0; --i)
-		heap_sift(&ca->heap, i, bucket_min_cmp);
+	min_heapify_all(&ca->heap, &bucket_min_cmp_callback, ca);
 
 	while (!fifo_full(&ca->free_inc)) {
-		if (!heap_pop(&ca->heap, b, bucket_min_cmp)) {
+		b = *min_heap_peek(&ca->heap);
+		if (!min_heap_pop(&ca->heap, &bucket_min_cmp_callback, ca)) {
 			/*
 			 * We don't want to be calling invalidate_buckets()
 			 * multiple times when it can't do anything
diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index 4e6afa89921f..97b0a1768ba7 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -457,7 +457,7 @@ struct cache {
 	/* Allocation stuff: */
 	struct bucket		*buckets;
 
-	DECLARE_HEAP(struct bucket *, heap);
+	MIN_HEAP(struct bucket *, heap) heap;
 
 	/*
 	 * If nonzero, we know we aren't going to find any buckets to invalidate
diff --git a/drivers/md/bcache/bset.c b/drivers/md/bcache/bset.c
index 2bba4d6aaaa2..04187f800c78 100644
--- a/drivers/md/bcache/bset.c
+++ b/drivers/md/bcache/bset.c
@@ -56,7 +56,9 @@ int __bch_count_data(struct btree_keys *b)
 	unsigned int ret = 0;
 	struct btree_iter iter;
 	struct bkey *k;
+	struct btree_iter_set data[MAX_BSETS];
 
+	iter.heap.heap.data = data;
 	if (b->ops->is_extents)
 		for_each_key(b, k, &iter)
 			ret += KEY_SIZE(k);
@@ -69,6 +71,9 @@ void __bch_check_keys(struct btree_keys *b, const char *fmt, ...)
 	struct bkey *k, *p = NULL;
 	struct btree_iter iter;
 	const char *err;
+	struct btree_iter_set data[MAX_BSETS];
+
+	iter.heap.heap.data = data;
 
 	for_each_key(b, k, &iter) {
 		if (b->ops->is_extents) {
@@ -110,9 +115,9 @@ void __bch_check_keys(struct btree_keys *b, const char *fmt, ...)
 
 static void bch_btree_iter_next_check(struct btree_iter *iter)
 {
-	struct bkey *k = iter->data->k, *next = bkey_next(k);
+	struct bkey *k = min_heap_peek(&iter->heap)->k, *next = bkey_next(k);
 
-	if (next < iter->data->end &&
+	if (next < min_heap_peek(&iter->heap)->end &&
 	    bkey_cmp(k, iter->b->ops->is_extents ?
 		     &START_KEY(next) : next) > 0) {
 		bch_dump_bucket(iter->b);
@@ -882,6 +887,9 @@ unsigned int bch_btree_insert_key(struct btree_keys *b, struct bkey *k,
 	struct btree_iter iter;
 	struct bkey preceding_key_on_stack = ZERO_KEY;
 	struct bkey *preceding_key_p = &preceding_key_on_stack;
+	struct btree_iter_set data[MAX_BSETS];
+
+	iter.heap.heap.data = data;
 
 	BUG_ON(b->ops->is_extents && !KEY_SIZE(k));
 
@@ -1077,27 +1085,34 @@ struct bkey *__bch_bset_search(struct btree_keys *b, struct bset_tree *t,
 
 /* Btree iterator */
 
-typedef bool (btree_iter_cmp_fn)(struct btree_iter_set,
-				 struct btree_iter_set);
+typedef bool (btree_iter_cmp_fn)(const void *, const void *, void *);
 
-static inline bool btree_iter_cmp(struct btree_iter_set l,
-				  struct btree_iter_set r)
+static inline bool btree_iter_cmp(const void *l, const void *r, void *args)
 {
-	return bkey_cmp(l.k, r.k) > 0;
+	const struct btree_iter_set *_l = l;
+	const struct btree_iter_set *_r = r;
+
+	return bkey_cmp(_l->k, _r->k) <= 0;
 }
 
 static inline bool btree_iter_end(struct btree_iter *iter)
 {
-	return !iter->used;
+	return !iter->heap.heap.nr;
 }
 
 void bch_btree_iter_push(struct btree_iter *iter, struct bkey *k,
 			 struct bkey *end)
 {
+	const struct min_heap_callbacks callbacks = {
+		.less = btree_iter_cmp,
+		.swp = btree_iter_swap,
+	};
+
 	if (k != end)
-		BUG_ON(!heap_add(iter,
-				 ((struct btree_iter_set) { k, end }),
-				 btree_iter_cmp));
+		BUG_ON(!min_heap_push(&iter->heap,
+				 &((struct btree_iter_set){ k, end }),
+				 &callbacks,
+				 NULL));
 }
 
 static struct bkey *__bch_btree_iter_init(struct btree_keys *b,
@@ -1107,8 +1122,8 @@ static struct bkey *__bch_btree_iter_init(struct btree_keys *b,
 {
 	struct bkey *ret = NULL;
 
-	iter->size = ARRAY_SIZE(iter->data);
-	iter->used = 0;
+	iter->heap.heap.size = MAX_BSETS;
+	iter->heap.heap.nr = 0;
 
 #ifdef CONFIG_BCACHE_DEBUG
 	iter->b = b;
@@ -1134,22 +1149,28 @@ static inline struct bkey *__bch_btree_iter_next(struct btree_iter *iter,
 {
 	struct btree_iter_set b __maybe_unused;
 	struct bkey *ret = NULL;
+	const struct min_heap_callbacks callbacks = {
+		.less = cmp,
+		.swp = btree_iter_swap,
+	};
 
 	if (!btree_iter_end(iter)) {
 		bch_btree_iter_next_check(iter);
 
-		ret = iter->data->k;
-		iter->data->k = bkey_next(iter->data->k);
+		ret = min_heap_peek(&iter->heap)->k;
+		min_heap_peek(&iter->heap)->k = bkey_next(min_heap_peek(&iter->heap)->k);
 
-		if (iter->data->k > iter->data->end) {
+		if (min_heap_peek(&iter->heap)->k > min_heap_peek(&iter->heap)->end) {
 			WARN_ONCE(1, "bset was corrupt!\n");
-			iter->data->k = iter->data->end;
+			min_heap_peek(&iter->heap)->k = min_heap_peek(&iter->heap)->end;
 		}
 
-		if (iter->data->k == iter->data->end)
-			heap_pop(iter, b, cmp);
+		if (min_heap_peek(&iter->heap)->k == min_heap_peek(&iter->heap)->end) {
+			b = *min_heap_peek(&iter->heap);
+			min_heap_pop(&iter->heap, &callbacks, NULL);
+		}
 		else
-			heap_sift(iter, 0, cmp);
+			min_heapify(&iter->heap, 0, &callbacks, NULL);
 	}
 
 	return ret;
@@ -1195,16 +1216,18 @@ static void btree_mergesort(struct btree_keys *b, struct bset *out,
 			    struct btree_iter *iter,
 			    bool fixup, bool remove_stale)
 {
-	int i;
 	struct bkey *k, *last = NULL;
 	BKEY_PADDED(k) tmp;
 	bool (*bad)(struct btree_keys *, const struct bkey *) = remove_stale
 		? bch_ptr_bad
 		: bch_ptr_invalid;
+	const struct min_heap_callbacks callbacks = {
+		.less = b->ops->sort_cmp,
+		.swp = btree_iter_swap,
+	};
 
 	/* Heapify the iterator, using our comparison function */
-	for (i = iter->used / 2 - 1; i >= 0; --i)
-		heap_sift(iter, i, b->ops->sort_cmp);
+	min_heapify_all(&iter->heap, &callbacks, NULL);
 
 	while (!btree_iter_end(iter)) {
 		if (b->ops->sort_fixup && fixup)
@@ -1295,7 +1318,9 @@ void bch_btree_sort_partial(struct btree_keys *b, unsigned int start,
 	size_t order = b->page_order, keys = 0;
 	struct btree_iter iter;
 	int oldsize = bch_count_data(b);
+	struct btree_iter_set data[MAX_BSETS];
 
+	iter.heap.heap.data = data;
 	__bch_btree_iter_init(b, &iter, NULL, &b->set[start]);
 
 	if (start) {
@@ -1324,7 +1349,9 @@ void bch_btree_sort_into(struct btree_keys *b, struct btree_keys *new,
 {
 	uint64_t start_time = local_clock();
 	struct btree_iter iter;
+	struct btree_iter_set data[MAX_BSETS];
 
+	iter.heap.heap.data = data;
 	bch_btree_iter_init(b, &iter, NULL);
 
 	btree_mergesort(b, new->set->data, &iter, false, true);
diff --git a/drivers/md/bcache/bset.h b/drivers/md/bcache/bset.h
index d795c84246b0..c764127937df 100644
--- a/drivers/md/bcache/bset.h
+++ b/drivers/md/bcache/bset.h
@@ -152,6 +152,19 @@ struct btree_iter;
 struct btree_iter_set;
 struct bkey_float;
 
+/* Btree key iteration */
+
+struct btree_iter_set {
+	struct bkey *k, *end;
+};
+
+struct btree_iter {
+#ifdef CONFIG_BCACHE_DEBUG
+	struct btree_keys *b;
+#endif
+	MIN_HEAP(struct btree_iter_set, btree_iter_heap) heap;
+};
+
 #define MAX_BSETS		4U
 
 struct bset_tree {
@@ -187,8 +200,9 @@ struct bset_tree {
 };
 
 struct btree_keys_ops {
-	bool		(*sort_cmp)(struct btree_iter_set l,
-				    struct btree_iter_set r);
+	bool		(*sort_cmp)(const void *l,
+				    const void *r,
+				    void *args);
 	struct bkey	*(*sort_fixup)(struct btree_iter *iter,
 				       struct bkey *tmp);
 	bool		(*insert_fixup)(struct btree_keys *b,
@@ -258,6 +272,14 @@ static inline unsigned int bset_sector_offset(struct btree_keys *b,
 	return bset_byte_offset(b, i) >> 9;
 }
 
+static inline void btree_iter_swap(void *iter1, void *iter2, void *args)
+{
+	struct btree_iter_set *_iter1 = iter1;
+	struct btree_iter_set *_iter2 = iter2;
+
+	swap(*_iter1, *_iter2);
+}
+
 #define __set_bytes(i, k)	(sizeof(*(i)) + (k) * sizeof(uint64_t))
 #define set_bytes(i)		__set_bytes(i, i->keys)
 
@@ -312,18 +334,6 @@ enum {
 	BTREE_INSERT_STATUS_FRONT_MERGE,
 };
 
-/* Btree key iteration */
-
-struct btree_iter {
-	size_t size, used;
-#ifdef CONFIG_BCACHE_DEBUG
-	struct btree_keys *b;
-#endif
-	struct btree_iter_set {
-		struct bkey *k, *end;
-	} data[MAX_BSETS];
-};
-
 typedef bool (*ptr_filter_fn)(struct btree_keys *b, const struct bkey *k);
 
 struct bkey *bch_btree_iter_next(struct btree_iter *iter);
diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
index 196cdacce38f..e7333a8c4819 100644
--- a/drivers/md/bcache/btree.c
+++ b/drivers/md/bcache/btree.c
@@ -157,8 +157,8 @@ void bch_btree_node_read_done(struct btree *b)
 	 * See the comment arount cache_set->fill_iter.
 	 */
 	iter = mempool_alloc(&b->c->fill_iter, GFP_NOIO);
-	iter->size = b->c->cache->sb.bucket_size / b->c->cache->sb.block_size;
-	iter->used = 0;
+	iter->heap.heap.size = b->c->cache->sb.bucket_size / b->c->cache->sb.block_size;
+	iter->heap.heap.nr = 0;
 
 #ifdef CONFIG_BCACHE_DEBUG
 	iter->b = &b->keys;
@@ -1311,6 +1311,9 @@ static bool btree_gc_mark_node(struct btree *b, struct gc_stat *gc)
 	struct bkey *k;
 	struct btree_iter iter;
 	struct bset_tree *t;
+	struct btree_iter_set data[MAX_BSETS];
+
+	iter.heap.heap.data = data;
 
 	gc->nodes++;
 
@@ -1572,6 +1575,9 @@ static unsigned int btree_gc_count_keys(struct btree *b)
 	struct bkey *k;
 	struct btree_iter iter;
 	unsigned int ret = 0;
+	struct btree_iter_set data[MAX_BSETS];
+
+	iter.heap.heap.data = data;
 
 	for_each_key_filter(&b->keys, k, &iter, bch_ptr_bad)
 		ret += bkey_u64s(k);
@@ -1614,6 +1620,9 @@ static int btree_gc_recurse(struct btree *b, struct btree_op *op,
 	struct btree_iter iter;
 	struct gc_merge_info r[GC_MERGE_NODES];
 	struct gc_merge_info *i, *last = r + ARRAY_SIZE(r) - 1;
+	struct btree_iter_set data[MAX_BSETS];
+
+	iter.heap.heap.data = data;
 
 	bch_btree_iter_init(&b->keys, &iter, &b->c->gc_done);
 
@@ -1912,6 +1921,9 @@ static int bch_btree_check_recurse(struct btree *b, struct btree_op *op)
 	int ret = 0;
 	struct bkey *k, *p = NULL;
 	struct btree_iter iter;
+	struct btree_iter_set data[MAX_BSETS];
+
+	iter.heap.heap.data = data;
 
 	for_each_key_filter(&b->keys, k, &iter, bch_ptr_invalid)
 		bch_initial_mark_key(b->c, b->level, k);
@@ -1953,7 +1965,9 @@ static int bch_btree_check_thread(void *arg)
 	struct btree_iter iter;
 	struct bkey *k, *p;
 	int cur_idx, prev_idx, skip_nr;
+	struct btree_iter_set data[MAX_BSETS];
 
+	iter.heap.heap.data = data;
 	k = p = NULL;
 	cur_idx = prev_idx = 0;
 	ret = 0;
@@ -2053,6 +2067,9 @@ int bch_btree_check(struct cache_set *c)
 	struct bkey *k = NULL;
 	struct btree_iter iter;
 	struct btree_check_state check_state;
+	struct btree_iter_set data[MAX_BSETS];
+
+	iter.heap.heap.data = data;
 
 	/* check and mark root node keys */
 	for_each_key_filter(&c->root->keys, k, &iter, bch_ptr_invalid)
@@ -2548,6 +2565,9 @@ static int bch_btree_map_nodes_recurse(struct btree *b, struct btree_op *op,
 	if (b->level) {
 		struct bkey *k;
 		struct btree_iter iter;
+		struct btree_iter_set data[MAX_BSETS];
+
+		iter.heap.heap.data = data;
 
 		bch_btree_iter_init(&b->keys, &iter, from);
 
@@ -2581,6 +2601,9 @@ int bch_btree_map_keys_recurse(struct btree *b, struct btree_op *op,
 	int ret = MAP_CONTINUE;
 	struct bkey *k;
 	struct btree_iter iter;
+	struct btree_iter_set data[MAX_BSETS];
+
+	iter.heap.heap.data = data;
 
 	bch_btree_iter_init(&b->keys, &iter, from);
 
diff --git a/drivers/md/bcache/extents.c b/drivers/md/bcache/extents.c
index d626ffcbecb9..ee6057558d4b 100644
--- a/drivers/md/bcache/extents.c
+++ b/drivers/md/bcache/extents.c
@@ -32,16 +32,19 @@ static void sort_key_next(struct btree_iter *iter,
 {
 	i->k = bkey_next(i->k);
 
-	if (i->k == i->end)
-		*i = iter->data[--iter->used];
+	if (i->k == i->end) {
+		struct btree_iter_set *data = iter->heap.heap.data;
+		*i = data[--iter->heap.heap.nr];
+	}
 }
 
-static bool bch_key_sort_cmp(struct btree_iter_set l,
-			     struct btree_iter_set r)
+static bool bch_key_sort_cmp(const void *l, const void *r, void *args)
 {
-	int64_t c = bkey_cmp(l.k, r.k);
+	struct btree_iter_set *_l = (struct btree_iter_set *)l;
+	struct btree_iter_set *_r = (struct btree_iter_set *)r;
+	int64_t c = bkey_cmp(_l->k, _r->k);
 
-	return c ? c > 0 : l.k < r.k;
+	return !(c ? c > 0 : _l->k < _r->k);
 }
 
 static bool __ptr_invalid(struct cache_set *c, const struct bkey *k)
@@ -255,22 +258,29 @@ const struct btree_keys_ops bch_btree_keys_ops = {
  * Necessary for btree_sort_fixup() - if there are multiple keys that compare
  * equal in different sets, we have to process them newest to oldest.
  */
-static bool bch_extent_sort_cmp(struct btree_iter_set l,
-				struct btree_iter_set r)
+static bool bch_extent_sort_cmp(const void *l, const void *r, void *args)
 {
-	int64_t c = bkey_cmp(&START_KEY(l.k), &START_KEY(r.k));
+	struct btree_iter_set *_l = (struct btree_iter_set *)l;
+	struct btree_iter_set *_r = (struct btree_iter_set *)r;
+
+	int64_t c = bkey_cmp(&START_KEY(_l->k), &START_KEY(_r->k));
 
-	return c ? c > 0 : l.k < r.k;
+	return !(c ? c > 0 : _l->k < _r->k);
 }
 
 static struct bkey *bch_extent_sort_fixup(struct btree_iter *iter,
 					  struct bkey *tmp)
 {
-	while (iter->used > 1) {
-		struct btree_iter_set *top = iter->data, *i = top + 1;
+	const struct min_heap_callbacks callbacks = {
+		.less = bch_extent_sort_cmp,
+		.swp = btree_iter_swap,
+	};
+
+	while (iter->heap.heap.nr > 1) {
+		struct btree_iter_set *top = min_heap_peek(&iter->heap), *i = top + 1;
 
-		if (iter->used > 2 &&
-		    bch_extent_sort_cmp(i[0], i[1]))
+		if (iter->heap.heap.nr > 2 &&
+		    !bch_extent_sort_cmp(&i[0], &i[1], NULL))
 			i++;
 
 		if (bkey_cmp(top->k, &START_KEY(i->k)) <= 0)
@@ -278,7 +288,7 @@ static struct bkey *bch_extent_sort_fixup(struct btree_iter *iter,
 
 		if (!KEY_SIZE(i->k)) {
 			sort_key_next(iter, i);
-			heap_sift(iter, i - top, bch_extent_sort_cmp);
+			min_heapify(&iter->heap, i - top, &callbacks, NULL);
 			continue;
 		}
 
@@ -288,7 +298,7 @@ static struct bkey *bch_extent_sort_fixup(struct btree_iter *iter,
 			else
 				bch_cut_front(top->k, i->k);
 
-			heap_sift(iter, i - top, bch_extent_sort_cmp);
+			min_heapify(&iter->heap, i - top, &callbacks, NULL);
 		} else {
 			/* can't happen because of comparison func */
 			BUG_ON(!bkey_cmp(&START_KEY(top->k), &START_KEY(i->k)));
@@ -298,7 +308,7 @@ static struct bkey *bch_extent_sort_fixup(struct btree_iter *iter,
 
 				bch_cut_back(&START_KEY(i->k), tmp);
 				bch_cut_front(i->k, top->k);
-				heap_sift(iter, 0, bch_extent_sort_cmp);
+				min_heapify(&iter->heap, 0, &callbacks, NULL);
 
 				return tmp;
 			} else {
diff --git a/drivers/md/bcache/movinggc.c b/drivers/md/bcache/movinggc.c
index ebd500bdf0b2..9f8aaf0845dc 100644
--- a/drivers/md/bcache/movinggc.c
+++ b/drivers/md/bcache/movinggc.c
@@ -182,16 +182,27 @@ err:		if (!IS_ERR_OR_NULL(w->private))
 	closure_sync(&cl);
 }
 
-static bool bucket_cmp(struct bucket *l, struct bucket *r)
+static bool bucket_cmp(const void *l, const void *r, void *args)
 {
-	return GC_SECTORS_USED(l) < GC_SECTORS_USED(r);
+	struct bucket *_l = (struct bucket *)l;
+	struct bucket *_r = (struct bucket *)r;
+
+	return GC_SECTORS_USED(_l) >= GC_SECTORS_USED(_r);
+}
+
+static void bucket_swap(void *l, void *r, void *args)
+{
+	struct bucket *_l = l;
+	struct bucket *_r = r;
+
+	swap(*_l, *_r);
 }
 
 static unsigned int bucket_heap_top(struct cache *ca)
 {
 	struct bucket *b;
 
-	return (b = heap_peek(&ca->heap)) ? GC_SECTORS_USED(b) : 0;
+	return (b = *min_heap_peek(&ca->heap)) ? GC_SECTORS_USED(b) : 0;
 }
 
 void bch_moving_gc(struct cache_set *c)
@@ -199,6 +210,10 @@ void bch_moving_gc(struct cache_set *c)
 	struct cache *ca = c->cache;
 	struct bucket *b;
 	unsigned long sectors_to_move, reserve_sectors;
+	const struct min_heap_callbacks callbacks = {
+		.less = bucket_cmp,
+		.swp = bucket_swap,
+	};
 
 	if (!c->copy_gc_enabled)
 		return;
@@ -209,7 +224,7 @@ void bch_moving_gc(struct cache_set *c)
 	reserve_sectors = ca->sb.bucket_size *
 			     fifo_used(&ca->free[RESERVE_MOVINGGC]);
 
-	ca->heap.used = 0;
+	ca->heap.heap.nr = 0;
 
 	for_each_bucket(b, ca) {
 		if (GC_MARK(b) == GC_MARK_METADATA ||
@@ -218,25 +233,28 @@ void bch_moving_gc(struct cache_set *c)
 		    atomic_read(&b->pin))
 			continue;
 
-		if (!heap_full(&ca->heap)) {
+		if (!min_heap_full(&ca->heap)) {
 			sectors_to_move += GC_SECTORS_USED(b);
-			heap_add(&ca->heap, b, bucket_cmp);
-		} else if (bucket_cmp(b, heap_peek(&ca->heap))) {
+			min_heap_push(&ca->heap, b, &callbacks, NULL);
+		} else if (!bucket_cmp(b, min_heap_peek(&ca->heap), NULL)) {
 			sectors_to_move -= bucket_heap_top(ca);
 			sectors_to_move += GC_SECTORS_USED(b);
 
-			ca->heap.data[0] = b;
-			heap_sift(&ca->heap, 0, bucket_cmp);
+			*min_heap_peek(&ca->heap) = b;
+			min_heapify(&ca->heap, 0, &callbacks, NULL);
 		}
 	}
 
 	while (sectors_to_move > reserve_sectors) {
-		heap_pop(&ca->heap, b, bucket_cmp);
+		b = *min_heap_peek(&ca->heap);
+		min_heap_pop(&ca->heap, &callbacks, NULL);
 		sectors_to_move -= GC_SECTORS_USED(b);
 	}
 
-	while (heap_pop(&ca->heap, b, bucket_cmp))
+	while ((b = *min_heap_peek(&ca->heap))) {
+		min_heap_pop(&ca->heap, &callbacks, NULL);
 		SET_GC_MOVE(b, 1);
+	}
 
 	mutex_unlock(&c->bucket_lock);
 
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 330bcd9ea4a9..1c6aeeff2cc3 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -2193,6 +2193,22 @@ static const char *register_cache_set(struct cache *ca)
 	return err;
 }
 
+static inline bool init_heap(struct heap *heap, size_t size, gfp_t gfp)
+{
+	size_t bytes = size * sizeof(struct bucket *);
+	void *data = kvmalloc(bytes, (gfp) & GFP_KERNEL);
+
+	min_heap_init(heap, data, size);
+
+	return data;
+}
+
+static inline void free_heap(struct heap *heap)
+{
+	kvfree(heap->heap.data);
+	heap->heap.data = NULL;
+}
+
 /* Cache device */
 
 /* When ca->kobj released */
diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
index 6956beb55326..eac2039c2cad 100644
--- a/drivers/md/bcache/sysfs.c
+++ b/drivers/md/bcache/sysfs.c
@@ -661,6 +661,9 @@ static unsigned int bch_root_usage(struct cache_set *c)
 	struct bkey *k;
 	struct btree *b;
 	struct btree_iter iter;
+	struct btree_iter_set data[MAX_BSETS];
+
+	iter.heap.heap.data = data;
 
 	goto lock_root;
 
diff --git a/drivers/md/bcache/util.h b/drivers/md/bcache/util.h
index f61ab1bada6c..fa928d1d327a 100644
--- a/drivers/md/bcache/util.h
+++ b/drivers/md/bcache/util.h
@@ -7,6 +7,7 @@
 #include <linux/closure.h>
 #include <linux/errno.h>
 #include <linux/kernel.h>
+#include <linux/min_heap.h>
 #include <linux/sched/clock.h>
 #include <linux/llist.h>
 #include <linux/ratelimit.h>
@@ -30,86 +31,6 @@ struct closure;
 
 #endif
 
-#define DECLARE_HEAP(type, name)					\
-	struct {							\
-		size_t size, used;					\
-		type *data;						\
-	} name
-
-#define init_heap(heap, _size, gfp)					\
-({									\
-	size_t _bytes;							\
-	(heap)->used = 0;						\
-	(heap)->size = (_size);						\
-	_bytes = (heap)->size * sizeof(*(heap)->data);			\
-	(heap)->data = kvmalloc(_bytes, (gfp) & GFP_KERNEL);		\
-	(heap)->data;							\
-})
-
-#define free_heap(heap)							\
-do {									\
-	kvfree((heap)->data);						\
-	(heap)->data = NULL;						\
-} while (0)
-
-#define heap_swap(h, i, j)	swap((h)->data[i], (h)->data[j])
-
-#define heap_sift(h, i, cmp)						\
-do {									\
-	size_t _r, _j = i;						\
-									\
-	for (; _j * 2 + 1 < (h)->used; _j = _r) {			\
-		_r = _j * 2 + 1;					\
-		if (_r + 1 < (h)->used &&				\
-		    cmp((h)->data[_r], (h)->data[_r + 1]))		\
-			_r++;						\
-									\
-		if (cmp((h)->data[_r], (h)->data[_j]))			\
-			break;						\
-		heap_swap(h, _r, _j);					\
-	}								\
-} while (0)
-
-#define heap_sift_down(h, i, cmp)					\
-do {									\
-	while (i) {							\
-		size_t p = (i - 1) / 2;					\
-		if (cmp((h)->data[i], (h)->data[p]))			\
-			break;						\
-		heap_swap(h, i, p);					\
-		i = p;							\
-	}								\
-} while (0)
-
-#define heap_add(h, d, cmp)						\
-({									\
-	bool _r = !heap_full(h);					\
-	if (_r) {							\
-		size_t _i = (h)->used++;				\
-		(h)->data[_i] = d;					\
-									\
-		heap_sift_down(h, _i, cmp);				\
-		heap_sift(h, _i, cmp);					\
-	}								\
-	_r;								\
-})
-
-#define heap_pop(h, d, cmp)						\
-({									\
-	bool _r = (h)->used;						\
-	if (_r) {							\
-		(d) = (h)->data[0];					\
-		(h)->used--;						\
-		heap_swap(h, 0, (h)->used);				\
-		heap_sift(h, 0, cmp);					\
-	}								\
-	_r;								\
-})
-
-#define heap_peek(h)	((h)->used ? (h)->data[0] : NULL)
-
-#define heap_full(h)	((h)->used == (h)->size)
-
 #define DECLARE_FIFO(type, name)					\
 	struct {							\
 		size_t front, back, size, mask;				\
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH 11/13] lib min_heap: Add min_heap_del()
  2024-03-19 17:59 [PATCH 00/13] treewide: Refactor heap related implementation Kuan-Wei Chiu
                   ` (9 preceding siblings ...)
  2024-03-19 18:00 ` [PATCH 10/13] bcache: Remove heap-related macros and switch to generic min_heap Kuan-Wei Chiu
@ 2024-03-19 18:00 ` Kuan-Wei Chiu
  2024-03-19 19:59   ` Ian Rogers
  2024-03-19 18:00 ` [PATCH 12/13] lib min_heap: Add min_heap_sift_up() Kuan-Wei Chiu
                   ` (2 subsequent siblings)
  13 siblings, 1 reply; 33+ messages in thread
From: Kuan-Wei Chiu @ 2024-03-19 18:00 UTC (permalink / raw)
  To: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung, akpm
  Cc: bfoster, mark.rutland, alexander.shishkin, jolsa, irogers,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel, Kuan-Wei Chiu

Add min_heap_del() to delete the element at index 'idx' in the heap.

Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
---
 include/linux/min_heap.h | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/include/linux/min_heap.h b/include/linux/min_heap.h
index 154ac2102114..ce085137fce7 100644
--- a/include/linux/min_heap.h
+++ b/include/linux/min_heap.h
@@ -199,4 +199,28 @@ bool __min_heap_push(struct __min_heap *heap, const void *element, size_t elem_s
 #define min_heap_push(_heap, _element, _func, _args)	\
 	__min_heap_push(&(_heap)->heap, _element, __minheap_obj_size(_heap), _func, _args)
 
+/* Remove the ith element from the heap, O(log2(nr)). */
+static __always_inline
+bool __min_heap_del(struct __min_heap *heap, size_t elem_size, size_t idx,
+		const struct min_heap_callbacks *func, void *args)
+{
+	void *data = heap->data;
+
+	if (WARN_ONCE(heap->nr <= 0, "Deleting from an empty heap"))
+		return false;
+
+	/* Move the last element into the vacated slot, then restore the heap. */
+	heap->nr--;
+	if (idx == heap->nr)
+		return true;
+	memcpy(data + (idx * elem_size), data + (heap->nr * elem_size), elem_size);
+	__min_heap_sift_up(heap, elem_size, idx, func, args);
+	__min_heapify(heap, idx, elem_size, func, args);
+
+	return true;
+}
+
+#define min_heap_del(_heap, _idx, _func, _args)	\
+	__min_heap_del(&(_heap)->heap, __minheap_obj_size(_heap), _idx, _func, _args)
+
 #endif /* _LINUX_MIN_HEAP_H */
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH 12/13] lib min_heap: Add min_heap_sift_up()
  2024-03-19 17:59 [PATCH 00/13] treewide: Refactor heap related implementation Kuan-Wei Chiu
                   ` (10 preceding siblings ...)
  2024-03-19 18:00 ` [PATCH 11/13] lib min_heap: Add min_heap_del() Kuan-Wei Chiu
@ 2024-03-19 18:00 ` Kuan-Wei Chiu
  2024-03-19 20:12   ` Ian Rogers
  2024-03-19 18:00 ` [PATCH 13/13] bcachefs: Remove heap-related macros and switch to generic min_heap Kuan-Wei Chiu
  2024-03-19 22:12 ` [PATCH 00/13] treewide: Refactor heap related implementation Kent Overstreet
  13 siblings, 1 reply; 33+ messages in thread
From: Kuan-Wei Chiu @ 2024-03-19 18:00 UTC (permalink / raw)
  To: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung, akpm
  Cc: bfoster, mark.rutland, alexander.shishkin, jolsa, irogers,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel, Kuan-Wei Chiu

Add min_heap_sift_up() to sift up the element at index 'idx' in the
heap.

Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
---
 include/linux/min_heap.h | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/include/linux/min_heap.h b/include/linux/min_heap.h
index ce085137fce7..586965977104 100644
--- a/include/linux/min_heap.h
+++ b/include/linux/min_heap.h
@@ -199,6 +199,26 @@ bool __min_heap_push(struct __min_heap *heap, const void *element, size_t elem_s
 #define min_heap_push(_heap, _element, _func, _args)	\
 	__min_heap_push(&(_heap)->heap, _element, __minheap_obj_size(_heap), _func, _args)
 
+/* Sift up the ith element in the heap, O(log2(nr)). */
+static __always_inline
+void __min_heap_sift_up(struct __min_heap *heap, size_t elem_size, size_t idx,
+		const struct min_heap_callbacks *func, void *args)
+{
+	void *data = heap->data;
+	size_t parent;
+
+	while (idx) {
+		parent = (idx - 1) / 2;
+		if (func->less(data + parent * elem_size, data + idx * elem_size, args))
+			break;
+		func->swp(data + parent * elem_size, data + idx * elem_size, args);
+		idx = parent;
+	}
+}
+
+#define min_heap_sift_up(_heap, _idx, _func, _args)	\
+	__min_heap_sift_up(&(_heap)->heap, __minheap_obj_size(_heap), _idx, _func, _args)
+
 /* Remove ith element from the heap, O(log2(nr)). */
 static __always_inline
 bool __min_heap_del(struct __min_heap *heap, size_t elem_size, size_t idx,
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH 13/13] bcachefs: Remove heap-related macros and switch to generic min_heap
  2024-03-19 17:59 [PATCH 00/13] treewide: Refactor heap related implementation Kuan-Wei Chiu
                   ` (11 preceding siblings ...)
  2024-03-19 18:00 ` [PATCH 12/13] lib min_heap: Add min_heap_sift_up() Kuan-Wei Chiu
@ 2024-03-19 18:00 ` Kuan-Wei Chiu
  2024-03-19 20:03   ` Ian Rogers
  2024-03-19 22:12 ` [PATCH 00/13] treewide: Refactor heap related implementation Kent Overstreet
  13 siblings, 1 reply; 33+ messages in thread
From: Kuan-Wei Chiu @ 2024-03-19 18:00 UTC (permalink / raw)
  To: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung, akpm
  Cc: bfoster, mark.rutland, alexander.shishkin, jolsa, irogers,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel, Kuan-Wei Chiu

Drop the heap-related macros from bcachefs and replace them with the
generic min_heap implementation from include/linux. This change
improves code readability by using functions instead of macros.

Link: https://lkml.kernel.org/ioyfizrzq7w7mjrqcadtzsfgpuntowtjdw5pgn4qhvsdp4mqqg@nrlek5vmisbu
Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
---
 fs/bcachefs/clock.c       |  53 +++++++++++-----
 fs/bcachefs/clock_types.h |   2 +-
 fs/bcachefs/ec.c          |  99 ++++++++++++++++++-----------
 fs/bcachefs/ec_types.h    |   2 +-
 fs/bcachefs/util.h        | 127 +++-----------------------------------
 5 files changed, 109 insertions(+), 174 deletions(-)

diff --git a/fs/bcachefs/clock.c b/fs/bcachefs/clock.c
index 363644451106..7a7d13f7a629 100644
--- a/fs/bcachefs/clock.c
+++ b/fs/bcachefs/clock.c
@@ -6,16 +6,29 @@
 #include <linux/kthread.h>
 #include <linux/preempt.h>
 
-static inline long io_timer_cmp(io_timer_heap *h,
-				struct io_timer *l,
-				struct io_timer *r)
+static inline bool io_timer_cmp(const void *l, const void *r, void *args)
 {
-	return l->expire - r->expire;
+	struct io_timer *_l = (struct io_timer *)l;
+	struct io_timer *_r = (struct io_timer *)r;
+
+	return _l->expire >= _r->expire;
+}
+
+static inline void io_timer_swp(void *l, void *r, void *args)
+{
+	struct io_timer *_l = (struct io_timer *)l;
+	struct io_timer *_r = (struct io_timer *)r;
+
+	swap(*_l, *_r);
 }
 
 void bch2_io_timer_add(struct io_clock *clock, struct io_timer *timer)
 {
 	size_t i;
+	const struct min_heap_callbacks callbacks = {
+		.less = io_timer_cmp,
+		.swp = io_timer_swp,
+	};
 
 	spin_lock(&clock->timer_lock);
 
@@ -26,11 +39,11 @@ void bch2_io_timer_add(struct io_clock *clock, struct io_timer *timer)
 		return;
 	}
 
-	for (i = 0; i < clock->timers.used; i++)
-		if (clock->timers.data[i] == timer)
+	for (i = 0; i < clock->timers.heap.nr; i++)
+		if (min_heap_peek(&clock->timers)[i] == timer)
 			goto out;
 
-	BUG_ON(!heap_add(&clock->timers, timer, io_timer_cmp, NULL));
+	BUG_ON(!min_heap_push(&clock->timers, &timer, &callbacks, NULL));
 out:
 	spin_unlock(&clock->timer_lock);
 }
@@ -38,12 +51,16 @@ void bch2_io_timer_add(struct io_clock *clock, struct io_timer *timer)
 void bch2_io_timer_del(struct io_clock *clock, struct io_timer *timer)
 {
 	size_t i;
+	const struct min_heap_callbacks callbacks = {
+		.less = io_timer_cmp,
+		.swp = io_timer_swp,
+	};
 
 	spin_lock(&clock->timer_lock);
 
-	for (i = 0; i < clock->timers.used; i++)
-		if (clock->timers.data[i] == timer) {
-			heap_del(&clock->timers, i, io_timer_cmp, NULL);
+	for (i = 0; i < clock->timers.heap.nr; i++)
+		if (min_heap_peek(&clock->timers)[i] == timer) {
+			min_heap_pop(&clock->timers, &callbacks, NULL);
 			break;
 		}
 
@@ -131,12 +148,16 @@ static struct io_timer *get_expired_timer(struct io_clock *clock,
 					  unsigned long now)
 {
 	struct io_timer *ret = NULL;
+	const struct min_heap_callbacks callbacks = {
+		.less = io_timer_cmp,
+		.swp = io_timer_swp,
+	};
 
 	spin_lock(&clock->timer_lock);
 
-	if (clock->timers.used &&
-	    time_after_eq(now, clock->timers.data[0]->expire))
-		heap_pop(&clock->timers, ret, io_timer_cmp, NULL);
+	if (clock->timers.heap.nr &&
+	    time_after_eq(now, min_heap_peek(&clock->timers)[0]->expire))
+		min_heap_pop(&clock->timers, &callbacks, NULL);
 
 	spin_unlock(&clock->timer_lock);
 
@@ -161,10 +182,10 @@ void bch2_io_timers_to_text(struct printbuf *out, struct io_clock *clock)
 	spin_lock(&clock->timer_lock);
 	now = atomic64_read(&clock->now);
 
-	for (i = 0; i < clock->timers.used; i++)
+	for (i = 0; i < clock->timers.heap.nr; i++)
 		prt_printf(out, "%ps:\t%li\n",
-		       clock->timers.data[i]->fn,
-		       clock->timers.data[i]->expire - now);
+		       min_heap_peek(&clock->timers)[i]->fn,
+		       min_heap_peek(&clock->timers)[i]->expire - now);
 	spin_unlock(&clock->timer_lock);
 	--out->atomic;
 }
diff --git a/fs/bcachefs/clock_types.h b/fs/bcachefs/clock_types.h
index 5fae0012d808..b02b24b9d74f 100644
--- a/fs/bcachefs/clock_types.h
+++ b/fs/bcachefs/clock_types.h
@@ -23,7 +23,7 @@ struct io_timer {
 /* Amount to buffer up on a percpu counter */
 #define IO_CLOCK_PCPU_SECTORS	128
 
-typedef HEAP(struct io_timer *)	io_timer_heap;
+typedef MIN_HEAP(struct io_timer *, io_timer_heap) io_timer_heap;
 
 struct io_clock {
 	atomic64_t		now;
diff --git a/fs/bcachefs/ec.c b/fs/bcachefs/ec.c
index b98e2c2b8bf0..7b6a31237503 100644
--- a/fs/bcachefs/ec.c
+++ b/fs/bcachefs/ec.c
@@ -860,14 +860,14 @@ static int __ec_stripe_mem_alloc(struct bch_fs *c, size_t idx, gfp_t gfp)
 {
 	ec_stripes_heap n, *h = &c->ec_stripes_heap;
 
-	if (idx >= h->size) {
+	if (idx >= h->heap.size) {
 		if (!init_heap(&n, max(1024UL, roundup_pow_of_two(idx + 1)), gfp))
 			return -BCH_ERR_ENOMEM_ec_stripe_mem_alloc;
 
 		mutex_lock(&c->ec_stripes_heap_lock);
-		if (n.size > h->size) {
-			memcpy(n.data, h->data, h->used * sizeof(h->data[0]));
-			n.used = h->used;
+		if (n.heap.size > h->heap.size) {
+			memcpy(min_heap_peek(&n), min_heap_peek(h), h->heap.nr * sizeof(*min_heap_peek(h)));
+			n.heap.nr = h->heap.nr;
 			swap(*h, n);
 		}
 		mutex_unlock(&c->ec_stripes_heap_lock);
@@ -958,20 +958,21 @@ static u64 stripe_idx_to_delete(struct bch_fs *c)
 
 	lockdep_assert_held(&c->ec_stripes_heap_lock);
 
-	if (h->used &&
-	    h->data[0].blocks_nonempty == 0 &&
-	    !bch2_stripe_is_open(c, h->data[0].idx))
-		return h->data[0].idx;
+	if (h->heap.nr &&
+	    min_heap_peek(h)->blocks_nonempty == 0 &&
+	    !bch2_stripe_is_open(c, min_heap_peek(h)->idx))
+		return min_heap_peek(h)->idx;
 
 	return 0;
 }
 
-static inline int ec_stripes_heap_cmp(ec_stripes_heap *h,
-				      struct ec_stripe_heap_entry l,
-				      struct ec_stripe_heap_entry r)
+static inline bool ec_stripes_heap_cmp(const void *l, const void *r, void *args)
 {
-	return ((l.blocks_nonempty > r.blocks_nonempty) -
-		(l.blocks_nonempty < r.blocks_nonempty));
+	struct ec_stripe_heap_entry *_l = (struct ec_stripe_heap_entry *)l;
+	struct ec_stripe_heap_entry *_r = (struct ec_stripe_heap_entry *)r;
+
+	return ((_l->blocks_nonempty > _r->blocks_nonempty) >=
+		(_l->blocks_nonempty < _r->blocks_nonempty));
 }
 
 static inline void ec_stripes_heap_set_backpointer(ec_stripes_heap *h,
@@ -979,7 +980,21 @@ static inline void ec_stripes_heap_set_backpointer(ec_stripes_heap *h,
 {
 	struct bch_fs *c = container_of(h, struct bch_fs, ec_stripes_heap);
 
-	genradix_ptr(&c->stripes, h->data[i].idx)->heap_idx = i;
+	genradix_ptr(&c->stripes, min_heap_peek(h)[i].idx)->heap_idx = i;
+}
+
+static inline void ec_stripes_heap_swap(void *l, void *r, void *h)
+{
+	struct ec_stripe_heap_entry *_l = (struct ec_stripe_heap_entry *)l;
+	struct ec_stripe_heap_entry *_r = (struct ec_stripe_heap_entry *)r;
+	ec_stripes_heap *_h = (ec_stripes_heap *)h;
+	size_t i = _l - min_heap_peek(_h);
+	size_t j = _r - min_heap_peek(_h);
+
+	ec_stripes_heap_set_backpointer(_h, i);
+	ec_stripes_heap_set_backpointer(_h, j);
+
+	swap(*_l, *_r);
 }
 
 static void heap_verify_backpointer(struct bch_fs *c, size_t idx)
@@ -987,34 +1002,43 @@ static void heap_verify_backpointer(struct bch_fs *c, size_t idx)
 	ec_stripes_heap *h = &c->ec_stripes_heap;
 	struct stripe *m = genradix_ptr(&c->stripes, idx);
 
-	BUG_ON(m->heap_idx >= h->used);
-	BUG_ON(h->data[m->heap_idx].idx != idx);
+	BUG_ON(m->heap_idx >= h->heap.nr);
+	BUG_ON(min_heap_peek(h)[m->heap_idx].idx != idx);
 }
 
 void bch2_stripes_heap_del(struct bch_fs *c,
 			   struct stripe *m, size_t idx)
 {
+	const struct min_heap_callbacks callbacks = {
+		.less = ec_stripes_heap_cmp,
+		.swp = ec_stripes_heap_swap,
+	};
+
 	mutex_lock(&c->ec_stripes_heap_lock);
 	heap_verify_backpointer(c, idx);
 
-	heap_del(&c->ec_stripes_heap, m->heap_idx,
-		 ec_stripes_heap_cmp,
-		 ec_stripes_heap_set_backpointer);
+	min_heap_del(&c->ec_stripes_heap, m->heap_idx, &callbacks, &c->ec_stripes_heap);
 	mutex_unlock(&c->ec_stripes_heap_lock);
 }
 
 void bch2_stripes_heap_insert(struct bch_fs *c,
 			      struct stripe *m, size_t idx)
 {
+	const struct min_heap_callbacks callbacks = {
+		.less = ec_stripes_heap_cmp,
+		.swp = ec_stripes_heap_swap,
+	};
+
 	mutex_lock(&c->ec_stripes_heap_lock);
-	BUG_ON(heap_full(&c->ec_stripes_heap));
+	BUG_ON(min_heap_full(&c->ec_stripes_heap));
 
-	heap_add(&c->ec_stripes_heap, ((struct ec_stripe_heap_entry) {
+	genradix_ptr(&c->stripes, idx)->heap_idx = c->ec_stripes_heap.heap.nr;
+	min_heap_push(&c->ec_stripes_heap, &((struct ec_stripe_heap_entry) {
 			.idx = idx,
 			.blocks_nonempty = m->blocks_nonempty,
 		}),
-		 ec_stripes_heap_cmp,
-		 ec_stripes_heap_set_backpointer);
+		 &callbacks,
+		 &c->ec_stripes_heap);
 
 	heap_verify_backpointer(c, idx);
 	mutex_unlock(&c->ec_stripes_heap_lock);
@@ -1026,17 +1050,20 @@ void bch2_stripes_heap_update(struct bch_fs *c,
 	ec_stripes_heap *h = &c->ec_stripes_heap;
 	bool do_deletes;
 	size_t i;
+	const struct min_heap_callbacks callbacks = {
+		.less = ec_stripes_heap_cmp,
+		.swp = ec_stripes_heap_swap,
+	};
 
 	mutex_lock(&c->ec_stripes_heap_lock);
 	heap_verify_backpointer(c, idx);
 
-	h->data[m->heap_idx].blocks_nonempty = m->blocks_nonempty;
+	min_heap_peek(h)[m->heap_idx].blocks_nonempty = m->blocks_nonempty;
 
 	i = m->heap_idx;
-	heap_sift_up(h,	  i, ec_stripes_heap_cmp,
-		     ec_stripes_heap_set_backpointer);
-	heap_sift_down(h, i, ec_stripes_heap_cmp,
-		       ec_stripes_heap_set_backpointer);
+
+	min_heap_sift_up(h,	i, &callbacks, &c->ec_stripes_heap);
+	min_heapify(h, i, &callbacks, &c->ec_stripes_heap);
 
 	heap_verify_backpointer(c, idx);
 
@@ -1828,12 +1855,12 @@ static s64 get_existing_stripe(struct bch_fs *c,
 		return -1;
 
 	mutex_lock(&c->ec_stripes_heap_lock);
-	for (heap_idx = 0; heap_idx < h->used; heap_idx++) {
+	for (heap_idx = 0; heap_idx < h->heap.nr; heap_idx++) {
 		/* No blocks worth reusing, stripe will just be deleted: */
-		if (!h->data[heap_idx].blocks_nonempty)
+		if (!min_heap_peek(h)[heap_idx].blocks_nonempty)
 			continue;
 
-		stripe_idx = h->data[heap_idx].idx;
+		stripe_idx = min_heap_peek(h)[heap_idx].idx;
 
 		m = genradix_ptr(&c->stripes, stripe_idx);
 
@@ -2159,14 +2186,14 @@ void bch2_stripes_heap_to_text(struct printbuf *out, struct bch_fs *c)
 	size_t i;
 
 	mutex_lock(&c->ec_stripes_heap_lock);
-	for (i = 0; i < min_t(size_t, h->used, 50); i++) {
-		m = genradix_ptr(&c->stripes, h->data[i].idx);
+	for (i = 0; i < min_t(size_t, h->heap.nr, 50); i++) {
+		m = genradix_ptr(&c->stripes, min_heap_peek(h)[i].idx);
 
-		prt_printf(out, "%zu %u/%u+%u", h->data[i].idx,
-		       h->data[i].blocks_nonempty,
+		prt_printf(out, "%zu %u/%u+%u", min_heap_peek(h)[i].idx,
+		       min_heap_peek(h)[i].blocks_nonempty,
 		       m->nr_blocks - m->nr_redundant,
 		       m->nr_redundant);
-		if (bch2_stripe_is_open(c, h->data[i].idx))
+		if (bch2_stripe_is_open(c, min_heap_peek(h)[i].idx))
 			prt_str(out, " open");
 		prt_newline(out);
 	}
diff --git a/fs/bcachefs/ec_types.h b/fs/bcachefs/ec_types.h
index 976426da3a12..2ed67431a81c 100644
--- a/fs/bcachefs/ec_types.h
+++ b/fs/bcachefs/ec_types.h
@@ -36,6 +36,6 @@ struct ec_stripe_heap_entry {
 	unsigned		blocks_nonempty;
 };
 
-typedef HEAP(struct ec_stripe_heap_entry) ec_stripes_heap;
+typedef MIN_HEAP(struct ec_stripe_heap_entry, ec_stripes_heap) ec_stripes_heap;
 
 #endif /* _BCACHEFS_EC_TYPES_H */
diff --git a/fs/bcachefs/util.h b/fs/bcachefs/util.h
index 7ffbddb80400..c599dc5276ac 100644
--- a/fs/bcachefs/util.h
+++ b/fs/bcachefs/util.h
@@ -8,6 +8,7 @@
 #include <linux/errno.h>
 #include <linux/freezer.h>
 #include <linux/kernel.h>
+#include <linux/min_heap.h>
 #include <linux/sched/clock.h>
 #include <linux/llist.h>
 #include <linux/log2.h>
@@ -54,134 +55,20 @@ static inline size_t buf_pages(void *p, size_t len)
 			    PAGE_SIZE);
 }
 
-#define HEAP(type)							\
-struct {								\
-	size_t size, used;						\
-	type *data;							\
-}
-
-#define DECLARE_HEAP(type, name) HEAP(type) name
-
 #define init_heap(heap, _size, gfp)					\
 ({									\
-	(heap)->used = 0;						\
-	(heap)->size = (_size);						\
-	(heap)->data = kvmalloc((heap)->size * sizeof((heap)->data[0]),\
-				 (gfp));				\
-})
-
-#define free_heap(heap)							\
-do {									\
-	kvfree((heap)->data);						\
-	(heap)->data = NULL;						\
-} while (0)
-
-#define heap_set_backpointer(h, i, _fn)					\
-do {									\
-	void (*fn)(typeof(h), size_t) = _fn;				\
-	if (fn)								\
-		fn(h, i);						\
-} while (0)
-
-#define heap_swap(h, i, j, set_backpointer)				\
-do {									\
-	swap((h)->data[i], (h)->data[j]);				\
-	heap_set_backpointer(h, i, set_backpointer);			\
-	heap_set_backpointer(h, j, set_backpointer);			\
-} while (0)
-
-#define heap_peek(h)							\
-({									\
-	EBUG_ON(!(h)->used);						\
-	(h)->data[0];							\
-})
-
-#define heap_full(h)	((h)->used == (h)->size)
-
-#define heap_sift_down(h, i, cmp, set_backpointer)			\
-do {									\
-	size_t _c, _j = i;						\
-									\
-	for (; _j * 2 + 1 < (h)->used; _j = _c) {			\
-		_c = _j * 2 + 1;					\
-		if (_c + 1 < (h)->used &&				\
-		    cmp(h, (h)->data[_c], (h)->data[_c + 1]) >= 0)	\
-			_c++;						\
-									\
-		if (cmp(h, (h)->data[_c], (h)->data[_j]) >= 0)		\
-			break;						\
-		heap_swap(h, _c, _j, set_backpointer);			\
-	}								\
-} while (0)
-
-#define heap_sift_up(h, i, cmp, set_backpointer)			\
-do {									\
-	while (i) {							\
-		size_t p = (i - 1) / 2;					\
-		if (cmp(h, (h)->data[i], (h)->data[p]) >= 0)		\
-			break;						\
-		heap_swap(h, i, p, set_backpointer);			\
-		i = p;							\
-	}								\
-} while (0)
-
-#define __heap_add(h, d, cmp, set_backpointer)				\
-({									\
-	size_t _i = (h)->used++;					\
-	(h)->data[_i] = d;						\
-	heap_set_backpointer(h, _i, set_backpointer);			\
-									\
-	heap_sift_up(h, _i, cmp, set_backpointer);			\
-	_i;								\
-})
-
-#define heap_add(h, d, cmp, set_backpointer)				\
-({									\
-	bool _r = !heap_full(h);					\
-	if (_r)								\
-		__heap_add(h, d, cmp, set_backpointer);			\
-	_r;								\
+	void *data = kvmalloc(_size * sizeof(*min_heap_peek(heap)), (gfp));\
+	min_heap_init(heap, data, _size);				\
+	min_heap_peek(heap);						\
 })
 
-#define heap_add_or_replace(h, new, cmp, set_backpointer)		\
-do {									\
-	if (!heap_add(h, new, cmp, set_backpointer) &&			\
-	    cmp(h, new, heap_peek(h)) >= 0) {				\
-		(h)->data[0] = new;					\
-		heap_set_backpointer(h, 0, set_backpointer);		\
-		heap_sift_down(h, 0, cmp, set_backpointer);		\
-	}								\
-} while (0)
 
-#define heap_del(h, i, cmp, set_backpointer)				\
+#define free_heap(_heap)							\
 do {									\
-	size_t _i = (i);						\
-									\
-	BUG_ON(_i >= (h)->used);					\
-	(h)->used--;							\
-	if ((_i) < (h)->used) {						\
-		heap_swap(h, _i, (h)->used, set_backpointer);		\
-		heap_sift_up(h, _i, cmp, set_backpointer);		\
-		heap_sift_down(h, _i, cmp, set_backpointer);		\
-	}								\
+	kvfree((_heap)->heap.data);						\
+	(_heap)->heap.data = NULL;						\
 } while (0)
 
-#define heap_pop(h, d, cmp, set_backpointer)				\
-({									\
-	bool _r = (h)->used;						\
-	if (_r) {							\
-		(d) = (h)->data[0];					\
-		heap_del(h, 0, cmp, set_backpointer);			\
-	}								\
-	_r;								\
-})
-
-#define heap_resort(heap, cmp, set_backpointer)				\
-do {									\
-	ssize_t _i;							\
-	for (_i = (ssize_t) (heap)->used / 2 -  1; _i >= 0; --_i)	\
-		heap_sift_down(heap, _i, cmp, set_backpointer);		\
-} while (0)
 
 #define ANYSINT_MAX(t)							\
 	((((t) 1 << (sizeof(t) * 8 - 2)) - (t) 1) * (t) 2 + (t) 1)
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* Re: [PATCH 01/13] perf/core: Fix several typos
  2024-03-19 17:59 ` [PATCH 01/13] perf/core: Fix several typos Kuan-Wei Chiu
@ 2024-03-19 19:31   ` Ian Rogers
  0 siblings, 0 replies; 33+ messages in thread
From: Ian Rogers @ 2024-03-19 19:31 UTC (permalink / raw)
  To: Kuan-Wei Chiu
  Cc: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung,
	akpm, bfoster, mark.rutland, alexander.shishkin, jolsa,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel

On Tue, Mar 19, 2024 at 11:00 AM Kuan-Wei Chiu <visitorckw@gmail.com> wrote:
>
> Replace 'artifically' with 'artificially'.
> Replace 'irrespecive' with 'irrespective'.
> Replace 'futher' with 'further'.
> Replace 'sufficent' with 'sufficient'.
>
> Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>

Reviewed-by: Ian Rogers <irogers@google.com>

Thanks,
Ian

> ---
>  kernel/events/core.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 724e6d7e128f..10ac2db83f14 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -534,7 +534,7 @@ void perf_sample_event_took(u64 sample_len_ns)
>         __this_cpu_write(running_sample_length, running_len);
>
>         /*
> -        * Note: this will be biased artifically low until we have
> +        * Note: this will be biased artificially low until we have
>          * seen NR_ACCUMULATED_SAMPLES. Doing it this way keeps us
>          * from having to maintain a count.
>          */
> @@ -596,10 +596,10 @@ static inline u64 perf_event_clock(struct perf_event *event)
>   *
>   * Event groups make things a little more complicated, but not terribly so. The
>   * rules for a group are that if the group leader is OFF the entire group is
> - * OFF, irrespecive of what the group member states are. This results in
> + * OFF, irrespective of what the group member states are. This results in
>   * __perf_effective_state().
>   *
> - * A futher ramification is that when a group leader flips between OFF and
> + * A further ramification is that when a group leader flips between OFF and
>   * !OFF, we need to update all group member times.
>   *
>   *
> @@ -891,7 +891,7 @@ static int perf_cgroup_ensure_storage(struct perf_event *event,
>         int cpu, heap_size, ret = 0;
>
>         /*
> -        * Allow storage to have sufficent space for an iterator for each
> +        * Allow storage to have sufficient space for an iterator for each
>          * possibly nested cgroup plus an iterator for events with no cgroup.
>          */
>         for (heap_size = 1; css; css = css->parent)
> --
> 2.34.1
>


* Re: [PATCH 04/13] lib min_heap: Add type safe interface
  2024-03-19 17:59 ` [PATCH 04/13] lib min_heap: Add type safe interface Kuan-Wei Chiu
@ 2024-03-19 19:49   ` Ian Rogers
  0 siblings, 0 replies; 33+ messages in thread
From: Ian Rogers @ 2024-03-19 19:49 UTC (permalink / raw)
  To: Kuan-Wei Chiu
  Cc: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung,
	akpm, bfoster, mark.rutland, alexander.shishkin, jolsa,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel

On Tue, Mar 19, 2024 at 11:00 AM Kuan-Wei Chiu <visitorckw@gmail.com> wrote:
>
> Introduce a type-safe interface for min_heap by adding small macro
> wrappers around functions and using a 0-size array to store type
> information. This enables the use of __minheap_cast and
> __minheap_obj_size macros for type casting and obtaining element size.
> The implementation draws inspiration from generic-radix-tree.h,
> eliminating the need to pass element size in min_heap_callbacks.
>
> Link: https://lkml.kernel.org/ioyfizrzq7w7mjrqcadtzsfgpuntowtjdw5pgn4qhvsdp4mqqg@nrlek5vmisbu
> Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>

Reviewed-by: Ian Rogers <irogers@google.com>

Thanks,
Ian

> ---
>  drivers/md/dm-vdo/repair.c     | 21 +++++-----
>  drivers/md/dm-vdo/slab-depot.c | 13 +++---
>  include/linux/min_heap.h       | 75 +++++++++++++++++++++++-----------
>  kernel/events/core.c           | 35 ++++++++--------
>  lib/test_min_heap.c            | 49 +++++++++++-----------
>  5 files changed, 107 insertions(+), 86 deletions(-)
>
> diff --git a/drivers/md/dm-vdo/repair.c b/drivers/md/dm-vdo/repair.c
> index defc9359f10e..7663fa2098f4 100644
> --- a/drivers/md/dm-vdo/repair.c
> +++ b/drivers/md/dm-vdo/repair.c
> @@ -51,6 +51,8 @@ struct recovery_point {
>         bool increment_applied;
>  };
>
> +MIN_HEAP(struct numbered_block_mapping *, replay_heap);
> +
>  struct repair_completion {
>         /* The completion header */
>         struct vdo_completion completion;
> @@ -97,7 +99,7 @@ struct repair_completion {
>          * order, then original journal order. This permits efficient iteration over the journal
>          * entries in order.
>          */
> -       struct min_heap replay_heap;
> +       struct replay_heap replay_heap;
>         /* Fields tracking progress through the journal entries. */
>         struct numbered_block_mapping *current_entry;
>         struct numbered_block_mapping *current_unfetched_entry;
> @@ -163,25 +165,24 @@ static void swap_mappings(void *item1, void *item2)
>  }
>
>  static const struct min_heap_callbacks repair_min_heap = {
> -       .elem_size = sizeof(struct numbered_block_mapping),
>         .less = mapping_is_less_than,
>         .swp = swap_mappings,
>  };
>
>  static struct numbered_block_mapping *sort_next_heap_element(struct repair_completion *repair)
>  {
> -       struct min_heap *heap = &repair->replay_heap;
> +       struct replay_heap *heap = &repair->replay_heap;
>         struct numbered_block_mapping *last;
>
> -       if (heap->nr == 0)
> +       if (heap->heap.nr == 0)
>                 return NULL;
>
>         /*
>          * Swap the next heap element with the last one on the heap, popping it off the heap,
>          * restore the heap invariant, and return a pointer to the popped element.
>          */
> -       last = &repair->entries[--heap->nr];
> -       swap_mappings(heap->data, last);
> +       last = &repair->entries[--heap->heap.nr];
> +       swap_mappings(heap->heap.data, last);
>         min_heapify(heap, 0, &repair_min_heap);
>         return last;
>  }
> @@ -1117,11 +1118,9 @@ static void recover_block_map(struct vdo_completion *completion)
>          * Organize the journal entries into a binary heap so we can iterate over them in sorted
>          * order incrementally, avoiding an expensive sort call.
>          */
> -       repair->replay_heap = (struct min_heap) {
> -               .data = repair->entries,
> -               .nr = repair->block_map_entry_count,
> -               .size = repair->block_map_entry_count,
> -       };
> +       repair->replay_heap.heap.data = repair->entries;
> +       repair->replay_heap.heap.nr = repair->block_map_entry_count;
> +       repair->replay_heap.heap.size = repair->block_map_entry_count;
>         min_heapify_all(&repair->replay_heap, &repair_min_heap);
>
>         vdo_log_info("Replaying %zu recovery entries into block map",
> diff --git a/drivers/md/dm-vdo/slab-depot.c b/drivers/md/dm-vdo/slab-depot.c
> index 46e4721e5b4f..3309480170c3 100644
> --- a/drivers/md/dm-vdo/slab-depot.c
> +++ b/drivers/md/dm-vdo/slab-depot.c
> @@ -3309,7 +3309,6 @@ static void swap_slab_statuses(void *item1, void *item2)
>  }
>
>  static const struct min_heap_callbacks slab_status_min_heap = {
> -       .elem_size = sizeof(struct slab_status),
>         .less = slab_status_is_less_than,
>         .swp = swap_slab_statuses,
>  };
> @@ -3509,7 +3508,7 @@ static int get_slab_statuses(struct block_allocator *allocator,
>  static int __must_check vdo_prepare_slabs_for_allocation(struct block_allocator *allocator)
>  {
>         struct slab_status current_slab_status;
> -       struct min_heap heap;
> +       MIN_HEAP(struct slab_status *, heap) heap;
>         int result;
>         struct slab_status *slab_statuses;
>         struct slab_depot *depot = allocator->depot;
> @@ -3521,14 +3520,12 @@ static int __must_check vdo_prepare_slabs_for_allocation(struct block_allocator
>                 return result;
>
>         /* Sort the slabs by cleanliness, then by emptiness hint. */
> -       heap = (struct min_heap) {
> -               .data = slab_statuses,
> -               .nr = allocator->slab_count,
> -               .size = allocator->slab_count,
> -       };
> +       heap.heap.data = slab_statuses;
> +       heap.heap.nr = allocator->slab_count;
> +       heap.heap.size = allocator->slab_count;
>         min_heapify_all(&heap, &slab_status_min_heap);
>
> -       while (heap.nr > 0) {
> +       while (heap.heap.nr > 0) {
>                 bool high_priority;
>                 struct vdo_slab *slab;
>                 struct slab_journal *journal;
> diff --git a/include/linux/min_heap.h b/include/linux/min_heap.h
> index d52daf45861b..c3635a7fdb88 100644
> --- a/include/linux/min_heap.h
> +++ b/include/linux/min_heap.h
> @@ -7,45 +7,59 @@
>  #include <linux/types.h>
>
>  /**
> - * struct min_heap - Data structure to hold a min-heap.
> + * struct __min_heap - Data structure to hold a min-heap.
>   * @data: Start of array holding the heap elements.
>   * @nr: Number of elements currently in the heap.
>   * @size: Maximum number of elements that can be held in current storage.
>   */
> -struct min_heap {
> +struct __min_heap {
>         void *data;
>         int nr;
>         int size;
>  };
>
> +/*
> + * We use a 0 size array to stash the type we're storing without taking any
> + * space at runtime - then the various accessor macros can use typeof() to get
> + * to it for casts/sizeof - we also force the alignment so that storing a type
> + * with a ridiculous alignment doesn't blow up the alignment or size of the
> + * min_heap.
> + */
> +#define MIN_HEAP(_type, _name)                 \
> +struct _name {                         \
> +       struct __min_heap       heap;           \
> +       _type type[0] __aligned(1);     \
> +}
> +
> +#define __minheap_cast(_heap)          (typeof((_heap)->type[0]) *)
> +#define __minheap_obj_size(_heap)      sizeof((_heap)->type[0])
> +
>  /**
>   * struct min_heap_callbacks - Data/functions to customise the min_heap.
> - * @elem_size: The nr of each element in bytes.
>   * @less: Partial order function for this heap.
>   * @swp: Swap elements function.
>   */
>  struct min_heap_callbacks {
> -       int elem_size;
>         bool (*less)(const void *lhs, const void *rhs);
>         void (*swp)(void *lhs, void *rhs);
>  };
>
>  /* Sift the element at pos down the heap. */
>  static __always_inline
> -void min_heapify(struct min_heap *heap, int pos,
> +void __min_heapify(struct __min_heap *heap, int pos, size_t elem_size,
>                 const struct min_heap_callbacks *func)
>  {
>         void *left, *right;
>         void *data = heap->data;
> -       void *root = data + pos * func->elem_size;
> +       void *root = data + pos * elem_size;
>         int i = pos, j;
>
>         /* Find the sift-down path all the way to the leaves. */
>         for (;;) {
>                 if (i * 2 + 2 >= heap->nr)
>                         break;
> -               left = data + (i * 2 + 1) * func->elem_size;
> -               right = data + (i * 2 + 2) * func->elem_size;
> +               left = data + (i * 2 + 1) * elem_size;
> +               right = data + (i * 2 + 2) * elem_size;
>                 i = func->less(left, right) ? i * 2 + 1 : i * 2 + 2;
>         }
>
> @@ -54,31 +68,37 @@ void min_heapify(struct min_heap *heap, int pos,
>                 i = i * 2 + 1;
>
>         /* Backtrack to the correct location. */
> -       while (i != pos && func->less(root, data + i * func->elem_size))
> +       while (i != pos && func->less(root, data + i * elem_size))
>                 i = (i - 1) / 2;
>
>         /* Shift the element into its correct place. */
>         j = i;
>         while (i != pos) {
>                 i = (i - 1) / 2;
> -               func->swp(data + i * func->elem_size, data + j * func->elem_size);
> +               func->swp(data + i * elem_size, data + j * elem_size);
>         }
>  }
>
> +#define min_heapify(_heap, _pos, _func)        \
> +       __min_heapify(&(_heap)->heap, _pos, __minheap_obj_size(_heap), _func)
> +
>  /* Floyd's approach to heapification that is O(nr). */
>  static __always_inline
> -void min_heapify_all(struct min_heap *heap,
> +void __min_heapify_all(struct __min_heap *heap, size_t elem_size,
>                 const struct min_heap_callbacks *func)
>  {
>         int i;
>
>         for (i = heap->nr / 2 - 1; i >= 0; i--)
> -               min_heapify(heap, i, func);
> +               __min_heapify(heap, i, elem_size, func);
>  }
>
> +#define min_heapify_all(_heap, _func)  \
> +       __min_heapify_all(&(_heap)->heap, __minheap_obj_size(_heap), _func)
> +
>  /* Remove minimum element from the heap, O(log2(nr)). */
>  static __always_inline
> -void min_heap_pop(struct min_heap *heap,
> +void __min_heap_pop(struct __min_heap *heap, size_t elem_size,
>                 const struct min_heap_callbacks *func)
>  {
>         void *data = heap->data;
> @@ -88,27 +108,33 @@ void min_heap_pop(struct min_heap *heap,
>
>         /* Place last element at the root (position 0) and then sift down. */
>         heap->nr--;
> -       memcpy(data, data + (heap->nr * func->elem_size), func->elem_size);
> -       min_heapify(heap, 0, func);
> +       memcpy(data, data + (heap->nr * elem_size), elem_size);
> +       __min_heapify(heap, 0, elem_size, func);
>  }
>
> +#define min_heap_pop(_heap, _func)     \
> +       __min_heap_pop(&(_heap)->heap, __minheap_obj_size(_heap), _func)
> +
>  /*
>   * Remove the minimum element and then push the given element. The
>   * implementation performs 1 sift (O(log2(nr))) and is therefore more
>   * efficient than a pop followed by a push that does 2.
>   */
>  static __always_inline
> -void min_heap_pop_push(struct min_heap *heap,
> -               const void *element,
> +void __min_heap_pop_push(struct __min_heap *heap,
> +               const void *element, size_t elem_size,
>                 const struct min_heap_callbacks *func)
>  {
> -       memcpy(heap->data, element, func->elem_size);
> -       min_heapify(heap, 0, func);
> +       memcpy(heap->data, element, elem_size);
> +       __min_heapify(heap, 0, elem_size, func);
>  }
>
> +#define min_heap_pop_push(_heap, _element, _func)      \
> +       __min_heap_pop_push(&(_heap)->heap, _element, __minheap_obj_size(_heap), _func)
> +
>  /* Push an element on to the heap, O(log2(nr)). */
>  static __always_inline
> -void min_heap_push(struct min_heap *heap, const void *element,
> +void __min_heap_push(struct __min_heap *heap, const void *element, size_t elem_size,
>                 const struct min_heap_callbacks *func)
>  {
>         void *data = heap->data;
> @@ -120,17 +146,20 @@ void min_heap_push(struct min_heap *heap, const void *element,
>
>         /* Place at the end of data. */
>         pos = heap->nr;
> -       memcpy(data + (pos * func->elem_size), element, func->elem_size);
> +       memcpy(data + (pos * elem_size), element, elem_size);
>         heap->nr++;
>
>         /* Sift child at pos up. */
>         for (; pos > 0; pos = (pos - 1) / 2) {
> -               child = data + (pos * func->elem_size);
> -               parent = data + ((pos - 1) / 2) * func->elem_size;
> +               child = data + (pos * elem_size);
> +               parent = data + ((pos - 1) / 2) * elem_size;
>                 if (func->less(parent, child))
>                         break;
>                 func->swp(parent, child);
>         }
>  }
>
> +#define min_heap_push(_heap, _element, _func)  \
> +       __min_heap_push(&(_heap)->heap, _element, __minheap_obj_size(_heap), _func)
> +
>  #endif /* _LINUX_MIN_HEAP_H */
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 10ac2db83f14..065dfaa8b009 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -3698,19 +3698,20 @@ static void swap_ptr(void *l, void *r)
>         swap(*lp, *rp);
>  }
>
> +MIN_HEAP(struct perf_event *, perf_event_min_heap);
> +
>  static const struct min_heap_callbacks perf_min_heap = {
> -       .elem_size = sizeof(struct perf_event *),
>         .less = perf_less_group_idx,
>         .swp = swap_ptr,
>  };
>
> -static void __heap_add(struct min_heap *heap, struct perf_event *event)
> +static void __heap_add(struct perf_event_min_heap *heap, struct perf_event *event)
>  {
> -       struct perf_event **itrs = heap->data;
> +       struct perf_event **itrs = heap->heap.data;
>
>         if (event) {
> -               itrs[heap->nr] = event;
> -               heap->nr++;
> +               itrs[heap->heap.nr] = event;
> +               heap->heap.nr++;
>         }
>  }
>
> @@ -3738,7 +3739,7 @@ static noinline int visit_groups_merge(struct perf_event_context *ctx,
>         struct perf_cpu_context *cpuctx = NULL;
>         /* Space for per CPU and/or any CPU event iterators. */
>         struct perf_event *itrs[2];
> -       struct min_heap event_heap;
> +       struct perf_event_min_heap event_heap;
>         struct perf_event **evt;
>         int ret;
>
> @@ -3747,11 +3748,9 @@ static noinline int visit_groups_merge(struct perf_event_context *ctx,
>
>         if (!ctx->task) {
>                 cpuctx = this_cpu_ptr(&perf_cpu_context);
> -               event_heap = (struct min_heap){
> -                       .data = cpuctx->heap,
> -                       .nr = 0,
> -                       .size = cpuctx->heap_size,
> -               };
> +               event_heap.heap.data = cpuctx->heap;
> +               event_heap.heap.nr = 0;
> +               event_heap.heap.size = cpuctx->heap_size;
>
>                 lockdep_assert_held(&cpuctx->ctx.lock);
>
> @@ -3760,15 +3759,13 @@ static noinline int visit_groups_merge(struct perf_event_context *ctx,
>                         css = &cpuctx->cgrp->css;
>  #endif
>         } else {
> -               event_heap = (struct min_heap){
> -                       .data = itrs,
> -                       .nr = 0,
> -                       .size = ARRAY_SIZE(itrs),
> -               };
> +               event_heap.heap.data = itrs;
> +               event_heap.heap.nr = 0;
> +               event_heap.heap.size = ARRAY_SIZE(itrs);
>                 /* Events not within a CPU context may be on any CPU. */
>                 __heap_add(&event_heap, perf_event_groups_first(groups, -1, pmu, NULL));
>         }
> -       evt = event_heap.data;
> +       evt = event_heap.heap.data;
>
>         __heap_add(&event_heap, perf_event_groups_first(groups, cpu, pmu, NULL));
>
> @@ -3777,14 +3774,14 @@ static noinline int visit_groups_merge(struct perf_event_context *ctx,
>                 __heap_add(&event_heap, perf_event_groups_first(groups, cpu, pmu, css->cgroup));
>  #endif
>
> -       if (event_heap.nr) {
> +       if (event_heap.heap.nr) {
>                 __link_epc((*evt)->pmu_ctx);
>                 perf_assert_pmu_disabled((*evt)->pmu_ctx->pmu);
>         }
>
>         min_heapify_all(&event_heap, &perf_min_heap);
>
> -       while (event_heap.nr) {
> +       while (event_heap.heap.nr) {
>                 ret = func(*evt, data);
>                 if (ret)
>                         return ret;
> diff --git a/lib/test_min_heap.c b/lib/test_min_heap.c
> index 7b01b4387cfb..af2e446034d8 100644
> --- a/lib/test_min_heap.c
> +++ b/lib/test_min_heap.c
> @@ -11,6 +11,8 @@
>  #include <linux/printk.h>
>  #include <linux/random.h>
>
> +MIN_HEAP(int, min_heap_test);
> +
>  static __init bool less_than(const void *lhs, const void *rhs)
>  {
>         return *(int *)lhs < *(int *)rhs;
> @@ -30,16 +32,16 @@ static __init void swap_ints(void *lhs, void *rhs)
>  }
>
>  static __init int pop_verify_heap(bool min_heap,
> -                               struct min_heap *heap,
> +                               struct min_heap_test *heap,
>                                 const struct min_heap_callbacks *funcs)
>  {
> -       int *values = heap->data;
> +       int *values = heap->heap.data;
>         int err = 0;
>         int last;
>
>         last = values[0];
>         min_heap_pop(heap, funcs);
> -       while (heap->nr > 0) {
> +       while (heap->heap.nr > 0) {
>                 if (min_heap) {
>                         if (last > values[0]) {
>                                 pr_err("error: expected %d <= %d\n", last,
> @@ -63,13 +65,12 @@ static __init int test_heapify_all(bool min_heap)
>  {
>         int values[] = { 3, 1, 2, 4, 0x8000000, 0x7FFFFFF, 0,
>                          -3, -1, -2, -4, 0x8000000, 0x7FFFFFF };
> -       struct min_heap heap = {
> -               .data = values,
> -               .nr = ARRAY_SIZE(values),
> -               .size =  ARRAY_SIZE(values),
> -       };
> +       struct min_heap_test heap;
> +
> +       heap.heap.data = values;
> +       heap.heap.nr = ARRAY_SIZE(values);
> +       heap.heap.size =  ARRAY_SIZE(values);
>         struct min_heap_callbacks funcs = {
> -               .elem_size = sizeof(int),
>                 .less = min_heap ? less_than : greater_than,
>                 .swp = swap_ints,
>         };
> @@ -81,8 +82,8 @@ static __init int test_heapify_all(bool min_heap)
>
>
>         /* Test with randomly generated values. */
> -       heap.nr = ARRAY_SIZE(values);
> -       for (i = 0; i < heap.nr; i++)
> +       heap.heap.nr = ARRAY_SIZE(values);
> +       for (i = 0; i < heap.heap.nr; i++)
>                 values[i] = get_random_u32();
>
>         min_heapify_all(&heap, &funcs);
> @@ -96,13 +97,12 @@ static __init int test_heap_push(bool min_heap)
>         const int data[] = { 3, 1, 2, 4, 0x80000000, 0x7FFFFFFF, 0,
>                              -3, -1, -2, -4, 0x80000000, 0x7FFFFFFF };
>         int values[ARRAY_SIZE(data)];
> -       struct min_heap heap = {
> -               .data = values,
> -               .nr = 0,
> -               .size =  ARRAY_SIZE(values),
> -       };
> +       struct min_heap_test heap;
> +
> +       heap.heap.data = values;
> +       heap.heap.nr = 0;
> +       heap.heap.size =  ARRAY_SIZE(values);
>         struct min_heap_callbacks funcs = {
> -               .elem_size = sizeof(int),
>                 .less = min_heap ? less_than : greater_than,
>                 .swp = swap_ints,
>         };
> @@ -115,7 +115,7 @@ static __init int test_heap_push(bool min_heap)
>         err = pop_verify_heap(min_heap, &heap, &funcs);
>
>         /* Test with randomly generated values. */
> -       while (heap.nr < heap.size) {
> +       while (heap.heap.nr < heap.heap.size) {
>                 temp = get_random_u32();
>                 min_heap_push(&heap, &temp, &funcs);
>         }
> @@ -129,13 +129,12 @@ static __init int test_heap_pop_push(bool min_heap)
>         const int data[] = { 3, 1, 2, 4, 0x80000000, 0x7FFFFFFF, 0,
>                              -3, -1, -2, -4, 0x80000000, 0x7FFFFFFF };
>         int values[ARRAY_SIZE(data)];
> -       struct min_heap heap = {
> -               .data = values,
> -               .nr = 0,
> -               .size =  ARRAY_SIZE(values),
> -       };
> +       struct min_heap_test heap;
> +
> +       heap.heap.data = values;
> +       heap.heap.nr = 0;
> +       heap.heap.size =  ARRAY_SIZE(values);
>         struct min_heap_callbacks funcs = {
> -               .elem_size = sizeof(int),
>                 .less = min_heap ? less_than : greater_than,
>                 .swp = swap_ints,
>         };
> @@ -152,7 +151,7 @@ static __init int test_heap_pop_push(bool min_heap)
>
>         err = pop_verify_heap(min_heap, &heap, &funcs);
>
> -       heap.nr = 0;
> +       heap.heap.nr = 0;
>         for (i = 0; i < ARRAY_SIZE(data); i++)
>                 min_heap_push(&heap, &temp, &funcs);
>
> --
> 2.34.1
>


* Re: [PATCH 05/13] lib min_heap: Add min_heap_init()
  2024-03-19 17:59 ` [PATCH 05/13] lib min_heap: Add min_heap_init() Kuan-Wei Chiu
@ 2024-03-19 19:51   ` Ian Rogers
  2024-03-20  2:56     ` Kuan-Wei Chiu
  0 siblings, 1 reply; 33+ messages in thread
From: Ian Rogers @ 2024-03-19 19:51 UTC (permalink / raw)
  To: Kuan-Wei Chiu
  Cc: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung,
	akpm, bfoster, mark.rutland, alexander.shishkin, jolsa,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel

On Tue, Mar 19, 2024 at 11:00 AM Kuan-Wei Chiu <visitorckw@gmail.com> wrote:
>
> Add min_heap_init() for initializing a heap with data, nr, and size.
>
> Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>

Should this change update lib/test_min_heap.c to use min_heap_init?

Thanks,
Ian

> ---
>  include/linux/min_heap.h | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
>
> diff --git a/include/linux/min_heap.h b/include/linux/min_heap.h
> index c3635a7fdb88..ed462f194b88 100644
> --- a/include/linux/min_heap.h
> +++ b/include/linux/min_heap.h
> @@ -44,6 +44,18 @@ struct min_heap_callbacks {
>         void (*swp)(void *lhs, void *rhs);
>  };
>
> +/* Initialize a min-heap. */
> +static __always_inline
> +void __min_heap_init(struct __min_heap *heap, void *data, int size)
> +{
> +       heap->data = data;
> +       heap->nr = 0;
> +       heap->size = size;
> +}
> +
> +#define min_heap_init(_heap, _data, _size)     \
> +       __min_heap_init(&(_heap)->heap, _data, _size)
> +
>  /* Sift the element at pos down the heap. */
>  static __always_inline
>  void __min_heapify(struct __min_heap *heap, int pos, size_t elem_size,
> --
> 2.34.1
>


* Re: [PATCH 06/13] lib min_heap: Add min_heap_peek()
  2024-03-19 17:59 ` [PATCH 06/13] lib min_heap: Add min_heap_peek() Kuan-Wei Chiu
@ 2024-03-19 19:55   ` Ian Rogers
  0 siblings, 0 replies; 33+ messages in thread
From: Ian Rogers @ 2024-03-19 19:55 UTC (permalink / raw)
  To: Kuan-Wei Chiu
  Cc: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung,
	akpm, bfoster, mark.rutland, alexander.shishkin, jolsa,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel

On Tue, Mar 19, 2024 at 11:00 AM Kuan-Wei Chiu <visitorckw@gmail.com> wrote:
>
> Add a min_heap_peek() function to retrieve a pointer to the smallest
> element. The pointer is cast to the appropriate type of heap elements.
>
> Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>

I see these functions gain test coverage from later changes in the series.

Reviewed-by: Ian Rogers <irogers@google.com>

Thanks,
Ian

> ---
>  include/linux/min_heap.h | 10 ++++++++++
>  1 file changed, 10 insertions(+)
>
> diff --git a/include/linux/min_heap.h b/include/linux/min_heap.h
> index ed462f194b88..7c1fd1ddc71a 100644
> --- a/include/linux/min_heap.h
> +++ b/include/linux/min_heap.h
> @@ -56,6 +56,16 @@ void __min_heap_init(struct __min_heap *heap, void *data, int size)
>  #define min_heap_init(_heap, _data, _size)     \
>         __min_heap_init(&(_heap)->heap, _data, _size)
>
> +/* Get the minimum element from the heap. */
> +static __always_inline
> +void *__min_heap_peek(struct __min_heap *heap)
> +{
> +       return heap->nr ? heap->data : NULL;
> +}
> +
> +#define min_heap_peek(_heap)   \
> +       (__minheap_cast(_heap) __min_heap_peek(&(_heap)->heap))
> +
>  /* Sift the element at pos down the heap. */
>  static __always_inline
>  void __min_heapify(struct __min_heap *heap, int pos, size_t elem_size,
> --
> 2.34.1
>


* Re: [PATCH 07/13] lib min_heap: Add min_heap_full()
  2024-03-19 17:59 ` [PATCH 07/13] lib min_heap: Add min_heap_full() Kuan-Wei Chiu
@ 2024-03-19 19:55   ` Ian Rogers
  0 siblings, 0 replies; 33+ messages in thread
From: Ian Rogers @ 2024-03-19 19:55 UTC (permalink / raw)
  To: Kuan-Wei Chiu
  Cc: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung,
	akpm, bfoster, mark.rutland, alexander.shishkin, jolsa,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel

On Tue, Mar 19, 2024 at 11:00 AM Kuan-Wei Chiu <visitorckw@gmail.com> wrote:
>
> Add min_heap_full() which returns a boolean value indicating whether
> the heap has reached its maximum capacity.
>
> Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>

I see these functions gain test coverage from later changes in the series.

Reviewed-by: Ian Rogers <irogers@google.com>

Thanks,
Ian

> ---
>  include/linux/min_heap.h | 10 ++++++++++
>  1 file changed, 10 insertions(+)
>
> diff --git a/include/linux/min_heap.h b/include/linux/min_heap.h
> index 7c1fd1ddc71a..b1d874f4d536 100644
> --- a/include/linux/min_heap.h
> +++ b/include/linux/min_heap.h
> @@ -66,6 +66,16 @@ void *__min_heap_peek(struct __min_heap *heap)
>  #define min_heap_peek(_heap)   \
>         (__minheap_cast(_heap) __min_heap_peek(&(_heap)->heap))
>
> +/* Check if the heap is full. */
> +static __always_inline
> +bool __min_heap_full(struct __min_heap *heap)
> +{
> +       return heap->nr == heap->size;
> +}
> +
> +#define min_heap_full(_heap)   \
> +       __min_heap_full(&(_heap)->heap)
> +
>  /* Sift the element at pos down the heap. */
>  static __always_inline
>  void __min_heapify(struct __min_heap *heap, int pos, size_t elem_size,
> --
> 2.34.1
>


* Re: [PATCH 09/13] lib min_heap: Update min_heap_push() and min_heap_pop() to return bool values
  2024-03-19 18:00 ` [PATCH 09/13] lib min_heap: Update min_heap_push() and min_heap_pop() to return bool values Kuan-Wei Chiu
@ 2024-03-19 19:57   ` Ian Rogers
  0 siblings, 0 replies; 33+ messages in thread
From: Ian Rogers @ 2024-03-19 19:57 UTC (permalink / raw)
  To: Kuan-Wei Chiu
  Cc: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung,
	akpm, bfoster, mark.rutland, alexander.shishkin, jolsa,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel

On Tue, Mar 19, 2024 at 11:00 AM Kuan-Wei Chiu <visitorckw@gmail.com> wrote:
>
> Modify min_heap_push() and min_heap_pop() to return a boolean value:
> false when the operation fails and true when it succeeds.
>
> Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>

Nice change.

Reviewed-by: Ian Rogers <irogers@google.com>

Thanks,
Ian

> ---
>  include/linux/min_heap.h | 12 ++++++++----
>  1 file changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/min_heap.h b/include/linux/min_heap.h
> index 97d8ba5c32e6..154ac2102114 100644
> --- a/include/linux/min_heap.h
> +++ b/include/linux/min_heap.h
> @@ -130,18 +130,20 @@ void __min_heapify_all(struct __min_heap *heap, size_t elem_size,
>
>  /* Remove minimum element from the heap, O(log2(nr)). */
>  static __always_inline
> -void __min_heap_pop(struct __min_heap *heap, size_t elem_size,
> +bool __min_heap_pop(struct __min_heap *heap, size_t elem_size,
>                 const struct min_heap_callbacks *func, void *args)
>  {
>         void *data = heap->data;
>
>         if (WARN_ONCE(heap->nr <= 0, "Popping an empty heap"))
> -               return;
> +               return false;
>
>         /* Place last element at the root (position 0) and then sift down. */
>         heap->nr--;
>         memcpy(data, data + (heap->nr * elem_size), elem_size);
>         __min_heapify(heap, 0, elem_size, func, args);
> +
> +       return true;
>  }
>
>  #define min_heap_pop(_heap, _func, _args)      \
> @@ -167,7 +169,7 @@ void __min_heap_pop_push(struct __min_heap *heap,
>
>  /* Push an element on to the heap, O(log2(nr)). */
>  static __always_inline
> -void __min_heap_push(struct __min_heap *heap, const void *element, size_t elem_size,
> +bool __min_heap_push(struct __min_heap *heap, const void *element, size_t elem_size,
>                 const struct min_heap_callbacks *func, void *args)
>  {
>         void *data = heap->data;
> @@ -175,7 +177,7 @@ void __min_heap_push(struct __min_heap *heap, const void *element, size_t elem_s
>         int pos;
>
>         if (WARN_ONCE(heap->nr >= heap->size, "Pushing on a full heap"))
> -               return;
> +               return false;
>
>         /* Place at the end of data. */
>         pos = heap->nr;
> @@ -190,6 +192,8 @@ void __min_heap_push(struct __min_heap *heap, const void *element, size_t elem_s
>                         break;
>                 func->swp(parent, child, args);
>         }
> +
> +       return true;
>  }
>
>  #define min_heap_push(_heap, _element, _func, _args)   \
> --
> 2.34.1
>

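Returning bool lets call sites check for failure directly (for example, the bcachefs conversion later in this series wraps min_heap_push() in BUG_ON()). A minimal, self-contained sketch of the same push/pop contract, in plain C with illustrative names rather than the kernel's min_heap API:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for the kernel heap: a fixed-capacity int
 * min-heap whose push/pop report success as a bool, mirroring the
 * semantics this patch gives __min_heap_push()/__min_heap_pop(). */
struct int_heap { int data[8]; int nr, size; };

static void swap_int(int *a, int *b) { int t = *a; *a = *b; *b = t; }

static void sift_down(struct int_heap *h, int i)
{
	for (;;) {
		int s = i, l = 2 * i + 1, r = 2 * i + 2;

		if (l < h->nr && h->data[l] < h->data[s])
			s = l;
		if (r < h->nr && h->data[r] < h->data[s])
			s = r;
		if (s == i)
			return;
		swap_int(&h->data[s], &h->data[i]);
		i = s;
	}
}

static bool heap_push(struct int_heap *h, int v)
{
	if (h->nr >= h->size)
		return false;	/* full: caller can now detect this */
	int i = h->nr++;

	h->data[i] = v;
	while (i && h->data[(i - 1) / 2] > h->data[i]) {
		swap_int(&h->data[(i - 1) / 2], &h->data[i]);
		i = (i - 1) / 2;
	}
	return true;
}

static bool heap_pop(struct int_heap *h, int *out)
{
	if (h->nr <= 0)
		return false;	/* empty: no silent no-op */
	*out = h->data[0];
	h->data[0] = h->data[--h->nr];
	sift_down(h, 0);
	return true;
}
```

With this shape, overflow and underflow become testable conditions instead of WARN-and-continue paths.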
^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 11/13] lib min_heap: Add min_heap_del()
  2024-03-19 18:00 ` [PATCH 11/13] lib min_heap: Add min_heap_del() Kuan-Wei Chiu
@ 2024-03-19 19:59   ` Ian Rogers
  0 siblings, 0 replies; 33+ messages in thread
From: Ian Rogers @ 2024-03-19 19:59 UTC (permalink / raw)
  To: Kuan-Wei Chiu
  Cc: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung,
	akpm, bfoster, mark.rutland, alexander.shishkin, jolsa,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel

On Tue, Mar 19, 2024 at 11:01 AM Kuan-Wei Chiu <visitorckw@gmail.com> wrote:
>
> Add min_heap_del() to delete the element at index 'idx' in the heap.
>
> Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>

Reviewed-by: Ian Rogers <irogers@google.com>

Thanks,
Ian

> ---
>  include/linux/min_heap.h | 24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
>
> diff --git a/include/linux/min_heap.h b/include/linux/min_heap.h
> index 154ac2102114..ce085137fce7 100644
> --- a/include/linux/min_heap.h
> +++ b/include/linux/min_heap.h
> @@ -199,4 +199,28 @@ bool __min_heap_push(struct __min_heap *heap, const void *element, size_t elem_s
>  #define min_heap_push(_heap, _element, _func, _args)   \
>         __min_heap_push(&(_heap)->heap, _element, __minheap_obj_size(_heap), _func, _args)
>
> +/* Remove the element at index 'idx' from the heap, O(log2(nr)). */
> +static __always_inline
> +bool __min_heap_del(struct __min_heap *heap, size_t elem_size, size_t idx,
> +               const struct min_heap_callbacks *func, void *args)
> +{
> +       void *data = heap->data;
> +
> +       if (WARN_ONCE(heap->nr <= 0, "Deleting from an empty heap"))
> +               return false;
> +
> +       /* Replace the element at 'idx' with the last element, then re-heapify. */
> +       heap->nr--;
> +       if (idx == heap->nr)
> +               return true;
> +       memcpy(data + (idx * elem_size), data + (heap->nr * elem_size), elem_size);
> +       __min_heap_sift_up(heap, elem_size, idx, func, args);
> +       __min_heapify(heap, idx, elem_size, func, args);
> +
> +       return true;
> +}
> +
> +#define min_heap_del(_heap, _idx, _func, _args)        \
> +       __min_heap_del(&(_heap)->heap, __minheap_obj_size(_heap), _idx, _func, _args)
> +
>  #endif /* _LINUX_MIN_HEAP_H */
> --
> 2.34.1
>

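The delete-at-index operation generalizes pop: the freed slot is filled with the last element, which may then need to move either up or down to restore the heap property. A standalone sketch of that idea, in illustrative C rather than the kernel interface:

```c
#include <assert.h>
#include <stdbool.h>

struct int_heap { int data[16]; int nr; };

static void swap_int(int *a, int *b) { int t = *a; *a = *b; *b = t; }

static void sift_up(struct int_heap *h, int i)
{
	while (i && h->data[(i - 1) / 2] > h->data[i]) {
		swap_int(&h->data[(i - 1) / 2], &h->data[i]);
		i = (i - 1) / 2;
	}
}

static void sift_down(struct int_heap *h, int i)
{
	for (;;) {
		int s = i, l = 2 * i + 1, r = 2 * i + 2;

		if (l < h->nr && h->data[l] < h->data[s])
			s = l;
		if (r < h->nr && h->data[r] < h->data[s])
			s = r;
		if (s == i)
			return;
		swap_int(&h->data[s], &h->data[i]);
		i = s;
	}
}

static bool heap_del(struct int_heap *h, int idx)
{
	if (idx >= h->nr)
		return false;
	h->nr--;
	if (idx == h->nr)		/* deleting the last slot: nothing to fix */
		return true;
	h->data[idx] = h->data[h->nr];	/* move last element into the hole */
	sift_up(h, idx);		/* it may be smaller than its new parent... */
	sift_down(h, idx);		/* ...or larger than its new children */
	return true;
}

/* Check the min-heap invariant for every parent/child pair. */
static bool is_min_heap(const struct int_heap *h)
{
	for (int i = 1; i < h->nr; i++)
		if (h->data[(i - 1) / 2] > h->data[i])
			return false;
	return true;
}
```

Both sift directions are needed because the relocated last element has no ordering relationship with the deleted slot's neighbors.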

* Re: [PATCH 13/13] bcachefs: Remove heap-related macros and switch to generic min_heap
  2024-03-19 18:00 ` [PATCH 13/13] bcachefs: Remove heap-related macros and switch to generic min_heap Kuan-Wei Chiu
@ 2024-03-19 20:03   ` Ian Rogers
  2024-03-20  2:55     ` Kuan-Wei Chiu
  0 siblings, 1 reply; 33+ messages in thread
From: Ian Rogers @ 2024-03-19 20:03 UTC (permalink / raw)
  To: Kuan-Wei Chiu
  Cc: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung,
	akpm, bfoster, mark.rutland, alexander.shishkin, jolsa,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel

On Tue, Mar 19, 2024 at 11:01 AM Kuan-Wei Chiu <visitorckw@gmail.com> wrote:
>
> Drop the heap-related macros from bcachefs and replace them with the
> generic min_heap implementation from include/linux. This change
> improves code readability by using functions instead of macros.
>
> Link: https://lkml.kernel.org/ioyfizrzq7w7mjrqcadtzsfgpuntowtjdw5pgn4qhvsdp4mqqg@nrlek5vmisbu
> Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
> ---
>  fs/bcachefs/clock.c       |  53 +++++++++++-----
>  fs/bcachefs/clock_types.h |   2 +-
>  fs/bcachefs/ec.c          |  99 ++++++++++++++++++-----------
>  fs/bcachefs/ec_types.h    |   2 +-
>  fs/bcachefs/util.h        | 127 +++-----------------------------------
>  5 files changed, 109 insertions(+), 174 deletions(-)
>
> diff --git a/fs/bcachefs/clock.c b/fs/bcachefs/clock.c
> index 363644451106..7a7d13f7a629 100644
> --- a/fs/bcachefs/clock.c
> +++ b/fs/bcachefs/clock.c
> @@ -6,16 +6,29 @@
>  #include <linux/kthread.h>
>  #include <linux/preempt.h>
>
> -static inline long io_timer_cmp(io_timer_heap *h,
> -                               struct io_timer *l,
> -                               struct io_timer *r)
> +static inline bool io_timer_cmp(const void *l, const void *r, void *args)

Should args here and below be marked with the attribute __always_unused?

Thanks,
Ian

>  {
> -       return l->expire - r->expire;
> +       struct io_timer **_l = (struct io_timer **)l;
> +       struct io_timer **_r = (struct io_timer **)r;
> +
> +       return (*_l)->expire < (*_r)->expire;
> +}
> +
> +static inline void io_timer_swp(void *l, void *r, void *args)
> +{
> +       struct io_timer **_l = (struct io_timer **)l;
> +       struct io_timer **_r = (struct io_timer **)r;
> +
> +       swap(*_l, *_r);
>  }
>
>  void bch2_io_timer_add(struct io_clock *clock, struct io_timer *timer)
>  {
>         size_t i;
> +       const struct min_heap_callbacks callbacks = {
> +               .less = io_timer_cmp,
> +               .swp = io_timer_swp,
> +       };
>
>         spin_lock(&clock->timer_lock);
>
> @@ -26,11 +39,11 @@ void bch2_io_timer_add(struct io_clock *clock, struct io_timer *timer)
>                 return;
>         }
>
> -       for (i = 0; i < clock->timers.used; i++)
> -               if (clock->timers.data[i] == timer)
> +       for (i = 0; i < clock->timers.heap.nr; i++)
> +               if (min_heap_peek(&clock->timers)[i] == timer)
>                         goto out;
>
> -       BUG_ON(!heap_add(&clock->timers, timer, io_timer_cmp, NULL));
> +       BUG_ON(!min_heap_push(&clock->timers, &timer, &callbacks, NULL));
>  out:
>         spin_unlock(&clock->timer_lock);
>  }
> @@ -38,12 +51,16 @@ void bch2_io_timer_add(struct io_clock *clock, struct io_timer *timer)
>  void bch2_io_timer_del(struct io_clock *clock, struct io_timer *timer)
>  {
>         size_t i;
> +       const struct min_heap_callbacks callbacks = {
> +               .less = io_timer_cmp,
> +               .swp = io_timer_swp,
> +       };
>
>         spin_lock(&clock->timer_lock);
>
> -       for (i = 0; i < clock->timers.used; i++)
> -               if (clock->timers.data[i] == timer) {
> -                       heap_del(&clock->timers, i, io_timer_cmp, NULL);
> +       for (i = 0; i < clock->timers.heap.nr; i++)
> +               if (min_heap_peek(&clock->timers)[i] == timer) {
> +                       min_heap_del(&clock->timers, i, &callbacks, NULL);
>                         break;
>                 }
>
> @@ -131,12 +148,16 @@ static struct io_timer *get_expired_timer(struct io_clock *clock,
>                                           unsigned long now)
>  {
>         struct io_timer *ret = NULL;
> +       const struct min_heap_callbacks callbacks = {
> +               .less = io_timer_cmp,
> +               .swp = io_timer_swp,
> +       };
>
>         spin_lock(&clock->timer_lock);
>
> -       if (clock->timers.used &&
> -           time_after_eq(now, clock->timers.data[0]->expire))
> -               heap_pop(&clock->timers, ret, io_timer_cmp, NULL);
> +       if (clock->timers.heap.nr &&
> +           time_after_eq(now, min_heap_peek(&clock->timers)[0]->expire)) {
> +               ret = min_heap_peek(&clock->timers)[0];
> +               min_heap_pop(&clock->timers, &callbacks, NULL);
> +       }
>
>         spin_unlock(&clock->timer_lock);
>
> @@ -161,10 +182,10 @@ void bch2_io_timers_to_text(struct printbuf *out, struct io_clock *clock)
>         spin_lock(&clock->timer_lock);
>         now = atomic64_read(&clock->now);
>
> -       for (i = 0; i < clock->timers.used; i++)
> +       for (i = 0; i < clock->timers.heap.nr; i++)
>                 prt_printf(out, "%ps:\t%li\n",
> -                      clock->timers.data[i]->fn,
> -                      clock->timers.data[i]->expire - now);
> +                      min_heap_peek(&clock->timers)[i]->fn,
> +                      min_heap_peek(&clock->timers)[i]->expire - now);
>         spin_unlock(&clock->timer_lock);
>         --out->atomic;
>  }
> diff --git a/fs/bcachefs/clock_types.h b/fs/bcachefs/clock_types.h
> index 5fae0012d808..b02b24b9d74f 100644
> --- a/fs/bcachefs/clock_types.h
> +++ b/fs/bcachefs/clock_types.h
> @@ -23,7 +23,7 @@ struct io_timer {
>  /* Amount to buffer up on a percpu counter */
>  #define IO_CLOCK_PCPU_SECTORS  128
>
> -typedef HEAP(struct io_timer *)        io_timer_heap;
> +typedef MIN_HEAP(struct io_timer *, io_timer_heap) io_timer_heap;
>
>  struct io_clock {
>         atomic64_t              now;
> diff --git a/fs/bcachefs/ec.c b/fs/bcachefs/ec.c
> index b98e2c2b8bf0..7b6a31237503 100644
> --- a/fs/bcachefs/ec.c
> +++ b/fs/bcachefs/ec.c
> @@ -860,14 +860,14 @@ static int __ec_stripe_mem_alloc(struct bch_fs *c, size_t idx, gfp_t gfp)
>  {
>         ec_stripes_heap n, *h = &c->ec_stripes_heap;
>
> -       if (idx >= h->size) {
> +       if (idx >= h->heap.size) {
>                 if (!init_heap(&n, max(1024UL, roundup_pow_of_two(idx + 1)), gfp))
>                         return -BCH_ERR_ENOMEM_ec_stripe_mem_alloc;
>
>                 mutex_lock(&c->ec_stripes_heap_lock);
> -               if (n.size > h->size) {
> -                       memcpy(n.data, h->data, h->used * sizeof(h->data[0]));
> -                       n.used = h->used;
> +               if (n.heap.size > h->heap.size) {
> +                       memcpy(min_heap_peek(&n), min_heap_peek(h), h->heap.nr * sizeof(*min_heap_peek(h)));
> +                       n.heap.nr = h->heap.nr;
>                         swap(*h, n);
>                 }
>                 mutex_unlock(&c->ec_stripes_heap_lock);
> @@ -958,20 +958,21 @@ static u64 stripe_idx_to_delete(struct bch_fs *c)
>
>         lockdep_assert_held(&c->ec_stripes_heap_lock);
>
> -       if (h->used &&
> -           h->data[0].blocks_nonempty == 0 &&
> -           !bch2_stripe_is_open(c, h->data[0].idx))
> -               return h->data[0].idx;
> +       if (h->heap.nr &&
> +           min_heap_peek(h)->blocks_nonempty == 0 &&
> +           !bch2_stripe_is_open(c, min_heap_peek(h)->idx))
> +               return min_heap_peek(h)->idx;
>
>         return 0;
>  }
>
> -static inline int ec_stripes_heap_cmp(ec_stripes_heap *h,
> -                                     struct ec_stripe_heap_entry l,
> -                                     struct ec_stripe_heap_entry r)
> +static inline bool ec_stripes_heap_cmp(const void *l, const void *r, void *args)
>  {
> -       return ((l.blocks_nonempty > r.blocks_nonempty) -
> -               (l.blocks_nonempty < r.blocks_nonempty));
> +       struct ec_stripe_heap_entry *_l = (struct ec_stripe_heap_entry *)l;
> +       struct ec_stripe_heap_entry *_r = (struct ec_stripe_heap_entry *)r;
> +
> +       return _l->blocks_nonempty < _r->blocks_nonempty;
>  }
>
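A note on converting three-way comparators such as the old (l > r) - (l < r) idiom: the faithful boolean 'less' translation is cmp(l, r) < 0, which reduces to a plain < for this comparator. A standalone check of that equivalence, in illustrative C rather than kernel code:

```c
#include <assert.h>
#include <stdbool.h>

/* Old-style three-way comparator: negative, zero, or positive. */
static int cmp3(unsigned int a, unsigned int b)
{
	return (a > b) - (a < b);
}

/* Boolean 'less' expected by a min-heap: true iff a sorts before b. */
static bool less(unsigned int a, unsigned int b)
{
	return cmp3(a, b) < 0;	/* identical to a < b for this comparator */
}
```

The strictness matters: equal elements must compare as not-less, or sift loops can fail to terminate in the worst case.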
>  static inline void ec_stripes_heap_set_backpointer(ec_stripes_heap *h,
> @@ -979,7 +980,21 @@ static inline void ec_stripes_heap_set_backpointer(ec_stripes_heap *h,
>  {
>         struct bch_fs *c = container_of(h, struct bch_fs, ec_stripes_heap);
>
> -       genradix_ptr(&c->stripes, h->data[i].idx)->heap_idx = i;
> +       genradix_ptr(&c->stripes, min_heap_peek(h)[i].idx)->heap_idx = i;
> +}
> +
> +static inline void ec_stripes_heap_swap(void *l, void *r, void *h)
> +{
> +       struct ec_stripe_heap_entry *_l = (struct ec_stripe_heap_entry *)l;
> +       struct ec_stripe_heap_entry *_r = (struct ec_stripe_heap_entry *)r;
> +       ec_stripes_heap *_h = (ec_stripes_heap *)h;
> +       size_t i = _l - min_heap_peek(_h);
> +       size_t j = _r - min_heap_peek(_h);
> +
> +       swap(*_l, *_r);
> +
> +       ec_stripes_heap_set_backpointer(_h, i);
> +       ec_stripes_heap_set_backpointer(_h, j);
>  }
>
>  static void heap_verify_backpointer(struct bch_fs *c, size_t idx)
> @@ -987,34 +1002,43 @@ static void heap_verify_backpointer(struct bch_fs *c, size_t idx)
>         ec_stripes_heap *h = &c->ec_stripes_heap;
>         struct stripe *m = genradix_ptr(&c->stripes, idx);
>
> -       BUG_ON(m->heap_idx >= h->used);
> -       BUG_ON(h->data[m->heap_idx].idx != idx);
> +       BUG_ON(m->heap_idx >= h->heap.nr);
> +       BUG_ON(min_heap_peek(h)[m->heap_idx].idx != idx);
>  }
>
>  void bch2_stripes_heap_del(struct bch_fs *c,
>                            struct stripe *m, size_t idx)
>  {
> +       const struct min_heap_callbacks callbacks = {
> +               .less = ec_stripes_heap_cmp,
> +               .swp = ec_stripes_heap_swap,
> +       };
> +
>         mutex_lock(&c->ec_stripes_heap_lock);
>         heap_verify_backpointer(c, idx);
>
> -       heap_del(&c->ec_stripes_heap, m->heap_idx,
> -                ec_stripes_heap_cmp,
> -                ec_stripes_heap_set_backpointer);
> +       min_heap_del(&c->ec_stripes_heap, m->heap_idx, &callbacks, &c->ec_stripes_heap);
>         mutex_unlock(&c->ec_stripes_heap_lock);
>  }
>
>  void bch2_stripes_heap_insert(struct bch_fs *c,
>                               struct stripe *m, size_t idx)
>  {
> +       const struct min_heap_callbacks callbacks = {
> +               .less = ec_stripes_heap_cmp,
> +               .swp = ec_stripes_heap_swap,
> +       };
> +
>         mutex_lock(&c->ec_stripes_heap_lock);
> -       BUG_ON(heap_full(&c->ec_stripes_heap));
> +       BUG_ON(min_heap_full(&c->ec_stripes_heap));
>
> -       heap_add(&c->ec_stripes_heap, ((struct ec_stripe_heap_entry) {
> +       genradix_ptr(&c->stripes, idx)->heap_idx = c->ec_stripes_heap.heap.nr;
> +       min_heap_push(&c->ec_stripes_heap, &((struct ec_stripe_heap_entry) {
>                         .idx = idx,
>                         .blocks_nonempty = m->blocks_nonempty,
>                 }),
> -                ec_stripes_heap_cmp,
> -                ec_stripes_heap_set_backpointer);
> +                &callbacks,
> +                &c->ec_stripes_heap);
>
>         heap_verify_backpointer(c, idx);
>         mutex_unlock(&c->ec_stripes_heap_lock);
> @@ -1026,17 +1050,20 @@ void bch2_stripes_heap_update(struct bch_fs *c,
>         ec_stripes_heap *h = &c->ec_stripes_heap;
>         bool do_deletes;
>         size_t i;
> +       const struct min_heap_callbacks callbacks = {
> +               .less = ec_stripes_heap_cmp,
> +               .swp = ec_stripes_heap_swap,
> +       };
>
>         mutex_lock(&c->ec_stripes_heap_lock);
>         heap_verify_backpointer(c, idx);
>
> -       h->data[m->heap_idx].blocks_nonempty = m->blocks_nonempty;
> +       min_heap_peek(h)[m->heap_idx].blocks_nonempty = m->blocks_nonempty;
>
>         i = m->heap_idx;
> -       heap_sift_up(h,   i, ec_stripes_heap_cmp,
> -                    ec_stripes_heap_set_backpointer);
> -       heap_sift_down(h, i, ec_stripes_heap_cmp,
> -                      ec_stripes_heap_set_backpointer);
> +
> +       min_heap_sift_up(h,     i, &callbacks, &c->ec_stripes_heap);
> +       min_heapify(h, i, &callbacks, &c->ec_stripes_heap);
>
>         heap_verify_backpointer(c, idx);
>
> @@ -1828,12 +1855,12 @@ static s64 get_existing_stripe(struct bch_fs *c,
>                 return -1;
>
>         mutex_lock(&c->ec_stripes_heap_lock);
> -       for (heap_idx = 0; heap_idx < h->used; heap_idx++) {
> +       for (heap_idx = 0; heap_idx < h->heap.nr; heap_idx++) {
>                 /* No blocks worth reusing, stripe will just be deleted: */
> -               if (!h->data[heap_idx].blocks_nonempty)
> +               if (!min_heap_peek(h)[heap_idx].blocks_nonempty)
>                         continue;
>
> -               stripe_idx = h->data[heap_idx].idx;
> +               stripe_idx = min_heap_peek(h)[heap_idx].idx;
>
>                 m = genradix_ptr(&c->stripes, stripe_idx);
>
> @@ -2159,14 +2186,14 @@ void bch2_stripes_heap_to_text(struct printbuf *out, struct bch_fs *c)
>         size_t i;
>
>         mutex_lock(&c->ec_stripes_heap_lock);
> -       for (i = 0; i < min_t(size_t, h->used, 50); i++) {
> -               m = genradix_ptr(&c->stripes, h->data[i].idx);
> +       for (i = 0; i < min_t(size_t, h->heap.nr, 50); i++) {
> +               m = genradix_ptr(&c->stripes, min_heap_peek(h)[i].idx);
>
> -               prt_printf(out, "%zu %u/%u+%u", h->data[i].idx,
> -                      h->data[i].blocks_nonempty,
> +               prt_printf(out, "%zu %u/%u+%u", min_heap_peek(h)[i].idx,
> +                      min_heap_peek(h)[i].blocks_nonempty,
>                        m->nr_blocks - m->nr_redundant,
>                        m->nr_redundant);
> -               if (bch2_stripe_is_open(c, h->data[i].idx))
> +               if (bch2_stripe_is_open(c, min_heap_peek(h)[i].idx))
>                         prt_str(out, " open");
>                 prt_newline(out);
>         }
> diff --git a/fs/bcachefs/ec_types.h b/fs/bcachefs/ec_types.h
> index 976426da3a12..2ed67431a81c 100644
> --- a/fs/bcachefs/ec_types.h
> +++ b/fs/bcachefs/ec_types.h
> @@ -36,6 +36,6 @@ struct ec_stripe_heap_entry {
>         unsigned                blocks_nonempty;
>  };
>
> -typedef HEAP(struct ec_stripe_heap_entry) ec_stripes_heap;
> +typedef MIN_HEAP(struct ec_stripe_heap_entry, ec_stripes_heap) ec_stripes_heap;
>
>  #endif /* _BCACHEFS_EC_TYPES_H */
> diff --git a/fs/bcachefs/util.h b/fs/bcachefs/util.h
> index 7ffbddb80400..c599dc5276ac 100644
> --- a/fs/bcachefs/util.h
> +++ b/fs/bcachefs/util.h
> @@ -8,6 +8,7 @@
>  #include <linux/errno.h>
>  #include <linux/freezer.h>
>  #include <linux/kernel.h>
> +#include <linux/min_heap.h>
>  #include <linux/sched/clock.h>
>  #include <linux/llist.h>
>  #include <linux/log2.h>
> @@ -54,134 +55,20 @@ static inline size_t buf_pages(void *p, size_t len)
>                             PAGE_SIZE);
>  }
>
> -#define HEAP(type)                                                     \
> -struct {                                                               \
> -       size_t size, used;                                              \
> -       type *data;                                                     \
> -}
> -
> -#define DECLARE_HEAP(type, name) HEAP(type) name
> -
>  #define init_heap(heap, _size, gfp)                                    \
>  ({                                                                     \
> -       (heap)->used = 0;                                               \
> -       (heap)->size = (_size);                                         \
> -       (heap)->data = kvmalloc((heap)->size * sizeof((heap)->data[0]),\
> -                                (gfp));                                \
> -})
> -
> -#define free_heap(heap)                                                        \
> -do {                                                                   \
> -       kvfree((heap)->data);                                           \
> -       (heap)->data = NULL;                                            \
> -} while (0)
> -
> -#define heap_set_backpointer(h, i, _fn)                                        \
> -do {                                                                   \
> -       void (*fn)(typeof(h), size_t) = _fn;                            \
> -       if (fn)                                                         \
> -               fn(h, i);                                               \
> -} while (0)
> -
> -#define heap_swap(h, i, j, set_backpointer)                            \
> -do {                                                                   \
> -       swap((h)->data[i], (h)->data[j]);                               \
> -       heap_set_backpointer(h, i, set_backpointer);                    \
> -       heap_set_backpointer(h, j, set_backpointer);                    \
> -} while (0)
> -
> -#define heap_peek(h)                                                   \
> -({                                                                     \
> -       EBUG_ON(!(h)->used);                                            \
> -       (h)->data[0];                                                   \
> -})
> -
> -#define heap_full(h)   ((h)->used == (h)->size)
> -
> -#define heap_sift_down(h, i, cmp, set_backpointer)                     \
> -do {                                                                   \
> -       size_t _c, _j = i;                                              \
> -                                                                       \
> -       for (; _j * 2 + 1 < (h)->used; _j = _c) {                       \
> -               _c = _j * 2 + 1;                                        \
> -               if (_c + 1 < (h)->used &&                               \
> -                   cmp(h, (h)->data[_c], (h)->data[_c + 1]) >= 0)      \
> -                       _c++;                                           \
> -                                                                       \
> -               if (cmp(h, (h)->data[_c], (h)->data[_j]) >= 0)          \
> -                       break;                                          \
> -               heap_swap(h, _c, _j, set_backpointer);                  \
> -       }                                                               \
> -} while (0)
> -
> -#define heap_sift_up(h, i, cmp, set_backpointer)                       \
> -do {                                                                   \
> -       while (i) {                                                     \
> -               size_t p = (i - 1) / 2;                                 \
> -               if (cmp(h, (h)->data[i], (h)->data[p]) >= 0)            \
> -                       break;                                          \
> -               heap_swap(h, i, p, set_backpointer);                    \
> -               i = p;                                                  \
> -       }                                                               \
> -} while (0)
> -
> -#define __heap_add(h, d, cmp, set_backpointer)                         \
> -({                                                                     \
> -       size_t _i = (h)->used++;                                        \
> -       (h)->data[_i] = d;                                              \
> -       heap_set_backpointer(h, _i, set_backpointer);                   \
> -                                                                       \
> -       heap_sift_up(h, _i, cmp, set_backpointer);                      \
> -       _i;                                                             \
> -})
> -
> -#define heap_add(h, d, cmp, set_backpointer)                           \
> -({                                                                     \
> -       bool _r = !heap_full(h);                                        \
> -       if (_r)                                                         \
> -               __heap_add(h, d, cmp, set_backpointer);                 \
> -       _r;                                                             \
> +       void *data = kvmalloc(_size * sizeof(*min_heap_peek(heap)), (gfp));\
> +       min_heap_init(heap, data, _size);                               \
> +       min_heap_peek(heap);                                            \
>  })
>
> -#define heap_add_or_replace(h, new, cmp, set_backpointer)              \
> -do {                                                                   \
> -       if (!heap_add(h, new, cmp, set_backpointer) &&                  \
> -           cmp(h, new, heap_peek(h)) >= 0) {                           \
> -               (h)->data[0] = new;                                     \
> -               heap_set_backpointer(h, 0, set_backpointer);            \
> -               heap_sift_down(h, 0, cmp, set_backpointer);             \
> -       }                                                               \
> -} while (0)
>
> -#define heap_del(h, i, cmp, set_backpointer)                           \
> +#define free_heap(_heap)                                                       \
>  do {                                                                   \
> -       size_t _i = (i);                                                \
> -                                                                       \
> -       BUG_ON(_i >= (h)->used);                                        \
> -       (h)->used--;                                                    \
> -       if ((_i) < (h)->used) {                                         \
> -               heap_swap(h, _i, (h)->used, set_backpointer);           \
> -               heap_sift_up(h, _i, cmp, set_backpointer);              \
> -               heap_sift_down(h, _i, cmp, set_backpointer);            \
> -       }                                                               \
> +       kvfree((_heap)->heap.data);                                             \
> +       (_heap)->heap.data = NULL;                                              \
>  } while (0)
>
> -#define heap_pop(h, d, cmp, set_backpointer)                           \
> -({                                                                     \
> -       bool _r = (h)->used;                                            \
> -       if (_r) {                                                       \
> -               (d) = (h)->data[0];                                     \
> -               heap_del(h, 0, cmp, set_backpointer);                   \
> -       }                                                               \
> -       _r;                                                             \
> -})
> -
> -#define heap_resort(heap, cmp, set_backpointer)                                \
> -do {                                                                   \
> -       ssize_t _i;                                                     \
> -       for (_i = (ssize_t) (heap)->used / 2 -  1; _i >= 0; --_i)       \
> -               heap_sift_down(heap, _i, cmp, set_backpointer);         \
> -} while (0)
>
>  #define ANYSINT_MAX(t)                                                 \
>         ((((t) 1 << (sizeof(t) * 8 - 2)) - (t) 1) * (t) 2 + (t) 1)
> --
> 2.34.1
>

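Regarding the __always_unused question above: the kernel's <linux/compiler_attributes.h> defines it as __attribute__((__unused__)), which suppresses unused-parameter warnings without affecting code generation. A userspace sketch of the pattern, where the macro definition is a stand-in for the kernel header:

```c
#include <assert.h>
#include <stdbool.h>

/* Userspace stand-in for the kernel's __always_unused macro. */
#define __always_unused __attribute__((__unused__))

struct item { int key; };

/* 'args' is required by the callback signature but unused by this
 * comparator; the attribute documents that fact and keeps the build
 * clean even under -Wunused-parameter -Werror. */
static bool item_less(const void *l, const void *r,
		      void *args __always_unused)
{
	const struct item *a = l;
	const struct item *b = r;

	return a->key < b->key;
}
```

Callbacks that do consume args (like ec_stripes_heap_swap above, which uses it to reach the heap for backpointer updates) simply omit the attribute.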

* Re: [PATCH 08/13] lib min_heap: Add args for min_heap_callbacks
  2024-03-19 18:00 ` [PATCH 08/13] lib min_heap: Add args for min_heap_callbacks Kuan-Wei Chiu
@ 2024-03-19 20:05   ` Ian Rogers
  2024-03-20  2:49     ` Kuan-Wei Chiu
  0 siblings, 1 reply; 33+ messages in thread
From: Ian Rogers @ 2024-03-19 20:05 UTC (permalink / raw)
  To: Kuan-Wei Chiu
  Cc: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung,
	akpm, bfoster, mark.rutland, alexander.shishkin, jolsa,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel

On Tue, Mar 19, 2024 at 11:00 AM Kuan-Wei Chiu <visitorckw@gmail.com> wrote:
>
> Add a third parameter 'args' for the 'less' and 'swp' functions in the
> 'struct min_heap_callbacks'. This additional parameter allows these
> comparison and swap functions to handle extra arguments when necessary.
>
> Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>

I've no objection to this but I don't see it used in your changes -
that may be my fault :-). Perhaps hold off or add a test with args
being non-null?

Thanks,
Ian

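As one hypothetical use of the new 'args' parameter: a comparator that orders indices by keys held in an external table, with the table passed through args instead of a global. An illustrative, self-contained C sketch (not the kernel API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Comparator that ranks indices by the keys they refer to; the key
 * table travels through 'args' rather than a global variable. */
static bool idx_less(const void *l, const void *r, void *args)
{
	const int *keys = args;

	return keys[*(const size_t *)l] < keys[*(const size_t *)r];
}

/* Minimal selection of the smallest element via the callback,
 * standing in for a full heap build. */
static size_t min_index(const size_t *idx, size_t n, int *keys)
{
	size_t best = idx[0];

	for (size_t i = 1; i < n; i++)
		if (idx_less(&idx[i], &best, keys))
			best = idx[i];
	return best;
}
```

This is the kind of non-NULL-args case a test could exercise; without the extra parameter, the comparator would need thread-local or global state to reach the key table.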
> ---
>  drivers/md/dm-vdo/repair.c     | 10 +++----
>  drivers/md/dm-vdo/slab-depot.c |  8 +++---
>  include/linux/min_heap.h       | 51 +++++++++++++++++-----------------
>  kernel/events/core.c           | 10 +++----
>  lib/test_min_heap.c            | 26 ++++++++---------
>  5 files changed, 53 insertions(+), 52 deletions(-)
>
> diff --git a/drivers/md/dm-vdo/repair.c b/drivers/md/dm-vdo/repair.c
> index 7663fa2098f4..528fa100b410 100644
> --- a/drivers/md/dm-vdo/repair.c
> +++ b/drivers/md/dm-vdo/repair.c
> @@ -137,7 +137,7 @@ struct repair_completion {
>   * to sort by slot while still ensuring we replay all entries with the same slot in the exact order
>   * as they appeared in the journal.
>   */
> -static bool mapping_is_less_than(const void *item1, const void *item2)
> +static bool mapping_is_less_than(const void *item1, const void *item2, void *args)
>  {
>         const struct numbered_block_mapping *mapping1 =
>                 (const struct numbered_block_mapping *) item1;
> @@ -156,7 +156,7 @@ static bool mapping_is_less_than(const void *item1, const void *item2)
>         return 0;
>  }
>
> -static void swap_mappings(void *item1, void *item2)
> +static void swap_mappings(void *item1, void *item2, void *args)
>  {
>         struct numbered_block_mapping *mapping1 = item1;
>         struct numbered_block_mapping *mapping2 = item2;
> @@ -182,8 +182,8 @@ static struct numbered_block_mapping *sort_next_heap_element(struct repair_compl
>          * restore the heap invariant, and return a pointer to the popped element.
>          */
>         last = &repair->entries[--heap->heap.nr];
> -       swap_mappings(heap->heap.data, last);
> -       min_heapify(heap, 0, &repair_min_heap);
> +       swap_mappings(heap->heap.data, last, NULL);
> +       min_heapify(heap, 0, &repair_min_heap, NULL);
>         return last;
>  }
>
> @@ -1121,7 +1121,7 @@ static void recover_block_map(struct vdo_completion *completion)
>         repair->replay_heap.heap.data = repair->entries;
>         repair->replay_heap.heap.nr = repair->block_map_entry_count;
>         repair->replay_heap.heap.size = repair->block_map_entry_count;
> -       min_heapify_all(&repair->replay_heap, &repair_min_heap);
> +       min_heapify_all(&repair->replay_heap, &repair_min_heap, NULL);
>
>         vdo_log_info("Replaying %zu recovery entries into block map",
>                      repair->block_map_entry_count);
> diff --git a/drivers/md/dm-vdo/slab-depot.c b/drivers/md/dm-vdo/slab-depot.c
> index 3309480170c3..b8c41d7ccde0 100644
> --- a/drivers/md/dm-vdo/slab-depot.c
> +++ b/drivers/md/dm-vdo/slab-depot.c
> @@ -3288,7 +3288,7 @@ int vdo_release_block_reference(struct block_allocator *allocator,
>   * Thus, the ordering is reversed from the usual sense since min_heap returns smaller elements
>   * before larger ones.
>   */
> -static bool slab_status_is_less_than(const void *item1, const void *item2)
> +static bool slab_status_is_less_than(const void *item1, const void *item2, void *args)
>  {
>         const struct slab_status *info1 = item1;
>         const struct slab_status *info2 = item2;
> @@ -3300,7 +3300,7 @@ static bool slab_status_is_less_than(const void *item1, const void *item2)
>         return info1->slab_number < info2->slab_number;
>  }
>
> -static void swap_slab_statuses(void *item1, void *item2)
> +static void swap_slab_statuses(void *item1, void *item2, void *args)
>  {
>         struct slab_status *info1 = item1;
>         struct slab_status *info2 = item2;
> @@ -3523,7 +3523,7 @@ static int __must_check vdo_prepare_slabs_for_allocation(struct block_allocator
>         heap.heap.data = slab_statuses;
>         heap.heap.nr = allocator->slab_count;
>         heap.heap.size = allocator->slab_count;
> -       min_heapify_all(&heap, &slab_status_min_heap);
> +       min_heapify_all(&heap, &slab_status_min_heap, NULL);
>
>         while (heap.heap.nr > 0) {
>                 bool high_priority;
> @@ -3531,7 +3531,7 @@ static int __must_check vdo_prepare_slabs_for_allocation(struct block_allocator
>                 struct slab_journal *journal;
>
>                 current_slab_status = slab_statuses[0];
> -               min_heap_pop(&heap, &slab_status_min_heap);
> +               min_heap_pop(&heap, &slab_status_min_heap, NULL);
>                 slab = depot->slabs[current_slab_status.slab_number];
>
>                 if ((depot->load_type == VDO_SLAB_DEPOT_REBUILD_LOAD) ||
> diff --git a/include/linux/min_heap.h b/include/linux/min_heap.h
> index b1d874f4d536..97d8ba5c32e6 100644
> --- a/include/linux/min_heap.h
> +++ b/include/linux/min_heap.h
> @@ -40,8 +40,8 @@ struct _name {                                \
>   * @swp: Swap elements function.
>   */
>  struct min_heap_callbacks {
> -       bool (*less)(const void *lhs, const void *rhs);
> -       void (*swp)(void *lhs, void *rhs);
> +       bool (*less)(const void *lhs, const void *rhs, void *args);
> +       void (*swp)(void *lhs, void *rhs, void *args);
>  };
>
>  /* Initialize a min-heap. */
> @@ -79,7 +79,7 @@ bool __min_heap_full(struct __min_heap *heap)
>  /* Sift the element at pos down the heap. */
>  static __always_inline
>  void __min_heapify(struct __min_heap *heap, int pos, size_t elem_size,
> -               const struct min_heap_callbacks *func)
> +               const struct min_heap_callbacks *func, void *args)
>  {
>         void *left, *right;
>         void *data = heap->data;
> @@ -92,7 +92,7 @@ void __min_heapify(struct __min_heap *heap, int pos, size_t elem_size,
>                         break;
>                 left = data + (i * 2 + 1) * elem_size;
>                 right = data + (i * 2 + 2) * elem_size;
> -               i = func->less(left, right) ? i * 2 + 1 : i * 2 + 2;
> +               i = func->less(left, right, args) ? i * 2 + 1 : i * 2 + 2;
>         }
>
>         /* Special case for the last leaf with no sibling. */
> @@ -100,38 +100,38 @@ void __min_heapify(struct __min_heap *heap, int pos, size_t elem_size,
>                 i = i * 2 + 1;
>
>         /* Backtrack to the correct location. */
> -       while (i != pos && func->less(root, data + i * elem_size))
> +       while (i != pos && func->less(root, data + i * elem_size, args))
>                 i = (i - 1) / 2;
>
>         /* Shift the element into its correct place. */
>         j = i;
>         while (i != pos) {
>                 i = (i - 1) / 2;
> -               func->swp(data + i * elem_size, data + j * elem_size);
> +               func->swp(data + i * elem_size, data + j * elem_size, args);
>         }
>  }
>
> -#define min_heapify(_heap, _pos, _func)        \
> -       __min_heapify(&(_heap)->heap, _pos, __minheap_obj_size(_heap), _func)
> +#define min_heapify(_heap, _pos, _func, _args) \
> +       __min_heapify(&(_heap)->heap, _pos, __minheap_obj_size(_heap), _func, _args)
>
>  /* Floyd's approach to heapification that is O(nr). */
>  static __always_inline
>  void __min_heapify_all(struct __min_heap *heap, size_t elem_size,
> -               const struct min_heap_callbacks *func)
> +               const struct min_heap_callbacks *func, void *args)
>  {
>         int i;
>
>         for (i = heap->nr / 2 - 1; i >= 0; i--)
> -               __min_heapify(heap, i, elem_size, func);
> +               __min_heapify(heap, i, elem_size, func, args);
>  }
>
> -#define min_heapify_all(_heap, _func)  \
> -       __min_heapify_all(&(_heap)->heap, __minheap_obj_size(_heap), _func)
> +#define min_heapify_all(_heap, _func, _args)   \
> +       __min_heapify_all(&(_heap)->heap, __minheap_obj_size(_heap), _func, _args)
>
>  /* Remove minimum element from the heap, O(log2(nr)). */
>  static __always_inline
>  void __min_heap_pop(struct __min_heap *heap, size_t elem_size,
> -               const struct min_heap_callbacks *func)
> +               const struct min_heap_callbacks *func, void *args)
>  {
>         void *data = heap->data;
>
> @@ -141,11 +141,11 @@ void __min_heap_pop(struct __min_heap *heap, size_t elem_size,
>         /* Place last element at the root (position 0) and then sift down. */
>         heap->nr--;
>         memcpy(data, data + (heap->nr * elem_size), elem_size);
> -       __min_heapify(heap, 0, elem_size, func);
> +       __min_heapify(heap, 0, elem_size, func, args);
>  }
>
> -#define min_heap_pop(_heap, _func)     \
> -       __min_heap_pop(&(_heap)->heap, __minheap_obj_size(_heap), _func)
> +#define min_heap_pop(_heap, _func, _args)      \
> +       __min_heap_pop(&(_heap)->heap, __minheap_obj_size(_heap), _func, _args)
>
>  /*
>   * Remove the minimum element and then push the given element. The
> @@ -155,19 +155,20 @@ void __min_heap_pop(struct __min_heap *heap, size_t elem_size,
>  static __always_inline
>  void __min_heap_pop_push(struct __min_heap *heap,
>                 const void *element, size_t elem_size,
> -               const struct min_heap_callbacks *func)
> +               const struct min_heap_callbacks *func,
> +               void *args)
>  {
>         memcpy(heap->data, element, elem_size);
> -       __min_heapify(heap, 0, elem_size, func);
> +       __min_heapify(heap, 0, elem_size, func, args);
>  }
>
> -#define min_heap_pop_push(_heap, _element, _func)      \
> -       __min_heap_pop_push(&(_heap)->heap, _element, __minheap_obj_size(_heap), _func)
> +#define min_heap_pop_push(_heap, _element, _func, _args)       \
> +       __min_heap_pop_push(&(_heap)->heap, _element, __minheap_obj_size(_heap), _func, _args)
>
>  /* Push an element on to the heap, O(log2(nr)). */
>  static __always_inline
>  void __min_heap_push(struct __min_heap *heap, const void *element, size_t elem_size,
> -               const struct min_heap_callbacks *func)
> +               const struct min_heap_callbacks *func, void *args)
>  {
>         void *data = heap->data;
>         void *child, *parent;
> @@ -185,13 +186,13 @@ void __min_heap_push(struct __min_heap *heap, const void *element, size_t elem_s
>         for (; pos > 0; pos = (pos - 1) / 2) {
>                 child = data + (pos * elem_size);
>                 parent = data + ((pos - 1) / 2) * elem_size;
> -               if (func->less(parent, child))
> +               if (func->less(parent, child, args))
>                         break;
> -               func->swp(parent, child);
> +               func->swp(parent, child, args);
>         }
>  }
>
> -#define min_heap_push(_heap, _element, _func)  \
> -       __min_heap_push(&(_heap)->heap, _element, __minheap_obj_size(_heap), _func)
> +#define min_heap_push(_heap, _element, _func, _args)   \
> +       __min_heap_push(&(_heap)->heap, _element, __minheap_obj_size(_heap), _func, _args)
>
>  #endif /* _LINUX_MIN_HEAP_H */
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 065dfaa8b009..f2a9044058ee 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -3683,7 +3683,7 @@ void __perf_event_task_sched_out(struct task_struct *task,
>         perf_cgroup_switch(next);
>  }
>
> -static bool perf_less_group_idx(const void *l, const void *r)
> +static bool perf_less_group_idx(const void *l, const void *r, void *args)
>  {
>         const struct perf_event *le = *(const struct perf_event **)l;
>         const struct perf_event *re = *(const struct perf_event **)r;
> @@ -3691,7 +3691,7 @@ static bool perf_less_group_idx(const void *l, const void *r)
>         return le->group_index < re->group_index;
>  }
>
> -static void swap_ptr(void *l, void *r)
> +static void swap_ptr(void *l, void *r, void *args)
>  {
>         void **lp = l, **rp = r;
>
> @@ -3779,7 +3779,7 @@ static noinline int visit_groups_merge(struct perf_event_context *ctx,
>                 perf_assert_pmu_disabled((*evt)->pmu_ctx->pmu);
>         }
>
> -       min_heapify_all(&event_heap, &perf_min_heap);
> +       min_heapify_all(&event_heap, &perf_min_heap, NULL);
>
>         while (event_heap.heap.nr) {
>                 ret = func(*evt, data);
> @@ -3788,9 +3788,9 @@ static noinline int visit_groups_merge(struct perf_event_context *ctx,
>
>                 *evt = perf_event_groups_next(*evt, pmu);
>                 if (*evt)
> -                       min_heapify(&event_heap, 0, &perf_min_heap);
> +                       min_heapify(&event_heap, 0, &perf_min_heap, NULL);
>                 else
> -                       min_heap_pop(&event_heap, &perf_min_heap);
> +                       min_heap_pop(&event_heap, &perf_min_heap, NULL);
>         }
>
>         return 0;
> diff --git a/lib/test_min_heap.c b/lib/test_min_heap.c
> index af2e446034d8..b8859d17a19c 100644
> --- a/lib/test_min_heap.c
> +++ b/lib/test_min_heap.c
> @@ -13,17 +13,17 @@
>
>  MIN_HEAP(int, min_heap_test);
>
> -static __init bool less_than(const void *lhs, const void *rhs)
> +static __init bool less_than(const void *lhs, const void *rhs, void *args)
>  {
>         return *(int *)lhs < *(int *)rhs;
>  }
>
> -static __init bool greater_than(const void *lhs, const void *rhs)
> +static __init bool greater_than(const void *lhs, const void *rhs, void *args)
>  {
>         return *(int *)lhs > *(int *)rhs;
>  }
>
> -static __init void swap_ints(void *lhs, void *rhs)
> +static __init void swap_ints(void *lhs, void *rhs, void *args)
>  {
>         int temp = *(int *)lhs;
>
> @@ -40,7 +40,7 @@ static __init int pop_verify_heap(bool min_heap,
>         int last;
>
>         last = values[0];
> -       min_heap_pop(heap, funcs);
> +       min_heap_pop(heap, funcs, NULL);
>         while (heap->heap.nr > 0) {
>                 if (min_heap) {
>                         if (last > values[0]) {
> @@ -56,7 +56,7 @@ static __init int pop_verify_heap(bool min_heap,
>                         }
>                 }
>                 last = values[0];
> -               min_heap_pop(heap, funcs);
> +               min_heap_pop(heap, funcs, NULL);
>         }
>         return err;
>  }
> @@ -77,7 +77,7 @@ static __init int test_heapify_all(bool min_heap)
>         int i, err;
>
>         /* Test with known set of values. */
> -       min_heapify_all(&heap, &funcs);
> +       min_heapify_all(&heap, &funcs, NULL);
>         err = pop_verify_heap(min_heap, &heap, &funcs);
>
>
> @@ -86,7 +86,7 @@ static __init int test_heapify_all(bool min_heap)
>         for (i = 0; i < heap.heap.nr; i++)
>                 values[i] = get_random_u32();
>
> -       min_heapify_all(&heap, &funcs);
> +       min_heapify_all(&heap, &funcs, NULL);
>         err += pop_verify_heap(min_heap, &heap, &funcs);
>
>         return err;
> @@ -110,14 +110,14 @@ static __init int test_heap_push(bool min_heap)
>
>         /* Test with known set of values copied from data. */
>         for (i = 0; i < ARRAY_SIZE(data); i++)
> -               min_heap_push(&heap, &data[i], &funcs);
> +               min_heap_push(&heap, &data[i], &funcs, NULL);
>
>         err = pop_verify_heap(min_heap, &heap, &funcs);
>
>         /* Test with randomly generated values. */
>         while (heap.heap.nr < heap.heap.size) {
>                 temp = get_random_u32();
> -               min_heap_push(&heap, &temp, &funcs);
> +               min_heap_push(&heap, &temp, &funcs, NULL);
>         }
>         err += pop_verify_heap(min_heap, &heap, &funcs);
>
> @@ -143,22 +143,22 @@ static __init int test_heap_pop_push(bool min_heap)
>         /* Fill values with data to pop and replace. */
>         temp = min_heap ? 0x80000000 : 0x7FFFFFFF;
>         for (i = 0; i < ARRAY_SIZE(data); i++)
> -               min_heap_push(&heap, &temp, &funcs);
> +               min_heap_push(&heap, &temp, &funcs, NULL);
>
>         /* Test with known set of values copied from data. */
>         for (i = 0; i < ARRAY_SIZE(data); i++)
> -               min_heap_pop_push(&heap, &data[i], &funcs);
> +               min_heap_pop_push(&heap, &data[i], &funcs, NULL);
>
>         err = pop_verify_heap(min_heap, &heap, &funcs);
>
>         heap.heap.nr = 0;
>         for (i = 0; i < ARRAY_SIZE(data); i++)
> -               min_heap_push(&heap, &temp, &funcs);
> +               min_heap_push(&heap, &temp, &funcs, NULL);
>
>         /* Test with randomly generated values. */
>         for (i = 0; i < ARRAY_SIZE(data); i++) {
>                 temp = get_random_u32();
> -               min_heap_pop_push(&heap, &temp, &funcs);
> +               min_heap_pop_push(&heap, &temp, &funcs, NULL);
>         }
>         err += pop_verify_heap(min_heap, &heap, &funcs);
>
> --
> 2.34.1
>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 12/13] lib min_heap: Add min_heap_sift_up()
  2024-03-19 18:00 ` [PATCH 12/13] lib min_heap: Add min_heap_sift_up() Kuan-Wei Chiu
@ 2024-03-19 20:12   ` Ian Rogers
  2024-03-20  2:51     ` Kuan-Wei Chiu
  0 siblings, 1 reply; 33+ messages in thread
From: Ian Rogers @ 2024-03-19 20:12 UTC (permalink / raw)
  To: Kuan-Wei Chiu
  Cc: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung,
	akpm, bfoster, mark.rutland, alexander.shishkin, jolsa,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel

On Tue, Mar 19, 2024 at 11:01 AM Kuan-Wei Chiu <visitorckw@gmail.com> wrote:
>
> Add min_heap_sift_up() to sift up the element at index 'idx' in the
> heap.

Normally sift up is used to implement the min heap but isn't part of
the API, e.g. there is a sift up in min_heap_push. Should min_heapify
be renamed to min_heap_sift_down to align with this name?

Thanks,
Ian

> Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
> ---
>  include/linux/min_heap.h | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
>
> diff --git a/include/linux/min_heap.h b/include/linux/min_heap.h
> index ce085137fce7..586965977104 100644
> --- a/include/linux/min_heap.h
> +++ b/include/linux/min_heap.h
> @@ -199,6 +199,26 @@ bool __min_heap_push(struct __min_heap *heap, const void *element, size_t elem_s
>  #define min_heap_push(_heap, _element, _func, _args)   \
>         __min_heap_push(&(_heap)->heap, _element, __minheap_obj_size(_heap), _func, _args)
>
> +/* Sift up ith element from the heap, O(log2(nr)). */
> +static __always_inline
> +void __min_heap_sift_up(struct __min_heap *heap, size_t elem_size, size_t idx,
> +               const struct min_heap_callbacks *func, void *args)
> +{
> +       void *data = heap->data;
> +       size_t parent;
> +
> +       while (idx) {
> +               parent = (idx - 1) / 2;
> +               if (func->less(data + parent * elem_size, data + idx * elem_size, args))
> +                       break;
> +               func->swp(data + parent * elem_size, data + idx * elem_size, args);
> +               idx = parent;
> +       }
> +}
> +
> +#define min_heap_sift_up(_heap, _idx, _func, _args)    \
> +       __min_heap_sift_up(&(_heap)->heap, __minheap_obj_size(_heap), _idx, _func, _args)
> +
>  /* Remove ith element from the heap, O(log2(nr)). */
>  static __always_inline
>  bool __min_heap_del(struct __min_heap *heap, size_t elem_size, size_t idx,
> --
> 2.34.1
>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 02/13] bcache: Fix typo
  2024-03-19 17:59 ` [PATCH 02/13] bcache: Fix typo Kuan-Wei Chiu
@ 2024-03-19 20:13   ` Ian Rogers
  0 siblings, 0 replies; 33+ messages in thread
From: Ian Rogers @ 2024-03-19 20:13 UTC (permalink / raw)
  To: Kuan-Wei Chiu
  Cc: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung,
	akpm, bfoster, mark.rutland, alexander.shishkin, jolsa,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel

On Tue, Mar 19, 2024 at 11:00 AM Kuan-Wei Chiu <visitorckw@gmail.com> wrote:
>
> Replace 'utiility' with 'utility'.
>
> Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>

Reviewed-by: Ian Rogers <irogers@google.com>

Thanks,
Ian

> ---
>  drivers/md/bcache/util.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/md/bcache/util.c b/drivers/md/bcache/util.c
> index ae380bc3992e..410d8cb49e50 100644
> --- a/drivers/md/bcache/util.c
> +++ b/drivers/md/bcache/util.c
> @@ -1,6 +1,6 @@
>  // SPDX-License-Identifier: GPL-2.0
>  /*
> - * random utiility code, for bcache but in theory not specific to bcache
> + * random utility code, for bcache but in theory not specific to bcache
>   *
>   * Copyright 2010, 2011 Kent Overstreet <kent.overstreet@gmail.com>
>   * Copyright 2012 Google, Inc.
> --
> 2.34.1
>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 03/13] bcachefs: Fix typo
  2024-03-19 17:59 ` [PATCH 03/13] bcachefs: " Kuan-Wei Chiu
@ 2024-03-19 20:13   ` Ian Rogers
  0 siblings, 0 replies; 33+ messages in thread
From: Ian Rogers @ 2024-03-19 20:13 UTC (permalink / raw)
  To: Kuan-Wei Chiu
  Cc: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung,
	akpm, bfoster, mark.rutland, alexander.shishkin, jolsa,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel

On Tue, Mar 19, 2024 at 11:00 AM Kuan-Wei Chiu <visitorckw@gmail.com> wrote:
>
> Replace 'utiility' with 'utility'.
>
> Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>

Reviewed-by: Ian Rogers <irogers@google.com>

Thanks,
Ian

> ---
>  fs/bcachefs/util.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/fs/bcachefs/util.c b/fs/bcachefs/util.c
> index 216fadf16928..f5a16ad65424 100644
> --- a/fs/bcachefs/util.c
> +++ b/fs/bcachefs/util.c
> @@ -1,6 +1,6 @@
>  // SPDX-License-Identifier: GPL-2.0
>  /*
> - * random utiility code, for bcache but in theory not specific to bcache
> + * random utility code, for bcache but in theory not specific to bcache
>   *
>   * Copyright 2010, 2011 Kent Overstreet <kent.overstreet@gmail.com>
>   * Copyright 2012 Google, Inc.
> --
> 2.34.1
>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 00/13] treewide: Refactor heap related implementation
  2024-03-19 17:59 [PATCH 00/13] treewide: Refactor heap related implementation Kuan-Wei Chiu
                   ` (12 preceding siblings ...)
  2024-03-19 18:00 ` [PATCH 13/13] bcachefs: Remove heap-related macros and switch to generic min_heap Kuan-Wei Chiu
@ 2024-03-19 22:12 ` Kent Overstreet
  2024-03-20  3:01   ` Kuan-Wei Chiu
  13 siblings, 1 reply; 33+ messages in thread
From: Kent Overstreet @ 2024-03-19 22:12 UTC (permalink / raw)
  To: Kuan-Wei Chiu
  Cc: colyli, msakai, peterz, mingo, acme, namhyung, akpm, bfoster,
	mark.rutland, alexander.shishkin, jolsa, irogers, adrian.hunter,
	jserv, linux-bcache, dm-devel, linux-bcachefs, linux-perf-users,
	linux-kernel

On Wed, Mar 20, 2024 at 01:59:52AM +0800, Kuan-Wei Chiu wrote:
> Hello,
> 
> This patch series focuses on several adjustments related to heap
> implementation. Firstly, a type-safe interface has been added to the
> min_heap, along with the introduction of several new functions to
> enhance its functionality. Additionally, the heap implementation for
> bcache and bcachefs has been replaced with the generic min_heap
> implementation from include/linux. Furthermore, several typos have been
> corrected.
> 
> Previous discussion with Kent Overstreet:
> https://lkml.kernel.org/ioyfizrzq7w7mjrqcadtzsfgpuntowtjdw5pgn4qhvsdp4mqqg@nrlek5vmisbu

Hey, thanks for doing this. Can you post a git repo link? I'll point my
CI at it.

> 
> Regards,
> Kuan-Wei
> 
> Kuan-Wei Chiu (13):
>   perf/core: Fix several typos
>   bcache: Fix typo
>   bcachefs: Fix typo
>   lib min_heap: Add type safe interface
>   lib min_heap: Add min_heap_init()
>   lib min_heap: Add min_heap_peek()
>   lib min_heap: Add min_heap_full()
>   lib min_heap: Add args for min_heap_callbacks
>   lib min_heap: Update min_heap_push() and min_heap_pop() to return bool
>     values
>   bcache: Remove heap-related macros and switch to generic min_heap
>   lib min_heap: Add min_heap_del()
>   lib min_heap: Add min_heap_sift_up()
>   bcachefs: Remove heap-related macros and switch to generic min_heap
> 
>  drivers/md/bcache/alloc.c      |  66 ++++++++----
>  drivers/md/bcache/bcache.h     |   2 +-
>  drivers/md/bcache/bset.c       |  73 ++++++++-----
>  drivers/md/bcache/bset.h       |  38 ++++---
>  drivers/md/bcache/btree.c      |  27 ++++-
>  drivers/md/bcache/extents.c    |  44 ++++----
>  drivers/md/bcache/movinggc.c   |  40 ++++++--
>  drivers/md/bcache/super.c      |  16 +++
>  drivers/md/bcache/sysfs.c      |   3 +
>  drivers/md/bcache/util.c       |   2 +-
>  drivers/md/bcache/util.h       |  81 +--------------
>  drivers/md/dm-vdo/repair.c     |  29 +++---
>  drivers/md/dm-vdo/slab-depot.c |  21 ++--
>  fs/bcachefs/clock.c            |  53 +++++++---
>  fs/bcachefs/clock_types.h      |   2 +-
>  fs/bcachefs/ec.c               |  99 +++++++++++-------
>  fs/bcachefs/ec_types.h         |   2 +-
>  fs/bcachefs/util.c             |   2 +-
>  fs/bcachefs/util.h             | 127 ++---------------------
>  include/linux/min_heap.h       | 180 ++++++++++++++++++++++++++-------
>  kernel/events/core.c           |  53 +++++-----
>  lib/test_min_heap.c            |  75 +++++++-------
>  22 files changed, 565 insertions(+), 470 deletions(-)
> 
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 08/13] lib min_heap: Add args for min_heap_callbacks
  2024-03-19 20:05   ` Ian Rogers
@ 2024-03-20  2:49     ` Kuan-Wei Chiu
  2024-03-20  4:43       ` Ian Rogers
  0 siblings, 1 reply; 33+ messages in thread
From: Kuan-Wei Chiu @ 2024-03-20  2:49 UTC (permalink / raw)
  To: Ian Rogers
  Cc: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung,
	akpm, bfoster, mark.rutland, alexander.shishkin, jolsa,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel

On Tue, Mar 19, 2024 at 01:05:18PM -0700, Ian Rogers wrote:
> On Tue, Mar 19, 2024 at 11:00 AM Kuan-Wei Chiu <visitorckw@gmail.com> wrote:
> >
> > Add a third parameter 'args' for the 'less' and 'swp' functions in the
> > 'struct min_heap_callbacks'. This additional parameter allows these
> > comparison and swap functions to handle extra arguments when necessary.
> >
> > Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
> 
> I've no objection to this but I don't see it used in your changes -
> that may be my fault :-). Perhaps hold off or add a test with args
> being non-null?
>

The parameter will be required by the bcache and bcachefs patches:
one use is bucket_min_cmp() and bucket_max_cmp() in
drivers/md/bcache/alloc.c, and another is ec_stripes_heap_swap()
in fs/bcachefs/ec.c.

Do you have any specific suggestions for adding non-NULL test cases?
I'm unsure what a reasonable way to exercise a non-NULL 'args' in the
tests would be.
> 
> > ---
> >  drivers/md/dm-vdo/repair.c     | 10 +++----
> >  drivers/md/dm-vdo/slab-depot.c |  8 +++---
> >  include/linux/min_heap.h       | 51 +++++++++++++++++-----------------
> >  kernel/events/core.c           | 10 +++----
> >  lib/test_min_heap.c            | 26 ++++++++---------
> >  5 files changed, 53 insertions(+), 52 deletions(-)
> >
> > diff --git a/drivers/md/dm-vdo/repair.c b/drivers/md/dm-vdo/repair.c
> > index 7663fa2098f4..528fa100b410 100644
> > --- a/drivers/md/dm-vdo/repair.c
> > +++ b/drivers/md/dm-vdo/repair.c
> > @@ -137,7 +137,7 @@ struct repair_completion {
> >   * to sort by slot while still ensuring we replay all entries with the same slot in the exact order
> >   * as they appeared in the journal.
> >   */
> > -static bool mapping_is_less_than(const void *item1, const void *item2)
> > +static bool mapping_is_less_than(const void *item1, const void *item2, void *args)
> >  {
> >         const struct numbered_block_mapping *mapping1 =
> >                 (const struct numbered_block_mapping *) item1;
> > @@ -156,7 +156,7 @@ static bool mapping_is_less_than(const void *item1, const void *item2)
> >         return 0;
> >  }
> >
> > -static void swap_mappings(void *item1, void *item2)
> > +static void swap_mappings(void *item1, void *item2, void *args)
> >  {
> >         struct numbered_block_mapping *mapping1 = item1;
> >         struct numbered_block_mapping *mapping2 = item2;
> > @@ -182,8 +182,8 @@ static struct numbered_block_mapping *sort_next_heap_element(struct repair_compl
> >          * restore the heap invariant, and return a pointer to the popped element.
> >          */
> >         last = &repair->entries[--heap->heap.nr];
> > -       swap_mappings(heap->heap.data, last);
> > -       min_heapify(heap, 0, &repair_min_heap);
> > +       swap_mappings(heap->heap.data, last, NULL);
> > +       min_heapify(heap, 0, &repair_min_heap, NULL);
> >         return last;
> >  }
> >
> > @@ -1121,7 +1121,7 @@ static void recover_block_map(struct vdo_completion *completion)
> >         repair->replay_heap.heap.data = repair->entries;
> >         repair->replay_heap.heap.nr = repair->block_map_entry_count;
> >         repair->replay_heap.heap.size = repair->block_map_entry_count;
> > -       min_heapify_all(&repair->replay_heap, &repair_min_heap);
> > +       min_heapify_all(&repair->replay_heap, &repair_min_heap, NULL);
> >
> >         vdo_log_info("Replaying %zu recovery entries into block map",
> >                      repair->block_map_entry_count);
> > diff --git a/drivers/md/dm-vdo/slab-depot.c b/drivers/md/dm-vdo/slab-depot.c
> > index 3309480170c3..b8c41d7ccde0 100644
> > --- a/drivers/md/dm-vdo/slab-depot.c
> > +++ b/drivers/md/dm-vdo/slab-depot.c
> > @@ -3288,7 +3288,7 @@ int vdo_release_block_reference(struct block_allocator *allocator,
> >   * Thus, the ordering is reversed from the usual sense since min_heap returns smaller elements
> >   * before larger ones.
> >   */
> > -static bool slab_status_is_less_than(const void *item1, const void *item2)
> > +static bool slab_status_is_less_than(const void *item1, const void *item2, void *args)
> >  {
> >         const struct slab_status *info1 = item1;
> >         const struct slab_status *info2 = item2;
> > @@ -3300,7 +3300,7 @@ static bool slab_status_is_less_than(const void *item1, const void *item2)
> >         return info1->slab_number < info2->slab_number;
> >  }
> >
> > -static void swap_slab_statuses(void *item1, void *item2)
> > +static void swap_slab_statuses(void *item1, void *item2, void *args)
> >  {
> >         struct slab_status *info1 = item1;
> >         struct slab_status *info2 = item2;
> > @@ -3523,7 +3523,7 @@ static int __must_check vdo_prepare_slabs_for_allocation(struct block_allocator
> >         heap.heap.data = slab_statuses;
> >         heap.heap.nr = allocator->slab_count;
> >         heap.heap.size = allocator->slab_count;
> > -       min_heapify_all(&heap, &slab_status_min_heap);
> > +       min_heapify_all(&heap, &slab_status_min_heap, NULL);
> >
> >         while (heap.heap.nr > 0) {
> >                 bool high_priority;
> > @@ -3531,7 +3531,7 @@ static int __must_check vdo_prepare_slabs_for_allocation(struct block_allocator
> >                 struct slab_journal *journal;
> >
> >                 current_slab_status = slab_statuses[0];
> > -               min_heap_pop(&heap, &slab_status_min_heap);
> > +               min_heap_pop(&heap, &slab_status_min_heap, NULL);
> >                 slab = depot->slabs[current_slab_status.slab_number];
> >
> >                 if ((depot->load_type == VDO_SLAB_DEPOT_REBUILD_LOAD) ||
> > diff --git a/include/linux/min_heap.h b/include/linux/min_heap.h
> > index b1d874f4d536..97d8ba5c32e6 100644
> > --- a/include/linux/min_heap.h
> > +++ b/include/linux/min_heap.h
> > @@ -40,8 +40,8 @@ struct _name {                                \
> >   * @swp: Swap elements function.
> >   */
> >  struct min_heap_callbacks {
> > -       bool (*less)(const void *lhs, const void *rhs);
> > -       void (*swp)(void *lhs, void *rhs);
> > +       bool (*less)(const void *lhs, const void *rhs, void *args);
> > +       void (*swp)(void *lhs, void *rhs, void *args);
> >  };
> >
> >  /* Initialize a min-heap. */
> > @@ -79,7 +79,7 @@ bool __min_heap_full(struct __min_heap *heap)
> >  /* Sift the element at pos down the heap. */
> >  static __always_inline
> >  void __min_heapify(struct __min_heap *heap, int pos, size_t elem_size,
> > -               const struct min_heap_callbacks *func)
> > +               const struct min_heap_callbacks *func, void *args)
> >  {
> >         void *left, *right;
> >         void *data = heap->data;
> > @@ -92,7 +92,7 @@ void __min_heapify(struct __min_heap *heap, int pos, size_t elem_size,
> >                         break;
> >                 left = data + (i * 2 + 1) * elem_size;
> >                 right = data + (i * 2 + 2) * elem_size;
> > -               i = func->less(left, right) ? i * 2 + 1 : i * 2 + 2;
> > +               i = func->less(left, right, args) ? i * 2 + 1 : i * 2 + 2;
> >         }
> >
> >         /* Special case for the last leaf with no sibling. */
> > @@ -100,38 +100,38 @@ void __min_heapify(struct __min_heap *heap, int pos, size_t elem_size,
> >                 i = i * 2 + 1;
> >
> >         /* Backtrack to the correct location. */
> > -       while (i != pos && func->less(root, data + i * elem_size))
> > +       while (i != pos && func->less(root, data + i * elem_size, args))
> >                 i = (i - 1) / 2;
> >
> >         /* Shift the element into its correct place. */
> >         j = i;
> >         while (i != pos) {
> >                 i = (i - 1) / 2;
> > -               func->swp(data + i * elem_size, data + j * elem_size);
> > +               func->swp(data + i * elem_size, data + j * elem_size, args);
> >         }
> >  }
> >
> > -#define min_heapify(_heap, _pos, _func)        \
> > -       __min_heapify(&(_heap)->heap, _pos, __minheap_obj_size(_heap), _func)
> > +#define min_heapify(_heap, _pos, _func, _args) \
> > +       __min_heapify(&(_heap)->heap, _pos, __minheap_obj_size(_heap), _func, _args)
> >
> >  /* Floyd's approach to heapification that is O(nr). */
> >  static __always_inline
> >  void __min_heapify_all(struct __min_heap *heap, size_t elem_size,
> > -               const struct min_heap_callbacks *func)
> > +               const struct min_heap_callbacks *func, void *args)
> >  {
> >         int i;
> >
> >         for (i = heap->nr / 2 - 1; i >= 0; i--)
> > -               __min_heapify(heap, i, elem_size, func);
> > +               __min_heapify(heap, i, elem_size, func, args);
> >  }
> >
> > -#define min_heapify_all(_heap, _func)  \
> > -       __min_heapify_all(&(_heap)->heap, __minheap_obj_size(_heap), _func)
> > +#define min_heapify_all(_heap, _func, _args)   \
> > +       __min_heapify_all(&(_heap)->heap, __minheap_obj_size(_heap), _func, _args)
> >
> >  /* Remove minimum element from the heap, O(log2(nr)). */
> >  static __always_inline
> >  void __min_heap_pop(struct __min_heap *heap, size_t elem_size,
> > -               const struct min_heap_callbacks *func)
> > +               const struct min_heap_callbacks *func, void *args)
> >  {
> >         void *data = heap->data;
> >
> > @@ -141,11 +141,11 @@ void __min_heap_pop(struct __min_heap *heap, size_t elem_size,
> >         /* Place last element at the root (position 0) and then sift down. */
> >         heap->nr--;
> >         memcpy(data, data + (heap->nr * elem_size), elem_size);
> > -       __min_heapify(heap, 0, elem_size, func);
> > +       __min_heapify(heap, 0, elem_size, func, args);
> >  }
> >
> > -#define min_heap_pop(_heap, _func)     \
> > -       __min_heap_pop(&(_heap)->heap, __minheap_obj_size(_heap), _func)
> > +#define min_heap_pop(_heap, _func, _args)      \
> > +       __min_heap_pop(&(_heap)->heap, __minheap_obj_size(_heap), _func, _args)
> >
> >  /*
> >   * Remove the minimum element and then push the given element. The
> > @@ -155,19 +155,20 @@ void __min_heap_pop(struct __min_heap *heap, size_t elem_size,
> >  static __always_inline
> >  void __min_heap_pop_push(struct __min_heap *heap,
> >                 const void *element, size_t elem_size,
> > -               const struct min_heap_callbacks *func)
> > +               const struct min_heap_callbacks *func,
> > +               void *args)
> >  {
> >         memcpy(heap->data, element, elem_size);
> > -       __min_heapify(heap, 0, elem_size, func);
> > +       __min_heapify(heap, 0, elem_size, func, args);
> >  }
> >
> > -#define min_heap_pop_push(_heap, _element, _func)      \
> > -       __min_heap_pop_push(&(_heap)->heap, _element, __minheap_obj_size(_heap), _func)
> > +#define min_heap_pop_push(_heap, _element, _func, _args)       \
> > +       __min_heap_pop_push(&(_heap)->heap, _element, __minheap_obj_size(_heap), _func, _args)
> >
> >  /* Push an element on to the heap, O(log2(nr)). */
> >  static __always_inline
> >  void __min_heap_push(struct __min_heap *heap, const void *element, size_t elem_size,
> > -               const struct min_heap_callbacks *func)
> > +               const struct min_heap_callbacks *func, void *args)
> >  {
> >         void *data = heap->data;
> >         void *child, *parent;
> > @@ -185,13 +186,13 @@ void __min_heap_push(struct __min_heap *heap, const void *element, size_t elem_s
> >         for (; pos > 0; pos = (pos - 1) / 2) {
> >                 child = data + (pos * elem_size);
> >                 parent = data + ((pos - 1) / 2) * elem_size;
> > -               if (func->less(parent, child))
> > +               if (func->less(parent, child, args))
> >                         break;
> > -               func->swp(parent, child);
> > +               func->swp(parent, child, args);
> >         }
> >  }
> >
> > -#define min_heap_push(_heap, _element, _func)  \
> > -       __min_heap_push(&(_heap)->heap, _element, __minheap_obj_size(_heap), _func)
> > +#define min_heap_push(_heap, _element, _func, _args)   \
> > +       __min_heap_push(&(_heap)->heap, _element, __minheap_obj_size(_heap), _func, _args)
> >
> >  #endif /* _LINUX_MIN_HEAP_H */
> > diff --git a/kernel/events/core.c b/kernel/events/core.c
> > index 065dfaa8b009..f2a9044058ee 100644
> > --- a/kernel/events/core.c
> > +++ b/kernel/events/core.c
> > @@ -3683,7 +3683,7 @@ void __perf_event_task_sched_out(struct task_struct *task,
> >         perf_cgroup_switch(next);
> >  }
> >
> > -static bool perf_less_group_idx(const void *l, const void *r)
> > +static bool perf_less_group_idx(const void *l, const void *r, void *args)
> >  {
> >         const struct perf_event *le = *(const struct perf_event **)l;
> >         const struct perf_event *re = *(const struct perf_event **)r;
> > @@ -3691,7 +3691,7 @@ static bool perf_less_group_idx(const void *l, const void *r)
> >         return le->group_index < re->group_index;
> >  }
> >
> > -static void swap_ptr(void *l, void *r)
> > +static void swap_ptr(void *l, void *r, void *args)
> >  {
> >         void **lp = l, **rp = r;
> >
> > @@ -3779,7 +3779,7 @@ static noinline int visit_groups_merge(struct perf_event_context *ctx,
> >                 perf_assert_pmu_disabled((*evt)->pmu_ctx->pmu);
> >         }
> >
> > -       min_heapify_all(&event_heap, &perf_min_heap);
> > +       min_heapify_all(&event_heap, &perf_min_heap, NULL);
> >
> >         while (event_heap.heap.nr) {
> >                 ret = func(*evt, data);
> > @@ -3788,9 +3788,9 @@ static noinline int visit_groups_merge(struct perf_event_context *ctx,
> >
> >                 *evt = perf_event_groups_next(*evt, pmu);
> >                 if (*evt)
> > -                       min_heapify(&event_heap, 0, &perf_min_heap);
> > +                       min_heapify(&event_heap, 0, &perf_min_heap, NULL);
> >                 else
> > -                       min_heap_pop(&event_heap, &perf_min_heap);
> > +                       min_heap_pop(&event_heap, &perf_min_heap, NULL);
> >         }
> >
> >         return 0;
> > diff --git a/lib/test_min_heap.c b/lib/test_min_heap.c
> > index af2e446034d8..b8859d17a19c 100644
> > --- a/lib/test_min_heap.c
> > +++ b/lib/test_min_heap.c
> > @@ -13,17 +13,17 @@
> >
> >  MIN_HEAP(int, min_heap_test);
> >
> > -static __init bool less_than(const void *lhs, const void *rhs)
> > +static __init bool less_than(const void *lhs, const void *rhs, void *args)
> >  {
> >         return *(int *)lhs < *(int *)rhs;
> >  }
> >
> > -static __init bool greater_than(const void *lhs, const void *rhs)
> > +static __init bool greater_than(const void *lhs, const void *rhs, void *args)
> >  {
> >         return *(int *)lhs > *(int *)rhs;
> >  }
> >
> > -static __init void swap_ints(void *lhs, void *rhs)
> > +static __init void swap_ints(void *lhs, void *rhs, void *args)
> >  {
> >         int temp = *(int *)lhs;
> >
> > @@ -40,7 +40,7 @@ static __init int pop_verify_heap(bool min_heap,
> >         int last;
> >
> >         last = values[0];
> > -       min_heap_pop(heap, funcs);
> > +       min_heap_pop(heap, funcs, NULL);
> >         while (heap->heap.nr > 0) {
> >                 if (min_heap) {
> >                         if (last > values[0]) {
> > @@ -56,7 +56,7 @@ static __init int pop_verify_heap(bool min_heap,
> >                         }
> >                 }
> >                 last = values[0];
> > -               min_heap_pop(heap, funcs);
> > +               min_heap_pop(heap, funcs, NULL);
> >         }
> >         return err;
> >  }
> > @@ -77,7 +77,7 @@ static __init int test_heapify_all(bool min_heap)
> >         int i, err;
> >
> >         /* Test with known set of values. */
> > -       min_heapify_all(&heap, &funcs);
> > +       min_heapify_all(&heap, &funcs, NULL);
> >         err = pop_verify_heap(min_heap, &heap, &funcs);
> >
> >
> > @@ -86,7 +86,7 @@ static __init int test_heapify_all(bool min_heap)
> >         for (i = 0; i < heap.heap.nr; i++)
> >                 values[i] = get_random_u32();
> >
> > -       min_heapify_all(&heap, &funcs);
> > +       min_heapify_all(&heap, &funcs, NULL);
> >         err += pop_verify_heap(min_heap, &heap, &funcs);
> >
> >         return err;
> > @@ -110,14 +110,14 @@ static __init int test_heap_push(bool min_heap)
> >
> >         /* Test with known set of values copied from data. */
> >         for (i = 0; i < ARRAY_SIZE(data); i++)
> > -               min_heap_push(&heap, &data[i], &funcs);
> > +               min_heap_push(&heap, &data[i], &funcs, NULL);
> >
> >         err = pop_verify_heap(min_heap, &heap, &funcs);
> >
> >         /* Test with randomly generated values. */
> >         while (heap.heap.nr < heap.heap.size) {
> >                 temp = get_random_u32();
> > -               min_heap_push(&heap, &temp, &funcs);
> > +               min_heap_push(&heap, &temp, &funcs, NULL);
> >         }
> >         err += pop_verify_heap(min_heap, &heap, &funcs);
> >
> > @@ -143,22 +143,22 @@ static __init int test_heap_pop_push(bool min_heap)
> >         /* Fill values with data to pop and replace. */
> >         temp = min_heap ? 0x80000000 : 0x7FFFFFFF;
> >         for (i = 0; i < ARRAY_SIZE(data); i++)
> > -               min_heap_push(&heap, &temp, &funcs);
> > +               min_heap_push(&heap, &temp, &funcs, NULL);
> >
> >         /* Test with known set of values copied from data. */
> >         for (i = 0; i < ARRAY_SIZE(data); i++)
> > -               min_heap_pop_push(&heap, &data[i], &funcs);
> > +               min_heap_pop_push(&heap, &data[i], &funcs, NULL);
> >
> >         err = pop_verify_heap(min_heap, &heap, &funcs);
> >
> >         heap.heap.nr = 0;
> >         for (i = 0; i < ARRAY_SIZE(data); i++)
> > -               min_heap_push(&heap, &temp, &funcs);
> > +               min_heap_push(&heap, &temp, &funcs, NULL);
> >
> >         /* Test with randomly generated values. */
> >         for (i = 0; i < ARRAY_SIZE(data); i++) {
> >                 temp = get_random_u32();
> > -               min_heap_pop_push(&heap, &temp, &funcs);
> > +               min_heap_pop_push(&heap, &temp, &funcs, NULL);
> >         }
> >         err += pop_verify_heap(min_heap, &heap, &funcs);
> >
> > --
> > 2.34.1
> >

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 12/13] lib min_heap: Add min_heap_sift_up()
  2024-03-19 20:12   ` Ian Rogers
@ 2024-03-20  2:51     ` Kuan-Wei Chiu
  0 siblings, 0 replies; 33+ messages in thread
From: Kuan-Wei Chiu @ 2024-03-20  2:51 UTC (permalink / raw)
  To: Ian Rogers
  Cc: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung,
	akpm, bfoster, mark.rutland, alexander.shishkin, jolsa,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel

On Tue, Mar 19, 2024 at 01:12:18PM -0700, Ian Rogers wrote:
> On Tue, Mar 19, 2024 at 11:01 AM Kuan-Wei Chiu <visitorckw@gmail.com> wrote:
> >
> > Add min_heap_sift_up() to sift up the element at index 'idx' in the
> > heap.
> 
> Normally sift up is used to implement the min heap but isn't part of
> the API, eg. there is a sift up in min_heap_push. Should min_heapify
> be renamed to min_heap_sift_down to align with this name?
> 

Sure, I can add a patch in v2 to rename min_heapify to min_heap_sift_down.

Regards,
Kuan-Wei

> 
> > Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
> > ---
> >  include/linux/min_heap.h | 20 ++++++++++++++++++++
> >  1 file changed, 20 insertions(+)
> >
> > diff --git a/include/linux/min_heap.h b/include/linux/min_heap.h
> > index ce085137fce7..586965977104 100644
> > --- a/include/linux/min_heap.h
> > +++ b/include/linux/min_heap.h
> > @@ -199,6 +199,26 @@ bool __min_heap_push(struct __min_heap *heap, const void *element, size_t elem_s
> >  #define min_heap_push(_heap, _element, _func, _args)   \
> >         __min_heap_push(&(_heap)->heap, _element, __minheap_obj_size(_heap), _func, _args)
> >
> > +/* Sift up ith element from the heap, O(log2(nr)). */
> > +static __always_inline
> > +void __min_heap_sift_up(struct __min_heap *heap, size_t elem_size, size_t idx,
> > +               const struct min_heap_callbacks *func, void *args)
> > +{
> > +       void *data = heap->data;
> > +       size_t parent;
> > +
> > +       while (idx) {
> > +               parent = (idx - 1) / 2;
> > +               if (func->less(data + parent * elem_size, data + idx * elem_size, args))
> > +                       break;
> > +               func->swp(data + parent * elem_size, data + idx * elem_size, args);
> > +               idx = parent;
> > +       }
> > +}
> > +
> > +#define min_heap_sift_up(_heap, _idx, _func, _args)    \
> > +       __min_heap_sift_up(&(_heap)->heap, __minheap_obj_size(_heap), _idx, _func, _args)
> > +
> >  /* Remove ith element from the heap, O(log2(nr)). */
> >  static __always_inline
> >  bool __min_heap_del(struct __min_heap *heap, size_t elem_size, size_t idx,
> > --
> > 2.34.1
> >


* Re: [PATCH 13/13] bcachefs: Remove heap-related macros and switch to generic min_heap
  2024-03-19 20:03   ` Ian Rogers
@ 2024-03-20  2:55     ` Kuan-Wei Chiu
  0 siblings, 0 replies; 33+ messages in thread
From: Kuan-Wei Chiu @ 2024-03-20  2:55 UTC (permalink / raw)
  To: Ian Rogers
  Cc: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung,
	akpm, bfoster, mark.rutland, alexander.shishkin, jolsa,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel

On Tue, Mar 19, 2024 at 01:03:17PM -0700, Ian Rogers wrote:
> On Tue, Mar 19, 2024 at 11:01 AM Kuan-Wei Chiu <visitorckw@gmail.com> wrote:
> >
> > Drop the heap-related macros from bcachefs and replace them with the
> > generic min_heap implementation from include/linux. This change
> > improves code readability by using functions instead of macros.
> >
> > Link: https://lkml.kernel.org/ioyfizrzq7w7mjrqcadtzsfgpuntowtjdw5pgn4qhvsdp4mqqg@nrlek5vmisbu
> > Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
> > ---
> >  fs/bcachefs/clock.c       |  53 +++++++++++-----
> >  fs/bcachefs/clock_types.h |   2 +-
> >  fs/bcachefs/ec.c          |  99 ++++++++++++++++++-----------
> >  fs/bcachefs/ec_types.h    |   2 +-
> >  fs/bcachefs/util.h        | 127 +++-----------------------------------
> >  5 files changed, 109 insertions(+), 174 deletions(-)
> >
> > diff --git a/fs/bcachefs/clock.c b/fs/bcachefs/clock.c
> > index 363644451106..7a7d13f7a629 100644
> > --- a/fs/bcachefs/clock.c
> > +++ b/fs/bcachefs/clock.c
> > @@ -6,16 +6,29 @@
> >  #include <linux/kthread.h>
> >  #include <linux/preempt.h>
> >
> > -static inline long io_timer_cmp(io_timer_heap *h,
> > -                               struct io_timer *l,
> > -                               struct io_timer *r)
> > +static inline bool io_timer_cmp(const void *l, const void *r, void *args)
> 
> Should args here and below be marked with the attribute __always_unused?
> 

Thank you for bringing this to my attention.
Yes, I will add the __always_unused to the args in v2.

Regards,
Kuan-Wei

> 
> >  {
> > -       return l->expire - r->expire;
> > +       struct io_timer *_l = (struct io_timer *)l;
> > +       struct io_timer *_r = (struct io_timer *)r;
> > +
> > +       return _l->expire < _r->expire;
> > +}
> > +
> > +static inline void io_timer_swp(void *l, void *r, void *args)
> > +{
> > +       struct io_timer *_l = (struct io_timer *)l;
> > +       struct io_timer *_r = (struct io_timer *)r;
> > +
> > +       swap(*_l, *_r);
> >  }
> >
> >  void bch2_io_timer_add(struct io_clock *clock, struct io_timer *timer)
> >  {
> >         size_t i;
> > +       const struct min_heap_callbacks callbacks = {
> > +               .less = io_timer_cmp,
> > +               .swp = io_timer_swp,
> > +       };
> >
> >         spin_lock(&clock->timer_lock);
> >
> > @@ -26,11 +39,11 @@ void bch2_io_timer_add(struct io_clock *clock, struct io_timer *timer)
> >                 return;
> >         }
> >
> > -       for (i = 0; i < clock->timers.used; i++)
> > -               if (clock->timers.data[i] == timer)
> > +       for (i = 0; i < clock->timers.heap.nr; i++)
> > +               if (min_heap_peek(&clock->timers)[i] == timer)
> >                         goto out;
> >
> > -       BUG_ON(!heap_add(&clock->timers, timer, io_timer_cmp, NULL));
> > +       BUG_ON(!min_heap_push(&clock->timers, &timer, &callbacks, NULL));
> >  out:
> >         spin_unlock(&clock->timer_lock);
> >  }
> > @@ -38,12 +51,16 @@ void bch2_io_timer_add(struct io_clock *clock, struct io_timer *timer)
> >  void bch2_io_timer_del(struct io_clock *clock, struct io_timer *timer)
> >  {
> >         size_t i;
> > +       const struct min_heap_callbacks callbacks = {
> > +               .less = io_timer_cmp,
> > +               .swp = io_timer_swp,
> > +       };
> >
> >         spin_lock(&clock->timer_lock);
> >
> > -       for (i = 0; i < clock->timers.used; i++)
> > -               if (clock->timers.data[i] == timer) {
> > -                       heap_del(&clock->timers, i, io_timer_cmp, NULL);
> > +       for (i = 0; i < clock->timers.heap.nr; i++)
> > +               if (min_heap_peek(&clock->timers)[i] == timer) {
> > +                       min_heap_pop(&clock->timers, &callbacks, NULL);
> >                         break;
> >                 }
> >
> > @@ -131,12 +148,16 @@ static struct io_timer *get_expired_timer(struct io_clock *clock,
> >                                           unsigned long now)
> >  {
> >         struct io_timer *ret = NULL;
> > +       const struct min_heap_callbacks callbacks = {
> > +               .less = io_timer_cmp,
> > +               .swp = io_timer_swp,
> > +       };
> >
> >         spin_lock(&clock->timer_lock);
> >
> > -       if (clock->timers.used &&
> > -           time_after_eq(now, clock->timers.data[0]->expire))
> > -               heap_pop(&clock->timers, ret, io_timer_cmp, NULL);
> > +       if (clock->timers.heap.nr &&
> > +           time_after_eq(now, min_heap_peek(&clock->timers)[0]->expire))
> > +               min_heap_pop(&clock->timers, &callbacks, NULL);
> >
> >         spin_unlock(&clock->timer_lock);
> >
> > @@ -161,10 +182,10 @@ void bch2_io_timers_to_text(struct printbuf *out, struct io_clock *clock)
> >         spin_lock(&clock->timer_lock);
> >         now = atomic64_read(&clock->now);
> >
> > -       for (i = 0; i < clock->timers.used; i++)
> > +       for (i = 0; i < clock->timers.heap.nr; i++)
> >                 prt_printf(out, "%ps:\t%li\n",
> > -                      clock->timers.data[i]->fn,
> > -                      clock->timers.data[i]->expire - now);
> > +                      min_heap_peek(&clock->timers)[i]->fn,
> > +                      min_heap_peek(&clock->timers)[i]->expire - now);
> >         spin_unlock(&clock->timer_lock);
> >         --out->atomic;
> >  }
> > diff --git a/fs/bcachefs/clock_types.h b/fs/bcachefs/clock_types.h
> > index 5fae0012d808..b02b24b9d74f 100644
> > --- a/fs/bcachefs/clock_types.h
> > +++ b/fs/bcachefs/clock_types.h
> > @@ -23,7 +23,7 @@ struct io_timer {
> >  /* Amount to buffer up on a percpu counter */
> >  #define IO_CLOCK_PCPU_SECTORS  128
> >
> > -typedef HEAP(struct io_timer *)        io_timer_heap;
> > +typedef MIN_HEAP(struct io_timer *, io_timer_heap) io_timer_heap;
> >
> >  struct io_clock {
> >         atomic64_t              now;
> > diff --git a/fs/bcachefs/ec.c b/fs/bcachefs/ec.c
> > index b98e2c2b8bf0..7b6a31237503 100644
> > --- a/fs/bcachefs/ec.c
> > +++ b/fs/bcachefs/ec.c
> > @@ -860,14 +860,14 @@ static int __ec_stripe_mem_alloc(struct bch_fs *c, size_t idx, gfp_t gfp)
> >  {
> >         ec_stripes_heap n, *h = &c->ec_stripes_heap;
> >
> > -       if (idx >= h->size) {
> > +       if (idx >= h->heap.size) {
> >                 if (!init_heap(&n, max(1024UL, roundup_pow_of_two(idx + 1)), gfp))
> >                         return -BCH_ERR_ENOMEM_ec_stripe_mem_alloc;
> >
> >                 mutex_lock(&c->ec_stripes_heap_lock);
> > -               if (n.size > h->size) {
> > -                       memcpy(n.data, h->data, h->used * sizeof(h->data[0]));
> > -                       n.used = h->used;
> > +               if (n.heap.size > h->heap.size) {
> > +                       memcpy(min_heap_peek(&n), min_heap_peek(h), h->heap.nr * sizeof(*min_heap_peek(h)));
> > +                       n.heap.nr = h->heap.nr;
> >                         swap(*h, n);
> >                 }
> >                 mutex_unlock(&c->ec_stripes_heap_lock);
> > @@ -958,20 +958,21 @@ static u64 stripe_idx_to_delete(struct bch_fs *c)
> >
> >         lockdep_assert_held(&c->ec_stripes_heap_lock);
> >
> > -       if (h->used &&
> > -           h->data[0].blocks_nonempty == 0 &&
> > -           !bch2_stripe_is_open(c, h->data[0].idx))
> > -               return h->data[0].idx;
> > +       if (h->heap.nr &&
> > +           min_heap_peek(h)->blocks_nonempty == 0 &&
> > +           !bch2_stripe_is_open(c, min_heap_peek(h)->idx))
> > +               return min_heap_peek(h)->idx;
> >
> >         return 0;
> >  }
> >
> > -static inline int ec_stripes_heap_cmp(ec_stripes_heap *h,
> > -                                     struct ec_stripe_heap_entry l,
> > -                                     struct ec_stripe_heap_entry r)
> > +static inline bool ec_stripes_heap_cmp(const void *l, const void *r, void *args)
> >  {
> > -       return ((l.blocks_nonempty > r.blocks_nonempty) -
> > -               (l.blocks_nonempty < r.blocks_nonempty));
> > +       struct ec_stripe_heap_entry *_l = (struct ec_stripe_heap_entry *)l;
> > +       struct ec_stripe_heap_entry *_r = (struct ec_stripe_heap_entry *)r;
> > +
> > +       return (_l->blocks_nonempty <
> > +               _r->blocks_nonempty);
> >  }
> >
> >  static inline void ec_stripes_heap_set_backpointer(ec_stripes_heap *h,
> > @@ -979,7 +980,21 @@ static inline void ec_stripes_heap_set_backpointer(ec_stripes_heap *h,
> >  {
> >         struct bch_fs *c = container_of(h, struct bch_fs, ec_stripes_heap);
> >
> > -       genradix_ptr(&c->stripes, h->data[i].idx)->heap_idx = i;
> > +       genradix_ptr(&c->stripes, min_heap_peek(h)[i].idx)->heap_idx = i;
> > +}
> > +
> > +static inline void ec_stripes_heap_swap(void *l, void *r, void *h)
> > +{
> > +       struct ec_stripe_heap_entry *_l = (struct ec_stripe_heap_entry *)l;
> > +       struct ec_stripe_heap_entry *_r = (struct ec_stripe_heap_entry *)r;
> > +       ec_stripes_heap *_h = (ec_stripes_heap *)h;
> > +       size_t i = _l - min_heap_peek(_h);
> > +       size_t j = _r - min_heap_peek(_h);
> > +
> > +       ec_stripes_heap_set_backpointer(_h, i);
> > +       ec_stripes_heap_set_backpointer(_h, j);
> > +
> > +       swap(*_l, *_r);
> >  }
> >
> >  static void heap_verify_backpointer(struct bch_fs *c, size_t idx)
> > @@ -987,34 +1002,43 @@ static void heap_verify_backpointer(struct bch_fs *c, size_t idx)
> >         ec_stripes_heap *h = &c->ec_stripes_heap;
> >         struct stripe *m = genradix_ptr(&c->stripes, idx);
> >
> > -       BUG_ON(m->heap_idx >= h->used);
> > -       BUG_ON(h->data[m->heap_idx].idx != idx);
> > +       BUG_ON(m->heap_idx >= h->heap.nr);
> > +       BUG_ON(min_heap_peek(h)[m->heap_idx].idx != idx);
> >  }
> >
> >  void bch2_stripes_heap_del(struct bch_fs *c,
> >                            struct stripe *m, size_t idx)
> >  {
> > +       const struct min_heap_callbacks callbacks = {
> > +               .less = ec_stripes_heap_cmp,
> > +               .swp = ec_stripes_heap_swap,
> > +       };
> > +
> >         mutex_lock(&c->ec_stripes_heap_lock);
> >         heap_verify_backpointer(c, idx);
> >
> > -       heap_del(&c->ec_stripes_heap, m->heap_idx,
> > -                ec_stripes_heap_cmp,
> > -                ec_stripes_heap_set_backpointer);
> > +       min_heap_del(&c->ec_stripes_heap, m->heap_idx, &callbacks, &c->ec_stripes_heap);
> >         mutex_unlock(&c->ec_stripes_heap_lock);
> >  }
> >
> >  void bch2_stripes_heap_insert(struct bch_fs *c,
> >                               struct stripe *m, size_t idx)
> >  {
> > +       const struct min_heap_callbacks callbacks = {
> > +               .less = ec_stripes_heap_cmp,
> > +               .swp = ec_stripes_heap_swap,
> > +       };
> > +
> >         mutex_lock(&c->ec_stripes_heap_lock);
> > -       BUG_ON(heap_full(&c->ec_stripes_heap));
> > +       BUG_ON(min_heap_full(&c->ec_stripes_heap));
> >
> > -       heap_add(&c->ec_stripes_heap, ((struct ec_stripe_heap_entry) {
> > +       genradix_ptr(&c->stripes, idx)->heap_idx = c->ec_stripes_heap.heap.nr;
> > +       min_heap_push(&c->ec_stripes_heap, &((struct ec_stripe_heap_entry) {
> >                         .idx = idx,
> >                         .blocks_nonempty = m->blocks_nonempty,
> >                 }),
> > -                ec_stripes_heap_cmp,
> > -                ec_stripes_heap_set_backpointer);
> > +                &callbacks,
> > +                &c->ec_stripes_heap);
> >
> >         heap_verify_backpointer(c, idx);
> >         mutex_unlock(&c->ec_stripes_heap_lock);
> > @@ -1026,17 +1050,20 @@ void bch2_stripes_heap_update(struct bch_fs *c,
> >         ec_stripes_heap *h = &c->ec_stripes_heap;
> >         bool do_deletes;
> >         size_t i;
> > +       const struct min_heap_callbacks callbacks = {
> > +               .less = ec_stripes_heap_cmp,
> > +               .swp = ec_stripes_heap_swap,
> > +       };
> >
> >         mutex_lock(&c->ec_stripes_heap_lock);
> >         heap_verify_backpointer(c, idx);
> >
> > -       h->data[m->heap_idx].blocks_nonempty = m->blocks_nonempty;
> > +       min_heap_peek(h)[m->heap_idx].blocks_nonempty = m->blocks_nonempty;
> >
> >         i = m->heap_idx;
> > -       heap_sift_up(h,   i, ec_stripes_heap_cmp,
> > -                    ec_stripes_heap_set_backpointer);
> > -       heap_sift_down(h, i, ec_stripes_heap_cmp,
> > -                      ec_stripes_heap_set_backpointer);
> > +
> > +       min_heap_sift_up(h,     i, &callbacks, &c->ec_stripes_heap);
> > +       min_heapify(h, i, &callbacks, &c->ec_stripes_heap);
> >
> >         heap_verify_backpointer(c, idx);
> >
> > @@ -1828,12 +1855,12 @@ static s64 get_existing_stripe(struct bch_fs *c,
> >                 return -1;
> >
> >         mutex_lock(&c->ec_stripes_heap_lock);
> > -       for (heap_idx = 0; heap_idx < h->used; heap_idx++) {
> > +       for (heap_idx = 0; heap_idx < h->heap.nr; heap_idx++) {
> >                 /* No blocks worth reusing, stripe will just be deleted: */
> > -               if (!h->data[heap_idx].blocks_nonempty)
> > +               if (!min_heap_peek(h)[heap_idx].blocks_nonempty)
> >                         continue;
> >
> > -               stripe_idx = h->data[heap_idx].idx;
> > +               stripe_idx = min_heap_peek(h)[heap_idx].idx;
> >
> >                 m = genradix_ptr(&c->stripes, stripe_idx);
> >
> > @@ -2159,14 +2186,14 @@ void bch2_stripes_heap_to_text(struct printbuf *out, struct bch_fs *c)
> >         size_t i;
> >
> >         mutex_lock(&c->ec_stripes_heap_lock);
> > -       for (i = 0; i < min_t(size_t, h->used, 50); i++) {
> > -               m = genradix_ptr(&c->stripes, h->data[i].idx);
> > +       for (i = 0; i < min_t(size_t, h->heap.nr, 50); i++) {
> > +               m = genradix_ptr(&c->stripes, min_heap_peek(h)[i].idx);
> >
> > -               prt_printf(out, "%zu %u/%u+%u", h->data[i].idx,
> > -                      h->data[i].blocks_nonempty,
> > +               prt_printf(out, "%zu %u/%u+%u", min_heap_peek(h)[i].idx,
> > +                      min_heap_peek(h)[i].blocks_nonempty,
> >                        m->nr_blocks - m->nr_redundant,
> >                        m->nr_redundant);
> > -               if (bch2_stripe_is_open(c, h->data[i].idx))
> > +               if (bch2_stripe_is_open(c, min_heap_peek(h)[i].idx))
> >                         prt_str(out, " open");
> >                 prt_newline(out);
> >         }
> > diff --git a/fs/bcachefs/ec_types.h b/fs/bcachefs/ec_types.h
> > index 976426da3a12..2ed67431a81c 100644
> > --- a/fs/bcachefs/ec_types.h
> > +++ b/fs/bcachefs/ec_types.h
> > @@ -36,6 +36,6 @@ struct ec_stripe_heap_entry {
> >         unsigned                blocks_nonempty;
> >  };
> >
> > -typedef HEAP(struct ec_stripe_heap_entry) ec_stripes_heap;
> > +typedef MIN_HEAP(struct ec_stripe_heap_entry, ec_stripes_heap) ec_stripes_heap;
> >
> >  #endif /* _BCACHEFS_EC_TYPES_H */
> > diff --git a/fs/bcachefs/util.h b/fs/bcachefs/util.h
> > index 7ffbddb80400..c599dc5276ac 100644
> > --- a/fs/bcachefs/util.h
> > +++ b/fs/bcachefs/util.h
> > @@ -8,6 +8,7 @@
> >  #include <linux/errno.h>
> >  #include <linux/freezer.h>
> >  #include <linux/kernel.h>
> > +#include <linux/min_heap.h>
> >  #include <linux/sched/clock.h>
> >  #include <linux/llist.h>
> >  #include <linux/log2.h>
> > @@ -54,134 +55,20 @@ static inline size_t buf_pages(void *p, size_t len)
> >                             PAGE_SIZE);
> >  }
> >
> > -#define HEAP(type)                                                     \
> > -struct {                                                               \
> > -       size_t size, used;                                              \
> > -       type *data;                                                     \
> > -}
> > -
> > -#define DECLARE_HEAP(type, name) HEAP(type) name
> > -
> >  #define init_heap(heap, _size, gfp)                                    \
> >  ({                                                                     \
> > -       (heap)->used = 0;                                               \
> > -       (heap)->size = (_size);                                         \
> > -       (heap)->data = kvmalloc((heap)->size * sizeof((heap)->data[0]),\
> > -                                (gfp));                                \
> > -})
> > -
> > -#define free_heap(heap)                                                        \
> > -do {                                                                   \
> > -       kvfree((heap)->data);                                           \
> > -       (heap)->data = NULL;                                            \
> > -} while (0)
> > -
> > -#define heap_set_backpointer(h, i, _fn)                                        \
> > -do {                                                                   \
> > -       void (*fn)(typeof(h), size_t) = _fn;                            \
> > -       if (fn)                                                         \
> > -               fn(h, i);                                               \
> > -} while (0)
> > -
> > -#define heap_swap(h, i, j, set_backpointer)                            \
> > -do {                                                                   \
> > -       swap((h)->data[i], (h)->data[j]);                               \
> > -       heap_set_backpointer(h, i, set_backpointer);                    \
> > -       heap_set_backpointer(h, j, set_backpointer);                    \
> > -} while (0)
> > -
> > -#define heap_peek(h)                                                   \
> > -({                                                                     \
> > -       EBUG_ON(!(h)->used);                                            \
> > -       (h)->data[0];                                                   \
> > -})
> > -
> > -#define heap_full(h)   ((h)->used == (h)->size)
> > -
> > -#define heap_sift_down(h, i, cmp, set_backpointer)                     \
> > -do {                                                                   \
> > -       size_t _c, _j = i;                                              \
> > -                                                                       \
> > -       for (; _j * 2 + 1 < (h)->used; _j = _c) {                       \
> > -               _c = _j * 2 + 1;                                        \
> > -               if (_c + 1 < (h)->used &&                               \
> > -                   cmp(h, (h)->data[_c], (h)->data[_c + 1]) >= 0)      \
> > -                       _c++;                                           \
> > -                                                                       \
> > -               if (cmp(h, (h)->data[_c], (h)->data[_j]) >= 0)          \
> > -                       break;                                          \
> > -               heap_swap(h, _c, _j, set_backpointer);                  \
> > -       }                                                               \
> > -} while (0)
> > -
> > -#define heap_sift_up(h, i, cmp, set_backpointer)                       \
> > -do {                                                                   \
> > -       while (i) {                                                     \
> > -               size_t p = (i - 1) / 2;                                 \
> > -               if (cmp(h, (h)->data[i], (h)->data[p]) >= 0)            \
> > -                       break;                                          \
> > -               heap_swap(h, i, p, set_backpointer);                    \
> > -               i = p;                                                  \
> > -       }                                                               \
> > -} while (0)
> > -
> > -#define __heap_add(h, d, cmp, set_backpointer)                         \
> > -({                                                                     \
> > -       size_t _i = (h)->used++;                                        \
> > -       (h)->data[_i] = d;                                              \
> > -       heap_set_backpointer(h, _i, set_backpointer);                   \
> > -                                                                       \
> > -       heap_sift_up(h, _i, cmp, set_backpointer);                      \
> > -       _i;                                                             \
> > -})
> > -
> > -#define heap_add(h, d, cmp, set_backpointer)                           \
> > -({                                                                     \
> > -       bool _r = !heap_full(h);                                        \
> > -       if (_r)                                                         \
> > -               __heap_add(h, d, cmp, set_backpointer);                 \
> > -       _r;                                                             \
> > +       void *data = kvmalloc(_size * sizeof(*min_heap_peek(heap)), (gfp));\
> > +       min_heap_init(heap, data, _size);                               \
> > +       min_heap_peek(heap);                                            \
> >  })
> >
> > -#define heap_add_or_replace(h, new, cmp, set_backpointer)              \
> > -do {                                                                   \
> > -       if (!heap_add(h, new, cmp, set_backpointer) &&                  \
> > -           cmp(h, new, heap_peek(h)) >= 0) {                           \
> > -               (h)->data[0] = new;                                     \
> > -               heap_set_backpointer(h, 0, set_backpointer);            \
> > -               heap_sift_down(h, 0, cmp, set_backpointer);             \
> > -       }                                                               \
> > -} while (0)
> >
> > -#define heap_del(h, i, cmp, set_backpointer)                           \
> > +#define free_heap(_heap)                                                       \
> >  do {                                                                   \
> > -       size_t _i = (i);                                                \
> > -                                                                       \
> > -       BUG_ON(_i >= (h)->used);                                        \
> > -       (h)->used--;                                                    \
> > -       if ((_i) < (h)->used) {                                         \
> > -               heap_swap(h, _i, (h)->used, set_backpointer);           \
> > -               heap_sift_up(h, _i, cmp, set_backpointer);              \
> > -               heap_sift_down(h, _i, cmp, set_backpointer);            \
> > -       }                                                               \
> > +       kvfree((_heap)->heap.data);                                             \
> > +       (_heap)->heap.data = NULL;                                              \
> >  } while (0)
> >
> > -#define heap_pop(h, d, cmp, set_backpointer)                           \
> > -({                                                                     \
> > -       bool _r = (h)->used;                                            \
> > -       if (_r) {                                                       \
> > -               (d) = (h)->data[0];                                     \
> > -               heap_del(h, 0, cmp, set_backpointer);                   \
> > -       }                                                               \
> > -       _r;                                                             \
> > -})
> > -
> > -#define heap_resort(heap, cmp, set_backpointer)                                \
> > -do {                                                                   \
> > -       ssize_t _i;                                                     \
> > -       for (_i = (ssize_t) (heap)->used / 2 -  1; _i >= 0; --_i)       \
> > -               heap_sift_down(heap, _i, cmp, set_backpointer);         \
> > -} while (0)
> >
> >  #define ANYSINT_MAX(t)                                                 \
> >         ((((t) 1 << (sizeof(t) * 8 - 2)) - (t) 1) * (t) 2 + (t) 1)
> > --
> > 2.34.1
> >

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 05/13] lib min_heap: Add min_heap_init()
  2024-03-19 19:51   ` Ian Rogers
@ 2024-03-20  2:56     ` Kuan-Wei Chiu
  0 siblings, 0 replies; 33+ messages in thread
From: Kuan-Wei Chiu @ 2024-03-20  2:56 UTC (permalink / raw)
  To: Ian Rogers
  Cc: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung,
	akpm, bfoster, mark.rutland, alexander.shishkin, jolsa,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel

On Tue, Mar 19, 2024 at 12:51:22PM -0700, Ian Rogers wrote:
> On Tue, Mar 19, 2024 at 11:00 AM Kuan-Wei Chiu <visitorckw@gmail.com> wrote:
> >
> > Add min_heap_init() for initializing a heap's data, nr, and size fields.
> >
> > Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
> 
> Should this change update lib/test_min_heap.c to use min_heap_init?
> 
Sure, will do that in v2.

Regards,
Kuan-Wei
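
[Editor's note: for reference, a self-contained sketch of what the updated
test could look like. The struct and macro definitions below are minimal
stand-ins so the sketch compiles on its own; the real definitions live in
include/linux/min_heap.h and may differ in detail.]

```c
#include <stdbool.h>
#include <stddef.h>

/* Minimal stand-ins for the kernel's min-heap types, shown only to make
 * this sketch self-contained -- not the authoritative definitions. */
struct __min_heap {
	void *data;
	int nr;
	int size;
};

#define MIN_HEAP(_type, _name)	\
struct _name {			\
	struct __min_heap heap;	\
}

/* Initialize a min-heap: attach backing storage, mark it empty. */
static inline void __min_heap_init(struct __min_heap *heap, void *data, int size)
{
	heap->data = data;
	heap->nr = 0;
	heap->size = size;
}

#define min_heap_init(_heap, _data, _size)	\
	__min_heap_init(&(_heap)->heap, _data, _size)

MIN_HEAP(int, min_heap_test);
```

With something like this, test_heapify_all() and friends could replace the
three open-coded assignments (heap.heap.data = ...; heap.heap.nr = ...;
heap.heap.size = ...) with a single min_heap_init() call.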
> 
> > ---
> >  include/linux/min_heap.h | 12 ++++++++++++
> >  1 file changed, 12 insertions(+)
> >
> > diff --git a/include/linux/min_heap.h b/include/linux/min_heap.h
> > index c3635a7fdb88..ed462f194b88 100644
> > --- a/include/linux/min_heap.h
> > +++ b/include/linux/min_heap.h
> > @@ -44,6 +44,18 @@ struct min_heap_callbacks {
> >         void (*swp)(void *lhs, void *rhs);
> >  };
> >
> > +/* Initialize a min-heap. */
> > +static __always_inline
> > +void __min_heap_init(struct __min_heap *heap, void *data, int size)
> > +{
> > +       heap->data = data;
> > +       heap->nr = 0;
> > +       heap->size = size;
> > +}
> > +
> > +#define min_heap_init(_heap, _data, _size)     \
> > +       __min_heap_init(&(_heap)->heap, _data, _size)
> > +
> >  /* Sift the element at pos down the heap. */
> >  static __always_inline
> >  void __min_heapify(struct __min_heap *heap, int pos, size_t elem_size,
> > --
> > 2.34.1
> >


* Re: [PATCH 00/13] treewide: Refactor heap related implementation
  2024-03-19 22:12 ` [PATCH 00/13] treewide: Refactor heap related implementation Kent Overstreet
@ 2024-03-20  3:01   ` Kuan-Wei Chiu
  0 siblings, 0 replies; 33+ messages in thread
From: Kuan-Wei Chiu @ 2024-03-20  3:01 UTC (permalink / raw)
  To: Kent Overstreet
  Cc: colyli, msakai, peterz, mingo, acme, namhyung, akpm, bfoster,
	mark.rutland, alexander.shishkin, jolsa, irogers, adrian.hunter,
	jserv, linux-bcache, dm-devel, linux-bcachefs, linux-perf-users,
	linux-kernel

On Tue, Mar 19, 2024 at 06:12:17PM -0400, Kent Overstreet wrote:
> On Wed, Mar 20, 2024 at 01:59:52AM +0800, Kuan-Wei Chiu wrote:
> > Hello,
> > 
> > This patch series focuses on several adjustments related to heap
> > implementation. Firstly, a type-safe interface has been added to the
> > min_heap, along with the introduction of several new functions to
> > enhance its functionality. Additionally, the heap implementation for
> > bcache and bcachefs has been replaced with the generic min_heap
> > implementation from include/linux. Furthermore, several typos have been
> > corrected.
> > 
> > Previous discussion with Kent Overstreet:
> > https://lkml.kernel.org/ioyfizrzq7w7mjrqcadtzsfgpuntowtjdw5pgn4qhvsdp4mqqg@nrlek5vmisbu
> 
> Hey, thanks for doing this. Can you post a git repo link? I'll point my
> CI at it.
>
Here is the link to my GitHub repository:
https://github.com/visitorckw/linux.git

The patch series can be found on the 'refactor-heap' branch.

Regards,
Kuan-Wei

> > 
> > Regards,
> > Kuan-Wei
> > 
> > Kuan-Wei Chiu (13):
> >   perf/core: Fix several typos
> >   bcache: Fix typo
> >   bcachefs: Fix typo
> >   lib min_heap: Add type safe interface
> >   lib min_heap: Add min_heap_init()
> >   lib min_heap: Add min_heap_peek()
> >   lib min_heap: Add min_heap_full()
> >   lib min_heap: Add args for min_heap_callbacks
> >   lib min_heap: Update min_heap_push() and min_heap_pop() to return bool
> >     values
> >   bcache: Remove heap-related macros and switch to generic min_heap
> >   lib min_heap: Add min_heap_del()
> >   lib min_heap: Add min_heap_sift_up()
> >   bcachefs: Remove heap-related macros and switch to generic min_heap
> > 
> >  drivers/md/bcache/alloc.c      |  66 ++++++++----
> >  drivers/md/bcache/bcache.h     |   2 +-
> >  drivers/md/bcache/bset.c       |  73 ++++++++-----
> >  drivers/md/bcache/bset.h       |  38 ++++---
> >  drivers/md/bcache/btree.c      |  27 ++++-
> >  drivers/md/bcache/extents.c    |  44 ++++----
> >  drivers/md/bcache/movinggc.c   |  40 ++++++--
> >  drivers/md/bcache/super.c      |  16 +++
> >  drivers/md/bcache/sysfs.c      |   3 +
> >  drivers/md/bcache/util.c       |   2 +-
> >  drivers/md/bcache/util.h       |  81 +--------------
> >  drivers/md/dm-vdo/repair.c     |  29 +++---
> >  drivers/md/dm-vdo/slab-depot.c |  21 ++--
> >  fs/bcachefs/clock.c            |  53 +++++++---
> >  fs/bcachefs/clock_types.h      |   2 +-
> >  fs/bcachefs/ec.c               |  99 +++++++++++-------
> >  fs/bcachefs/ec_types.h         |   2 +-
> >  fs/bcachefs/util.c             |   2 +-
> >  fs/bcachefs/util.h             | 127 ++---------------------
> >  include/linux/min_heap.h       | 180 ++++++++++++++++++++++++++-------
> >  kernel/events/core.c           |  53 +++++-----
> >  lib/test_min_heap.c            |  75 +++++++-------
> >  22 files changed, 565 insertions(+), 470 deletions(-)
> > 
> > -- 
> > 2.34.1
> > 


* Re: [PATCH 08/13] lib min_heap: Add args for min_heap_callbacks
  2024-03-20  2:49     ` Kuan-Wei Chiu
@ 2024-03-20  4:43       ` Ian Rogers
  0 siblings, 0 replies; 33+ messages in thread
From: Ian Rogers @ 2024-03-20  4:43 UTC (permalink / raw)
  To: Kuan-Wei Chiu
  Cc: colyli, kent.overstreet, msakai, peterz, mingo, acme, namhyung,
	akpm, bfoster, mark.rutland, alexander.shishkin, jolsa,
	adrian.hunter, jserv, linux-bcache, dm-devel, linux-bcachefs,
	linux-perf-users, linux-kernel

On Tue, Mar 19, 2024 at 7:49 PM Kuan-Wei Chiu <visitorckw@gmail.com> wrote:
>
> On Tue, Mar 19, 2024 at 01:05:18PM -0700, Ian Rogers wrote:
> > On Tue, Mar 19, 2024 at 11:00 AM Kuan-Wei Chiu <visitorckw@gmail.com> wrote:
> > >
> > > Add a third parameter 'args' for the 'less' and 'swp' functions in the
> > > 'struct min_heap_callbacks'. This additional parameter allows these
> > > comparison and swap functions to handle extra arguments when necessary.
> > >
> > > Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
> >
> > I've no objection to this but I don't see it used in your changes -
> > that may be my fault :-). Perhaps hold off or add a test with args
> > being non-null?
> >
>
> The parameter will be required in the patches for bcache and bcachefs.
> One is for bucket_min_cmp() and bucket_max_cmp() in
> drivers/md/bcache/alloc.c, and the other is for ec_stripes_heap_swap()
> in fs/bcachefs/ec.c.
>
> Do you have any more specific suggestions for adding non-null test
> cases? I'm unsure what a reasonable non-null test case would look
> like.

I didn't have anything in mind, I just wanted to make sure there was
coverage for everything.

Thanks,
Ian
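
[Editor's note: one way to cover the non-NULL path is a test where the
heap holds indices and the comparison ranks them through an external key
table passed via 'args'. The callbacks below are an illustrative sketch,
not taken from the patch.]

```c
#include <stdbool.h>

/* Comparison that actually uses 'args': lhs/rhs point at indices into
 * the key table supplied through the args pointer. */
static bool less_by_key(const void *lhs, const void *rhs, void *args)
{
	const int *table = args;

	return table[*(const int *)lhs] < table[*(const int *)rhs];
}

static void swap_ints(void *lhs, void *rhs, void *args)
{
	int tmp = *(int *)lhs;

	(void)args;	/* unused here, but part of the callback signature */
	*(int *)lhs = *(int *)rhs;
	*(int *)rhs = tmp;
}
```

A test along these lines would pass the key table as the _args argument to
min_heapify_all()/min_heap_pop() and verify that the indices pop out in
key order, exercising the non-NULL case.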

> >
> > > ---
> > >  drivers/md/dm-vdo/repair.c     | 10 +++----
> > >  drivers/md/dm-vdo/slab-depot.c |  8 +++---
> > >  include/linux/min_heap.h       | 51 +++++++++++++++++-----------------
> > >  kernel/events/core.c           | 10 +++----
> > >  lib/test_min_heap.c            | 26 ++++++++---------
> > >  5 files changed, 53 insertions(+), 52 deletions(-)
> > >
> > > diff --git a/drivers/md/dm-vdo/repair.c b/drivers/md/dm-vdo/repair.c
> > > index 7663fa2098f4..528fa100b410 100644
> > > --- a/drivers/md/dm-vdo/repair.c
> > > +++ b/drivers/md/dm-vdo/repair.c
> > > @@ -137,7 +137,7 @@ struct repair_completion {
> > >   * to sort by slot while still ensuring we replay all entries with the same slot in the exact order
> > >   * as they appeared in the journal.
> > >   */
> > > -static bool mapping_is_less_than(const void *item1, const void *item2)
> > > +static bool mapping_is_less_than(const void *item1, const void *item2, void *args)
> > >  {
> > >         const struct numbered_block_mapping *mapping1 =
> > >                 (const struct numbered_block_mapping *) item1;
> > > @@ -156,7 +156,7 @@ static bool mapping_is_less_than(const void *item1, const void *item2)
> > >         return 0;
> > >  }
> > >
> > > -static void swap_mappings(void *item1, void *item2)
> > > +static void swap_mappings(void *item1, void *item2, void *args)
> > >  {
> > >         struct numbered_block_mapping *mapping1 = item1;
> > >         struct numbered_block_mapping *mapping2 = item2;
> > > @@ -182,8 +182,8 @@ static struct numbered_block_mapping *sort_next_heap_element(struct repair_compl
> > >          * restore the heap invariant, and return a pointer to the popped element.
> > >          */
> > >         last = &repair->entries[--heap->heap.nr];
> > > -       swap_mappings(heap->heap.data, last);
> > > -       min_heapify(heap, 0, &repair_min_heap);
> > > +       swap_mappings(heap->heap.data, last, NULL);
> > > +       min_heapify(heap, 0, &repair_min_heap, NULL);
> > >         return last;
> > >  }
> > >
> > > @@ -1121,7 +1121,7 @@ static void recover_block_map(struct vdo_completion *completion)
> > >         repair->replay_heap.heap.data = repair->entries;
> > >         repair->replay_heap.heap.nr = repair->block_map_entry_count;
> > >         repair->replay_heap.heap.size = repair->block_map_entry_count;
> > > -       min_heapify_all(&repair->replay_heap, &repair_min_heap);
> > > +       min_heapify_all(&repair->replay_heap, &repair_min_heap, NULL);
> > >
> > >         vdo_log_info("Replaying %zu recovery entries into block map",
> > >                      repair->block_map_entry_count);
> > > diff --git a/drivers/md/dm-vdo/slab-depot.c b/drivers/md/dm-vdo/slab-depot.c
> > > index 3309480170c3..b8c41d7ccde0 100644
> > > --- a/drivers/md/dm-vdo/slab-depot.c
> > > +++ b/drivers/md/dm-vdo/slab-depot.c
> > > @@ -3288,7 +3288,7 @@ int vdo_release_block_reference(struct block_allocator *allocator,
> > >   * Thus, the ordering is reversed from the usual sense since min_heap returns smaller elements
> > >   * before larger ones.
> > >   */
> > > -static bool slab_status_is_less_than(const void *item1, const void *item2)
> > > +static bool slab_status_is_less_than(const void *item1, const void *item2, void *args)
> > >  {
> > >         const struct slab_status *info1 = item1;
> > >         const struct slab_status *info2 = item2;
> > > @@ -3300,7 +3300,7 @@ static bool slab_status_is_less_than(const void *item1, const void *item2)
> > >         return info1->slab_number < info2->slab_number;
> > >  }
> > >
> > > -static void swap_slab_statuses(void *item1, void *item2)
> > > +static void swap_slab_statuses(void *item1, void *item2, void *args)
> > >  {
> > >         struct slab_status *info1 = item1;
> > >         struct slab_status *info2 = item2;
> > > @@ -3523,7 +3523,7 @@ static int __must_check vdo_prepare_slabs_for_allocation(struct block_allocator
> > >         heap.heap.data = slab_statuses;
> > >         heap.heap.nr = allocator->slab_count;
> > >         heap.heap.size = allocator->slab_count;
> > > -       min_heapify_all(&heap, &slab_status_min_heap);
> > > +       min_heapify_all(&heap, &slab_status_min_heap, NULL);
> > >
> > >         while (heap.heap.nr > 0) {
> > >                 bool high_priority;
> > > @@ -3531,7 +3531,7 @@ static int __must_check vdo_prepare_slabs_for_allocation(struct block_allocator
> > >                 struct slab_journal *journal;
> > >
> > >                 current_slab_status = slab_statuses[0];
> > > -               min_heap_pop(&heap, &slab_status_min_heap);
> > > +               min_heap_pop(&heap, &slab_status_min_heap, NULL);
> > >                 slab = depot->slabs[current_slab_status.slab_number];
> > >
> > >                 if ((depot->load_type == VDO_SLAB_DEPOT_REBUILD_LOAD) ||
> > > diff --git a/include/linux/min_heap.h b/include/linux/min_heap.h
> > > index b1d874f4d536..97d8ba5c32e6 100644
> > > --- a/include/linux/min_heap.h
> > > +++ b/include/linux/min_heap.h
> > > @@ -40,8 +40,8 @@ struct _name {                                \
> > >   * @swp: Swap elements function.
> > >   */
> > >  struct min_heap_callbacks {
> > > -       bool (*less)(const void *lhs, const void *rhs);
> > > -       void (*swp)(void *lhs, void *rhs);
> > > +       bool (*less)(const void *lhs, const void *rhs, void *args);
> > > +       void (*swp)(void *lhs, void *rhs, void *args);
> > >  };
> > >
> > >  /* Initialize a min-heap. */
> > > @@ -79,7 +79,7 @@ bool __min_heap_full(struct __min_heap *heap)
> > >  /* Sift the element at pos down the heap. */
> > >  static __always_inline
> > >  void __min_heapify(struct __min_heap *heap, int pos, size_t elem_size,
> > > -               const struct min_heap_callbacks *func)
> > > +               const struct min_heap_callbacks *func, void *args)
> > >  {
> > >         void *left, *right;
> > >         void *data = heap->data;
> > > @@ -92,7 +92,7 @@ void __min_heapify(struct __min_heap *heap, int pos, size_t elem_size,
> > >                         break;
> > >                 left = data + (i * 2 + 1) * elem_size;
> > >                 right = data + (i * 2 + 2) * elem_size;
> > > -               i = func->less(left, right) ? i * 2 + 1 : i * 2 + 2;
> > > +               i = func->less(left, right, args) ? i * 2 + 1 : i * 2 + 2;
> > >         }
> > >
> > >         /* Special case for the last leaf with no sibling. */
> > > @@ -100,38 +100,38 @@ void __min_heapify(struct __min_heap *heap, int pos, size_t elem_size,
> > >                 i = i * 2 + 1;
> > >
> > >         /* Backtrack to the correct location. */
> > > -       while (i != pos && func->less(root, data + i * elem_size))
> > > +       while (i != pos && func->less(root, data + i * elem_size, args))
> > >                 i = (i - 1) / 2;
> > >
> > >         /* Shift the element into its correct place. */
> > >         j = i;
> > >         while (i != pos) {
> > >                 i = (i - 1) / 2;
> > > -               func->swp(data + i * elem_size, data + j * elem_size);
> > > +               func->swp(data + i * elem_size, data + j * elem_size, args);
> > >         }
> > >  }
> > >
> > > -#define min_heapify(_heap, _pos, _func)        \
> > > -       __min_heapify(&(_heap)->heap, _pos, __minheap_obj_size(_heap), _func)
> > > +#define min_heapify(_heap, _pos, _func, _args) \
> > > +       __min_heapify(&(_heap)->heap, _pos, __minheap_obj_size(_heap), _func, _args)
> > >
> > >  /* Floyd's approach to heapification that is O(nr). */
> > >  static __always_inline
> > >  void __min_heapify_all(struct __min_heap *heap, size_t elem_size,
> > > -               const struct min_heap_callbacks *func)
> > > +               const struct min_heap_callbacks *func, void *args)
> > >  {
> > >         int i;
> > >
> > >         for (i = heap->nr / 2 - 1; i >= 0; i--)
> > > -               __min_heapify(heap, i, elem_size, func);
> > > +               __min_heapify(heap, i, elem_size, func, args);
> > >  }
> > >
> > > -#define min_heapify_all(_heap, _func)  \
> > > -       __min_heapify_all(&(_heap)->heap, __minheap_obj_size(_heap), _func)
> > > +#define min_heapify_all(_heap, _func, _args)   \
> > > +       __min_heapify_all(&(_heap)->heap, __minheap_obj_size(_heap), _func, _args)
> > >
> > >  /* Remove minimum element from the heap, O(log2(nr)). */
> > >  static __always_inline
> > >  void __min_heap_pop(struct __min_heap *heap, size_t elem_size,
> > > -               const struct min_heap_callbacks *func)
> > > +               const struct min_heap_callbacks *func, void *args)
> > >  {
> > >         void *data = heap->data;
> > >
> > > @@ -141,11 +141,11 @@ void __min_heap_pop(struct __min_heap *heap, size_t elem_size,
> > >         /* Place last element at the root (position 0) and then sift down. */
> > >         heap->nr--;
> > >         memcpy(data, data + (heap->nr * elem_size), elem_size);
> > > -       __min_heapify(heap, 0, elem_size, func);
> > > +       __min_heapify(heap, 0, elem_size, func, args);
> > >  }
> > >
> > > -#define min_heap_pop(_heap, _func)     \
> > > -       __min_heap_pop(&(_heap)->heap, __minheap_obj_size(_heap), _func)
> > > +#define min_heap_pop(_heap, _func, _args)      \
> > > +       __min_heap_pop(&(_heap)->heap, __minheap_obj_size(_heap), _func, _args)
> > >
> > >  /*
> > >   * Remove the minimum element and then push the given element. The
> > > @@ -155,19 +155,20 @@ void __min_heap_pop(struct __min_heap *heap, size_t elem_size,
> > >  static __always_inline
> > >  void __min_heap_pop_push(struct __min_heap *heap,
> > >                 const void *element, size_t elem_size,
> > > -               const struct min_heap_callbacks *func)
> > > +               const struct min_heap_callbacks *func,
> > > +               void *args)
> > >  {
> > >         memcpy(heap->data, element, elem_size);
> > > -       __min_heapify(heap, 0, elem_size, func);
> > > +       __min_heapify(heap, 0, elem_size, func, args);
> > >  }
> > >
> > > -#define min_heap_pop_push(_heap, _element, _func)      \
> > > -       __min_heap_pop_push(&(_heap)->heap, _element, __minheap_obj_size(_heap), _func)
> > > +#define min_heap_pop_push(_heap, _element, _func, _args)       \
> > > +       __min_heap_pop_push(&(_heap)->heap, _element, __minheap_obj_size(_heap), _func, _args)
> > >
> > >  /* Push an element on to the heap, O(log2(nr)). */
> > >  static __always_inline
> > >  void __min_heap_push(struct __min_heap *heap, const void *element, size_t elem_size,
> > > -               const struct min_heap_callbacks *func)
> > > +               const struct min_heap_callbacks *func, void *args)
> > >  {
> > >         void *data = heap->data;
> > >         void *child, *parent;
> > > @@ -185,13 +186,13 @@ void __min_heap_push(struct __min_heap *heap, const void *element, size_t elem_s
> > >         for (; pos > 0; pos = (pos - 1) / 2) {
> > >                 child = data + (pos * elem_size);
> > >                 parent = data + ((pos - 1) / 2) * elem_size;
> > > -               if (func->less(parent, child))
> > > +               if (func->less(parent, child, args))
> > >                         break;
> > > -               func->swp(parent, child);
> > > +               func->swp(parent, child, args);
> > >         }
> > >  }
> > >
> > > -#define min_heap_push(_heap, _element, _func)  \
> > > -       __min_heap_push(&(_heap)->heap, _element, __minheap_obj_size(_heap), _func)
> > > +#define min_heap_push(_heap, _element, _func, _args)   \
> > > +       __min_heap_push(&(_heap)->heap, _element, __minheap_obj_size(_heap), _func, _args)
> > >
> > >  #endif /* _LINUX_MIN_HEAP_H */
> > > diff --git a/kernel/events/core.c b/kernel/events/core.c
> > > index 065dfaa8b009..f2a9044058ee 100644
> > > --- a/kernel/events/core.c
> > > +++ b/kernel/events/core.c
> > > @@ -3683,7 +3683,7 @@ void __perf_event_task_sched_out(struct task_struct *task,
> > >         perf_cgroup_switch(next);
> > >  }
> > >
> > > -static bool perf_less_group_idx(const void *l, const void *r)
> > > +static bool perf_less_group_idx(const void *l, const void *r, void *args)
> > >  {
> > >         const struct perf_event *le = *(const struct perf_event **)l;
> > >         const struct perf_event *re = *(const struct perf_event **)r;
> > > @@ -3691,7 +3691,7 @@ static bool perf_less_group_idx(const void *l, const void *r)
> > >         return le->group_index < re->group_index;
> > >  }
> > >
> > > -static void swap_ptr(void *l, void *r)
> > > +static void swap_ptr(void *l, void *r, void *args)
> > >  {
> > >         void **lp = l, **rp = r;
> > >
> > > @@ -3779,7 +3779,7 @@ static noinline int visit_groups_merge(struct perf_event_context *ctx,
> > >                 perf_assert_pmu_disabled((*evt)->pmu_ctx->pmu);
> > >         }
> > >
> > > -       min_heapify_all(&event_heap, &perf_min_heap);
> > > +       min_heapify_all(&event_heap, &perf_min_heap, NULL);
> > >
> > >         while (event_heap.heap.nr) {
> > >                 ret = func(*evt, data);
> > > @@ -3788,9 +3788,9 @@ static noinline int visit_groups_merge(struct perf_event_context *ctx,
> > >
> > >                 *evt = perf_event_groups_next(*evt, pmu);
> > >                 if (*evt)
> > > -                       min_heapify(&event_heap, 0, &perf_min_heap);
> > > +                       min_heapify(&event_heap, 0, &perf_min_heap, NULL);
> > >                 else
> > > -                       min_heap_pop(&event_heap, &perf_min_heap);
> > > +                       min_heap_pop(&event_heap, &perf_min_heap, NULL);
> > >         }
> > >
> > >         return 0;
> > > diff --git a/lib/test_min_heap.c b/lib/test_min_heap.c
> > > index af2e446034d8..b8859d17a19c 100644
> > > --- a/lib/test_min_heap.c
> > > +++ b/lib/test_min_heap.c
> > > @@ -13,17 +13,17 @@
> > >
> > >  MIN_HEAP(int, min_heap_test);
> > >
> > > -static __init bool less_than(const void *lhs, const void *rhs)
> > > +static __init bool less_than(const void *lhs, const void *rhs, void *args)
> > >  {
> > >         return *(int *)lhs < *(int *)rhs;
> > >  }
> > >
> > > -static __init bool greater_than(const void *lhs, const void *rhs)
> > > +static __init bool greater_than(const void *lhs, const void *rhs, void *args)
> > >  {
> > >         return *(int *)lhs > *(int *)rhs;
> > >  }
> > >
> > > -static __init void swap_ints(void *lhs, void *rhs)
> > > +static __init void swap_ints(void *lhs, void *rhs, void *args)
> > >  {
> > >         int temp = *(int *)lhs;
> > >
> > > @@ -40,7 +40,7 @@ static __init int pop_verify_heap(bool min_heap,
> > >         int last;
> > >
> > >         last = values[0];
> > > -       min_heap_pop(heap, funcs);
> > > +       min_heap_pop(heap, funcs, NULL);
> > >         while (heap->heap.nr > 0) {
> > >                 if (min_heap) {
> > >                         if (last > values[0]) {
> > > @@ -56,7 +56,7 @@ static __init int pop_verify_heap(bool min_heap,
> > >                         }
> > >                 }
> > >                 last = values[0];
> > > -               min_heap_pop(heap, funcs);
> > > +               min_heap_pop(heap, funcs, NULL);
> > >         }
> > >         return err;
> > >  }
> > > @@ -77,7 +77,7 @@ static __init int test_heapify_all(bool min_heap)
> > >         int i, err;
> > >
> > >         /* Test with known set of values. */
> > > -       min_heapify_all(&heap, &funcs);
> > > +       min_heapify_all(&heap, &funcs, NULL);
> > >         err = pop_verify_heap(min_heap, &heap, &funcs);
> > >
> > >
> > > @@ -86,7 +86,7 @@ static __init int test_heapify_all(bool min_heap)
> > >         for (i = 0; i < heap.heap.nr; i++)
> > >                 values[i] = get_random_u32();
> > >
> > > -       min_heapify_all(&heap, &funcs);
> > > +       min_heapify_all(&heap, &funcs, NULL);
> > >         err += pop_verify_heap(min_heap, &heap, &funcs);
> > >
> > >         return err;
> > > @@ -110,14 +110,14 @@ static __init int test_heap_push(bool min_heap)
> > >
> > >         /* Test with known set of values copied from data. */
> > >         for (i = 0; i < ARRAY_SIZE(data); i++)
> > > -               min_heap_push(&heap, &data[i], &funcs);
> > > +               min_heap_push(&heap, &data[i], &funcs, NULL);
> > >
> > >         err = pop_verify_heap(min_heap, &heap, &funcs);
> > >
> > >         /* Test with randomly generated values. */
> > >         while (heap.heap.nr < heap.heap.size) {
> > >                 temp = get_random_u32();
> > > -               min_heap_push(&heap, &temp, &funcs);
> > > +               min_heap_push(&heap, &temp, &funcs, NULL);
> > >         }
> > >         err += pop_verify_heap(min_heap, &heap, &funcs);
> > >
> > > @@ -143,22 +143,22 @@ static __init int test_heap_pop_push(bool min_heap)
> > >         /* Fill values with data to pop and replace. */
> > >         temp = min_heap ? 0x80000000 : 0x7FFFFFFF;
> > >         for (i = 0; i < ARRAY_SIZE(data); i++)
> > > -               min_heap_push(&heap, &temp, &funcs);
> > > +               min_heap_push(&heap, &temp, &funcs, NULL);
> > >
> > >         /* Test with known set of values copied from data. */
> > >         for (i = 0; i < ARRAY_SIZE(data); i++)
> > > -               min_heap_pop_push(&heap, &data[i], &funcs);
> > > +               min_heap_pop_push(&heap, &data[i], &funcs, NULL);
> > >
> > >         err = pop_verify_heap(min_heap, &heap, &funcs);
> > >
> > >         heap.heap.nr = 0;
> > >         for (i = 0; i < ARRAY_SIZE(data); i++)
> > > -               min_heap_push(&heap, &temp, &funcs);
> > > +               min_heap_push(&heap, &temp, &funcs, NULL);
> > >
> > >         /* Test with randomly generated values. */
> > >         for (i = 0; i < ARRAY_SIZE(data); i++) {
> > >                 temp = get_random_u32();
> > > -               min_heap_pop_push(&heap, &temp, &funcs);
> > > +               min_heap_pop_push(&heap, &temp, &funcs, NULL);
> > >         }
> > >         err += pop_verify_heap(min_heap, &heap, &funcs);
> > >
> > > --
> > > 2.34.1
> > >

