linux-kernel.vger.kernel.org archive mirror
* [RESEND PATCH] Don't reinvent the wheel but use existing llist API
@ 2017-07-10  2:37 Byungchul Park
  2017-07-10  2:37 ` [RESEND PATCH] bcache: Don't reinvent the wheel but use existing llist API Byungchul Park
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Byungchul Park @ 2017-07-10  2:37 UTC (permalink / raw)
  To: torvalds; +Cc: axboe, viro, linux-kernel, kernel-team

Hello Linus,

Even though llist APIs exist, several places in the kernel reinvent the
implementation again and again. Since I think it's *worth* converting
them to the existing APIs, I am submitting patches that do so.

Actually I've submitted them to the maintainers and committers about 10
times over 4 months, but they seem to be too busy to review or take
them. Of course you might be even busier, but it would be appreciated
if you could do it instead. It shouldn't take much of your time since
the patches are simple.

The patches for the scheduler, irq_work, vhost/scsi and raid have
already been taken by the proper maintainers. For now, only the ones
for bcache, mm, fput and namespace remain.

Thank you,
Byungchul

^ permalink raw reply	[flat|nested] 7+ messages in thread
* [RESEND PATCH] mm: Don't reinvent the wheel but use existing llist API
@ 2017-08-07  8:42 Byungchul Park
  0 siblings, 0 replies; 7+ messages in thread
From: Byungchul Park @ 2017-08-07  8:42 UTC (permalink / raw)
  To: akpm, zijun_hu, mhocko, vbabka, joelaf, aryabinin
  Cc: linux-mm, linux-kernel, kernel-team

Although llist provides proper iteration APIs, mm/vmalloc.c open-codes
the list walk instead of using them. Convert it to llist_for_each_safe().

Signed-off-by: Byungchul Park <byungchul.park@lge.com>
---
 mm/vmalloc.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 3ca82d4..8c0eb45 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -49,12 +49,10 @@ struct vfree_deferred {
 static void free_work(struct work_struct *w)
 {
 	struct vfree_deferred *p = container_of(w, struct vfree_deferred, wq);
-	struct llist_node *llnode = llist_del_all(&p->list);
-	while (llnode) {
-		void *p = llnode;
-		llnode = llist_next(llnode);
-		__vunmap(p, 1);
-	}
+	struct llist_node *t, *llnode;
+
+	llist_for_each_safe(llnode, t, llist_del_all(&p->list))
+		__vunmap((void *)llnode, 1);
 }
 
 /*** Page table manipulation functions ***/
-- 
1.9.1

* [RESEND PATCH] llist: Provide a safe version for llist_for_each
@ 2017-05-12  0:36 Byungchul Park
  2017-05-12  5:55 ` [RESEND PATCH] mm: Don't reinvent the wheel but use existing llist API Byungchul Park
  0 siblings, 1 reply; 7+ messages in thread
From: Byungchul Park @ 2017-05-12  0:36 UTC (permalink / raw)
  To: peterz; +Cc: linux-kernel, kernel-team

Sometimes we have to dereference the next field of an llist node before
running the loop body, because the node might be freed or its next
field might be modified within the loop. So add a safe version of
llist_for_each, namely llist_for_each_safe.

Signed-off-by: Byungchul Park <byungchul.park@lge.com>
Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
---
 include/linux/llist.h | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/include/linux/llist.h b/include/linux/llist.h
index fd4ca0b..b90c9f2 100644
--- a/include/linux/llist.h
+++ b/include/linux/llist.h
@@ -105,6 +105,25 @@ static inline void init_llist_head(struct llist_head *list)
 	for ((pos) = (node); pos; (pos) = (pos)->next)
 
 /**
+ * llist_for_each_safe - iterate over some deleted entries of a lock-less list
+ *			 safe against removal of list entry
+ * @pos:	the &struct llist_node to use as a loop cursor
+ * @n:		another &struct llist_node to use as temporary storage
+ * @node:	the first entry of deleted list entries
+ *
+ * In general, some entries of the lock-less list can be traversed
+ * safely only after being deleted from list, so start with an entry
+ * instead of list head.
+ *
+ * If being used on entries deleted from lock-less list directly, the
+ * traverse order is from the newest to the oldest added entry.  If
+ * you want to traverse from the oldest to the newest, you must
+ * reverse the order by yourself before traversing.
+ */
+#define llist_for_each_safe(pos, n, node)			\
+	for ((pos) = (node); (pos) && ((n) = (pos)->next, true); (pos) = (n))
+
+/**
  * llist_for_each_entry - iterate over some deleted entries of lock-less list of given type
  * @pos:	the type * to use as a loop cursor.
  * @node:	the first entry of deleted list entries.
-- 
1.9.1


end of thread, other threads:[~2017-08-07  8:44 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-07-10  2:37 [RESEND PATCH] Don't reinvent the wheel but use existing llist API Byungchul Park
2017-07-10  2:37 ` [RESEND PATCH] bcache: Don't reinvent the wheel but use existing llist API Byungchul Park
2017-07-10  2:37 ` [RESEND PATCH] fput: " Byungchul Park
2017-07-10  2:37 ` [RESEND PATCH] mm: " Byungchul Park
2017-07-10  2:37 ` [RESEND PATCH] namespace.c: " Byungchul Park
  -- strict thread matches above, loose matches on Subject: below --
2017-08-07  8:42 [RESEND PATCH] mm: " Byungchul Park
2017-05-12  0:36 [RESEND PATCH] llist: Provide a safe version for llist_for_each Byungchul Park
2017-05-12  5:55 ` [RESEND PATCH] mm: Don't reinvent the wheel but use existing llist API Byungchul Park
