linux-kernel.vger.kernel.org archive mirror
* [PATCH v2 0/6] rds: rdma: Add ability to force GFP_NOIO
@ 2024-05-15 12:53 Håkon Bugge
  2024-05-15 12:53 ` [PATCH v2 1/6] workqueue: Inherit NOIO and NOFS alloc flags Håkon Bugge
                   ` (6 more replies)
  0 siblings, 7 replies; 17+ messages in thread
From: Håkon Bugge @ 2024-05-15 12:53 UTC (permalink / raw)
  To: linux-rdma, linux-kernel, netdev, rds-devel
  Cc: Jason Gunthorpe, Leon Romanovsky, Saeed Mahameed, Tariq Toukan,
	David S . Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Tejun Heo, Lai Jiangshan, Allison Henderson, Manjunath Patil,
	Mark Zhang, Håkon Bugge, Chuck Lever, Shiraz Saleem,
	Yang Li

This series enables RDS and the RDMA stack to be used as a block I/O
device. This is to support a filesystem on top of a raw block device
which uses RDS and the RDMA stack as the network transport layer.

Under intense memory pressure, we get memory reclaims. Assume the
filesystem reclaims memory, goes to the raw block device, which calls
into RDS, which calls the RDMA stack. Now, if regular GFP_KERNEL
allocations in RDS or the RDMA stack require reclaims to be fulfilled,
we end up in a circular dependency.

We break this circular dependency by:

1. Force all allocations in RDS and the relevant RDMA stack to use
   GFP_NOIO, by means of a parenthetic use of
   memalloc_noio_{save,restore} on all relevant entry points (see the
   sketch below).

2. Make sure work-queues inherit current->flags
   wrt. PF_MEMALLOC_{NOIO,NOFS}, such that work executed on the
   work-queue inherits the same flag(s).
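
As a sketch of item 1, each relevant entry point is bracketed roughly
as follows (illustrative only; example_entry_point() and the force_noio
parameter are stand-ins for the real code in the individual patches):

#include <linux/sched/mm.h>
#include <linux/slab.h>

static bool force_noio;	/* module parameter, as in the individual patches */

static int example_entry_point(void)
{
	unsigned int noio_flags;
	void *buf;

	if (force_noio)
		noio_flags = memalloc_noio_save();

	/* a regular GFP_KERNEL allocation now behaves as GFP_NOIO */
	buf = kmalloc(128, GFP_KERNEL);
	kfree(buf);

	if (force_noio)
		memalloc_noio_restore(noio_flags);
	return buf ? 0 : -ENOMEM;
}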

Håkon Bugge (6):
  workqueue: Inherit NOIO and NOFS alloc flags
  rds: Brute force GFP_NOIO
  RDMA/cma: Brute force GFP_NOIO
  RDMA/cm: Brute force GFP_NOIO
  RDMA/mlx5: Brute force GFP_NOIO
  net/mlx5: Brute force GFP_NOIO

 drivers/infiniband/core/cm.c                  | 15 ++++-
 drivers/infiniband/core/cma.c                 | 20 ++++++-
 drivers/infiniband/hw/mlx5/main.c             | 22 +++++--
 .../net/ethernet/mellanox/mlx5/core/main.c    | 14 ++++-
 include/linux/workqueue.h                     |  2 +
 kernel/workqueue.c                            | 21 +++++++
 net/rds/af_rds.c                              | 59 ++++++++++++++++++-
 7 files changed, 141 insertions(+), 12 deletions(-)

--
2.45.0


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v2 1/6] workqueue: Inherit NOIO and NOFS alloc flags
  2024-05-15 12:53 [PATCH v2 0/6] rds: rdma: Add ability to force GFP_NOIO Håkon Bugge
@ 2024-05-15 12:53 ` Håkon Bugge
  2024-05-15 16:54   ` Tejun Heo
  2024-05-15 12:53 ` [PATCH v2 2/6] rds: Brute force GFP_NOIO Håkon Bugge
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Håkon Bugge @ 2024-05-15 12:53 UTC (permalink / raw)
  To: linux-rdma, linux-kernel, netdev, rds-devel
  Cc: Jason Gunthorpe, Leon Romanovsky, Saeed Mahameed, Tariq Toukan,
	David S . Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Tejun Heo, Lai Jiangshan, Allison Henderson, Manjunath Patil,
	Mark Zhang, Håkon Bugge, Chuck Lever, Shiraz Saleem,
	Yang Li

For drivers/modules that create a work-queue while running inside a
memalloc_{noio,nofs}_{save,restore} region, we make sure that work
executed on that work-queue inherits the same flag(s).

This is done in order to conditionally enable drivers to work aligned
with block I/O devices. This commit makes sure that any work queued
later on work-queues created during module initialization, while
current->flags has PF_MEMALLOC_{NOIO,NOFS} set, will inherit the same
flags.
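
For illustration, a hypothetical module init (not part of this series)
that opts in would look roughly as follows; with this commit, work
items queued on example_wq later run with the NOIO scope applied:

#include <linux/module.h>
#include <linux/sched/mm.h>
#include <linux/workqueue.h>

static struct workqueue_struct *example_wq;	/* hypothetical */

static int __init example_init(void)
{
	unsigned int noio_flags;

	noio_flags = memalloc_noio_save();
	/* PF_MEMALLOC_NOIO is set here, so the workqueue is marked __WQ_NOIO */
	example_wq = alloc_ordered_workqueue("example_wq", WQ_MEM_RECLAIM);
	memalloc_noio_restore(noio_flags);

	return example_wq ? 0 : -ENOMEM;
}
module_init(example_init);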

We do this in order to enable drivers to be used as a network block
I/O device. This is to support XFS or other file-systems on top of a
raw block device which uses said drivers as the network transport
layer.

Under intense memory pressure, we get memory reclaims. Assume the
file-system reclaims memory, goes to the raw block device, which calls
into said drivers. Now, if regular GFP_KERNEL allocations in the
drivers require reclaims to be fulfilled, we end up in a circular
dependency.

We break this circular dependency by:

1. Force all allocations in the drivers to use GFP_NOIO, by means of a
   parenthetic use of memalloc_noio_{save,restore} on all relevant
   entry points.

2. Make sure work-queues inherit current->flags
   wrt. PF_MEMALLOC_{NOIO,NOFS}, such that work executed on the
   work-queue inherits the same flag(s). That is what this commit
   contributes.

Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>

---

v1 -> v2:
   * Added missing hunk in alloc_workqueue()
---
 include/linux/workqueue.h |  2 ++
 kernel/workqueue.c        | 21 +++++++++++++++++++++
 2 files changed, 23 insertions(+)

diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index 158784dd189ab..09ecc692ffcae 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -398,6 +398,8 @@ enum wq_flags {
 	__WQ_DRAINING		= 1 << 16, /* internal: workqueue is draining */
 	__WQ_ORDERED		= 1 << 17, /* internal: workqueue is ordered */
 	__WQ_LEGACY		= 1 << 18, /* internal: create*_workqueue() */
+	__WQ_NOIO               = 1 << 19, /* internal: execute work with NOIO */
+	__WQ_NOFS               = 1 << 20, /* internal: execute work with NOFS */
 
 	/* BH wq only allows the following flags */
 	__WQ_BH_ALLOWS		= WQ_BH | WQ_HIGHPRI,
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index d2dbe099286b9..8eb7562372ce2 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -51,6 +51,7 @@
 #include <linux/uaccess.h>
 #include <linux/sched/isolation.h>
 #include <linux/sched/debug.h>
+#include <linux/sched/mm.h>
 #include <linux/nmi.h>
 #include <linux/kvm_para.h>
 #include <linux/delay.h>
@@ -3172,6 +3173,10 @@ __acquires(&pool->lock)
 	unsigned long work_data;
 	int lockdep_start_depth, rcu_start_depth;
 	bool bh_draining = pool->flags & POOL_BH_DRAINING;
+	bool use_noio_allocs = pwq->wq->flags & __WQ_NOIO;
+	bool use_nofs_allocs = pwq->wq->flags & __WQ_NOFS;
+	unsigned long noio_flags;
+	unsigned long nofs_flags;
 #ifdef CONFIG_LOCKDEP
 	/*
 	 * It is permissible to free the struct work_struct from
@@ -3184,6 +3189,12 @@ __acquires(&pool->lock)
 
 	lockdep_copy_map(&lockdep_map, &work->lockdep_map);
 #endif
+	/* Set inherited alloc flags */
+	if (use_noio_allocs)
+		noio_flags = memalloc_noio_save();
+	if (use_nofs_allocs)
+		nofs_flags = memalloc_nofs_save();
+
 	/* ensure we're on the correct CPU */
 	WARN_ON_ONCE(!(pool->flags & POOL_DISASSOCIATED) &&
 		     raw_smp_processor_id() != pool->cpu);
@@ -3320,6 +3331,12 @@ __acquires(&pool->lock)
 
 	/* must be the last step, see the function comment */
 	pwq_dec_nr_in_flight(pwq, work_data);
+
+	/* Restore alloc flags */
+	if (use_nofs_allocs)
+		memalloc_nofs_restore(nofs_flags);
+	if (use_noio_allocs)
+		memalloc_noio_restore(noio_flags);
 }
 
 /**
@@ -5583,6 +5600,10 @@ struct workqueue_struct *alloc_workqueue(const char *fmt,
 
 	/* init wq */
 	wq->flags = flags;
+	if (current->flags & PF_MEMALLOC_NOIO)
+		wq->flags |= __WQ_NOIO;
+	if (current->flags & PF_MEMALLOC_NOFS)
+		wq->flags |= __WQ_NOFS;
 	wq->max_active = max_active;
 	wq->min_active = min(max_active, WQ_DFL_MIN_ACTIVE);
 	wq->saved_max_active = wq->max_active;
-- 
2.45.0


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v2 2/6] rds: Brute force GFP_NOIO
  2024-05-15 12:53 [PATCH v2 0/6] rds: rdma: Add ability to force GFP_NOIO Håkon Bugge
  2024-05-15 12:53 ` [PATCH v2 1/6] workqueue: Inherit NOIO and NOFS alloc flags Håkon Bugge
@ 2024-05-15 12:53 ` Håkon Bugge
  2024-05-15 12:53 ` [PATCH v2 3/6] RDMA/cma: " Håkon Bugge
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 17+ messages in thread
From: Håkon Bugge @ 2024-05-15 12:53 UTC (permalink / raw)
  To: linux-rdma, linux-kernel, netdev, rds-devel
  Cc: Jason Gunthorpe, Leon Romanovsky, Saeed Mahameed, Tariq Toukan,
	David S . Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Tejun Heo, Lai Jiangshan, Allison Henderson, Manjunath Patil,
	Mark Zhang, Håkon Bugge, Chuck Lever, Shiraz Saleem,
	Yang Li

For most entry points to RDS, we call memalloc_noio_{save,restore} in
a parenthetic fashion when enabled by the module parameter force_noio.

We skip the calls to memalloc_noio_{save,restore} in rds_ioctl(), as
no memory allocations are executed in this function or its callees.

The reason we execute memalloc_noio_{save,restore} in rds_poll() is
the following call chain:

rds_poll()
        poll_wait()
                __pollwait()
                        poll_get_entry()
                                __get_free_page(GFP_KERNEL)

The function rds_setsockopt() allocates memory in its callees
rds_get_mr() and rds_get_mr_for_dest(). Hence, we need
memalloc_noio_{save,restore} in rds_setsockopt().

In rds_getsockopt(), we have rds_info_getsockopt() that allocates
memory. Hence, we need memalloc_noio_{save,restore} in
rds_getsockopt().

All of the above is done in order to conditionally enable RDS to
become a block I/O device.

Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>

---

v1 -> v2:
   * s/EXPORT_SYMBOL/static/ for the rds_force_noio variable, as
     pointed out by Simon
   * Straightened the reverse xmas tree in two places
   * Fixed a cut-and-paste error in rds_cancel_sent_to() where I had two
     _save()s and no _restore(), as reported by Simon
---
 net/rds/af_rds.c | 59 +++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 56 insertions(+), 3 deletions(-)

diff --git a/net/rds/af_rds.c b/net/rds/af_rds.c
index 8435a20968ef5..846ad20b3783a 100644
--- a/net/rds/af_rds.c
+++ b/net/rds/af_rds.c
@@ -37,10 +37,15 @@
 #include <linux/in.h>
 #include <linux/ipv6.h>
 #include <linux/poll.h>
+#include <linux/sched/mm.h>
 #include <net/sock.h>
 
 #include "rds.h"
 
+static bool rds_force_noio;
+module_param_named(force_noio, rds_force_noio, bool, 0444);
+MODULE_PARM_DESC(force_noio, "Force the use of GFP_NOIO (Y/N)");
+
 /* this is just used for stats gathering :/ */
 static DEFINE_SPINLOCK(rds_sock_lock);
 static unsigned long rds_sock_count;
@@ -59,8 +64,12 @@ DECLARE_WAIT_QUEUE_HEAD(rds_poll_waitq);
 static int rds_release(struct socket *sock)
 {
 	struct sock *sk = sock->sk;
+	unsigned int noio_flags;
 	struct rds_sock *rs;
 
+	if (rds_force_noio)
+		noio_flags = memalloc_noio_save();
+
 	if (!sk)
 		goto out;
 
@@ -90,6 +99,8 @@ static int rds_release(struct socket *sock)
 	sock->sk = NULL;
 	sock_put(sk);
 out:
+	if (rds_force_noio)
+		memalloc_noio_restore(noio_flags);
 	return 0;
 }
 
@@ -214,9 +225,13 @@ static __poll_t rds_poll(struct file *file, struct socket *sock,
 {
 	struct sock *sk = sock->sk;
 	struct rds_sock *rs = rds_sk_to_rs(sk);
+	unsigned int noio_flags;
 	__poll_t mask = 0;
 	unsigned long flags;
 
+	if (rds_force_noio)
+		noio_flags = memalloc_noio_save();
+
 	poll_wait(file, sk_sleep(sk), wait);
 
 	if (rs->rs_seen_congestion)
@@ -249,6 +264,8 @@ static __poll_t rds_poll(struct file *file, struct socket *sock,
 	if (mask)
 		rs->rs_seen_congestion = 0;
 
+	if (rds_force_noio)
+		memalloc_noio_restore(noio_flags);
 	return mask;
 }
 
@@ -293,9 +310,13 @@ static int rds_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
 static int rds_cancel_sent_to(struct rds_sock *rs, sockptr_t optval, int len)
 {
 	struct sockaddr_in6 sin6;
+	unsigned int noio_flags;
 	struct sockaddr_in sin;
 	int ret = 0;
 
+	if (rds_force_noio)
+		noio_flags = memalloc_noio_save();
+
 	/* racing with another thread binding seems ok here */
 	if (ipv6_addr_any(&rs->rs_bound_addr)) {
 		ret = -ENOTCONN; /* XXX not a great errno */
@@ -324,6 +345,8 @@ static int rds_cancel_sent_to(struct rds_sock *rs, sockptr_t optval, int len)
 
 	rds_send_drop_to(rs, &sin6);
 out:
+	if (rds_force_noio)
+		memalloc_noio_restore(noio_flags);
 	return ret;
 }
 
@@ -485,8 +508,12 @@ static int rds_getsockopt(struct socket *sock, int level, int optname,
 {
 	struct rds_sock *rs = rds_sk_to_rs(sock->sk);
 	int ret = -ENOPROTOOPT, len;
+	unsigned int noio_flags;
 	int trans;
 
+	if (rds_force_noio)
+		noio_flags = memalloc_noio_save();
+
 	if (level != SOL_RDS)
 		goto out;
 
@@ -529,6 +556,8 @@ static int rds_getsockopt(struct socket *sock, int level, int optname,
 	}
 
 out:
+	if (rds_force_noio)
+		memalloc_noio_restore(noio_flags);
 	return ret;
 
 }
@@ -538,12 +567,16 @@ static int rds_connect(struct socket *sock, struct sockaddr *uaddr,
 {
 	struct sock *sk = sock->sk;
 	struct sockaddr_in *sin;
+	unsigned int noio_flags;
 	struct rds_sock *rs = rds_sk_to_rs(sk);
 	int ret = 0;
 
 	if (addr_len < offsetofend(struct sockaddr, sa_family))
 		return -EINVAL;
 
+	if (rds_force_noio)
+		noio_flags = memalloc_noio_save();
+
 	lock_sock(sk);
 
 	switch (uaddr->sa_family) {
@@ -626,6 +659,8 @@ static int rds_connect(struct socket *sock, struct sockaddr *uaddr,
 	}
 
 	release_sock(sk);
+	if (rds_force_noio)
+		memalloc_noio_restore(noio_flags);
 	return ret;
 }
 
@@ -697,16 +732,28 @@ static int __rds_create(struct socket *sock, struct sock *sk, int protocol)
 static int rds_create(struct net *net, struct socket *sock, int protocol,
 		      int kern)
 {
+	unsigned int noio_flags;
 	struct sock *sk;
+	int ret;
 
 	if (sock->type != SOCK_SEQPACKET || protocol)
 		return -ESOCKTNOSUPPORT;
 
+	if (rds_force_noio)
+		noio_flags = memalloc_noio_save();
+
 	sk = sk_alloc(net, AF_RDS, GFP_KERNEL, &rds_proto, kern);
-	if (!sk)
-		return -ENOMEM;
+	if (!sk) {
+		ret = -ENOMEM;
+		goto out;
+	}
 
-	return __rds_create(sock, sk, protocol);
+	ret = __rds_create(sock, sk, protocol);
+out:
+	if (rds_force_noio)
+		memalloc_noio_restore(noio_flags);
+
+	return ret;
 }
 
 void rds_sock_addref(struct rds_sock *rs)
@@ -895,8 +942,12 @@ u32 rds_gen_num;
 
 static int __init rds_init(void)
 {
+	unsigned int noio_flags;
 	int ret;
 
+	if (rds_force_noio)
+		noio_flags = memalloc_noio_save();
+
 	net_get_random_once(&rds_gen_num, sizeof(rds_gen_num));
 
 	ret = rds_bind_lock_init();
@@ -947,6 +998,8 @@ static int __init rds_init(void)
 out_bind:
 	rds_bind_lock_destroy();
 out:
+	if (rds_force_noio)
+		memalloc_noio_restore(noio_flags);
 	return ret;
 }
 module_init(rds_init);
-- 
2.45.0


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v2 3/6] RDMA/cma: Brute force GFP_NOIO
  2024-05-15 12:53 [PATCH v2 0/6] rds: rdma: Add ability to force GFP_NOIO Håkon Bugge
  2024-05-15 12:53 ` [PATCH v2 1/6] workqueue: Inherit NOIO and NOFS alloc flags Håkon Bugge
  2024-05-15 12:53 ` [PATCH v2 2/6] rds: Brute force GFP_NOIO Håkon Bugge
@ 2024-05-15 12:53 ` Håkon Bugge
  2024-05-16  7:37   ` Zhu Yanjun
  2024-05-15 12:53 ` [PATCH v2 4/6] RDMA/cm: " Håkon Bugge
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Håkon Bugge @ 2024-05-15 12:53 UTC (permalink / raw)
  To: linux-rdma, linux-kernel, netdev, rds-devel
  Cc: Jason Gunthorpe, Leon Romanovsky, Saeed Mahameed, Tariq Toukan,
	David S . Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Tejun Heo, Lai Jiangshan, Allison Henderson, Manjunath Patil,
	Mark Zhang, Håkon Bugge, Chuck Lever, Shiraz Saleem,
	Yang Li

In cma_init(), we call memalloc_noio_{save,restore} in a parenthetic
fashion when enabled by the module parameter force_noio.

This is done in order to conditionally enable rdma_cm to work aligned
with block I/O devices. Any work queued later on work-queues created
during module initialization will inherit the PF_MEMALLOC_{NOIO,NOFS}
flag(s), due to commit ("workqueue: Inherit NOIO and NOFS alloc
flags").

We do this in order to enable ULPs using the RDMA stack to be used as
a network block I/O device. This is to support a filesystem on top of
a raw block device which uses said ULP(s) and the RDMA stack as the
network transport layer.

Under intense memory pressure, we get memory reclaims. Assume the
filesystem reclaims memory, goes to the raw block device, which calls
into the ULP in question, which calls the RDMA stack. Now, if
regular GFP_KERNEL allocations in the ULP or the RDMA stack require
reclaims to be fulfilled, we end up in a circular dependency.

We break this circular dependency by:

1. Force all allocations in the ULP and the relevant RDMA stack to use
   GFP_NOIO, by means of a parenthetic use of
   memalloc_noio_{save,restore} on all relevant entry points.

2. Make sure work-queues inherit current->flags
   wrt. PF_MEMALLOC_{NOIO,NOFS}, such that work executed on the
   work-queue inherits the same flag(s).

Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
---
 drivers/infiniband/core/cma.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
index 1e2cd7c8716e8..23a50cc3e81cb 100644
--- a/drivers/infiniband/core/cma.c
+++ b/drivers/infiniband/core/cma.c
@@ -50,6 +50,10 @@ MODULE_LICENSE("Dual BSD/GPL");
 #define CMA_IBOE_PACKET_LIFETIME 16
 #define CMA_PREFERRED_ROCE_GID_TYPE IB_GID_TYPE_ROCE_UDP_ENCAP
 
+static bool cma_force_noio;
+module_param_named(force_noio, cma_force_noio, bool, 0444);
+MODULE_PARM_DESC(force_noio, "Force the use of GFP_NOIO (Y/N)");
+
 static const char * const cma_events[] = {
 	[RDMA_CM_EVENT_ADDR_RESOLVED]	 = "address resolved",
 	[RDMA_CM_EVENT_ADDR_ERROR]	 = "address error",
@@ -5424,6 +5428,10 @@ static struct pernet_operations cma_pernet_operations = {
 static int __init cma_init(void)
 {
 	int ret;
+	unsigned int noio_flags;
+
+	if (cma_force_noio)
+		noio_flags = memalloc_noio_save();
 
 	/*
 	 * There is a rare lock ordering dependency in cma_netdev_callback()
@@ -5439,8 +5447,10 @@ static int __init cma_init(void)
 	}
 
 	cma_wq = alloc_ordered_workqueue("rdma_cm", WQ_MEM_RECLAIM);
-	if (!cma_wq)
-		return -ENOMEM;
+	if (!cma_wq) {
+		ret = -ENOMEM;
+		goto out;
+	}
 
 	ret = register_pernet_subsys(&cma_pernet_operations);
 	if (ret)
@@ -5458,7 +5468,8 @@ static int __init cma_init(void)
 	if (ret)
 		goto err_ib;
 
-	return 0;
+	ret = 0;
+	goto out;
 
 err_ib:
 	ib_unregister_client(&cma_client);
@@ -5469,6 +5480,9 @@ static int __init cma_init(void)
 	unregister_pernet_subsys(&cma_pernet_operations);
 err_wq:
 	destroy_workqueue(cma_wq);
+out:
+	if (cma_force_noio)
+		memalloc_noio_restore(noio_flags);
 	return ret;
 }
 
-- 
2.45.0


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v2 4/6] RDMA/cm: Brute force GFP_NOIO
  2024-05-15 12:53 [PATCH v2 0/6] rds: rdma: Add ability to force GFP_NOIO Håkon Bugge
                   ` (2 preceding siblings ...)
  2024-05-15 12:53 ` [PATCH v2 3/6] RDMA/cma: " Håkon Bugge
@ 2024-05-15 12:53 ` Håkon Bugge
  2024-05-15 12:53 ` [PATCH v2 5/6] RDMA/mlx5: " Håkon Bugge
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 17+ messages in thread
From: Håkon Bugge @ 2024-05-15 12:53 UTC (permalink / raw)
  To: linux-rdma, linux-kernel, netdev, rds-devel
  Cc: Jason Gunthorpe, Leon Romanovsky, Saeed Mahameed, Tariq Toukan,
	David S . Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Tejun Heo, Lai Jiangshan, Allison Henderson, Manjunath Patil,
	Mark Zhang, Håkon Bugge, Chuck Lever, Shiraz Saleem,
	Yang Li

In ib_cm_init(), we call memalloc_noio_{save,restore} in a parenthetic
fashion when enabled by the module parameter force_noio.

This is done in order to conditionally enable ib_cm to work aligned
with block I/O devices. Any work queued later on work-queues created
during module initialization will inherit the PF_MEMALLOC_{NOIO,NOFS}
flag(s), due to commit ("workqueue: Inherit NOIO and NOFS alloc
flags").

We do this in order to enable ULPs using the RDMA stack to be used as
a network block I/O device. This is to support a filesystem on top of
a raw block device which uses said ULP(s) and the RDMA stack as the
network transport layer.

Under intense memory pressure, we get memory reclaims. Assume the
filesystem reclaims memory, goes to the raw block device, which calls
into the ULP in question, which calls the RDMA stack. Now, if regular
GFP_KERNEL allocations in the ULP or the RDMA stack require reclaims
to be fulfilled, we end up in a circular dependency.

We break this circular dependency by:

1. Force all allocations in the ULP and the relevant RDMA stack to use
   GFP_NOIO, by means of a parenthetic use of
   memalloc_noio_{save,restore} on all relevant entry points.

2. Make sure work-queues inherit current->flags
   wrt. PF_MEMALLOC_{NOIO,NOFS}, such that work executed on the
   work-queue inherits the same flag(s).

Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
---
 drivers/infiniband/core/cm.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 07fb8d3c037f0..767eec38eb57d 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -22,6 +22,7 @@
 #include <linux/workqueue.h>
 #include <linux/kdev_t.h>
 #include <linux/etherdevice.h>
+#include <linux/sched/mm.h>
 
 #include <rdma/ib_cache.h>
 #include <rdma/ib_cm.h>
@@ -35,6 +36,11 @@ MODULE_DESCRIPTION("InfiniBand CM");
 MODULE_LICENSE("Dual BSD/GPL");
 
 #define CM_DESTROY_ID_WAIT_TIMEOUT 10000 /* msecs */
+
+static bool cm_force_noio;
+module_param_named(force_noio, cm_force_noio, bool, 0444);
+MODULE_PARM_DESC(force_noio, "Force the use of GFP_NOIO (Y/N)");
+
 static const char * const ibcm_rej_reason_strs[] = {
 	[IB_CM_REJ_NO_QP]			= "no QP",
 	[IB_CM_REJ_NO_EEC]			= "no EEC",
@@ -4504,6 +4510,10 @@ static void cm_remove_one(struct ib_device *ib_device, void *client_data)
 static int __init ib_cm_init(void)
 {
 	int ret;
+	unsigned int noio_flags;
+
+	if (cm_force_noio)
+		noio_flags = memalloc_noio_save();
 
 	INIT_LIST_HEAD(&cm.device_list);
 	rwlock_init(&cm.device_lock);
@@ -4527,10 +4537,13 @@ static int __init ib_cm_init(void)
 	if (ret)
 		goto error3;
 
-	return 0;
+	goto error2;
 error3:
 	destroy_workqueue(cm.wq);
 error2:
+	if (cm_force_noio)
+		memalloc_noio_restore(noio_flags);
+
 	return ret;
 }
 
-- 
2.45.0


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v2 5/6] RDMA/mlx5: Brute force GFP_NOIO
  2024-05-15 12:53 [PATCH v2 0/6] rds: rdma: Add ability to force GFP_NOIO Håkon Bugge
                   ` (3 preceding siblings ...)
  2024-05-15 12:53 ` [PATCH v2 4/6] RDMA/cm: " Håkon Bugge
@ 2024-05-15 12:53 ` Håkon Bugge
  2024-05-15 12:53 ` [PATCH v2 6/6] net/mlx5: " Håkon Bugge
  2024-05-21 14:24 ` [PATCH v2 0/6] rds: rdma: Add ability to " Christoph Hellwig
  6 siblings, 0 replies; 17+ messages in thread
From: Håkon Bugge @ 2024-05-15 12:53 UTC (permalink / raw)
  To: linux-rdma, linux-kernel, netdev, rds-devel
  Cc: Jason Gunthorpe, Leon Romanovsky, Saeed Mahameed, Tariq Toukan,
	David S . Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Tejun Heo, Lai Jiangshan, Allison Henderson, Manjunath Patil,
	Mark Zhang, Håkon Bugge, Chuck Lever, Shiraz Saleem,
	Yang Li

In mlx5_ib_init(), we call memalloc_noio_{save,restore} in a parenthetic
fashion when enabled by the module parameter force_noio.

This is done in order to conditionally enable mlx5_ib to work aligned
with block I/O devices. Any work queued later on work-queues created
during module initialization will inherit the PF_MEMALLOC_{NOIO,NOFS}
flag(s), due to commit ("workqueue: Inherit NOIO and NOFS alloc
flags").

We do this in order to enable ULPs using the RDMA stack and the
mlx5_ib driver to be used as a network block I/O device. This is to
support a filesystem on top of a raw block device which uses said
ULP(s) and the RDMA stack as the network transport layer.

Under intense memory pressure, we get memory reclaims. Assume the
filesystem reclaims memory, goes to the raw block device, which calls
into the ULP in question, which calls the RDMA stack. Now, if regular
GFP_KERNEL allocations in the ULP or the RDMA stack require reclaims
to be fulfilled, we end up in a circular dependency.

We break this circular dependency by:

1. Force all allocations in the ULP and the relevant RDMA stack to use
   GFP_NOIO, by means of a parenthetic use of
   memalloc_noio_{save,restore} on all relevant entry points.

2. Make sure work-queues inherit current->flags
   wrt. PF_MEMALLOC_{NOIO,NOFS}, such that work executed on the
   work-queue inherits the same flag(s).

Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
---
 drivers/infiniband/hw/mlx5/main.c | 22 ++++++++++++++++++----
 1 file changed, 18 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index c2b557e642906..a424d518538ed 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -56,6 +56,10 @@ MODULE_AUTHOR("Eli Cohen <eli@mellanox.com>");
 MODULE_DESCRIPTION("Mellanox 5th generation network adapters (ConnectX series) IB driver");
 MODULE_LICENSE("Dual BSD/GPL");
 
+static bool mlx5_ib_force_noio;
+module_param_named(force_noio, mlx5_ib_force_noio, bool, 0444);
+MODULE_PARM_DESC(force_noio, "Force the use of GFP_NOIO (Y/N)");
+
 struct mlx5_ib_event_work {
 	struct work_struct	work;
 	union {
@@ -4489,16 +4493,23 @@ static struct auxiliary_driver mlx5r_driver = {
 
 static int __init mlx5_ib_init(void)
 {
+	unsigned int noio_flags;
 	int ret;
 
+	if (mlx5_ib_force_noio)
+		noio_flags = memalloc_noio_save();
+
 	xlt_emergency_page = (void *)__get_free_page(GFP_KERNEL);
-	if (!xlt_emergency_page)
-		return -ENOMEM;
+	if (!xlt_emergency_page) {
+		ret = -ENOMEM;
+		goto out;
+	}
 
 	mlx5_ib_event_wq = alloc_ordered_workqueue("mlx5_ib_event_wq", 0);
 	if (!mlx5_ib_event_wq) {
 		free_page((unsigned long)xlt_emergency_page);
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto out;
 	}
 
 	ret = mlx5_ib_qp_event_init();
@@ -4515,7 +4526,7 @@ static int __init mlx5_ib_init(void)
 	ret = auxiliary_driver_register(&mlx5r_driver);
 	if (ret)
 		goto drv_err;
-	return 0;
+	goto out;
 
 drv_err:
 	auxiliary_driver_unregister(&mlx5r_mp_driver);
@@ -4526,6 +4537,9 @@ static int __init mlx5_ib_init(void)
 qp_event_err:
 	destroy_workqueue(mlx5_ib_event_wq);
 	free_page((unsigned long)xlt_emergency_page);
+out:
+	if (mlx5_ib_force_noio)
+		memalloc_noio_restore(noio_flags);
 	return ret;
 }
 
-- 
2.45.0


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v2 6/6] net/mlx5: Brute force GFP_NOIO
  2024-05-15 12:53 [PATCH v2 0/6] rds: rdma: Add ability to force GFP_NOIO Håkon Bugge
                   ` (4 preceding siblings ...)
  2024-05-15 12:53 ` [PATCH v2 5/6] RDMA/mlx5: " Håkon Bugge
@ 2024-05-15 12:53 ` Håkon Bugge
  2024-05-21 14:24 ` [PATCH v2 0/6] rds: rdma: Add ability to " Christoph Hellwig
  6 siblings, 0 replies; 17+ messages in thread
From: Håkon Bugge @ 2024-05-15 12:53 UTC (permalink / raw)
  To: linux-rdma, linux-kernel, netdev, rds-devel
  Cc: Jason Gunthorpe, Leon Romanovsky, Saeed Mahameed, Tariq Toukan,
	David S . Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Tejun Heo, Lai Jiangshan, Allison Henderson, Manjunath Patil,
	Mark Zhang, Håkon Bugge, Chuck Lever, Shiraz Saleem,
	Yang Li

In mlx5_core_init(), we call memalloc_noio_{save,restore} in a parenthetic
fashion when enabled by the module parameter force_noio.

This is done in order to conditionally enable mlx5_core to work
aligned with block I/O devices. Any work queued later on work-queues
created during module initialization will inherit the
PF_MEMALLOC_{NOIO,NOFS} flag(s), due to commit ("workqueue: Inherit
NOIO and NOFS alloc flags").

We do this in order to enable ULPs using the RDMA stack and the
mlx5_core driver to be used as a network block I/O device. This is to
support a filesystem on top of a raw block device which uses said
ULP(s) and the RDMA stack as the network transport layer.

Under intense memory pressure, we get memory reclaims. Assume the
filesystem reclaims memory, goes to the raw block device, which calls
into the ULP in question, which calls the RDMA stack. Now, if regular
GFP_KERNEL allocations in the ULP or the RDMA stack require reclaims
to be fulfilled, we end up in a circular dependency.

We break this circular dependency by:

1. Force all allocations in the ULP and the relevant RDMA stack to use
   GFP_NOIO, by means of a parenthetic use of
   memalloc_noio_{save,restore} on all relevant entry points.

2. Make sure work-queues inherit current->flags
   wrt. PF_MEMALLOC_{NOIO,NOFS}, such that work executed on the
   work-queue inherits the same flag(s).

Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/main.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
index 331ce47f51a17..aa1bf8bb5d15c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
@@ -48,6 +48,7 @@
 #include <linux/mlx5/vport.h>
 #include <linux/version.h>
 #include <net/devlink.h>
+#include <linux/sched/mm.h>
 #include "mlx5_core.h"
 #include "lib/eq.h"
 #include "fs_core.h"
@@ -87,6 +88,10 @@ static unsigned int prof_sel = MLX5_DEFAULT_PROF;
 module_param_named(prof_sel, prof_sel, uint, 0444);
 MODULE_PARM_DESC(prof_sel, "profile selector. Valid range 0 - 2");
 
+static bool mlx5_core_force_noio;
+module_param_named(force_noio, mlx5_core_force_noio, bool, 0444);
+MODULE_PARM_DESC(force_noio, "Force the use of GFP_NOIO (Y/N)");
+
 static u32 sw_owner_id[4];
 #define MAX_SW_VHCA_ID (BIT(__mlx5_bit_sz(cmd_hca_cap_2, sw_vhca_id)) - 1)
 static DEFINE_IDA(sw_vhca_ida);
@@ -2312,8 +2317,12 @@ static void mlx5_core_verify_params(void)
 
 static int __init mlx5_init(void)
 {
+	unsigned int noio_flags;
 	int err;
 
+	if (mlx5_core_force_noio)
+		noio_flags = memalloc_noio_save();
+
 	WARN_ONCE(strcmp(MLX5_ADEV_NAME, KBUILD_MODNAME),
 		  "mlx5_core name not in sync with kernel module name");
 
@@ -2334,7 +2343,7 @@ static int __init mlx5_init(void)
 	if (err)
 		goto err_pci;
 
-	return 0;
+	goto out;
 
 err_pci:
 	mlx5_sf_driver_unregister();
@@ -2342,6 +2351,9 @@ static int __init mlx5_init(void)
 	mlx5e_cleanup();
 err_debug:
 	mlx5_unregister_debugfs();
+out:
+	if (mlx5_core_force_noio)
+		memalloc_noio_restore(noio_flags);
 	return err;
 }
 
-- 
2.45.0


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 1/6] workqueue: Inherit NOIO and NOFS alloc flags
  2024-05-15 12:53 ` [PATCH v2 1/6] workqueue: Inherit NOIO and NOFS alloc flags Håkon Bugge
@ 2024-05-15 16:54   ` Tejun Heo
  2024-05-16 15:27     ` Haakon Bugge
  0 siblings, 1 reply; 17+ messages in thread
From: Tejun Heo @ 2024-05-15 16:54 UTC (permalink / raw)
  To: Håkon Bugge
  Cc: linux-rdma, linux-kernel, netdev, rds-devel, Jason Gunthorpe,
	Leon Romanovsky, Saeed Mahameed, Tariq Toukan, David S . Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni, Lai Jiangshan,
	Allison Henderson, Manjunath Patil, Mark Zhang, Chuck Lever,
	Shiraz Saleem, Yang Li

> @@ -5583,6 +5600,10 @@ struct workqueue_struct *alloc_workqueue(const char *fmt,
>  
>  	/* init wq */
>  	wq->flags = flags;
> +	if (current->flags & PF_MEMALLOC_NOIO)
> +		wq->flags |= __WQ_NOIO;
> +	if (current->flags & PF_MEMALLOC_NOFS)
> +		wq->flags |= __WQ_NOFS;

So, yeah, please don't do this. What if a NOIO caller wants to schedule a
work item so that it can use GFP_KERNEL allocations? I don't mind a
convenience feature in workqueue for this, but this doesn't seem like the
right way. Also, memalloc_noio_save() and memalloc_nofs_save() are
convenience wrappers around memalloc_flags_save(), so it'd probably be
better to deal with gfp flags directly rather than singling out these two
flags.
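
For reference, in <linux/sched/mm.h> those helpers are essentially just
(paraphrased, details elided):

static inline unsigned int memalloc_noio_save(void)
{
	return memalloc_flags_save(PF_MEMALLOC_NOIO);
}

static inline unsigned int memalloc_nofs_save(void)
{
	return memalloc_flags_save(PF_MEMALLOC_NOFS);
}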

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 3/6] RDMA/cma: Brute force GFP_NOIO
  2024-05-15 12:53 ` [PATCH v2 3/6] RDMA/cma: " Håkon Bugge
@ 2024-05-16  7:37   ` Zhu Yanjun
  2024-05-16 15:49     ` Haakon Bugge
  0 siblings, 1 reply; 17+ messages in thread
From: Zhu Yanjun @ 2024-05-16  7:37 UTC (permalink / raw)
  To: Håkon Bugge, linux-rdma, linux-kernel, netdev, rds-devel
  Cc: Jason Gunthorpe, Leon Romanovsky, Saeed Mahameed, Tariq Toukan,
	David S . Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Tejun Heo, Lai Jiangshan, Allison Henderson, Manjunath Patil,
	Mark Zhang, Chuck Lever, Shiraz Saleem, Yang Li

On 15.05.24 14:53, Håkon Bugge wrote:
> In cma_init(), we call memalloc_noio_{save,restore} in a parenthetic
> fashion when enabled by the module parameter force_noio.
> 
> This in order to conditionally enable rdma_cm to work aligned with
> block I/O devices. Any work queued later on work-queues created during
> module initialization will inherit the PF_MEMALLOC_{NOIO,NOFS}
> flag(s), due to commit ("workqueue: Inherit NOIO and NOFS alloc
> flags").
> 
> We do this in order to enable ULPs using the RDMA stack to be used as
> a network block I/O device. This to support a filesystem on top of a
> raw block device which uses said ULP(s) and the RDMA stack as the
> network transport layer.
> 
> Under intense memory pressure, we get memory reclaims. Assume the
> filesystem reclaims memory, goes to the raw block device, which calls
> into the ULP in question, which calls the RDMA stack. Now, if
> regular GFP_KERNEL allocations in the ULP or the RDMA stack require
> reclaims to be fulfilled, we end up in a circular dependency.
> 
> We break this circular dependency by:
> 
> 1. Force all allocations in the ULP and the relevant RDMA stack to use
>     GFP_NOIO, by means of a parenthetic use of
>     memalloc_noio_{save,restore} on all relevant entry points.
> 
> 2. Make sure work-queues inherits current->flags
>     wrt. PF_MEMALLOC_{NOIO,NOFS}, such that work executed on the
>     work-queue inherits the same flag(s).
> 
> Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
> ---
>   drivers/infiniband/core/cma.c | 20 +++++++++++++++++---
>   1 file changed, 17 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
> index 1e2cd7c8716e8..23a50cc3e81cb 100644
> --- a/drivers/infiniband/core/cma.c
> +++ b/drivers/infiniband/core/cma.c
> @@ -50,6 +50,10 @@ MODULE_LICENSE("Dual BSD/GPL");
>   #define CMA_IBOE_PACKET_LIFETIME 16
>   #define CMA_PREFERRED_ROCE_GID_TYPE IB_GID_TYPE_ROCE_UDP_ENCAP
>   
> +static bool cma_force_noio;
> +module_param_named(force_noio, cma_force_noio, bool, 0444);
> +MODULE_PARM_DESC(force_noio, "Force the use of GFP_NOIO (Y/N)");
> +
>   static const char * const cma_events[] = {
>   	[RDMA_CM_EVENT_ADDR_RESOLVED]	 = "address resolved",
>   	[RDMA_CM_EVENT_ADDR_ERROR]	 = "address error",
> @@ -5424,6 +5428,10 @@ static struct pernet_operations cma_pernet_operations = {
>   static int __init cma_init(void)
>   {
>   	int ret;
> +	unsigned int noio_flags;

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/process/maintainer-netdev.rst?h=v6.9#n376

"
Netdev has a convention for ordering local variables in functions.
Order the variable declaration lines longest to shortest, e.g.::

   struct scatterlist *sg;
   struct sk_buff *skb;
   int err, i;

If there are dependencies between the variables preventing the ordering
move the initialization out of line.
"

Zhu Yanjun

> +
> +	if (cma_force_noio)
> +		noio_flags = memalloc_noio_save();
>   
>   	/*
>   	 * There is a rare lock ordering dependency in cma_netdev_callback()
> @@ -5439,8 +5447,10 @@ static int __init cma_init(void)
>   	}
>   
>   	cma_wq = alloc_ordered_workqueue("rdma_cm", WQ_MEM_RECLAIM);
> -	if (!cma_wq)
> -		return -ENOMEM;
> +	if (!cma_wq) {
> +		ret = -ENOMEM;
> +		goto out;
> +	}
>   
>   	ret = register_pernet_subsys(&cma_pernet_operations);
>   	if (ret)
> @@ -5458,7 +5468,8 @@ static int __init cma_init(void)
>   	if (ret)
>   		goto err_ib;
>   
> -	return 0;
> +	ret = 0;
> +	goto out;
>   
>   err_ib:
>   	ib_unregister_client(&cma_client);
> @@ -5469,6 +5480,9 @@ static int __init cma_init(void)
>   	unregister_pernet_subsys(&cma_pernet_operations);
>   err_wq:
>   	destroy_workqueue(cma_wq);
> +out:
> +	if (cma_force_noio)
> +		memalloc_noio_restore(noio_flags);
>   	return ret;
>   }
>   


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 1/6] workqueue: Inherit NOIO and NOFS alloc flags
  2024-05-15 16:54   ` Tejun Heo
@ 2024-05-16 15:27     ` Haakon Bugge
  2024-05-16 16:29       ` Tejun Heo
  0 siblings, 1 reply; 17+ messages in thread
From: Haakon Bugge @ 2024-05-16 15:27 UTC (permalink / raw)
  To: Tejun Heo
  Cc: OFED mailing list, open list, netdev, rds-devel, Jason Gunthorpe,
	Leon Romanovsky, Saeed Mahameed, Tariq Toukan, David S . Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni, Lai Jiangshan,
	Allison Henderson, Manjunath Patil, Mark Zhang, Chuck Lever III,
	Shiraz Saleem, Yang Li



> On 15 May 2024, at 18:54, Tejun Heo <tj@kernel.org> wrote:
> 
>> @@ -5583,6 +5600,10 @@ struct workqueue_struct *alloc_workqueue(const char *fmt,
>> 
>> /* init wq */
>> wq->flags = flags;
>> + if (current->flags & PF_MEMALLOC_NOIO)
>> + wq->flags |= __WQ_NOIO;
>> + if (current->flags & PF_MEMALLOC_NOFS)
>> + wq->flags |= __WQ_NOFS;
> 
> So, yeah, please don't do this. What if a NOIO caller wants to schedule a
> work item so that it can use GFP_KERNEL allocations?

If one work function wants to use GFP_KERNEL and another GFP_NOIO, queued on the same workqueue, one could create two workqueues. Create one that is surrounded by memalloc_noio_{save,restore}, another surrounded by memalloc_flags_save() + current->flags &= ~PF_MEMALLOC_NOIO and memalloc_flags_restore().
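
Schematically, something like this (illustrative names only, error
handling omitted):

struct workqueue_struct *noio_wq, *kernel_wq;	/* hypothetical */
unsigned int noio_flags, saved;

noio_flags = memalloc_noio_save();
/* created inside the NOIO scope, so it inherits __WQ_NOIO */
noio_wq = alloc_workqueue("ulp_noio_wq", WQ_MEM_RECLAIM, 0);

/* temporarily drop NOIO so this one does not inherit it */
saved = current->flags & PF_MEMALLOC_NOIO;
current->flags &= ~PF_MEMALLOC_NOIO;
kernel_wq = alloc_workqueue("ulp_kernel_wq", WQ_MEM_RECLAIM, 0);
current->flags |= saved;

memalloc_noio_restore(noio_flags);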

If you mean a work function that performs a combination of GFP_KERNEL and GFP_NOIO allocations, that sounds a little bit peculiar to me, but if needed, it must be open-coded. But wouldn't that be the same case as a WQ created with WQ_MEM_RECLAIM?

> I don't mind a
> convenience feature to workqueue for this but this doesn't seem like the
> right way. Also, memalloc_noio_save() and memalloc_nofs_save() are
> convenience wrappers around memalloc_flags_save(), so it'd probably be
> better to deal with gfp flags directly rather than singling out these two
> flags.

Actually, based on https://lore.kernel.org/linux-fsdevel/ZZcgXI46AinlcBDP@casper.infradead.org, I am inclined to skip GFP_NOFS, also because the use-case for this series does not need GFP_NOFS.

When you say "deal with gfp flags directly", do you imply during WQ creation or when queuing work on one? I am OK with adding the other per-process memory allocation flags, but that doesn't solve your initial issue ("if a NOIO caller wants to schedule a work item so that it can use GFP_KERNEL").


Thxs, Håkon


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 3/6] RDMA/cma: Brute force GFP_NOIO
  2024-05-16  7:37   ` Zhu Yanjun
@ 2024-05-16 15:49     ` Haakon Bugge
  2024-05-16 19:07       ` Greg Sword
  0 siblings, 1 reply; 17+ messages in thread
From: Haakon Bugge @ 2024-05-16 15:49 UTC (permalink / raw)
  To: Zhu Yanjun
  Cc: OFED mailing list, open list, netdev, rds-devel, Jason Gunthorpe,
	Leon Romanovsky, Saeed Mahameed, Tariq Toukan, David S . Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni, Tejun Heo,
	Lai Jiangshan, Allison Henderson, Manjunath Patil, Mark Zhang,
	Chuck Lever III, Shiraz Saleem, Yang Li

Hi Yanjun,


> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/process/maintainer-netdev.rst?h=v6.9#n376
> 
> "
> Netdev has a convention for ordering local variables in functions.
> Order the variable declaration lines longest to shortest, e.g.::

"Infiniband subsystem" != netdev, right?


Thxs, Håkon


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 1/6] workqueue: Inherit NOIO and NOFS alloc flags
  2024-05-16 15:27     ` Haakon Bugge
@ 2024-05-16 16:29       ` Tejun Heo
  2024-05-21 14:02         ` Haakon Bugge
  0 siblings, 1 reply; 17+ messages in thread
From: Tejun Heo @ 2024-05-16 16:29 UTC (permalink / raw)
  To: Haakon Bugge
  Cc: OFED mailing list, open list, netdev, rds-devel, Jason Gunthorpe,
	Leon Romanovsky, Saeed Mahameed, Tariq Toukan, David S . Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni, Lai Jiangshan,
	Allison Henderson, Manjunath Patil, Mark Zhang, Chuck Lever III,
	Shiraz Saleem, Yang Li

Hello,

On Thu, May 16, 2024 at 03:27:15PM +0000, Haakon Bugge wrote:
> > So, yeah, please don't do this. What if a NOIO caller wants to schedule a
> > work item so that it can use GFP_KERNEL allocations?
> 
> If one work function wants to use GFP_KERNEL and another GFP_NOIO,
> queued on the same workqueue, one could create two workqueues. Create one
> that is surrounded by memalloc_noio_{save,restore}, another surrounded by
> memalloc_flags_save() + current->flags &= ~PF_MEMALLOC_NOIO and
> memalloc_flags_restore().

This is too subtle and the default behavior doesn't seem great either - in
most cases, the code path which sets up workqueues would be in GFP_KERNEL
context as init paths usually are, so it's not like this would make things
work automatically in most cases. In addition, now, the memory allocations
for workqueues themselves have to be subject to the same GFP restrictions
even when alloc_workqueue() is called from GFP_KERNEL context. It just
doesn't seem well thought out.

> When you say "deal with gfp flags directly", do you imply during WQ
> creation or when queuing work on one? I am OK with adding the other
> per-process memory allocation flags, but that doesn't solve your initial
> issue ("if a NOIO caller wants to schedule a work item so that it can use
> GFP_KERNEL").

It being purely a convenience feature, I don't think there's a hard
requirement on where this should go, although I don't know where you'd
carry this information if you tied it to each work item. And, please don't
single out specific GFP flags. Please make the feature generic so that
users who may need different GFP masking can use it too. The underlying
GFP feature is already like that. There's no reason to restrict it from
the workqueue side.
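
One possible shape of a generic variant, as a purely hypothetical sketch
(wq->pf_memalloc is an invented field, and the mask shown is only an
example):

/* at alloc_workqueue() time, record the caller's PF_MEMALLOC_* bits */
wq->pf_memalloc = current->flags &
		  (PF_MEMALLOC_NOIO | PF_MEMALLOC_NOFS | PF_MEMALLOC_PIN);

/* in process_one_work(), replay them around the work item */
pflags = memalloc_flags_save(wq->pf_memalloc);
worker->current_func(work);
memalloc_flags_restore(pflags);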

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 3/6] RDMA/cma: Brute force GFP_NOIO
  2024-05-16 15:49     ` Haakon Bugge
@ 2024-05-16 19:07       ` Greg Sword
  2024-05-17  9:28         ` Haakon Bugge
  2024-05-26  9:27         ` Leon Romanovsky
  0 siblings, 2 replies; 17+ messages in thread
From: Greg Sword @ 2024-05-16 19:07 UTC (permalink / raw)
  To: Haakon Bugge
  Cc: Zhu Yanjun, OFED mailing list, open list, netdev, rds-devel,
	Jason Gunthorpe, Leon Romanovsky, Saeed Mahameed, Tariq Toukan,
	David S . Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Tejun Heo, Lai Jiangshan, Allison Henderson, Manjunath Patil,
	Mark Zhang, Chuck Lever III, Shiraz Saleem, Yang Li

On Thu, May 16, 2024 at 11:54 PM Haakon Bugge <haakon.bugge@oracle.com> wrote:
>
> Hi Yanjun,
>
>
> > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/process/maintainer-netdev.rst?h=v6.9#n376
> >
> > "
> > Netdev has a convention for ordering local variables in functions.
> > Order the variable declaration lines longest to shortest, e.g.::
>
> "Infiniband subsystem" != netdev, right?

All kernel subsystems should follow this rule, including the network
and rdma subsystems.

>
>
> Thxs, Håkon
>

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 3/6] RDMA/cma: Brute force GFP_NOIO
  2024-05-16 19:07       ` Greg Sword
@ 2024-05-17  9:28         ` Haakon Bugge
  2024-05-26  9:27         ` Leon Romanovsky
  1 sibling, 0 replies; 17+ messages in thread
From: Haakon Bugge @ 2024-05-17  9:28 UTC (permalink / raw)
  To: Greg Sword
  Cc: Zhu Yanjun, OFED mailing list, open list, netdev, rds-devel,
	Jason Gunthorpe, Leon Romanovsky, Saeed Mahameed, Tariq Toukan,
	David S . Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Tejun Heo, Lai Jiangshan, Allison Henderson, Manjunath Patil,
	Mark Zhang, Chuck Lever III, Shiraz Saleem, Yang Li



> On 16 May 2024, at 21:07, Greg Sword <gregsword0@gmail.com> wrote:
> 
> On Thu, May 16, 2024 at 11:54 PM Haakon Bugge <haakon.bugge@oracle.com> wrote:
>> 
>> Hi Yanjun,
>> 
>> 
>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/process/maintainer-netdev.rst?h=v6.9#n376
>>> 
>>> "
>>> Netdev has a convention for ordering local variables in functions.
>>> Order the variable declaration lines longest to shortest, e.g.::
>> 
>> "Infiniband subsystem" != netdev, right?
> 
> All kernel subsystems should follow this rule, including the network
> and rdma subsystems

I am not aware of that. Where is this documented?


Thxs, Håkon



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 1/6] workqueue: Inherit NOIO and NOFS alloc flags
  2024-05-16 16:29       ` Tejun Heo
@ 2024-05-21 14:02         ` Haakon Bugge
  0 siblings, 0 replies; 17+ messages in thread
From: Haakon Bugge @ 2024-05-21 14:02 UTC (permalink / raw)
  To: Tejun Heo
  Cc: OFED mailing list, open list, netdev, rds-devel, Jason Gunthorpe,
	Leon Romanovsky, Saeed Mahameed, Tariq Toukan, David S . Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni, Lai Jiangshan,
	Allison Henderson, Manjunath Patil, Mark Zhang, Chuck Lever III,
	Shiraz Saleem, Yang Li

Hi,


> On 16 May 2024, at 18:29, Tejun Heo <tj@kernel.org> wrote:
> 
> 
>> When you say "deal with gfp flags directly", do you imply during WQ
>> creation or queuing work on one? I am OK with adding the other per-process
>> memory allocation flags, but that doesn's solve your initial issue ("if a
>> NOIO callers wants to scheduler a work item so that it can user
>> GFP_KERNEL").
> 
> It being a purely convenience feature, I don't think there's hard
> requirement on where this should go although I don't know where you'd carry
> this information if you tied it to each work item. And, please don't single
> out specific GFP flags. Please make the feature generic so that users who
> may need different GFP masking can also use it too. The underlying GFP
> feature is already like that. There's no reason to restrict it from
> workqueue side.


I am preparing a v3 which handles all PF_MEMALLOC* flags. The plan is to send it out tomorrow.


Thxs, Håkon


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 0/6] rds: rdma: Add ability to force GFP_NOIO
  2024-05-15 12:53 [PATCH v2 0/6] rds: rdma: Add ability to force GFP_NOIO Håkon Bugge
                   ` (5 preceding siblings ...)
  2024-05-15 12:53 ` [PATCH v2 6/6] net/mlx5: " Håkon Bugge
@ 2024-05-21 14:24 ` Christoph Hellwig
  6 siblings, 0 replies; 17+ messages in thread
From: Christoph Hellwig @ 2024-05-21 14:24 UTC (permalink / raw)
  To: Håkon Bugge
  Cc: linux-rdma, linux-kernel, netdev, rds-devel, Jason Gunthorpe,
	Leon Romanovsky, Saeed Mahameed, Tariq Toukan, David S . Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni, Tejun Heo,
	Lai Jiangshan, Allison Henderson, Manjunath Patil, Mark Zhang,
	Chuck Lever, Shiraz Saleem, Yang Li

On Wed, May 15, 2024 at 02:53:36PM +0200, Håkon Bugge wrote:
> This series enables RDS and the RDMA stack to be used as a block I/O
> device. This to support a filesystem on top of a raw block device
> which uses RDS and the RDMA stack as the network transport layer.
> 
> Under intense memory pressure, we get memory reclaims. Assume the
> filesystem reclaims memory, goes to the raw block device, which calls
> into RDS, which calls the RDMA stack. Now, if regular GFP_KERNEL
> allocations in RDS or the RDMA stack require reclaims to be fulfilled,
> we end up in a circular dependency.

Use of network block devices or file systems from the local system
simply isn't supported in the Linux reclaim hierarchy.  Trying to
hack it in through module options for code you haven't even submitted
is a complete no-go.

NAK.


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 3/6] RDMA/cma: Brute force GFP_NOIO
  2024-05-16 19:07       ` Greg Sword
  2024-05-17  9:28         ` Haakon Bugge
@ 2024-05-26  9:27         ` Leon Romanovsky
  1 sibling, 0 replies; 17+ messages in thread
From: Leon Romanovsky @ 2024-05-26  9:27 UTC (permalink / raw)
  To: Greg Sword
  Cc: Haakon Bugge, Zhu Yanjun, OFED mailing list, open list, netdev,
	rds-devel, Jason Gunthorpe, Saeed Mahameed, Tariq Toukan,
	David S . Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Tejun Heo, Lai Jiangshan, Allison Henderson, Manjunath Patil,
	Mark Zhang, Chuck Lever III, Shiraz Saleem, Yang Li

On Fri, May 17, 2024 at 03:07:34AM +0800, Greg Sword wrote:
> On Thu, May 16, 2024 at 11:54 PM Haakon Bugge <haakon.bugge@oracle.com> wrote:
> >
> > Hi Yanjun,
> >
> >
> > > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/process/maintainer-netdev.rst?h=v6.9#n376
> > >
> > > "
> > > Netdev has a convention for ordering local variables in functions.
> > > Order the variable declaration lines longest to shortest, e.g.::
> >
> > "Infiniband subsystem" != netdev, right?
> 
> All kernel subsystems should follow this rule, including the network
> and rdma subsystems

Of course not. The request to sort variables is netdev coding style;
the rest of the kernel doesn't have this rule and doesn't care about it.

In Infiniband, we accept both styles, just to make sure that people who
submit their patches to both subsystems won't need to bother themselves
with this.

Thanks

> 
> >
> >
> > Thxs, Håkon
> >

^ permalink raw reply	[flat|nested] 17+ messages in thread

Thread overview: 17+ messages
2024-05-15 12:53 [PATCH v2 0/6] rds: rdma: Add ability to force GFP_NOIO Håkon Bugge
2024-05-15 12:53 ` [PATCH v2 1/6] workqueue: Inherit NOIO and NOFS alloc flags Håkon Bugge
2024-05-15 16:54   ` Tejun Heo
2024-05-16 15:27     ` Haakon Bugge
2024-05-16 16:29       ` Tejun Heo
2024-05-21 14:02         ` Haakon Bugge
2024-05-15 12:53 ` [PATCH v2 2/6] rds: Brute force GFP_NOIO Håkon Bugge
2024-05-15 12:53 ` [PATCH v2 3/6] RDMA/cma: " Håkon Bugge
2024-05-16  7:37   ` Zhu Yanjun
2024-05-16 15:49     ` Haakon Bugge
2024-05-16 19:07       ` Greg Sword
2024-05-17  9:28         ` Haakon Bugge
2024-05-26  9:27         ` Leon Romanovsky
2024-05-15 12:53 ` [PATCH v2 4/6] RDMA/cm: " Håkon Bugge
2024-05-15 12:53 ` [PATCH v2 5/6] RDMA/mlx5: " Håkon Bugge
2024-05-15 12:53 ` [PATCH v2 6/6] net/mlx5: " Håkon Bugge
2024-05-21 14:24 ` [PATCH v2 0/6] rds: rdma: Add ability to " Christoph Hellwig
