* [RFC net] Should sk_page_frag() also look at the current GFP context?
@ 2022-07-01 18:41 Guillaume Nault
  2022-07-07 15:31 ` Benjamin Coddington
  2022-07-07 16:29 ` Eric Dumazet
  0 siblings, 2 replies; 9+ messages in thread
From: Guillaume Nault @ 2022-07-01 18:41 UTC (permalink / raw)
  To: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni
  Cc: netdev, Chuck Lever, Jeff Layton, Trond Myklebust,
	Anna Schumaker, Steve French, Josef Bacik, Scott Mayhew,
	Benjamin Coddington, Tejun Heo

I'm investigating a kernel oops that looks similar to
20eb4f29b602 ("net: fix sk_page_frag() recursion from memory reclaim")
and dacb5d8875cc ("tcp: fix page frag corruption on page fault").

This time the problem happens on an NFS client, while the previous bug
reports involved NBD and CIFS respectively. NBD and CIFS clear __GFP_FS
in their socket's ->sk_allocation field (by using GFP_NOIO or GFP_NOFS),
but NFS has left ->sk_allocation at its default value since commit
a1231fda7e94 ("SUNRPC: Set memalloc_nofs_save() on all rpciod/xprtiod
jobs").

To recap the original problems: in commits 20eb4f29b602 and dacb5d8875cc,
memory reclaim happened while executing tcp_sendmsg_locked(). The code
path entered tcp_sendmsg_locked() recursively as pages to be reclaimed
were backed by files on the network. The problem was that both the
outer and the inner tcp_sendmsg_locked() calls used current->task_frag,
thus leaving it in an inconsistent state. The fix was to make the file
system socket use its own ->sk_frag instead, so that the inner and
outer calls wouldn't step on each other's toes.
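
For reference, sk_page_frag() currently looks roughly like this
(sketch, comments trimmed, tail of the function quoted from memory):

	static inline struct page_frag *sk_page_frag(struct sock *sk)
	{
		if ((sk->sk_allocation &
		     (__GFP_DIRECT_RECLAIM | __GFP_MEMALLOC | __GFP_FS)) ==
		    (__GFP_DIRECT_RECLAIM | __GFP_FS))
			return &current->task_frag;

		return &sk->sk_frag;
	}

A socket whose ->sk_allocation clears __GFP_FS (or __GFP_DIRECT_RECLAIM,
or sets __GFP_MEMALLOC) therefore gets its own ->sk_frag.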

But now that NFS doesn't modify ->sk_allocation anymore, sk_page_frag()
sees sunrpc sockets as plain TCP ones and returns ->task_frag in the
inner tcp_sendmsg_locked() call.
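
Concretely, with ->sk_allocation left at GFP_KERNEL, which is
__GFP_RECLAIM | __GFP_IO | __GFP_FS, the test above reduces to:

	GFP_KERNEL & (__GFP_DIRECT_RECLAIM | __GFP_MEMALLOC | __GFP_FS)
		== (__GFP_DIRECT_RECLAIM | __GFP_FS)	/* true */

so the inner, reclaim-triggered tcp_sendmsg_locked() call gets
&current->task_frag just like the outer one did.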

Also, it looks like the trend is to avoid GFP_NOFS and GFP_NOIO and use
memalloc_no{fs,io}_save() instead. So maybe other network file systems
will also stop setting ->sk_allocation in the future and we should
teach sk_page_frag() to look at the current GFP flags. Or should we
stick to ->sk_allocation and make NFS drop __GFP_FS again?
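
For reference, current_gfp_context() (include/linux/sched/mm.h) folds
the task's memalloc_no{fs,io}_save() state into the given mask, roughly
(sketch from memory, trimmed):

	static inline gfp_t current_gfp_context(gfp_t flags)
	{
		unsigned int pflags = READ_ONCE(current->flags);

		if (pflags & (PF_MEMALLOC_NOIO | PF_MEMALLOC_NOFS)) {
			if (pflags & PF_MEMALLOC_NOIO)
				flags &= ~(__GFP_IO | __GFP_FS);
			else if (pflags & PF_MEMALLOC_NOFS)
				flags &= ~__GFP_FS;
		}
		return flags;
	}

With the patch below, an NFS transmit running under
memalloc_nofs_save() would thus fail the __GFP_FS test and fall back to
->sk_frag, just like a socket that sets GFP_NOFS in ->sk_allocation.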

Signed-off-by: Guillaume Nault <gnault@redhat.com>
---
 include/net/sock.h | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/include/net/sock.h b/include/net/sock.h
index 72ca97ccb460..b934c9851058 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -46,6 +46,7 @@
 #include <linux/netdevice.h>
 #include <linux/skbuff.h>	/* struct sk_buff */
 #include <linux/mm.h>
+#include <linux/sched/mm.h>
 #include <linux/security.h>
 #include <linux/slab.h>
 #include <linux/uaccess.h>
@@ -2503,14 +2504,17 @@ static inline void sk_stream_moderate_sndbuf(struct sock *sk)
  * socket operations and end up recursing into sk_page_frag()
  * while it's already in use: explicitly avoid task page_frag
  * usage if the caller is potentially doing any of them.
- * This assumes that page fault handlers use the GFP_NOFS flags.
+ * This assumes that page fault handlers use the GFP_NOFS flags
+ * or run under memalloc_nofs_save() protection.
  *
  * Return: a per task page_frag if context allows that,
  * otherwise a per socket one.
  */
 static inline struct page_frag *sk_page_frag(struct sock *sk)
 {
-	if ((sk->sk_allocation & (__GFP_DIRECT_RECLAIM | __GFP_MEMALLOC | __GFP_FS)) ==
+	gfp_t gfp_mask = current_gfp_context(sk->sk_allocation);
+
+	if ((gfp_mask & (__GFP_DIRECT_RECLAIM | __GFP_MEMALLOC | __GFP_FS)) ==
 	    (__GFP_DIRECT_RECLAIM | __GFP_FS))
 		return &current->task_frag;
 
-- 
2.21.3




Thread overview: 9+ messages
2022-07-01 18:41 [RFC net] Should sk_page_frag() also look at the current GFP context? Guillaume Nault
2022-07-07 15:31 ` Benjamin Coddington
2022-07-07 16:29 ` Eric Dumazet
2022-07-08 17:51   ` Guillaume Nault
2022-07-08 18:10   ` Benjamin Coddington
2022-07-08 20:04     ` Trond Myklebust
2022-07-11 14:07       ` Benjamin Coddington
2022-07-11 15:31         ` Eric Dumazet
2022-09-20 18:50           ` Guillaume Nault
