From: Ankur Arora <ankur.a.arora@oracle.com>
To: linux-kernel@vger.kernel.org
Cc: tglx@linutronix.de, peterz@infradead.org,
	torvalds@linux-foundation.org, paulmck@kernel.org,
	linux-mm@kvack.org, x86@kernel.org, akpm@linux-foundation.org,
	luto@kernel.org, bp@alien8.de, dave.hansen@linux.intel.com,
	hpa@zytor.com, mingo@redhat.com, juri.lelli@redhat.com,
	vincent.guittot@linaro.org, willy@infradead.org, mgorman@suse.de,
	jon.grimm@amd.com, bharata@amd.com, raghavendra.kt@amd.com,
	boris.ostrovsky@oracle.com, konrad.wilk@oracle.com,
	jgross@suse.com, andrew.cooper3@citrix.com, mingo@kernel.org,
	bristot@kernel.org, mathieu.desnoyers@efficios.com,
	geert@linux-m68k.org, glaubitz@physik.fu-berlin.de,
	anton.ivanov@cambridgegreys.com, mattst88@gmail.com,
	krypton@ulrich-teichert.org, rostedt@goodmis.org,
	David.Laight@ACULAB.COM, richard@nod.at, mjguzik@gmail.com,
	Ankur Arora <ankur.a.arora@oracle.com>,
	Marek Lindner <mareklindner@neomailbox.ch>,
	Simon Wunderlich <sw@simonwunderlich.de>,
	Antonio Quartulli <a@unstable.cc>,
	Sven Eckelmann <sven@narfation.org>,
	"David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	Roopa Prabhu <roopa@nvidia.com>,
	Nikolay Aleksandrov <razor@blackwall.org>,
	David Ahern <dsahern@kernel.org>,
	Pablo Neira Ayuso <pablo@netfilter.org>,
	Jozsef Kadlecsik <kadlec@netfilter.org>,
	Florian Westphal <fw@strlen.de>,
	Willem de Bruijn <willemdebruijn.kernel@gmail.com>,
	Matthieu Baerts <matttbe@kernel.org>,
	Mat Martineau <martineau@kernel.org>,
	Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>,
	Xin Long <lucien.xin@gmail.com>,
	Trond Myklebust <trond.myklebust@hammerspace.com>,
	Anna Schumaker <anna@kernel.org>, Jon Maloy <jmaloy@redhat.com>,
	Ying Xue <ying.xue@windriver.com>,
	Martin Schiller <ms@dev.tdt.de>
Subject: [RFC PATCH 79/86] treewide: net: remove cond_resched()
Date: Tue,  7 Nov 2023 15:08:15 -0800
Message-ID: <20231107230822.371443-23-ankur.a.arora@oracle.com>
In-Reply-To: <20231107230822.371443-1-ankur.a.arora@oracle.com>

There are broadly three sets of uses of cond_resched() (each
pattern is sketched below):

1.  Calls to cond_resched() out of the goodness of our hearts,
    otherwise known as avoiding lockup splats.

2.  Open-coded variants of cond_resched_lock() which call
    cond_resched().

3.  Retry or error handling loops, where cond_resched() is used as a
    quick alternative to spinning in a tight loop.

When running under a full preemption model, cond_resched() reduces
to a NOP (not even a barrier), so removing it obviously cannot
matter.

But even considering only voluntary preemption models (say, for
code that has mostly been tested under those), for sets 1 and 2 the
scheduler can now preempt kernel tasks running beyond their time
quanta anywhere they are preemptible() [1], which removes any need
for these explicitly placed scheduling points.

The cond_resched() calls in set 3 are a little more difficult.
To start with, given its NOP character under full preemption, it
never actually saved us from a tight loop.
With voluntary preemption, it's not a NOP, but it might as well be --
for most workloads the scheduler does not have an interminable supply
of runnable tasks on the runqueue, so cond_resched() typically finds
nothing to switch to and the loop stays tight anyway.

So, cond_resched() is useful for avoiding softlockup splats, but not
terribly good for error handling. Ideally, these should be replaced
with some kind of timed or event wait.
For now we use cond_resched_stall(), which tries to schedule if
possible, and executes a cpu_relax() if not.
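
A minimal sketch of the assumed semantics (cond_resched_stall()
itself is introduced earlier in this series; this is illustrative,
not its actual implementation):

	static inline void cond_resched_stall(void)
	{
		if (preemptible())
			schedule();	/* yield if we legally can ... */
		else
			cpu_relax();	/* ... else just pause the CPU */
	}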

Most of the uses here are in set 1 (some right after we give up a
lock or enable bottom-halves, which already causes an explicit
preemption check).

We can remove all of them.
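
For instance -- an illustrative pattern, not a specific hunk below --
the unlock already ends the non-preemptible section, so the check
happens there and the explicit call adds nothing:

	spin_unlock_bh(&lock);	/* BH re-enable is a preemption point */
	cond_resched();		/* redundant; removed by this patch */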

[1] https://lore.kernel.org/lkml/20231107215742.363031-1-ankur.a.arora@oracle.com/

Cc: Marek Lindner <mareklindner@neomailbox.ch> 
Cc: Simon Wunderlich <sw@simonwunderlich.de> 
Cc: Antonio Quartulli <a@unstable.cc> 
Cc: Sven Eckelmann <sven@narfation.org> 
Cc: "David S. Miller" <davem@davemloft.net> 
Cc: Eric Dumazet <edumazet@google.com> 
Cc: Jakub Kicinski <kuba@kernel.org> 
Cc: Paolo Abeni <pabeni@redhat.com> 
Cc: Roopa Prabhu <roopa@nvidia.com> 
Cc: Nikolay Aleksandrov <razor@blackwall.org> 
Cc: David Ahern <dsahern@kernel.org> 
Cc: Pablo Neira Ayuso <pablo@netfilter.org> 
Cc: Jozsef Kadlecsik <kadlec@netfilter.org> 
Cc: Florian Westphal <fw@strlen.de> 
Cc: Willem de Bruijn <willemdebruijn.kernel@gmail.com> 
Cc: Matthieu Baerts <matttbe@kernel.org> 
Cc: Mat Martineau <martineau@kernel.org> 
Cc: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> 
Cc: Xin Long <lucien.xin@gmail.com> 
Cc: Trond Myklebust <trond.myklebust@hammerspace.com> 
Cc: Anna Schumaker <anna@kernel.org> 
Cc: Jon Maloy <jmaloy@redhat.com> 
Cc: Ying Xue <ying.xue@windriver.com> 
Cc: Martin Schiller <ms@dev.tdt.de> 
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
---
 net/batman-adv/tp_meter.c       |  2 --
 net/bpf/test_run.c              |  1 -
 net/bridge/br_netlink.c         |  1 -
 net/ipv6/fib6_rules.c           |  1 -
 net/ipv6/netfilter/ip6_tables.c |  2 --
 net/ipv6/udp.c                  |  2 --
 net/mptcp/mptcp_diag.c          |  2 --
 net/mptcp/pm_netlink.c          |  5 -----
 net/mptcp/protocol.c            |  1 -
 net/rds/ib_recv.c               |  2 --
 net/rds/tcp.c                   |  2 +-
 net/rds/threads.c               |  1 -
 net/rxrpc/call_object.c         |  2 +-
 net/sctp/socket.c               |  1 -
 net/sunrpc/cache.c              | 11 +++++++++--
 net/sunrpc/sched.c              |  2 +-
 net/sunrpc/svc_xprt.c           |  1 -
 net/sunrpc/xprtsock.c           |  2 --
 net/tipc/core.c                 |  2 +-
 net/tipc/topsrv.c               |  3 ---
 net/unix/af_unix.c              |  5 ++---
 net/x25/af_x25.c                |  1 -
 22 files changed, 15 insertions(+), 37 deletions(-)

diff --git a/net/batman-adv/tp_meter.c b/net/batman-adv/tp_meter.c
index 7f3dd3c393e0..a0b160088c33 100644
--- a/net/batman-adv/tp_meter.c
+++ b/net/batman-adv/tp_meter.c
@@ -877,8 +877,6 @@ static int batadv_tp_send(void *arg)
 		/* right-shift the TWND */
 		if (!err)
 			tp_vars->last_sent += payload_len;
-
-		cond_resched();
 	}
 
 out:
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index 0841f8d82419..f4558fdfdf74 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -81,7 +81,6 @@ static bool bpf_test_timer_continue(struct bpf_test_timer *t, int iterations,
 		/* During iteration: we need to reschedule between runs. */
 		t->time_spent += ktime_get_ns() - t->time_start;
 		bpf_test_timer_leave(t);
-		cond_resched();
 		bpf_test_timer_enter(t);
 	}
 
diff --git a/net/bridge/br_netlink.c b/net/bridge/br_netlink.c
index 10f0d33d8ccf..f326b034245f 100644
--- a/net/bridge/br_netlink.c
+++ b/net/bridge/br_netlink.c
@@ -780,7 +780,6 @@ int br_process_vlan_info(struct net_bridge *br,
 					       v - 1, rtm_cmd);
 				v_change_start = 0;
 			}
-			cond_resched();
 		}
 		/* v_change_start is set only if the last/whole range changed */
 		if (v_change_start)
diff --git a/net/ipv6/fib6_rules.c b/net/ipv6/fib6_rules.c
index 7c2003833010..528e6a582c21 100644
--- a/net/ipv6/fib6_rules.c
+++ b/net/ipv6/fib6_rules.c
@@ -500,7 +500,6 @@ static void __net_exit fib6_rules_net_exit_batch(struct list_head *net_list)
 	rtnl_lock();
 	list_for_each_entry(net, net_list, exit_list) {
 		fib_rules_unregister(net->ipv6.fib6_rules_ops);
-		cond_resched();
 	}
 	rtnl_unlock();
 }
diff --git a/net/ipv6/netfilter/ip6_tables.c b/net/ipv6/netfilter/ip6_tables.c
index fd9f049d6d41..704f14c4146f 100644
--- a/net/ipv6/netfilter/ip6_tables.c
+++ b/net/ipv6/netfilter/ip6_tables.c
@@ -778,7 +778,6 @@ get_counters(const struct xt_table_info *t,
 
 			ADD_COUNTER(counters[i], bcnt, pcnt);
 			++i;
-			cond_resched();
 		}
 	}
 }
@@ -798,7 +797,6 @@ static void get_old_counters(const struct xt_table_info *t,
 			ADD_COUNTER(counters[i], tmp->bcnt, tmp->pcnt);
 			++i;
 		}
-		cond_resched();
 	}
 }
 
diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
index 86b5d509a468..032d4f7e6ed3 100644
--- a/net/ipv6/udp.c
+++ b/net/ipv6/udp.c
@@ -443,8 +443,6 @@ int udpv6_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
 	}
 	kfree_skb(skb);
 
-	/* starting over for a new packet, but check if we need to yield */
-	cond_resched();
 	msg->msg_flags &= ~MSG_TRUNC;
 	goto try_again;
 }
diff --git a/net/mptcp/mptcp_diag.c b/net/mptcp/mptcp_diag.c
index 8df1bdb647e2..82bf16511476 100644
--- a/net/mptcp/mptcp_diag.c
+++ b/net/mptcp/mptcp_diag.c
@@ -141,7 +141,6 @@ static void mptcp_diag_dump_listeners(struct sk_buff *skb, struct netlink_callba
 		spin_unlock(&ilb->lock);
 		rcu_read_unlock();
 
-		cond_resched();
 		diag_ctx->l_num = 0;
 	}
 
@@ -190,7 +189,6 @@ static void mptcp_diag_dump(struct sk_buff *skb, struct netlink_callback *cb,
 			diag_ctx->s_num--;
 			break;
 		}
-		cond_resched();
 	}
 
 	if ((r->idiag_states & TCPF_LISTEN) && r->id.idiag_dport == 0)
diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
index 9661f3812682..b48d2636ce8d 100644
--- a/net/mptcp/pm_netlink.c
+++ b/net/mptcp/pm_netlink.c
@@ -1297,7 +1297,6 @@ static int mptcp_nl_add_subflow_or_signal_addr(struct net *net)
 
 next:
 		sock_put(sk);
-		cond_resched();
 	}
 
 	return 0;
@@ -1443,7 +1442,6 @@ static int mptcp_nl_remove_subflow_and_signal_addr(struct net *net,
 
 next:
 		sock_put(sk);
-		cond_resched();
 	}
 
 	return 0;
@@ -1478,7 +1476,6 @@ static int mptcp_nl_remove_id_zero_address(struct net *net,
 
 next:
 		sock_put(sk);
-		cond_resched();
 	}
 
 	return 0;
@@ -1594,7 +1591,6 @@ static void mptcp_nl_remove_addrs_list(struct net *net,
 		}
 
 		sock_put(sk);
-		cond_resched();
 	}
 }
 
@@ -1878,7 +1874,6 @@ static int mptcp_nl_set_flags(struct net *net,
 
 next:
 		sock_put(sk);
-		cond_resched();
 	}
 
 	return ret;
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 886ab689a8ae..8c4a51903b23 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -3383,7 +3383,6 @@ static void mptcp_release_cb(struct sock *sk)
 		if (flags & BIT(MPTCP_RETRANSMIT))
 			__mptcp_retrans(sk);
 
-		cond_resched();
 		spin_lock_bh(&sk->sk_lock.slock);
 	}
 
diff --git a/net/rds/ib_recv.c b/net/rds/ib_recv.c
index e53b7f266bd7..d2111e895a10 100644
--- a/net/rds/ib_recv.c
+++ b/net/rds/ib_recv.c
@@ -459,8 +459,6 @@ void rds_ib_recv_refill(struct rds_connection *conn, int prefill, gfp_t gfp)
 	    rds_ib_ring_empty(&ic->i_recv_ring))) {
 		queue_delayed_work(rds_wq, &conn->c_recv_w, 1);
 	}
-	if (can_wait)
-		cond_resched();
 }
 
 /*
diff --git a/net/rds/tcp.c b/net/rds/tcp.c
index 2dba7505b414..9b4d07235904 100644
--- a/net/rds/tcp.c
+++ b/net/rds/tcp.c
@@ -530,7 +530,7 @@ static void rds_tcp_accept_worker(struct work_struct *work)
 					       rds_tcp_accept_w);
 
 	while (rds_tcp_accept_one(rtn->rds_tcp_listen_sock) == 0)
-		cond_resched();
+		cond_resched_stall();
 }
 
 void rds_tcp_accept_work(struct sock *sk)
diff --git a/net/rds/threads.c b/net/rds/threads.c
index 1f424cbfcbb4..2a75b48769e8 100644
--- a/net/rds/threads.c
+++ b/net/rds/threads.c
@@ -198,7 +198,6 @@ void rds_send_worker(struct work_struct *work)
 	if (rds_conn_path_state(cp) == RDS_CONN_UP) {
 		clear_bit(RDS_LL_SEND_FULL, &cp->cp_flags);
 		ret = rds_send_xmit(cp);
-		cond_resched();
 		rdsdebug("conn %p ret %d\n", cp->cp_conn, ret);
 		switch (ret) {
 		case -EAGAIN:
diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
index 773eecd1e979..d2704a492a3c 100644
--- a/net/rxrpc/call_object.c
+++ b/net/rxrpc/call_object.c
@@ -755,7 +755,7 @@ void rxrpc_destroy_all_calls(struct rxrpc_net *rxnet)
 			       call->flags, call->events);
 
 			spin_unlock(&rxnet->call_lock);
-			cond_resched();
+			cpu_relax();
 			spin_lock(&rxnet->call_lock);
 		}
 
diff --git a/net/sctp/socket.c b/net/sctp/socket.c
index 7f89e43154c0..448112919848 100644
--- a/net/sctp/socket.c
+++ b/net/sctp/socket.c
@@ -8364,7 +8364,6 @@ static int sctp_get_port_local(struct sock *sk, union sctp_addr *addr)
 			break;
 		next:
 			spin_unlock_bh(&head->lock);
-			cond_resched();
 		} while (--remaining > 0);
 
 		/* Exhausted local port range during search? */
diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
index 95ff74706104..3bcacfbbf35f 100644
--- a/net/sunrpc/cache.c
+++ b/net/sunrpc/cache.c
@@ -521,10 +521,17 @@ static void do_cache_clean(struct work_struct *work)
  */
 void cache_flush(void)
 {
+	/*
+	 * We call cache_clean() in what is seemingly a tight loop. But,
+	 * the scheduler can always preempt us when we give up the spinlock
+	 * in cache_clean().
+	 */
+
 	while (cache_clean() != -1)
-		cond_resched();
+		;
+
 	while (cache_clean() != -1)
-		cond_resched();
+		;
 }
 EXPORT_SYMBOL_GPL(cache_flush);
 
diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
index 6debf4fd42d4..5b7a3c8a271f 100644
--- a/net/sunrpc/sched.c
+++ b/net/sunrpc/sched.c
@@ -950,7 +950,7 @@ static void __rpc_execute(struct rpc_task *task)
 		 * Lockless check for whether task is sleeping or not.
 		 */
 		if (!RPC_IS_QUEUED(task)) {
-			cond_resched();
+			cond_resched_stall();
 			continue;
 		}
 
diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
index 4cfe9640df48..d2486645d725 100644
--- a/net/sunrpc/svc_xprt.c
+++ b/net/sunrpc/svc_xprt.c
@@ -851,7 +851,6 @@ void svc_recv(struct svc_rqst *rqstp)
 		goto out;
 
 	try_to_freeze();
-	cond_resched();
 	if (kthread_should_stop())
 		goto out;
 
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index a15bf2ede89b..50c1f2556b3e 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -776,7 +776,6 @@ static void xs_stream_data_receive(struct sock_xprt *transport)
 		if (ret < 0)
 			break;
 		read += ret;
-		cond_resched();
 	}
 	if (ret == -ESHUTDOWN)
 		kernel_sock_shutdown(transport->sock, SHUT_RDWR);
@@ -1412,7 +1411,6 @@ static void xs_udp_data_receive(struct sock_xprt *transport)
 			break;
 		xs_udp_data_read_skb(&transport->xprt, sk, skb);
 		consume_skb(skb);
-		cond_resched();
 	}
 	xs_poll_check_readable(transport);
 out:
diff --git a/net/tipc/core.c b/net/tipc/core.c
index 434e70eabe08..ed4cd5faa387 100644
--- a/net/tipc/core.c
+++ b/net/tipc/core.c
@@ -119,7 +119,7 @@ static void __net_exit tipc_exit_net(struct net *net)
 	tipc_crypto_stop(&tipc_net(net)->crypto_tx);
 #endif
 	while (atomic_read(&tn->wq_count))
-		cond_resched();
+		cond_resched_stall();
 }
 
 static void __net_exit tipc_pernet_pre_exit(struct net *net)
diff --git a/net/tipc/topsrv.c b/net/tipc/topsrv.c
index 8ee0c07d00e9..13cd3816fb52 100644
--- a/net/tipc/topsrv.c
+++ b/net/tipc/topsrv.c
@@ -277,7 +277,6 @@ static void tipc_conn_send_to_sock(struct tipc_conn *con)
 			ret = kernel_sendmsg(con->sock, &msg, &iov,
 					     1, sizeof(*evt));
 			if (ret == -EWOULDBLOCK || ret == 0) {
-				cond_resched();
 				return;
 			} else if (ret < 0) {
 				return tipc_conn_close(con);
@@ -288,7 +287,6 @@ static void tipc_conn_send_to_sock(struct tipc_conn *con)
 
 		/* Don't starve users filling buffers */
 		if (++count >= MAX_SEND_MSG_COUNT) {
-			cond_resched();
 			count = 0;
 		}
 		spin_lock_bh(&con->outqueue_lock);
@@ -426,7 +424,6 @@ static void tipc_conn_recv_work(struct work_struct *work)
 
 		/* Don't flood Rx machine */
 		if (++count >= MAX_RECV_MSG_COUNT) {
-			cond_resched();
 			count = 0;
 		}
 	}
diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
index 3e8a04a13668..bb1367f93db2 100644
--- a/net/unix/af_unix.c
+++ b/net/unix/af_unix.c
@@ -1184,10 +1184,9 @@ static int unix_autobind(struct sock *sk)
 		unix_table_double_unlock(net, old_hash, new_hash);
 
 		/* __unix_find_socket_byname() may take long time if many names
-		 * are already in use.
+		 * are already in use. The unlock above would have allowed the
+		 * scheduler to preempt if preemption was needed.
 		 */
-		cond_resched();
-
 		if (ordernum == lastnum) {
 			/* Give up if all names seems to be in use. */
 			err = -ENOSPC;
diff --git a/net/x25/af_x25.c b/net/x25/af_x25.c
index 0fb5143bec7a..2a6b05bcb53d 100644
--- a/net/x25/af_x25.c
+++ b/net/x25/af_x25.c
@@ -343,7 +343,6 @@ static unsigned int x25_new_lci(struct x25_neigh *nb)
 			lci = 0;
 			break;
 		}
-		cond_resched();
 	}
 
 	return lci;
-- 
2.31.1

