linux-kernel.vger.kernel.org archive mirror
From: Ankur Arora <ankur.a.arora@oracle.com>
To: linux-kernel@vger.kernel.org
Cc: tglx@linutronix.de, peterz@infradead.org,
	torvalds@linux-foundation.org, paulmck@kernel.org,
	linux-mm@kvack.org, x86@kernel.org, akpm@linux-foundation.org,
	luto@kernel.org, bp@alien8.de, dave.hansen@linux.intel.com,
	hpa@zytor.com, mingo@redhat.com, juri.lelli@redhat.com,
	vincent.guittot@linaro.org, willy@infradead.org, mgorman@suse.de,
	jon.grimm@amd.com, bharata@amd.com, raghavendra.kt@amd.com,
	boris.ostrovsky@oracle.com, konrad.wilk@oracle.com,
	jgross@suse.com, andrew.cooper3@citrix.com, mingo@kernel.org,
	bristot@kernel.org, mathieu.desnoyers@efficios.com,
	geert@linux-m68k.org, glaubitz@physik.fu-berlin.de,
	anton.ivanov@cambridgegreys.com, mattst88@gmail.com,
	krypton@ulrich-teichert.org, rostedt@goodmis.org,
	David.Laight@ACULAB.COM, richard@nod.at, mjguzik@gmail.com,
	Ankur Arora <ankur.a.arora@oracle.com>,
	Inki Dae <inki.dae@samsung.com>,
	Jagan Teki <jagan@amarulasolutions.com>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	Andrzej Hajda <andrzej.hajda@intel.com>,
	Neil Armstrong <neil.armstrong@linaro.org>,
	Robert Foss <rfoss@kernel.org>, David Airlie <airlied@gmail.com>,
	Daniel Vetter <daniel@ffwll.ch>,
	Maarten Lankhorst <maarten.lankhorst@linux.intel.com>,
	Maxime Ripard <mripard@kernel.org>,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [RFC PATCH 83/86] treewide: drm: remove cond_resched()
Date: Tue,  7 Nov 2023 15:08:19 -0800	[thread overview]
Message-ID: <20231107230822.371443-27-ankur.a.arora@oracle.com> (raw)
In-Reply-To: <20231107230822.371443-1-ankur.a.arora@oracle.com>

There are broadly three sets of uses of cond_resched():

1.  Calls to cond_resched() out of the goodness of our heart,
    otherwise known as avoiding lockup splats.

2.  Open coded variants of cond_resched_lock() which call
    cond_resched().

3.  Retry or error handling loops, where cond_resched() is used as a
    quick alternative to spinning in a tight loop.

When running under a full preemption model, cond_resched() reduces
to a NOP (not even a barrier), so removing it obviously cannot matter.

But considering only voluntary preemption models (for, say, code that
has mostly been tested under those), the scheduler can now preempt
kernel tasks running beyond their time quanta anywhere they are
preemptible() [1]. That removes any need for the explicitly placed
scheduling points in sets 1 and 2.

The cond_resched() calls in set-3 are a little more difficult.
To start with, given its NOP character under full preemption, it
never actually saved us from a tight loop.
With voluntary preemption, it is not a NOP, but it might as well be --
for most workloads the scheduler does not have an interminable supply
of runnable tasks on the runqueue.

So, cond_resched() is useful for avoiding softlockup splats, but not
terribly good for error handling. Ideally, these should be replaced
with some kind of timed or event wait.
For now we use cond_resched_stall(), which tries to schedule if
possible, and executes a cpu_relax() if not.

Most of the uses here are in set-1 (some right after we give up a lock
or enable bottom-halves, which causes an explicit preemption check).

There are a few cases from set-3. Replace them with
cond_resched_stall().

[1] https://lore.kernel.org/lkml/20231107215742.363031-1-ankur.a.arora@oracle.com/

Cc: Inki Dae <inki.dae@samsung.com> 
Cc: Jagan Teki <jagan@amarulasolutions.com> 
Cc: Marek Szyprowski <m.szyprowski@samsung.com> 
Cc: Andrzej Hajda <andrzej.hajda@intel.com> 
Cc: Neil Armstrong <neil.armstrong@linaro.org> 
Cc: Robert Foss <rfoss@kernel.org> 
Cc: David Airlie <airlied@gmail.com> 
Cc: Daniel Vetter <daniel@ffwll.ch> 
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> 
Cc: Maxime Ripard <mripard@kernel.org> 
Cc: Thomas Zimmermann <tzimmermann@suse.de> 
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
---
 drivers/gpu/drm/bridge/samsung-dsim.c         |  2 +-
 drivers/gpu/drm/drm_buddy.c                   |  1 -
 drivers/gpu/drm/drm_gem.c                     |  1 -
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_object.c    |  1 -
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c     |  2 --
 .../gpu/drm/i915/gem/selftests/huge_pages.c   |  6 ----
 .../drm/i915/gem/selftests/i915_gem_mman.c    |  5 ----
 drivers/gpu/drm/i915/gt/intel_breadcrumbs.c   |  2 +-
 drivers/gpu/drm/i915/gt/intel_gt.c            |  2 +-
 drivers/gpu/drm/i915/gt/intel_migrate.c       |  4 ---
 drivers/gpu/drm/i915/gt/selftest_execlists.c  |  4 ---
 drivers/gpu/drm/i915/gt/selftest_hangcheck.c  |  2 --
 drivers/gpu/drm/i915/gt/selftest_lrc.c        |  2 --
 drivers/gpu/drm/i915/gt/selftest_migrate.c    |  2 --
 drivers/gpu/drm/i915/gt/selftest_timeline.c   |  4 ---
 drivers/gpu/drm/i915/i915_active.c            |  2 +-
 drivers/gpu/drm/i915/i915_gem_evict.c         |  2 --
 drivers/gpu/drm/i915/i915_gpu_error.c         | 18 ++++--------
 drivers/gpu/drm/i915/intel_uncore.c           |  1 -
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c |  2 --
 drivers/gpu/drm/i915/selftests/i915_request.c |  2 --
 .../gpu/drm/i915/selftests/i915_selftest.c    |  3 --
 drivers/gpu/drm/i915/selftests/i915_vma.c     |  9 ------
 .../gpu/drm/i915/selftests/igt_flush_test.c   |  2 --
 .../drm/i915/selftests/intel_memory_region.c  |  4 ---
 drivers/gpu/drm/tests/drm_buddy_test.c        |  5 ----
 drivers/gpu/drm/tests/drm_mm_test.c           | 29 -------------------
 28 files changed, 11 insertions(+), 110 deletions(-)

diff --git a/drivers/gpu/drm/bridge/samsung-dsim.c b/drivers/gpu/drm/bridge/samsung-dsim.c
index cf777bdb25d2..ae537b9bf8df 100644
--- a/drivers/gpu/drm/bridge/samsung-dsim.c
+++ b/drivers/gpu/drm/bridge/samsung-dsim.c
@@ -1013,7 +1013,7 @@ static int samsung_dsim_wait_for_hdr_fifo(struct samsung_dsim *dsi)
 		if (reg & DSIM_SFR_HEADER_EMPTY)
 			return 0;
 
-		if (!cond_resched())
+		if (!cond_resched_stall())
 			usleep_range(950, 1050);
 	} while (--timeout);
 
diff --git a/drivers/gpu/drm/drm_buddy.c b/drivers/gpu/drm/drm_buddy.c
index e6f5ba5f4baf..fe401d18bf4d 100644
--- a/drivers/gpu/drm/drm_buddy.c
+++ b/drivers/gpu/drm/drm_buddy.c
@@ -311,7 +311,6 @@ void drm_buddy_free_list(struct drm_buddy *mm, struct list_head *objects)
 
 	list_for_each_entry_safe(block, on, objects, link) {
 		drm_buddy_free_block(mm, block);
-		cond_resched();
 	}
 	INIT_LIST_HEAD(objects);
 }
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 44a948b80ee1..881caa4b48a9 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -506,7 +506,6 @@ static void drm_gem_check_release_batch(struct folio_batch *fbatch)
 {
 	check_move_unevictable_folios(fbatch);
 	__folio_batch_release(fbatch);
-	cond_resched();
 }
 
 /**
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 5a687a3686bd..0b16689423b4 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -1812,7 +1812,7 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb)
 		err = eb_copy_relocations(eb);
 		have_copy = err == 0;
 	} else {
-		cond_resched();
+		cond_resched_stall();
 		err = 0;
 	}
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index ef9346ed6d0f..172eee1e8889 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -414,7 +414,6 @@ static void __i915_gem_free_objects(struct drm_i915_private *i915,
 
 		/* But keep the pointer alive for RCU-protected lookups */
 		call_rcu(&obj->rcu, __i915_gem_free_object_rcu);
-		cond_resched();
 	}
 }
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index 73a4a4eb29e0..38ea2fc206e0 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -26,7 +26,6 @@ static void check_release_folio_batch(struct folio_batch *fbatch)
 {
 	check_move_unevictable_folios(fbatch);
 	__folio_batch_release(fbatch);
-	cond_resched();
 }
 
 void shmem_sg_free_table(struct sg_table *st, struct address_space *mapping,
@@ -108,7 +107,6 @@ int shmem_sg_alloc_table(struct drm_i915_private *i915, struct sg_table *st,
 		gfp_t gfp = noreclaim;
 
 		do {
-			cond_resched();
 			folio = shmem_read_folio_gfp(mapping, i, gfp);
 			if (!IS_ERR(folio))
 				break;
diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
index 6b9f6cf50bf6..fae0fa993404 100644
--- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
+++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
@@ -1447,8 +1447,6 @@ static int igt_ppgtt_smoke_huge(void *arg)
 
 		if (err)
 			break;
-
-		cond_resched();
 	}
 
 	return err;
@@ -1538,8 +1536,6 @@ static int igt_ppgtt_sanity_check(void *arg)
 				goto out;
 			}
 		}
-
-		cond_resched();
 	}
 
 out:
@@ -1738,8 +1734,6 @@ static int igt_ppgtt_mixed(void *arg)
 			break;
 
 		addr += obj->base.size;
-
-		cond_resched();
 	}
 
 	i915_gem_context_unlock_engines(ctx);
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
index 72957a36a36b..c994071532cf 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
@@ -221,7 +221,6 @@ static int check_partial_mappings(struct drm_i915_gem_object *obj,
 		u32 *cpu;
 
 		GEM_BUG_ON(view.partial.size > nreal);
-		cond_resched();
 
 		vma = i915_gem_object_ggtt_pin(obj, &view, 0, 0, PIN_MAPPABLE);
 		if (IS_ERR(vma)) {
@@ -1026,8 +1025,6 @@ static void igt_close_objects(struct drm_i915_private *i915,
 		i915_gem_object_put(obj);
 	}
 
-	cond_resched();
-
 	i915_gem_drain_freed_objects(i915);
 }
 
@@ -1041,8 +1038,6 @@ static void igt_make_evictable(struct list_head *objects)
 			i915_gem_object_unpin_pages(obj);
 		i915_gem_object_unlock(obj);
 	}
-
-	cond_resched();
 }
 
 static int igt_fill_mappable(struct intel_memory_region *mr,
diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
index ecc990ec1b95..e016f1203f7c 100644
--- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
@@ -315,7 +315,7 @@ void __intel_breadcrumbs_park(struct intel_breadcrumbs *b)
 		local_irq_disable();
 		signal_irq_work(&b->irq_work);
 		local_irq_enable();
-		cond_resched();
+		cond_resched_stall();
 	}
 }
 
diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c
index 449f0b7fc843..40cfdf4f5fff 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt.c
@@ -664,7 +664,7 @@ int intel_gt_wait_for_idle(struct intel_gt *gt, long timeout)
 
 	while ((timeout = intel_gt_retire_requests_timeout(gt, timeout,
 							   &remaining_timeout)) > 0) {
-		cond_resched();
+		cond_resched_stall();
 		if (signal_pending(current))
 			return -EINTR;
 	}
diff --git a/drivers/gpu/drm/i915/gt/intel_migrate.c b/drivers/gpu/drm/i915/gt/intel_migrate.c
index 576e5ef0289b..cc3f62d5c28f 100644
--- a/drivers/gpu/drm/i915/gt/intel_migrate.c
+++ b/drivers/gpu/drm/i915/gt/intel_migrate.c
@@ -906,8 +906,6 @@ intel_context_migrate_copy(struct intel_context *ce,
 			err = -EINVAL;
 			break;
 		}
-
-		cond_resched();
 	} while (1);
 
 out_ce:
@@ -1067,8 +1065,6 @@ intel_context_migrate_clear(struct intel_context *ce,
 		i915_request_add(rq);
 		if (err || !it.sg || !sg_dma_len(it.sg))
 			break;
-
-		cond_resched();
 	} while (1);
 
 out_ce:
diff --git a/drivers/gpu/drm/i915/gt/selftest_execlists.c b/drivers/gpu/drm/i915/gt/selftest_execlists.c
index 4202df5b8c12..52c8fa3e5cad 100644
--- a/drivers/gpu/drm/i915/gt/selftest_execlists.c
+++ b/drivers/gpu/drm/i915/gt/selftest_execlists.c
@@ -60,8 +60,6 @@ static int wait_for_submit(struct intel_engine_cs *engine,
 
 		if (done)
 			return -ETIME;
-
-		cond_resched();
 	} while (1);
 }
 
@@ -72,7 +70,6 @@ static int wait_for_reset(struct intel_engine_cs *engine,
 	timeout += jiffies;
 
 	do {
-		cond_resched();
 		intel_engine_flush_submission(engine);
 
 		if (READ_ONCE(engine->execlists.pending[0]))
@@ -1373,7 +1370,6 @@ static int live_timeslice_queue(void *arg)
 
 		/* Wait until we ack the release_queue and start timeslicing */
 		do {
-			cond_resched();
 			intel_engine_flush_submission(engine);
 		} while (READ_ONCE(engine->execlists.pending[0]));
 
diff --git a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
index 0dd4d00ee894..e751ed2cf8b2 100644
--- a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
+++ b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
@@ -939,8 +939,6 @@ static void active_engine(struct kthread_work *work)
 			pr_err("[%s] Request put failed: %d!\n", engine->name, err);
 			break;
 		}
-
-		cond_resched();
 	}
 
 	for (count = 0; count < ARRAY_SIZE(rq); count++) {
diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index 5f826b6dcf5d..83a42492f0d0 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -70,8 +70,6 @@ static int wait_for_submit(struct intel_engine_cs *engine,
 
 		if (done)
 			return -ETIME;
-
-		cond_resched();
 	} while (1);
 }
 
diff --git a/drivers/gpu/drm/i915/gt/selftest_migrate.c b/drivers/gpu/drm/i915/gt/selftest_migrate.c
index 3def5ca72dec..9dfa70699df9 100644
--- a/drivers/gpu/drm/i915/gt/selftest_migrate.c
+++ b/drivers/gpu/drm/i915/gt/selftest_migrate.c
@@ -210,8 +210,6 @@ static int intel_context_copy_ccs(struct intel_context *ce,
 		i915_request_add(rq);
 		if (err || !it.sg || !sg_dma_len(it.sg))
 			break;
-
-		cond_resched();
 	} while (1);
 
 out_ce:
diff --git a/drivers/gpu/drm/i915/gt/selftest_timeline.c b/drivers/gpu/drm/i915/gt/selftest_timeline.c
index fa36cf920bde..15b8fd41ad90 100644
--- a/drivers/gpu/drm/i915/gt/selftest_timeline.c
+++ b/drivers/gpu/drm/i915/gt/selftest_timeline.c
@@ -352,7 +352,6 @@ static int bench_sync(void *arg)
 		__func__, count, (long long)div64_ul(ktime_to_ns(kt), count));
 
 	mock_timeline_fini(&tl);
-	cond_resched();
 
 	mock_timeline_init(&tl, 0);
 
@@ -382,7 +381,6 @@ static int bench_sync(void *arg)
 		__func__, count, (long long)div64_ul(ktime_to_ns(kt), count));
 
 	mock_timeline_fini(&tl);
-	cond_resched();
 
 	mock_timeline_init(&tl, 0);
 
@@ -405,7 +403,6 @@ static int bench_sync(void *arg)
 	pr_info("%s: %lu repeated insert/lookups, %lluns/op\n",
 		__func__, count, (long long)div64_ul(ktime_to_ns(kt), count));
 	mock_timeline_fini(&tl);
-	cond_resched();
 
 	/* Benchmark searching for a known context id and changing the seqno */
 	for (last_order = 1, order = 1; order < 32;
@@ -434,7 +431,6 @@ static int bench_sync(void *arg)
 			__func__, count, order,
 			(long long)div64_ul(ktime_to_ns(kt), count));
 		mock_timeline_fini(&tl);
-		cond_resched();
 	}
 
 	return 0;
diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
index 5ec293011d99..810251c33495 100644
--- a/drivers/gpu/drm/i915/i915_active.c
+++ b/drivers/gpu/drm/i915/i915_active.c
@@ -865,7 +865,7 @@ int i915_active_acquire_preallocate_barrier(struct i915_active *ref,
 
 	/* Wait until the previous preallocation is completed */
 	while (!llist_empty(&ref->preallocated_barriers))
-		cond_resched();
+		cond_resched_stall();
 
 	/*
 	 * Preallocate a node for each physical engine supporting the target
diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c
index c02ebd6900ae..1a600f42a3ad 100644
--- a/drivers/gpu/drm/i915/i915_gem_evict.c
+++ b/drivers/gpu/drm/i915/i915_gem_evict.c
@@ -267,8 +267,6 @@ i915_gem_evict_something(struct i915_address_space *vm,
 	if (ret)
 		return ret;
 
-	cond_resched();
-
 	flags |= PIN_NONBLOCK;
 	goto search_again;
 
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 4008bb09fdb5..410072145d4d 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -320,8 +320,6 @@ static int compress_page(struct i915_vma_compress *c,
 
 		if (zlib_deflate(zstream, Z_NO_FLUSH) != Z_OK)
 			return -EIO;
-
-		cond_resched();
 	} while (zstream->avail_in);
 
 	/* Fallback to uncompressed if we increase size? */
@@ -408,7 +406,6 @@ static int compress_page(struct i915_vma_compress *c,
 	if (!(wc && i915_memcpy_from_wc(ptr, src, PAGE_SIZE)))
 		memcpy(ptr, src, PAGE_SIZE);
 	list_add_tail(&virt_to_page(ptr)->lru, &dst->page_list);
-	cond_resched();
 
 	return 0;
 }
@@ -2325,13 +2322,6 @@ void intel_klog_error_capture(struct intel_gt *gt,
 						 l_count, line++, ptr2);
 					ptr[pos] = chr;
 					ptr2 = ptr + pos;
-
-					/*
-					 * If spewing large amounts of data via a serial console,
-					 * this can be a very slow process. So be friendly and try
-					 * not to cause 'softlockup on CPU' problems.
-					 */
-					cond_resched();
 				}
 
 				if (ptr2 < (ptr + count))
@@ -2352,8 +2342,12 @@ void intel_klog_error_capture(struct intel_gt *gt,
 				got--;
 			}
 
-			/* As above. */
-			cond_resched();
+			/*
+			 * If spewing large amounts of data via a serial console,
+			 * this can be a very slow process. So be friendly and try
+			 * not to cause 'softlockup on CPU' problems.
+			 */
+			cond_resched_stall();
 		}
 
 		if (got)
diff --git a/drivers/gpu/drm/i915/intel_uncore.c b/drivers/gpu/drm/i915/intel_uncore.c
index dfefad5a5fec..d2e74cfb1aac 100644
--- a/drivers/gpu/drm/i915/intel_uncore.c
+++ b/drivers/gpu/drm/i915/intel_uncore.c
@@ -487,7 +487,6 @@ intel_uncore_forcewake_reset(struct intel_uncore *uncore)
 		}
 
 		spin_unlock_irqrestore(&uncore->lock, irqflags);
-		cond_resched();
 	}
 
 	drm_WARN_ON(&uncore->i915->drm, active_domains);
diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
index 5c397a2df70e..4b497e969a33 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -201,7 +201,6 @@ static int igt_ppgtt_alloc(void *arg)
 		}
 
 		ppgtt->vm.allocate_va_range(&ppgtt->vm, &stash, 0, size);
-		cond_resched();
 
 		ppgtt->vm.clear_range(&ppgtt->vm, 0, size);
 
@@ -224,7 +223,6 @@ static int igt_ppgtt_alloc(void *arg)
 
 		ppgtt->vm.allocate_va_range(&ppgtt->vm, &stash,
 					    last, size - last);
-		cond_resched();
 
 		i915_vm_free_pt_stash(&ppgtt->vm, &stash);
 	}
diff --git a/drivers/gpu/drm/i915/selftests/i915_request.c b/drivers/gpu/drm/i915/selftests/i915_request.c
index a9b79888c193..43bb54fc8c78 100644
--- a/drivers/gpu/drm/i915/selftests/i915_request.c
+++ b/drivers/gpu/drm/i915/selftests/i915_request.c
@@ -438,8 +438,6 @@ static void __igt_breadcrumbs_smoketest(struct kthread_work *work)
 
 		num_fences += count;
 		num_waits++;
-
-		cond_resched();
 	}
 
 	atomic_long_add(num_fences, &t->num_fences);
diff --git a/drivers/gpu/drm/i915/selftests/i915_selftest.c b/drivers/gpu/drm/i915/selftests/i915_selftest.c
index ee79e0809a6d..17e6bbc3c87e 100644
--- a/drivers/gpu/drm/i915/selftests/i915_selftest.c
+++ b/drivers/gpu/drm/i915/selftests/i915_selftest.c
@@ -179,7 +179,6 @@ static int __run_selftests(const char *name,
 		if (!st->enabled)
 			continue;
 
-		cond_resched();
 		if (signal_pending(current))
 			return -EINTR;
 
@@ -381,7 +380,6 @@ int __i915_subtests(const char *caller,
 	int err;
 
 	for (; count--; st++) {
-		cond_resched();
 		if (signal_pending(current))
 			return -EINTR;
 
@@ -414,7 +412,6 @@ bool __igt_timeout(unsigned long timeout, const char *fmt, ...)
 	va_list va;
 
 	if (!signal_pending(current)) {
-		cond_resched();
 		if (time_before(jiffies, timeout))
 			return false;
 	}
diff --git a/drivers/gpu/drm/i915/selftests/i915_vma.c b/drivers/gpu/drm/i915/selftests/i915_vma.c
index 71b52d5efef4..1bacdcd77c5b 100644
--- a/drivers/gpu/drm/i915/selftests/i915_vma.c
+++ b/drivers/gpu/drm/i915/selftests/i915_vma.c
@@ -197,8 +197,6 @@ static int igt_vma_create(void *arg)
 			list_del_init(&ctx->link);
 			mock_context_close(ctx);
 		}
-
-		cond_resched();
 	}
 
 end:
@@ -347,8 +345,6 @@ static int igt_vma_pin1(void *arg)
 				goto out;
 			}
 		}
-
-		cond_resched();
 	}
 
 	err = 0;
@@ -697,7 +693,6 @@ static int igt_vma_rotate_remap(void *arg)
 						pr_err("Unbinding returned %i\n", err);
 						goto out_object;
 					}
-					cond_resched();
 				}
 			}
 		}
@@ -858,8 +853,6 @@ static int igt_vma_partial(void *arg)
 					pr_err("Unbinding returned %i\n", err);
 					goto out_object;
 				}
-
-				cond_resched();
 			}
 		}
 
@@ -1085,8 +1078,6 @@ static int igt_vma_remapped_gtt(void *arg)
 				}
 			}
 			i915_vma_unpin_iomap(vma);
-
-			cond_resched();
 		}
 	}
 
diff --git a/drivers/gpu/drm/i915/selftests/igt_flush_test.c b/drivers/gpu/drm/i915/selftests/igt_flush_test.c
index 29110abb4fe0..fbc1b606df29 100644
--- a/drivers/gpu/drm/i915/selftests/igt_flush_test.c
+++ b/drivers/gpu/drm/i915/selftests/igt_flush_test.c
@@ -22,8 +22,6 @@ int igt_flush_test(struct drm_i915_private *i915)
 		if (intel_gt_is_wedged(gt))
 			ret = -EIO;
 
-		cond_resched();
-
 		if (intel_gt_wait_for_idle(gt, HZ * 3) == -ETIME) {
 			pr_err("%pS timed out, cancelling all further testing.\n",
 			       __builtin_return_address(0));
diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
index d985d9bae2e8..3fce433284bd 100644
--- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c
+++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
@@ -46,8 +46,6 @@ static void close_objects(struct intel_memory_region *mem,
 		i915_gem_object_put(obj);
 	}
 
-	cond_resched();
-
 	i915_gem_drain_freed_objects(i915);
 }
 
@@ -1290,8 +1288,6 @@ static int _perf_memcpy(struct intel_memory_region *src_mr,
 			div64_u64(mul_u32_u32(4 * size,
 					      1000 * 1000 * 1000),
 				  t[1] + 2 * t[2] + t[3]) >> 20);
-
-		cond_resched();
 	}
 
 	i915_gem_object_unpin_map(dst);
diff --git a/drivers/gpu/drm/tests/drm_buddy_test.c b/drivers/gpu/drm/tests/drm_buddy_test.c
index 09ee6f6af896..7ee65bad4bb7 100644
--- a/drivers/gpu/drm/tests/drm_buddy_test.c
+++ b/drivers/gpu/drm/tests/drm_buddy_test.c
@@ -29,7 +29,6 @@ static bool __timeout(unsigned long timeout, const char *fmt, ...)
 	va_list va;
 
 	if (!signal_pending(current)) {
-		cond_resched();
 		if (time_before(jiffies, timeout))
 			return false;
 	}
@@ -485,8 +484,6 @@ static void drm_test_buddy_alloc_smoke(struct kunit *test)
 
 		if (err || timeout)
 			break;
-
-		cond_resched();
 	}
 
 	kfree(order);
@@ -681,8 +678,6 @@ static void drm_test_buddy_alloc_range(struct kunit *test)
 		rem -= size;
 		if (!rem)
 			break;
-
-		cond_resched();
 	}
 
 	drm_buddy_free_list(&mm, &blocks);
diff --git a/drivers/gpu/drm/tests/drm_mm_test.c b/drivers/gpu/drm/tests/drm_mm_test.c
index 05d5e7af6d25..7d11740ef599 100644
--- a/drivers/gpu/drm/tests/drm_mm_test.c
+++ b/drivers/gpu/drm/tests/drm_mm_test.c
@@ -474,8 +474,6 @@ static void drm_test_mm_reserve(struct kunit *test)
 		KUNIT_ASSERT_FALSE(test, __drm_test_mm_reserve(test, count, size - 1));
 		KUNIT_ASSERT_FALSE(test, __drm_test_mm_reserve(test, count, size));
 		KUNIT_ASSERT_FALSE(test, __drm_test_mm_reserve(test, count, size + 1));
-
-		cond_resched();
 	}
 }
 
@@ -645,8 +643,6 @@ static int __drm_test_mm_insert(struct kunit *test, unsigned int count, u64 size
 		drm_mm_for_each_node_safe(node, next, &mm)
 			drm_mm_remove_node(node);
 		DRM_MM_BUG_ON(!drm_mm_clean(&mm));
-
-		cond_resched();
 	}
 
 	ret = 0;
@@ -671,8 +667,6 @@ static void drm_test_mm_insert(struct kunit *test)
 		KUNIT_ASSERT_FALSE(test, __drm_test_mm_insert(test, count, size - 1, false));
 		KUNIT_ASSERT_FALSE(test, __drm_test_mm_insert(test, count, size, false));
 		KUNIT_ASSERT_FALSE(test, __drm_test_mm_insert(test, count, size + 1, false));
-
-		cond_resched();
 	}
 }
 
@@ -693,8 +687,6 @@ static void drm_test_mm_replace(struct kunit *test)
 		KUNIT_ASSERT_FALSE(test, __drm_test_mm_insert(test, count, size - 1, true));
 		KUNIT_ASSERT_FALSE(test, __drm_test_mm_insert(test, count, size, true));
 		KUNIT_ASSERT_FALSE(test, __drm_test_mm_insert(test, count, size + 1, true));
-
-		cond_resched();
 	}
 }
 
@@ -882,8 +874,6 @@ static int __drm_test_mm_insert_range(struct kunit *test, unsigned int count, u6
 		drm_mm_for_each_node_safe(node, next, &mm)
 			drm_mm_remove_node(node);
 		DRM_MM_BUG_ON(!drm_mm_clean(&mm));
-
-		cond_resched();
 	}
 
 	ret = 0;
@@ -942,8 +932,6 @@ static void drm_test_mm_insert_range(struct kunit *test)
 								    max / 2, max));
 		KUNIT_ASSERT_FALSE(test, __drm_test_mm_insert_range(test, count, size,
 								    max / 4 + 1, 3 * max / 4 - 1));
-
-		cond_resched();
 	}
 }
 
@@ -1086,8 +1074,6 @@ static void drm_test_mm_align(struct kunit *test)
 		drm_mm_for_each_node_safe(node, next, &mm)
 			drm_mm_remove_node(node);
 		DRM_MM_BUG_ON(!drm_mm_clean(&mm));
-
-		cond_resched();
 	}
 
 out:
@@ -1122,8 +1108,6 @@ static void drm_test_mm_align_pot(struct kunit *test, int max)
 			KUNIT_FAIL(test, "insert failed with alignment=%llx [%d]", align, bit);
 			goto out;
 		}
-
-		cond_resched();
 	}
 
 out:
@@ -1465,8 +1449,6 @@ static void drm_test_mm_evict(struct kunit *test)
 				goto out;
 			}
 		}
-
-		cond_resched();
 	}
 
 out:
@@ -1547,8 +1529,6 @@ static void drm_test_mm_evict_range(struct kunit *test)
 				goto out;
 			}
 		}
-
-		cond_resched();
 	}
 
 out:
@@ -1658,7 +1638,6 @@ static void drm_test_mm_topdown(struct kunit *test)
 		drm_mm_for_each_node_safe(node, next, &mm)
 			drm_mm_remove_node(node);
 		DRM_MM_BUG_ON(!drm_mm_clean(&mm));
-		cond_resched();
 	}
 
 out:
@@ -1750,7 +1729,6 @@ static void drm_test_mm_bottomup(struct kunit *test)
 		drm_mm_for_each_node_safe(node, next, &mm)
 			drm_mm_remove_node(node);
 		DRM_MM_BUG_ON(!drm_mm_clean(&mm));
-		cond_resched();
 	}
 
 out:
@@ -1968,8 +1946,6 @@ static void drm_test_mm_color(struct kunit *test)
 			drm_mm_remove_node(node);
 			kfree(node);
 		}
-
-		cond_resched();
 	}
 
 out:
@@ -2038,7 +2014,6 @@ static int evict_color(struct kunit *test, struct drm_mm *mm, u64 range_start,
 		}
 	}
 
-	cond_resched();
 	return 0;
 }
 
@@ -2110,8 +2085,6 @@ static void drm_test_mm_color_evict(struct kunit *test)
 				goto out;
 			}
 		}
-
-		cond_resched();
 	}
 
 out:
@@ -2196,8 +2169,6 @@ static void drm_test_mm_color_evict_range(struct kunit *test)
 				goto out;
 			}
 		}
-
-		cond_resched();
 	}
 
 out:
-- 
2.31.1


2023-12-05 20:45                       ` Steven Rostedt
2023-12-06 10:08                         ` David Laight
2023-12-07  4:34                         ` Paul E. McKenney
2023-12-07 13:44                           ` Steven Rostedt
2023-12-08  4:28                             ` Paul E. McKenney
2023-11-08 12:15   ` Julian Anastasov
2023-11-07 21:57 ` [RFC PATCH 48/86] rcu: handle quiescent states for PREEMPT_RCU=n Ankur Arora
2023-11-21  0:38   ` Paul E. McKenney
2023-11-21  3:26     ` Ankur Arora
2023-11-21  5:17       ` Paul E. McKenney
2023-11-21  5:34         ` Paul E. McKenney
2023-11-21  6:13           ` Z qiang
2023-11-21 15:32             ` Paul E. McKenney
2023-11-21 19:25           ` Paul E. McKenney
2023-11-21 20:30             ` Peter Zijlstra
2023-11-21 21:14               ` Paul E. McKenney
2023-11-21 21:38                 ` Steven Rostedt
2023-11-21 22:26                   ` Paul E. McKenney
2023-11-21 22:52                     ` Steven Rostedt
2023-11-22  0:01                       ` Paul E. McKenney
2023-11-22  0:12                         ` Steven Rostedt
2023-11-22  1:09                           ` Paul E. McKenney
2023-11-28 17:04     ` Thomas Gleixner
2023-12-05  1:33       ` Paul E. McKenney
2023-12-06 15:10         ` Thomas Gleixner
2023-12-07  4:17           ` Paul E. McKenney
2023-12-07  1:31         ` Ankur Arora
2023-12-07  2:10           ` Steven Rostedt
2023-12-07  4:37             ` Paul E. McKenney
2023-12-07 14:22           ` Thomas Gleixner
2023-11-21  3:55   ` Z qiang
2023-11-07 21:57 ` [RFC PATCH 49/86] osnoise: handle quiescent states directly Ankur Arora
2023-11-07 21:57 ` [RFC PATCH 50/86] rcu: TASKS_RCU does not need to depend on PREEMPTION Ankur Arora
2023-11-21  0:38   ` Paul E. McKenney
2023-11-07 21:57 ` [RFC PATCH 51/86] preempt: disallow !PREEMPT_COUNT or !PREEMPTION Ankur Arora
2023-11-07 21:57 ` [RFC PATCH 52/86] sched: remove CONFIG_PREEMPTION from *_needbreak() Ankur Arora
2023-11-07 21:57 ` [RFC PATCH 53/86] sched: fixup __cond_resched_*() Ankur Arora
2023-11-07 21:57 ` [RFC PATCH 54/86] sched: add cond_resched_stall() Ankur Arora
2023-11-09 11:19   ` Thomas Gleixner
2023-11-09 22:27     ` Ankur Arora
2023-11-07 21:57 ` [RFC PATCH 55/86] xarray: add cond_resched_xas_rcu() and cond_resched_xas_lock_irq() Ankur Arora
2023-11-07 21:57 ` [RFC PATCH 56/86] xarray: use cond_resched_xas*() Ankur Arora
2023-11-07 23:01 ` [RFC PATCH 00/86] Make the kernel preemptible Steven Rostedt
2023-11-07 23:43   ` Ankur Arora
2023-11-08  0:00     ` Steven Rostedt
2023-11-07 23:07 ` [RFC PATCH 57/86] coccinelle: script to remove cond_resched() Ankur Arora
2023-11-07 23:07   ` [RFC PATCH 58/86] treewide: x86: " Ankur Arora
2023-11-07 23:07   ` [RFC PATCH 59/86] treewide: rcu: " Ankur Arora
2023-11-21  1:01     ` Paul E. McKenney
2023-11-07 23:07   ` [RFC PATCH 60/86] treewide: torture: " Ankur Arora
2023-11-21  1:02     ` Paul E. McKenney
2023-11-07 23:07   ` [RFC PATCH 61/86] treewide: bpf: " Ankur Arora
2023-11-07 23:07   ` [RFC PATCH 62/86] treewide: trace: " Ankur Arora
2023-11-07 23:07   ` [RFC PATCH 63/86] treewide: futex: " Ankur Arora
2023-11-07 23:08   ` [RFC PATCH 64/86] treewide: printk: " Ankur Arora
2023-11-07 23:08   ` [RFC PATCH 65/86] treewide: task_work: " Ankur Arora
2023-11-07 23:08   ` [RFC PATCH 66/86] treewide: kernel: " Ankur Arora
2023-11-17 18:14     ` Luis Chamberlain
2023-11-17 19:51       ` Steven Rostedt
2023-11-07 23:08   ` [RFC PATCH 67/86] treewide: kernel: remove cond_reshed() Ankur Arora
2023-11-07 23:08   ` [RFC PATCH 68/86] treewide: mm: remove cond_resched() Ankur Arora
2023-11-08  1:28     ` Sergey Senozhatsky
2023-11-08  7:49       ` Vlastimil Babka
2023-11-08  8:02         ` Yosry Ahmed
2023-11-08  8:54           ` Ankur Arora
2023-11-08 12:58             ` Matthew Wilcox
2023-11-08 14:50               ` Steven Rostedt
2023-11-07 23:08   ` [RFC PATCH 69/86] treewide: io_uring: " Ankur Arora
2023-11-07 23:08   ` [RFC PATCH 70/86] treewide: ipc: " Ankur Arora
2023-11-07 23:08   ` [RFC PATCH 71/86] treewide: lib: " Ankur Arora
2023-11-08  9:15     ` Herbert Xu
2023-11-08 15:08       ` Steven Rostedt
2023-11-09  4:19         ` Herbert Xu
2023-11-09  4:43           ` Steven Rostedt
2023-11-08 19:15     ` Kees Cook
2023-11-08 19:41       ` Steven Rostedt
2023-11-08 22:16         ` Kees Cook
2023-11-08 22:21           ` Steven Rostedt
2023-11-09  9:39         ` David Laight
2023-11-07 23:08   ` [RFC PATCH 72/86] treewide: crypto: " Ankur Arora
2023-11-07 23:08   ` [RFC PATCH 73/86] treewide: security: " Ankur Arora
2023-11-07 23:08   ` [RFC PATCH 74/86] treewide: fs: " Ankur Arora
2023-11-07 23:08   ` [RFC PATCH 75/86] treewide: virt: " Ankur Arora
2023-11-07 23:08   ` [RFC PATCH 76/86] treewide: block: " Ankur Arora
2023-11-07 23:08   ` [RFC PATCH 77/86] treewide: netfilter: " Ankur Arora
2023-11-07 23:08   ` [RFC PATCH 78/86] treewide: net: " Ankur Arora
2023-11-07 23:08   ` [RFC PATCH 79/86] " Ankur Arora
2023-11-08 12:16     ` Eric Dumazet
2023-11-08 17:11       ` Steven Rostedt
2023-11-08 20:59         ` Ankur Arora
2023-11-07 23:08   ` [RFC PATCH 80/86] treewide: sound: " Ankur Arora
2023-11-07 23:08   ` [RFC PATCH 81/86] treewide: md: " Ankur Arora
2023-11-07 23:08   ` [RFC PATCH 82/86] treewide: mtd: " Ankur Arora
2023-11-08 16:28     ` Miquel Raynal
2023-11-08 16:32       ` Matthew Wilcox
2023-11-08 17:21         ` Steven Rostedt
2023-11-09  8:38           ` Miquel Raynal
2023-11-07 23:08   ` Ankur Arora [this message]
2023-11-07 23:08   ` [RFC PATCH 84/86] treewide: net: " Ankur Arora
2023-11-07 23:08   ` [RFC PATCH 85/86] treewide: drivers: " Ankur Arora
2023-11-08  0:48     ` Chris Packham
2023-11-09  0:55       ` Ankur Arora
2023-11-09 23:25     ` Dmitry Torokhov
2023-11-09 23:41       ` Steven Rostedt
2023-11-10  0:01       ` Ankur Arora
2023-11-07 23:08   ` [RFC PATCH 86/86] sched: " Ankur Arora
2023-11-07 23:19   ` [RFC PATCH 57/86] coccinelle: script to " Julia Lawall
2023-11-08  8:29     ` Ankur Arora
2023-11-08  9:49       ` Julia Lawall
2023-11-21  0:45   ` Paul E. McKenney
2023-11-21  5:16     ` Ankur Arora
2023-11-21 15:26       ` Paul E. McKenney
2023-11-08  4:08 ` [RFC PATCH 00/86] Make the kernel preemptible Christoph Lameter
2023-11-08  4:33   ` Ankur Arora
2023-11-08  4:52     ` Christoph Lameter
2023-11-08  5:12       ` Steven Rostedt
2023-11-08  6:49         ` Ankur Arora
2023-11-08  7:54         ` Vlastimil Babka
2023-11-08  7:31 ` Juergen Gross
2023-11-08  8:51 ` Peter Zijlstra
2023-11-08  9:53   ` Daniel Bristot de Oliveira
2023-11-08 10:04   ` Ankur Arora
2023-11-08 10:13     ` Peter Zijlstra
2023-11-08 11:00       ` Ankur Arora
2023-11-08 11:14         ` Peter Zijlstra
2023-11-08 12:16           ` Peter Zijlstra
2023-11-08 15:38       ` Thomas Gleixner
2023-11-08 16:15         ` Peter Zijlstra
2023-11-08 16:22         ` Steven Rostedt
2023-11-08 16:49           ` Peter Zijlstra
2023-11-08 17:18             ` Steven Rostedt
2023-11-08 20:46             ` Ankur Arora
2023-11-08 20:26         ` Ankur Arora
2023-11-08  9:43 ` David Laight
2023-11-08 15:15   ` Steven Rostedt
2023-11-08 16:29     ` David Laight
2023-11-08 16:33 ` Mark Rutland
2023-11-09  0:34   ` Ankur Arora
2023-11-09 11:00     ` Mark Rutland
2023-11-09 22:36       ` Ankur Arora
