linux-kernel.vger.kernel.org archive mirror
* [PATCH 1/5] Renaming weak prng invocations - prandom_bytes_state, prandom_u32_state
From: david.keisarschm @ 2022-12-11 22:16 UTC
  To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, x86, H. Peter Anvin, Jani Nikula,
	Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, David Airlie,
	Daniel Vetter, Miquel Raynal, Richard Weinberger,
	Vignesh Raghavendra, Hannes Reinecke, James E.J. Bottomley,
	Martin K. Petersen, Alexei Starovoitov, Daniel Borkmann,
	John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Song Liu,
	Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Jiri Pirko, Petr Mladek, Steven Rostedt, Sergey Senozhatsky,
	Andy Shevchenko, Rasmus Villemoes, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo
  Cc: David, aksecurity, ilay.bahat1, linux-kernel, intel-gfx,
	dri-devel, linux-mtd, linux-scsi, bpf, netdev, linux-mm

From: David <david.keisarschm@mail.huji.ac.il>

Since the two functions prandom_bytes_state and prandom_u32_state use the
weak PRNG prandom_u32, we added the prefix predictable_rng_ to their names
so it is clear they are weak.

Signed-off-by: David <david.keisarschm@mail.huji.ac.il>
---
 arch/x86/mm/kaslr.c                           |  2 +-
 .../gpu/drm/i915/gem/selftests/huge_pages.c   |  2 +-
 .../i915/gem/selftests/i915_gem_client_blt.c  |  2 +-
 .../i915/gem/selftests/i915_gem_coherency.c   |  2 +-
 .../drm/i915/gem/selftests/i915_gem_context.c |  2 +-
 drivers/gpu/drm/i915/gt/selftest_lrc.c        |  2 +-
 drivers/gpu/drm/i915/gt/selftest_migrate.c    |  2 +-
 drivers/gpu/drm/i915/gt/selftest_timeline.c   |  4 +-
 drivers/gpu/drm/i915/selftests/i915_random.c  |  4 +-
 drivers/gpu/drm/i915/selftests/i915_random.h  |  4 +-
 drivers/gpu/drm/i915/selftests/i915_syncmap.c |  4 +-
 .../drm/i915/selftests/intel_memory_region.c  | 10 ++---
 drivers/gpu/drm/i915/selftests/scatterlist.c  |  4 +-
 drivers/gpu/drm/lib/drm_random.c              |  2 +-
 drivers/mtd/tests/oobtest.c                   | 10 ++---
 drivers/mtd/tests/pagetest.c                  | 12 +++---
 drivers/mtd/tests/subpagetest.c               | 12 +++---
 drivers/scsi/fcoe/fcoe_ctlr.c                 |  2 +-
 include/linux/prandom.h                       |  6 +--
 kernel/bpf/core.c                             |  2 +-
 lib/interval_tree_test.c                      |  6 +--
 lib/random32.c                                | 42 +++++++++----------
 lib/rbtree_test.c                             |  4 +-
 lib/test_bpf.c                                |  2 +-
 lib/test_parman.c                             |  2 +-
 lib/test_scanf.c                              |  8 ++--
 mm/slab.c                                     |  2 +-
 mm/slab_common.c                              |  2 +-
 28 files changed, 79 insertions(+), 79 deletions(-)

diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
index 557f0fe25..66c17b449 100644
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -123,7 +123,7 @@ void __init kernel_randomize_memory(void)
 		 * available.
 		 */
 		entropy = remain_entropy / (ARRAY_SIZE(kaslr_regions) - i);
-		prandom_bytes_state(&rand_state, &rand, sizeof(rand));
+		predictable_rng_prandom_bytes_state(&rand_state, &rand, sizeof(rand));
 		entropy = (rand % (entropy + 1)) & PUD_MASK;
 		vaddr += entropy;
 		*kaslr_regions[i].base = vaddr;
diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
index c570cf780..f698d58e4 100644
--- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
+++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
@@ -1294,7 +1294,7 @@ static u32 igt_random_size(struct rnd_state *prng,
 	GEM_BUG_ON(min_page_size > max_page_size);
 
 	mask = ((max_page_size << 1ULL) - 1) & PAGE_MASK;
-	size = prandom_u32_state(prng) & mask;
+	size = predictable_rng_prandom_u32_state(prng) & mask;
 	if (size < min_page_size)
 		size |= min_page_size;
 
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c
index 9a6a6b5b7..039f17b6b 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c
@@ -630,7 +630,7 @@ static int tiled_blits_prepare(struct tiled_blits *t,
 
 	/* Use scratch to fill objects */
 	for (i = 0; i < ARRAY_SIZE(t->buffers); i++) {
-		fill_scratch(t, map, prandom_u32_state(prng));
+		fill_scratch(t, map, predictable_rng_prandom_u32_state(prng));
 		GEM_BUG_ON(verify_buffer(t, &t->scratch, prng));
 
 		err = tiled_blit(t,
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
index a666d7e61..24cc7e6d4 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
@@ -371,7 +371,7 @@ static int igt_gem_coherency(void *arg)
 
 					i915_random_reorder(offsets, ncachelines, &prng);
 					for (n = 0; n < count; n++)
-						values[n] = prandom_u32_state(&prng);
+						values[n] = predictable_rng_prandom_u32_state(&prng);
 
 					for (n = 0; n < count; n++) {
 						err = over->set(&ctx, offsets[n], ~values[n]);
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
index c6ad67b90..6e437a1d6 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
@@ -1407,7 +1407,7 @@ static int igt_ctx_readonly(void *arg)
 					goto out_file;
 				}
 
-				if (prandom_u32_state(&prng) & 1)
+				if (predictable_rng_prandom_u32_state(&prng) & 1)
 					i915_gem_object_set_readonly(obj);
 			}
 
diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index 82d3f8058..e2fe4fe8f 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -1760,7 +1760,7 @@ static struct i915_request *garbage(struct intel_context *ce,
 	if (err)
 		return ERR_PTR(err);
 
-	prandom_bytes_state(prng,
+	predictable_rng_prandom_bytes_state(prng,
 			    ce->lrc_reg_state,
 			    ce->engine->context_size -
 			    LRC_STATE_OFFSET);
diff --git a/drivers/gpu/drm/i915/gt/selftest_migrate.c b/drivers/gpu/drm/i915/gt/selftest_migrate.c
index 2b0c87999..9b6e8cf52 100644
--- a/drivers/gpu/drm/i915/gt/selftest_migrate.c
+++ b/drivers/gpu/drm/i915/gt/selftest_migrate.c
@@ -551,7 +551,7 @@ static int threaded_migrate(struct intel_migrate *migrate,
 
 		thread[i].migrate = migrate;
 		thread[i].prng =
-			I915_RND_STATE_INITIALIZER(prandom_u32_state(&prng));
+			I915_RND_STATE_INITIALIZER(predictable_rng_prandom_u32_state(&prng));
 
 		tsk = kthread_run(fn, &thread[i], "igt-%d", i);
 		if (IS_ERR(tsk)) {
diff --git a/drivers/gpu/drm/i915/gt/selftest_timeline.c b/drivers/gpu/drm/i915/gt/selftest_timeline.c
index 522d01905..5c8b662c6 100644
--- a/drivers/gpu/drm/i915/gt/selftest_timeline.c
+++ b/drivers/gpu/drm/i915/gt/selftest_timeline.c
@@ -308,7 +308,7 @@ static int bench_sync(void *arg)
 		u32 x;
 
 		/* Make sure the compiler doesn't optimise away the prng call */
-		WRITE_ONCE(x, prandom_u32_state(&prng));
+		WRITE_ONCE(x, predictable_rng_prandom_u32_state(&prng));
 
 		count++;
 	} while (!time_after(jiffies, end_time));
@@ -393,7 +393,7 @@ static int bench_sync(void *arg)
 	end_time = jiffies + HZ/10;
 	do {
 		u32 id = random_engine(&prng);
-		u32 seqno = prandom_u32_state(&prng);
+		u32 seqno = predictable_rng_prandom_u32_state(&prng);
 
 		if (!__intel_timeline_sync_is_later(&tl, id, seqno))
 			__intel_timeline_sync_set(&tl, id, seqno);
diff --git a/drivers/gpu/drm/i915/selftests/i915_random.c b/drivers/gpu/drm/i915/selftests/i915_random.c
index abdfadcf6..e6b56688e 100644
--- a/drivers/gpu/drm/i915/selftests/i915_random.c
+++ b/drivers/gpu/drm/i915/selftests/i915_random.c
@@ -35,9 +35,9 @@ u64 i915_prandom_u64_state(struct rnd_state *rnd)
 {
 	u64 x;
 
-	x = prandom_u32_state(rnd);
+	x = predictable_rng_prandom_u32_state(rnd);
 	x <<= 32;
-	x |= prandom_u32_state(rnd);
+	x |= predictable_rng_prandom_u32_state(rnd);
 
 	return x;
 }
diff --git a/drivers/gpu/drm/i915/selftests/i915_random.h b/drivers/gpu/drm/i915/selftests/i915_random.h
index 05364eca2..7290a2eaf 100644
--- a/drivers/gpu/drm/i915/selftests/i915_random.h
+++ b/drivers/gpu/drm/i915/selftests/i915_random.h
@@ -40,13 +40,13 @@
 	struct rnd_state name__ = I915_RND_STATE_INITIALIZER(i915_selftest.random_seed)
 
 #define I915_RND_SUBSTATE(name__, parent__) \
-	struct rnd_state name__ = I915_RND_STATE_INITIALIZER(prandom_u32_state(&(parent__)))
+	struct rnd_state name__ = I915_RND_STATE_INITIALIZER(predictable_rng_prandom_u32_state(&(parent__)))
 
 u64 i915_prandom_u64_state(struct rnd_state *rnd);
 
 static inline u32 i915_prandom_u32_max_state(u32 ep_ro, struct rnd_state *state)
 {
-	return upper_32_bits(mul_u32_u32(prandom_u32_state(state), ep_ro));
+	return upper_32_bits(mul_u32_u32(predictable_rng_prandom_u32_state(state), ep_ro));
 }
 
 unsigned int *i915_random_order(unsigned int count,
diff --git a/drivers/gpu/drm/i915/selftests/i915_syncmap.c b/drivers/gpu/drm/i915/selftests/i915_syncmap.c
index 47f4ae18a..c02e55133 100644
--- a/drivers/gpu/drm/i915/selftests/i915_syncmap.c
+++ b/drivers/gpu/drm/i915/selftests/i915_syncmap.c
@@ -223,7 +223,7 @@ static int igt_syncmap_one(void *arg)
 
 		for (loop = 0; loop <= max; loop++) {
 			err = check_one(&sync, context,
-					prandom_u32_state(&prng));
+					predictable_rng_prandom_u32_state(&prng));
 			if (err)
 				goto out;
 		}
@@ -575,7 +575,7 @@ static int igt_syncmap_random(void *arg)
 		u32 last_seqno = seqno;
 		bool expect;
 
-		seqno = prandom_u32_state(&prng);
+		seqno = predictable_rng_prandom_u32_state(&prng);
 		expect = seqno_later(last_seqno, seqno);
 
 		for (i = 0; i < count; i++) {
diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
index 3b18e5905..2c65795e8 100644
--- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c
+++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
@@ -758,7 +758,7 @@ static int igt_gpu_write(struct i915_gem_context *ctx,
 	i = 0;
 	engines = i915_gem_context_lock_engines(ctx);
 	do {
-		u32 rng = prandom_u32_state(&prng);
+		u32 rng = predictable_rng_prandom_u32_state(&prng);
 		u32 dword = offset_in_page(rng) / 4;
 
 		ce = engines->engines[order[i] % engines->num_engines];
@@ -929,7 +929,7 @@ static int igt_lmem_create_cleared_cpu(void *arg)
 			goto out_unpin;
 		}
 
-		val = prandom_u32_state(&prng);
+		val = predictable_rng_prandom_u32_state(&prng);
 
 		memset32(vaddr, val, obj->base.size / sizeof(u32));
 
@@ -972,7 +972,7 @@ static int igt_lmem_write_gpu(void *arg)
 		goto out_file;
 	}
 
-	sz = round_up(prandom_u32_state(&prng) % SZ_32M, PAGE_SIZE);
+	sz = round_up(predictable_rng_prandom_u32_state(&prng) % SZ_32M, PAGE_SIZE);
 
 	obj = i915_gem_object_create_lmem(i915, sz, 0);
 	if (IS_ERR(obj)) {
@@ -1046,7 +1046,7 @@ static int igt_lmem_write_cpu(void *arg)
 
 	pr_info("%s: using %s\n", __func__, engine->name);
 
-	sz = round_up(prandom_u32_state(&prng) % SZ_32M, PAGE_SIZE);
+	sz = round_up(predictable_rng_prandom_u32_state(&prng) % SZ_32M, PAGE_SIZE);
 	sz = max_t(u32, 2 * PAGE_SIZE, sz);
 
 	obj = i915_gem_object_create_lmem(i915, sz, I915_BO_ALLOC_CONTIGUOUS);
@@ -1115,7 +1115,7 @@ static int igt_lmem_write_cpu(void *arg)
 		offset = igt_random_offset(&prng, 0, obj->base.size,
 					   size, align);
 
-		val = prandom_u32_state(&prng);
+		val = predictable_rng_prandom_u32_state(&prng);
 		memset32(vaddr + offset / sizeof(u32), val ^ 0xdeadbeaf,
 			 size / sizeof(u32));
 
diff --git a/drivers/gpu/drm/i915/selftests/scatterlist.c b/drivers/gpu/drm/i915/selftests/scatterlist.c
index d599186d5..ff86bf468 100644
--- a/drivers/gpu/drm/i915/selftests/scatterlist.c
+++ b/drivers/gpu/drm/i915/selftests/scatterlist.c
@@ -187,7 +187,7 @@ static unsigned int random(unsigned long n,
 			   unsigned long count,
 			   struct rnd_state *rnd)
 {
-	return 1 + (prandom_u32_state(rnd) % 1024);
+	return 1 + (predictable_rng_prandom_u32_state(rnd) % 1024);
 }
 
 static unsigned int random_page_size_pages(unsigned long n,
@@ -201,7 +201,7 @@ static unsigned int random_page_size_pages(unsigned long n,
 		BIT(21) >> PAGE_SHIFT,
 	};
 
-	return page_count[(prandom_u32_state(rnd) % 3)];
+	return page_count[(predictable_rng_prandom_u32_state(rnd) % 3)];
 }
 
 static inline bool page_contiguous(struct page *first,
diff --git a/drivers/gpu/drm/lib/drm_random.c b/drivers/gpu/drm/lib/drm_random.c
index 31b5a3e21..512455039 100644
--- a/drivers/gpu/drm/lib/drm_random.c
+++ b/drivers/gpu/drm/lib/drm_random.c
@@ -9,7 +9,7 @@
 
 u32 drm_prandom_u32_max_state(u32 ep_ro, struct rnd_state *state)
 {
-	return upper_32_bits((u64)prandom_u32_state(state) * ep_ro);
+	return upper_32_bits((u64)predictable_rng_prandom_u32_state(state) * ep_ro);
 }
 EXPORT_SYMBOL(drm_prandom_u32_max_state);
 
diff --git a/drivers/mtd/tests/oobtest.c b/drivers/mtd/tests/oobtest.c
index 13fed3989..fc248fbdb 100644
--- a/drivers/mtd/tests/oobtest.c
+++ b/drivers/mtd/tests/oobtest.c
@@ -60,7 +60,7 @@ static int write_eraseblock(int ebnum)
 	int err = 0;
 	loff_t addr = (loff_t)ebnum * mtd->erasesize;
 
-	prandom_bytes_state(&rnd_state, writebuf, use_len_max * pgcnt);
+	predictable_rng_prandom_bytes_state(&rnd_state, writebuf, use_len_max * pgcnt);
 	for (i = 0; i < pgcnt; ++i, addr += mtd->writesize) {
 		ops.mode      = MTD_OPS_AUTO_OOB;
 		ops.len       = 0;
@@ -170,7 +170,7 @@ static int verify_eraseblock(int ebnum)
 	loff_t addr = (loff_t)ebnum * mtd->erasesize;
 	size_t bitflips;
 
-	prandom_bytes_state(&rnd_state, writebuf, use_len_max * pgcnt);
+	predictable_rng_prandom_bytes_state(&rnd_state, writebuf, use_len_max * pgcnt);
 	for (i = 0; i < pgcnt; ++i, addr += mtd->writesize) {
 		ops.mode      = MTD_OPS_AUTO_OOB;
 		ops.len       = 0;
@@ -268,7 +268,7 @@ static int verify_eraseblock_in_one_go(int ebnum)
 	size_t bitflips;
 	int i;
 
-	prandom_bytes_state(&rnd_state, writebuf, len);
+	predictable_rng_prandom_bytes_state(&rnd_state, writebuf, len);
 	ops.mode      = MTD_OPS_AUTO_OOB;
 	ops.len       = 0;
 	ops.retlen    = 0;
@@ -642,7 +642,7 @@ static int __init mtd_oobtest_init(void)
 		if (bbt[i] || bbt[i + 1])
 			continue;
 		addr = (loff_t)(i + 1) * mtd->erasesize - mtd->writesize;
-		prandom_bytes_state(&rnd_state, writebuf, sz * cnt);
+		predictable_rng_prandom_bytes_state(&rnd_state, writebuf, sz * cnt);
 		for (pg = 0; pg < cnt; ++pg) {
 			ops.mode      = MTD_OPS_AUTO_OOB;
 			ops.len       = 0;
@@ -673,7 +673,7 @@ static int __init mtd_oobtest_init(void)
 	for (i = 0; i < ebcnt - 1; ++i) {
 		if (bbt[i] || bbt[i + 1])
 			continue;
-		prandom_bytes_state(&rnd_state, writebuf, mtd->oobavail * 2);
+		predictable_rng_prandom_bytes_state(&rnd_state, writebuf, mtd->oobavail * 2);
 		addr = (loff_t)(i + 1) * mtd->erasesize - mtd->writesize;
 		ops.mode      = MTD_OPS_AUTO_OOB;
 		ops.len       = 0;
diff --git a/drivers/mtd/tests/pagetest.c b/drivers/mtd/tests/pagetest.c
index 8eb40b6e6..088146869 100644
--- a/drivers/mtd/tests/pagetest.c
+++ b/drivers/mtd/tests/pagetest.c
@@ -42,7 +42,7 @@ static int write_eraseblock(int ebnum)
 {
 	loff_t addr = (loff_t)ebnum * mtd->erasesize;
 
-	prandom_bytes_state(&rnd_state, writebuf, mtd->erasesize);
+	predictable_rng_prandom_bytes_state(&rnd_state, writebuf, mtd->erasesize);
 	cond_resched();
 	return mtdtest_write(mtd, addr, mtd->erasesize, writebuf);
 }
@@ -62,7 +62,7 @@ static int verify_eraseblock(int ebnum)
 	for (i = 0; i < ebcnt && bbt[ebcnt - i - 1]; ++i)
 		addrn -= mtd->erasesize;
 
-	prandom_bytes_state(&rnd_state, writebuf, mtd->erasesize);
+	predictable_rng_prandom_bytes_state(&rnd_state, writebuf, mtd->erasesize);
 	for (j = 0; j < pgcnt - 1; ++j, addr += pgsize) {
 		/* Do a read to set the internal dataRAMs to different data */
 		err = mtdtest_read(mtd, addr0, bufsize, twopages);
@@ -97,7 +97,7 @@ static int verify_eraseblock(int ebnum)
 		if (err)
 			return err;
 		memcpy(boundary, writebuf + mtd->erasesize - pgsize, pgsize);
-		prandom_bytes_state(&rnd_state, boundary + pgsize, pgsize);
+		predictable_rng_prandom_bytes_state(&rnd_state, boundary + pgsize, pgsize);
 		if (memcmp(twopages, boundary, bufsize)) {
 			pr_err("error: verify failed at %#llx\n",
 			       (long long)addr);
@@ -210,7 +210,7 @@ static int erasecrosstest(void)
 		return err;
 
 	pr_info("writing 1st page of block %d\n", ebnum);
-	prandom_bytes_state(&rnd_state, writebuf, pgsize);
+	predictable_rng_prandom_bytes_state(&rnd_state, writebuf, pgsize);
 	strcpy(writebuf, "There is no data like this!");
 	err = mtdtest_write(mtd, addr0, pgsize, writebuf);
 	if (err)
@@ -235,7 +235,7 @@ static int erasecrosstest(void)
 		return err;
 
 	pr_info("writing 1st page of block %d\n", ebnum);
-	prandom_bytes_state(&rnd_state, writebuf, pgsize);
+	predictable_rng_prandom_bytes_state(&rnd_state, writebuf, pgsize);
 	strcpy(writebuf, "There is no data like this!");
 	err = mtdtest_write(mtd, addr0, pgsize, writebuf);
 	if (err)
@@ -284,7 +284,7 @@ static int erasetest(void)
 		return err;
 
 	pr_info("writing 1st page of block %d\n", ebnum);
-	prandom_bytes_state(&rnd_state, writebuf, pgsize);
+	predictable_rng_prandom_bytes_state(&rnd_state, writebuf, pgsize);
 	err = mtdtest_write(mtd, addr0, pgsize, writebuf);
 	if (err)
 		return err;
diff --git a/drivers/mtd/tests/subpagetest.c b/drivers/mtd/tests/subpagetest.c
index 05250a080..3d312a51f 100644
--- a/drivers/mtd/tests/subpagetest.c
+++ b/drivers/mtd/tests/subpagetest.c
@@ -46,7 +46,7 @@ static int write_eraseblock(int ebnum)
 	int err = 0;
 	loff_t addr = (loff_t)ebnum * mtd->erasesize;
 
-	prandom_bytes_state(&rnd_state, writebuf, subpgsize);
+	predictable_rng_prandom_bytes_state(&rnd_state, writebuf, subpgsize);
 	err = mtd_write(mtd, addr, subpgsize, &written, writebuf);
 	if (unlikely(err || written != subpgsize)) {
 		pr_err("error: write failed at %#llx\n",
@@ -60,7 +60,7 @@ static int write_eraseblock(int ebnum)
 
 	addr += subpgsize;
 
-	prandom_bytes_state(&rnd_state, writebuf, subpgsize);
+	predictable_rng_prandom_bytes_state(&rnd_state, writebuf, subpgsize);
 	err = mtd_write(mtd, addr, subpgsize, &written, writebuf);
 	if (unlikely(err || written != subpgsize)) {
 		pr_err("error: write failed at %#llx\n",
@@ -84,7 +84,7 @@ static int write_eraseblock2(int ebnum)
 	for (k = 1; k < 33; ++k) {
 		if (addr + (subpgsize * k) > (loff_t)(ebnum + 1) * mtd->erasesize)
 			break;
-		prandom_bytes_state(&rnd_state, writebuf, subpgsize * k);
+		predictable_rng_prandom_bytes_state(&rnd_state, writebuf, subpgsize * k);
 		err = mtd_write(mtd, addr, subpgsize * k, &written, writebuf);
 		if (unlikely(err || written != subpgsize * k)) {
 			pr_err("error: write failed at %#llx\n",
@@ -120,7 +120,7 @@ static int verify_eraseblock(int ebnum)
 	int err = 0;
 	loff_t addr = (loff_t)ebnum * mtd->erasesize;
 
-	prandom_bytes_state(&rnd_state, writebuf, subpgsize);
+	predictable_rng_prandom_bytes_state(&rnd_state, writebuf, subpgsize);
 	clear_data(readbuf, subpgsize);
 	err = mtd_read(mtd, addr, subpgsize, &read, readbuf);
 	if (unlikely(err || read != subpgsize)) {
@@ -147,7 +147,7 @@ static int verify_eraseblock(int ebnum)
 
 	addr += subpgsize;
 
-	prandom_bytes_state(&rnd_state, writebuf, subpgsize);
+	predictable_rng_prandom_bytes_state(&rnd_state, writebuf, subpgsize);
 	clear_data(readbuf, subpgsize);
 	err = mtd_read(mtd, addr, subpgsize, &read, readbuf);
 	if (unlikely(err || read != subpgsize)) {
@@ -184,7 +184,7 @@ static int verify_eraseblock2(int ebnum)
 	for (k = 1; k < 33; ++k) {
 		if (addr + (subpgsize * k) > (loff_t)(ebnum + 1) * mtd->erasesize)
 			break;
-		prandom_bytes_state(&rnd_state, writebuf, subpgsize * k);
+		predictable_rng_prandom_bytes_state(&rnd_state, writebuf, subpgsize * k);
 		clear_data(readbuf, subpgsize * k);
 		err = mtd_read(mtd, addr, subpgsize * k, &read, readbuf);
 		if (unlikely(err || read != subpgsize * k)) {
diff --git a/drivers/scsi/fcoe/fcoe_ctlr.c b/drivers/scsi/fcoe/fcoe_ctlr.c
index ddc048069..e1a0e2c27 100644
--- a/drivers/scsi/fcoe/fcoe_ctlr.c
+++ b/drivers/scsi/fcoe/fcoe_ctlr.c
@@ -2224,7 +2224,7 @@ static void fcoe_ctlr_vn_restart(struct fcoe_ctlr *fip)
 	 */
 	port_id = fip->port_id;
 	if (fip->probe_tries)
-		port_id = prandom_u32_state(&fip->rnd_state) & 0xffff;
+		port_id = predictable_rng_prandom_u32_state(&fip->rnd_state) & 0xffff;
 	else if (!port_id)
 		port_id = fip->lp->wwpn & 0xffff;
 	if (!port_id || port_id == 0xffff)
diff --git a/include/linux/prandom.h b/include/linux/prandom.h
index e0a0759dd..d525ab02e 100644
--- a/include/linux/prandom.h
+++ b/include/linux/prandom.h
@@ -16,8 +16,8 @@ struct rnd_state {
 	__u32 s1, s2, s3, s4;
 };
 
-u32 prandom_u32_state(struct rnd_state *state);
-void prandom_bytes_state(struct rnd_state *state, void *buf, size_t nbytes);
+u32 predictable_rng_prandom_u32_state(struct rnd_state *state);
+void predictable_rng_prandom_bytes_state(struct rnd_state *state, void *buf, size_t nbytes);
 void prandom_seed_full_state(struct rnd_state __percpu *pcpu_state);
 
 #define prandom_init_once(pcpu_state)			\
@@ -52,7 +52,7 @@ static inline u32 __seed(u32 x, u32 m)
 }
 
 /**
- * prandom_seed_state - set seed for prandom_u32_state().
+ * prandom_seed_state - set seed for predictable_rng_prandom_u32_state().
  * @state: pointer to state structure to receive the seed.
  * @seed: arbitrary 64-bit value to use as a seed.
  */
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 25a54e045..4cb5421d9 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -2599,7 +2599,7 @@ BPF_CALL_0(bpf_user_rnd_u32)
 	u32 res;
 
 	state = &get_cpu_var(bpf_user_rnd_state);
-	res = prandom_u32_state(state);
+	res = predictable_rng_prandom_u32_state(state);
 	put_cpu_var(bpf_user_rnd_state);
 
 	return res;
diff --git a/lib/interval_tree_test.c b/lib/interval_tree_test.c
index f37f4d44f..1a1a89a39 100644
--- a/lib/interval_tree_test.c
+++ b/lib/interval_tree_test.c
@@ -43,8 +43,8 @@ static void init(void)
 	int i;
 
 	for (i = 0; i < nnodes; i++) {
-		u32 b = (prandom_u32_state(&rnd) >> 4) % max_endpoint;
-		u32 a = (prandom_u32_state(&rnd) >> 4) % b;
+		u32 b = (predictable_rng_prandom_u32_state(&rnd) >> 4) % max_endpoint;
+		u32 a = (predictable_rng_prandom_u32_state(&rnd) >> 4) % b;
 
 		nodes[i].start = a;
 		nodes[i].last = b;
@@ -56,7 +56,7 @@ static void init(void)
 	 * which is pointless.
 	 */
 	for (i = 0; i < nsearches; i++)
-		queries[i] = (prandom_u32_state(&rnd) >> 4) % max_endpoint;
+		queries[i] = (predictable_rng_prandom_u32_state(&rnd) >> 4) % max_endpoint;
 }
 
 static int interval_tree_test_init(void)
diff --git a/lib/random32.c b/lib/random32.c
index 32060b852..3b34ea934 100644
--- a/lib/random32.c
+++ b/lib/random32.c
@@ -43,13 +43,13 @@
 #include <asm/unaligned.h>
 
 /**
- *	prandom_u32_state - seeded pseudo-random number generator.
+ *	predictable_rng_prandom_u32_state - seeded pseudo-random number generator.
  *	@state: pointer to state structure holding seeded state.
  *
  *	This is used for pseudo-randomness with no outside seeding.
  *	For more random results, use get_random_u32().
  */
-u32 prandom_u32_state(struct rnd_state *state)
+u32 predictable_rng_prandom_u32_state(struct rnd_state *state)
 {
 #define TAUSWORTHE(s, a, b, c, d) ((s & c) << d) ^ (((s << a) ^ s) >> b)
 	state->s1 = TAUSWORTHE(state->s1,  6U, 13U, 4294967294U, 18U);
@@ -59,10 +59,10 @@ u32 prandom_u32_state(struct rnd_state *state)
 
 	return (state->s1 ^ state->s2 ^ state->s3 ^ state->s4);
 }
-EXPORT_SYMBOL(prandom_u32_state);
+EXPORT_SYMBOL(predictable_rng_prandom_u32_state);
 
 /**
- *	prandom_bytes_state - get the requested number of pseudo-random bytes
+ *	predictable_rng_prandom_bytes_state - get the requested number of pseudo-random bytes
  *
  *	@state: pointer to state structure holding seeded state.
  *	@buf: where to copy the pseudo-random bytes to
@@ -71,18 +71,18 @@ EXPORT_SYMBOL(prandom_u32_state);
  *	This is used for pseudo-randomness with no outside seeding.
  *	For more random results, use get_random_bytes().
  */
-void prandom_bytes_state(struct rnd_state *state, void *buf, size_t bytes)
+void predictable_rng_prandom_bytes_state(struct rnd_state *state, void *buf, size_t bytes)
 {
 	u8 *ptr = buf;
 
 	while (bytes >= sizeof(u32)) {
-		put_unaligned(prandom_u32_state(state), (u32 *) ptr);
+		put_unaligned(predictable_rng_prandom_u32_state(state), (u32 *) ptr);
 		ptr += sizeof(u32);
 		bytes -= sizeof(u32);
 	}
 
 	if (bytes > 0) {
-		u32 rem = prandom_u32_state(state);
+		u32 rem = predictable_rng_prandom_u32_state(state);
 		do {
 			*ptr++ = (u8) rem;
 			bytes--;
@@ -90,21 +90,21 @@ void prandom_bytes_state(struct rnd_state *state, void *buf, size_t bytes)
 		} while (bytes > 0);
 	}
 }
-EXPORT_SYMBOL(prandom_bytes_state);
+EXPORT_SYMBOL(predictable_rng_prandom_bytes_state);
 
 static void prandom_warmup(struct rnd_state *state)
 {
 	/* Calling RNG ten times to satisfy recurrence condition */
-	prandom_u32_state(state);
-	prandom_u32_state(state);
-	prandom_u32_state(state);
-	prandom_u32_state(state);
-	prandom_u32_state(state);
-	prandom_u32_state(state);
-	prandom_u32_state(state);
-	prandom_u32_state(state);
-	prandom_u32_state(state);
-	prandom_u32_state(state);
+	predictable_rng_prandom_u32_state(state);
+	predictable_rng_prandom_u32_state(state);
+	predictable_rng_prandom_u32_state(state);
+	predictable_rng_prandom_u32_state(state);
+	predictable_rng_prandom_u32_state(state);
+	predictable_rng_prandom_u32_state(state);
+	predictable_rng_prandom_u32_state(state);
+	predictable_rng_prandom_u32_state(state);
+	predictable_rng_prandom_u32_state(state);
+	predictable_rng_prandom_u32_state(state);
 }
 
 void prandom_seed_full_state(struct rnd_state __percpu *pcpu_state)
@@ -265,7 +265,7 @@ static int __init prandom_state_selftest(void)
 		prandom_state_selftest_seed(&state, test1[i].seed);
 		prandom_warmup(&state);
 
-		if (test1[i].result != prandom_u32_state(&state))
+		if (test1[i].result != predictable_rng_prandom_u32_state(&state))
 			error = true;
 	}
 
@@ -281,9 +281,9 @@ static int __init prandom_state_selftest(void)
 		prandom_warmup(&state);
 
 		for (j = 0; j < test2[i].iteration - 1; j++)
-			prandom_u32_state(&state);
+			predictable_rng_prandom_u32_state(&state);
 
-		if (test2[i].result != prandom_u32_state(&state))
+		if (test2[i].result != predictable_rng_prandom_u32_state(&state))
 			errors++;
 
 		runs++;
diff --git a/lib/rbtree_test.c b/lib/rbtree_test.c
index 41ae3c757..62dd773c8 100644
--- a/lib/rbtree_test.c
+++ b/lib/rbtree_test.c
@@ -150,8 +150,8 @@ static void init(void)
 {
 	int i;
 	for (i = 0; i < nnodes; i++) {
-		nodes[i].key = prandom_u32_state(&rnd);
-		nodes[i].val = prandom_u32_state(&rnd);
+		nodes[i].key = predictable_rng_prandom_u32_state(&rnd);
+		nodes[i].val = predictable_rng_prandom_u32_state(&rnd);
 	}
 }
 
diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index 582070416..b17e751cc 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -140,7 +140,7 @@ static int bpf_fill_maxinsns3(struct bpf_test *self)
 	prandom_seed_state(&rnd, 3141592653589793238ULL);
 
 	for (i = 0; i < len - 1; i++) {
-		__u32 k = prandom_u32_state(&rnd);
+		__u32 k = predictable_rng_prandom_u32_state(&rnd);
 
 		insn[i] = __BPF_STMT(BPF_ALU | BPF_ADD | BPF_K, k);
 	}
diff --git a/lib/test_parman.c b/lib/test_parman.c
index 35e322436..f89b8192c 100644
--- a/lib/test_parman.c
+++ b/lib/test_parman.c
@@ -136,7 +136,7 @@ static void test_parman_rnd_init(struct test_parman *test_parman)
 
 static u32 test_parman_rnd_get(struct test_parman *test_parman)
 {
-	return prandom_u32_state(&test_parman->rnd);
+	return predictable_rng_prandom_u32_state(&test_parman->rnd);
 }
 
 static unsigned long test_parman_priority_gen(struct test_parman *test_parman)
diff --git a/lib/test_scanf.c b/lib/test_scanf.c
index b620cf7de..a2de3dee2 100644
--- a/lib/test_scanf.c
+++ b/lib/test_scanf.c
@@ -269,16 +269,16 @@ static void __init numbers_simple(void)
  */
 static u32 __init next_test_random(u32 max_bits)
 {
-	u32 n_bits = hweight32(prandom_u32_state(&rnd_state)) % (max_bits + 1);
+	u32 n_bits = hweight32(predictable_rng_prandom_u32_state(&rnd_state)) % (max_bits + 1);
 
-	return prandom_u32_state(&rnd_state) & GENMASK(n_bits, 0);
+	return predictable_rng_prandom_u32_state(&rnd_state) & GENMASK(n_bits, 0);
 }
 
 static unsigned long long __init next_test_random_ull(void)
 {
-	u32 rand1 = prandom_u32_state(&rnd_state);
+	u32 rand1 = predictable_rng_prandom_u32_state(&rnd_state);
 	u32 n_bits = (hweight32(rand1) * 3) % 64;
-	u64 val = (u64)prandom_u32_state(&rnd_state) * rand1;
+	u64 val = (u64)predictable_rng_prandom_u32_state(&rnd_state) * rand1;
 
 	return val & GENMASK_ULL(n_bits, 0);
 }
diff --git a/mm/slab.c b/mm/slab.c
index 59c8e28f7..ff71c5757 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2447,7 +2447,7 @@ static bool shuffle_freelist(struct kmem_cache *cachep, struct slab *slab)
 
 		/* Fisher-Yates shuffle */
 		for (i = count - 1; i > 0; i--) {
-			rand = prandom_u32_state(&state.rnd_state);
+			rand = predictable_rng_prandom_u32_state(&state.rnd_state);
 			rand %= (i + 1);
 			swap_free_obj(slab, i, rand);
 		}
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 0042fb273..deb764785 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1141,7 +1141,7 @@ static void freelist_randomize(struct rnd_state *state, unsigned int *list,
 
 	/* Fisher-Yates shuffle */
 	for (i = count - 1; i > 0; i--) {
-		rand = prandom_u32_state(state);
+		rand = predictable_rng_prandom_u32_state(state);
 		rand %= (i + 1);
 		swap(list[i], list[rand]);
 	}
-- 
2.38.0
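
For context on why these generators are described as weak: prandom_u32_state
is a small Tausworthe (LFSR113) generator whose entire output stream is
determined by its four-word seed. A minimal user-space sketch (for
illustration only, not part of the patch; Tausworthe constants as in
lib/random32.c, with only the s1 line visible in the diff above) shows that
two identically seeded states produce identical streams:

#include <stdint.h>
#include <stdio.h>

struct rnd_state { uint32_t s1, s2, s3, s4; };

#define TAUSWORTHE(s, a, b, c, d) (((s & c) << d) ^ (((s << a) ^ s) >> b))

static uint32_t demo_prandom_u32_state(struct rnd_state *state)
{
	state->s1 = TAUSWORTHE(state->s1,  6U, 13U, 4294967294U, 18U);
	state->s2 = TAUSWORTHE(state->s2,  2U, 27U, 4294967288U,  2U);
	state->s3 = TAUSWORTHE(state->s3, 13U, 21U, 4294967280U,  7U);
	state->s4 = TAUSWORTHE(state->s4,  3U, 12U, 4294967168U, 13U);
	return state->s1 ^ state->s2 ^ state->s3 ^ state->s4;
}

int main(void)
{
	/* Seeds satisfy the recurrence minimums (s1 >= 2, s2 >= 8, ...). */
	struct rnd_state a = { 2, 8, 16, 128 };
	struct rnd_state b = { 2, 8, 16, 128 };

	for (int i = 0; i < 4; i++) {
		uint32_t va = demo_prandom_u32_state(&a);
		uint32_t vb = demo_prandom_u32_state(&b);

		printf("%u %u%s\n", va, vb, va == vb ? "  (match)" : "");
	}
	return 0;
}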



* [PATCH 2/5] Replace invocation of weak PRNG in kernel/bpf/core.c
From: david.keisarschm @ 2022-12-11 22:16 UTC
  To: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni
  Cc: David, aksecurity, ilay.bahat1, bpf, linux-kernel, netdev

From: David <david.keisarschm@mail.huji.ac.il>

We changed the invocation of prandom_u32_state to get_random_u32. We
deleted the maintained state, which was a per-CPU variable, since
get_random_u32 maintains its own per-CPU state. We also deleted the state
initializer, since it is no longer needed.

Signed-off-by: David <david.keisarschm@mail.huji.ac.il>
---
 include/linux/bpf.h   |  1 -
 kernel/bpf/core.c     | 13 +------------
 kernel/bpf/verifier.c |  2 --
 net/core/filter.c     |  1 -
 4 files changed, 1 insertion(+), 16 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index c1bd1bd10..0689520b9 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2572,7 +2572,6 @@ const struct bpf_func_proto *tracing_prog_func_proto(
   enum bpf_func_id func_id, const struct bpf_prog *prog);
 
 /* Shared helpers among cBPF and eBPF. */
-void bpf_user_rnd_init_once(void);
 u64 bpf_user_rnd_u32(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);
 u64 bpf_get_raw_cpu_id(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);
 
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 4cb5421d9..a6f06894e 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -2579,13 +2579,6 @@ void bpf_prog_free(struct bpf_prog *fp)
 }
 EXPORT_SYMBOL_GPL(bpf_prog_free);
 
-/* RNG for unpriviledged user space with separated state from prandom_u32(). */
-static DEFINE_PER_CPU(struct rnd_state, bpf_user_rnd_state);
-
-void bpf_user_rnd_init_once(void)
-{
-	prandom_init_once(&bpf_user_rnd_state);
-}
 
 BPF_CALL_0(bpf_user_rnd_u32)
 {
@@ -2595,12 +2588,8 @@ BPF_CALL_0(bpf_user_rnd_u32)
 	 * transformations. Register assignments from both sides are
 	 * different, f.e. classic always sets fn(ctx, A, X) here.
 	 */
-	struct rnd_state *state;
 	u32 res;
-
-	state = &get_cpu_var(bpf_user_rnd_state);
-	res = predictable_rng_prandom_u32_state(state);
-	put_cpu_var(bpf_user_rnd_state);
+	res = get_random_u32();
 
 	return res;
 }
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 264b3dc71..9f22fb3fa 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -14049,8 +14049,6 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 
 		if (insn->imm == BPF_FUNC_get_route_realm)
 			prog->dst_needed = 1;
-		if (insn->imm == BPF_FUNC_get_prandom_u32)
-			bpf_user_rnd_init_once();
 		if (insn->imm == BPF_FUNC_override_return)
 			prog->kprobe_override = 1;
 		if (insn->imm == BPF_FUNC_tail_call) {
diff --git a/net/core/filter.c b/net/core/filter.c
index bb0136e7a..7a595ac00 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -443,7 +443,6 @@ static bool convert_bpf_extensions(struct sock_filter *fp,
 			break;
 		case SKF_AD_OFF + SKF_AD_RANDOM:
 			*insn = BPF_EMIT_CALL(bpf_user_rnd_u32);
-			bpf_user_rnd_init_once();
 			break;
 		}
 		break;
-- 
2.38.0
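
To make the change concrete, a hedged sketch of the before/after pattern
(the demo_* names are hypothetical, for illustration only):

#include <linux/percpu.h>
#include <linux/prandom.h>
#include <linux/random.h>

static DEFINE_PER_CPU(struct rnd_state, demo_rnd_state);

/* Before: caller-managed per-CPU prandom state, fetched with preemption
 * disabled and seeded once via prandom_init_once(). */
static u32 demo_rnd_u32_old(void)
{
	struct rnd_state *state = &get_cpu_var(demo_rnd_state);
	u32 res = prandom_u32_state(state);

	put_cpu_var(demo_rnd_state);
	return res;
}

/* After: get_random_u32() keeps its own per-CPU batches internally, so no
 * caller-side state or init-once hook is needed. */
static u32 demo_rnd_u32_new(void)
{
	return get_random_u32();
}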



* [PATCH 3/5] Replace invocation of weak PRNG in mm/slab.c
From: david.keisarschm @ 2022-12-11 22:16 UTC
  To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo
  Cc: David, aksecurity, ilay.bahat1, linux-mm, linux-kernel

From: David <david.keisarschm@mail.huji.ac.il>

We changed the invocation of prandom_u32_state to get_random_u32. We also
changed freelist_init_state from a union to a struct, since its rnd_state
member is no longer needed: get_random_u32 maintains its own state.

Signed-off-by: David <david.keisarschm@mail.huji.ac.il>
---
 mm/slab.c | 20 ++++++++------------
 1 file changed, 8 insertions(+), 12 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index ff71c5757..1476104f4 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2360,20 +2360,17 @@ static void cache_init_objs_debug(struct kmem_cache *cachep, struct slab *slab)
 
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
 /* Hold information during a freelist initialization */
-union freelist_init_state {
-	struct {
-		unsigned int pos;
-		unsigned int *list;
-		unsigned int count;
-	};
-	struct rnd_state rnd_state;
+struct freelist_init_state {
+	unsigned int pos;
+	unsigned int *list;
+	unsigned int count;
 };
 
 /*
  * Initialize the state based on the randomization method available.
  * return true if the pre-computed list is available, false otherwise.
  */
-static bool freelist_state_initialize(union freelist_init_state *state,
+static bool freelist_state_initialize(struct freelist_init_state *state,
 				struct kmem_cache *cachep,
 				unsigned int count)
 {
@@ -2385,7 +2382,6 @@ static bool freelist_state_initialize(union freelist_init_state *state,
 
 	/* Use a random state if the pre-computed list is not available */
 	if (!cachep->random_seq) {
-		prandom_seed_state(&state->rnd_state, rand);
 		ret = false;
 	} else {
 		state->list = cachep->random_seq;
@@ -2397,7 +2393,7 @@ static bool freelist_state_initialize(union freelist_init_state *state,
 }
 
 /* Get the next entry on the list and randomize it using a random shift */
-static freelist_idx_t next_random_slot(union freelist_init_state *state)
+static freelist_idx_t next_random_slot(struct freelist_init_state *state)
 {
 	if (state->pos >= state->count)
 		state->pos = 0;
@@ -2418,7 +2414,7 @@ static void swap_free_obj(struct slab *slab, unsigned int a, unsigned int b)
 static bool shuffle_freelist(struct kmem_cache *cachep, struct slab *slab)
 {
 	unsigned int objfreelist = 0, i, rand, count = cachep->num;
-	union freelist_init_state state;
+	struct freelist_init_state state;
 	bool precomputed;
 
 	if (count < 2)
@@ -2447,7 +2443,7 @@ static bool shuffle_freelist(struct kmem_cache *cachep, struct slab *slab)
 
 		/* Fisher-Yates shuffle */
 		for (i = count - 1; i > 0; i--) {
-			rand = predictable_rng_prandom_u32_state(&state.rnd_state);
+			rand = get_random_u32();
 			rand %= (i + 1);
 			swap_free_obj(slab, i, rand);
 		}
-- 
2.38.0
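
The layout change is easier to see in isolation. A small user-space sketch
(the _old/_new type names are invented for illustration) of why a union was
used before and why a plain struct now suffices:

#include <stdio.h>

struct rnd_state_demo { unsigned int s1, s2, s3, s4; };

/* Before: the two layouts shared storage, so seeding and using rnd_state
 * would clobber pos/list/count; the two modes were mutually exclusive. */
union freelist_init_state_old {
	struct {
		unsigned int pos;
		unsigned int *list;
		unsigned int count;
	};
	struct rnd_state_demo rnd_state;
};

/* After: only the precomputed-list mode keeps state here, so a struct is
 * enough and nothing can be clobbered. */
struct freelist_init_state_new {
	unsigned int pos;
	unsigned int *list;
	unsigned int count;
};

int main(void)
{
	printf("union: %zu bytes, struct: %zu bytes\n",
	       sizeof(union freelist_init_state_old),
	       sizeof(struct freelist_init_state_new));
	return 0;
}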



* [PATCH 4/5] Replace invocation of weak PRNG inside mm/slab_common.c
From: david.keisarschm @ 2022-12-11 22:16 UTC
  To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo
  Cc: David, aksecurity, ilay.bahat1, linux-mm, linux-kernel

From: David <david.keisarschm@mail.huji.ac.il>

We changed the invocation of prandom_u32_state to get_random_u32. We also
omitted the initial seeding of the state, since get_random_u32 maintains
its own, so there is no need to keep storing prandom state here.

Signed-off-by: David <david.keisarschm@mail.huji.ac.il>
---
 mm/slab_common.c | 10 +++-------
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index deb764785..6ac68b9a6 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1130,7 +1130,7 @@ EXPORT_SYMBOL(kmalloc_large_node);
 
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
 /* Randomize a generic freelist */
-static void freelist_randomize(struct rnd_state *state, unsigned int *list,
+static void freelist_randomize(unsigned int *list,
 			       unsigned int count)
 {
 	unsigned int rand;
@@ -1141,7 +1141,7 @@ static void freelist_randomize(struct rnd_state *state, unsigned int *list,
 
 	/* Fisher-Yates shuffle */
 	for (i = count - 1; i > 0; i--) {
-		rand = predictable_rng_prandom_u32_state(state);
+		rand = get_random_u32();
 		rand %= (i + 1);
 		swap(list[i], list[rand]);
 	}
@@ -1151,7 +1151,6 @@ static void freelist_randomize(struct rnd_state *state, unsigned int *list,
 int cache_random_seq_create(struct kmem_cache *cachep, unsigned int count,
 				    gfp_t gfp)
 {
-	struct rnd_state state;
 
 	if (count < 2 || cachep->random_seq)
 		return 0;
@@ -1160,10 +1159,7 @@ int cache_random_seq_create(struct kmem_cache *cachep, unsigned int count,
 	if (!cachep->random_seq)
 		return -ENOMEM;
 
-	/* Get best entropy at this stage of boot */
-	prandom_seed_state(&state, get_random_long());
-
-	freelist_randomize(&state, cachep->random_seq, count);
+	freelist_randomize(cachep->random_seq, count);
 	return 0;
 }
 
-- 
2.38.0
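
For reference, the shuffle itself is unchanged. A minimal user-space sketch
of the Fisher-Yates loop used here, with rand() standing in for the
kernel's get_random_u32() (illustration only, not the kernel code):

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for the kernel's get_random_u32(). */
static unsigned int demo_random_u32(void)
{
	return (unsigned int)rand();
}

static void demo_freelist_randomize(unsigned int *list, unsigned int count)
{
	unsigned int i, j, tmp;

	for (i = 0; i < count; i++)
		list[i] = i;

	/* Fisher-Yates: swap slot i with a uniformly chosen slot in [0, i].
	 * The % (i + 1) step has a slight modulo bias, as in the original. */
	for (i = count - 1; i > 0; i--) {
		j = demo_random_u32() % (i + 1);
		tmp = list[i];
		list[i] = list[j];
		list[j] = tmp;
	}
}

int main(void)
{
	unsigned int list[8];

	srand(42); /* fixed seed so the demo run is repeatable */
	demo_freelist_randomize(list, 8);
	for (unsigned int i = 0; i < 8; i++)
		printf("%u ", list[i]);
	printf("\n");
	return 0;
}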



* [PATCH 5/5] Replace invocation of weak PRNG in arch/x86/mm/kaslr.c
From: david.keisarschm @ 2022-12-11 22:16 UTC
  To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, x86, H. Peter Anvin
  Cc: David, aksecurity, ilay.bahat1, linux-kernel

From: David <david.keisarschm@mail.huji.ac.il>

We changed the invocation of prandom_bytes_state, which is considered
weak, to get_random_bytes. We also omitted the call to the seeding
function, since prandom_bytes maintains its own state, so there is no need
for seeding here anymore.

Signed-off-by: David <david.keisarschm@mail.huji.ac.il>
---
 arch/x86/mm/kaslr.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
index 66c17b449..466111c99 100644
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -66,7 +66,6 @@ void __init kernel_randomize_memory(void)
 	size_t i;
 	unsigned long vaddr_start, vaddr;
 	unsigned long rand, memory_tb;
-	struct rnd_state rand_state;
 	unsigned long remain_entropy;
 	unsigned long vmemmap_size;
 
@@ -113,8 +112,6 @@ void __init kernel_randomize_memory(void)
 	for (i = 0; i < ARRAY_SIZE(kaslr_regions); i++)
 		remain_entropy -= get_padding(&kaslr_regions[i]);
 
-	prandom_seed_state(&rand_state, kaslr_get_random_long("Memory"));
-
 	for (i = 0; i < ARRAY_SIZE(kaslr_regions); i++) {
 		unsigned long entropy;
 
@@ -123,7 +120,7 @@ void __init kernel_randomize_memory(void)
 		 * available.
 		 */
 		entropy = remain_entropy / (ARRAY_SIZE(kaslr_regions) - i);
-		predictable_rng_prandom_bytes_state(&rand_state, &rand, sizeof(rand));
+		prandom_bytes(&rand, sizeof(rand));
 		entropy = (rand % (entropy + 1)) & PUD_MASK;
 		vaddr += entropy;
 		*kaslr_regions[i].base = vaddr;
-- 
2.38.0
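
A user-space sketch of the entropy-spreading loop above (assumes x86_64's
1 GiB PUD alignment; rand() stands in for the random-bytes source and all
values are illustrative, not the kernel's):

#include <stdio.h>
#include <stdlib.h>

#define DEMO_PUD_MASK (~((1UL << 30) - 1)) /* assumes x86_64: 1 GiB units */

int main(void)
{
	unsigned long remain_entropy = 64UL << 30; /* pretend 64 GiB of slack */
	unsigned long vaddr = 0xffff880000000000UL; /* illustrative start */
	const unsigned int nregions = 3;

	srand(1);
	for (unsigned int i = 0; i < nregions; i++) {
		/* Give each region an even share of what is still left. */
		unsigned long entropy = remain_entropy / (nregions - i);
		unsigned long r = ((unsigned long)rand() << 32) |
				  (unsigned long)rand();

		entropy = (r % (entropy + 1)) & DEMO_PUD_MASK;
		vaddr += entropy;
		remain_entropy -= entropy;
		printf("region %u: base %#lx (shifted by %#lx)\n",
		       i, vaddr, entropy);
		/* The real loop also advances vaddr past the region's size. */
	}
	return 0;
}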



* Re: [PATCH 1/5] Renaming weak prng invocations - prandom_bytes_state, prandom_u32_state
From: Andy Shevchenko @ 2022-12-12  8:35 UTC
  To: david.keisarschm
  Cc: Dave Hansen, Peter Zijlstra, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, x86, H. Peter Anvin, linux-kernel, intel-gfx,
	dri-devel, linux-mtd, linux-scsi, bpf, netdev, linux-mm

On Mon, Dec 12, 2022 at 12:16:04AM +0200, david.keisarschm@mail.huji.ac.il wrote:
> From: David <david.keisarschm@mail.huji.ac.il>
> 
> Since the two functions
>  prandom_byte_state and prandom_u32_state
>  use the weak prng prandom_u32,
>  we added the prefix predictable_rng,
>  to their signatures so it is clear they are weak.

It's fancy indentation.

...

>  		/* Fisher-Yates shuffle */
>  		for (i = count - 1; i > 0; i--) {
> -			rand = prandom_u32_state(&state.rnd_state);
> +			rand = predictable_rng_prandom_u32_state(&state.rnd_state);

Isn't that too many "random"s encoded in the name?

I would leave either "rng" or "[p]random".

>  			rand %= (i + 1);
>  			swap_free_obj(slab, i, rand);
>  		}

-- 
With Best Regards,
Andy Shevchenko




* Re: [PATCH 1/5] Renaming weak prng invocations - prandom_bytes_state, prandom_u32_state
From: Jason A. Donenfeld @ 2022-12-12 14:35 UTC
  To: david.keisarschm
  Cc: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, x86, H. Peter Anvin, Jani Nikula,
	Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, David Airlie,
	Daniel Vetter, Miquel Raynal, Richard Weinberger,
	Vignesh Raghavendra, Hannes Reinecke, James E.J. Bottomley,
	Martin K. Petersen, Alexei Starovoitov, Daniel Borkmann,
	John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Song Liu,
	Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Jiri Pirko, Petr Mladek, Steven Rostedt, Sergey Senozhatsky,
	Andy Shevchenko, Rasmus Villemoes, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo, aksecurity,
	ilay.bahat1, linux-kernel, intel-gfx, dri-devel, linux-mtd,
	linux-scsi, bpf, netdev, linux-mm

Please CC me on future revisions.

As of 6.2, the prandom namespace is *only* for predictable randomness.
There's no need to rename anything. So nack on this patch 1/5.

With regards to the remaining patches in this series, if you want to
move prandom_u32_state callers over to get_random_bytes() and
get_random_u32(), that's fine from my perspective, but last I looked,
there was much usage in places where being repeatable was actually the
goal - test suites and such, where you want to be able to redo your
tests with the same seed. So you'll have to look at each instance case
by case and convince whoever maintains that code that they don't need
predictability. However, if you do that, the right functions to use are
get_random_bytes() and get_random_u32().

Jason
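
The repeatable-seed pattern referred to above looks roughly like this in a
selftest module — a hedged sketch (module and symbol names are invented for
illustration):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/prandom.h>

static u64 seed = 3141592653589793238ULL;
module_param(seed, ullong, 0444);
MODULE_PARM_DESC(seed, "PRNG seed; pass the same value to replay a run");

static int __init seeded_selftest_init(void)
{
	struct rnd_state rnd;

	/* Identical seed => identical input stream, so failures reproduce. */
	prandom_seed_state(&rnd, seed);
	pr_info("seeded selftest: first value %u (seed %llu)\n",
		prandom_u32_state(&rnd), seed);
	return 0;
}
module_init(seeded_selftest_init);

MODULE_LICENSE("GPL");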


* Re: [PATCH 2/5] Replace invocation of weak PRNG in kernel/bpf/core.c
From: Yonghong Song @ 2022-12-12 18:03 UTC
  To: david.keisarschm, Alexei Starovoitov, Daniel Borkmann,
	John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Song Liu,
	Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni
  Cc: aksecurity, ilay.bahat1, bpf, linux-kernel, netdev



On 12/11/22 2:16 PM, david.keisarschm@mail.huji.ac.il wrote:
> From: David <david.keisarschm@mail.huji.ac.il>
> 
> We changed the invocation of
>   prandom_u32_state to get_random_u32.
>   We deleted the maintained state,
>   which was a CPU-variable,
>   since get_random_u32 maintains its own CPU-variable.
>   We also deleted the state initializer,
>   since it is not needed anymore.
> 
> Signed-off-by: David <david.keisarschm@mail.huji.ac.il>
> ---
>   include/linux/bpf.h   |  1 -
>   kernel/bpf/core.c     | 13 +------------
>   kernel/bpf/verifier.c |  2 --
>   net/core/filter.c     |  1 -
>   4 files changed, 1 insertion(+), 16 deletions(-)
> 
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index c1bd1bd10..0689520b9 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -2572,7 +2572,6 @@ const struct bpf_func_proto *tracing_prog_func_proto(
>     enum bpf_func_id func_id, const struct bpf_prog *prog);
>   
>   /* Shared helpers among cBPF and eBPF. */
> -void bpf_user_rnd_init_once(void);
>   u64 bpf_user_rnd_u32(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);
>   u64 bpf_get_raw_cpu_id(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);
>   
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index 4cb5421d9..a6f06894e 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -2579,13 +2579,6 @@ void bpf_prog_free(struct bpf_prog *fp)
>   }
>   EXPORT_SYMBOL_GPL(bpf_prog_free);
>   
> -/* RNG for unpriviledged user space with separated state from prandom_u32(). */
> -static DEFINE_PER_CPU(struct rnd_state, bpf_user_rnd_state);
> -
> -void bpf_user_rnd_init_once(void)
> -{
> -	prandom_init_once(&bpf_user_rnd_state);
> -}
>   
>   BPF_CALL_0(bpf_user_rnd_u32)
>   {
> @@ -2595,12 +2588,8 @@ BPF_CALL_0(bpf_user_rnd_u32)
>   	 * transformations. Register assignments from both sides are
>   	 * different, f.e. classic always sets fn(ctx, A, X) here.
>   	 */
> -	struct rnd_state *state;
>   	u32 res;
> -
> -	state = &get_cpu_var(bpf_user_rnd_state);
> -	res = predictable_rng_prandom_u32_state(state);
> -	put_cpu_var(bpf_user_rnd_state);
> +	res = get_random_u32();
>   
>   	return res;
>   }

Please see the discussion here.
https://lore.kernel.org/bpf/87edtctz8t.fsf@toke.dk/
There is a performance concern with the above change.

> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 264b3dc71..9f22fb3fa 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -14049,8 +14049,6 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
>   
>   		if (insn->imm == BPF_FUNC_get_route_realm)
>   			prog->dst_needed = 1;
> -		if (insn->imm == BPF_FUNC_get_prandom_u32)
> -			bpf_user_rnd_init_once();
>   		if (insn->imm == BPF_FUNC_override_return)
>   			prog->kprobe_override = 1;
>   		if (insn->imm == BPF_FUNC_tail_call) {
> diff --git a/net/core/filter.c b/net/core/filter.c
> index bb0136e7a..7a595ac00 100644
> --- a/net/core/filter.c
> +++ b/net/core/filter.c
> @@ -443,7 +443,6 @@ static bool convert_bpf_extensions(struct sock_filter *fp,
>   			break;
>   		case SKF_AD_OFF + SKF_AD_RANDOM:
>   			*insn = BPF_EMIT_CALL(bpf_user_rnd_u32);
> -			bpf_user_rnd_init_once();
>   			break;
>   		}
>   		break;


* Re: [PATCH 2/5] Replace invocation of weak PRNG in kernel/bpf/core.c
From: Amit Klein @ 2022-12-12 22:35 UTC
  To: Yonghong Song
  Cc: david.keisarschm, Alexei Starovoitov, Daniel Borkmann,
	John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Song Liu,
	Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	ilay.bahat1, bpf, linux-kernel, netdev

On Mon, Dec 12, 2022 at 8:03 PM Yonghong Song <yhs@meta.com> wrote:
>
>
>
> On 12/11/22 2:16 PM, david.keisarschm@mail.huji.ac.il wrote:
> > From: David <david.keisarschm@mail.huji.ac.il>
> >
> > We changed the invocation of
> >   prandom_u32_state to get_random_u32.
> >   We deleted the maintained state,
> >   which was a CPU-variable,
> >   since get_random_u32 maintains its own CPU-variable.
> >   We also deleted the state initializer,
> >   since it is not needed anymore.
> >
> > Signed-off-by: David <david.keisarschm@mail.huji.ac.il>
> > ---
> >   include/linux/bpf.h   |  1 -
> >   kernel/bpf/core.c     | 13 +------------
> >   kernel/bpf/verifier.c |  2 --
> >   net/core/filter.c     |  1 -
> >   4 files changed, 1 insertion(+), 16 deletions(-)
> >
> > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
[...]
> Please see the discussion here.
> https://lore.kernel.org/bpf/87edtctz8t.fsf@toke.dk/
> There is a performance concern with the above change.
>

I see. How about using (in this instance only!) the SipHash-based
solution which was the basis for prandom_u32() starting with commit
c51f8f88d705 (v5.10-rc1) up until commit d4150779e60f (v5.19-rc1)?
Copying the relevant snippets (minus comments, for brevity) from
/lib/random32.c and /include/linux/prandom.h from that era (the 32
random bits are generated by prandom_u32() at the bottom; I didn't
bother with initialization, and I don't know if the per_cpu logic is
needed here, or perhaps some other kind of locking, if any):


#define PRND_SIPROUND(v0, v1, v2, v3) ( \
	v0 += v1, v1 = rol32(v1, 5), v2 += v3, v3 = rol32(v3, 8), \
	v1 ^= v0, v0 = rol32(v0, 16), v3 ^= v2, \
	v0 += v3, v3 = rol32(v3, 7), v2 += v1, v1 = rol32(v1, 13), \
	v3 ^= v0, v1 ^= v2, v2 = rol32(v2, 16) \
)

struct siprand_state {
    unsigned long v0;
    unsigned long v1;
    unsigned long v2;
    unsigned long v3;
};

static DEFINE_PER_CPU(struct siprand_state, net_rand_state)
__latent_entropy;   // do we actually need this? -AK

static inline u32 siprand_u32(struct siprand_state *s)
{
    unsigned long v0 = s->v0, v1 = s->v1, v2 = s->v2, v3 = s->v3;

    PRND_SIPROUND(v0, v1, v2, v3);
    PRND_SIPROUND(v0, v1, v2, v3);
    s->v0 = v0;  s->v1 = v1;  s->v2 = v2;  s->v3 = v3;
    return v1 + v3;
}


u32 prandom_u32(void)
{
    struct siprand_state *state = get_cpu_ptr(&net_rand_state);
    u32 res = siprand_u32(state);

    trace_prandom_u32(res);
    put_cpu_ptr(&net_rand_state);
    return res;
}
EXPORT_SYMBOL(prandom_u32);


* Re: [PATCH 2/5] Replace invocation of weak PRNG in kernel/bpf/core.c
From: Jason A. Donenfeld @ 2022-12-12 22:41 UTC
  To: Amit Klein
  Cc: Yonghong Song, david.keisarschm, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh,
	Stanislav Fomichev, Hao Luo, Jiri Olsa, David S. Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni, ilay.bahat1, bpf,
	linux-kernel, netdev

On Tue, Dec 13, 2022 at 12:35:24AM +0200, Amit Klein wrote:
> On Mon, Dec 12, 2022 at 8:03 PM Yonghong Song <yhs@meta.com> wrote:
> >
> >
> >
> > On 12/11/22 2:16 PM, david.keisarschm@mail.huji.ac.il wrote:
> > > From: David <david.keisarschm@mail.huji.ac.il>
> > >
> > > We changed the invocation of
> > >   prandom_u32_state to get_random_u32.
> > >   We deleted the maintained state,
> > >   which was a CPU-variable,
> > >   since get_random_u32 maintains its own CPU-variable.
> > >   We also deleted the state initializer,
> > >   since it is not needed anymore.
> > >
> > > Signed-off-by: David <david.keisarschm@mail.huji.ac.il>
> > > ---
> > >   include/linux/bpf.h   |  1 -
> > >   kernel/bpf/core.c     | 13 +------------
> > >   kernel/bpf/verifier.c |  2 --
> > >   net/core/filter.c     |  1 -
> > >   4 files changed, 1 insertion(+), 16 deletions(-)
> > >
> > > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> [...]
> > Please see the discussion here.
> > https://lore.kernel.org/bpf/87edtctz8t.fsf@toke.dk/
> > There is a performance concern with the above change.
> >
> 
> I see. How about using (in this instance only!) the SipHash-based
> solution which was the basis for prandom_u32() starting with commit
> c51f8f88d705 (v5.10-rc1) up until commit d4150779e60f (v5.19-rc1)?

Stop with this pseudo cryptographic garbage. Stop pushing this
everywhere. It was a hassle to undo this crap the first time around. The
last thing we need is to add it back.

Plus, there's no need for it either. I'll revisit the bpf patch if/when
it makes sense to do performance-wise.

Jason


* Re: [PATCH 1/5] Renaming weak prng invocations - prandom_bytes_state, prandom_u32_state
From: Stanislaw Gruszka @ 2022-12-14 12:33 UTC
  To: Jason A. Donenfeld
  Cc: david.keisarschm, Vignesh Raghavendra, Peter Zijlstra,
	Dave Hansen, Rasmus Villemoes, Alexei Starovoitov, dri-devel,
	Song Liu, Eric Dumazet, linux-mtd, Stanislav Fomichev,
	Miquel Raynal, Roman Gushchin, Christoph Lameter, H. Peter Anvin,
	Daniel Borkmann, Richard Weinberger, x86, John Fastabend,
	Andrii Nakryiko, ilay.bahat1, Ingo Molnar, Steven Rostedt,
	Jiri Pirko, David Rientjes, Yonghong Song, Paolo Abeni,
	intel-gfx, Petr Mladek, Jiri Olsa, Hao Luo, James E.J. Bottomley,
	KP Singh, Hyeonggon Yoo, Jakub Kicinski, Borislav Petkov,
	Hannes Reinecke, Andy Lutomirski, Rodrigo Vivi, Thomas Gleixner,
	Andy Shevchenko, Andrew Morton, Vlastimil Babka, Tvrtko Ursulin,
	linux-scsi, Martin K. Petersen, linux-mm, netdev, bpf,
	linux-kernel, Pekka Enberg, Sergey Senozhatsky, aksecurity,
	Joonsoo Kim, Martin KaFai Lau, David S. Miller

On Mon, Dec 12, 2022 at 03:35:20PM +0100, Jason A. Donenfeld wrote:
> Please CC me on future revisions.
> 
> As of 6.2, the prandom namespace is *only* for predictable randomness.
> There's no need to rename anything. So nack on this patch 1/5.

It is not obvious (for casual developers like me) that the 'p' in prandom
stands for 'predictable'. Some renaming would be useful, IMHO.

Regards
Stanislaw

^ permalink raw reply	[flat|nested] 16+ messages in thread
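
What "predictable" buys you in practice: the prandom_*_state API survives
precisely for callers that *want* a reproducible sequence, e.g. self-tests
that must generate the same data on every run. A minimal sketch against the
current include/linux/prandom.h interface (fill_test_pattern is a made-up
name for illustration):

#include <linux/prandom.h>

static void fill_test_pattern(void *buf, size_t len, u64 seed)
{
	struct rnd_state state;

	/* The same seed yields the same bytes on every boot and on
	 * every machine -- predictable by design, and exactly what a
	 * reproducible self-test wants.
	 */
	prandom_seed_state(&state, seed);
	prandom_bytes_state(&state, buf, len);
}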

* Re: [PATCH 1/5] Renaming weak prng invocations - prandom_bytes_state, prandom_u32_state
  2022-12-14 12:33     ` Stanislaw Gruszka
@ 2022-12-14 15:15       ` Eric Dumazet
  2022-12-14 15:53         ` Andy Shevchenko
  2022-12-14 16:21         ` Stanislaw Gruszka
  0 siblings, 2 replies; 16+ messages in thread
From: Eric Dumazet @ 2022-12-14 15:15 UTC (permalink / raw)
  To: Stanislaw Gruszka
  Cc: Jason A. Donenfeld, david.keisarschm, Vignesh Raghavendra,
	Peter Zijlstra, Dave Hansen, Rasmus Villemoes,
	Alexei Starovoitov, dri-devel, Song Liu, linux-mtd,
	Stanislav Fomichev, Miquel Raynal, Roman Gushchin,
	Christoph Lameter, H. Peter Anvin, Daniel Borkmann,
	Richard Weinberger, x86, John Fastabend, Andrii Nakryiko,
	ilay.bahat1, Ingo Molnar, Steven Rostedt, Jiri Pirko,
	David Rientjes, Yonghong Song, Paolo Abeni, intel-gfx,
	Petr Mladek, Jiri Olsa, Hao Luo, James E.J. Bottomley, KP Singh,
	Hyeonggon Yoo, Jakub Kicinski, Borislav Petkov, Hannes Reinecke,
	Andy Lutomirski, Rodrigo Vivi, Thomas Gleixner, Andy Shevchenko,
	Andrew Morton, Vlastimil Babka, Tvrtko Ursulin, linux-scsi,
	Martin K. Petersen, linux-mm, netdev, bpf, linux-kernel,
	Pekka Enberg, Sergey Senozhatsky, aksecurity, Joonsoo Kim,
	Martin KaFai Lau, David S. Miller

On Wed, Dec 14, 2022 at 1:34 PM Stanislaw Gruszka
<stanislaw.gruszka@linux.intel.com> wrote:
>
> On Mon, Dec 12, 2022 at 03:35:20PM +0100, Jason A. Donenfeld wrote:
> > Please CC me on future revisions.
> >
> > As of 6.2, the prandom namespace is *only* for predictable randomness.
> > There's no need to rename anything. So nack on this patch 1/5.
>
> It is not obvious (for casual developers like me) that the 'p' in prandom
> stands for 'predictable'. Some renaming would be useful, IMHO.

Renaming makes backports more complicated, because stable teams will
have to 'undo' the name changes. Stable teams are already overwhelmed
by the volume of backports and by silly merge conflicts.

Take another example:

u64 timecounter_read(struct timecounter *tc)

You would think this function would read the timecounter, right?

Well, it _updates_ many fields of @tc, so a 'better name' would also
be useful.

The Linux kernel is not for casual readers.

^ permalink raw reply	[flat|nested] 16+ messages in thread
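
Eric's example is not hypothetical: despite its name, timecounter_read()
advances the counter's bookkeeping. Roughly what it does, simplified from
kernel/time/timecounter.c with comments added:

u64 timecounter_read(struct timecounter *tc)
{
	u64 nsec;

	/* Convert the cycles elapsed since the last call and fold them
	 * into the accumulated nanosecond count...
	 */
	nsec = timecounter_read_delta(tc);
	nsec += tc->nsec;

	/* ...then write the total back into @tc: a "read" that mutates
	 * its argument.
	 */
	tc->nsec = nsec;

	return nsec;
}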

* Re: [PATCH 1/5] Renaming weak prng invocations - prandom_bytes_state, prandom_u32_state
  2022-12-14 15:15       ` Eric Dumazet
@ 2022-12-14 15:53         ` Andy Shevchenko
  2022-12-14 15:57           ` Andy Shevchenko
  2022-12-14 16:21         ` Stanislaw Gruszka
  1 sibling, 1 reply; 16+ messages in thread
From: Andy Shevchenko @ 2022-12-14 15:53 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Stanislaw Gruszka, david.keisarschm, dri-devel, linux-mtd,
	linux-scsi, linux-mm, netdev, bpf, linux-kernel

On Wed, Dec 14, 2022 at 04:15:49PM +0100, Eric Dumazet wrote:
> On Wed, Dec 14, 2022 at 1:34 PM Stanislaw Gruszka
> <stanislaw.gruszka@linux.intel.com> wrote:
> > On Mon, Dec 12, 2022 at 03:35:20PM +0100, Jason A. Donenfeld wrote:
> > > Please CC me on future revisions.
> > >
> > > As of 6.2, the prandom namespace is *only* for predictable randomness.
> > > There's no need to rename anything. So nack on this patch 1/5.
> >
> > It is not obvious (for casual developers like me) that the 'p' in prandom
> > stands for 'predictable'. Some renaming would be useful, IMHO.
> 
> Renaming makes backports more complicated, because stable teams will
> have to 'undo' the name changes. Stable teams are already overwhelmed
> by the volume of backports and by silly merge conflicts.
> 
> Take another example:
> 
> u64 timecounter_read(struct timecounter *tc)
> 
> You would think this function would read the timecounter, right?
> 
> Well, it _updates_ many fields of @tc, so a 'better name' would also
> be useful.

Right, at some point we end up in the world of

#define true 0

because... (read below)

> The Linux kernel is not for casual readers.

P.S. I believe you applied common sense, and in some cases
     the renames are necessary.

-- 
With Best Regards,
Andy Shevchenko



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 1/5] Renaming weak prng invocations - prandom_bytes_state, prandom_u32_state
  2022-12-14 15:53         ` Andy Shevchenko
@ 2022-12-14 15:57           ` Andy Shevchenko
  0 siblings, 0 replies; 16+ messages in thread
From: Andy Shevchenko @ 2022-12-14 15:57 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Stanislaw Gruszka, david.keisarschm, dri-devel, linux-mtd,
	linux-scsi, linux-mm, netdev, bpf, linux-kernel

On Wed, Dec 14, 2022 at 05:53:52PM +0200, Andy Shevchenko wrote:
> On Wed, Dec 14, 2022 at 04:15:49PM +0100, Eric Dumazet wrote:
> > On Wed, Dec 14, 2022 at 1:34 PM Stanislaw Gruszka
> > <stanislaw.gruszka@linux.intel.com> wrote:
> > > On Mon, Dec 12, 2022 at 03:35:20PM +0100, Jason A. Donenfeld wrote:
> > > > Please CC me on future revisions.
> > > >
> > > > As of 6.2, the prandom namespace is *only* for predictable randomness.
> > > > There's no need to rename anything. So nack on this patch 1/5.
> > >
> > > It is not obvious (for casual developers like me) that the 'p' in prandom
> > > stands for 'predictable'. Some renaming would be useful, IMHO.
> > 
> > Renaming makes backports more complicated, because stable teams will
> > have to 'undo' the name changes. Stable teams are already overwhelmed
> > by the volume of backports and by silly merge conflicts.
> > 
> > Take another example:
> > 
> > u64 timecounter_read(struct timecounter *tc)
> > 
> > You would think this function would read the timecounter, right?
> > 
> > Well, it _updates_ many fields of @tc, so a 'better name' would also
> > be useful.
> 
> Right, at some point we end up in the world of
> 
> #define true 0
> 
> because... (read below)
> 
> > The Linux kernel is not for casual readers.
> 
> P.S. I believe you applied common sense, and in some cases
>      the renames are necessary.

And before you come to the wrong conclusion by reading between the lines:
no, I'm not taking either side (to rename or not to rename) in this case.

-- 
With Best Regards,
Andy Shevchenko



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 1/5] Renaming weak prng invocations - prandom_bytes_state, prandom_u32_state
  2022-12-14 15:15       ` Eric Dumazet
  2022-12-14 15:53         ` Andy Shevchenko
@ 2022-12-14 16:21         ` Stanislaw Gruszka
  2022-12-14 18:28           ` Theodore Ts'o
  1 sibling, 1 reply; 16+ messages in thread
From: Stanislaw Gruszka @ 2022-12-14 16:21 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Jason A. Donenfeld, Vignesh Raghavendra, Peter Zijlstra,
	Joonsoo Kim, Roman Gushchin, Rasmus Villemoes,
	Alexei Starovoitov, dri-devel, Song Liu, linux-mtd,
	Stanislav Fomichev, H. Peter Anvin, Hyeonggon Yoo,
	Christoph Lameter, Daniel Borkmann, Richard Weinberger, x86,
	John Fastabend, Andrii Nakryiko, ilay.bahat1, Ingo Molnar,
	David Rientjes, Yonghong Song, Paolo Abeni, James E.J. Bottomley,
	Petr Mladek, david.keisarschm, Dave Hansen, Tvrtko Ursulin,
	Miquel Raynal, intel-gfx, Steven Rostedt, KP Singh,
	Jakub Kicinski, Rodrigo Vivi, Borislav Petkov, Hannes Reinecke,
	Andy Lutomirski, Jiri Pirko, Thomas Gleixner, Andy Shevchenko,
	bpf, Vlastimil Babka, Hao Luo, linux-scsi, Martin K. Petersen,
	linux-mm, netdev, linux-kernel, Pekka Enberg, Sergey Senozhatsky,
	aksecurity, Jiri Olsa, Andrew Morton, Martin KaFai Lau,
	David S. Miller

On Wed, Dec 14, 2022 at 04:15:49PM +0100, Eric Dumazet wrote:
> On Wed, Dec 14, 2022 at 1:34 PM Stanislaw Gruszka
> <stanislaw.gruszka@linux.intel.com> wrote:
> >
> > On Mon, Dec 12, 2022 at 03:35:20PM +0100, Jason A. Donenfeld wrote:
> > > Please CC me on future revisions.
> > >
> > > As of 6.2, the prandom namespace is *only* for predictable randomness.
> > > There's no need to rename anything. So nack on this patch 1/5.
> >
> > It is not obvious (for casual developers like me) that the 'p' in prandom
> > stands for 'predictable'. Some renaming would be useful, IMHO.
> 
> Renaming makes backports more complicated, because stable teams will
> have to 'undo' the name changes. Stable teams are already overwhelmed
> by the volume of backports and by silly merge conflicts.

Since when are backporting problems a valid argument for not making
changes? That's new to me.

> The Linux kernel is not for casual readers.

Sure.

Regards
Stanislaw

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 1/5] Renaming weak prng invocations - prandom_bytes_state, prandom_u32_state
  2022-12-14 16:21         ` Stanislaw Gruszka
@ 2022-12-14 18:28           ` Theodore Ts'o
  0 siblings, 0 replies; 16+ messages in thread
From: Theodore Ts'o @ 2022-12-14 18:28 UTC (permalink / raw)
  To: Stanislaw Gruszka
  Cc: Eric Dumazet, Jason A. Donenfeld, Vignesh Raghavendra,
	Peter Zijlstra, Joonsoo Kim, Roman Gushchin, Rasmus Villemoes,
	Alexei Starovoitov, dri-devel, Song Liu, linux-mtd,
	Stanislav Fomichev, H. Peter Anvin, Hyeonggon Yoo,
	Christoph Lameter, Daniel Borkmann, Richard Weinberger, x86,
	John Fastabend, Andrii Nakryiko, ilay.bahat1, Ingo Molnar,
	David Rientjes, Yonghong Song, Paolo Abeni, James E.J. Bottomley,
	Petr Mladek, david.keisarschm, Dave Hansen, Tvrtko Ursulin,
	Miquel Raynal, intel-gfx, Steven Rostedt, KP Singh,
	Jakub Kicinski, Rodrigo Vivi, Borislav Petkov, Hannes Reinecke,
	Andy Lutomirski, Jiri Pirko, Thomas Gleixner, Andy Shevchenko,
	bpf, Vlastimil Babka, Hao Luo, linux-scsi, Martin K. Petersen,
	linux-mm, netdev, linux-kernel, Pekka Enberg, Sergey Senozhatsky,
	aksecurity, Jiri Olsa, Andrew Morton, Martin KaFai Lau,
	David S. Miller

On Wed, Dec 14, 2022 at 05:21:17PM +0100, Stanislaw Gruszka wrote:
> On Wed, Dec 14, 2022 at 04:15:49PM +0100, Eric Dumazet wrote:
> > On Wed, Dec 14, 2022 at 1:34 PM Stanislaw Gruszka
> > <stanislaw.gruszka@linux.intel.com> wrote:
> > >
> > > On Mon, Dec 12, 2022 at 03:35:20PM +0100, Jason A. Donenfeld wrote:
> > > > Please CC me on future revisions.
> > > >
> > > > As of 6.2, the prandom namespace is *only* for predictable randomness.
> > > > There's no need to rename anything. So nack on this patch 1/5.
> > >
> > > It is not obvious (for casual developers like me) that p in prandom
> > > stands for predictable. Some renaming would be useful IMHO.

I disagree.  Pseudo-random has *always* meant "predictable".  And the
'p' in prandom was originally "pseudo-random".  In userspace,
random(3) is also pseudo-random, and is ***utterly*** predictable.  So
the original use of prandom() was a bit more of an explicit nod to the
fact that prandom is something which is inherently predictable.

So I don't think it needs renaming, whether to
"predictable_rng_prandom_u32" or "no_you_idiot_dont_you_dare_use_it_for_cryptographic_purposes_prandom_u32".

I think we need to assume a certain base level of competence,
especially for someone who is messing with security-sensitive kernel
code.  If a developer doesn't know that a prng is predictable, that's
probably the *least* of the sort of mistakes that they might make.

					- Ted

^ permalink raw reply	[flat|nested] 16+ messages in thread
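
Ted's point about random(3) is easy to demonstrate: seed it with a fixed
value and every run of the program prints the same sequence. A minimal
userspace illustration:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	/* A fixed seed fully determines the output: the three "random"
	 * numbers below are identical on every run.
	 */
	srandom(42);
	printf("%ld %ld %ld\n", random(), random(), random());
	return 0;
}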

end of thread, other threads:[~2022-12-14 18:29 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <cover.1670778651.git.david.keisarschm@mail.huji.ac.il>
2022-12-11 22:16 ` [PATCH 1/5] Renaming weak prng invocations - prandom_bytes_state, prandom_u32_state david.keisarschm
2022-12-12  8:35   ` Andy Shevchenko
2022-12-12 14:35   ` Jason A. Donenfeld
2022-12-14 12:33     ` Stanislaw Gruszka
2022-12-14 15:15       ` Eric Dumazet
2022-12-14 15:53         ` Andy Shevchenko
2022-12-14 15:57           ` Andy Shevchenko
2022-12-14 16:21         ` Stanislaw Gruszka
2022-12-14 18:28           ` Theodore Ts'o
2022-12-11 22:16 ` [PATCH 2/5] Replace invocation of weak PRNG in kernel/bpf/core.c david.keisarschm
2022-12-12 18:03   ` Yonghong Song
2022-12-12 22:35     ` Amit Klein
2022-12-12 22:41       ` Jason A. Donenfeld
2022-12-11 22:16 ` [PATCH 3/5] Replace invocation of weak PRNG in mm/slab.c david.keisarschm
2022-12-11 22:16 ` [PATCH 4/5] Replace invocation of weak PRNG inside mm/slab_common.c david.keisarschm
2022-12-11 22:16 ` [PATCH 5/5] Replace invocation of weak PRNG in arch/x86/mm/kaslr.c david.keisarschm

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).