linux-kernel.vger.kernel.org archive mirror
* [PATCH v3 0/3] Replace invocations of prandom_u32() with get_random_u32()
@ 2022-12-18 18:18 david.keisarschm
  2022-12-18 18:18 ` [PATCH v3 1/3] Replace invocation of weak PRNG in mm/slab.c david.keisarschm
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: david.keisarschm @ 2022-12-18 18:18 UTC (permalink / raw)
  To: linux-kernel; +Cc: Jason, David Keisar Schmidt, aksecurity, ilay.bahat1, bpf

From: David Keisar Schmidt <david.keisarschm@mail.huji.ac.il>

Hi,

This third series adds some changes to the commit messages,
and also replaces get_random_u32 with get_random_u32_below
in the cases where a modulo operation is applied to the result.

The security improvements for prandom_u32 made in commits c51f8f88d705
from October 2020 and d4150779e60f from May 2022 did not cover the cases
where prandom_bytes_state() and prandom_u32_state() are used.

Specifically, this weak randomization takes place in three cases:
    1.	mm/slab.c
    2.	mm/slab_common.c
    3.	arch/x86/mm/kaslr.c

The first two invocations (mm/slab.c, mm/slab_common.c) are used to
randomize the slab allocator freelists,
so that attackers cannot obtain information on the heap state.

The last invocation, inside arch/x86/mm/kaslr.c,
randomizes the virtual address space of kernel memory regions.
Hence, we have made the necessary changes to strengthen those randomizations,
switching prandom_u32 instances to get_random_u32.
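
As an illustration, the replacements follow this pattern (a minimal
sketch with a placeholder bound n, not the exact kernel diffs; see the
individual patches for the real code):

	/* Before: a seeded weak PRNG, reduced with a biased modulo */
	struct rnd_state state;
	prandom_seed_state(&state, get_random_long());
	rand = prandom_u32_state(&state) % n;

	/* After: a strong RNG; get_random_u32_below(n) returns a
	 * uniformly distributed value in [0, n) without modulo bias
	 */
	rand = get_random_u32_below(n);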

# Changes since v2

* edited the commit messages in all three patches.
* replaced instances of get_random_u32 with get_random_u32_below
    in mm/slab.c and mm/slab_common.c

# Changes since v1

* omitted the renaming patch, per the feedback we received
* omitted the replacement of prandom_u32_state with get_random_u32 in bpf/core.c,
  as it turned out to duplicate a patch suggested earlier by Jason Donenfeld

Regards,


David Keisar Schmidt (3):
  Replace invocation of weak PRNG in mm/slab.c
  Replace invocation of weak PRNG inside mm/slab_common.c
  Replace invocation of weak PRNG in arch/x86/mm/kaslr.c

 arch/x86/mm/kaslr.c |  5 +----
 mm/slab.c           | 25 ++++++++++---------------
 mm/slab_common.c    | 11 +++--------
 3 files changed, 14 insertions(+), 27 deletions(-)

-- 
2.38.0



* [PATCH v3 1/3] Replace invocation of weak PRNG in mm/slab.c
  2022-12-18 18:18 [PATCH v3 0/3] Replace invocations of prandom_u32() with get_random_u32() david.keisarschm
@ 2022-12-18 18:18 ` david.keisarschm
  2022-12-18 18:18 ` [PATCH v3 2/3] Replace invocation of weak PRNG inside mm/slab_common.c david.keisarschm
  2022-12-18 18:19 ` [PATCH v3 3/3] Replace invocation of weak PRNG in arch/x86/mm/kaslr.c david.keisarschm
  2 siblings, 0 replies; 5+ messages in thread
From: david.keisarschm @ 2022-12-18 18:18 UTC (permalink / raw)
  To: linux-kernel, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Vlastimil Babka, Roman Gushchin,
	Hyeonggon Yoo
  Cc: Jason, David Keisar Schmidt, aksecurity, ilay.bahat1, linux-mm

From: David Keisar Schmidt <david.keisarschm@mail.huji.ac.il>

This third series adds some changes to the commit messages,
and also replaces get_random_u32 with get_random_u32_below
in the cases where a modulo operation is applied to the result.

The slab allocator randomization uses the prandom_u32 PRNG.
It was added to prevent attackers from obtaining information on the heap
state by randomizing the freelist state.

However, this PRNG turned out to be weak, as noted in commit c51f8f88d705.
To fix it, we have changed the invocation of prandom_u32_state to
get_random_u32, which draws from a strong source of randomness.

Since a modulo operation is applied right after that,
we used get_random_u32_below to achieve a uniform distribution.
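
To see why the bare modulo is biased (a worked example, not taken from
the patch): 2^32 = 4294967296, so for, say, n = 10 we have
4294967296 % 10 = 6, and the residues 0..5 are each produced 429496730
times while 6..9 are produced only 429496729 times.
get_random_u32_below(10) avoids this slight bias.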

In addition, we changed the freelist_init_state union to a struct:
the union only existed to let the pre-computed-list fields share storage
with the rnd_state that held the prandom_u32 state, and since
get_random_u32 maintains its own state, that rnd_state is no longer needed.

# Changes since v2

* edited the commit message.
* replaced instances of get_random_u32 with get_random_u32_below
    in mm/slab.c.


Signed-off-by: David Keisar Schmidt <david.keisarschm@mail.huji.ac.il>
---
 mm/slab.c | 25 ++++++++++---------------
 1 file changed, 10 insertions(+), 15 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 59c8e28f7..c259e0b09 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2360,20 +2360,17 @@ static void cache_init_objs_debug(struct kmem_cache *cachep, struct slab *slab)
 
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
 /* Hold information during a freelist initialization */
-union freelist_init_state {
-	struct {
-		unsigned int pos;
-		unsigned int *list;
-		unsigned int count;
-	};
-	struct rnd_state rnd_state;
+struct freelist_init_state {
+	unsigned int pos;
+	unsigned int *list;
+	unsigned int count;
 };
 
 /*
  * Initialize the state based on the randomization method available.
  * return true if the pre-computed list is available, false otherwise.
  */
-static bool freelist_state_initialize(union freelist_init_state *state,
+static bool freelist_state_initialize(struct freelist_init_state *state,
 				struct kmem_cache *cachep,
 				unsigned int count)
 {
@@ -2381,23 +2378,22 @@ static bool freelist_state_initialize(union freelist_init_state *state,
 	unsigned int rand;
 
 	/* Use best entropy available to define a random shift */
-	rand = get_random_u32();
+	rand = get_random_u32_below(count);
 
 	/* Use a random state if the pre-computed list is not available */
 	if (!cachep->random_seq) {
-		prandom_seed_state(&state->rnd_state, rand);
 		ret = false;
 	} else {
 		state->list = cachep->random_seq;
 		state->count = count;
-		state->pos = rand % count;
+		state->pos = rand;
 		ret = true;
 	}
 	return ret;
 }
 
 /* Get the next entry on the list and randomize it using a random shift */
-static freelist_idx_t next_random_slot(union freelist_init_state *state)
+static freelist_idx_t next_random_slot(struct freelist_init_state *state)
 {
 	if (state->pos >= state->count)
 		state->pos = 0;
@@ -2418,7 +2414,7 @@ static void swap_free_obj(struct slab *slab, unsigned int a, unsigned int b)
 static bool shuffle_freelist(struct kmem_cache *cachep, struct slab *slab)
 {
 	unsigned int objfreelist = 0, i, rand, count = cachep->num;
-	union freelist_init_state state;
+	struct freelist_init_state state;
 	bool precomputed;
 
 	if (count < 2)
@@ -2447,8 +2443,7 @@ static bool shuffle_freelist(struct kmem_cache *cachep, struct slab *slab)
 
 		/* Fisher-Yates shuffle */
 		for (i = count - 1; i > 0; i--) {
-			rand = prandom_u32_state(&state.rnd_state);
-			rand %= (i + 1);
+			rand = get_random_u32_below(i+1);
 			swap_free_obj(slab, i, rand);
 		}
 	} else {
-- 
2.38.0



* [PATCH v3 2/3] Replace invocation of weak PRNG inside mm/slab_common.c
  2022-12-18 18:18 [PATCH v3 0/3] Replace invocations of prandom_u32() with get_random_u32() david.keisarschm
  2022-12-18 18:18 ` [PATCH v3 1/3] Replace invocation of weak PRNG in mm/slab.c david.keisarschm
@ 2022-12-18 18:18 ` david.keisarschm
  2022-12-18 18:19 ` [PATCH v3 3/3] Replace invocation of weak PRNG in arch/x86/mm/kaslr.c david.keisarschm
  2 siblings, 0 replies; 5+ messages in thread
From: david.keisarschm @ 2022-12-18 18:18 UTC (permalink / raw)
  To: linux-kernel, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Vlastimil Babka, Roman Gushchin,
	Hyeonggon Yoo
  Cc: Jason, David Keisar Schmidt, aksecurity, ilay.bahat1, linux-mm

From: David Keisar Schmidt <david.keisarschm@mail.huji.ac.il>

This third series adds some changes to the commit messages,
and also replaces get_random_u32 with get_random_u32_below
in the cases where a modulo operation is applied to the result.

The slab allocator randomization inside slab_common.c uses the prandom_u32 PRNG.
It was added to prevent attackers from obtaining information on the heap state.

However, this PRNG turned out to be weak, as noted in commit c51f8f88d705.
To fix it, we have changed the invocation of prandom_u32_state to
get_random_u32, which draws from a strong source of randomness.

Since a modulo operation is applied right after that,
in the Fisher-Yates shuffle, we used get_random_u32_below to achieve
a uniform distribution.
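
For reference, the post-patch shuffle has this shape (a condensed
sketch of the hunk below; list and count are the freelist array and
its length):

	/* Fisher-Yates: the result is a uniformly random permutation
	 * only if each index drawn here is uniform in [0, i + 1)
	 */
	for (i = count - 1; i > 0; i--) {
		rand = get_random_u32_below(i + 1);
		swap(list[i], list[rand]);
	}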

# Changes since v2

* edited the commit message.
* replaced instances of get_random_u32 with get_random_u32_below
    in mm/slab_common.c.


Signed-off-by: David Keisar Schmidt <david.keisarschm@mail.huji.ac.il>
---
 mm/slab_common.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 0042fb273..e254b2f55 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1130,7 +1130,7 @@ EXPORT_SYMBOL(kmalloc_large_node);
 
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
 /* Randomize a generic freelist */
-static void freelist_randomize(struct rnd_state *state, unsigned int *list,
+static void freelist_randomize(unsigned int *list,
 			       unsigned int count)
 {
 	unsigned int rand;
@@ -1141,8 +1141,7 @@ static void freelist_randomize(struct rnd_state *state, unsigned int *list,
 
 	/* Fisher-Yates shuffle */
 	for (i = count - 1; i > 0; i--) {
-		rand = prandom_u32_state(state);
-		rand %= (i + 1);
+		rand = get_random_u32_below(i+1);
 		swap(list[i], list[rand]);
 	}
 }
@@ -1151,7 +1150,6 @@ static void freelist_randomize(struct rnd_state *state, unsigned int *list,
 int cache_random_seq_create(struct kmem_cache *cachep, unsigned int count,
 				    gfp_t gfp)
 {
-	struct rnd_state state;
 
 	if (count < 2 || cachep->random_seq)
 		return 0;
@@ -1160,10 +1158,7 @@ int cache_random_seq_create(struct kmem_cache *cachep, unsigned int count,
 	if (!cachep->random_seq)
 		return -ENOMEM;
 
-	/* Get best entropy at this stage of boot */
-	prandom_seed_state(&state, get_random_long());
-
-	freelist_randomize(&state, cachep->random_seq, count);
+	freelist_randomize(cachep->random_seq, count);
 	return 0;
 }
 
-- 
2.38.0



* [PATCH v3 3/3] Replace invocation of weak PRNG in arch/x86/mm/kaslr.c
  2022-12-18 18:18 [PATCH v3 0/3] Replace invocations of prandom_u32() with get_random_u32() david.keisarschm
  2022-12-18 18:18 ` [PATCH v3 1/3] Replace invocation of weak PRNG in mm/slab.c david.keisarschm
  2022-12-18 18:18 ` [PATCH v3 2/3] Replace invocation of weak PRNG inside mm/slab_common.c david.keisarschm
@ 2022-12-18 18:19 ` david.keisarschm
  2023-01-06 19:23   ` Kees Cook
  2 siblings, 1 reply; 5+ messages in thread
From: david.keisarschm @ 2022-12-18 18:19 UTC (permalink / raw)
  To: linux-kernel, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H. Peter Anvin
  Cc: Jason, David Keisar Schmidt, aksecurity, ilay.bahat1

From: David Keisar Schmidt <david.keisarschm@mail.huji.ac.il>

This third series adds some changes to the commit messages,
and also replaces get_random_u32 with get_random_u32_below
in the cases where a modulo operation is applied to the result.

The memory randomization of the virtual address space of kernel memory regions
(physical memory mapping, vmalloc & vmemmap) inside arch/x86/mm/kaslr.c
is based on the function prandom_bytes_state, which uses the prandom_u32 PRNG.

However, this PRNG turned out to be weak, as noted in commit c51f8f88d705.
To fix it, we have changed the invocation of prandom_bytes_state to get_random_bytes.

Unlike get_random_bytes, which maintains its own state, prandom_bytes_state
needs to be seeded; thus, we have omitted the call to the seeding function,
which is no longer needed.
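
The seeding difference boils down to the following (a condensed
before/after sketch drawn from the diff below):

	/* Before: a caller-seeded, reproducible pseudo-random stream */
	struct rnd_state rand_state;
	prandom_seed_state(&rand_state, kaslr_get_random_long("Memory"));
	prandom_bytes_state(&rand_state, &rand, sizeof(rand));

	/* After: no caller-visible state left to seed */
	get_random_bytes(&rand, sizeof(rand));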

# Changes since v2

* edited the commit message.


Signed-off-by: David Keisar Schmidt <david.keisarschm@mail.huji.ac.il>
---
 arch/x86/mm/kaslr.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
index 557f0fe25..9ef8993d5 100644
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -66,7 +66,6 @@ void __init kernel_randomize_memory(void)
 	size_t i;
 	unsigned long vaddr_start, vaddr;
 	unsigned long rand, memory_tb;
-	struct rnd_state rand_state;
 	unsigned long remain_entropy;
 	unsigned long vmemmap_size;
 
@@ -113,8 +112,6 @@ void __init kernel_randomize_memory(void)
 	for (i = 0; i < ARRAY_SIZE(kaslr_regions); i++)
 		remain_entropy -= get_padding(&kaslr_regions[i]);
 
-	prandom_seed_state(&rand_state, kaslr_get_random_long("Memory"));
-
 	for (i = 0; i < ARRAY_SIZE(kaslr_regions); i++) {
 		unsigned long entropy;
 
@@ -123,7 +120,7 @@ void __init kernel_randomize_memory(void)
 		 * available.
 		 */
 		entropy = remain_entropy / (ARRAY_SIZE(kaslr_regions) - i);
-		prandom_bytes_state(&rand_state, &rand, sizeof(rand));
+		get_random_bytes(&rand, sizeof(rand));
 		entropy = (rand % (entropy + 1)) & PUD_MASK;
 		vaddr += entropy;
 		*kaslr_regions[i].base = vaddr;
-- 
2.38.0



* Re: [PATCH v3 3/3] Replace invocation of weak PRNG in arch/x86/mm/kaslr.c
  2022-12-18 18:19 ` [PATCH v3 3/3] Replace invocation of weak PRNG in arch/x86/mm/kaslr.c david.keisarschm
@ 2023-01-06 19:23   ` Kees Cook
  0 siblings, 0 replies; 5+ messages in thread
From: Kees Cook @ 2023-01-06 19:23 UTC (permalink / raw)
  To: david.keisarschm
  Cc: linux-kernel, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H. Peter Anvin, Jason, aksecurity, ilay.bahat1

On Sun, Dec 18, 2022 at 08:19:00PM +0200, david.keisarschm@mail.huji.ac.il wrote:
> From: David Keisar Schmidt <david.keisarschm@mail.huji.ac.il>
> 
> This third series adds some changes to the commit messages,
> and also replaces get_random_u32 with get_random_u32_below
> in the cases where a modulo operation is applied to the result.
> 
> The memory randomization of the virtual address space of kernel memory regions
> (physical memory mapping, vmalloc & vmemmap) inside arch/x86/mm/kaslr.c
> is based on the function prandom_bytes_state, which uses the prandom_u32 PRNG.
> 
> However, this PRNG turned out to be weak, as noted in commit c51f8f88d705.
> To fix it, we have changed the invocation of prandom_bytes_state to get_random_bytes.
> 
> Unlike get_random_bytes, which maintains its own state, prandom_bytes_state
> needs to be seeded; thus, we have omitted the call to the seeding function,
> which is no longer needed.

I'd really rather not do this. prandom is being seeded from a "true" RNG,
and it allows KASLR to be hand-seeded for a repeatable layout
when doing debugging and performance analysis (for the coming FG-KASLR).

AIUI, prandom is weak due to its shared state (which KASLR's use doesn't
have) and its predictability over time (but KASLR uses it only at
boot-time). And being able to recover the outputs would mean KASLR was
already broken, so there isn't anything that becomes MORE exposed.

If there is some other weakness, then sure, we can re-evaluate it, but
for now I'd rather leave this as-is.

-Kees

-- 
Kees Cook

