* [PATCH v2 0/3] Replace invocations of prandom_u32_state, prandom_bytes_state with get_random_u32, get_random_bytes
@ 2022-12-13 10:34 david.keisarschm
2022-12-13 10:34 ` [PATCH v2 1/3] Replace invocation of weak PRNG in mm/slab.c david.keisarschm
` (2 more replies)
0 siblings, 3 replies; 5+ messages in thread
From: david.keisarschm @ 2022-12-13 10:34 UTC (permalink / raw)
To: linux-kernel; +Cc: David, aksecurity, ilay.bahat1
From: David <david.keisarschm@mail.huji.ac.il>
The security improvements made to prandom_u32 in commit
c51f8f88d705e06bd696d7510aff22b33eb8e638 (October 2020)
and commit d4150779e60fb6c49be25572596b2cdfc5d46a09 (May 2022)
did not cover the cases where prandom_bytes_state() and
prandom_u32_state() are used. This series adds the changes
needed to handle those cases as well.
David (3):
Replace invocation of weak PRNG in mm/slab.c
Replace invocation of weak PRNG inside mm/slab_common.c
Replace invocation of weak PRNG in arch/x86/mm/kaslr.c
arch/x86/mm/kaslr.c | 5 +----
mm/slab.c | 20 ++++++++------------
mm/slab_common.c | 10 +++-------
3 files changed, 12 insertions(+), 23 deletions(-)
--
2.38.0
^ permalink raw reply [flat|nested] 5+ messages in thread
* [PATCH v2 1/3] Replace invocation of weak PRNG in mm/slab.c
2022-12-13 10:34 [PATCH v2 0/3] Replace invocations of prandom_u32_state, prandom_bytes_state with get_random_u32, get_random_bytes david.keisarschm
@ 2022-12-13 10:34 ` david.keisarschm
2022-12-13 15:02 ` Matthew Wilcox
2022-12-13 10:34 ` [PATCH v2 2/3] Replace invocation of weak PRNG inside mm/slab_common.c david.keisarschm
2022-12-13 10:34 ` [PATCH v2 3/3] Replace invocation of weak PRNG in arch/x86/mm/kaslr.c david.keisarschm
2 siblings, 1 reply; 5+ messages in thread
From: david.keisarschm @ 2022-12-13 10:34 UTC (permalink / raw)
To: linux-kernel, Christoph Lameter, Pekka Enberg, David Rientjes,
Joonsoo Kim, Andrew Morton, Vlastimil Babka, Roman Gushchin,
Hyeonggon Yoo
Cc: David, aksecurity, ilay.bahat1, linux-mm
From: David <david.keisarschm@mail.huji.ac.il>
We changed the invocation of prandom_u32_state to get_random_u32,
and turned freelist_init_state from a union into a struct, since
the rnd_state member is no longer needed: get_random_u32 maintains
its own state. This change is important because it makes the slab
allocator's freelist randomization stronger.
Signed-off-by: David <david.keisarschm@mail.huji.ac.il>
---
mm/slab.c | 20 ++++++++------------
1 file changed, 8 insertions(+), 12 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index 92d6b1d48..1476104f4 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2360,20 +2360,17 @@ static void cache_init_objs_debug(struct kmem_cache *cachep, struct slab *slab)
#ifdef CONFIG_SLAB_FREELIST_RANDOM
/* Hold information during a freelist initialization */
-union freelist_init_state {
- struct {
- unsigned int pos;
- unsigned int *list;
- unsigned int count;
- };
- struct rnd_state rnd_state;
+struct freelist_init_state {
+ unsigned int pos;
+ unsigned int *list;
+ unsigned int count;
};
/*
* Initialize the state based on the randomization method available.
* return true if the pre-computed list is available, false otherwise.
*/
-static bool freelist_state_initialize(union freelist_init_state *state,
+static bool freelist_state_initialize(struct freelist_init_state *state,
struct kmem_cache *cachep,
unsigned int count)
{
@@ -2385,7 +2382,6 @@ static bool freelist_state_initialize(union freelist_init_state *state,
/* Use a random state if the pre-computed list is not available */
if (!cachep->random_seq) {
- prandom_seed_state(&state->rnd_state, rand);
ret = false;
} else {
state->list = cachep->random_seq;
@@ -2397,7 +2393,7 @@ static bool freelist_state_initialize(union freelist_init_state *state,
}
/* Get the next entry on the list and randomize it using a random shift */
-static freelist_idx_t next_random_slot(union freelist_init_state *state)
+static freelist_idx_t next_random_slot(struct freelist_init_state *state)
{
if (state->pos >= state->count)
state->pos = 0;
@@ -2418,7 +2414,7 @@ static void swap_free_obj(struct slab *slab, unsigned int a, unsigned int b)
static bool shuffle_freelist(struct kmem_cache *cachep, struct slab *slab)
{
unsigned int objfreelist = 0, i, rand, count = cachep->num;
- union freelist_init_state state;
+ struct freelist_init_state state;
bool precomputed;
if (count < 2)
@@ -2447,7 +2443,7 @@ static bool shuffle_freelist(struct kmem_cache *cachep, struct slab *slab)
/* Fisher-Yates shuffle */
for (i = count - 1; i > 0; i--) {
- rand = prandom_u32_state(&state.rnd_state);
+ rand = get_random_u32();
rand %= (i + 1);
swap_free_obj(slab, i, rand);
}
--
2.38.0
* [PATCH v2 2/3] Replace invocation of weak PRNG inside mm/slab_common.c
2022-12-13 10:34 [PATCH v2 0/3] Replace invocations of prandom_u32_state, prandom_bytes_state with get_random_u32, get_random_bytes david.keisarschm
2022-12-13 10:34 ` [PATCH v2 1/3] Replace invocation of weak PRNG in mm/slab.c david.keisarschm
@ 2022-12-13 10:34 ` david.keisarschm
2022-12-13 10:34 ` [PATCH v2 3/3] Replace invocation of weak PRNG in arch/x86/mm/kaslr.c david.keisarschm
2 siblings, 0 replies; 5+ messages in thread
From: david.keisarschm @ 2022-12-13 10:34 UTC (permalink / raw)
To: linux-kernel, Christoph Lameter, Pekka Enberg, David Rientjes,
Joonsoo Kim, Andrew Morton, Vlastimil Babka, Roman Gushchin,
Hyeonggon Yoo
Cc: David, aksecurity, ilay.bahat1, linux-mm
From: David <david.keisarschm@mail.huji.ac.il>
We changed the invocation of prandom_u32_state to get_random_u32,
and omitted the initial seeding of the state: get_random_u32
maintains its own state, so there is no longer any need to store
a prandom state here.
Signed-off-by: David <david.keisarschm@mail.huji.ac.il>
---
mm/slab_common.c | 10 +++-------
1 file changed, 3 insertions(+), 7 deletions(-)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index b79755716..6ac68b9a6 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1130,7 +1130,7 @@ EXPORT_SYMBOL(kmalloc_large_node);
#ifdef CONFIG_SLAB_FREELIST_RANDOM
/* Randomize a generic freelist */
-static void freelist_randomize(struct rnd_state *state, unsigned int *list,
+static void freelist_randomize(unsigned int *list,
unsigned int count)
{
unsigned int rand;
@@ -1141,7 +1141,7 @@ static void freelist_randomize(struct rnd_state *state, unsigned int *list,
/* Fisher-Yates shuffle */
for (i = count - 1; i > 0; i--) {
- rand = prandom_u32_state(state);
+ rand = get_random_u32();
rand %= (i + 1);
swap(list[i], list[rand]);
}
@@ -1151,7 +1151,6 @@ static void freelist_randomize(struct rnd_state *state, unsigned int *list,
int cache_random_seq_create(struct kmem_cache *cachep, unsigned int count,
gfp_t gfp)
{
- struct rnd_state state;
if (count < 2 || cachep->random_seq)
return 0;
@@ -1160,10 +1159,7 @@ int cache_random_seq_create(struct kmem_cache *cachep, unsigned int count,
if (!cachep->random_seq)
return -ENOMEM;
- /* Get best entropy at this stage of boot */
- prandom_seed_state(&state, get_random_long());
-
- freelist_randomize(&state, cachep->random_seq, count);
+ freelist_randomize(cachep->random_seq, count);
return 0;
}
--
2.38.0
* [PATCH v2 3/3] Replace invocation of weak PRNG in arch/x86/mm/kaslr.c
2022-12-13 10:34 [PATCH v2 0/3] Replace invocations of prandom_u32_state, prandom_bytes_state with get_random_u32, get_random_bytes david.keisarschm
2022-12-13 10:34 ` [PATCH v2 1/3] Replace invocation of weak PRNG in mm/slab.c david.keisarschm
2022-12-13 10:34 ` [PATCH v2 2/3] Replace invocation of weak PRNG inside mm/slab_common.c david.keisarschm
@ 2022-12-13 10:34 ` david.keisarschm
2 siblings, 0 replies; 5+ messages in thread
From: david.keisarschm @ 2022-12-13 10:34 UTC (permalink / raw)
To: linux-kernel, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
H. Peter Anvin
Cc: David, aksecurity, ilay.bahat1
From: David <david.keisarschm@mail.huji.ac.il>
We changed the invocation of prandom_bytes_state, which is
considered weak, to get_random_bytes. We also omitted the call to
the seeding function, since get_random_bytes maintains its own
state, so seeding is no longer needed here. This is important for
the randomization of the kernel's initial memory layout.
Signed-off-by: David <david.keisarschm@mail.huji.ac.il>
---
arch/x86/mm/kaslr.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
index 0bb083979..9ef8993d5 100644
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -66,7 +66,6 @@ void __init kernel_randomize_memory(void)
size_t i;
unsigned long vaddr_start, vaddr;
unsigned long rand, memory_tb;
- struct rnd_state rand_state;
unsigned long remain_entropy;
unsigned long vmemmap_size;
@@ -113,8 +112,6 @@ void __init kernel_randomize_memory(void)
for (i = 0; i < ARRAY_SIZE(kaslr_regions); i++)
remain_entropy -= get_padding(&kaslr_regions[i]);
- prandom_seed_state(&rand_state, kaslr_get_random_long("Memory"));
-
for (i = 0; i < ARRAY_SIZE(kaslr_regions); i++) {
unsigned long entropy;
@@ -123,7 +120,7 @@ void __init kernel_randomize_memory(void)
* available.
*/
entropy = remain_entropy / (ARRAY_SIZE(kaslr_regions) - i);
- prandom_bytes_state(&rand_state, &rand, sizeof(rand));
+ get_random_bytes(&rand, sizeof(rand));
entropy = (rand % (entropy + 1)) & PUD_MASK;
vaddr += entropy;
*kaslr_regions[i].base = vaddr;
--
2.38.0
* Re: [PATCH v2 1/3] Replace invocation of weak PRNG in mm/slab.c
2022-12-13 10:34 ` [PATCH v2 1/3] Replace invocation of weak PRNG in mm/slab.c david.keisarschm
@ 2022-12-13 15:02 ` Matthew Wilcox
0 siblings, 0 replies; 5+ messages in thread
From: Matthew Wilcox @ 2022-12-13 15:02 UTC (permalink / raw)
To: david.keisarschm
Cc: linux-kernel, Christoph Lameter, Pekka Enberg, David Rientjes,
Joonsoo Kim, Andrew Morton, Vlastimil Babka, Roman Gushchin,
Hyeonggon Yoo, aksecurity, ilay.bahat1, linux-mm
On Tue, Dec 13, 2022 at 12:34:57PM +0200, david.keisarschm@mail.huji.ac.il wrote:
> From: David <david.keisarschm@mail.huji.ac.il>
It's normal to include the surname in your sign-off, fwiw.
> @@ -2447,7 +2443,7 @@ static bool shuffle_freelist(struct kmem_cache *cachep, struct slab *slab)
>
> /* Fisher-Yates shuffle */
> for (i = count - 1; i > 0; i--) {
> - rand = prandom_u32_state(&state.rnd_state);
> + rand = get_random_u32();
> rand %= (i + 1);
Shouldn't this be "rand = get_random_u32_below(i + 1)"?
> swap_free_obj(slab, i, rand);
> }
> --
> 2.38.0
>
>