* [patch 5/5] Apply the PG_sensitive flag to the CryptoAPI subsystem
@ 2009-05-20 19:05 Larry H.
  2009-05-30 18:05 ` Ingo Molnar
  0 siblings, 1 reply; 4+ messages in thread
From: Larry H. @ 2009-05-20 19:05 UTC (permalink / raw)
  To: linux-kernel
  Cc: Linus Torvalds, linux-mm, Ingo Molnar, pageexec, linux-crypto

This patch deploys the use of the PG_sensitive page allocator flag
within the CryptoAPI subsystem, in all relevant locations where
algorithm and key contexts are allocated.

Some calls to memset for zeroing the buffers have been removed to
avoid duplication of the sanitizing process, since this is already
taken care of by the allocator during page freeing.
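
For reference, the mechanism this patch relies on (introduced by the
earlier patches in this series) boils down to roughly the following.
This is only a sketch: the flag value, helper names and hook placement
are illustrative and differ in the actual patches.

/*
 * Sketch of the PG_sensitive / GFP_SENSITIVE plumbing assumed here.
 * The Set/Clear/PageSensitive() helpers would come from a
 * PAGEFLAG(Sensitive, sensitive) declaration for the new PG_sensitive
 * page flag; the gfp bit value below is made up.
 */
#define __GFP_SENSITIVE		((__force gfp_t)0x1000000u)
#define GFP_SENSITIVE		__GFP_SENSITIVE

/* allocation side: remember that this page will hold secrets */
static inline void prep_sensitive_page(struct page *page, gfp_t gfp_mask)
{
	if (gfp_mask & __GFP_SENSITIVE)
		SetPageSensitive(page);
}

/* free side: scrub such pages before they can be reused */
static inline void sanitize_sensitive_pages(struct page *page, int order)
{
	int i;

	if (!PageSensitive(page))
		return;
	for (i = 0; i < (1 << order); i++)
		clear_highpage(page + i);
	ClearPageSensitive(page);
}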

The only noticeable impact on performance might be in the blkcipher
modifications, although this is likely negligible and balanced with
the security benefits of this patch in the long term.

Signed-off-by: Larry H. <research@subreption.com>

---
 crypto/ablkcipher.c |    3 +--
 crypto/aead.c       |    5 ++---
 crypto/ahash.c      |    3 +--
 crypto/algapi.c     |    2 +-
 crypto/api.c        |    7 +++----
 crypto/authenc.c    |    2 +-
 crypto/blkcipher.c  |   11 +++++------
 crypto/cipher.c     |    3 +--
 crypto/cryptd.c     |    2 +-
 crypto/gcm.c        |    6 +++---
 crypto/hash.c       |    3 +--
 crypto/rng.c        |    2 +-
 crypto/seqiv.c      |    5 +++--
 crypto/shash.c      |    3 +--
 14 files changed, 25 insertions(+), 32 deletions(-)

Index: linux-2.6/crypto/ablkcipher.c
===================================================================
--- linux-2.6.orig/crypto/ablkcipher.c
+++ linux-2.6/crypto/ablkcipher.c
@@ -35,14 +35,13 @@ static int setkey_unaligned(struct crypt
 	unsigned long absize;
 
 	absize = keylen + alignmask;
-	buffer = kmalloc(absize, GFP_ATOMIC);
+	buffer = kmalloc(absize, GFP_ATOMIC | GFP_SENSITIVE);
 	if (!buffer)
 		return -ENOMEM;
 
 	alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
 	memcpy(alignbuffer, key, keylen);
 	ret = cipher->setkey(tfm, alignbuffer, keylen);
-	memset(alignbuffer, 0, keylen);
 	kfree(buffer);
 	return ret;
 }
Index: linux-2.6/crypto/aead.c
===================================================================
--- linux-2.6.orig/crypto/aead.c
+++ linux-2.6/crypto/aead.c
@@ -33,14 +33,13 @@ static int setkey_unaligned(struct crypt
 	unsigned long absize;
 
 	absize = keylen + alignmask;
-	buffer = kmalloc(absize, GFP_ATOMIC);
+	buffer = kmalloc(absize, GFP_ATOMIC | GFP_SENSITIVE);
 	if (!buffer)
 		return -ENOMEM;
 
 	alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
 	memcpy(alignbuffer, key, keylen);
 	ret = aead->setkey(tfm, alignbuffer, keylen);
-	memset(alignbuffer, 0, keylen);
 	kfree(buffer);
 	return ret;
 }
@@ -232,7 +231,7 @@ struct crypto_instance *aead_geniv_alloc
 	if (IS_ERR(name))
 		return ERR_PTR(err);
 
-	inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
+	inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL | GFP_SENSITIVE);
 	if (!inst)
 		return ERR_PTR(-ENOMEM);
 
Index: linux-2.6/crypto/ahash.c
===================================================================
--- linux-2.6.orig/crypto/ahash.c
+++ linux-2.6/crypto/ahash.c
@@ -138,14 +138,13 @@ static int ahash_setkey_unaligned(struct
 	unsigned long absize;
 
 	absize = keylen + alignmask;
-	buffer = kmalloc(absize, GFP_ATOMIC);
+	buffer = kmalloc(absize, GFP_ATOMIC | GFP_SENSITIVE);
 	if (!buffer)
 		return -ENOMEM;
 
 	alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
 	memcpy(alignbuffer, key, keylen);
 	ret = ahash->setkey(tfm, alignbuffer, keylen);
-	memset(alignbuffer, 0, keylen);
 	kfree(buffer);
 	return ret;
 }
Index: linux-2.6/crypto/algapi.c
===================================================================
--- linux-2.6.orig/crypto/algapi.c
+++ linux-2.6/crypto/algapi.c
@@ -634,7 +634,7 @@ struct crypto_instance *crypto_alloc_ins
 	struct crypto_spawn *spawn;
 	int err;
 
-	inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
+	inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL | GFP_SENSITIVE);
 	if (!inst)
 		return ERR_PTR(-ENOMEM);
 
Index: linux-2.6/crypto/api.c
===================================================================
--- linux-2.6.orig/crypto/api.c
+++ linux-2.6/crypto/api.c
@@ -114,7 +114,7 @@ struct crypto_larval *crypto_larval_allo
 {
 	struct crypto_larval *larval;
 
-	larval = kzalloc(sizeof(*larval), GFP_KERNEL);
+	larval = kzalloc(sizeof(*larval), GFP_KERNEL | GFP_SENSITIVE);
 	if (!larval)
 		return ERR_PTR(-ENOMEM);
 
@@ -380,7 +380,7 @@ struct crypto_tfm *__crypto_alloc_tfm(st
 	int err = -ENOMEM;
 
 	tfm_size = sizeof(*tfm) + crypto_ctxsize(alg, type, mask);
-	tfm = kzalloc(tfm_size, GFP_KERNEL);
+	tfm = kzalloc(tfm_size, GFP_KERNEL | GFP_SENSITIVE);
 	if (tfm == NULL)
 		goto out_err;
 
@@ -476,7 +476,7 @@ struct crypto_tfm *crypto_create_tfm(str
 	tfmsize = frontend->tfmsize;
 	total = tfmsize + sizeof(*tfm) + frontend->extsize(alg, frontend);
 
-	mem = kzalloc(total, GFP_KERNEL);
+	mem = kzalloc(total, GFP_KERNEL | GFP_SENSITIVE);
 	if (mem == NULL)
 		goto out_err;
 
@@ -592,7 +592,6 @@ void crypto_destroy_tfm(void *mem, struc
 		alg->cra_exit(tfm);
 	crypto_exit_ops(tfm);
 	crypto_mod_put(alg);
-	memset(mem, 0, size);
 	kfree(mem);
 }
 EXPORT_SYMBOL_GPL(crypto_destroy_tfm);
Index: linux-2.6/crypto/authenc.c
===================================================================
--- linux-2.6.orig/crypto/authenc.c
+++ linux-2.6/crypto/authenc.c
@@ -397,7 +397,7 @@ static struct crypto_instance *crypto_au
 	if (IS_ERR(enc_name))
 		goto out_put_auth;
 
-	inst = kzalloc(sizeof(*inst) + sizeof(*ctx), GFP_KERNEL);
+	inst = kzalloc(sizeof(*inst) + sizeof(*ctx), GFP_KERNEL | GFP_SENSITIVE);
 	err = -ENOMEM;
 	if (!inst)
 		goto out_put_auth;
Index: linux-2.6/crypto/blkcipher.c
===================================================================
--- linux-2.6.orig/crypto/blkcipher.c
+++ linux-2.6/crypto/blkcipher.c
@@ -161,7 +161,7 @@ static inline int blkcipher_next_slow(st
 
 	n = aligned_bsize * 3 - (alignmask + 1) +
 	    (alignmask & ~(crypto_tfm_ctx_alignment() - 1));
-	walk->buffer = kmalloc(n, GFP_ATOMIC);
+	walk->buffer = kmalloc(n, GFP_ATOMIC | GFP_SENSITIVE);
 	if (!walk->buffer)
 		return blkcipher_walk_done(desc, walk, -ENOMEM);
 
@@ -242,7 +242,7 @@ static int blkcipher_walk_next(struct bl
 	    !scatterwalk_aligned(&walk->out, alignmask)) {
 		walk->flags |= BLKCIPHER_WALK_COPY;
 		if (!walk->page) {
-			walk->page = (void *)__get_free_page(GFP_ATOMIC);
+			walk->page = (void *)__get_free_page(GFP_ATOMIC|GFP_SENSITIVE);
 			if (!walk->page)
 				n = 0;
 		}
@@ -287,7 +287,7 @@ static inline int blkcipher_copy_iv(stru
 	u8 *iv;
 
 	size += alignmask & ~(crypto_tfm_ctx_alignment() - 1);
-	walk->buffer = kmalloc(size, GFP_ATOMIC);
+	walk->buffer = kmalloc(size, GFP_ATOMIC | GFP_SENSITIVE);
 	if (!walk->buffer)
 		return -ENOMEM;
 
@@ -366,14 +366,13 @@ static int setkey_unaligned(struct crypt
 	unsigned long absize;
 
 	absize = keylen + alignmask;
-	buffer = kmalloc(absize, GFP_ATOMIC);
+	buffer = kmalloc(absize, GFP_ATOMIC | GFP_SENSITIVE);
 	if (!buffer)
 		return -ENOMEM;
 
 	alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
 	memcpy(alignbuffer, key, keylen);
 	ret = cipher->setkey(tfm, alignbuffer, keylen);
-	memset(alignbuffer, 0, keylen);
 	kfree(buffer);
 	return ret;
 }
@@ -569,7 +568,7 @@ struct crypto_instance *skcipher_geniv_a
 	if (IS_ERR(name))
 		return ERR_PTR(err);
 
-	inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
+	inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL | GFP_SENSITIVE);
 	if (!inst)
 		return ERR_PTR(-ENOMEM);
 
Index: linux-2.6/crypto/cipher.c
===================================================================
--- linux-2.6.orig/crypto/cipher.c
+++ linux-2.6/crypto/cipher.c
@@ -30,14 +30,13 @@ static int setkey_unaligned(struct crypt
 	unsigned long absize;
 
 	absize = keylen + alignmask;
-	buffer = kmalloc(absize, GFP_ATOMIC);
+	buffer = kmalloc(absize, GFP_ATOMIC | GFP_SENSITIVE);
 	if (!buffer)
 		return -ENOMEM;
 
 	alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
 	memcpy(alignbuffer, key, keylen);
 	ret = cia->cia_setkey(tfm, alignbuffer, keylen);
-	memset(alignbuffer, 0, keylen);
 	kfree(buffer);
 	return ret;
 
Index: linux-2.6/crypto/cryptd.c
===================================================================
--- linux-2.6.orig/crypto/cryptd.c
+++ linux-2.6/crypto/cryptd.c
@@ -196,7 +196,7 @@ static struct crypto_instance *cryptd_al
 	struct cryptd_instance_ctx *ctx;
 	int err;
 
-	inst = kzalloc(sizeof(*inst) + sizeof(*ctx), GFP_KERNEL);
+	inst = kzalloc(sizeof(*inst) + sizeof(*ctx), GFP_KERNEL | GFP_SENSITIVE);
 	if (!inst) {
 		inst = ERR_PTR(-ENOMEM);
 		goto out;
Index: linux-2.6/crypto/gcm.c
===================================================================
--- linux-2.6.orig/crypto/gcm.c
+++ linux-2.6/crypto/gcm.c
@@ -208,7 +208,7 @@ static int crypto_gcm_setkey(struct cryp
 				       CRYPTO_TFM_RES_MASK);
 
 	data = kzalloc(sizeof(*data) + crypto_ablkcipher_reqsize(ctr),
-		       GFP_KERNEL);
+		       GFP_KERNEL | GFP_SENSITIVE);
 	if (!data)
 		return -ENOMEM;
 
@@ -454,7 +454,7 @@ static struct crypto_instance *crypto_gc
 	if ((algt->type ^ CRYPTO_ALG_TYPE_AEAD) & algt->mask)
 		return ERR_PTR(-EINVAL);
 
-	inst = kzalloc(sizeof(*inst) + sizeof(*ctx), GFP_KERNEL);
+	inst = kzalloc(sizeof(*inst) + sizeof(*ctx), GFP_KERNEL | GFP_SENSITIVE);
 	if (!inst)
 		return ERR_PTR(-ENOMEM);
 
@@ -703,7 +703,7 @@ static struct crypto_instance *crypto_rf
 	if (IS_ERR(ccm_name))
 		return ERR_PTR(err);
 
-	inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
+	inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL | GFP_SENSITIVE);
 	if (!inst)
 		return ERR_PTR(-ENOMEM);
 
Index: linux-2.6/crypto/hash.c
===================================================================
--- linux-2.6.orig/crypto/hash.c
+++ linux-2.6/crypto/hash.c
@@ -35,14 +35,13 @@ static int hash_setkey_unaligned(struct 
 	unsigned long absize;
 
 	absize = keylen + alignmask;
-	buffer = kmalloc(absize, GFP_ATOMIC);
+	buffer = kmalloc(absize, GFP_ATOMIC | GFP_SENSITIVE);
 	if (!buffer)
 		return -ENOMEM;
 
 	alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
 	memcpy(alignbuffer, key, keylen);
 	ret = alg->setkey(crt, alignbuffer, keylen);
-	memset(alignbuffer, 0, keylen);
 	kfree(buffer);
 	return ret;
 }
Index: linux-2.6/crypto/rng.c
===================================================================
--- linux-2.6.orig/crypto/rng.c
+++ linux-2.6/crypto/rng.c
@@ -32,7 +32,7 @@ static int rngapi_reset(struct crypto_rn
 	int err;
 
 	if (!seed && slen) {
-		buf = kmalloc(slen, GFP_KERNEL);
+		buf = kmalloc(slen, GFP_KERNEL | GFP_SENSITIVE);
 		if (!buf)
 			return -ENOMEM;
 
Index: linux-2.6/crypto/seqiv.c
===================================================================
--- linux-2.6.orig/crypto/seqiv.c
+++ linux-2.6/crypto/seqiv.c
@@ -115,9 +115,10 @@ static int seqiv_givencrypt(struct skcip
 
 	if (unlikely(!IS_ALIGNED((unsigned long)info,
 				 crypto_ablkcipher_alignmask(geniv) + 1))) {
-		info = kmalloc(ivsize, req->creq.base.flags &
+		info = kmalloc(ivsize, (req->creq.base.flags &
 				       CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL:
-								  GFP_ATOMIC);
+								  GFP_ATOMIC)
+					| GFP_SENSITIVE);
 		if (!info)
 			return -ENOMEM;
 
Index: linux-2.6/crypto/shash.c
===================================================================
--- linux-2.6.orig/crypto/shash.c
+++ linux-2.6/crypto/shash.c
@@ -37,14 +37,13 @@ static int shash_setkey_unaligned(struct
 	int err;
 
 	absize = keylen + (alignmask & ~(CRYPTO_MINALIGN - 1));
-	buffer = kmalloc(absize, GFP_KERNEL);
+	buffer = kmalloc(absize, GFP_KERNEL | GFP_SENSITIVE);
 	if (!buffer)
 		return -ENOMEM;
 
 	alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
 	memcpy(alignbuffer, key, keylen);
 	err = shash->setkey(tfm, alignbuffer, keylen);
-	memset(alignbuffer, 0, keylen);
 	kfree(buffer);
 	return err;
 }


* Re: [patch 5/5] Apply the PG_sensitive flag to the CryptoAPI subsystem
  2009-05-20 19:05 [patch 5/5] Apply the PG_sensitive flag to the CryptoAPI subsystem Larry H.
@ 2009-05-30 18:05 ` Ingo Molnar
  2009-05-31 10:14   ` pageexec
  0 siblings, 1 reply; 4+ messages in thread
From: Ingo Molnar @ 2009-05-30 18:05 UTC (permalink / raw)
  To: Larry H.
  Cc: linux-kernel, Linus Torvalds, linux-mm, Ingo Molnar, pageexec,
	linux-crypto, Pekka Enberg, Peter Zijlstra


* Larry H. <research@subreption.com> wrote:

> This patch deploys the use of the PG_sensitive page allocator flag 
> within the CryptoAPI subsystem, in all relevant locations where 
> algorithm and key contexts are allocated.
> 
> Some calls to memset for zeroing the buffers have been removed to 
> avoid duplication of the sanitizing process, since this is already 
> taken care of by the allocator during page freeing.
> 
> The only noticeable impact on performance might be in the 
> blkcipher modifications, although this is likely negligible and 
> balanced with the security benefits of this patch in the long 
> term.

I think there's a rather significant omission here: there's no 
discussion about on-kernel-stack information leaking out.

If a thread that does a crypto call happens to leave sensitive 
on-stack data (this can happen easily as stack variables are not 
cleared on return), or if a future variant or modification of a 
crypto algorithm leaves such information around - then there's 
nothing that keeps that data from potentially leaking out.

This is not academic and it happens all the time: only look at 
various crash dumps on lkml. For example this crash shows such 
leaked information:

[   96.138788]  [<ffffffff810ab62e>] perf_counter_exit_task+0x10e/0x3f3
[   96.145464]  [<ffffffff8104cf46>] do_exit+0x2e7/0x722
[   96.150837]  [<ffffffff810630cf>] ? up_read+0x9/0xb
[   96.156036]  [<ffffffff8151cc0b>] ? do_page_fault+0x27d/0x2a5
[   96.162141]  [<ffffffff8104d3f4>] do_group_exit+0x73/0xa0
[   96.167860]  [<ffffffff8104d433>] sys_exit_group+0x12/0x16
[   96.173665]  [<ffffffff8100bb2b>] system_call_fastpath+0x16/0x1b

The 'ffffffff8151cc0b' 64-bit word is actually a leftover from a 
previous system context. ( And this is at the bottom of the stack 
that gets cleared all the time - the top of the kernel stack is a 
lot more persistent in practice and crypto calls tend to have a 
healthy stack footprint. )

So IMO the GFP_SENSITIVE facility (beyond being a misnomer - it should 
be something like GFP_NON_PERSISTENT instead) actually results in 
_worse_ security in the end: because people (and organizations) 
'think' that their keys are safe against information leaks via this 
space, while they are not. The kernel stack can be freed, be reused 
by something else partially and then written out to disk (say as 
part of hibernation) where it's recoverable from the disk image.

So this whole facility probably only makes sense if all kernel 
stacks that handle sensitive data are zeroed on free. But i haven't 
seen any kernel thread stack clearing functionality in this 
patch-set - is it an intentional omission? (or have i missed some 
aspect of the patch-set)

Also, there's no discussion about long-lived threads keeping 
sensitive information in their kernel stack indefinitely.

Thanks,

	Ingo


* Re: [patch 5/5] Apply the PG_sensitive flag to the CryptoAPI subsystem
  2009-05-30 18:05 ` Ingo Molnar
@ 2009-05-31 10:14   ` pageexec
  2009-05-31 10:34     ` Alan Cox
  0 siblings, 1 reply; 4+ messages in thread
From: pageexec @ 2009-05-31 10:14 UTC (permalink / raw)
  To: Larry H., Ingo Molnar
  Cc: linux-kernel, Linus Torvalds, linux-mm, Ingo Molnar,
	linux-crypto, Pekka Enberg, Peter Zijlstra

On 30 May 2009 at 20:05, Ingo Molnar wrote:

> I think there's a rather significant omission here: there's no 
> discussion about on-kernel-stack information leaking out.
> 
> If a thread that does a crypto call happens to leave sensitive 
> on-stack data (this can happen easily as stack variables are not 
> cleared on return), or if a future variant or modification of a 
> crypto algorithm leaves such information around - then there's 
> nothing that keeps that data from potentially leaking out.
> 
> This is not academic and it happens all the time: only look at 
> various crash dumps on lkml. For example this crash shows such 
> leaked information:
> 
> [   96.138788]  [<ffffffff810ab62e>] perf_counter_exit_task+0x10e/0x3f3
> [   96.145464]  [<ffffffff8104cf46>] do_exit+0x2e7/0x722
> [   96.150837]  [<ffffffff810630cf>] ? up_read+0x9/0xb
> [   96.156036]  [<ffffffff8151cc0b>] ? do_page_fault+0x27d/0x2a5
> [   96.162141]  [<ffffffff8104d3f4>] do_group_exit+0x73/0xa0
> [   96.167860]  [<ffffffff8104d433>] sys_exit_group+0x12/0x16
> [   96.173665]  [<ffffffff8100bb2b>] system_call_fastpath+0x16/0x1b
> 
> The 'ffffffff8151cc0b' 64-bit word is actually a leftover from a 
> previous system context. ( And this is at the bottom of the stack 
> that gets cleared all the time - the top of the kernel stack is a 
> lot more persistent in practice and crypto calls tend to have a 
> healthy stack footprint. )
> 
> So IMO the GFP_SENSITIVE facility (beyond being a misnomer - it should 
> be something like GFP_NON_PERSISTENT instead) actually results in 
> _worse_ security in the end: because people (and organizations) 
> 'think' that their keys are safe against information leaks via this 
> space, while they are not. The kernel stack can be freed, be reused 
> by something else partially and then written out to disk (say as 
> part of hibernation) where it's recoverable from the disk image.
> 
> So this whole facility probably only makes sense if all kernel 
> stacks that handle sensitive data are zeroed on free. But i haven't 
> seen any kernel thread stack clearing functionality in this 
> patch-set - is it an intentional omission? (or have i missed some 
> aspect of the patch-set)

i think you missed the fact that the page flag based approach had been
abandoned already in favour of unconditional page sanitizing on free
(modulo a kernel boot option). the other approach of doing the sanitizing
on a smaller allocation base (kfree, etc) is orthogonal to this one since
they address the lifetime problem at different levels (i'm just making it
clear since you brought up a freed kernel stack ending up in a hibernation
image and leaving data there, that obviously won't happen as the freed
kernel stack pages will be sanitized on free).
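
for illustration, the unconditional variant amounts to roughly this
(sketch only: the boot parameter name and the exact hook into the page
freeing path are made up here):

/* boot option to turn the sanitizing off, e.g. page_sanitize=0 */
static int sanitize_pages __read_mostly = 1;

static int __init sanitize_pages_setup(char *str)
{
	sanitize_pages = simple_strtoul(str, NULL, 0) != 0;
	return 1;
}
__setup("page_sanitize=", sanitize_pages_setup);

/* called for every (order) page block being freed, no page flag needed */
static inline void sanitize_freed_pages(struct page *page, int order)
{
	int i;

	if (!sanitize_pages)
		return;
	for (i = 0; i < (1 << order); i++)
		clear_highpage(page + i);
}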

now as for kernel stacks. first of all, the original idea of sanitization
was meant to address userland secrets staying around for too long; little
if any of that is long-lived on kernel stacks.

kernel data lifetime got affected by virtue of doing the sanitization at
the lowest possible level of the page allocator (which was in turn favoured
over the page flag and strict 'userland data only' sanitization due to its
simplicity, literally a few lines of code). so consider that a fortunate
side effect.

with that said, there's certainly room for evolution, both in addressing
more kinds of data (it's not only the kernel stack you mention but also the
userland stack whose unused pages can be taken back for example) and/or
reducing lifetime further. i personally never bothered with any of that
because the original request/goal was already addressed.

> Also, there's no discussion about long-lived threads keeping 
> sensitive information in their kernel stack indefinitely.

kernel stack clearing isn't hard to do, just do it on every syscall exit
and in the infinite loop for kernel threads.
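
roughly along these lines (sketch only: the helper name is made up, the
margin is arbitrary, and a real implementation would live in the arch
syscall exit path where the exact stack pointer is known):

/*
 * zero the currently unused part of the current kernel stack, i.e.
 * everything between the end of thread_info and a safety margin below
 * the live frames.  meant to be called late on syscall exit and from
 * the main loop of kernel threads.
 */
#define KSTACK_CLEAR_SLACK	256	/* keep clear of memset's own frame */

static noinline void clear_unused_kstack(void)
{
	unsigned long top = (unsigned long)__builtin_frame_address(0) -
			    KSTACK_CLEAR_SLACK;
	unsigned long bottom = (unsigned long)task_stack_page(current) +
			       sizeof(struct thread_info);

	if (top > bottom)
		memset((void *)bottom, 0, top - bottom);
}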



* Re: [patch 5/5] Apply the PG_sensitive flag to the CryptoAPI subsystem
  2009-05-31 10:14   ` pageexec
@ 2009-05-31 10:34     ` Alan Cox
  0 siblings, 0 replies; 4+ messages in thread
From: Alan Cox @ 2009-05-31 10:34 UTC (permalink / raw)
  To: pageexec
  Cc: Larry H.,
	Ingo Molnar, linux-kernel, Linus Torvalds, linux-mm, Ingo Molnar,
	linux-crypto, Pekka Enberg, Peter Zijlstra

> > Also, there's no discussion about long-lived threads keeping 
> > sensitive information in their kernel stack indefinitely.
> 
> kernel stack clearing isn't hard to do, just do it on every syscall exit
> and in the infinite loop for kernel threads.

Actually that is probably not as important. In most cases you would be
leaking data between syscalls made by the same thread. 

