kernel-hardening.lists.openwall.com archive mirror
* Re: [RFC PATCH 0/6] Process-local memory allocations
       [not found] <cover.1542722764.git.jsteckli@amazon.de>
@ 2018-11-20 23:26 ` Tycho Andersen
  2018-11-21 17:18   ` Igor Stoppa
  2018-12-13 14:28   ` Julian Stecklina
  0 siblings, 2 replies; 10+ messages in thread
From: Tycho Andersen @ 2018-11-20 23:26 UTC
  To: Julian Stecklina
  Cc: kernel-hardening, Liran Alon, Jonathan Adams, David Woodhouse,
	Igor Stoppa

On Tue, Nov 20, 2018 at 03:07:59PM +0100, Julian Stecklina wrote:
> In a world with processor information leak vulnerabilities, having a treasure
> trove of information available for leaking in the global kernel address space is
> starting to be a liability. The biggest offender is the linear mapping of all
> physical memory and there are already efforts (XPFO) to start addressing this.
> In this patch series, I'd like to propose breaking up the kernel address space
> further and introduce process-local mappings in the kernel.
> 
> The rationale is that there are allocations in the kernel containing data that
> should only be accessible when the kernel is executing in the context of a
> specific process. A prime example is KVM vCPU state. This patch series
> introduces process-local memory in the kernel address space by claiming a PGD
> entry for this specific purpose. Then it converts KVM on x86 to use these new
> primitives to store GPR and FPU registers of vCPUs. KVM is a good testing
> ground, because it makes sure userspace can only interact with a VM from a
> single process.
> 
> Process-local allocations in the kernel can be part of a robust L1TF mitigation
> strategy that works even with SMT enabled. The specific goal here is to make it
> harder for a random thread using a cache load gadget (usually a bounds check of a
> system call argument plus array access suffices) to prefetch interesting data
> into the L1 cache and use L1TF to leak this data.
> 
> The patch set applies to kvm/next [1]. Feedback is very welcome, both about the
> general approach and the actual implementation. As far as testing goes, the KVM
> unit tests seem happy on Intel. AMD is only compile tested at the moment.

This seems similar in spirit to prmem:
https://lore.kernel.org/lkml/20181023213504.28905-2-igor.stoppa@huawei.com/T/#u

Basically, we have some special memory that we want to leave unmapped
(read only) most of the time, but map it (writable) sometimes. I
wonder if we should merge the APIs into one

spmemalloc(size, flags, PRLOCAL)

type thing? Could we share some infrastructure then? (I also didn't
follow what happened to the patches Nadav was going to send that might
replace prmem somehow.)
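
For illustration, such a unified front end might look roughly like the
sketch below; every name in it (spmem_scope, spmemalloc, spmem_free) is
hypothetical and comes from neither series:

enum spmem_scope {
        SPMEM_GLOBAL_WRITE_RARE,  /* prmem-style: readable, rarely writable */
        SPMEM_PROCESS_LOCAL,      /* proclocal-style: mapped in one mm only */
};

/* Allocate from / free to the backend selected by scope. */
void *spmemalloc(size_t size, gfp_t flags, enum spmem_scope scope);
void spmem_free(void *ptr);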

Tycho


* Re: [RFC PATCH 0/6] Process-local memory allocations
  2018-11-20 23:26 ` [RFC PATCH 0/6] Process-local memory allocations Tycho Andersen
@ 2018-11-21 17:18   ` Igor Stoppa
  2018-11-21 17:48     ` Tycho Andersen
  2018-12-13 14:28   ` Julian Stecklina
  1 sibling, 1 reply; 10+ messages in thread
From: Igor Stoppa @ 2018-11-21 17:18 UTC
  To: Tycho Andersen, Julian Stecklina
  Cc: kernel-hardening, Liran Alon, Jonathan Adams, David Woodhouse

Hi,

On 21/11/2018 01:26, Tycho Andersen wrote:
> On Tue, Nov 20, 2018 at 03:07:59PM +0100, Julian Stecklina wrote:
>> In a world with processor information leak vulnerabilities, having a treasure
>> trove of information available for leaking in the global kernel address space is
>> starting to be a liability. The biggest offender is the linear mapping of all
>> physical memory and there are already efforts (XPFO) to start addressing this.
>> In this patch series, I'd like to propose breaking up the kernel address space
>> further and introduce process-local mappings in the kernel.
>>
>> The rationale is that there are allocations in the kernel containing data that
>> should only be accessible when the kernel is executing in the context of a
>> specific process. A prime example is KVM vCPU state. This patch series
>> introduces process-local memory in the kernel address space by claiming a PGD
>> entry for this specific purpose. Then it converts KVM on x86 to use these new
>> primitives to store GPR and FPU registers of vCPUs. KVM is a good testing
>> ground, because it makes sure userspace can only interact with a VM from a
>> single process.
>>
>> Process-local allocations in the kernel can be part of a robust L1TF mitigation
>> strategy that works even with SMT enabled. The specific goal here is to make it
>> harder for a random thread using a cache load gadget (usually a bounds check of a
>> system call argument plus array access suffices) to prefetch interesting data
>> into the L1 cache and use L1TF to leak this data.
>>
>> The patch set applies to kvm/next [1]. Feedback is very welcome, both about the
>> general approach and the actual implementation. As far as testing goes, the KVM
>> unit tests seem happy on Intel. AMD is only compile tested at the moment.

Where is the full set of patches?
I'm sorry, I searched both KVM and LKML archives, but I couldn't find it.

> This seems similar in spirit to prmem:
> https://lore.kernel.org/lkml/20181023213504.28905-2-igor.stoppa@huawei.com/T/#u
> 
> Basically, we have some special memory that we want to leave unmapped
> (read only) most of the time, but map it (writable) sometimes. I
> wonder if we should merge the APIs into one
> 
> spmemalloc(size, flags, PRLOCAL)
> 
> type thing? Could we share some infrastructure then?

From what I can understand from the intro alone, this seems to focus on 
"local", process-related information, while prmem was mostly aimed, at 
least at this stage, at system-level features like LSM or SELinux.
For those, protection from reading has probably little value, and they 
are used quite often, typically on a critical path. The main thing is to 
prevent rogue writes; reads are not a problem, so hiding/unhiding them 
even from read operations might not be very useful.

However, other components, like the kernel keyring, are used less 
frequently and might be worth protecting even from read operations.

> (I also didn't
> follow what happened to the patches Nadav was going to send that might
> replace prmem somehow.)

I just replied to the old prmem thread - I still have some doubts about 
the implementation; however, my understanding is that I could replicate, 
or at least base on those patches, the very low-level part of the 
write-rare mechanism.

--
igor


* Re: [RFC PATCH 0/6] Process-local memory allocations
  2018-11-21 17:18   ` Igor Stoppa
@ 2018-11-21 17:48     ` Tycho Andersen
  2018-11-21 18:12       ` Igor Stoppa
       [not found]       ` <1542904826.6344.1.camel@amazon.de>
  0 siblings, 2 replies; 10+ messages in thread
From: Tycho Andersen @ 2018-11-21 17:48 UTC
  To: Igor Stoppa
  Cc: Julian Stecklina, kernel-hardening, Liran Alon, Jonathan Adams,
	David Woodhouse

On Wed, Nov 21, 2018 at 07:18:17PM +0200, Igor Stoppa wrote:
> Hi,
> 
> On 21/11/2018 01:26, Tycho Andersen wrote:
> > On Tue, Nov 20, 2018 at 03:07:59PM +0100, Julian Stecklina wrote:
> > > In a world with processor information leak vulnerabilities, having a treasure
> > > trove of information available for leaking in the global kernel address space is
> > > starting to be a liability. The biggest offender is the linear mapping of all
> > > physical memory and there are already efforts (XPFO) to start addressing this.
> > > In this patch series, I'd like to propose breaking up the kernel address space
> > > further and introduce process-local mappings in the kernel.
> > > 
> > > The rationale is that there are allocations in the kernel containing data that
> > > should only be accessible when the kernel is executing in the context of a
> > > specific process. A prime example is KVM vCPU state. This patch series
> > > introduces process-local memory in the kernel address space by claiming a PGD
> > > entry for this specific purpose. Then it converts KVM on x86 to use these new
> > > primitives to store GPR and FPU registers of vCPUs. KVM is a good testing
> > > ground, because it makes sure userspace can only interact with a VM from a
> > > single process.
> > > 
> > > Process-local allocations in the kernel can be part of a robust L1TF mitigation
> > > strategy that works even with SMT enabled. The specific goal here is to make it
> > > harder for a random thread using a cache load gadget (usually a bounds check of a
> > > system call argument plus array access suffices) to prefetch interesting data
> > > into the L1 cache and use L1TF to leak this data.
> > > 
> > > The patch set applies to kvm/next [1]. Feedback is very welcome, both about the
> > > general approach and the actual implementation. As far as testing goes, the KVM
> > > unit tests seem happy on Intel. AMD is only compile tested at the moment.
> 
> Where is the full set of patches?
> I'm sorry, I searched both KVM and LKML archives, but I couldn't find it.

It looks like they were only sent to kernel hardening, but not sent to
the archives? I only see our replies here:

https://www.openwall.com/lists/kernel-hardening/

Julian, perhaps you can re-send with a CC to lkml as well?

> > This seems similar in spirit to prmem:
> > https://lore.kernel.org/lkml/20181023213504.28905-2-igor.stoppa@huawei.com/T/#u
> > 
> > Basically, we have some special memory that we want to leave unmapped
> > (read only) most of the time, but map it (writable) sometimes. I
> > wonder if we should merge the APIs into one
> > 
> > spmemalloc(size, flags, PRLOCAL)
> > 
> > type thing? Could we share some infrastructure then?
> 
> From what I can understand from the intro alone, this seems to focus on
> "local", process-related information, while prmem was mostly aimed, at
> least at this stage, at system-level features like LSM or SELinux.
> For those, protection from reading has probably little value, and they
> are used quite often, typically on a critical path. The main thing is to
> prevent rogue writes; reads are not a problem, so hiding/unhiding them
> even from read operations might not be very useful.
> 
> However, other components, like the kernel keyring, are used less
> frequently and might be worth protecting even from read operations.

Right, the goals are different, but the idea is basically the same. We
allocate memory in some "special" way. I'm just wondering if we'll be
adding more of these special ways in the future, and if it's worth
synchronizing the APIs so that it's easy for people to use.

> > (I also didn't
> > follow what happened to the patches Nadav was going to send that might
> > replace prmem somehow.)
> 
> I just replied to the old prmem thread - I still have some doubts about
> the implementation; however, my understanding is that I could replicate,
> or at least base on those patches, the very low-level part of the
> write-rare mechanism.

Cool, thanks.

Tycho


* Re: [RFC PATCH 0/6] Process-local memory allocations
  2018-11-21 17:48     ` Tycho Andersen
@ 2018-11-21 18:12       ` Igor Stoppa
       [not found]       ` <1542904826.6344.1.camel@amazon.de>
  1 sibling, 0 replies; 10+ messages in thread
From: Igor Stoppa @ 2018-11-21 18:12 UTC
  To: Tycho Andersen
  Cc: Julian Stecklina, kernel-hardening, Liran Alon, Jonathan Adams,
	David Woodhouse



On 21/11/2018 19:48, Tycho Andersen wrote:
> On Wed, Nov 21, 2018 at 07:18:17PM +0200, Igor Stoppa wrote:
>> Hi,
>>
>> On 21/11/2018 01:26, Tycho Andersen wrote:

[...]

>>> This seems similar in spirit to prmem:
>>> https://lore.kernel.org/lkml/20181023213504.28905-2-igor.stoppa@huawei.com/T/#u
>>>
>>> Basically, we have some special memory that we want to leave unmapped
>>> (read only) most of the time, but map it (writable) sometimes. I
>>> wonder if we should merge the APIs into one
>>>
>>> spmemalloc(size, flags, PRLOCAL)
>>>
>>> type thing? Could we share some infrastructure then?
>>
>> From what I can understand from the intro alone, this seems to focus on
>> "local", process-related information, while prmem was mostly aimed, at
>> least at this stage, at system-level features like LSM or SELinux.
>> For those, protection from reading has probably little value, and they
>> are used quite often, typically on a critical path. The main thing is to
>> prevent rogue writes; reads are not a problem, so hiding/unhiding them
>> even from read operations might not be very useful.
>>
>> However, other components, like the kernel keyring, are used less
>> frequently and might be worth protecting even from read operations.
> 
> Right, the goals are different, but the idea is basically the same. We
> allocate memory in some "special" way. I'm just wondering if we'll be
> adding more of these special ways in the future, and if it's worth
> synchronizing the APIs so that it's easy for people to use.

Yes, I agree. I do not see much use for this "locality" in most of the 
use cases that I have looked into, apart from maybe the kernel keyring,
but it might be possible to add a "scope" property to a memory pool, if 
it is associated with some very specific code.

Doing it for system-level components, however, might introduce too big 
an overhead. In these cases, the scope would stay global.
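
As a rough illustration of the idea, a pool could carry such a scope
property; all names below are made up, not taken from the prmem patches:

enum pool_scope {
        POOL_SCOPE_GLOBAL,      /* visible in every address space */
        POOL_SCOPE_PROCESS,     /* mapped only in the owning mm */
};

struct pmalloc_pool {
        enum pool_scope scope;
        struct mm_struct *owner_mm; /* used only for POOL_SCOPE_PROCESS */
        struct list_head chunks;    /* backing allocations */
};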

But I really need to see the code, before I can say more.

--
igor


* Re: [RFC PATCH 0/6] Process-local memory allocations
       [not found]       ` <1542904826.6344.1.camel@amazon.de>
@ 2018-11-23 16:24         ` Igor Stoppa
  2018-11-23 17:04           ` Solar Designer
  0 siblings, 1 reply; 10+ messages in thread
From: Igor Stoppa @ 2018-11-23 16:24 UTC
  To: Stecklina, Julian, tycho; +Cc: liran.alon, jwadams, kernel-hardening, dwmw2



On 22/11/2018 18:40, Stecklina, Julian wrote:
> On Wed, 2018-11-21 at 10:48 -0700, Tycho Andersen wrote:
>>> Where is the full set of patches?
>>> I'm sorry, I searched both KVM and LKML archives, but I couldn't find it.
>>
>> It looks like they were only sent to kernel hardening, but not sent to
>> the archives? I only see our replies here:
>>
>> https://www.openwall.com/lists/kernel-hardening/
>>
>> Julian, perhaps you can re-send with a CC to lkml as well?
> 
> Will do. I messed up the threading, due to git send-email being hostile to me.
> Sorry for the confusion.

If you are not subscribed to the hardening ML, it will neither archive 
your emails nor forward them to subscribers.

This very mail you wrote is not archived either.

--
igor


* Re: [RFC PATCH 0/6] Process-local memory allocations
  2018-11-23 16:24         ` Igor Stoppa
@ 2018-11-23 17:04           ` Solar Designer
  2018-11-23 17:23             ` Solar Designer
  0 siblings, 1 reply; 10+ messages in thread
From: Solar Designer @ 2018-11-23 17:04 UTC
  To: Igor Stoppa
  Cc: Stecklina, Julian, tycho, liran.alon, jwadams, kernel-hardening, dwmw2

On Fri, Nov 23, 2018 at 06:24:08PM +0200, Igor Stoppa wrote:
> On 22/11/2018 18:40, Stecklina, Julian wrote:
> >On Wed, 2018-11-21 at 10:48 -0700, Tycho Andersen wrote:
> >>>Where is the full set of patches?
> >>>I'm sorry, I searched both KVM and LKML archives, but I couldn't find it.
> >>
> >>It looks like they were only sent to kernel hardening, but not sent to
> >>the archives? I only see our replies here:
> >>
> >>https://www.openwall.com/lists/kernel-hardening/
> >>
> >>Julian, perhaps you can re-send with a CC to lkml as well?
> >
> >Will do. I messed up the threading, due to git send-email being hostile to me.
> >Sorry for the confusion.
> 
> If you are not subscribed to the hardening ML, it will neither archive
> your emails nor forward them to subscribers.

No.  There's no requirement to be subscribed to kernel-hardening in
order to be able to post, nor for messages to be archived.

> This very mail you wrote is not archived either.

Julian's message in fact did not appear on kernel-hardening, and the
reason is that Amazon's mail servers - for reasons unknown to me (and to
an Amazoner with whom I tried discussing it before) sometimes(?!) add
the "Precedence: junk" header on messages.  ezmlm-idx drops such
messages by default (and this behavior is documented).  I'm not eager to
patch this out.  ezmlm-idx also drops messages with "Precedence: bulk",
and it sets that header on messages it sends to list subscribers, which
helps against loops.  Another reason is to avoid vacation auto-replies,
which I think can use either header.  It'd be weird to continue dropping
"bulk", yet start accepting "junk", wouldn't it?  But if this is
unfixable on Amazon's end, I'll have to.
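
For reference, the header in question is a single line among the other
mail headers; a message that ezmlm-idx would drop looks something like
this (illustrative excerpt):

  From: Julian Stecklina <jsteckli@amazon.de>
  Subject: [RFC PATCH 0/6] Process-local memory allocations
  Precedence: junk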

Alexander


* Re: [RFC PATCH 0/6] Process-local memory allocations
  2018-11-23 17:04           ` Solar Designer
@ 2018-11-23 17:23             ` Solar Designer
  0 siblings, 0 replies; 10+ messages in thread
From: Solar Designer @ 2018-11-23 17:23 UTC
  To: Igor Stoppa
  Cc: Stecklina, Julian, tycho, liran.alon, jwadams, kernel-hardening, dwmw2

On Fri, Nov 23, 2018 at 06:04:01PM +0100, Solar Designer wrote:
> On Fri, Nov 23, 2018 at 06:24:08PM +0200, Igor Stoppa wrote:
> > On 22/11/2018 18:40, Stecklina, Julian wrote:
> > >On Wed, 2018-11-21 at 10:48 -0700, Tycho Andersen wrote:
> > >>>Where is the full set of patches?
> > >>>I'm sorry, I searched both KVM and LKML archives, but I couldn't find it.
> > >>
> > >>It looks like they were only sent to kernel hardening, but not sent to
> > >>the archives? I only see our replies here:
> > >>
> > >>https://www.openwall.com/lists/kernel-hardening/
> > >>
> > >>Julian, perhaps you can re-send with a CC to lkml as well?
> > >
> > >Will do. I messed up the threading, due to git send-email being hostile to me.
> > >Sorry for the confusion.
> > 
> > If you are not subscribed to the hardening ML, it will neither archive
> > your emails nor forward them to subscribers.
> 
> No.  There's no requirement to be subscribed to kernel-hardening in
> order to be able to post, nor for messages to be archived.
> 
> > This very mail you wrote is not archived either.
> 
> Julian's message in fact did not appear on kernel-hardening, and the
> reason is that Amazon's mail servers - for reasons unknown to me (and to
> an Amazoner with whom I tried discussing it before) sometimes(?!) add
> the "Precedence: junk" header on messages.  ezmlm-idx drops such
> messages by default (and this behavior is documented).  I'm not eager to
> patch this out.  ezmlm-idx also drops messages with "Precedence: bulk",
> and it sets that header on messages it sends to list subscribers, which
> helps against loops.  Another reason is to avoid vacation auto-replies,
> which I think can use either header.  It'd be weird to continue dropping
> "bulk", yet start accepting "junk", wouldn't it?  But if this is
> unfixable on Amazon's end, I'll have to.

OK, I've just patched the "junk" check out of ezmlm-reject (processing
of messages to the posting addresses), but kept it in ezmlm-weed
(processing of messages to the bounce addresses).  I hope Julian's
messages will be getting through now.

Alexander


* Re: [RFC PATCH 0/6] Process-local memory allocations
  2018-11-20 23:26 ` [RFC PATCH 0/6] Process-local memory allocations Tycho Andersen
  2018-11-21 17:18   ` Igor Stoppa
@ 2018-12-13 14:28   ` Julian Stecklina
  2018-12-14  2:09     ` Tycho Andersen
  2018-12-19 23:00     ` Igor Stoppa
  1 sibling, 2 replies; 10+ messages in thread
From: Julian Stecklina @ 2018-12-13 14:28 UTC
  To: Tycho Andersen
  Cc: kernel-hardening, Liran Alon, Jonathan Adams, David Woodhouse,
	Igor Stoppa

Tycho,

sorry for the late response, I just returned from vacation.

Tycho Andersen <tycho@tycho.ws> writes:

> On Tue, Nov 20, 2018 at 03:07:59PM +0100, Julian Stecklina wrote:
>> In a world with processor information leak vulnerabilities, having a treasure
>> trove of information available for leaking in the global kernel address space is
>> starting to be a liability. The biggest offender is the linear mapping of all
>> physical memory and there are already efforts (XPFO) to start addressing this.
>> In this patch series, I'd like to propose breaking up the kernel address space
>> further and introduce process-local mappings in the kernel.
>> 
>> The rationale is that there are allocations in the kernel containing data that
>> should only be accessible when the kernel is executing in the context of a
>> specific process. A prime example is KVM vCPU state. This patch series
>> introduces process-local memory in the kernel address space by claiming a PGD
>> entry for this specific purpose. Then it converts KVM on x86 to use these new
>> primitives to store GPR and FPU registers of vCPUs. KVM is a good testing
>> ground, because it makes sure userspace can only interact with a VM from a
>> single process.
[...]
> This seems similar in spirit to prmem:
> https://lore.kernel.org/lkml/20181023213504.28905-2-igor.stoppa@huawei.com/T/#u

It's similar in the sense that it adds a new way to allocate memory with
interesting properties. To that extent, it would be useful to have a
common allocation function.

As far as usage is concerned, prmem and process-local memory are very
different. From a quick glance at the prmem patchset, I see that it
works for statically allocated memory. Process-local memory is by
definition bound to the lifetime of a process (or address space, to be
more precise) and thus cannot be statically allocated. It breaks up
the global address space of the kernel and starts binding some
allocations in the kernel to specific processes.

My goal is specifically L1TF mitigation and preventing random parts of
the kernel from prefetching user data, but this approach is equally
effective for ordinary info leak vulnerabilities.
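
As a rough sketch of the intended usage - with type and function names
that are illustrative, not necessarily the ones in the series:

#include <linux/gfp.h>
#include <linux/types.h>

/* Per-process allocation of vCPU state.  The backing pages are
 * removed from the linear mapping and mapped under the PGD entry
 * reserved for process-local memory, so they are reachable only
 * from the owning mm. */
struct vcpu_arch_private {
        u64 regs[16];           /* guest GPRs */
        u8 fpu_state[4096];     /* guest FPU/XSAVE area */
};

static struct vcpu_arch_private *alloc_vcpu_private(void)
{
        /* proclocal_alloc() is a hypothetical name for the series'
         * allocation primitive. */
        return proclocal_alloc(sizeof(struct vcpu_arch_private),
                               GFP_KERNEL);
}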

> Basically, we have some special memory that we want to leave unmapped
> (read only) most of the time, but map it (writable) sometimes. I

Unmapped should really be unmapped, i.e. not present in the page table.
Having it read-only defeats the purpose.

> wonder if we should merge the APIs into one
>
> spmemalloc(size, flags, PRLOCAL)
>
> type thing? Could we share some infrastructure then? (I also didn't
> follow what happened to the patches Nadav was going to send that might
> replace prmem somehow.)

When writing the patch series, I had the feeling that the whole
bookkeeping of what is allocated where is something that could be
abstracted and re-used. Maybe there is already such an abstraction layer
in the kernel and I just didn't find it.

Julian


* Re: [RFC PATCH 0/6] Process-local memory allocations
  2018-12-13 14:28   ` Julian Stecklina
@ 2018-12-14  2:09     ` Tycho Andersen
  2018-12-19 23:00     ` Igor Stoppa
  1 sibling, 0 replies; 10+ messages in thread
From: Tycho Andersen @ 2018-12-14  2:09 UTC
  To: Julian Stecklina
  Cc: kernel-hardening, Liran Alon, Jonathan Adams, David Woodhouse,
	Igor Stoppa

On Thu, Dec 13, 2018 at 03:28:15PM +0100, Julian Stecklina wrote:
> Tycho,
> 
> sorry for the late response, I just returned from vacation.
> 
> Tycho Andersen <tycho@tycho.ws> writes:
> 
> > On Tue, Nov 20, 2018 at 03:07:59PM +0100, Julian Stecklina wrote:
> >> In a world with processor information leak vulnerabilities, having a treasure
> >> trove of information available for leaking in the global kernel address space is
> >> starting to be a liability. The biggest offender is the linear mapping of all
> >> physical memory and there are already efforts (XPFO) to start addressing this.
> >> In this patch series, I'd like to propose breaking up the kernel address space
> >> further and introduce process-local mappings in the kernel.
> >> 
> >> The rationale is that there are allocations in the kernel containing data that
> >> should only be accessible when the kernel is executing in the context of a
> >> specific process. A prime example is KVM vCPU state. This patch series
> >> introduces process-local memory in the kernel address space by claiming a PGD
> >> entry for this specific purpose. Then it converts KVM on x86 to use these new
> >> primitives to store GPR and FPU registers of vCPUs. KVM is a good testing
> >> ground, because it makes sure userspace can only interact with a VM from a
> >> single process.
> [...]
> > This seems similar in spirit to prmem:
> > https://lore.kernel.org/lkml/20181023213504.28905-2-igor.stoppa@huawei.com/T/#u
> 
> It's similar in the sense that it adds a new way to allocate memory with
> interesting properties. To that extent, it would be useful to have a
> common allocation function.
> 
> As far as usage is concerned, prmem and process-local memory are very
> different. From a quick glance at the prmem patchset, I see that it
> works for statically allocated memory. Process-local memory is by
> definition bound to the lifetime of a process (or address space, to be
> more precise) and thus cannot be statically allocated. It breaks up
> the global address space of the kernel and starts binding some
> allocations in the kernel to specific processes.
> 
> My goal is specifically L1TF mitigation and preventing random parts of
> the kernel from prefetching user data, but this approach is equally
> effective for ordinary info leak vulnerabilities.
> 
> > Basically, we have some special memory that we want to leave unmapped
> > (read only) most of the time, but map it (writable) sometimes. I
> 
> Unmapped should really be unmapped, i.e. not present in the page table.
> Having it read-only defeats the purpose.

Yeah, sorry about the crappy notation, the parens were supposed to
indicate the prmem bits, and the non-parens were this patch's bits.

> > wonder if we should merge the APIs into one
> >
> > spmemalloc(size, flags, PRLOCAL)
> >
> > type thing? Could we share some infrastructure then? (I also didn't
> > follow what happened to the patches Nadav was going to send that might
> > replace prmem somehow.)
> 
> When writing the patch series, I had the feeling that the whole
> bookkeeping of what is allocated where is something that could be
> abstracted and re-used. Maybe there is already such an abstraction layer
> in the kernel and I just didn't find it.

Yeah, page_ext isn't a good fit, since you don't want to keep track of
this metadata for most pages, just the ones that have this special
property. Hence the comparison to prmem: that series also wants to
track special properties of some subset of pages.
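
For illustration, bookkeeping for such a sparse set of pages could live
in a side structure keyed by PFN instead of page_ext; the following is
a sketch, not code from either series:

#include <linux/mm.h>
#include <linux/xarray.h>

/* Track metadata only for the few "special" pages, keyed by PFN,
 * so ordinary pages carry no extra per-page overhead. */
static DEFINE_XARRAY(special_page_meta);

static int mark_page_special(struct page *page, void *meta)
{
        return xa_err(xa_store(&special_page_meta, page_to_pfn(page),
                               meta, GFP_KERNEL));
}

static void *page_special_meta(struct page *page)
{
        return xa_load(&special_page_meta, page_to_pfn(page));
}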

Tycho


* Re: [RFC PATCH 0/6] Process-local memory allocations
  2018-12-13 14:28   ` Julian Stecklina
  2018-12-14  2:09     ` Tycho Andersen
@ 2018-12-19 23:00     ` Igor Stoppa
  1 sibling, 0 replies; 10+ messages in thread
From: Igor Stoppa @ 2018-12-19 23:00 UTC
  To: Julian Stecklina, Tycho Andersen
  Cc: kernel-hardening, Liran Alon, Jonathan Adams, David Woodhouse

Hi,

On 13/12/2018 16:28, Julian Stecklina wrote:

> As far as usage is concerned, prmem and process-local memory are very
> different. From a quick glance at the prmem patchset, I see that it
> works for statically allocated memory.

I suspect you have looked only at the latest patchset, which is a subset 
of the whole feature-set.
It was agreed that I would first address the integration of write-rare 
for statically allocated memory, and so I did.

But the origin of this work actually lies in protecting the SELinux 
policyDB, which is entirely dynamically allocated.

> Process-local memory is by
> definition bound to the lifetime of a process (or address space, to be
> more precise) and thus cannot be statically allocated. It breaks up
> the global address space of the kernel and starts binding some
> allocations in the kernel to specific processes.

Yes, I agree with the explanation you give; however, the starting 
assumption (prmem is only static) is not correct.
> 
> My goal is specifically L1TF mitigation and preventing random parts of
> the kernel from prefetching user data, but this approach is equally
> effective for ordinary info leak vulnerabilities.
> 
>> Basically, we have some special memory that we want to leave unmapped
>> (read only) most of the time, but map it (writable) sometimes.


This would also apply nicely to kernel data that is particularly 
sensitive and that one would not want to expose to random (possibly 
buggy) reads.

> Unmapped should really be unmapped, i.e. not present in the page table.
> Having it read-only defeats the purpose.

Given the current implementation (for x86_64), which uses an alternate 
mapping for writes, it should not be too difficult to use a similar 
mechanism to implement a sort of alternative to a TEE data vault.

Instead of using a special function to alter normally read-only data, it 
would use a special function to read/write normally inaccessible data.
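
The pattern being described might look roughly like the sketch below;
the helper names are hypothetical, and the actual prmem implementation
differs in detail:

#include <linux/string.h>

/* Read from a TEE-like "data vault": the data is normally unmapped
 * and is reached only through a temporary alias mapping. */
static void vault_read(void *dst, const void *vault_ptr, size_t n)
{
        /* map_vault_alias()/unmap_vault_alias() are hypothetical. */
        void *alias = map_vault_alias(vault_ptr, n);

        memcpy(dst, alias, n);
        unmap_vault_alias(alias, n);
}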

But, before getting into that, I'd prefer to first get the static 
write-rare support merged, or at least ACKed.

--
igor


end of thread

Thread overview: 10+ messages
     [not found] <cover.1542722764.git.jsteckli@amazon.de>
2018-11-20 23:26 ` [RFC PATCH 0/6] Process-local memory allocations Tycho Andersen
2018-11-21 17:18   ` Igor Stoppa
2018-11-21 17:48     ` Tycho Andersen
2018-11-21 18:12       ` Igor Stoppa
     [not found]       ` <1542904826.6344.1.camel@amazon.de>
2018-11-23 16:24         ` Igor Stoppa
2018-11-23 17:04           ` Solar Designer
2018-11-23 17:23             ` Solar Designer
2018-12-13 14:28   ` Julian Stecklina
2018-12-14  2:09     ` Tycho Andersen
2018-12-19 23:00     ` Igor Stoppa
