linux-pci.vger.kernel.org archive mirror
* Re: [PATCH v4] /dev/mem: Revoke mappings when a driver claims the region
       [not found] <159009507306.847224.8502634072429766747.stgit@dwillia2-desk3.amr.corp.intel.com>
@ 2021-05-27 20:58 ` Bjorn Helgaas
  2021-05-27 21:30   ` Dan Williams
  0 siblings, 1 reply; 9+ messages in thread
From: Bjorn Helgaas @ 2021-05-27 20:58 UTC (permalink / raw)
  To: Dan Williams
  Cc: gregkh, Arnd Bergmann, Ingo Molnar, Kees Cook, Matthew Wilcox,
	Russell King, Andrew Morton, linux-kernel, linux-mm, linux-pci,
	Daniel Vetter, Krzysztof Wilczyński, Jason Gunthorpe,
	Christoph Hellwig

[+cc Daniel, Krzysztof, Jason, Christoph, linux-pci]

On Thu, May 21, 2020 at 02:06:17PM -0700, Dan Williams wrote:
> Close the hole of holding a mapping over a kernel driver takeover event of
> a given address range.
> 
> Commit 90a545e98126 ("restrict /dev/mem to idle io memory ranges")
> introduced CONFIG_IO_STRICT_DEVMEM with the goal of protecting the
> kernel against scenarios where a /dev/mem user tramples memory that a
> kernel driver owns. However, this protection only prevents *new* read(),
> write() and mmap() requests. Established mappings prior to the driver
> calling request_mem_region() are left alone.
> 
> Especially with persistent memory, and the core kernel metadata that is
> stored there, there are plentiful scenarios for a /dev/mem user to
> violate the expectations of the driver and cause amplified damage.
> 
> Teach request_mem_region() to find and shoot down active /dev/mem
> mappings that it believes it has successfully claimed for the exclusive
> use of the driver. Effectively a driver call to request_mem_region()
> becomes a hole-punch on the /dev/mem device.

This idea of hole-punching /dev/mem has since been extended to PCI
BARs via [1].

Correct me if I'm wrong: I think this means that if a user process has
mmapped a PCI BAR via sysfs, and a kernel driver subsequently requests
that region via pci_request_region() or similar, we punch holes in the
user process mmap.  The driver might be happy, but my guess is the
user starts seeing segmentation violations for no obvious reason and
is not happy.

Apart from the user process issue, the implementation of [1] is
problematic for PCI because the mmappable sysfs attributes now depend
on iomem_init_inode(), an fs_initcall, which means they can't be
static attributes, which ultimately leads to races in creating them.

So I'm raising the question of whether this hole-punch is the right
strategy.

  - Prior to revoke_iomem(), __request_region() was very
    self-contained and really only depended on the resource tree.  Now
    it depends on a lot of higher-level MM machinery to shoot down
    mappings of other tasks.  This adds quite a bit of complexity and
    some new ordering constraints.

  - Punching holes in the address space of an existing process seems
    unfriendly.  Maybe the driver's __request_region() should fail
    instead, since the driver should be prepared to handle failure
    there anyway.

  - [2] suggests that the hole punch protects drivers from /dev/mem
    writers, especially with persistent memory.  I'm not really
    convinced.  The hole punch does nothing to prevent a user process
    from mmapping and corrupting something before the driver loads.

Bjorn

[1] https://git.kernel.org/linus/636b21b50152
[2] https://git.kernel.org/linus/3234ac664a87

> The typical usage of unmap_mapping_range() is part of
> truncate_pagecache() to punch a hole in a file, but in this case the
> implementation is only doing the "first half" of a hole punch. Namely it
> is just evacuating current established mappings of the "hole", and it
> relies on the fact that /dev/mem establishes mappings in terms of
> absolute physical address offsets. Once existing mmap users are
> invalidated they can attempt to re-establish the mapping, or attempt to
> continue issuing read(2) / write(2) to the invalidated extent, but they
> will then be subject to the CONFIG_IO_STRICT_DEVMEM checking that can
> block those subsequent accesses.
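The "first half of a hole punch" quoted above can be modeled in userspace. The following is an illustrative sketch, not the kernel implementation, and every name in it is invented: mappings keyed by absolute physical offsets are evacuated when a range is claimed, and a re-mapping attempt then fails the busy check.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Illustrative model only: /dev/mem mappings are keyed by absolute
 * physical offsets, so a driver claim can evacuate every established
 * mapping that overlaps the claimed range; re-establishing a mapping
 * is then subject to the strict-devmem busy check.
 */
struct pmap {
	unsigned long start, end;	/* inclusive physical range */
	bool valid;
};

static struct pmap maps[16];
static size_t nmaps;

static void establish(unsigned long start, unsigned long end)
{
	maps[nmaps++] = (struct pmap){ .start = start, .end = end, .valid = true };
}

/* Evacuate current mappings of the "hole" (cf. unmap_mapping_range()). */
static void punch_hole(unsigned long start, unsigned long end)
{
	for (size_t i = 0; i < nmaps; i++)
		if (maps[i].valid && maps[i].start <= end && start <= maps[i].end)
			maps[i].valid = false;
}

/* A revoked user may retry, but the now-busy claim blocks any overlap. */
static bool may_reestablish(unsigned long start, unsigned long end,
			    unsigned long busy_start, unsigned long busy_end)
{
	return end < busy_start || busy_end < start;
}
```

Note this only models the shoot-down and the subsequent exclusion; the real code relies on the /dev/mem inode's address_space to find the mappings.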
> 
> Cc: Arnd Bergmann <arnd@arndb.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Kees Cook <keescook@chromium.org>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Russell King <linux@arm.linux.org.uk>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Fixes: 90a545e98126 ("restrict /dev/mem to idle io memory ranges")
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH v4] /dev/mem: Revoke mappings when a driver claims the region
  2021-05-27 20:58 ` [PATCH v4] /dev/mem: Revoke mappings when a driver claims the region Bjorn Helgaas
@ 2021-05-27 21:30   ` Dan Williams
  2021-05-28  8:58     ` David Hildenbrand
  2021-06-03  3:39     ` Bjorn Helgaas
  0 siblings, 2 replies; 9+ messages in thread
From: Dan Williams @ 2021-05-27 21:30 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Greg KH, Arnd Bergmann, Ingo Molnar, Kees Cook, Matthew Wilcox,
	Russell King, Andrew Morton, Linux Kernel Mailing List, Linux MM,
	Linux PCI, Daniel Vetter, Krzysztof Wilczyński,
	Jason Gunthorpe, Christoph Hellwig

On Thu, May 27, 2021 at 1:58 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
>
> [+cc Daniel, Krzysztof, Jason, Christoph, linux-pci]
>
> On Thu, May 21, 2020 at 02:06:17PM -0700, Dan Williams wrote:
> > Close the hole of holding a mapping over a kernel driver takeover event of
> > a given address range.
> >
> > Commit 90a545e98126 ("restrict /dev/mem to idle io memory ranges")
> > introduced CONFIG_IO_STRICT_DEVMEM with the goal of protecting the
> > kernel against scenarios where a /dev/mem user tramples memory that a
> > kernel driver owns. However, this protection only prevents *new* read(),
> > write() and mmap() requests. Established mappings prior to the driver
> > calling request_mem_region() are left alone.
> >
> > Especially with persistent memory, and the core kernel metadata that is
> > stored there, there are plentiful scenarios for a /dev/mem user to
> > violate the expectations of the driver and cause amplified damage.
> >
> > Teach request_mem_region() to find and shoot down active /dev/mem
> > mappings that it believes it has successfully claimed for the exclusive
> > use of the driver. Effectively a driver call to request_mem_region()
> > becomes a hole-punch on the /dev/mem device.
>
> This idea of hole-punching /dev/mem has since been extended to PCI
> BARs via [1].
>
> Correct me if I'm wrong: I think this means that if a user process has
> mmapped a PCI BAR via sysfs, and a kernel driver subsequently requests
> that region via pci_request_region() or similar, we punch holes in the
> user process mmap.  The driver might be happy, but my guess is the
> user starts seeing segmentation violations for no obvious reason and
> is not happy.
>
> Apart from the user process issue, the implementation of [1] is
> problematic for PCI because the mmappable sysfs attributes now depend
> on iomem_init_inode(), an fs_initcall, which means they can't be
> static attributes, which ultimately leads to races in creating them.

See the comments in iomem_get_mapping(), and revoke_iomem():

        /*
         * Check that the initialization has completed. Losing the race
         * is ok because it means drivers are claiming resources before
         * the fs_initcall level of init and prevent iomem_get_mapping users
         * from establishing mappings.
         */

...the observation being that it is ok for the revocation inode to
come online later in the boot process because userspace won't be able to
use the fs yet. So any missed calls to revoke_iomem() would fall back
to userspace just seeing the resource busy in the first instance. I.e.
through the normal devmem_is_allowed() exclusion.
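A toy model of the benign race described here (illustrative only, not the actual kernel/resource.c code): if the revocation inode has not been created yet, revocation can safely be skipped, because no userspace mapping can exist that early.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Illustrative model of the init-order reasoning: before the
 * fs_initcall stage creates the revocation inode, userspace cannot
 * have mapped anything, so a driver claim that "loses the race" and
 * finds no inode can safely do nothing; any later userspace attempt
 * just sees the resource busy via the normal exclusion check.
 */
static void *iomem_inode;	/* created at fs_initcall time */

static bool revoke_iomem_model(void)
{
	if (!iomem_inode)	/* lost the race: nothing mapped yet */
		return false;	/* benign no-op */
	/* ...would call unmap_mapping_range() on the inode here... */
	return true;
}

static void fs_initcall_stage(void)
{
	static int inode_storage;
	iomem_inode = &inode_storage;	/* revocation now possible */
}
```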

>
> So I'm raising the question of whether this hole-punch is the right
> strategy.
>
>   - Prior to revoke_iomem(), __request_region() was very
>     self-contained and really only depended on the resource tree.  Now
>     it depends on a lot of higher-level MM machinery to shoot down
>     mappings of other tasks.  This adds quite a bit of complexity and
>     some new ordering constraints.
>
>   - Punching holes in the address space of an existing process seems
>     unfriendly.  Maybe the driver's __request_region() should fail
>     instead, since the driver should be prepared to handle failure
>     there anyway.

It's prepared to handle failure, but in this case it is dealing with a
root user of 2 minds.

>
>   - [2] suggests that the hole punch protects drivers from /dev/mem
>     writers, especially with persistent memory.  I'm not really
>     convinced.  The hole punch does nothing to prevent a user process
>     from mmapping and corrupting something before the driver loads.

The motivation for this was a user who was swapping between /dev/mem
access and /dev/pmem0 access and forgot to stop using /dev/mem
when they switched to /dev/pmem0. If root wants to use /dev/mem it can
use it, if root wants to stop the driver from loading it can set
modprobe policy or manually unbind, and if root asks the kernel to
load the driver while it is actively using /dev/mem something has to
give. Given root has other options to stop a driver the decision to
revoke userspace access when root messes up and causes a collision
seems prudent to me.


* Re: [PATCH v4] /dev/mem: Revoke mappings when a driver claims the region
  2021-05-27 21:30   ` Dan Williams
@ 2021-05-28  8:58     ` David Hildenbrand
  2021-05-28 16:42       ` Dan Williams
  2021-06-03  3:39     ` Bjorn Helgaas
  1 sibling, 1 reply; 9+ messages in thread
From: David Hildenbrand @ 2021-05-28  8:58 UTC (permalink / raw)
  To: Dan Williams, Bjorn Helgaas
  Cc: Greg KH, Arnd Bergmann, Ingo Molnar, Kees Cook, Matthew Wilcox,
	Russell King, Andrew Morton, Linux Kernel Mailing List, Linux MM,
	Linux PCI, Daniel Vetter, Krzysztof Wilczyński,
	Jason Gunthorpe, Christoph Hellwig

On 27.05.21 23:30, Dan Williams wrote:
> On Thu, May 27, 2021 at 1:58 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
>>
>> [+cc Daniel, Krzysztof, Jason, Christoph, linux-pci]
>>
>> On Thu, May 21, 2020 at 02:06:17PM -0700, Dan Williams wrote:
>>> Close the hole of holding a mapping over a kernel driver takeover event of
>>> a given address range.
>>>
>>> Commit 90a545e98126 ("restrict /dev/mem to idle io memory ranges")
>>> introduced CONFIG_IO_STRICT_DEVMEM with the goal of protecting the
>>> kernel against scenarios where a /dev/mem user tramples memory that a
>>> kernel driver owns. However, this protection only prevents *new* read(),
>>> write() and mmap() requests. Established mappings prior to the driver
>>> calling request_mem_region() are left alone.
>>>
>>> Especially with persistent memory, and the core kernel metadata that is
>>> stored there, there are plentiful scenarios for a /dev/mem user to
>>> violate the expectations of the driver and cause amplified damage.
>>>
>>> Teach request_mem_region() to find and shoot down active /dev/mem
>>> mappings that it believes it has successfully claimed for the exclusive
>>> use of the driver. Effectively a driver call to request_mem_region()
>>> becomes a hole-punch on the /dev/mem device.
>>
>> This idea of hole-punching /dev/mem has since been extended to PCI
>> BARs via [1].
>>
>> Correct me if I'm wrong: I think this means that if a user process has
>> mmapped a PCI BAR via sysfs, and a kernel driver subsequently requests
>> that region via pci_request_region() or similar, we punch holes in the
>> user process mmap.  The driver might be happy, but my guess is the
>> user starts seeing segmentation violations for no obvious reason and
>> is not happy.
>>
>> Apart from the user process issue, the implementation of [1] is
>> problematic for PCI because the mmappable sysfs attributes now depend
>> on iomem_init_inode(), an fs_initcall, which means they can't be
>> static attributes, which ultimately leads to races in creating them.
> 
> See the comments in iomem_get_mapping(), and revoke_iomem():
> 
>          /*
>           * Check that the initialization has completed. Losing the race
>           * is ok because it means drivers are claiming resources before
>           * the fs_initcall level of init and prevent iomem_get_mapping users
>           * from establishing mappings.
>           */
> 
> ...the observation being that it is ok for the revocation inode to
> come online later in the boot process because userspace won't be able to
> use the fs yet. So any missed calls to revoke_iomem() would fall back
> to userspace just seeing the resource busy in the first instance. I.e.
> through the normal devmem_is_allowed() exclusion.
> 
>>
>> So I'm raising the question of whether this hole-punch is the right
>> strategy.
>>
>>    - Prior to revoke_iomem(), __request_region() was very
>>      self-contained and really only depended on the resource tree.  Now
>>      it depends on a lot of higher-level MM machinery to shoot down
>>      mappings of other tasks.  This adds quite a bit of complexity and
>>      some new ordering constraints.
>>
>>    - Punching holes in the address space of an existing process seems
>>      unfriendly.  Maybe the driver's __request_region() should fail
>>      instead, since the driver should be prepared to handle failure
>>      there anyway.
> 
> It's prepared to handle failure, but in this case it is dealing with a
> root user of 2 minds.
> 
>>
>>    - [2] suggests that the hole punch protects drivers from /dev/mem
>>      writers, especially with persistent memory.  I'm not really
>>      convinced.  The hole punch does nothing to prevent a user process
>>      from mmapping and corrupting something before the driver loads.
> 
> The motivation for this was a user who was swapping between /dev/mem
> access and /dev/pmem0 access and forgot to stop using /dev/mem
> when they switched to /dev/pmem0. If root wants to use /dev/mem it can
> use it, if root wants to stop the driver from loading it can set
> modprobe policy or manually unbind, and if root asks the kernel to
> load the driver while it is actively using /dev/mem something has to
> give. Given root has other options to stop a driver the decision to
> revoke userspace access when root messes up and causes a collision
> seems prudent to me.
> 

Is there a real use case for mapping pmem via /dev/mem or could we just 
prohibit access to these areas completely?

What's the use case for "swapping between /dev/mem access and /dev/pmem0 
access" ?

-- 
Thanks,

David / dhildenb



* Re: [PATCH v4] /dev/mem: Revoke mappings when a driver claims the region
  2021-05-28  8:58     ` David Hildenbrand
@ 2021-05-28 16:42       ` Dan Williams
  2021-05-28 16:51         ` David Hildenbrand
  0 siblings, 1 reply; 9+ messages in thread
From: Dan Williams @ 2021-05-28 16:42 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Bjorn Helgaas, Greg KH, Arnd Bergmann, Ingo Molnar, Kees Cook,
	Matthew Wilcox, Russell King, Andrew Morton,
	Linux Kernel Mailing List, Linux MM, Linux PCI, Daniel Vetter,
	Krzysztof Wilczyński, Jason Gunthorpe, Christoph Hellwig

On Fri, May 28, 2021 at 1:58 AM David Hildenbrand <david@redhat.com> wrote:
>
> On 27.05.21 23:30, Dan Williams wrote:
> > On Thu, May 27, 2021 at 1:58 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> >>
> >> [+cc Daniel, Krzysztof, Jason, Christoph, linux-pci]
> >>
> >> On Thu, May 21, 2020 at 02:06:17PM -0700, Dan Williams wrote:
> >>> Close the hole of holding a mapping over a kernel driver takeover event of
> >>> a given address range.
> >>>
> >>> Commit 90a545e98126 ("restrict /dev/mem to idle io memory ranges")
> >>> introduced CONFIG_IO_STRICT_DEVMEM with the goal of protecting the
> >>> kernel against scenarios where a /dev/mem user tramples memory that a
> >>> kernel driver owns. However, this protection only prevents *new* read(),
> >>> write() and mmap() requests. Established mappings prior to the driver
> >>> calling request_mem_region() are left alone.
> >>>
> >>> Especially with persistent memory, and the core kernel metadata that is
> >>> stored there, there are plentiful scenarios for a /dev/mem user to
> >>> violate the expectations of the driver and cause amplified damage.
> >>>
> >>> Teach request_mem_region() to find and shoot down active /dev/mem
> >>> mappings that it believes it has successfully claimed for the exclusive
> >>> use of the driver. Effectively a driver call to request_mem_region()
> >>> becomes a hole-punch on the /dev/mem device.
> >>
> >> This idea of hole-punching /dev/mem has since been extended to PCI
> >> BARs via [1].
> >>
> >> Correct me if I'm wrong: I think this means that if a user process has
> >> mmapped a PCI BAR via sysfs, and a kernel driver subsequently requests
> >> that region via pci_request_region() or similar, we punch holes in the
> >> user process mmap.  The driver might be happy, but my guess is the
> >> user starts seeing segmentation violations for no obvious reason and
> >> is not happy.
> >>
> >> Apart from the user process issue, the implementation of [1] is
> >> problematic for PCI because the mmappable sysfs attributes now depend
> >> on iomem_init_inode(), an fs_initcall, which means they can't be
> >> static attributes, which ultimately leads to races in creating them.
> >
> > See the comments in iomem_get_mapping(), and revoke_iomem():
> >
> >          /*
> >           * Check that the initialization has completed. Losing the race
> >           * is ok because it means drivers are claiming resources before
> >           * the fs_initcall level of init and prevent iomem_get_mapping users
> >           * from establishing mappings.
> >           */
> >
> > ...the observation being that it is ok for the revocation inode to
> > come online later in the boot process because userspace won't be able to
> > use the fs yet. So any missed calls to revoke_iomem() would fall back
> > to userspace just seeing the resource busy in the first instance. I.e.
> > through the normal devmem_is_allowed() exclusion.
> >
> >>
> >> So I'm raising the question of whether this hole-punch is the right
> >> strategy.
> >>
> >>    - Prior to revoke_iomem(), __request_region() was very
> >>      self-contained and really only depended on the resource tree.  Now
> >>      it depends on a lot of higher-level MM machinery to shoot down
> >>      mappings of other tasks.  This adds quite a bit of complexity and
> >>      some new ordering constraints.
> >>
> >>    - Punching holes in the address space of an existing process seems
> >>      unfriendly.  Maybe the driver's __request_region() should fail
> >>      instead, since the driver should be prepared to handle failure
> >>      there anyway.
> >
> > It's prepared to handle failure, but in this case it is dealing with a
> > root user of 2 minds.
> >
> >>
> >>    - [2] suggests that the hole punch protects drivers from /dev/mem
> >>      writers, especially with persistent memory.  I'm not really
> >>      convinced.  The hole punch does nothing to prevent a user process
> >>      from mmapping and corrupting something before the driver loads.
> >
> > The motivation for this was a user who was swapping between /dev/mem
> > access and /dev/pmem0 access and forgot to stop using /dev/mem
> > when they switched to /dev/pmem0. If root wants to use /dev/mem it can
> > use it, if root wants to stop the driver from loading it can set
> > modprobe policy or manually unbind, and if root asks the kernel to
> > load the driver while it is actively using /dev/mem something has to
> > give. Given root has other options to stop a driver the decision to
> > revoke userspace access when root messes up and causes a collision
> > seems prudent to me.
> >
>
> Is there a real use case for mapping pmem via /dev/mem or could we just
> prohibit access to these areas completely?

The kernel offers conflicting access to iomem resources and a
long-standing mechanism to enforce mutual exclusion
(CONFIG_IO_STRICT_DEVMEM) between those interfaces. That mechanism was
found to be incomplete for the case where a /dev/mem mapping is
maintained after a kernel driver is attached, and incomplete for other
mechanisms to map iomem like pci-sysfs. This was found with PMEM, but
the issue is larger and applies to userspace drivers / debug in
general.

> What's the use case for "swapping between /dev/mem access and /dev/pmem0
> access" ?

"Who knows". I mean, I know in this case it was a platform validation
test using /dev/mem for "reasons", but I am not sure that is relevant
to the wider concern. If CONFIG_IO_STRICT_DEVMEM=n, exclusion is
enforced when drivers pass the IORESOURCE_EXCLUSIVE flag; if
CONFIG_IO_STRICT_DEVMEM=y, exclusion is enforced whenever the kernel
marks a resource IORESOURCE_BUSY; and if kernel lockdown is enabled,
the driver state is moot as LOCKDOWN_DEV_MEM and LOCKDOWN_PCI_ACCESS
policy is in effect.
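The three-way policy Dan describes can be restated as a single predicate. This is a hedged paraphrase, not the real devmem_is_allowed() source; the flag values and the function name are stand-ins.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in flag values for this illustrative model. */
#define IORESOURCE_BUSY		(1u << 0)
#define IORESOURCE_EXCLUSIVE	(1u << 1)

/*
 * Illustrative restatement of the exclusion policy:
 *  - lockdown denies /dev/mem access regardless of driver state;
 *  - IORESOURCE_EXCLUSIVE is enforced even with STRICT_DEVMEM=n;
 *  - with STRICT_DEVMEM=y, any IORESOURCE_BUSY resource is denied.
 */
static bool devmem_access_ok(unsigned int flags, bool strict_devmem,
			     bool lockdown)
{
	if (lockdown)				/* LOCKDOWN_DEV_MEM in effect */
		return false;
	if (flags & IORESOURCE_EXCLUSIVE)	/* enforced in any mode */
		return false;
	if (strict_devmem && (flags & IORESOURCE_BUSY))
		return false;			/* strict mode: busy regions off limits */
	return true;
}
```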


* Re: [PATCH v4] /dev/mem: Revoke mappings when a driver claims the region
  2021-05-28 16:42       ` Dan Williams
@ 2021-05-28 16:51         ` David Hildenbrand
  0 siblings, 0 replies; 9+ messages in thread
From: David Hildenbrand @ 2021-05-28 16:51 UTC (permalink / raw)
  To: Dan Williams
  Cc: Bjorn Helgaas, Greg KH, Arnd Bergmann, Ingo Molnar, Kees Cook,
	Matthew Wilcox, Russell King, Andrew Morton,
	Linux Kernel Mailing List, Linux MM, Linux PCI, Daniel Vetter,
	Krzysztof Wilczyński, Jason Gunthorpe, Christoph Hellwig

On 28.05.21 18:42, Dan Williams wrote:
> On Fri, May 28, 2021 at 1:58 AM David Hildenbrand <david@redhat.com> wrote:
>>
>> On 27.05.21 23:30, Dan Williams wrote:
>>> On Thu, May 27, 2021 at 1:58 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
>>>>
>>>> [+cc Daniel, Krzysztof, Jason, Christoph, linux-pci]
>>>>
>>>> On Thu, May 21, 2020 at 02:06:17PM -0700, Dan Williams wrote:
>>>>> Close the hole of holding a mapping over a kernel driver takeover event of
>>>>> a given address range.
>>>>>
>>>>> Commit 90a545e98126 ("restrict /dev/mem to idle io memory ranges")
>>>>> introduced CONFIG_IO_STRICT_DEVMEM with the goal of protecting the
>>>>> kernel against scenarios where a /dev/mem user tramples memory that a
>>>>> kernel driver owns. However, this protection only prevents *new* read(),
>>>>> write() and mmap() requests. Established mappings prior to the driver
>>>>> calling request_mem_region() are left alone.
>>>>>
>>>>> Especially with persistent memory, and the core kernel metadata that is
>>>>> stored there, there are plentiful scenarios for a /dev/mem user to
>>>>> violate the expectations of the driver and cause amplified damage.
>>>>>
>>>>> Teach request_mem_region() to find and shoot down active /dev/mem
>>>>> mappings that it believes it has successfully claimed for the exclusive
>>>>> use of the driver. Effectively a driver call to request_mem_region()
>>>>> becomes a hole-punch on the /dev/mem device.
>>>>
>>>> This idea of hole-punching /dev/mem has since been extended to PCI
>>>> BARs via [1].
>>>>
>>>> Correct me if I'm wrong: I think this means that if a user process has
>>>> mmapped a PCI BAR via sysfs, and a kernel driver subsequently requests
>>>> that region via pci_request_region() or similar, we punch holes in the
>>>> user process mmap.  The driver might be happy, but my guess is the
>>>> user starts seeing segmentation violations for no obvious reason and
>>>> is not happy.
>>>>
>>>> Apart from the user process issue, the implementation of [1] is
>>>> problematic for PCI because the mmappable sysfs attributes now depend
>>>> on iomem_init_inode(), an fs_initcall, which means they can't be
>>>> static attributes, which ultimately leads to races in creating them.
>>>
>>> See the comments in iomem_get_mapping(), and revoke_iomem():
>>>
>>>           /*
>>>            * Check that the initialization has completed. Losing the race
>>>            * is ok because it means drivers are claiming resources before
>>>            * the fs_initcall level of init and prevent iomem_get_mapping users
>>>            * from establishing mappings.
>>>            */
>>>
>>> ...the observation being that it is ok for the revocation inode to
>>> come online later in the boot process because userspace won't be able to
>>> use the fs yet. So any missed calls to revoke_iomem() would fall back
>>> to userspace just seeing the resource busy in the first instance. I.e.
>>> through the normal devmem_is_allowed() exclusion.
>>>
>>>>
>>>> So I'm raising the question of whether this hole-punch is the right
>>>> strategy.
>>>>
>>>>     - Prior to revoke_iomem(), __request_region() was very
>>>>       self-contained and really only depended on the resource tree.  Now
>>>>       it depends on a lot of higher-level MM machinery to shoot down
>>>>       mappings of other tasks.  This adds quite a bit of complexity and
>>>>       some new ordering constraints.
>>>>
>>>>     - Punching holes in the address space of an existing process seems
>>>>       unfriendly.  Maybe the driver's __request_region() should fail
>>>>       instead, since the driver should be prepared to handle failure
>>>>       there anyway.
>>>
>>> It's prepared to handle failure, but in this case it is dealing with a
>>> root user of 2 minds.
>>>
>>>>
>>>>     - [2] suggests that the hole punch protects drivers from /dev/mem
>>>>       writers, especially with persistent memory.  I'm not really
>>>>       convinced.  The hole punch does nothing to prevent a user process
>>>>       from mmapping and corrupting something before the driver loads.
>>>
>>> The motivation for this was a user who was swapping between /dev/mem
>>> access and /dev/pmem0 access and forgot to stop using /dev/mem
>>> when they switched to /dev/pmem0. If root wants to use /dev/mem it can
>>> use it, if root wants to stop the driver from loading it can set
>>> modprobe policy or manually unbind, and if root asks the kernel to
>>> load the driver while it is actively using /dev/mem something has to
>>> give. Given root has other options to stop a driver the decision to
>>> revoke userspace access when root messes up and causes a collision
>>> seems prudent to me.
>>>
>>
>> Is there a real use case for mapping pmem via /dev/mem or could we just
>> prohibit access to these areas completely?
> 
> The kernel offers conflicting access to iomem resources and a
> long-standing mechanism to enforce mutual exclusion
> (CONFIG_IO_STRICT_DEVMEM) between those interfaces. That mechanism was
> found to be incomplete for the case where a /dev/mem mapping is
> maintained after a kernel driver is attached, and incomplete for other
> mechanisms to map iomem like pci-sysfs. This was found with PMEM, but
> the issue is larger and applies to userspace drivers / debug in
> general.
> 
>> What's the use case for "swapping between /dev/mem access and /dev/pmem0
>> access" ?
> 
> "Who knows". I mean, I know in this case it was a platform validation
> test using /dev/mem for "reasons", but I am not sure that is relevant
> to the wider concern. If CONFIG_IO_STRICT_DEVMEM=n, exclusion is
> enforced when drivers pass the IORESOURCE_EXCLUSIVE flag; if
> CONFIG_IO_STRICT_DEVMEM=y, exclusion is enforced whenever the kernel
> marks a resource IORESOURCE_BUSY; and if kernel lockdown is enabled,
> the driver state is moot as LOCKDOWN_DEV_MEM and LOCKDOWN_PCI_ACCESS
> policy is in effect.
> 

I was thinking about a mechanism to permanently disallow /dev/mem access 
to specific memory regions (BUSY or not) in any /dev/mem mode. In my 
case, it would apply to the whole virtio-mem provided memory region. 
Once the driver is loaded, it would disallow access to the whole region.

I thought about doing it via the kernel resource tree, extending the 
EXCLUSIVE flag to !BUSY SYSRAM regions. But a simplistic list managed in 
/dev/mem code would also be possible.

That's why I wondered if we could just disallow access to these physical 
PMEM memory regions right from the start similarly, such that we don't 
have to really care about revoking in case of PMEM anymore.
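The "simplistic list managed in /dev/mem code" David mentions might look roughly like this. All names and structure here are hypothetical, a sketch of the idea rather than a proposal: a driver registers its region once, and the /dev/mem open()/mmap() paths refuse any overlapping access from then on, BUSY or not, in any mode.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical permanent deny list for /dev/mem (not kernel code). */
struct denied_range { unsigned long start, end; };

static struct denied_range deny_list[8];
static size_t ndenied;

/* Called once, e.g. when a driver such as virtio-mem or pmem loads. */
static void devmem_deny_range(unsigned long start, unsigned long end)
{
	deny_list[ndenied++] = (struct denied_range){ start, end };
}

/* Checked on every /dev/mem access path, independent of BUSY state. */
static bool devmem_range_denied(unsigned long start, unsigned long end)
{
	for (size_t i = 0; i < ndenied; i++)
		if (deny_list[i].start <= end && start <= deny_list[i].end)
			return true;
	return false;
}
```

The same effect could instead be had through the resource tree, as David notes, by extending the EXCLUSIVE flag to !BUSY SYSRAM regions.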

-- 
Thanks,

David / dhildenb



* Re: [PATCH v4] /dev/mem: Revoke mappings when a driver claims the region
  2021-05-27 21:30   ` Dan Williams
  2021-05-28  8:58     ` David Hildenbrand
@ 2021-06-03  3:39     ` Bjorn Helgaas
  2021-06-03  4:15       ` Dan Williams
  1 sibling, 1 reply; 9+ messages in thread
From: Bjorn Helgaas @ 2021-06-03  3:39 UTC (permalink / raw)
  To: Dan Williams
  Cc: Greg KH, Arnd Bergmann, Ingo Molnar, Kees Cook, Matthew Wilcox,
	Russell King, Andrew Morton, Linux Kernel Mailing List, Linux MM,
	Linux PCI, Daniel Vetter, Krzysztof Wilczyński,
	Jason Gunthorpe, Christoph Hellwig, Pali Rohár,
	Oliver O'Halloran

[+cc Pali, Oliver]

On Thu, May 27, 2021 at 02:30:31PM -0700, Dan Williams wrote:
> On Thu, May 27, 2021 at 1:58 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> >
> > [+cc Daniel, Krzysztof, Jason, Christoph, linux-pci]
> >
> > On Thu, May 21, 2020 at 02:06:17PM -0700, Dan Williams wrote:
> > > Close the hole of holding a mapping over a kernel driver takeover event of
> > > a given address range.
> > >
> > > Commit 90a545e98126 ("restrict /dev/mem to idle io memory ranges")
> > > introduced CONFIG_IO_STRICT_DEVMEM with the goal of protecting the
> > > kernel against scenarios where a /dev/mem user tramples memory that a
> > > kernel driver owns. However, this protection only prevents *new* read(),
> > > write() and mmap() requests. Established mappings prior to the driver
> > > calling request_mem_region() are left alone.
> > >
> > > Especially with persistent memory, and the core kernel metadata that is
> > > stored there, there are plentiful scenarios for a /dev/mem user to
> > > violate the expectations of the driver and cause amplified damage.
> > >
> > > Teach request_mem_region() to find and shoot down active /dev/mem
> > > mappings that it believes it has successfully claimed for the exclusive
> > > use of the driver. Effectively a driver call to request_mem_region()
> > > becomes a hole-punch on the /dev/mem device.
> >
> > This idea of hole-punching /dev/mem has since been extended to PCI
> > BARs via [1].
> >
> > Correct me if I'm wrong: I think this means that if a user process has
> > mmapped a PCI BAR via sysfs, and a kernel driver subsequently requests
> > that region via pci_request_region() or similar, we punch holes in the
> > user process mmap.  The driver might be happy, but my guess is the
> > user starts seeing segmentation violations for no obvious reason and
> > is not happy.
> >
> > Apart from the user process issue, the implementation of [1] is
> > problematic for PCI because the mmappable sysfs attributes now depend
> > on iomem_init_inode(), an fs_initcall, which means they can't be
> > static attributes, which ultimately leads to races in creating them.
> 
> See the comments in iomem_get_mapping(), and revoke_iomem():
> 
>         /*
>          * Check that the initialization has completed. Losing the race
>          * is ok because it means drivers are claiming resources before
>          * the fs_initcall level of init and prevent iomem_get_mapping users
>          * from establishing mappings.
>          */
> 
> ...the observation being that it is ok for the revocation inode to
> come online later in the boot process because userspace won't be able to
> use the fs yet. So any missed calls to revoke_iomem() would fall back
> to userspace just seeing the resource busy in the first instance. I.e.
> through the normal devmem_is_allowed() exclusion.

I did see that comment, but the race I meant is different.  Pali wrote
up a nice analysis of it [3].

Here's the typical enumeration flow for PCI:

  acpi_pci_root_add                 <-- subsys_initcall (4)
    pci_acpi_scan_root
      ...
        pci_device_add
          device_initialize
          device_add
            device_add_attrs        <-- static sysfs attributes created
    ...
    pci_bus_add_devices
      pci_bus_add_device
        pci_create_sysfs_dev_files
          if (!sysfs_initialized) return;    <-- Ugh :)
          ...
            attr->mmap = pci_mmap_resource_uc
            attr->mapping = iomem_get_mapping()  <-- new dependency
              return iomem_inode->i_mapping
            sysfs_create_bin_file   <-- dynamic sysfs attributes created

  iomem_init_inode                  <-- fs_initcall (5)
    iomem_inode = ...               <-- now iomem_get_mapping() works

  pci_sysfs_init                    <-- late_initcall (7)
    sysfs_initialized = 1           <-- Ugh (see above)
    for_each_pci_dev(dev)           <-- Ugh
      pci_create_sysfs_dev_files(dev)

The race is between the pci_sysfs_init() initcall (intended for
boot-time devices) and the pci_bus_add_device() path (used for all
devices including hot-added ones).  Pali outlined cases where we call
pci_create_sysfs_dev_files() from both paths for the same device.

"sysfs_initialized" is a gross hack that prevents this most of the
time, but not always.  I want to get rid of it and pci_sysfs_init().

Oliver had the excellent idea of using static sysfs attributes to do
this cleanly [4].  If we can convert things to static attributes, the
device core creates them in device_add(), so we don't have to create
them in pci_create_sysfs_dev_files().

Krzysztof recently did some very nice work to convert most things to
static attributes, e.g., [5].  But we can't do this for the PCI BAR
attributes because they support ->mmap(), which now depends on
iomem_get_mapping(), which IIUC doesn't work until after fs_initcalls.

> > So I'm raising the question of whether this hole-punch is the right
> > strategy.
> >
> >   - Prior to revoke_iomem(), __request_region() was very
> >     self-contained and really only depended on the resource tree.  Now
> >     it depends on a lot of higher-level MM machinery to shoot down
> >     mappings of other tasks.  This adds quite a bit of complexity and
> >     some new ordering constraints.
> >
> >   - Punching holes in the address space of an existing process seems
> >     unfriendly.  Maybe the driver's __request_region() should fail
> >     instead, since the driver should be prepared to handle failure
> >     there anyway.
> 
> It's prepared to handle failure, but in this case it is dealing with a
> root user of 2 minds.
> 
> >   - [2] suggests that the hole punch protects drivers from /dev/mem
> >     writers, especially with persistent memory.  I'm not really
> >     convinced.  The hole punch does nothing to prevent a user process
> >     from mmapping and corrupting something before the driver loads.
> 
> The motivation for this was a case that was swapping between /dev/mem
> access and /dev/pmem0 access and they forgot to stop using /dev/mem
> when they switched to /dev/pmem0. If root wants to use /dev/mem it can
> use it, if root wants to stop the driver from loading it can set
> modprobe policy or manually unbind, and if root asks the kernel to
> load the driver while it is actively using /dev/mem something has to
> give. Given root has other options to stop a driver the decision to
> revoke userspace access when root messes up and causes a collision
> seems prudent to me.

[3] https://lore.kernel.org/linux-pci/20200716110423.xtfyb3n6tn5ixedh@pali/
[4] https://lore.kernel.org/linux-pci/CAOSf1CHss03DBSDO4PmTtMp0tCEu5kScn704ZEwLKGXQzBfqaA@mail.gmail.com/
[5] https://git.kernel.org/linus/e1d3f3268b0e


* Re: [PATCH v4] /dev/mem: Revoke mappings when a driver claims the region
  2021-06-03  3:39     ` Bjorn Helgaas
@ 2021-06-03  4:15       ` Dan Williams
  2021-06-03 18:11         ` Bjorn Helgaas
  0 siblings, 1 reply; 9+ messages in thread
From: Dan Williams @ 2021-06-03  4:15 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Greg KH, Arnd Bergmann, Ingo Molnar, Kees Cook, Matthew Wilcox,
	Russell King, Andrew Morton, Linux Kernel Mailing List, Linux MM,
	Linux PCI, Daniel Vetter, Krzysztof Wilczyński,
	Jason Gunthorpe, Christoph Hellwig, Pali Rohár,
	Oliver O'Halloran

On Wed, Jun 2, 2021 at 8:40 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
>
> [+cc Pali, Oliver]
>
> On Thu, May 27, 2021 at 02:30:31PM -0700, Dan Williams wrote:
> > On Thu, May 27, 2021 at 1:58 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > >
> > > [+cc Daniel, Krzysztof, Jason, Christoph, linux-pci]
> > >
> > > On Thu, May 21, 2020 at 02:06:17PM -0700, Dan Williams wrote:
> > > > Close the hole of holding a mapping over kernel driver takeover event of
> > > > a given address range.
> > > >
> > > > Commit 90a545e98126 ("restrict /dev/mem to idle io memory ranges")
> > > > introduced CONFIG_IO_STRICT_DEVMEM with the goal of protecting the
> > > > kernel against scenarios where a /dev/mem user tramples memory that a
> > > > kernel driver owns. However, this protection only prevents *new* read(),
> > > > write() and mmap() requests. Established mappings prior to the driver
> > > > calling request_mem_region() are left alone.
> > > >
> > > > Especially with persistent memory, and the core kernel metadata that is
> > > > stored there, there are plentiful scenarios for a /dev/mem user to
> > > > violate the expectations of the driver and cause amplified damage.
> > > >
> > > > Teach request_mem_region() to find and shoot down active /dev/mem
> > > > mappings that it believes it has successfully claimed for the exclusive
> > > > use of the driver. Effectively a driver call to request_mem_region()
> > > > becomes a hole-punch on the /dev/mem device.
> > >
> > > This idea of hole-punching /dev/mem has since been extended to PCI
> > > BARs via [1].
> > >
> > > Correct me if I'm wrong: I think this means that if a user process has
> > > mmapped a PCI BAR via sysfs, and a kernel driver subsequently requests
> > > that region via pci_request_region() or similar, we punch holes in the
> > > user process mmap.  The driver might be happy, but my guess is the
> > > user starts seeing segmentation violations for no obvious reason and
> > > is not happy.
> > >
> > > Apart from the user process issue, the implementation of [1] is
> > > problematic for PCI because the mmappable sysfs attributes now depend
> > > on iomem_init_inode(), an fs_initcall, which means they can't be
> > > static attributes, which ultimately leads to races in creating them.
> >
> > See the comments in iomem_get_mapping(), and revoke_iomem():
> >
> >         /*
> >          * Check that the initialization has completed. Losing the race
> >          * is ok because it means drivers are claiming resources before
> >          * the fs_initcall level of init and prevent iomem_get_mapping users
> >          * from establishing mappings.
> >          */
> >
> > ...the observation being that it is ok for the revocation inode to
> > come online later in the boot process because userspace won't be able to
> > use the fs yet. So any missed calls to revoke_iomem() would fall back
> > to userspace just seeing the resource busy in the first instance. I.e.
> > through the normal devmem_is_allowed() exclusion.
>
> I did see that comment, but the race I meant is different.  Pali wrote
> up a nice analysis of it [3].
>
> Here's the typical enumeration flow for PCI:
>
>   acpi_pci_root_add                 <-- subsys_initcall (4)
>     pci_acpi_scan_root
>       ...
>         pci_device_add
>           device_initialize
>           device_add
>             device_add_attrs        <-- static sysfs attributes created
>     ...
>     pci_bus_add_devices
>       pci_bus_add_device
>         pci_create_sysfs_dev_files
>           if (!sysfs_initialized) return;    <-- Ugh :)
>           ...
>             attr->mmap = pci_mmap_resource_uc
>             attr->mapping = iomem_get_mapping()  <-- new dependency
>               return iomem_inode->i_mapping
>             sysfs_create_bin_file   <-- dynamic sysfs attributes created
>
>   iomem_init_inode                  <-- fs_initcall (5)
>     iomem_inode = ...               <-- now iomem_get_mapping() works
>
>   pci_sysfs_init                    <-- late_initcall (7)
>     sysfs_initialized = 1           <-- Ugh (see above)
>     for_each_pci_dev(dev)           <-- Ugh
>       pci_create_sysfs_dev_files(dev)
>
> The race is between the pci_sysfs_init() initcall (intended for
> boot-time devices) and the pci_bus_add_device() path (used for all
> devices including hot-added ones).  Pali outlined cases where we call
> pci_create_sysfs_dev_files() from both paths for the same device.
>
> "sysfs_initialized" is a gross hack that prevents this most of the
> time, but not always.  I want to get rid of it and pci_sysfs_init().
>
> Oliver had the excellent idea of using static sysfs attributes to do
> this cleanly [4].  If we can convert things to static attributes, the
> device core creates them in device_add(), so we don't have to create
> them in pci_create_sysfs_dev_files().
>
> Krzysztof recently did some very nice work to convert most things to
> static attributes, e.g., [5].  But we can't do this for the PCI BAR
> attributes because they support ->mmap(), which now depends on
> iomem_get_mapping(), which IIUC doesn't work until after fs_initcalls.

Ah, sorry, yes, I see the race now. And yes, anything that gets in the
way of the static attribute conversion needs fixing. How about
something like this?

diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c
index beb8d1f4fafe..c8bc249750d6 100644
--- a/drivers/pci/pci-sysfs.c
+++ b/drivers/pci/pci-sysfs.c
@@ -1195,7 +1195,7 @@ static int pci_create_attr(struct pci_dev *pdev, int num, int write_combine)
                }
        }
        if (res_attr->mmap)
-               res_attr->mapping = iomem_get_mapping();
+               res_attr->mapping = iomem_get_mapping;
        res_attr->attr.name = res_attr_name;
        res_attr->attr.mode = 0600;
        res_attr->size = pci_resource_len(pdev, num);
diff --git a/fs/sysfs/file.c b/fs/sysfs/file.c
index 9aefa7779b29..a3ee4c32a264 100644
--- a/fs/sysfs/file.c
+++ b/fs/sysfs/file.c
@@ -175,7 +175,7 @@ static int sysfs_kf_bin_open(struct kernfs_open_file *of)
        struct bin_attribute *battr = of->kn->priv;

        if (battr->mapping)
-               of->file->f_mapping = battr->mapping;
+               of->file->f_mapping = battr->mapping();

        return 0;
 }
diff --git a/include/linux/sysfs.h b/include/linux/sysfs.h
index d76a1ddf83a3..fbb7c7df545c 100644
--- a/include/linux/sysfs.h
+++ b/include/linux/sysfs.h
@@ -170,7 +170,7 @@ struct bin_attribute {
        struct attribute        attr;
        size_t                  size;
        void                    *private;
-       struct address_space    *mapping;
+       struct address_space *(*mapping)(void);
        ssize_t (*read)(struct file *, struct kobject *, struct bin_attribute *,
                        char *, loff_t, size_t);
        ssize_t (*write)(struct file *, struct kobject *, struct bin_attribute *,


* Re: [PATCH v4] /dev/mem: Revoke mappings when a driver claims the region
  2021-06-03  4:15       ` Dan Williams
@ 2021-06-03 18:11         ` Bjorn Helgaas
  2021-06-03 18:28           ` Dan Williams
  0 siblings, 1 reply; 9+ messages in thread
From: Bjorn Helgaas @ 2021-06-03 18:11 UTC (permalink / raw)
  To: Dan Williams, Daniel Vetter
  Cc: Greg KH, Arnd Bergmann, Ingo Molnar, Kees Cook, Matthew Wilcox,
	Russell King, Andrew Morton, Linux Kernel Mailing List, Linux MM,
	Linux PCI, Krzysztof Wilczyński, Jason Gunthorpe,
	Christoph Hellwig, Pali Rohár, Oliver O'Halloran

On Wed, Jun 02, 2021 at 09:15:35PM -0700, Dan Williams wrote:
> On Wed, Jun 2, 2021 at 8:40 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> >
> > [+cc Pali, Oliver]
> >
> > On Thu, May 27, 2021 at 02:30:31PM -0700, Dan Williams wrote:
> > > On Thu, May 27, 2021 at 1:58 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > >
> > > > [+cc Daniel, Krzysztof, Jason, Christoph, linux-pci]
> > > >
> > > > On Thu, May 21, 2020 at 02:06:17PM -0700, Dan Williams wrote:
> > > > > Close the hole of holding a mapping over kernel driver takeover event of
> > > > > a given address range.
> > > > >
> > > > > Commit 90a545e98126 ("restrict /dev/mem to idle io memory ranges")
> > > > > introduced CONFIG_IO_STRICT_DEVMEM with the goal of protecting the
> > > > > kernel against scenarios where a /dev/mem user tramples memory that a
> > > > > kernel driver owns. However, this protection only prevents *new* read(),
> > > > > write() and mmap() requests. Established mappings prior to the driver
> > > > > calling request_mem_region() are left alone.
> > > > >
> > > > > Especially with persistent memory, and the core kernel metadata that is
> > > > > stored there, there are plentiful scenarios for a /dev/mem user to
> > > > > violate the expectations of the driver and cause amplified damage.
> > > > >
> > > > > Teach request_mem_region() to find and shoot down active /dev/mem
> > > > > mappings that it believes it has successfully claimed for the exclusive
> > > > > use of the driver. Effectively a driver call to request_mem_region()
> > > > > becomes a hole-punch on the /dev/mem device.
> > > >
> > > > This idea of hole-punching /dev/mem has since been extended to PCI
> > > > BARs via [1].
> > > >
> > > > Correct me if I'm wrong: I think this means that if a user process has
> > > > mmapped a PCI BAR via sysfs, and a kernel driver subsequently requests
> > > > that region via pci_request_region() or similar, we punch holes in the
> > > > user process mmap.  The driver might be happy, but my guess is the
> > > > user starts seeing segmentation violations for no obvious reason and
> > > > is not happy.
> > > >
> > > > Apart from the user process issue, the implementation of [1] is
> > > > problematic for PCI because the mmappable sysfs attributes now depend
> > > > on iomem_init_inode(), an fs_initcall, which means they can't be
> > > > static attributes, which ultimately leads to races in creating them.
> > >
> > > See the comments in iomem_get_mapping(), and revoke_iomem():
> > >
> > >         /*
> > >          * Check that the initialization has completed. Losing the race
> > >          * is ok because it means drivers are claiming resources before
> > >          * the fs_initcall level of init and prevent iomem_get_mapping users
> > >          * from establishing mappings.
> > >          */
> > >
> > > ...the observation being that it is ok for the revocation inode to
> > > come online later in the boot process because userspace won't be able to
> > > use the fs yet. So any missed calls to revoke_iomem() would fall back
> > > to userspace just seeing the resource busy in the first instance. I.e.
> > > through the normal devmem_is_allowed() exclusion.
> >
> > I did see that comment, but the race I meant is different.  Pali wrote
> > up a nice analysis of it [3].
> >
> > Here's the typical enumeration flow for PCI:
> >
> >   acpi_pci_root_add                 <-- subsys_initcall (4)
> >     pci_acpi_scan_root
> >       ...
> >         pci_device_add
> >           device_initialize
> >           device_add
> >             device_add_attrs        <-- static sysfs attributes created
> >     ...
> >     pci_bus_add_devices
> >       pci_bus_add_device
> >         pci_create_sysfs_dev_files
> >           if (!sysfs_initialized) return;    <-- Ugh :)
> >           ...
> >             attr->mmap = pci_mmap_resource_uc
> >             attr->mapping = iomem_get_mapping()  <-- new dependency
> >               return iomem_inode->i_mapping
> >             sysfs_create_bin_file   <-- dynamic sysfs attributes created
> >
> >   iomem_init_inode                  <-- fs_initcall (5)
> >     iomem_inode = ...               <-- now iomem_get_mapping() works
> >
> >   pci_sysfs_init                    <-- late_initcall (7)
> >     sysfs_initialized = 1           <-- Ugh (see above)
> >     for_each_pci_dev(dev)           <-- Ugh
> >       pci_create_sysfs_dev_files(dev)
> >
> > The race is between the pci_sysfs_init() initcall (intended for
> > boot-time devices) and the pci_bus_add_device() path (used for all
> > devices including hot-added ones).  Pali outlined cases where we call
> > pci_create_sysfs_dev_files() from both paths for the same device.
> >
> > "sysfs_initialized" is a gross hack that prevents this most of the
> > time, but not always.  I want to get rid of it and pci_sysfs_init().
> >
> > Oliver had the excellent idea of using static sysfs attributes to do
> > this cleanly [4].  If we can convert things to static attributes, the
> > device core creates them in device_add(), so we don't have to create
> > them in pci_create_sysfs_dev_files().
> >
> > Krzysztof recently did some very nice work to convert most things to
> > static attributes, e.g., [5].  But we can't do this for the PCI BAR
> > attributes because they support ->mmap(), which now depends on
> > iomem_get_mapping(), which IIUC doesn't work until after fs_initcalls.
> 
> Ah, sorry, yes, I see the race now. And yes, anything that gets in the
> way of the static attribute conversion needs fixing. How about
> something like this?

That looks like it would solve our problem, thanks a lot!  Obvious in
retrospect, like all good ideas :)

Krzysztof noticed a couple other users of iomem_get_mapping()
added by:

  71a1d8ed900f ("resource: Move devmem revoke code to resource framework")
  636b21b50152 ("PCI: Revoke mappings like devmem")

I *could* extend your patch below to cover all these, but it's kind of
outside my comfort zone, so I'd feel better if Daniel V (who wrote the
commits above) could take a look and do a follow-up.

If I could take the resulting patch via PCI, we might even be able to
get the last static attribute conversions in this cycle.

> diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c
> index beb8d1f4fafe..c8bc249750d6 100644
> --- a/drivers/pci/pci-sysfs.c
> +++ b/drivers/pci/pci-sysfs.c
> @@ -1195,7 +1195,7 @@ static int pci_create_attr(struct pci_dev *pdev, int num, int write_combine)
>                 }
>         }
>         if (res_attr->mmap)
> -               res_attr->mapping = iomem_get_mapping();
> +               res_attr->mapping = iomem_get_mapping;
>         res_attr->attr.name = res_attr_name;
>         res_attr->attr.mode = 0600;
>         res_attr->size = pci_resource_len(pdev, num);
> diff --git a/fs/sysfs/file.c b/fs/sysfs/file.c
> index 9aefa7779b29..a3ee4c32a264 100644
> --- a/fs/sysfs/file.c
> +++ b/fs/sysfs/file.c
> @@ -175,7 +175,7 @@ static int sysfs_kf_bin_open(struct kernfs_open_file *of)
>         struct bin_attribute *battr = of->kn->priv;
> 
>         if (battr->mapping)
> -               of->file->f_mapping = battr->mapping;
> +               of->file->f_mapping = battr->mapping();
> 
>         return 0;
>  }
> diff --git a/include/linux/sysfs.h b/include/linux/sysfs.h
> index d76a1ddf83a3..fbb7c7df545c 100644
> --- a/include/linux/sysfs.h
> +++ b/include/linux/sysfs.h
> @@ -170,7 +170,7 @@ struct bin_attribute {
>         struct attribute        attr;
>         size_t                  size;
>         void                    *private;
> -       struct address_space    *mapping;
> +       struct address_space *(*mapping)(void);
>         ssize_t (*read)(struct file *, struct kobject *, struct bin_attribute *,
>                         char *, loff_t, size_t);
>         ssize_t (*write)(struct file *, struct kobject *, struct bin_attribute *,


* Re: [PATCH v4] /dev/mem: Revoke mappings when a driver claims the region
  2021-06-03 18:11         ` Bjorn Helgaas
@ 2021-06-03 18:28           ` Dan Williams
  0 siblings, 0 replies; 9+ messages in thread
From: Dan Williams @ 2021-06-03 18:28 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Daniel Vetter, Greg KH, Arnd Bergmann, Ingo Molnar, Kees Cook,
	Matthew Wilcox, Russell King, Andrew Morton,
	Linux Kernel Mailing List, Linux MM, Linux PCI,
	Krzysztof Wilczyński, Jason Gunthorpe, Christoph Hellwig,
	Pali Rohár, Oliver O'Halloran

On Thu, Jun 3, 2021 at 11:12 AM Bjorn Helgaas <helgaas@kernel.org> wrote:
>
> On Wed, Jun 02, 2021 at 09:15:35PM -0700, Dan Williams wrote:
> > On Wed, Jun 2, 2021 at 8:40 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > >
> > > [+cc Pali, Oliver]
> > >
> > > On Thu, May 27, 2021 at 02:30:31PM -0700, Dan Williams wrote:
> > > > On Thu, May 27, 2021 at 1:58 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > > >
> > > > > [+cc Daniel, Krzysztof, Jason, Christoph, linux-pci]
> > > > >
> > > > > On Thu, May 21, 2020 at 02:06:17PM -0700, Dan Williams wrote:
> > > > > > Close the hole of holding a mapping over kernel driver takeover event of
> > > > > > a given address range.
> > > > > >
> > > > > > Commit 90a545e98126 ("restrict /dev/mem to idle io memory ranges")
> > > > > > introduced CONFIG_IO_STRICT_DEVMEM with the goal of protecting the
> > > > > > kernel against scenarios where a /dev/mem user tramples memory that a
> > > > > > kernel driver owns. However, this protection only prevents *new* read(),
> > > > > > write() and mmap() requests. Established mappings prior to the driver
> > > > > > calling request_mem_region() are left alone.
> > > > > >
> > > > > > Especially with persistent memory, and the core kernel metadata that is
> > > > > > stored there, there are plentiful scenarios for a /dev/mem user to
> > > > > > violate the expectations of the driver and cause amplified damage.
> > > > > >
> > > > > > Teach request_mem_region() to find and shoot down active /dev/mem
> > > > > > mappings that it believes it has successfully claimed for the exclusive
> > > > > > use of the driver. Effectively a driver call to request_mem_region()
> > > > > > becomes a hole-punch on the /dev/mem device.
> > > > >
> > > > > This idea of hole-punching /dev/mem has since been extended to PCI
> > > > > BARs via [1].
> > > > >
> > > > > Correct me if I'm wrong: I think this means that if a user process has
> > > > > mmapped a PCI BAR via sysfs, and a kernel driver subsequently requests
> > > > > that region via pci_request_region() or similar, we punch holes in the
> > > > > user process mmap.  The driver might be happy, but my guess is the
> > > > > user starts seeing segmentation violations for no obvious reason and
> > > > > is not happy.
> > > > >
> > > > > Apart from the user process issue, the implementation of [1] is
> > > > > problematic for PCI because the mmappable sysfs attributes now depend
> > > > > on iomem_init_inode(), an fs_initcall, which means they can't be
> > > > > static attributes, which ultimately leads to races in creating them.
> > > >
> > > > See the comments in iomem_get_mapping(), and revoke_iomem():
> > > >
> > > >         /*
> > > >          * Check that the initialization has completed. Losing the race
> > > >          * is ok because it means drivers are claiming resources before
> > > >          * the fs_initcall level of init and prevent iomem_get_mapping users
> > > >          * from establishing mappings.
> > > >          */
> > > >
> > > > ...the observation being that it is ok for the revocation inode to
> > > > come online later in the boot process because userspace won't be able to
> > > > use the fs yet. So any missed calls to revoke_iomem() would fall back
> > > > to userspace just seeing the resource busy in the first instance. I.e.
> > > > through the normal devmem_is_allowed() exclusion.
> > >
> > > I did see that comment, but the race I meant is different.  Pali wrote
> > > up a nice analysis of it [3].
> > >
> > > Here's the typical enumeration flow for PCI:
> > >
> > >   acpi_pci_root_add                 <-- subsys_initcall (4)
> > >     pci_acpi_scan_root
> > >       ...
> > >         pci_device_add
> > >           device_initialize
> > >           device_add
> > >             device_add_attrs        <-- static sysfs attributes created
> > >     ...
> > >     pci_bus_add_devices
> > >       pci_bus_add_device
> > >         pci_create_sysfs_dev_files
> > >           if (!sysfs_initialized) return;    <-- Ugh :)
> > >           ...
> > >             attr->mmap = pci_mmap_resource_uc
> > >             attr->mapping = iomem_get_mapping()  <-- new dependency
> > >               return iomem_inode->i_mapping
> > >             sysfs_create_bin_file   <-- dynamic sysfs attributes created
> > >
> > >   iomem_init_inode                  <-- fs_initcall (5)
> > >     iomem_inode = ...               <-- now iomem_get_mapping() works
> > >
> > >   pci_sysfs_init                    <-- late_initcall (7)
> > >     sysfs_initialized = 1           <-- Ugh (see above)
> > >     for_each_pci_dev(dev)           <-- Ugh
> > >       pci_create_sysfs_dev_files(dev)
> > >
> > > The race is between the pci_sysfs_init() initcall (intended for
> > > boot-time devices) and the pci_bus_add_device() path (used for all
> > > devices including hot-added ones).  Pali outlined cases where we call
> > > pci_create_sysfs_dev_files() from both paths for the same device.
> > >
> > > "sysfs_initialized" is a gross hack that prevents this most of the
> > > time, but not always.  I want to get rid of it and pci_sysfs_init().
> > >
> > > Oliver had the excellent idea of using static sysfs attributes to do
> > > this cleanly [4].  If we can convert things to static attributes, the
> > > device core creates them in device_add(), so we don't have to create
> > > them in pci_create_sysfs_dev_files().
> > >
> > > Krzysztof recently did some very nice work to convert most things to
> > > static attributes, e.g., [5].  But we can't do this for the PCI BAR
> > > attributes because they support ->mmap(), which now depends on
> > > iomem_get_mapping(), which IIUC doesn't work until after fs_initcalls.
> >
> > Ah, sorry, yes, I see the race now. And yes, anything that gets in the
> > way of the static attribute conversion needs fixing. How about
> > something like this?
>
> That looks like it would solve our problem, thanks a lot!  Obvious in
> retrospect, like all good ideas :)
>
> Krzysztof noticed a couple other users of iomem_get_mapping()
> added by:
>
>   71a1d8ed900f ("resource: Move devmem revoke code to resource framework")
>   636b21b50152 ("PCI: Revoke mappings like devmem")
>
> I *could* extend your patch below to cover all these, but it's kind of
> outside my comfort zone, so I'd feel better if Daniel V (who wrote the
> commits above) could take a look and do a follow-up.
>
> If I could take the resulting patch via PCI, we might even be able to
> get the last static attribute conversions in this cycle.

Sounds good, I'll circle back and give it a try if Daniel does not get
a chance to chime in in the next few days.


end of thread, other threads:[~2021-06-03 18:29 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <159009507306.847224.8502634072429766747.stgit@dwillia2-desk3.amr.corp.intel.com>
2021-05-27 20:58 ` [PATCH v4] /dev/mem: Revoke mappings when a driver claims the region Bjorn Helgaas
2021-05-27 21:30   ` Dan Williams
2021-05-28  8:58     ` David Hildenbrand
2021-05-28 16:42       ` Dan Williams
2021-05-28 16:51         ` David Hildenbrand
2021-06-03  3:39     ` Bjorn Helgaas
2021-06-03  4:15       ` Dan Williams
2021-06-03 18:11         ` Bjorn Helgaas
2021-06-03 18:28           ` Dan Williams
