* [PATCH v2 0/11] mm: Hardened usercopy
@ 2016-07-13 21:55 ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Hi,

This is a start of the mainline port of PAX_USERCOPY[1]. After I started
writing tests (now in lkdtm in -next) for Casey's earlier port[2], I
kept tweaking things further and further until I ended up with a whole
new patch series. Along the way, I also incorporated Rik's feedback and
made a number of other changes and clean-ups.

Based on my understanding, PAX_USERCOPY was designed to catch a
few classes of flaws (mainly bad bounds checking) around the use of
copy_to_user()/copy_from_user(). These changes don't touch get_user() and
put_user(), since those operate on constant-sized lengths and tend to be
much less vulnerable. There are effectively three distinct protections in
the whole series, each of which I've given a separate CONFIG, though this
patch set is only the first of the three intended protections. (Generally
speaking, PAX_USERCOPY covers what I'm calling CONFIG_HARDENED_USERCOPY
(this) and CONFIG_HARDENED_USERCOPY_WHITELIST (future), and
PAX_USERCOPY_SLABS covers CONFIG_HARDENED_USERCOPY_SPLIT_KMALLOC
(future).)

This series, which adds CONFIG_HARDENED_USERCOPY, checks that objects
being copied to/from userspace meet certain criteria (a rough sketch of
how these checks compose follows the list):
- if the address is a heap object, the copy size must not exceed the
  object's allocated size. (This will catch all kinds of heap overflow
  flaws.)
- if the address range is in the current process stack, it must be within
  the current stack frame (if such checking is possible) or at least
  entirely within the current process's stack. (This could catch large
  lengths that would have extended beyond the current process stack, or
  overflows if their length extends back into the original stack.)
- if the address range is part of kernel data, rodata, or bss, allow it.
- if the address range is page-allocated, it must not span multiple
  allocations.
- if the address is within the kernel text, reject it.
- everything else is accepted.
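
To make the flow concrete, here is a rough, hedged sketch in C of how the
criteria above can compose into a single pre-copy check. It is illustrative
only: the helper names and the reporting style are assumptions made for the
sketch, not necessarily the exact interfaces this series adds (the real
entry point, introduced in patch 2, is __check_object_size() in
mm/usercopy.c):

	#include <linux/kernel.h>
	#include <linux/bug.h>

	/*
	 * Hypothetical helpers, one per criterion above; each returns a
	 * short description of the violation, or NULL if the object passes.
	 */
	extern const char *check_heap_object(const void *ptr, unsigned long n);
	extern const char *check_stack_object(const void *ptr, unsigned long n);
	extern const char *check_kernel_text_object(const void *ptr,
						    unsigned long n);

	static void sketch_check_object_size(const void *ptr, unsigned long n,
					     bool to_user)
	{
		const char *err;

		/* Zero-length copies are trivially fine. */
		if (!n)
			return;

		/* Reject NULL pointers and ranges that wrap the address space. */
		if (!ptr || (unsigned long)ptr + n < (unsigned long)ptr) {
			err = "invalid range";
			goto report;
		}

		/* Heap object: the copy must fit inside the allocated object. */
		err = check_heap_object(ptr, n);
		if (err)
			goto report;

		/*
		 * Stack object: must stay within the current frame, or at
		 * least within the current process stack.
		 */
		err = check_stack_object(ptr, n);
		if (err)
			goto report;

		/* Kernel text must never be copied to or from userspace. */
		err = check_kernel_text_object(ptr, n);
		if (err)
			goto report;

		/* data, rodata, bss, and everything else is allowed. */
		return;

	report:
		pr_err("usercopy: kernel memory %s attempt (%s, %lu bytes)\n",
		       to_user ? "exposure" : "overwrite", err, n);
		BUG();
	}

The ordering only matters for reporting; each helper independently decides
whether the object belongs to its region.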

The patches in the series are:
- Support for arch-specific stack frame checking:
        1- mm: Implement stack frame object validation
- The core copy_to/from_user() checks, without the slab object checks:
        2- mm: Hardened usercopy
- Per-arch enablement of the protection:
        3- x86/uaccess: Enable hardened usercopy
        4- ARM: uaccess: Enable hardened usercopy
        5- arm64/uaccess: Enable hardened usercopy
        6- ia64/uaccess: Enable hardened usercopy
        7- powerpc/uaccess: Enable hardened usercopy
        8- sparc/uaccess: Enable hardened usercopy
        9- s390/uaccess: Enable hardened usercopy
- The heap allocator implementation of object size checking:
       10- mm: SLAB hardened usercopy support
       11- mm: SLUB hardened usercopy support

Some notes:

- This is expected to apply on top of -next, which contains fixes for the
  position of _etext on both arm and arm64.

- I couldn't detect a measurable performance change with these features
  enabled. Kernel build times were unchanged, hackbench was unchanged,
  etc. I think we could flip this to "on by default" at some point, but
  for now, I'm leaving it off until I can get some more definitive
  measurements.

- The SLOB support extracted from grsecurity seems entirely broken. I
  have no idea what's going on there; I spent my time testing SLAB and
  SLUB instead. Having someone else look at SLOB would be nice, but this
  series doesn't depend on it.

Additional features that would be nice, but aren't blocking this series:

- Needs more architecture support for stack frame checking (only x86 now).


Thanks!

-Kees

[1] https://grsecurity.net/download.php "grsecurity - test kernel patch"
[2] http://www.openwall.com/lists/kernel-hardening/2016/05/19/5

v2:
- added s390 support
- handle slub red zone
- disallow writes to rodata area
- stack frame walker now CONFIG-controlled arch-specific helper

^ permalink raw reply	[flat|nested] 203+ messages in thread

* [PATCH v2 01/11] mm: Implement stack frame object validation
  2016-07-13 21:55 ` Kees Cook
                     ` (3 preceding siblings ...)
  (?)
@ 2016-07-13 21:55   ` Kees Cook
  -1 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

This creates a per-architecture function, arch_within_stack_frames(), that
should validate whether a given object is entirely contained by a kernel
stack frame. The initial implementation is for x86.

This is based on code from PaX.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/Kconfig                       |  9 ++++++++
 arch/x86/Kconfig                   |  1 +
 arch/x86/include/asm/thread_info.h | 44 ++++++++++++++++++++++++++++++++++++++
 include/linux/thread_info.h        |  9 ++++++++
 4 files changed, 63 insertions(+)

diff --git a/arch/Kconfig b/arch/Kconfig
index d794384a0404..5e2776562035 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -424,6 +424,15 @@ config CC_STACKPROTECTOR_STRONG
 
 endchoice
 
+config HAVE_ARCH_WITHIN_STACK_FRAMES
+	bool
+	help
+	  An architecture should select this if it can walk the kernel stack
+	  frames to determine if an object is part of either the arguments
+	  or local variables (i.e. that it excludes saved return addresses,
+	  and similar) by implementing an inline arch_within_stack_frames(),
+	  which is used by CONFIG_HARDENED_USERCOPY.
+
 config HAVE_CONTEXT_TRACKING
 	bool
 	help
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 0a7b885964ba..4407f596b72c 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -91,6 +91,7 @@ config X86
 	select HAVE_ARCH_SOFT_DIRTY		if X86_64
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE
+	select HAVE_ARCH_WITHIN_STACK_FRAMES
 	select HAVE_EBPF_JIT			if X86_64
 	select HAVE_CC_STACKPROTECTOR
 	select HAVE_CMPXCHG_DOUBLE
diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index 30c133ac05cd..ab386f1336f2 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -180,6 +180,50 @@ static inline unsigned long current_stack_pointer(void)
 	return sp;
 }
 
+/*
+ * Walks up the stack frames to make sure that the specified object is
+ * entirely contained by a single stack frame.
+ *
+ * Returns:
+ *		 1 if within a frame
+ *		-1 if placed across a frame boundary (or outside stack)
+ *		 0 unable to determine (no frame pointers, etc)
+ */
+static inline int arch_within_stack_frames(const void * const stack,
+					   const void * const stackend,
+					   const void *obj, unsigned long len)
+{
+#if defined(CONFIG_FRAME_POINTER)
+	const void *frame = NULL;
+	const void *oldframe;
+
+	oldframe = __builtin_frame_address(1);
+	if (oldframe)
+		frame = __builtin_frame_address(2);
+	/*
+	 * low ----------------------------------------------> high
+	 * [saved bp][saved ip][args][local vars][saved bp][saved ip]
+	 *                     ^----------------^
+	 *               allow copies only within here
+	 */
+	while (stack <= frame && frame < stackend) {
+		/*
+		 * If obj + len extends past the last frame, this
+		 * check won't pass and the next frame will be 0,
+		 * causing us to bail out and correctly report
+		 * the copy as invalid.
+		 */
+		if (obj + len <= frame)
+			return obj >= oldframe + 2 * sizeof(void *) ? 1 : -1;
+		oldframe = frame;
+		frame = *(const void * const *)frame;
+	}
+	return -1;
+#else
+	return 0;
+#endif
+}
+
 #else /* !__ASSEMBLY__ */
 
 #ifdef CONFIG_X86_64
diff --git a/include/linux/thread_info.h b/include/linux/thread_info.h
index b4c2a485b28a..3d5c80b4391d 100644
--- a/include/linux/thread_info.h
+++ b/include/linux/thread_info.h
@@ -146,6 +146,15 @@ static inline bool test_and_clear_restore_sigmask(void)
 #error "no set_restore_sigmask() provided and default one won't work"
 #endif
 
+#ifndef CONFIG_HAVE_ARCH_WITHIN_STACK_FRAMES
+static inline int arch_within_stack_frames(const void * const stack,
+					   const void * const stackend,
+					   const void *obj, unsigned long len)
+{
+	return 0;
+}
+#endif
+
 #endif	/* __KERNEL__ */
 
 #endif /* _LINUX_THREAD_INFO_H */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 02/11] mm: Hardened usercopy
  2016-07-13 21:55 ` Kees Cook
                     ` (3 preceding siblings ...)
  (?)
@ 2016-07-13 21:55   ` Kees Cook
  -1 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

This is the start of porting PAX_USERCOPY into the mainline kernel. This
is the first set of features, controlled by CONFIG_HARDENED_USERCOPY. The
work is based on code by PaX Team and Brad Spengler, and an earlier port
from Casey Schaufler. Additional non-slab page tests are from Rik van Riel.

This patch contains the logic for validating several conditions when
performing copy_to_user() and copy_from_user() on the kernel object
being copied to/from (a rough sketch of how an architecture hooks this
check follows the list):
- address range doesn't wrap around
- address range isn't NULL or zero-allocated (with a non-zero copy size)
- if on the slab allocator:
  - copy size must be less than or equal to the object's size (when the
    check is implemented in the allocator, which appears in subsequent
    patches)
- otherwise, object must not span page allocations
- if on the stack:
  - object must not extend before/after the current process stack
  - object must be contained by the current stack frame (when there is
    arch/build support for identifying stack frames)
- object must not overlap with kernel text
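
The wrapper below is a hedged, illustrative sketch (not code taken from
any single arch patch in this series) of where the new check_object_size()
call, added to <linux/thread_info.h> by this patch, ends up sitting in a
circa-4.7 copy_to_user() path; the access_ok()/__copy_to_user() details are
the era's generic uaccess idiom, not something this series changes:

	static __always_inline unsigned long
	copy_to_user_sketch(void __user *to, const void *from, unsigned long n)
	{
		might_fault();
		if (likely(access_ok(VERIFY_WRITE, to, n))) {
			/*
			 * New with CONFIG_HARDENED_USERCOPY: validate the
			 * kernel-side object before exposing it to userspace.
			 */
			check_object_size(from, n, true);
			n = __copy_to_user(to, from, n);
		}
		return n;
	}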

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/Kconfig                |   7 ++
 include/linux/slab.h        |  12 +++
 include/linux/thread_info.h |  15 +++
 mm/Makefile                 |   4 +
 mm/usercopy.c               | 219 ++++++++++++++++++++++++++++++++++++++++++++
 security/Kconfig            |  27 ++++++
 6 files changed, 284 insertions(+)
 create mode 100644 mm/usercopy.c
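
Note on the allocator hook: the slab.h hunk below only declares
__check_heap_object(); the real SLAB and SLUB implementations come in
later patches of this series. Purely as a sketch of what an allocator
is expected to provide (details differ from the actual patches, and
the field names assume a SLUB-like kmem_cache):

	/* Sketch only: return NULL if [ptr, ptr+n) stays inside one object. */
	const char *__check_heap_object(const void *ptr, unsigned long n,
					struct page *page)
	{
		struct kmem_cache *s = page->slab_cache;
		unsigned long offset;

		/* Offset of ptr within the object it points into. */
		offset = (ptr - page_address(page)) % s->size;

		/* Allow copies that do not cross the object's usable size. */
		if (offset <= s->object_size && n <= s->object_size - offset)
			return NULL;

		/* Otherwise name the cache so the report is useful. */
		return s->name;
	}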

diff --git a/arch/Kconfig b/arch/Kconfig
index 5e2776562035..195ee4cc939a 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -433,6 +433,13 @@ config HAVE_ARCH_WITHIN_STACK_FRAMES
 	  and similar) by implementing an inline arch_within_stack_frames(),
 	  which is used by CONFIG_HARDENED_USERCOPY.
 
+config HAVE_ARCH_LINEAR_KERNEL_MAPPING
+	bool
+	help
+	  An architecture should select this if it has a secondary linear
+	  mapping of the kernel text. This is used to verify that kernel
+	  text exposures are not visible under CONFIG_HARDENED_USERCOPY.
+
 config HAVE_CONTEXT_TRACKING
 	bool
 	help
diff --git a/include/linux/slab.h b/include/linux/slab.h
index aeb3e6d00a66..96a16a3fb7cb 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -155,6 +155,18 @@ void kfree(const void *);
 void kzfree(const void *);
 size_t ksize(const void *);
 
+#ifdef CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR
+const char *__check_heap_object(const void *ptr, unsigned long n,
+				struct page *page);
+#else
+static inline const char *__check_heap_object(const void *ptr,
+					      unsigned long n,
+					      struct page *page)
+{
+	return NULL;
+}
+#endif
+
 /*
  * Some archs want to perform DMA into kmalloc caches and need a guaranteed
  * alignment larger than the alignment of a 64-bit integer.
diff --git a/include/linux/thread_info.h b/include/linux/thread_info.h
index 3d5c80b4391d..f24b99eac969 100644
--- a/include/linux/thread_info.h
+++ b/include/linux/thread_info.h
@@ -155,6 +155,21 @@ static inline int arch_within_stack_frames(const void * const stack,
 }
 #endif
 
+#ifdef CONFIG_HARDENED_USERCOPY
+extern void __check_object_size(const void *ptr, unsigned long n,
+					bool to_user);
+
+static inline void check_object_size(const void *ptr, unsigned long n,
+				     bool to_user)
+{
+	__check_object_size(ptr, n, to_user);
+}
+#else
+static inline void check_object_size(const void *ptr, unsigned long n,
+				     bool to_user)
+{ }
+#endif /* CONFIG_HARDENED_USERCOPY */
+
 #endif	/* __KERNEL__ */
 
 #endif /* _LINUX_THREAD_INFO_H */
diff --git a/mm/Makefile b/mm/Makefile
index 78c6f7dedb83..32d37247c7e5 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -21,6 +21,9 @@ KCOV_INSTRUMENT_memcontrol.o := n
 KCOV_INSTRUMENT_mmzone.o := n
 KCOV_INSTRUMENT_vmstat.o := n
 
+# Since __builtin_frame_address does work as used, disable the warning.
+CFLAGS_usercopy.o += $(call cc-disable-warning, frame-address)
+
 mmu-y			:= nommu.o
 mmu-$(CONFIG_MMU)	:= gup.o highmem.o memory.o mincore.o \
 			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
@@ -99,3 +102,4 @@ obj-$(CONFIG_USERFAULTFD) += userfaultfd.o
 obj-$(CONFIG_IDLE_PAGE_TRACKING) += page_idle.o
 obj-$(CONFIG_FRAME_VECTOR) += frame_vector.o
 obj-$(CONFIG_DEBUG_PAGE_REF) += debug_page_ref.o
+obj-$(CONFIG_HARDENED_USERCOPY) += usercopy.o
diff --git a/mm/usercopy.c b/mm/usercopy.c
new file mode 100644
index 000000000000..4161a1fb1909
--- /dev/null
+++ b/mm/usercopy.c
@@ -0,0 +1,219 @@
+/*
+ * This implements the various checks for CONFIG_HARDENED_USERCOPY*,
+ * which are designed to protect kernel memory from needless exposure
+ * and overwrite under many unintended conditions. This code is based
+ * on PAX_USERCOPY, which is:
+ *
+ * Copyright (C) 2001-2016 PaX Team, Bradley Spengler, Open Source
+ * Security Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/mm.h>
+#include <linux/slab.h>
+#include <asm/sections.h>
+
+/*
+ * Checks if a given pointer and length is contained by the current
+ * stack frame (if possible).
+ *
+ *	0: not at all on the stack
+ *	1: fully within a valid stack frame
+ *	2: fully on the stack (when can't do frame-checking)
+ *	-1: error condition (invalid stack position or bad stack frame)
+ */
+static noinline int check_stack_object(const void *obj, unsigned long len)
+{
+	const void * const stack = task_stack_page(current);
+	const void * const stackend = stack + THREAD_SIZE;
+	int ret;
+
+	/* Object is not on the stack at all. */
+	if (obj + len <= stack || stackend <= obj)
+		return 0;
+
+	/*
+	 * Reject: object partially overlaps the stack (passing the
+	 * check above means at least one end is within the stack,
+	 * so if this check fails, the other end is outside the stack).
+	 */
+	if (obj < stack || stackend < obj + len)
+		return -1;
+
+	/* Check if object is safely within a valid frame. */
+	ret = arch_within_stack_frames(stack, stackend, obj, len);
+	if (ret)
+		return ret;
+
+	return 2;
+}
+
+static void report_usercopy(const void *ptr, unsigned long len,
+			    bool to_user, const char *type)
+{
+	pr_emerg("kernel memory %s attempt detected %s %p (%s) (%lu bytes)\n",
+		to_user ? "exposure" : "overwrite",
+		to_user ? "from" : "to", ptr, type ? : "unknown", len);
+	dump_stack();
+	do_group_exit(SIGKILL);
+}
+
+/* Returns true if any portion of [ptr,ptr+n) overlaps with [low,high). */
+static bool overlaps(const void *ptr, unsigned long n, unsigned long low,
+		     unsigned long high)
+{
+	unsigned long check_low = (uintptr_t)ptr;
+	unsigned long check_high = check_low + n;
+
+	/* Does not overlap if entirely above or entirely below. */
+	if (check_low >= high || check_high < low)
+		return false;
+
+	return true;
+}
+
+/* Is this address range in the kernel text area? */
+static inline const char *check_kernel_text_object(const void *ptr,
+						   unsigned long n)
+{
+	unsigned long textlow = (unsigned long)_stext;
+	unsigned long texthigh = (unsigned long)_etext;
+
+	if (overlaps(ptr, n, textlow, texthigh))
+		return "<kernel text>";
+
+#ifdef CONFIG_HAVE_ARCH_LINEAR_KERNEL_MAPPING
+	/* Check against linear mapping as well. */
+	if (overlaps(ptr, n, (unsigned long)__va(__pa(textlow)),
+		     (unsigned long)__va(__pa(texthigh))))
+		return "<linear kernel text>";
+#endif
+
+	return NULL;
+}
+
+static inline const char *check_bogus_address(const void *ptr, unsigned long n)
+{
+	/* Reject if object wraps past end of memory. */
+	if (ptr + n < ptr)
+		return "<wrapped address>";
+
+	/* Reject if NULL or ZERO-allocation. */
+	if (ZERO_OR_NULL_PTR(ptr))
+		return "<null>";
+
+	return NULL;
+}
+
+static inline const char *check_heap_object(const void *ptr, unsigned long n,
+					    bool to_user)
+{
+	struct page *page, *endpage;
+	const void *end = ptr + n - 1;
+
+	if (!virt_addr_valid(ptr))
+		return NULL;
+
+	page = virt_to_head_page(ptr);
+
+	/* Check slab allocator for flags and size. */
+	if (PageSlab(page))
+		return __check_heap_object(ptr, n, page);
+
+	/*
+	 * Sometimes the kernel data regions are not marked Reserved (see
+	 * check below). And sometimes [_sdata,_edata) does not cover
+	 * rodata and/or bss, so check each range explicitly.
+	 */
+
+	/* Allow reads of kernel rodata region (if not marked as Reserved). */
+	if (ptr >= (const void *)__start_rodata &&
+	    end <= (const void *)__end_rodata) {
+		if (!to_user)
+			return "<rodata>";
+		return NULL;
+	}
+
+	/* Allow kernel data region (if not marked as Reserved). */
+	if (ptr >= (const void *)_sdata && end <= (const void *)_edata)
+		return NULL;
+
+	/* Allow kernel bss region (if not marked as Reserved). */
+	if (ptr >= (const void *)__bss_start &&
+	    end <= (const void *)__bss_stop)
+		return NULL;
+
+	/* Is the object wholly within one base page? */
+	if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
+		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
+		return NULL;
+
+	/* Allow if start and end are inside the same compound page. */
+	endpage = virt_to_head_page(end);
+	if (likely(endpage == page))
+		return NULL;
+
+	/* Allow special areas, device memory, and sometimes kernel data. */
+	if (PageReserved(page) && PageReserved(endpage))
+		return NULL;
+
+	/* Uh oh. The "object" spans several independently allocated pages. */
+	return "<spans multiple pages>";
+}
+
+/*
+ * Validates that the given object is one of:
+ * - known safe heap object
+ * - known safe stack object
+ * - not in kernel text
+ */
+void __check_object_size(const void *ptr, unsigned long n, bool to_user)
+{
+	const char *err;
+
+	/* Skip all tests if size is zero. */
+	if (!n)
+		return;
+
+	/* Check for invalid addresses. */
+	err = check_bogus_address(ptr, n);
+	if (err)
+		goto report;
+
+	/* Check for bad heap object. */
+	err = check_heap_object(ptr, n, to_user);
+	if (err)
+		goto report;
+
+	/* Check for bad stack object. */
+	switch (check_stack_object(ptr, n)) {
+	case 0:
+		/* Object is not touching the current process stack. */
+		break;
+	case 1:
+	case 2:
+		/*
+		 * Object is either in the correct frame (when it
+		 * is possible to check) or just generally on the
+		 * process stack (when frame checking not available).
+		 */
+		return;
+	default:
+		err = "<process stack>";
+		goto report;
+	}
+
+	/* Check for object in kernel to avoid text exposure. */
+	err = check_kernel_text_object(ptr, n);
+	if (!err)
+		return;
+
+report:
+	report_usercopy(ptr, n, to_user, err);
+}
+EXPORT_SYMBOL(__check_object_size);
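
The security/Kconfig help text below describes the per-architecture
contract: the arch calls check_object_size() immediately before its
low-level user copy. A minimal sketch of such a call site (simplified;
the real per-arch hooks are added later in this series, and
raw_arch_copy_to_user() is a made-up stand-in for the arch primitive):

	static inline unsigned long
	copy_to_user(void __user *to, const void *from, unsigned long n)
	{
		/* Validate the kernel-side object before copying out. */
		check_object_size(from, n, true);
		return raw_arch_copy_to_user(to, from, n);
	}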
diff --git a/security/Kconfig b/security/Kconfig
index 176758cdfa57..63340ad0b9f9 100644
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -118,6 +118,33 @@ config LSM_MMAP_MIN_ADDR
 	  this low address space will need the permission specific to the
 	  systems running LSM.
 
+config HAVE_HARDENED_USERCOPY_ALLOCATOR
+	bool
+	help
+	  The heap allocator implements __check_heap_object() for
+	  validating memory ranges against heap object sizes in
+	  support of CONFIG_HARDENED_USERCOPY.
+
+config HAVE_ARCH_HARDENED_USERCOPY
+	bool
+	help
+	  The architecture supports CONFIG_HARDENED_USERCOPY by
+	  calling check_object_size() just before performing the
+	  userspace copies in the low level implementation of
+	  copy_to_user() and copy_from_user().
+
+config HARDENED_USERCOPY
+	bool "Harden memory copies between kernel and userspace"
+	depends on HAVE_ARCH_HARDENED_USERCOPY
+	help
+	  This option checks for obviously wrong memory regions when
+	  copying memory to/from the kernel (via copy_to_user() and
+	  copy_from_user() functions) by rejecting memory ranges that
+	  are larger than the specified heap object, span multiple
+	  separately allocated pages, are not on the process stack,
+	  or are part of the kernel text. This kills entire classes
+	  of heap overflow exploits and similar kernel memory exposures.
+
 source security/selinux/Kconfig
 source security/smack/Kconfig
 source security/tomoyo/Kconfig
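
With an architecture that selects HAVE_ARCH_HARDENED_USERCOPY (the
per-arch enablement patches later in this series), turning on the
protection is then a single option, for example as a .config fragment:

	CONFIG_HARDENED_USERCOPY=y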
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 02/11] mm: Hardened usercopy
@ 2016-07-13 21:55   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:55 UTC (permalink / raw)
  To: linux-arm-kernel

This is the start of porting PAX_USERCOPY into the mainline kernel. This
is the first set of features, controlled by CONFIG_HARDENED_USERCOPY. The
work is based on code by PaX Team and Brad Spengler, and an earlier port
from Casey Schaufler. Additional non-slab page tests are from Rik van Riel.

This patch contains the logic for validating several conditions when
performing copy_to_user() and copy_from_user() on the kernel object
being copied to/from:
- address range doesn't wrap around
- address range isn't NULL or zero-allocated (with a non-zero copy size)
- if on the slab allocator:
  - copy size must be less than or equal to the object's allocated size
    (when the check is implemented in the allocator, which appears in
    subsequent patches)
- otherwise, object must not span page allocations
- if on the stack:
  - object must not extend before/after the current process's stack
  - object must be contained by the current stack frame (when there is
    arch/build support for identifying stack frames)
- object must not overlap with kernel text

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/Kconfig                |   7 ++
 include/linux/slab.h        |  12 +++
 include/linux/thread_info.h |  15 +++
 mm/Makefile                 |   4 +
 mm/usercopy.c               | 219 ++++++++++++++++++++++++++++++++++++++++++++
 security/Kconfig            |  27 ++++++
 6 files changed, 284 insertions(+)
 create mode 100644 mm/usercopy.c

diff --git a/arch/Kconfig b/arch/Kconfig
index 5e2776562035..195ee4cc939a 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -433,6 +433,13 @@ config HAVE_ARCH_WITHIN_STACK_FRAMES
 	  and similar) by implementing an inline arch_within_stack_frames(),
 	  which is used by CONFIG_HARDENED_USERCOPY.
 
+config HAVE_ARCH_LINEAR_KERNEL_MAPPING
+	bool
+	help
+	  An architecture should select this if it has a secondary linear
+	  mapping of the kernel text. This is used to verify that kernel
+	  text exposures are not visible under CONFIG_HARDENED_USERCOPY.
+
 config HAVE_CONTEXT_TRACKING
 	bool
 	help
diff --git a/include/linux/slab.h b/include/linux/slab.h
index aeb3e6d00a66..96a16a3fb7cb 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -155,6 +155,18 @@ void kfree(const void *);
 void kzfree(const void *);
 size_t ksize(const void *);
 
+#ifdef CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR
+const char *__check_heap_object(const void *ptr, unsigned long n,
+				struct page *page);
+#else
+static inline const char *__check_heap_object(const void *ptr,
+					      unsigned long n,
+					      struct page *page)
+{
+	return NULL;
+}
+#endif
+
 /*
  * Some archs want to perform DMA into kmalloc caches and need a guaranteed
  * alignment larger than the alignment of a 64-bit integer.
diff --git a/include/linux/thread_info.h b/include/linux/thread_info.h
index 3d5c80b4391d..f24b99eac969 100644
--- a/include/linux/thread_info.h
+++ b/include/linux/thread_info.h
@@ -155,6 +155,21 @@ static inline int arch_within_stack_frames(const void * const stack,
 }
 #endif
 
+#ifdef CONFIG_HARDENED_USERCOPY
+extern void __check_object_size(const void *ptr, unsigned long n,
+					bool to_user);
+
+static inline void check_object_size(const void *ptr, unsigned long n,
+				     bool to_user)
+{
+	__check_object_size(ptr, n, to_user);
+}
+#else
+static inline void check_object_size(const void *ptr, unsigned long n,
+				     bool to_user)
+{ }
+#endif /* CONFIG_HARDENED_USERCOPY */
+
 #endif	/* __KERNEL__ */
 
 #endif /* _LINUX_THREAD_INFO_H */
diff --git a/mm/Makefile b/mm/Makefile
index 78c6f7dedb83..32d37247c7e5 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -21,6 +21,9 @@ KCOV_INSTRUMENT_memcontrol.o := n
 KCOV_INSTRUMENT_mmzone.o := n
 KCOV_INSTRUMENT_vmstat.o := n
 
+# Since __builtin_frame_address does work as used, disable the warning.
+CFLAGS_usercopy.o += $(call cc-disable-warning, frame-address)
+
 mmu-y			:= nommu.o
 mmu-$(CONFIG_MMU)	:= gup.o highmem.o memory.o mincore.o \
 			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
@@ -99,3 +102,4 @@ obj-$(CONFIG_USERFAULTFD) += userfaultfd.o
 obj-$(CONFIG_IDLE_PAGE_TRACKING) += page_idle.o
 obj-$(CONFIG_FRAME_VECTOR) += frame_vector.o
 obj-$(CONFIG_DEBUG_PAGE_REF) += debug_page_ref.o
+obj-$(CONFIG_HARDENED_USERCOPY) += usercopy.o
diff --git a/mm/usercopy.c b/mm/usercopy.c
new file mode 100644
index 000000000000..4161a1fb1909
--- /dev/null
+++ b/mm/usercopy.c
@@ -0,0 +1,219 @@
+/*
+ * This implements the various checks for CONFIG_HARDENED_USERCOPY*,
+ * which are designed to protect kernel memory from needless exposure
+ * and overwrite under many unintended conditions. This code is based
+ * on PAX_USERCOPY, which is:
+ *
+ * Copyright (C) 2001-2016 PaX Team, Bradley Spengler, Open Source
+ * Security Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/mm.h>
+#include <linux/slab.h>
+#include <asm/sections.h>
+
+/*
+ * Checks if a given pointer and length is contained by the current
+ * stack frame (if possible).
+ *
+ *	0: not at all on the stack
+ *	1: fully within a valid stack frame
+ *	2: fully on the stack (when can't do frame-checking)
+ *	-1: error condition (invalid stack position or bad stack frame)
+ */
+static noinline int check_stack_object(const void *obj, unsigned long len)
+{
+	const void * const stack = task_stack_page(current);
+	const void * const stackend = stack + THREAD_SIZE;
+	int ret;
+
+	/* Object is not on the stack at all. */
+	if (obj + len <= stack || stackend <= obj)
+		return 0;
+
+	/*
+	 * Reject: object partially overlaps the stack (passing the
+	 * check above means at least one end is within the stack,
+	 * so if this check fails, the other end is outside the stack).
+	 */
+	if (obj < stack || stackend < obj + len)
+		return -1;
+
+	/* Check if object is safely within a valid frame. */
+	ret = arch_within_stack_frames(stack, stackend, obj, len);
+	if (ret)
+		return ret;
+
+	return 2;
+}
+
+static void report_usercopy(const void *ptr, unsigned long len,
+			    bool to_user, const char *type)
+{
+	pr_emerg("kernel memory %s attempt detected %s %p (%s) (%lu bytes)\n",
+		to_user ? "exposure" : "overwrite",
+		to_user ? "from" : "to", ptr, type ? : "unknown", len);
+	dump_stack();
+	do_group_exit(SIGKILL);
+}
+
+/* Returns true if any portion of [ptr,ptr+n) overlaps with [low,high). */
+static bool overlaps(const void *ptr, unsigned long n, unsigned long low,
+		     unsigned long high)
+{
+	unsigned long check_low = (uintptr_t)ptr;
+	unsigned long check_high = check_low + n;
+
+	/* Does not overlap if entirely above or entirely below. */
+	if (check_low >= high || check_high < low)
+		return false;
+
+	return true;
+}
+
+/* Is this address range in the kernel text area? */
+static inline const char *check_kernel_text_object(const void *ptr,
+						   unsigned long n)
+{
+	unsigned long textlow = (unsigned long)_stext;
+	unsigned long texthigh = (unsigned long)_etext;
+
+	if (overlaps(ptr, n, textlow, texthigh))
+		return "<kernel text>";
+
+#ifdef CONFIG_HAVE_ARCH_LINEAR_KERNEL_MAPPING
+	/* Check against linear mapping as well. */
+	if (overlaps(ptr, n, (unsigned long)__va(__pa(textlow)),
+		     (unsigned long)__va(__pa(texthigh))))
+		return "<linear kernel text>";
+#endif
+
+	return NULL;
+}
+
+static inline const char *check_bogus_address(const void *ptr, unsigned long n)
+{
+	/* Reject if object wraps past end of memory. */
+	if (ptr + n < ptr)
+		return "<wrapped address>";
+
+	/* Reject if NULL or ZERO-allocation. */
+	if (ZERO_OR_NULL_PTR(ptr))
+		return "<null>";
+
+	return NULL;
+}
+
+static inline const char *check_heap_object(const void *ptr, unsigned long n,
+					    bool to_user)
+{
+	struct page *page, *endpage;
+	const void *end = ptr + n - 1;
+
+	if (!virt_addr_valid(ptr))
+		return NULL;
+
+	page = virt_to_head_page(ptr);
+
+	/* Check slab allocator for flags and size. */
+	if (PageSlab(page))
+		return __check_heap_object(ptr, n, page);
+
+	/*
+	 * Sometimes the kernel data regions are not marked Reserved (see
+	 * check below). And sometimes [_sdata,_edata) does not cover
+	 * rodata and/or bss, so check each range explicitly.
+	 */
+
+	/* Allow reads of kernel rodata region (if not marked as Reserved). */
+	if (ptr >= (const void *)__start_rodata &&
+	    end <= (const void *)__end_rodata) {
+		if (!to_user)
+			return "<rodata>";
+		return NULL;
+	}
+
+	/* Allow kernel data region (if not marked as Reserved). */
+	if (ptr >= (const void *)_sdata && end <= (const void *)_edata)
+		return NULL;
+
+	/* Allow kernel bss region (if not marked as Reserved). */
+	if (ptr >= (const void *)__bss_start &&
+	    end <= (const void *)__bss_stop)
+		return NULL;
+
+	/* Is the object wholly within one base page? */
+	if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
+		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
+		return NULL;
+
+	/* Allow if start and end are inside the same compound page. */
+	endpage = virt_to_head_page(end);
+	if (likely(endpage == page))
+		return NULL;
+
+	/* Allow special areas, device memory, and sometimes kernel data. */
+	if (PageReserved(page) && PageReserved(endpage))
+		return NULL;
+
+	/* Uh oh. The "object" spans several independently allocated pages. */
+	return "<spans multiple pages>";
+}
+
+/*
+ * Validates that the given object is one of:
+ * - known safe heap object
+ * - known safe stack object
+ * - not in kernel text
+ */
+void __check_object_size(const void *ptr, unsigned long n, bool to_user)
+{
+	const char *err;
+
+	/* Skip all tests if size is zero. */
+	if (!n)
+		return;
+
+	/* Check for invalid addresses. */
+	err = check_bogus_address(ptr, n);
+	if (err)
+		goto report;
+
+	/* Check for bad heap object. */
+	err = check_heap_object(ptr, n, to_user);
+	if (err)
+		goto report;
+
+	/* Check for bad stack object. */
+	switch (check_stack_object(ptr, n)) {
+	case 0:
+		/* Object is not touching the current process stack. */
+		break;
+	case 1:
+	case 2:
+		/*
+		 * Object is either in the correct frame (when it
+		 * is possible to check) or just generally on the
+		 * process stack (when frame checking not available).
+		 */
+		return;
+	default:
+		err = "<process stack>";
+		goto report;
+	}
+
+	/* Check for object in kernel to avoid text exposure. */
+	err = check_kernel_text_object(ptr, n);
+	if (!err)
+		return;
+
+report:
+	report_usercopy(ptr, n, to_user, err);
+}
+EXPORT_SYMBOL(__check_object_size);
diff --git a/security/Kconfig b/security/Kconfig
index 176758cdfa57..63340ad0b9f9 100644
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -118,6 +118,33 @@ config LSM_MMAP_MIN_ADDR
 	  this low address space will need the permission specific to the
 	  systems running LSM.
 
+config HAVE_HARDENED_USERCOPY_ALLOCATOR
+	bool
+	help
+	  The heap allocator implements __check_heap_object() for
+	  validating memory ranges against heap object sizes in
+	  support of CONFIG_HARDENED_USERCOPY.
+
+config HAVE_ARCH_HARDENED_USERCOPY
+	bool
+	help
+	  The architecture supports CONFIG_HARDENED_USERCOPY by
+	  calling check_object_size() just before performing the
+	  userspace copies in the low level implementation of
+	  copy_to_user() and copy_from_user().
+
+config HARDENED_USERCOPY
+	bool "Harden memory copies between kernel and userspace"
+	depends on HAVE_ARCH_HARDENED_USERCOPY
+	help
+	  This option checks for obviously wrong memory regions when
+	  copying memory to/from the kernel (via copy_to_user() and
+	  copy_from_user() functions) by rejecting memory ranges that
+	  are larger than the specified heap object, span multiple
+	  separately allocated pages, are not on the process stack,
+	  or are part of the kernel text. This kills entire classes
+	  of heap overflow exploits and similar kernel memory exposures.
+
 source security/selinux/Kconfig
 source security/smack/Kconfig
 source security/tomoyo/Kconfig
-- 
2.7.4
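
(For illustration only, not part of the posted patch: a minimal sketch of
the heap case that check_heap_object() and the allocator hook
__check_heap_object() above are meant to catch. The structure, function,
and length handling are hypothetical; the allocator-side size check
itself lands in patches 10 and 11.)

#include <linux/types.h>
#include <linux/slab.h>
#include <linux/uaccess.h>
#include <linux/errno.h>

struct example_state {		/* hypothetical driver state */
	u32 flags;
	u8  secret[60];
};

static long example_get_state(struct example_state *state,
			      void __user *ubuf, size_t len)
{
	/*
	 * BUG: 'len' is user-controlled and never clamped to
	 * sizeof(*state). If 'state' came from kmalloc(), a copy larger
	 * than the allocated object is rejected by the heap check
	 * instead of leaking adjacent slab memory.
	 */
	if (copy_to_user(ubuf, state, len))
		return -EFAULT;
	return 0;
}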

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [kernel-hardening] [PATCH v2 02/11] mm: Hardened usercopy
@ 2016-07-13 21:55   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

This is the start of porting PAX_USERCOPY into the mainline kernel. This
is the first set of features, controlled by CONFIG_HARDENED_USERCOPY. The
work is based on code by PaX Team and Brad Spengler, and an earlier port
from Casey Schaufler. Additional non-slab page tests are from Rik van Riel.

This patch contains the logic for validating several conditions when
performing copy_to_user() and copy_from_user() on the kernel object
being copied to/from:
- address range doesn't wrap around
- address range isn't NULL or zero-allocated (with a non-zero copy size)
- if on the slab allocator:
  - copy size must not exceed the object's allocated size (the
    allocator-side check appears in subsequent patches)
- otherwise, object must not span page allocations
- if on the stack
  - object must not extend before/after the current process stack
  - object must be contained by the current stack frame (when there is
    arch/build support for identifying stack frames; a sketch of this
    case follows the list)
- object must not overlap with kernel text
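
(For illustration only, not part of this patch: a minimal sketch of the
stack case above. The function and buffer size are hypothetical; with
CONFIG_HARDENED_USERCOPY enabled and an arch hooked up as in the later
patches, the oversized copy below is rejected and the caller is killed.)

#include <linux/types.h>
#include <linux/string.h>
#include <linux/uaccess.h>
#include <linux/errno.h>

static long example_read_cfg(void __user *ubuf, size_t len)
{
	char cfg[64];		/* lives in the current stack frame */

	memset(cfg, 0, sizeof(cfg));

	/*
	 * BUG: 'len' is user-controlled and never clamped to sizeof(cfg).
	 * check_object_size() (called from the arch's copy_to_user())
	 * rejects a copy that extends beyond the current stack frame, or,
	 * without arch frame checking, beyond the process stack.
	 */
	if (copy_to_user(ubuf, cfg, len))
		return -EFAULT;
	return 0;
}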

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/Kconfig                |   7 ++
 include/linux/slab.h        |  12 +++
 include/linux/thread_info.h |  15 +++
 mm/Makefile                 |   4 +
 mm/usercopy.c               | 219 ++++++++++++++++++++++++++++++++++++++++++++
 security/Kconfig            |  27 ++++++
 6 files changed, 284 insertions(+)
 create mode 100644 mm/usercopy.c

diff --git a/arch/Kconfig b/arch/Kconfig
index 5e2776562035..195ee4cc939a 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -433,6 +433,13 @@ config HAVE_ARCH_WITHIN_STACK_FRAMES
 	  and similar) by implementing an inline arch_within_stack_frames(),
 	  which is used by CONFIG_HARDENED_USERCOPY.
 
+config HAVE_ARCH_LINEAR_KERNEL_MAPPING
+	bool
+	help
+	  An architecture should select this if it has a secondary linear
+	  mapping of the kernel text. This is used to verify that kernel
+	  text exposures are not visible under CONFIG_HARDENED_USERCOPY.
+
 config HAVE_CONTEXT_TRACKING
 	bool
 	help
diff --git a/include/linux/slab.h b/include/linux/slab.h
index aeb3e6d00a66..96a16a3fb7cb 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -155,6 +155,18 @@ void kfree(const void *);
 void kzfree(const void *);
 size_t ksize(const void *);
 
+#ifdef CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR
+const char *__check_heap_object(const void *ptr, unsigned long n,
+				struct page *page);
+#else
+static inline const char *__check_heap_object(const void *ptr,
+					      unsigned long n,
+					      struct page *page)
+{
+	return NULL;
+}
+#endif
+
 /*
  * Some archs want to perform DMA into kmalloc caches and need a guaranteed
  * alignment larger than the alignment of a 64-bit integer.
diff --git a/include/linux/thread_info.h b/include/linux/thread_info.h
index 3d5c80b4391d..f24b99eac969 100644
--- a/include/linux/thread_info.h
+++ b/include/linux/thread_info.h
@@ -155,6 +155,21 @@ static inline int arch_within_stack_frames(const void * const stack,
 }
 #endif
 
+#ifdef CONFIG_HARDENED_USERCOPY
+extern void __check_object_size(const void *ptr, unsigned long n,
+					bool to_user);
+
+static inline void check_object_size(const void *ptr, unsigned long n,
+				     bool to_user)
+{
+	__check_object_size(ptr, n, to_user);
+}
+#else
+static inline void check_object_size(const void *ptr, unsigned long n,
+				     bool to_user)
+{ }
+#endif /* CONFIG_HARDENED_USERCOPY */
+
 #endif	/* __KERNEL__ */
 
 #endif /* _LINUX_THREAD_INFO_H */
diff --git a/mm/Makefile b/mm/Makefile
index 78c6f7dedb83..32d37247c7e5 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -21,6 +21,9 @@ KCOV_INSTRUMENT_memcontrol.o := n
 KCOV_INSTRUMENT_mmzone.o := n
 KCOV_INSTRUMENT_vmstat.o := n
 
+# Since __builtin_frame_address does work as used, disable the warning.
+CFLAGS_usercopy.o += $(call cc-disable-warning, frame-address)
+
 mmu-y			:= nommu.o
 mmu-$(CONFIG_MMU)	:= gup.o highmem.o memory.o mincore.o \
 			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
@@ -99,3 +102,4 @@ obj-$(CONFIG_USERFAULTFD) += userfaultfd.o
 obj-$(CONFIG_IDLE_PAGE_TRACKING) += page_idle.o
 obj-$(CONFIG_FRAME_VECTOR) += frame_vector.o
 obj-$(CONFIG_DEBUG_PAGE_REF) += debug_page_ref.o
+obj-$(CONFIG_HARDENED_USERCOPY) += usercopy.o
diff --git a/mm/usercopy.c b/mm/usercopy.c
new file mode 100644
index 000000000000..4161a1fb1909
--- /dev/null
+++ b/mm/usercopy.c
@@ -0,0 +1,219 @@
+/*
+ * This implements the various checks for CONFIG_HARDENED_USERCOPY*,
+ * which are designed to protect kernel memory from needless exposure
+ * and overwrite under many unintended conditions. This code is based
+ * on PAX_USERCOPY, which is:
+ *
+ * Copyright (C) 2001-2016 PaX Team, Bradley Spengler, Open Source
+ * Security Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/mm.h>
+#include <linux/slab.h>
+#include <asm/sections.h>
+
+/*
+ * Checks if a given pointer and length is contained by the current
+ * stack frame (if possible).
+ *
+ *	0: not at all on the stack
+ *	1: fully within a valid stack frame
+ *	2: fully on the stack (when can't do frame-checking)
+ *	-1: error condition (invalid stack position or bad stack frame)
+ */
+static noinline int check_stack_object(const void *obj, unsigned long len)
+{
+	const void * const stack = task_stack_page(current);
+	const void * const stackend = stack + THREAD_SIZE;
+	int ret;
+
+	/* Object is not on the stack at all. */
+	if (obj + len <= stack || stackend <= obj)
+		return 0;
+
+	/*
+	 * Reject: object partially overlaps the stack (passing the
+	 * check above means at least one end is within the stack,
+	 * so if this check fails, the other end is outside the stack).
+	 */
+	if (obj < stack || stackend < obj + len)
+		return -1;
+
+	/* Check if object is safely within a valid frame. */
+	ret = arch_within_stack_frames(stack, stackend, obj, len);
+	if (ret)
+		return ret;
+
+	return 2;
+}
+
+static void report_usercopy(const void *ptr, unsigned long len,
+			    bool to_user, const char *type)
+{
+	pr_emerg("kernel memory %s attempt detected %s %p (%s) (%lu bytes)\n",
+		to_user ? "exposure" : "overwrite",
+		to_user ? "from" : "to", ptr, type ? : "unknown", len);
+	dump_stack();
+	do_group_exit(SIGKILL);
+}
+
+/* Returns true if any portion of [ptr,ptr+n) overlaps with [low,high). */
+static bool overlaps(const void *ptr, unsigned long n, unsigned long low,
+		     unsigned long high)
+{
+	unsigned long check_low = (uintptr_t)ptr;
+	unsigned long check_high = check_low + n;
+
+	/* Does not overlap if entirely above or entirely below. */
+	if (check_low >= high || check_high <= low)
+		return false;
+
+	return true;
+}
+
+/* Is this address range in the kernel text area? */
+static inline const char *check_kernel_text_object(const void *ptr,
+						   unsigned long n)
+{
+	unsigned long textlow = (unsigned long)_stext;
+	unsigned long texthigh = (unsigned long)_etext;
+
+	if (overlaps(ptr, n, textlow, texthigh))
+		return "<kernel text>";
+
+#ifdef CONFIG_HAVE_ARCH_LINEAR_KERNEL_MAPPING
+	/* Check against linear mapping as well. */
+	if (overlaps(ptr, n, (unsigned long)__va(__pa(textlow)),
+		     (unsigned long)__va(__pa(texthigh))))
+		return "<linear kernel text>";
+#endif
+
+	return NULL;
+}
+
+static inline const char *check_bogus_address(const void *ptr, unsigned long n)
+{
+	/* Reject if object wraps past end of memory. */
+	if (ptr + n < ptr)
+		return "<wrapped address>";
+
+	/* Reject if NULL or ZERO-allocation. */
+	if (ZERO_OR_NULL_PTR(ptr))
+		return "<null>";
+
+	return NULL;
+}
+
+static inline const char *check_heap_object(const void *ptr, unsigned long n,
+					    bool to_user)
+{
+	struct page *page, *endpage;
+	const void *end = ptr + n - 1;
+
+	if (!virt_addr_valid(ptr))
+		return NULL;
+
+	page = virt_to_head_page(ptr);
+
+	/* Check slab allocator for flags and size. */
+	if (PageSlab(page))
+		return __check_heap_object(ptr, n, page);
+
+	/*
+	 * Sometimes the kernel data regions are not marked Reserved (see
+	 * check below). And sometimes [_sdata,_edata) does not cover
+	 * rodata and/or bss, so check each range explicitly.
+	 */
+
+	/* Allow reads of kernel rodata region (if not marked as Reserved). */
+	if (ptr >= (const void *)__start_rodata &&
+	    end <= (const void *)__end_rodata) {
+		if (!to_user)
+			return "<rodata>";
+		return NULL;
+	}
+
+	/* Allow kernel data region (if not marked as Reserved). */
+	if (ptr >= (const void *)_sdata && end <= (const void *)_edata)
+		return NULL;
+
+	/* Allow kernel bss region (if not marked as Reserved). */
+	if (ptr >= (const void *)__bss_start &&
+	    end <= (const void *)__bss_stop)
+		return NULL;
+
+	/* Is the object wholly within one base page? */
+	if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
+		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
+		return NULL;
+
+	/* Allow if start and end are inside the same compound page. */
+	endpage = virt_to_head_page(end);
+	if (likely(endpage == page))
+		return NULL;
+
+	/* Allow special areas, device memory, and sometimes kernel data. */
+	if (PageReserved(page) && PageReserved(endpage))
+		return NULL;
+
+	/* Uh oh. The "object" spans several independently allocated pages. */
+	return "<spans multiple pages>";
+}
+
+/*
+ * Validates that the given object is one of:
+ * - known safe heap object
+ * - known safe stack object
+ * - not in kernel text
+ */
+void __check_object_size(const void *ptr, unsigned long n, bool to_user)
+{
+	const char *err;
+
+	/* Skip all tests if size is zero. */
+	if (!n)
+		return;
+
+	/* Check for invalid addresses. */
+	err = check_bogus_address(ptr, n);
+	if (err)
+		goto report;
+
+	/* Check for bad heap object. */
+	err = check_heap_object(ptr, n, to_user);
+	if (err)
+		goto report;
+
+	/* Check for bad stack object. */
+	switch (check_stack_object(ptr, n)) {
+	case 0:
+		/* Object is not touching the current process stack. */
+		break;
+	case 1:
+	case 2:
+		/*
+		 * Object is either in the correct frame (when it
+		 * is possible to check) or just generally on the
+		 * process stack (when frame checking not available).
+		 */
+		return;
+	default:
+		err = "<process stack>";
+		goto report;
+	}
+
+	/* Check for object in kernel to avoid text exposure. */
+	err = check_kernel_text_object(ptr, n);
+	if (!err)
+		return;
+
+report:
+	report_usercopy(ptr, n, to_user, err);
+}
+EXPORT_SYMBOL(__check_object_size);
diff --git a/security/Kconfig b/security/Kconfig
index 176758cdfa57..63340ad0b9f9 100644
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -118,6 +118,33 @@ config LSM_MMAP_MIN_ADDR
 	  this low address space will need the permission specific to the
 	  systems running LSM.
 
+config HAVE_HARDENED_USERCOPY_ALLOCATOR
+	bool
+	help
+	  The heap allocator implements __check_heap_object() for
+	  validating memory ranges against heap object sizes in
+	  support of CONFIG_HARDENED_USERCOPY.
+
+config HAVE_ARCH_HARDENED_USERCOPY
+	bool
+	help
+	  The architecture supports CONFIG_HARDENED_USERCOPY by
+	  calling check_object_size() just before performing the
+	  userspace copies in the low level implementation of
+	  copy_to_user() and copy_from_user().
+
+config HARDENED_USERCOPY
+	bool "Harden memory copies between kernel and userspace"
+	depends on HAVE_ARCH_HARDENED_USERCOPY
+	help
+	  This option checks for obviously wrong memory regions when
+	  copying memory to/from the kernel (via copy_to_user() and
+	  copy_from_user() functions) by rejecting memory ranges that
+	  are larger than the specified heap object, span multiple
+	  separately allocated pages, are not on the process stack,
+	  or are part of the kernel text. This kills entire classes
+	  of heap overflow exploits and similar kernel memory exposures.
+
 source security/selinux/Kconfig
 source security/smack/Kconfig
 source security/tomoyo/Kconfig
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 03/11] x86/uaccess: Enable hardened usercopy
  2016-07-13 21:55 ` Kees Cook
                     ` (3 preceding siblings ...)
  (?)
@ 2016-07-13 21:55   ` Kees Cook
  -1 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Enables CONFIG_HARDENED_USERCOPY checks on x86. This is done both in
copy_*_user() and __copy_*_user() because copy_*_user() actually calls
down to _copy_*_user() and not __copy_*_user().

Based on code from PaX and grsecurity.
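
(For illustration only, not part of this patch: the reason the inner
variants need their own check is that some callers perform their own
access_ok() and then call __copy_*_user() directly, so a check placed
only in copy_from_user()/copy_to_user() would never see them. The helper
below is hypothetical.)

#include <linux/uaccess.h>
#include <linux/errno.h>

/* Hypothetical fast path that bypasses copy_from_user(). */
static int example_fill(void *dst, const void __user *src, unsigned long n)
{
	if (!access_ok(VERIFY_READ, src, n))
		return -EFAULT;

	/*
	 * Goes straight to __copy_from_user(); the check_object_size()
	 * call added to the inner helper in this patch still validates
	 * 'dst' against the heap, stack, and kernel text bounds.
	 */
	return __copy_from_user(dst, src, n) ? -EFAULT : 0;
}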

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/Kconfig                  |  2 ++
 arch/x86/include/asm/uaccess.h    | 10 ++++++----
 arch/x86/include/asm/uaccess_32.h |  2 ++
 arch/x86/include/asm/uaccess_64.h |  2 ++
 4 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 4407f596b72c..39d89e058249 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -80,11 +80,13 @@ config X86
 	select HAVE_ALIGNED_STRUCT_PAGE		if SLUB
 	select HAVE_AOUT			if X86_32
 	select HAVE_ARCH_AUDITSYSCALL
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_HUGE_VMAP		if X86_64 || X86_PAE
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_KASAN			if X86_64 && SPARSEMEM_VMEMMAP
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_KMEMCHECK
+	select HAVE_ARCH_LINEAR_KERNEL_MAPPING	if X86_64
 	select HAVE_ARCH_MMAP_RND_BITS		if MMU
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if MMU && COMPAT
 	select HAVE_ARCH_SECCOMP_FILTER
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 2982387ba817..aa9cc58409c6 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -742,9 +742,10 @@ copy_from_user(void *to, const void __user *from, unsigned long n)
 	 * case, and do only runtime checking for non-constant sizes.
 	 */
 
-	if (likely(sz < 0 || sz >= n))
+	if (likely(sz < 0 || sz >= n)) {
+		check_object_size(to, n, false);
 		n = _copy_from_user(to, from, n);
-	else if(__builtin_constant_p(n))
+	} else if(__builtin_constant_p(n))
 		copy_from_user_overflow();
 	else
 		__copy_from_user_overflow(sz, n);
@@ -762,9 +763,10 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
 	might_fault();
 
 	/* See the comment in copy_from_user() above. */
-	if (likely(sz < 0 || sz >= n))
+	if (likely(sz < 0 || sz >= n)) {
+		check_object_size(from, n, true);
 		n = _copy_to_user(to, from, n);
-	else if(__builtin_constant_p(n))
+	} else if(__builtin_constant_p(n))
 		copy_to_user_overflow();
 	else
 		__copy_to_user_overflow(sz, n);
diff --git a/arch/x86/include/asm/uaccess_32.h b/arch/x86/include/asm/uaccess_32.h
index 4b32da24faaf..7d3bdd1ed697 100644
--- a/arch/x86/include/asm/uaccess_32.h
+++ b/arch/x86/include/asm/uaccess_32.h
@@ -37,6 +37,7 @@ unsigned long __must_check __copy_from_user_ll_nocache_nozero
 static __always_inline unsigned long __must_check
 __copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
 {
+	check_object_size(from, n, true);
 	return __copy_to_user_ll(to, from, n);
 }
 
@@ -95,6 +96,7 @@ static __always_inline unsigned long
 __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	might_fault();
+	check_object_size(to, n, false);
 	if (__builtin_constant_p(n)) {
 		unsigned long ret;
 
diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index 2eac2aa3e37f..673059a109fe 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -54,6 +54,7 @@ int __copy_from_user_nocheck(void *dst, const void __user *src, unsigned size)
 {
 	int ret = 0;
 
+	check_object_size(dst, size, false);
 	if (!__builtin_constant_p(size))
 		return copy_user_generic(dst, (__force void *)src, size);
 	switch (size) {
@@ -119,6 +120,7 @@ int __copy_to_user_nocheck(void __user *dst, const void *src, unsigned size)
 {
 	int ret = 0;
 
+	check_object_size(src, size, true);
 	if (!__builtin_constant_p(size))
 		return copy_user_generic((__force void *)dst, src, size);
 	switch (size) {
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 03/11] x86/uaccess: Enable hardened usercopy
@ 2016-07-13 21:55   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara

Enables CONFIG_HARDENED_USERCOPY checks on x86. This is done both in
copy_*_user() and __copy_*_user() because copy_*_user() actually calls
down to _copy_*_user() and not __copy_*_user().

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/Kconfig                  |  2 ++
 arch/x86/include/asm/uaccess.h    | 10 ++++++----
 arch/x86/include/asm/uaccess_32.h |  2 ++
 arch/x86/include/asm/uaccess_64.h |  2 ++
 4 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 4407f596b72c..39d89e058249 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -80,11 +80,13 @@ config X86
 	select HAVE_ALIGNED_STRUCT_PAGE		if SLUB
 	select HAVE_AOUT			if X86_32
 	select HAVE_ARCH_AUDITSYSCALL
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_HUGE_VMAP		if X86_64 || X86_PAE
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_KASAN			if X86_64 && SPARSEMEM_VMEMMAP
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_KMEMCHECK
+	select HAVE_ARCH_LINEAR_KERNEL_MAPPING	if X86_64
 	select HAVE_ARCH_MMAP_RND_BITS		if MMU
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if MMU && COMPAT
 	select HAVE_ARCH_SECCOMP_FILTER
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 2982387ba817..aa9cc58409c6 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -742,9 +742,10 @@ copy_from_user(void *to, const void __user *from, unsigned long n)
 	 * case, and do only runtime checking for non-constant sizes.
 	 */
 
-	if (likely(sz < 0 || sz >= n))
+	if (likely(sz < 0 || sz >= n)) {
+		check_object_size(to, n, false);
 		n = _copy_from_user(to, from, n);
-	else if(__builtin_constant_p(n))
+	} else if(__builtin_constant_p(n))
 		copy_from_user_overflow();
 	else
 		__copy_from_user_overflow(sz, n);
@@ -762,9 +763,10 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
 	might_fault();
 
 	/* See the comment in copy_from_user() above. */
-	if (likely(sz < 0 || sz >= n))
+	if (likely(sz < 0 || sz >= n)) {
+		check_object_size(from, n, true);
 		n = _copy_to_user(to, from, n);
-	else if(__builtin_constant_p(n))
+	} else if(__builtin_constant_p(n))
 		copy_to_user_overflow();
 	else
 		__copy_to_user_overflow(sz, n);
diff --git a/arch/x86/include/asm/uaccess_32.h b/arch/x86/include/asm/uaccess_32.h
index 4b32da24faaf..7d3bdd1ed697 100644
--- a/arch/x86/include/asm/uaccess_32.h
+++ b/arch/x86/include/asm/uaccess_32.h
@@ -37,6 +37,7 @@ unsigned long __must_check __copy_from_user_ll_nocache_nozero
 static __always_inline unsigned long __must_check
 __copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
 {
+	check_object_size(from, n, true);
 	return __copy_to_user_ll(to, from, n);
 }
 
@@ -95,6 +96,7 @@ static __always_inline unsigned long
 __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	might_fault();
+	check_object_size(to, n, false);
 	if (__builtin_constant_p(n)) {
 		unsigned long ret;
 
diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index 2eac2aa3e37f..673059a109fe 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -54,6 +54,7 @@ int __copy_from_user_nocheck(void *dst, const void __user *src, unsigned size)
 {
 	int ret = 0;
 
+	check_object_size(dst, size, false);
 	if (!__builtin_constant_p(size))
 		return copy_user_generic(dst, (__force void *)src, size);
 	switch (size) {
@@ -119,6 +120,7 @@ int __copy_to_user_nocheck(void __user *dst, const void *src, unsigned size)
 {
 	int ret = 0;
 
+	check_object_size(src, size, true);
 	if (!__builtin_constant_p(size))
 		return copy_user_generic((__force void *)dst, src, size);
 	switch (size) {
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 03/11] x86/uaccess: Enable hardened usercopy
@ 2016-07-13 21:55   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Enables CONFIG_HARDENED_USERCOPY checks on x86. This is done both in
copy_*_user() and __copy_*_user() because copy_*_user() actually calls
down to _copy_*_user() and not __copy_*_user().

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/Kconfig                  |  2 ++
 arch/x86/include/asm/uaccess.h    | 10 ++++++----
 arch/x86/include/asm/uaccess_32.h |  2 ++
 arch/x86/include/asm/uaccess_64.h |  2 ++
 4 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 4407f596b72c..39d89e058249 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -80,11 +80,13 @@ config X86
 	select HAVE_ALIGNED_STRUCT_PAGE		if SLUB
 	select HAVE_AOUT			if X86_32
 	select HAVE_ARCH_AUDITSYSCALL
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_HUGE_VMAP		if X86_64 || X86_PAE
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_KASAN			if X86_64 && SPARSEMEM_VMEMMAP
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_KMEMCHECK
+	select HAVE_ARCH_LINEAR_KERNEL_MAPPING	if X86_64
 	select HAVE_ARCH_MMAP_RND_BITS		if MMU
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if MMU && COMPAT
 	select HAVE_ARCH_SECCOMP_FILTER
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 2982387ba817..aa9cc58409c6 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -742,9 +742,10 @@ copy_from_user(void *to, const void __user *from, unsigned long n)
 	 * case, and do only runtime checking for non-constant sizes.
 	 */
 
-	if (likely(sz < 0 || sz >= n))
+	if (likely(sz < 0 || sz >= n)) {
+		check_object_size(to, n, false);
 		n = _copy_from_user(to, from, n);
-	else if(__builtin_constant_p(n))
+	} else if(__builtin_constant_p(n))
 		copy_from_user_overflow();
 	else
 		__copy_from_user_overflow(sz, n);
@@ -762,9 +763,10 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
 	might_fault();
 
 	/* See the comment in copy_from_user() above. */
-	if (likely(sz < 0 || sz >= n))
+	if (likely(sz < 0 || sz >= n)) {
+		check_object_size(from, n, true);
 		n = _copy_to_user(to, from, n);
-	else if(__builtin_constant_p(n))
+	} else if(__builtin_constant_p(n))
 		copy_to_user_overflow();
 	else
 		__copy_to_user_overflow(sz, n);
diff --git a/arch/x86/include/asm/uaccess_32.h b/arch/x86/include/asm/uaccess_32.h
index 4b32da24faaf..7d3bdd1ed697 100644
--- a/arch/x86/include/asm/uaccess_32.h
+++ b/arch/x86/include/asm/uaccess_32.h
@@ -37,6 +37,7 @@ unsigned long __must_check __copy_from_user_ll_nocache_nozero
 static __always_inline unsigned long __must_check
 __copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
 {
+	check_object_size(from, n, true);
 	return __copy_to_user_ll(to, from, n);
 }
 
@@ -95,6 +96,7 @@ static __always_inline unsigned long
 __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	might_fault();
+	check_object_size(to, n, false);
 	if (__builtin_constant_p(n)) {
 		unsigned long ret;
 
diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index 2eac2aa3e37f..673059a109fe 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -54,6 +54,7 @@ int __copy_from_user_nocheck(void *dst, const void __user *src, unsigned size)
 {
 	int ret = 0;
 
+	check_object_size(dst, size, false);
 	if (!__builtin_constant_p(size))
 		return copy_user_generic(dst, (__force void *)src, size);
 	switch (size) {
@@ -119,6 +120,7 @@ int __copy_to_user_nocheck(void __user *dst, const void *src, unsigned size)
 {
 	int ret = 0;
 
+	check_object_size(src, size, true);
 	if (!__builtin_constant_p(size))
 		return copy_user_generic((__force void *)dst, src, size);
 	switch (size) {
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 03/11] x86/uaccess: Enable hardened usercopy
@ 2016-07-13 21:55   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Enables CONFIG_HARDENED_USERCOPY checks on x86. This is done both in
copy_*_user() and __copy_*_user() because copy_*_user() actually calls
down to _copy_*_user() and not __copy_*_user().

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/Kconfig                  |  2 ++
 arch/x86/include/asm/uaccess.h    | 10 ++++++----
 arch/x86/include/asm/uaccess_32.h |  2 ++
 arch/x86/include/asm/uaccess_64.h |  2 ++
 4 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 4407f596b72c..39d89e058249 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -80,11 +80,13 @@ config X86
 	select HAVE_ALIGNED_STRUCT_PAGE		if SLUB
 	select HAVE_AOUT			if X86_32
 	select HAVE_ARCH_AUDITSYSCALL
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_HUGE_VMAP		if X86_64 || X86_PAE
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_KASAN			if X86_64 && SPARSEMEM_VMEMMAP
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_KMEMCHECK
+	select HAVE_ARCH_LINEAR_KERNEL_MAPPING	if X86_64
 	select HAVE_ARCH_MMAP_RND_BITS		if MMU
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if MMU && COMPAT
 	select HAVE_ARCH_SECCOMP_FILTER
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 2982387ba817..aa9cc58409c6 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -742,9 +742,10 @@ copy_from_user(void *to, const void __user *from, unsigned long n)
 	 * case, and do only runtime checking for non-constant sizes.
 	 */
 
-	if (likely(sz < 0 || sz >= n))
+	if (likely(sz < 0 || sz >= n)) {
+		check_object_size(to, n, false);
 		n = _copy_from_user(to, from, n);
-	else if(__builtin_constant_p(n))
+	} else if(__builtin_constant_p(n))
 		copy_from_user_overflow();
 	else
 		__copy_from_user_overflow(sz, n);
@@ -762,9 +763,10 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
 	might_fault();
 
 	/* See the comment in copy_from_user() above. */
-	if (likely(sz < 0 || sz >= n))
+	if (likely(sz < 0 || sz >= n)) {
+		check_object_size(from, n, true);
 		n = _copy_to_user(to, from, n);
-	else if(__builtin_constant_p(n))
+	} else if(__builtin_constant_p(n))
 		copy_to_user_overflow();
 	else
 		__copy_to_user_overflow(sz, n);
diff --git a/arch/x86/include/asm/uaccess_32.h b/arch/x86/include/asm/uaccess_32.h
index 4b32da24faaf..7d3bdd1ed697 100644
--- a/arch/x86/include/asm/uaccess_32.h
+++ b/arch/x86/include/asm/uaccess_32.h
@@ -37,6 +37,7 @@ unsigned long __must_check __copy_from_user_ll_nocache_nozero
 static __always_inline unsigned long __must_check
 __copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
 {
+	check_object_size(from, n, true);
 	return __copy_to_user_ll(to, from, n);
 }
 
@@ -95,6 +96,7 @@ static __always_inline unsigned long
 __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	might_fault();
+	check_object_size(to, n, false);
 	if (__builtin_constant_p(n)) {
 		unsigned long ret;
 
diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index 2eac2aa3e37f..673059a109fe 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -54,6 +54,7 @@ int __copy_from_user_nocheck(void *dst, const void __user *src, unsigned size)
 {
 	int ret = 0;
 
+	check_object_size(dst, size, false);
 	if (!__builtin_constant_p(size))
 		return copy_user_generic(dst, (__force void *)src, size);
 	switch (size) {
@@ -119,6 +120,7 @@ int __copy_to_user_nocheck(void __user *dst, const void *src, unsigned size)
 {
 	int ret = 0;
 
+	check_object_size(src, size, true);
 	if (!__builtin_constant_p(size))
 		return copy_user_generic((__force void *)dst, src, size);
 	switch (size) {
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 03/11] x86/uaccess: Enable hardened usercopy
@ 2016-07-13 21:55   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:55 UTC (permalink / raw)
  To: linux-arm-kernel

Enables CONFIG_HARDENED_USERCOPY checks on x86. This is done both in
copy_*_user() and __copy_*_user() because copy_*_user() actually calls
down to _copy_*_user() and not __copy_*_user().

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/Kconfig                  |  2 ++
 arch/x86/include/asm/uaccess.h    | 10 ++++++----
 arch/x86/include/asm/uaccess_32.h |  2 ++
 arch/x86/include/asm/uaccess_64.h |  2 ++
 4 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 4407f596b72c..39d89e058249 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -80,11 +80,13 @@ config X86
 	select HAVE_ALIGNED_STRUCT_PAGE		if SLUB
 	select HAVE_AOUT			if X86_32
 	select HAVE_ARCH_AUDITSYSCALL
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_HUGE_VMAP		if X86_64 || X86_PAE
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_KASAN			if X86_64 && SPARSEMEM_VMEMMAP
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_KMEMCHECK
+	select HAVE_ARCH_LINEAR_KERNEL_MAPPING	if X86_64
 	select HAVE_ARCH_MMAP_RND_BITS		if MMU
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if MMU && COMPAT
 	select HAVE_ARCH_SECCOMP_FILTER
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 2982387ba817..aa9cc58409c6 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -742,9 +742,10 @@ copy_from_user(void *to, const void __user *from, unsigned long n)
 	 * case, and do only runtime checking for non-constant sizes.
 	 */
 
-	if (likely(sz < 0 || sz >= n))
+	if (likely(sz < 0 || sz >= n)) {
+		check_object_size(to, n, false);
 		n = _copy_from_user(to, from, n);
-	else if(__builtin_constant_p(n))
+	} else if(__builtin_constant_p(n))
 		copy_from_user_overflow();
 	else
 		__copy_from_user_overflow(sz, n);
@@ -762,9 +763,10 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
 	might_fault();
 
 	/* See the comment in copy_from_user() above. */
-	if (likely(sz < 0 || sz >= n))
+	if (likely(sz < 0 || sz >= n)) {
+		check_object_size(from, n, true);
 		n = _copy_to_user(to, from, n);
-	else if(__builtin_constant_p(n))
+	} else if(__builtin_constant_p(n))
 		copy_to_user_overflow();
 	else
 		__copy_to_user_overflow(sz, n);
diff --git a/arch/x86/include/asm/uaccess_32.h b/arch/x86/include/asm/uaccess_32.h
index 4b32da24faaf..7d3bdd1ed697 100644
--- a/arch/x86/include/asm/uaccess_32.h
+++ b/arch/x86/include/asm/uaccess_32.h
@@ -37,6 +37,7 @@ unsigned long __must_check __copy_from_user_ll_nocache_nozero
 static __always_inline unsigned long __must_check
 __copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
 {
+	check_object_size(from, n, true);
 	return __copy_to_user_ll(to, from, n);
 }
 
@@ -95,6 +96,7 @@ static __always_inline unsigned long
 __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	might_fault();
+	check_object_size(to, n, false);
 	if (__builtin_constant_p(n)) {
 		unsigned long ret;
 
diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index 2eac2aa3e37f..673059a109fe 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -54,6 +54,7 @@ int __copy_from_user_nocheck(void *dst, const void __user *src, unsigned size)
 {
 	int ret = 0;
 
+	check_object_size(dst, size, false);
 	if (!__builtin_constant_p(size))
 		return copy_user_generic(dst, (__force void *)src, size);
 	switch (size) {
@@ -119,6 +120,7 @@ int __copy_to_user_nocheck(void __user *dst, const void *src, unsigned size)
 {
 	int ret = 0;
 
+	check_object_size(src, size, true);
 	if (!__builtin_constant_p(size))
 		return copy_user_generic((__force void *)dst, src, size);
 	switch (size) {
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [kernel-hardening] [PATCH v2 03/11] x86/uaccess: Enable hardened usercopy
@ 2016-07-13 21:55   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Enables CONFIG_HARDENED_USERCOPY checks on x86. This is done both in
copy_*_user() and __copy_*_user() because copy_*_user() actually calls
down to _copy_*_user() and not __copy_*_user().

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/Kconfig                  |  2 ++
 arch/x86/include/asm/uaccess.h    | 10 ++++++----
 arch/x86/include/asm/uaccess_32.h |  2 ++
 arch/x86/include/asm/uaccess_64.h |  2 ++
 4 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 4407f596b72c..39d89e058249 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -80,11 +80,13 @@ config X86
 	select HAVE_ALIGNED_STRUCT_PAGE		if SLUB
 	select HAVE_AOUT			if X86_32
 	select HAVE_ARCH_AUDITSYSCALL
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_HUGE_VMAP		if X86_64 || X86_PAE
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_KASAN			if X86_64 && SPARSEMEM_VMEMMAP
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_KMEMCHECK
+	select HAVE_ARCH_LINEAR_KERNEL_MAPPING	if X86_64
 	select HAVE_ARCH_MMAP_RND_BITS		if MMU
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if MMU && COMPAT
 	select HAVE_ARCH_SECCOMP_FILTER
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 2982387ba817..aa9cc58409c6 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -742,9 +742,10 @@ copy_from_user(void *to, const void __user *from, unsigned long n)
 	 * case, and do only runtime checking for non-constant sizes.
 	 */
 
-	if (likely(sz < 0 || sz >= n))
+	if (likely(sz < 0 || sz >= n)) {
+		check_object_size(to, n, false);
 		n = _copy_from_user(to, from, n);
-	else if(__builtin_constant_p(n))
+	} else if(__builtin_constant_p(n))
 		copy_from_user_overflow();
 	else
 		__copy_from_user_overflow(sz, n);
@@ -762,9 +763,10 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
 	might_fault();
 
 	/* See the comment in copy_from_user() above. */
-	if (likely(sz < 0 || sz >= n))
+	if (likely(sz < 0 || sz >= n)) {
+		check_object_size(from, n, true);
 		n = _copy_to_user(to, from, n);
-	else if(__builtin_constant_p(n))
+	} else if(__builtin_constant_p(n))
 		copy_to_user_overflow();
 	else
 		__copy_to_user_overflow(sz, n);
diff --git a/arch/x86/include/asm/uaccess_32.h b/arch/x86/include/asm/uaccess_32.h
index 4b32da24faaf..7d3bdd1ed697 100644
--- a/arch/x86/include/asm/uaccess_32.h
+++ b/arch/x86/include/asm/uaccess_32.h
@@ -37,6 +37,7 @@ unsigned long __must_check __copy_from_user_ll_nocache_nozero
 static __always_inline unsigned long __must_check
 __copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
 {
+	check_object_size(from, n, true);
 	return __copy_to_user_ll(to, from, n);
 }
 
@@ -95,6 +96,7 @@ static __always_inline unsigned long
 __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	might_fault();
+	check_object_size(to, n, false);
 	if (__builtin_constant_p(n)) {
 		unsigned long ret;
 
diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index 2eac2aa3e37f..673059a109fe 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -54,6 +54,7 @@ int __copy_from_user_nocheck(void *dst, const void __user *src, unsigned size)
 {
 	int ret = 0;
 
+	check_object_size(dst, size, false);
 	if (!__builtin_constant_p(size))
 		return copy_user_generic(dst, (__force void *)src, size);
 	switch (size) {
@@ -119,6 +120,7 @@ int __copy_to_user_nocheck(void __user *dst, const void *src, unsigned size)
 {
 	int ret = 0;
 
+	check_object_size(src, size, true);
 	if (!__builtin_constant_p(size))
 		return copy_user_generic((__force void *)dst, src, size);
 	switch (size) {
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 04/11] ARM: uaccess: Enable hardened usercopy
  2016-07-13 21:55 ` Kees Cook
                     ` (3 preceding siblings ...)
  (?)
@ 2016-07-13 21:55   ` Kees Cook
  -1 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Enables CONFIG_HARDENED_USERCOPY checks on arm.

Based on code from PaX and grsecurity.
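
(For illustration only, not part of this patch: each of the per-arch
patches in this series follows the same pattern: select
HAVE_ARCH_HARDENED_USERCOPY in the arch Kconfig and call
check_object_size() on the kernel-side buffer just before the raw copy.
The arch name and raw-copy helper below are hypothetical.)

/* Hypothetical <asm/uaccess.h> excerpt for an architecture "foo". */
extern unsigned long foo_raw_copy_from_user(void *to,
					    const void __user *from,
					    unsigned long n);

static inline unsigned long __must_check
__copy_from_user(void *to, const void __user *from, unsigned long n)
{
	/* New with hardened usercopy: validate 'to' before copying. */
	check_object_size(to, n, false);
	return foo_raw_copy_from_user(to, from, n);
}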

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/arm/Kconfig               |  1 +
 arch/arm/include/asm/uaccess.h | 11 +++++++++--
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 90542db1220d..f56b29b3f57e 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -35,6 +35,7 @@ config ARM
 	select HARDIRQS_SW_RESEND
 	select HAVE_ARCH_AUDITSYSCALL if (AEABI && !OABI_COMPAT)
 	select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
 	select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
 	select HAVE_ARCH_MMAP_RND_BITS if MMU
diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index 35c9db857ebe..7fb59199c6bb 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -496,7 +496,10 @@ arm_copy_from_user(void *to, const void __user *from, unsigned long n);
 static inline unsigned long __must_check
 __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
-	unsigned int __ua_flags = uaccess_save_and_enable();
+	unsigned int __ua_flags;
+
+	check_object_size(to, n, false);
+	__ua_flags = uaccess_save_and_enable();
 	n = arm_copy_from_user(to, from, n);
 	uaccess_restore(__ua_flags);
 	return n;
@@ -511,11 +514,15 @@ static inline unsigned long __must_check
 __copy_to_user(void __user *to, const void *from, unsigned long n)
 {
 #ifndef CONFIG_UACCESS_WITH_MEMCPY
-	unsigned int __ua_flags = uaccess_save_and_enable();
+	unsigned int __ua_flags;
+
+	check_object_size(from, n, true);
+	__ua_flags = uaccess_save_and_enable();
 	n = arm_copy_to_user(to, from, n);
 	uaccess_restore(__ua_flags);
 	return n;
 #else
+	check_object_size(from, n, true);
 	return arm_copy_to_user(to, from, n);
 #endif
 }
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 04/11] ARM: uaccess: Enable hardened usercopy
@ 2016-07-13 21:55   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara

Enables CONFIG_HARDENED_USERCOPY checks on arm.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/arm/Kconfig               |  1 +
 arch/arm/include/asm/uaccess.h | 11 +++++++++--
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 90542db1220d..f56b29b3f57e 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -35,6 +35,7 @@ config ARM
 	select HARDIRQS_SW_RESEND
 	select HAVE_ARCH_AUDITSYSCALL if (AEABI && !OABI_COMPAT)
 	select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
 	select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
 	select HAVE_ARCH_MMAP_RND_BITS if MMU
diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index 35c9db857ebe..7fb59199c6bb 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -496,7 +496,10 @@ arm_copy_from_user(void *to, const void __user *from, unsigned long n);
 static inline unsigned long __must_check
 __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
-	unsigned int __ua_flags = uaccess_save_and_enable();
+	unsigned int __ua_flags;
+
+	check_object_size(to, n, false);
+	__ua_flags = uaccess_save_and_enable();
 	n = arm_copy_from_user(to, from, n);
 	uaccess_restore(__ua_flags);
 	return n;
@@ -511,11 +514,15 @@ static inline unsigned long __must_check
 __copy_to_user(void __user *to, const void *from, unsigned long n)
 {
 #ifndef CONFIG_UACCESS_WITH_MEMCPY
-	unsigned int __ua_flags = uaccess_save_and_enable();
+	unsigned int __ua_flags;
+
+	check_object_size(from, n, true);
+	__ua_flags = uaccess_save_and_enable();
 	n = arm_copy_to_user(to, from, n);
 	uaccess_restore(__ua_flags);
 	return n;
 #else
+	check_object_size(from, n, true);
 	return arm_copy_to_user(to, from, n);
 #endif
 }
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 04/11] ARM: uaccess: Enable hardened usercopy
@ 2016-07-13 21:55   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Enables CONFIG_HARDENED_USERCOPY checks on arm.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/arm/Kconfig               |  1 +
 arch/arm/include/asm/uaccess.h | 11 +++++++++--
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 90542db1220d..f56b29b3f57e 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -35,6 +35,7 @@ config ARM
 	select HARDIRQS_SW_RESEND
 	select HAVE_ARCH_AUDITSYSCALL if (AEABI && !OABI_COMPAT)
 	select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
 	select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
 	select HAVE_ARCH_MMAP_RND_BITS if MMU
diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index 35c9db857ebe..7fb59199c6bb 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -496,7 +496,10 @@ arm_copy_from_user(void *to, const void __user *from, unsigned long n);
 static inline unsigned long __must_check
 __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
-	unsigned int __ua_flags = uaccess_save_and_enable();
+	unsigned int __ua_flags;
+
+	check_object_size(to, n, false);
+	__ua_flags = uaccess_save_and_enable();
 	n = arm_copy_from_user(to, from, n);
 	uaccess_restore(__ua_flags);
 	return n;
@@ -511,11 +514,15 @@ static inline unsigned long __must_check
 __copy_to_user(void __user *to, const void *from, unsigned long n)
 {
 #ifndef CONFIG_UACCESS_WITH_MEMCPY
-	unsigned int __ua_flags = uaccess_save_and_enable();
+	unsigned int __ua_flags;
+
+	check_object_size(from, n, true);
+	__ua_flags = uaccess_save_and_enable();
 	n = arm_copy_to_user(to, from, n);
 	uaccess_restore(__ua_flags);
 	return n;
 #else
+	check_object_size(from, n, true);
 	return arm_copy_to_user(to, from, n);
 #endif
 }
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 04/11] ARM: uaccess: Enable hardened usercopy
@ 2016-07-13 21:55   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Enables CONFIG_HARDENED_USERCOPY checks on arm.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/arm/Kconfig               |  1 +
 arch/arm/include/asm/uaccess.h | 11 +++++++++--
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 90542db1220d..f56b29b3f57e 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -35,6 +35,7 @@ config ARM
 	select HARDIRQS_SW_RESEND
 	select HAVE_ARCH_AUDITSYSCALL if (AEABI && !OABI_COMPAT)
 	select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
 	select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
 	select HAVE_ARCH_MMAP_RND_BITS if MMU
diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index 35c9db857ebe..7fb59199c6bb 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -496,7 +496,10 @@ arm_copy_from_user(void *to, const void __user *from, unsigned long n);
 static inline unsigned long __must_check
 __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
-	unsigned int __ua_flags = uaccess_save_and_enable();
+	unsigned int __ua_flags;
+
+	check_object_size(to, n, false);
+	__ua_flags = uaccess_save_and_enable();
 	n = arm_copy_from_user(to, from, n);
 	uaccess_restore(__ua_flags);
 	return n;
@@ -511,11 +514,15 @@ static inline unsigned long __must_check
 __copy_to_user(void __user *to, const void *from, unsigned long n)
 {
 #ifndef CONFIG_UACCESS_WITH_MEMCPY
-	unsigned int __ua_flags = uaccess_save_and_enable();
+	unsigned int __ua_flags;
+
+	check_object_size(from, n, true);
+	__ua_flags = uaccess_save_and_enable();
 	n = arm_copy_to_user(to, from, n);
 	uaccess_restore(__ua_flags);
 	return n;
 #else
+	check_object_size(from, n, true);
 	return arm_copy_to_user(to, from, n);
 #endif
 }
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 04/11] ARM: uaccess: Enable hardened usercopy
@ 2016-07-13 21:55   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:55 UTC (permalink / raw)
  To: linux-arm-kernel

Enables CONFIG_HARDENED_USERCOPY checks on arm.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/arm/Kconfig               |  1 +
 arch/arm/include/asm/uaccess.h | 11 +++++++++--
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 90542db1220d..f56b29b3f57e 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -35,6 +35,7 @@ config ARM
 	select HARDIRQS_SW_RESEND
 	select HAVE_ARCH_AUDITSYSCALL if (AEABI && !OABI_COMPAT)
 	select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
 	select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
 	select HAVE_ARCH_MMAP_RND_BITS if MMU
diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index 35c9db857ebe..7fb59199c6bb 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -496,7 +496,10 @@ arm_copy_from_user(void *to, const void __user *from, unsigned long n);
 static inline unsigned long __must_check
 __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
-	unsigned int __ua_flags = uaccess_save_and_enable();
+	unsigned int __ua_flags;
+
+	check_object_size(to, n, false);
+	__ua_flags = uaccess_save_and_enable();
 	n = arm_copy_from_user(to, from, n);
 	uaccess_restore(__ua_flags);
 	return n;
@@ -511,11 +514,15 @@ static inline unsigned long __must_check
 __copy_to_user(void __user *to, const void *from, unsigned long n)
 {
 #ifndef CONFIG_UACCESS_WITH_MEMCPY
-	unsigned int __ua_flags = uaccess_save_and_enable();
+	unsigned int __ua_flags;
+
+	check_object_size(from, n, true);
+	__ua_flags = uaccess_save_and_enable();
 	n = arm_copy_to_user(to, from, n);
 	uaccess_restore(__ua_flags);
 	return n;
 #else
+	check_object_size(from, n, true);
 	return arm_copy_to_user(to, from, n);
 #endif
 }
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [kernel-hardening] [PATCH v2 04/11] ARM: uaccess: Enable hardened usercopy
@ 2016-07-13 21:55   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Enables CONFIG_HARDENED_USERCOPY checks on arm.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/arm/Kconfig               |  1 +
 arch/arm/include/asm/uaccess.h | 11 +++++++++--
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 90542db1220d..f56b29b3f57e 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -35,6 +35,7 @@ config ARM
 	select HARDIRQS_SW_RESEND
 	select HAVE_ARCH_AUDITSYSCALL if (AEABI && !OABI_COMPAT)
 	select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
 	select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
 	select HAVE_ARCH_MMAP_RND_BITS if MMU
diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index 35c9db857ebe..7fb59199c6bb 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -496,7 +496,10 @@ arm_copy_from_user(void *to, const void __user *from, unsigned long n);
 static inline unsigned long __must_check
 __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
-	unsigned int __ua_flags = uaccess_save_and_enable();
+	unsigned int __ua_flags;
+
+	check_object_size(to, n, false);
+	__ua_flags = uaccess_save_and_enable();
 	n = arm_copy_from_user(to, from, n);
 	uaccess_restore(__ua_flags);
 	return n;
@@ -511,11 +514,15 @@ static inline unsigned long __must_check
 __copy_to_user(void __user *to, const void *from, unsigned long n)
 {
 #ifndef CONFIG_UACCESS_WITH_MEMCPY
-	unsigned int __ua_flags = uaccess_save_and_enable();
+	unsigned int __ua_flags;
+
+	check_object_size(from, n, true);
+	__ua_flags = uaccess_save_and_enable();
 	n = arm_copy_to_user(to, from, n);
 	uaccess_restore(__ua_flags);
 	return n;
 #else
+	check_object_size(from, n, true);
 	return arm_copy_to_user(to, from, n);
 #endif
 }
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 05/11] arm64/uaccess: Enable hardened usercopy
@ 2016-07-13 21:55   ` Kees Cook
  -1 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Enables CONFIG_HARDENED_USERCOPY checks on arm64. As done by KASAN in -next,
renames the low-level functions to __arch_copy_*_user() so a static inline
can do additional work before the copy.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/arm64/Kconfig               |  2 ++
 arch/arm64/include/asm/uaccess.h | 16 ++++++++++++++--
 arch/arm64/kernel/arm64ksyms.c   |  4 ++--
 arch/arm64/lib/copy_from_user.S  |  4 ++--
 arch/arm64/lib/copy_to_user.S    |  4 ++--
 5 files changed, 22 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 5a0a691d4220..b771cd97f74b 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -51,10 +51,12 @@ config ARM64
 	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_BITREVERSE
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_HUGE_VMAP
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_KASAN if SPARSEMEM_VMEMMAP && !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
 	select HAVE_ARCH_KGDB
+	select HAVE_ARCH_LINEAR_KERNEL_MAPPING
 	select HAVE_ARCH_MMAP_RND_BITS
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
 	select HAVE_ARCH_SECCOMP_FILTER
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 9e397a542756..5d0dacdb695b 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -256,11 +256,23 @@ do {									\
 		-EFAULT;						\
 })
 
-extern unsigned long __must_check __copy_from_user(void *to, const void __user *from, unsigned long n);
-extern unsigned long __must_check __copy_to_user(void __user *to, const void *from, unsigned long n);
+extern unsigned long __must_check __arch_copy_from_user(void *to, const void __user *from, unsigned long n);
+extern unsigned long __must_check __arch_copy_to_user(void __user *to, const void *from, unsigned long n);
 extern unsigned long __must_check __copy_in_user(void __user *to, const void __user *from, unsigned long n);
 extern unsigned long __must_check __clear_user(void __user *addr, unsigned long n);
 
+static inline unsigned long __must_check __copy_from_user(void *to, const void __user *from, unsigned long n)
+{
+	check_object_size(to, n, false);
+	return __arch_copy_from_user(to, from, n);
+}
+
+static inline unsigned long __must_check __copy_to_user(void __user *to, const void *from, unsigned long n)
+{
+	check_object_size(from, n, true);
+	return __arch_copy_to_user(to, from, n);
+}
+
 static inline unsigned long __must_check copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	if (access_ok(VERIFY_READ, from, n))
diff --git a/arch/arm64/kernel/arm64ksyms.c b/arch/arm64/kernel/arm64ksyms.c
index 678f30b05a45..2dc44406a7ad 100644
--- a/arch/arm64/kernel/arm64ksyms.c
+++ b/arch/arm64/kernel/arm64ksyms.c
@@ -34,8 +34,8 @@ EXPORT_SYMBOL(copy_page);
 EXPORT_SYMBOL(clear_page);
 
 	/* user mem (segment) */
-EXPORT_SYMBOL(__copy_from_user);
-EXPORT_SYMBOL(__copy_to_user);
+EXPORT_SYMBOL(__arch_copy_from_user);
+EXPORT_SYMBOL(__arch_copy_to_user);
 EXPORT_SYMBOL(__clear_user);
 EXPORT_SYMBOL(__copy_in_user);
 
diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
index 17e8306dca29..0b90497d4424 100644
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -66,7 +66,7 @@
 	.endm
 
 end	.req	x5
-ENTRY(__copy_from_user)
+ENTRY(__arch_copy_from_user)
 ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	add	end, x0, x2
@@ -75,7 +75,7 @@ ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	mov	x0, #0				// Nothing to copy
 	ret
-ENDPROC(__copy_from_user)
+ENDPROC(__arch_copy_from_user)
 
 	.section .fixup,"ax"
 	.align	2
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index 21faae60f988..7a7efe255034 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -65,7 +65,7 @@
 	.endm
 
 end	.req	x5
-ENTRY(__copy_to_user)
+ENTRY(__arch_copy_to_user)
 ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	add	end, x0, x2
@@ -74,7 +74,7 @@ ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	mov	x0, #0
 	ret
-ENDPROC(__copy_to_user)
+ENDPROC(__arch_copy_to_user)
 
 	.section .fixup,"ax"
 	.align	2
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread
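
The renaming trick above generalizes: keep the raw copy routine doing only the copy, and let a thin static inline with the old name run the check first. Below is a small, self-contained sketch of that wrapping pattern; raw_copy and sanity_check are placeholder names standing in for __arch_copy_*_user() and check_object_size(), not the kernel's symbols.

/*
 * Sketch of the wrap-the-renamed-routine pattern (names are
 * illustrative assumptions).
 */
#include <stddef.h>
#include <string.h>
#include <assert.h>

/* Stand-in for the renamed low-level routine (e.g. __arch_copy_to_user). */
static size_t raw_copy(void *dst, const void *src, size_t n)
{
	memcpy(dst, src, n);
	return 0;		/* 0 bytes left uncopied */
}

/* Stand-in for check_object_size(): here just a cheap invariant. */
static void sanity_check(const void *obj, size_t n)
{
	assert(obj != NULL || n == 0);
}

/* The public entry point keeps its old name but gains the check. */
static inline size_t checked_copy(void *dst, const void *src, size_t n)
{
	sanity_check(src, n);
	return raw_copy(dst, src, n);
}

int main(void)
{
	char src[8] = "arm64", dst[8];

	return (int)checked_copy(dst, src, sizeof(src));
}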

* [PATCH v2 05/11] arm64/uaccess: Enable hardened usercopy
@ 2016-07-13 21:55   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara

Enables CONFIG_HARDENED_USERCOPY checks on arm64. As done by KASAN in -next,
renames the low-level functions to __arch_copy_*_user() so a static inline
can do additional work before the copy.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/arm64/Kconfig               |  2 ++
 arch/arm64/include/asm/uaccess.h | 16 ++++++++++++++--
 arch/arm64/kernel/arm64ksyms.c   |  4 ++--
 arch/arm64/lib/copy_from_user.S  |  4 ++--
 arch/arm64/lib/copy_to_user.S    |  4 ++--
 5 files changed, 22 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 5a0a691d4220..b771cd97f74b 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -51,10 +51,12 @@ config ARM64
 	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_BITREVERSE
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_HUGE_VMAP
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_KASAN if SPARSEMEM_VMEMMAP && !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
 	select HAVE_ARCH_KGDB
+	select HAVE_ARCH_LINEAR_KERNEL_MAPPING
 	select HAVE_ARCH_MMAP_RND_BITS
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
 	select HAVE_ARCH_SECCOMP_FILTER
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 9e397a542756..5d0dacdb695b 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -256,11 +256,23 @@ do {									\
 		-EFAULT;						\
 })
 
-extern unsigned long __must_check __copy_from_user(void *to, const void __user *from, unsigned long n);
-extern unsigned long __must_check __copy_to_user(void __user *to, const void *from, unsigned long n);
+extern unsigned long __must_check __arch_copy_from_user(void *to, const void __user *from, unsigned long n);
+extern unsigned long __must_check __arch_copy_to_user(void __user *to, const void *from, unsigned long n);
 extern unsigned long __must_check __copy_in_user(void __user *to, const void __user *from, unsigned long n);
 extern unsigned long __must_check __clear_user(void __user *addr, unsigned long n);
 
+static inline unsigned long __must_check __copy_from_user(void *to, const void __user *from, unsigned long n)
+{
+	check_object_size(to, n, false);
+	return __arch_copy_from_user(to, from, n);
+}
+
+static inline unsigned long __must_check __copy_to_user(void __user *to, const void *from, unsigned long n)
+{
+	check_object_size(from, n, true);
+	return __arch_copy_to_user(to, from, n);
+}
+
 static inline unsigned long __must_check copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	if (access_ok(VERIFY_READ, from, n))
diff --git a/arch/arm64/kernel/arm64ksyms.c b/arch/arm64/kernel/arm64ksyms.c
index 678f30b05a45..2dc44406a7ad 100644
--- a/arch/arm64/kernel/arm64ksyms.c
+++ b/arch/arm64/kernel/arm64ksyms.c
@@ -34,8 +34,8 @@ EXPORT_SYMBOL(copy_page);
 EXPORT_SYMBOL(clear_page);
 
 	/* user mem (segment) */
-EXPORT_SYMBOL(__copy_from_user);
-EXPORT_SYMBOL(__copy_to_user);
+EXPORT_SYMBOL(__arch_copy_from_user);
+EXPORT_SYMBOL(__arch_copy_to_user);
 EXPORT_SYMBOL(__clear_user);
 EXPORT_SYMBOL(__copy_in_user);
 
diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
index 17e8306dca29..0b90497d4424 100644
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -66,7 +66,7 @@
 	.endm
 
 end	.req	x5
-ENTRY(__copy_from_user)
+ENTRY(__arch_copy_from_user)
 ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	add	end, x0, x2
@@ -75,7 +75,7 @@ ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	mov	x0, #0				// Nothing to copy
 	ret
-ENDPROC(__copy_from_user)
+ENDPROC(__arch_copy_from_user)
 
 	.section .fixup,"ax"
 	.align	2
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index 21faae60f988..7a7efe255034 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -65,7 +65,7 @@
 	.endm
 
 end	.req	x5
-ENTRY(__copy_to_user)
+ENTRY(__arch_copy_to_user)
 ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	add	end, x0, x2
@@ -74,7 +74,7 @@ ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	mov	x0, #0
 	ret
-ENDPROC(__copy_to_user)
+ENDPROC(__arch_copy_to_user)
 
 	.section .fixup,"ax"
 	.align	2
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 05/11] arm64/uaccess: Enable hardened usercopy
@ 2016-07-13 21:55   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Enables CONFIG_HARDENED_USERCOPY checks on arm64. As done by KASAN in -next,
renames the low-level functions to __arch_copy_*_user() so a static inline
can do additional work before the copy.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/arm64/Kconfig               |  2 ++
 arch/arm64/include/asm/uaccess.h | 16 ++++++++++++++--
 arch/arm64/kernel/arm64ksyms.c   |  4 ++--
 arch/arm64/lib/copy_from_user.S  |  4 ++--
 arch/arm64/lib/copy_to_user.S    |  4 ++--
 5 files changed, 22 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 5a0a691d4220..b771cd97f74b 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -51,10 +51,12 @@ config ARM64
 	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_BITREVERSE
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_HUGE_VMAP
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_KASAN if SPARSEMEM_VMEMMAP && !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
 	select HAVE_ARCH_KGDB
+	select HAVE_ARCH_LINEAR_KERNEL_MAPPING
 	select HAVE_ARCH_MMAP_RND_BITS
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
 	select HAVE_ARCH_SECCOMP_FILTER
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 9e397a542756..5d0dacdb695b 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -256,11 +256,23 @@ do {									\
 		-EFAULT;						\
 })
 
-extern unsigned long __must_check __copy_from_user(void *to, const void __user *from, unsigned long n);
-extern unsigned long __must_check __copy_to_user(void __user *to, const void *from, unsigned long n);
+extern unsigned long __must_check __arch_copy_from_user(void *to, const void __user *from, unsigned long n);
+extern unsigned long __must_check __arch_copy_to_user(void __user *to, const void *from, unsigned long n);
 extern unsigned long __must_check __copy_in_user(void __user *to, const void __user *from, unsigned long n);
 extern unsigned long __must_check __clear_user(void __user *addr, unsigned long n);
 
+static inline unsigned long __must_check __copy_from_user(void *to, const void __user *from, unsigned long n)
+{
+	check_object_size(to, n, false);
+	return __arch_copy_from_user(to, from, n);
+}
+
+static inline unsigned long __must_check __copy_to_user(void __user *to, const void *from, unsigned long n)
+{
+	check_object_size(from, n, true);
+	return __arch_copy_to_user(to, from, n);
+}
+
 static inline unsigned long __must_check copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	if (access_ok(VERIFY_READ, from, n))
diff --git a/arch/arm64/kernel/arm64ksyms.c b/arch/arm64/kernel/arm64ksyms.c
index 678f30b05a45..2dc44406a7ad 100644
--- a/arch/arm64/kernel/arm64ksyms.c
+++ b/arch/arm64/kernel/arm64ksyms.c
@@ -34,8 +34,8 @@ EXPORT_SYMBOL(copy_page);
 EXPORT_SYMBOL(clear_page);
 
 	/* user mem (segment) */
-EXPORT_SYMBOL(__copy_from_user);
-EXPORT_SYMBOL(__copy_to_user);
+EXPORT_SYMBOL(__arch_copy_from_user);
+EXPORT_SYMBOL(__arch_copy_to_user);
 EXPORT_SYMBOL(__clear_user);
 EXPORT_SYMBOL(__copy_in_user);
 
diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
index 17e8306dca29..0b90497d4424 100644
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -66,7 +66,7 @@
 	.endm
 
 end	.req	x5
-ENTRY(__copy_from_user)
+ENTRY(__arch_copy_from_user)
 ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	add	end, x0, x2
@@ -75,7 +75,7 @@ ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	mov	x0, #0				// Nothing to copy
 	ret
-ENDPROC(__copy_from_user)
+ENDPROC(__arch_copy_from_user)
 
 	.section .fixup,"ax"
 	.align	2
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index 21faae60f988..7a7efe255034 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -65,7 +65,7 @@
 	.endm
 
 end	.req	x5
-ENTRY(__copy_to_user)
+ENTRY(__arch_copy_to_user)
 ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	add	end, x0, x2
@@ -74,7 +74,7 @@ ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	mov	x0, #0
 	ret
-ENDPROC(__copy_to_user)
+ENDPROC(__arch_copy_to_user)
 
 	.section .fixup,"ax"
 	.align	2
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 05/11] arm64/uaccess: Enable hardened usercopy
@ 2016-07-13 21:55   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Enables CONFIG_HARDENED_USERCOPY checks on arm64. As done by KASAN in -next,
renames the low-level functions to __arch_copy_*_user() so a static inline
can do additional work before the copy.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/arm64/Kconfig               |  2 ++
 arch/arm64/include/asm/uaccess.h | 16 ++++++++++++++--
 arch/arm64/kernel/arm64ksyms.c   |  4 ++--
 arch/arm64/lib/copy_from_user.S  |  4 ++--
 arch/arm64/lib/copy_to_user.S    |  4 ++--
 5 files changed, 22 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 5a0a691d4220..b771cd97f74b 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -51,10 +51,12 @@ config ARM64
 	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_BITREVERSE
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_HUGE_VMAP
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_KASAN if SPARSEMEM_VMEMMAP && !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
 	select HAVE_ARCH_KGDB
+	select HAVE_ARCH_LINEAR_KERNEL_MAPPING
 	select HAVE_ARCH_MMAP_RND_BITS
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
 	select HAVE_ARCH_SECCOMP_FILTER
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 9e397a542756..5d0dacdb695b 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -256,11 +256,23 @@ do {									\
 		-EFAULT;						\
 })
 
-extern unsigned long __must_check __copy_from_user(void *to, const void __user *from, unsigned long n);
-extern unsigned long __must_check __copy_to_user(void __user *to, const void *from, unsigned long n);
+extern unsigned long __must_check __arch_copy_from_user(void *to, const void __user *from, unsigned long n);
+extern unsigned long __must_check __arch_copy_to_user(void __user *to, const void *from, unsigned long n);
 extern unsigned long __must_check __copy_in_user(void __user *to, const void __user *from, unsigned long n);
 extern unsigned long __must_check __clear_user(void __user *addr, unsigned long n);
 
+static inline unsigned long __must_check __copy_from_user(void *to, const void __user *from, unsigned long n)
+{
+	check_object_size(to, n, false);
+	return __arch_copy_from_user(to, from, n);
+}
+
+static inline unsigned long __must_check __copy_to_user(void __user *to, const void *from, unsigned long n)
+{
+	check_object_size(from, n, true);
+	return __arch_copy_to_user(to, from, n);
+}
+
 static inline unsigned long __must_check copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	if (access_ok(VERIFY_READ, from, n))
diff --git a/arch/arm64/kernel/arm64ksyms.c b/arch/arm64/kernel/arm64ksyms.c
index 678f30b05a45..2dc44406a7ad 100644
--- a/arch/arm64/kernel/arm64ksyms.c
+++ b/arch/arm64/kernel/arm64ksyms.c
@@ -34,8 +34,8 @@ EXPORT_SYMBOL(copy_page);
 EXPORT_SYMBOL(clear_page);
 
 	/* user mem (segment) */
-EXPORT_SYMBOL(__copy_from_user);
-EXPORT_SYMBOL(__copy_to_user);
+EXPORT_SYMBOL(__arch_copy_from_user);
+EXPORT_SYMBOL(__arch_copy_to_user);
 EXPORT_SYMBOL(__clear_user);
 EXPORT_SYMBOL(__copy_in_user);
 
diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
index 17e8306dca29..0b90497d4424 100644
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -66,7 +66,7 @@
 	.endm
 
 end	.req	x5
-ENTRY(__copy_from_user)
+ENTRY(__arch_copy_from_user)
 ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	add	end, x0, x2
@@ -75,7 +75,7 @@ ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	mov	x0, #0				// Nothing to copy
 	ret
-ENDPROC(__copy_from_user)
+ENDPROC(__arch_copy_from_user)
 
 	.section .fixup,"ax"
 	.align	2
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index 21faae60f988..7a7efe255034 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -65,7 +65,7 @@
 	.endm
 
 end	.req	x5
-ENTRY(__copy_to_user)
+ENTRY(__arch_copy_to_user)
 ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	add	end, x0, x2
@@ -74,7 +74,7 @@ ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	mov	x0, #0
 	ret
-ENDPROC(__copy_to_user)
+ENDPROC(__arch_copy_to_user)
 
 	.section .fixup,"ax"
 	.align	2
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 05/11] arm64/uaccess: Enable hardened usercopy
@ 2016-07-13 21:55   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:55 UTC (permalink / raw)
  To: linux-arm-kernel

Enables CONFIG_HARDENED_USERCOPY checks on arm64. As done by KASAN in -next,
renames the low-level functions to __arch_copy_*_user() so a static inline
can do additional work before the copy.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/arm64/Kconfig               |  2 ++
 arch/arm64/include/asm/uaccess.h | 16 ++++++++++++++--
 arch/arm64/kernel/arm64ksyms.c   |  4 ++--
 arch/arm64/lib/copy_from_user.S  |  4 ++--
 arch/arm64/lib/copy_to_user.S    |  4 ++--
 5 files changed, 22 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 5a0a691d4220..b771cd97f74b 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -51,10 +51,12 @@ config ARM64
 	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_BITREVERSE
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_HUGE_VMAP
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_KASAN if SPARSEMEM_VMEMMAP && !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
 	select HAVE_ARCH_KGDB
+	select HAVE_ARCH_LINEAR_KERNEL_MAPPING
 	select HAVE_ARCH_MMAP_RND_BITS
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
 	select HAVE_ARCH_SECCOMP_FILTER
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 9e397a542756..5d0dacdb695b 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -256,11 +256,23 @@ do {									\
 		-EFAULT;						\
 })
 
-extern unsigned long __must_check __copy_from_user(void *to, const void __user *from, unsigned long n);
-extern unsigned long __must_check __copy_to_user(void __user *to, const void *from, unsigned long n);
+extern unsigned long __must_check __arch_copy_from_user(void *to, const void __user *from, unsigned long n);
+extern unsigned long __must_check __arch_copy_to_user(void __user *to, const void *from, unsigned long n);
 extern unsigned long __must_check __copy_in_user(void __user *to, const void __user *from, unsigned long n);
 extern unsigned long __must_check __clear_user(void __user *addr, unsigned long n);
 
+static inline unsigned long __must_check __copy_from_user(void *to, const void __user *from, unsigned long n)
+{
+	check_object_size(to, n, false);
+	return __arch_copy_from_user(to, from, n);
+}
+
+static inline unsigned long __must_check __copy_to_user(void __user *to, const void *from, unsigned long n)
+{
+	check_object_size(from, n, true);
+	return __arch_copy_to_user(to, from, n);
+}
+
 static inline unsigned long __must_check copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	if (access_ok(VERIFY_READ, from, n))
diff --git a/arch/arm64/kernel/arm64ksyms.c b/arch/arm64/kernel/arm64ksyms.c
index 678f30b05a45..2dc44406a7ad 100644
--- a/arch/arm64/kernel/arm64ksyms.c
+++ b/arch/arm64/kernel/arm64ksyms.c
@@ -34,8 +34,8 @@ EXPORT_SYMBOL(copy_page);
 EXPORT_SYMBOL(clear_page);
 
 	/* user mem (segment) */
-EXPORT_SYMBOL(__copy_from_user);
-EXPORT_SYMBOL(__copy_to_user);
+EXPORT_SYMBOL(__arch_copy_from_user);
+EXPORT_SYMBOL(__arch_copy_to_user);
 EXPORT_SYMBOL(__clear_user);
 EXPORT_SYMBOL(__copy_in_user);
 
diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
index 17e8306dca29..0b90497d4424 100644
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -66,7 +66,7 @@
 	.endm
 
 end	.req	x5
-ENTRY(__copy_from_user)
+ENTRY(__arch_copy_from_user)
 ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	add	end, x0, x2
@@ -75,7 +75,7 @@ ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	mov	x0, #0				// Nothing to copy
 	ret
-ENDPROC(__copy_from_user)
+ENDPROC(__arch_copy_from_user)
 
 	.section .fixup,"ax"
 	.align	2
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index 21faae60f988..7a7efe255034 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -65,7 +65,7 @@
 	.endm
 
 end	.req	x5
-ENTRY(__copy_to_user)
+ENTRY(__arch_copy_to_user)
 ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	add	end, x0, x2
@@ -74,7 +74,7 @@ ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	mov	x0, #0
 	ret
-ENDPROC(__copy_to_user)
+ENDPROC(__arch_copy_to_user)
 
 	.section .fixup,"ax"
 	.align	2
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [kernel-hardening] [PATCH v2 05/11] arm64/uaccess: Enable hardened usercopy
@ 2016-07-13 21:55   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Enables CONFIG_HARDENED_USERCOPY checks on arm64. As done by KASAN in -next,
renames the low-level functions to __arch_copy_*_user() so a static inline
can do additional work before the copy.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/arm64/Kconfig               |  2 ++
 arch/arm64/include/asm/uaccess.h | 16 ++++++++++++++--
 arch/arm64/kernel/arm64ksyms.c   |  4 ++--
 arch/arm64/lib/copy_from_user.S  |  4 ++--
 arch/arm64/lib/copy_to_user.S    |  4 ++--
 5 files changed, 22 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 5a0a691d4220..b771cd97f74b 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -51,10 +51,12 @@ config ARM64
 	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_BITREVERSE
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_HUGE_VMAP
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_KASAN if SPARSEMEM_VMEMMAP && !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
 	select HAVE_ARCH_KGDB
+	select HAVE_ARCH_LINEAR_KERNEL_MAPPING
 	select HAVE_ARCH_MMAP_RND_BITS
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
 	select HAVE_ARCH_SECCOMP_FILTER
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 9e397a542756..5d0dacdb695b 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -256,11 +256,23 @@ do {									\
 		-EFAULT;						\
 })
 
-extern unsigned long __must_check __copy_from_user(void *to, const void __user *from, unsigned long n);
-extern unsigned long __must_check __copy_to_user(void __user *to, const void *from, unsigned long n);
+extern unsigned long __must_check __arch_copy_from_user(void *to, const void __user *from, unsigned long n);
+extern unsigned long __must_check __arch_copy_to_user(void __user *to, const void *from, unsigned long n);
 extern unsigned long __must_check __copy_in_user(void __user *to, const void __user *from, unsigned long n);
 extern unsigned long __must_check __clear_user(void __user *addr, unsigned long n);
 
+static inline unsigned long __must_check __copy_from_user(void *to, const void __user *from, unsigned long n)
+{
+	check_object_size(to, n, false);
+	return __arch_copy_from_user(to, from, n);
+}
+
+static inline unsigned long __must_check __copy_to_user(void __user *to, const void *from, unsigned long n)
+{
+	check_object_size(from, n, true);
+	return __arch_copy_to_user(to, from, n);
+}
+
 static inline unsigned long __must_check copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	if (access_ok(VERIFY_READ, from, n))
diff --git a/arch/arm64/kernel/arm64ksyms.c b/arch/arm64/kernel/arm64ksyms.c
index 678f30b05a45..2dc44406a7ad 100644
--- a/arch/arm64/kernel/arm64ksyms.c
+++ b/arch/arm64/kernel/arm64ksyms.c
@@ -34,8 +34,8 @@ EXPORT_SYMBOL(copy_page);
 EXPORT_SYMBOL(clear_page);
 
 	/* user mem (segment) */
-EXPORT_SYMBOL(__copy_from_user);
-EXPORT_SYMBOL(__copy_to_user);
+EXPORT_SYMBOL(__arch_copy_from_user);
+EXPORT_SYMBOL(__arch_copy_to_user);
 EXPORT_SYMBOL(__clear_user);
 EXPORT_SYMBOL(__copy_in_user);
 
diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
index 17e8306dca29..0b90497d4424 100644
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -66,7 +66,7 @@
 	.endm
 
 end	.req	x5
-ENTRY(__copy_from_user)
+ENTRY(__arch_copy_from_user)
 ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	add	end, x0, x2
@@ -75,7 +75,7 @@ ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	mov	x0, #0				// Nothing to copy
 	ret
-ENDPROC(__copy_from_user)
+ENDPROC(__arch_copy_from_user)
 
 	.section .fixup,"ax"
 	.align	2
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index 21faae60f988..7a7efe255034 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -65,7 +65,7 @@
 	.endm
 
 end	.req	x5
-ENTRY(__copy_to_user)
+ENTRY(__arch_copy_to_user)
 ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	add	end, x0, x2
@@ -74,7 +74,7 @@ ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	mov	x0, #0
 	ret
-ENDPROC(__copy_to_user)
+ENDPROC(__arch_copy_to_user)
 
 	.section .fixup,"ax"
 	.align	2
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 06/11] ia64/uaccess: Enable hardened usercopy
@ 2016-07-13 21:55   ` Kees Cook
  -1 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Enables CONFIG_HARDENED_USERCOPY checks on ia64.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/ia64/Kconfig               |  1 +
 arch/ia64/include/asm/uaccess.h | 18 +++++++++++++++---
 2 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index f80758cb7157..32a87ef516a0 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -53,6 +53,7 @@ config IA64
 	select MODULES_USE_ELF_RELA
 	select ARCH_USE_CMPXCHG_LOCKREF
 	select HAVE_ARCH_AUDITSYSCALL
+	select HAVE_ARCH_HARDENED_USERCOPY
 	default y
 	help
 	  The Itanium Processor Family is Intel's 64-bit successor to
diff --git a/arch/ia64/include/asm/uaccess.h b/arch/ia64/include/asm/uaccess.h
index 2189d5ddc1ee..465c70982f40 100644
--- a/arch/ia64/include/asm/uaccess.h
+++ b/arch/ia64/include/asm/uaccess.h
@@ -241,12 +241,18 @@ extern unsigned long __must_check __copy_user (void __user *to, const void __use
 static inline unsigned long
 __copy_to_user (void __user *to, const void *from, unsigned long count)
 {
+	if (!__builtin_constant_p(count))
+		check_object_size(from, count, true);
+
 	return __copy_user(to, (__force void __user *) from, count);
 }
 
 static inline unsigned long
 __copy_from_user (void *to, const void __user *from, unsigned long count)
 {
+	if (!__builtin_constant_p(count))
+		check_object_size(to, count, false);
+
 	return __copy_user((__force void __user *) to, from, count);
 }
 
@@ -258,8 +264,11 @@ __copy_from_user (void *to, const void __user *from, unsigned long count)
 	const void *__cu_from = (from);							\
 	long __cu_len = (n);								\
 											\
-	if (__access_ok(__cu_to, __cu_len, get_fs()))					\
-		__cu_len = __copy_user(__cu_to, (__force void __user *) __cu_from, __cu_len);	\
+	if (__access_ok(__cu_to, __cu_len, get_fs())) {					\
+		if (!__builtin_constant_p(n))						\
+			check_object_size(__cu_from, __cu_len, true);			\
+		__cu_len = __copy_user(__cu_to, (__force void __user *)  __cu_from, __cu_len);	\
+	}										\
 	__cu_len;									\
 })
 
@@ -270,8 +279,11 @@ __copy_from_user (void *to, const void __user *from, unsigned long count)
 	long __cu_len = (n);								\
 											\
 	__chk_user_ptr(__cu_from);							\
-	if (__access_ok(__cu_from, __cu_len, get_fs()))					\
+	if (__access_ok(__cu_from, __cu_len, get_fs())) {				\
+		if (!__builtin_constant_p(n))						\
+			check_object_size(__cu_to, __cu_len, false);			\
 		__cu_len = __copy_user((__force void __user *) __cu_to, __cu_from, __cu_len);	\
+	}										\
 	__cu_len;									\
 })
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread
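
One detail worth calling out in the ia64 hunks is the __builtin_constant_p() guard: the runtime object check is only emitted when the copy length is not a compile-time constant, since constant-size copies are the low-risk case and the check would otherwise add a branch to every call. Here is a compilable sketch of that guard; placeholder_check stands in for check_object_size() and is an assumption for illustration, not the kernel API.

/*
 * Sketch of the !__builtin_constant_p() guard (the check function is
 * a placeholder assumption).
 */
#include <stdio.h>
#include <string.h>

static int checks_run;

static void placeholder_check(const void *obj, unsigned long n)
{
	(void)obj;
	(void)n;
	checks_run++;
}

#define guarded_copy(dst, src, n)					\
({									\
	if (!__builtin_constant_p(n))					\
		placeholder_check(src, n);				\
	memcpy(dst, src, n);						\
})

int main(void)
{
	char a[16], b[16] = "ia64";
	unsigned long runtime_len = strlen(b) + 1;

	guarded_copy(a, b, sizeof(b));     /* constant length: no runtime check */
	guarded_copy(a, b, runtime_len);   /* variable length: check runs */
	printf("runtime checks executed: %d\n", checks_run);
	return 0;
}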

* [PATCH v2 06/11] ia64/uaccess: Enable hardened usercopy
@ 2016-07-13 21:55   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara

Enables CONFIG_HARDENED_USERCOPY checks on ia64.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/ia64/Kconfig               |  1 +
 arch/ia64/include/asm/uaccess.h | 18 +++++++++++++++---
 2 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index f80758cb7157..32a87ef516a0 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -53,6 +53,7 @@ config IA64
 	select MODULES_USE_ELF_RELA
 	select ARCH_USE_CMPXCHG_LOCKREF
 	select HAVE_ARCH_AUDITSYSCALL
+	select HAVE_ARCH_HARDENED_USERCOPY
 	default y
 	help
 	  The Itanium Processor Family is Intel's 64-bit successor to
diff --git a/arch/ia64/include/asm/uaccess.h b/arch/ia64/include/asm/uaccess.h
index 2189d5ddc1ee..465c70982f40 100644
--- a/arch/ia64/include/asm/uaccess.h
+++ b/arch/ia64/include/asm/uaccess.h
@@ -241,12 +241,18 @@ extern unsigned long __must_check __copy_user (void __user *to, const void __use
 static inline unsigned long
 __copy_to_user (void __user *to, const void *from, unsigned long count)
 {
+	if (!__builtin_constant_p(count))
+		check_object_size(from, count, true);
+
 	return __copy_user(to, (__force void __user *) from, count);
 }
 
 static inline unsigned long
 __copy_from_user (void *to, const void __user *from, unsigned long count)
 {
+	if (!__builtin_constant_p(count))
+		check_object_size(to, count, false);
+
 	return __copy_user((__force void __user *) to, from, count);
 }
 
@@ -258,8 +264,11 @@ __copy_from_user (void *to, const void __user *from, unsigned long count)
 	const void *__cu_from = (from);							\
 	long __cu_len = (n);								\
 											\
-	if (__access_ok(__cu_to, __cu_len, get_fs()))					\
-		__cu_len = __copy_user(__cu_to, (__force void __user *) __cu_from, __cu_len);	\
+	if (__access_ok(__cu_to, __cu_len, get_fs())) {					\
+		if (!__builtin_constant_p(n))						\
+			check_object_size(__cu_from, __cu_len, true);			\
+		__cu_len = __copy_user(__cu_to, (__force void __user *)  __cu_from, __cu_len);	\
+	}										\
 	__cu_len;									\
 })
 
@@ -270,8 +279,11 @@ __copy_from_user (void *to, const void __user *from, unsigned long count)
 	long __cu_len = (n);								\
 											\
 	__chk_user_ptr(__cu_from);							\
-	if (__access_ok(__cu_from, __cu_len, get_fs()))					\
+	if (__access_ok(__cu_from, __cu_len, get_fs())) {				\
+		if (!__builtin_constant_p(n))						\
+			check_object_size(__cu_to, __cu_len, false);			\
 		__cu_len = __copy_user((__force void __user *) __cu_to, __cu_from, __cu_len);	\
+	}										\
 	__cu_len;									\
 })
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 06/11] ia64/uaccess: Enable hardened usercopy
@ 2016-07-13 21:55   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Enables CONFIG_HARDENED_USERCOPY checks on ia64.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/ia64/Kconfig               |  1 +
 arch/ia64/include/asm/uaccess.h | 18 +++++++++++++++---
 2 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index f80758cb7157..32a87ef516a0 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -53,6 +53,7 @@ config IA64
 	select MODULES_USE_ELF_RELA
 	select ARCH_USE_CMPXCHG_LOCKREF
 	select HAVE_ARCH_AUDITSYSCALL
+	select HAVE_ARCH_HARDENED_USERCOPY
 	default y
 	help
 	  The Itanium Processor Family is Intel's 64-bit successor to
diff --git a/arch/ia64/include/asm/uaccess.h b/arch/ia64/include/asm/uaccess.h
index 2189d5ddc1ee..465c70982f40 100644
--- a/arch/ia64/include/asm/uaccess.h
+++ b/arch/ia64/include/asm/uaccess.h
@@ -241,12 +241,18 @@ extern unsigned long __must_check __copy_user (void __user *to, const void __use
 static inline unsigned long
 __copy_to_user (void __user *to, const void *from, unsigned long count)
 {
+	if (!__builtin_constant_p(count))
+		check_object_size(from, count, true);
+
 	return __copy_user(to, (__force void __user *) from, count);
 }
 
 static inline unsigned long
 __copy_from_user (void *to, const void __user *from, unsigned long count)
 {
+	if (!__builtin_constant_p(count))
+		check_object_size(to, count, false);
+
 	return __copy_user((__force void __user *) to, from, count);
 }
 
@@ -258,8 +264,11 @@ __copy_from_user (void *to, const void __user *from, unsigned long count)
 	const void *__cu_from = (from);							\
 	long __cu_len = (n);								\
 											\
-	if (__access_ok(__cu_to, __cu_len, get_fs()))					\
-		__cu_len = __copy_user(__cu_to, (__force void __user *) __cu_from, __cu_len);	\
+	if (__access_ok(__cu_to, __cu_len, get_fs())) {					\
+		if (!__builtin_constant_p(n))						\
+			check_object_size(__cu_from, __cu_len, true);			\
+		__cu_len = __copy_user(__cu_to, (__force void __user *)  __cu_from, __cu_len);	\
+	}										\
 	__cu_len;									\
 })
 
@@ -270,8 +279,11 @@ __copy_from_user (void *to, const void __user *from, unsigned long count)
 	long __cu_len = (n);								\
 											\
 	__chk_user_ptr(__cu_from);							\
-	if (__access_ok(__cu_from, __cu_len, get_fs()))					\
+	if (__access_ok(__cu_from, __cu_len, get_fs())) {				\
+		if (!__builtin_constant_p(n))						\
+			check_object_size(__cu_to, __cu_len, false);			\
 		__cu_len = __copy_user((__force void __user *) __cu_to, __cu_from, __cu_len);	\
+	}										\
 	__cu_len;									\
 })
 
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 06/11] ia64/uaccess: Enable hardened usercopy
@ 2016-07-13 21:55   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Enables CONFIG_HARDENED_USERCOPY checks on ia64.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/ia64/Kconfig               |  1 +
 arch/ia64/include/asm/uaccess.h | 18 +++++++++++++++---
 2 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index f80758cb7157..32a87ef516a0 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -53,6 +53,7 @@ config IA64
 	select MODULES_USE_ELF_RELA
 	select ARCH_USE_CMPXCHG_LOCKREF
 	select HAVE_ARCH_AUDITSYSCALL
+	select HAVE_ARCH_HARDENED_USERCOPY
 	default y
 	help
 	  The Itanium Processor Family is Intel's 64-bit successor to
diff --git a/arch/ia64/include/asm/uaccess.h b/arch/ia64/include/asm/uaccess.h
index 2189d5ddc1ee..465c70982f40 100644
--- a/arch/ia64/include/asm/uaccess.h
+++ b/arch/ia64/include/asm/uaccess.h
@@ -241,12 +241,18 @@ extern unsigned long __must_check __copy_user (void __user *to, const void __use
 static inline unsigned long
 __copy_to_user (void __user *to, const void *from, unsigned long count)
 {
+	if (!__builtin_constant_p(count))
+		check_object_size(from, count, true);
+
 	return __copy_user(to, (__force void __user *) from, count);
 }
 
 static inline unsigned long
 __copy_from_user (void *to, const void __user *from, unsigned long count)
 {
+	if (!__builtin_constant_p(count))
+		check_object_size(to, count, false);
+
 	return __copy_user((__force void __user *) to, from, count);
 }
 
@@ -258,8 +264,11 @@ __copy_from_user (void *to, const void __user *from, unsigned long count)
 	const void *__cu_from = (from);							\
 	long __cu_len = (n);								\
 											\
-	if (__access_ok(__cu_to, __cu_len, get_fs()))					\
-		__cu_len = __copy_user(__cu_to, (__force void __user *) __cu_from, __cu_len);	\
+	if (__access_ok(__cu_to, __cu_len, get_fs())) {					\
+		if (!__builtin_constant_p(n))						\
+			check_object_size(__cu_from, __cu_len, true);			\
+		__cu_len = __copy_user(__cu_to, (__force void __user *)  __cu_from, __cu_len);	\
+	}										\
 	__cu_len;									\
 })
 
@@ -270,8 +279,11 @@ __copy_from_user (void *to, const void __user *from, unsigned long count)
 	long __cu_len = (n);								\
 											\
 	__chk_user_ptr(__cu_from);							\
-	if (__access_ok(__cu_from, __cu_len, get_fs()))					\
+	if (__access_ok(__cu_from, __cu_len, get_fs())) {				\
+		if (!__builtin_constant_p(n))						\
+			check_object_size(__cu_to, __cu_len, false);			\
 		__cu_len = __copy_user((__force void __user *) __cu_to, __cu_from, __cu_len);	\
+	}										\
 	__cu_len;									\
 })
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 06/11] ia64/uaccess: Enable hardened usercopy
@ 2016-07-13 21:55   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:55 UTC (permalink / raw)
  To: linux-arm-kernel

Enables CONFIG_HARDENED_USERCOPY checks on ia64.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/ia64/Kconfig               |  1 +
 arch/ia64/include/asm/uaccess.h | 18 +++++++++++++++---
 2 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index f80758cb7157..32a87ef516a0 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -53,6 +53,7 @@ config IA64
 	select MODULES_USE_ELF_RELA
 	select ARCH_USE_CMPXCHG_LOCKREF
 	select HAVE_ARCH_AUDITSYSCALL
+	select HAVE_ARCH_HARDENED_USERCOPY
 	default y
 	help
 	  The Itanium Processor Family is Intel's 64-bit successor to
diff --git a/arch/ia64/include/asm/uaccess.h b/arch/ia64/include/asm/uaccess.h
index 2189d5ddc1ee..465c70982f40 100644
--- a/arch/ia64/include/asm/uaccess.h
+++ b/arch/ia64/include/asm/uaccess.h
@@ -241,12 +241,18 @@ extern unsigned long __must_check __copy_user (void __user *to, const void __use
 static inline unsigned long
 __copy_to_user (void __user *to, const void *from, unsigned long count)
 {
+	if (!__builtin_constant_p(count))
+		check_object_size(from, count, true);
+
 	return __copy_user(to, (__force void __user *) from, count);
 }
 
 static inline unsigned long
 __copy_from_user (void *to, const void __user *from, unsigned long count)
 {
+	if (!__builtin_constant_p(count))
+		check_object_size(to, count, false);
+
 	return __copy_user((__force void __user *) to, from, count);
 }
 
@@ -258,8 +264,11 @@ __copy_from_user (void *to, const void __user *from, unsigned long count)
 	const void *__cu_from = (from);							\
 	long __cu_len = (n);								\
 											\
-	if (__access_ok(__cu_to, __cu_len, get_fs()))					\
-		__cu_len = __copy_user(__cu_to, (__force void __user *) __cu_from, __cu_len);	\
+	if (__access_ok(__cu_to, __cu_len, get_fs())) {					\
+		if (!__builtin_constant_p(n))						\
+			check_object_size(__cu_from, __cu_len, true);			\
+		__cu_len = __copy_user(__cu_to, (__force void __user *)  __cu_from, __cu_len);	\
+	}										\
 	__cu_len;									\
 })
 
@@ -270,8 +279,11 @@ __copy_from_user (void *to, const void __user *from, unsigned long count)
 	long __cu_len = (n);								\
 											\
 	__chk_user_ptr(__cu_from);							\
-	if (__access_ok(__cu_from, __cu_len, get_fs()))					\
+	if (__access_ok(__cu_from, __cu_len, get_fs())) {				\
+		if (!__builtin_constant_p(n))						\
+			check_object_size(__cu_to, __cu_len, false);			\
 		__cu_len = __copy_user((__force void __user *) __cu_to, __cu_from, __cu_len);	\
+	}										\
 	__cu_len;									\
 })
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [kernel-hardening] [PATCH v2 06/11] ia64/uaccess: Enable hardened usercopy
@ 2016-07-13 21:55   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Enables CONFIG_HARDENED_USERCOPY checks on ia64.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/ia64/Kconfig               |  1 +
 arch/ia64/include/asm/uaccess.h | 18 +++++++++++++++---
 2 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index f80758cb7157..32a87ef516a0 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -53,6 +53,7 @@ config IA64
 	select MODULES_USE_ELF_RELA
 	select ARCH_USE_CMPXCHG_LOCKREF
 	select HAVE_ARCH_AUDITSYSCALL
+	select HAVE_ARCH_HARDENED_USERCOPY
 	default y
 	help
 	  The Itanium Processor Family is Intel's 64-bit successor to
diff --git a/arch/ia64/include/asm/uaccess.h b/arch/ia64/include/asm/uaccess.h
index 2189d5ddc1ee..465c70982f40 100644
--- a/arch/ia64/include/asm/uaccess.h
+++ b/arch/ia64/include/asm/uaccess.h
@@ -241,12 +241,18 @@ extern unsigned long __must_check __copy_user (void __user *to, const void __use
 static inline unsigned long
 __copy_to_user (void __user *to, const void *from, unsigned long count)
 {
+	if (!__builtin_constant_p(count))
+		check_object_size(from, count, true);
+
 	return __copy_user(to, (__force void __user *) from, count);
 }
 
 static inline unsigned long
 __copy_from_user (void *to, const void __user *from, unsigned long count)
 {
+	if (!__builtin_constant_p(count))
+		check_object_size(to, count, false);
+
 	return __copy_user((__force void __user *) to, from, count);
 }
 
@@ -258,8 +264,11 @@ __copy_from_user (void *to, const void __user *from, unsigned long count)
 	const void *__cu_from = (from);							\
 	long __cu_len = (n);								\
 											\
-	if (__access_ok(__cu_to, __cu_len, get_fs()))					\
-		__cu_len = __copy_user(__cu_to, (__force void __user *) __cu_from, __cu_len);	\
+	if (__access_ok(__cu_to, __cu_len, get_fs())) {					\
+		if (!__builtin_constant_p(n))						\
+			check_object_size(__cu_from, __cu_len, true);			\
+		__cu_len = __copy_user(__cu_to, (__force void __user *)  __cu_from, __cu_len);	\
+	}										\
 	__cu_len;									\
 })
 
@@ -270,8 +279,11 @@ __copy_from_user (void *to, const void __user *from, unsigned long count)
 	long __cu_len = (n);								\
 											\
 	__chk_user_ptr(__cu_from);							\
-	if (__access_ok(__cu_from, __cu_len, get_fs()))					\
+	if (__access_ok(__cu_from, __cu_len, get_fs())) {				\
+		if (!__builtin_constant_p(n))						\
+			check_object_size(__cu_to, __cu_len, false);			\
 		__cu_len = __copy_user((__force void __user *) __cu_to, __cu_from, __cu_len);	\
+	}										\
 	__cu_len;									\
 })
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 07/11] powerpc/uaccess: Enable hardened usercopy
@ 2016-07-13 21:56   ` Kees Cook
  -1 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:56 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Enables CONFIG_HARDENED_USERCOPY checks on powerpc.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/powerpc/Kconfig               |  1 +
 arch/powerpc/include/asm/uaccess.h | 21 +++++++++++++++++++--
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 01f7464d9fea..b7a18b2604be 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -164,6 +164,7 @@ config PPC
 	select ARCH_HAS_UBSAN_SANITIZE_ALL
 	select ARCH_SUPPORTS_DEFERRED_STRUCT_PAGE_INIT
 	select HAVE_LIVEPATCH if HAVE_DYNAMIC_FTRACE_WITH_REGS
+	select HAVE_ARCH_HARDENED_USERCOPY
 
 config GENERIC_CSUM
 	def_bool CPU_LITTLE_ENDIAN
diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index b7c20f0b8fbe..c1dc6c14deb8 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -310,10 +310,15 @@ static inline unsigned long copy_from_user(void *to,
 {
 	unsigned long over;
 
-	if (access_ok(VERIFY_READ, from, n))
+	if (access_ok(VERIFY_READ, from, n)) {
+		if (!__builtin_constant_p(n))
+			check_object_size(to, n, false);
 		return __copy_tofrom_user((__force void __user *)to, from, n);
+	}
 	if ((unsigned long)from < TASK_SIZE) {
 		over = (unsigned long)from + n - TASK_SIZE;
+		if (!__builtin_constant_p(n - over))
+			check_object_size(to, n - over, false);
 		return __copy_tofrom_user((__force void __user *)to, from,
 				n - over) + over;
 	}
@@ -325,10 +330,15 @@ static inline unsigned long copy_to_user(void __user *to,
 {
 	unsigned long over;
 
-	if (access_ok(VERIFY_WRITE, to, n))
+	if (access_ok(VERIFY_WRITE, to, n)) {
+		if (!__builtin_constant_p(n))
+			check_object_size(from, n, true);
 		return __copy_tofrom_user(to, (__force void __user *)from, n);
+	}
 	if ((unsigned long)to < TASK_SIZE) {
 		over = (unsigned long)to + n - TASK_SIZE;
+		if (!__builtin_constant_p(n))
+			check_object_size(from, n - over, true);
 		return __copy_tofrom_user(to, (__force void __user *)from,
 				n - over) + over;
 	}
@@ -372,6 +382,10 @@ static inline unsigned long __copy_from_user_inatomic(void *to,
 		if (ret == 0)
 			return 0;
 	}
+
+	if (!__builtin_constant_p(n))
+		check_object_size(to, n, false);
+
 	return __copy_tofrom_user((__force void __user *)to, from, n);
 }
 
@@ -398,6 +412,9 @@ static inline unsigned long __copy_to_user_inatomic(void __user *to,
 		if (ret == 0)
 			return 0;
 	}
+	if (!__builtin_constant_p(n))
+		check_object_size(from, n, true);
+
 	return __copy_tofrom_user(to, (__force const void __user *)from, n);
 }
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread
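
A note on the TASK_SIZE branches in the powerpc hunks above: when the user-space range runs past the top of the user address space, only the n - over bytes below TASK_SIZE are copied and over is added to the returned "bytes not copied" count, so the clamped length n - over is also what gets passed to check_object_size(). The following standalone sketch only illustrates that arithmetic; the boundary and addresses are made-up example values, not powerpc's real layout.

#include <stdio.h>

#define TASK_SIZE 0x100000000UL		/* example boundary only */

int main(void)
{
	unsigned long from = 0xfffffff0UL;	/* 16 bytes below the boundary */
	unsigned long n = 64;			/* requested copy length */
	unsigned long over = from + n - TASK_SIZE;

	/* assuming the clamped copy itself succeeds, this prints:
	 * requested 64, copyable 16, reported as uncopied 48 */
	printf("requested %lu, copyable %lu, reported as uncopied %lu\n",
	       n, n - over, over);
	return 0;
}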

* [PATCH v2 07/11] powerpc/uaccess: Enable hardened usercopy
@ 2016-07-13 21:56   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:56 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara

Enables CONFIG_HARDENED_USERCOPY checks on powerpc.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/powerpc/Kconfig               |  1 +
 arch/powerpc/include/asm/uaccess.h | 21 +++++++++++++++++++--
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 01f7464d9fea..b7a18b2604be 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -164,6 +164,7 @@ config PPC
 	select ARCH_HAS_UBSAN_SANITIZE_ALL
 	select ARCH_SUPPORTS_DEFERRED_STRUCT_PAGE_INIT
 	select HAVE_LIVEPATCH if HAVE_DYNAMIC_FTRACE_WITH_REGS
+	select HAVE_ARCH_HARDENED_USERCOPY
 
 config GENERIC_CSUM
 	def_bool CPU_LITTLE_ENDIAN
diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index b7c20f0b8fbe..c1dc6c14deb8 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -310,10 +310,15 @@ static inline unsigned long copy_from_user(void *to,
 {
 	unsigned long over;
 
-	if (access_ok(VERIFY_READ, from, n))
+	if (access_ok(VERIFY_READ, from, n)) {
+		if (!__builtin_constant_p(n))
+			check_object_size(to, n, false);
 		return __copy_tofrom_user((__force void __user *)to, from, n);
+	}
 	if ((unsigned long)from < TASK_SIZE) {
 		over = (unsigned long)from + n - TASK_SIZE;
+		if (!__builtin_constant_p(n - over))
+			check_object_size(to, n - over, false);
 		return __copy_tofrom_user((__force void __user *)to, from,
 				n - over) + over;
 	}
@@ -325,10 +330,15 @@ static inline unsigned long copy_to_user(void __user *to,
 {
 	unsigned long over;
 
-	if (access_ok(VERIFY_WRITE, to, n))
+	if (access_ok(VERIFY_WRITE, to, n)) {
+		if (!__builtin_constant_p(n))
+			check_object_size(from, n, true);
 		return __copy_tofrom_user(to, (__force void __user *)from, n);
+	}
 	if ((unsigned long)to < TASK_SIZE) {
 		over = (unsigned long)to + n - TASK_SIZE;
+		if (!__builtin_constant_p(n))
+			check_object_size(from, n - over, true);
 		return __copy_tofrom_user(to, (__force void __user *)from,
 				n - over) + over;
 	}
@@ -372,6 +382,10 @@ static inline unsigned long __copy_from_user_inatomic(void *to,
 		if (ret == 0)
 			return 0;
 	}
+
+	if (!__builtin_constant_p(n))
+		check_object_size(to, n, false);
+
 	return __copy_tofrom_user((__force void __user *)to, from, n);
 }
 
@@ -398,6 +412,9 @@ static inline unsigned long __copy_to_user_inatomic(void __user *to,
 		if (ret == 0)
 			return 0;
 	}
+	if (!__builtin_constant_p(n))
+		check_object_size(from, n, true);
+
 	return __copy_tofrom_user(to, (__force const void __user *)from, n);
 }
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 07/11] powerpc/uaccess: Enable hardened usercopy
@ 2016-07-13 21:56   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:56 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Enables CONFIG_HARDENED_USERCOPY checks on powerpc.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/powerpc/Kconfig               |  1 +
 arch/powerpc/include/asm/uaccess.h | 21 +++++++++++++++++++--
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 01f7464d9fea..b7a18b2604be 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -164,6 +164,7 @@ config PPC
 	select ARCH_HAS_UBSAN_SANITIZE_ALL
 	select ARCH_SUPPORTS_DEFERRED_STRUCT_PAGE_INIT
 	select HAVE_LIVEPATCH if HAVE_DYNAMIC_FTRACE_WITH_REGS
+	select HAVE_ARCH_HARDENED_USERCOPY
 
 config GENERIC_CSUM
 	def_bool CPU_LITTLE_ENDIAN
diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index b7c20f0b8fbe..c1dc6c14deb8 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -310,10 +310,15 @@ static inline unsigned long copy_from_user(void *to,
 {
 	unsigned long over;
 
-	if (access_ok(VERIFY_READ, from, n))
+	if (access_ok(VERIFY_READ, from, n)) {
+		if (!__builtin_constant_p(n))
+			check_object_size(to, n, false);
 		return __copy_tofrom_user((__force void __user *)to, from, n);
+	}
 	if ((unsigned long)from < TASK_SIZE) {
 		over = (unsigned long)from + n - TASK_SIZE;
+		if (!__builtin_constant_p(n - over))
+			check_object_size(to, n - over, false);
 		return __copy_tofrom_user((__force void __user *)to, from,
 				n - over) + over;
 	}
@@ -325,10 +330,15 @@ static inline unsigned long copy_to_user(void __user *to,
 {
 	unsigned long over;
 
-	if (access_ok(VERIFY_WRITE, to, n))
+	if (access_ok(VERIFY_WRITE, to, n)) {
+		if (!__builtin_constant_p(n))
+			check_object_size(from, n, true);
 		return __copy_tofrom_user(to, (__force void __user *)from, n);
+	}
 	if ((unsigned long)to < TASK_SIZE) {
 		over = (unsigned long)to + n - TASK_SIZE;
+		if (!__builtin_constant_p(n))
+			check_object_size(from, n - over, true);
 		return __copy_tofrom_user(to, (__force void __user *)from,
 				n - over) + over;
 	}
@@ -372,6 +382,10 @@ static inline unsigned long __copy_from_user_inatomic(void *to,
 		if (ret == 0)
 			return 0;
 	}
+
+	if (!__builtin_constant_p(n))
+		check_object_size(to, n, false);
+
 	return __copy_tofrom_user((__force void __user *)to, from, n);
 }
 
@@ -398,6 +412,9 @@ static inline unsigned long __copy_to_user_inatomic(void __user *to,
 		if (ret == 0)
 			return 0;
 	}
+	if (!__builtin_constant_p(n))
+		check_object_size(from, n, true);
+
 	return __copy_tofrom_user(to, (__force const void __user *)from, n);
 }
 
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 07/11] powerpc/uaccess: Enable hardened usercopy
@ 2016-07-13 21:56   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:56 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Enables CONFIG_HARDENED_USERCOPY checks on powerpc.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/powerpc/Kconfig               |  1 +
 arch/powerpc/include/asm/uaccess.h | 21 +++++++++++++++++++--
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 01f7464d9fea..b7a18b2604be 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -164,6 +164,7 @@ config PPC
 	select ARCH_HAS_UBSAN_SANITIZE_ALL
 	select ARCH_SUPPORTS_DEFERRED_STRUCT_PAGE_INIT
 	select HAVE_LIVEPATCH if HAVE_DYNAMIC_FTRACE_WITH_REGS
+	select HAVE_ARCH_HARDENED_USERCOPY
 
 config GENERIC_CSUM
 	def_bool CPU_LITTLE_ENDIAN
diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index b7c20f0b8fbe..c1dc6c14deb8 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -310,10 +310,15 @@ static inline unsigned long copy_from_user(void *to,
 {
 	unsigned long over;
 
-	if (access_ok(VERIFY_READ, from, n))
+	if (access_ok(VERIFY_READ, from, n)) {
+		if (!__builtin_constant_p(n))
+			check_object_size(to, n, false);
 		return __copy_tofrom_user((__force void __user *)to, from, n);
+	}
 	if ((unsigned long)from < TASK_SIZE) {
 		over = (unsigned long)from + n - TASK_SIZE;
+		if (!__builtin_constant_p(n - over))
+			check_object_size(to, n - over, false);
 		return __copy_tofrom_user((__force void __user *)to, from,
 				n - over) + over;
 	}
@@ -325,10 +330,15 @@ static inline unsigned long copy_to_user(void __user *to,
 {
 	unsigned long over;
 
-	if (access_ok(VERIFY_WRITE, to, n))
+	if (access_ok(VERIFY_WRITE, to, n)) {
+		if (!__builtin_constant_p(n))
+			check_object_size(from, n, true);
 		return __copy_tofrom_user(to, (__force void __user *)from, n);
+	}
 	if ((unsigned long)to < TASK_SIZE) {
 		over = (unsigned long)to + n - TASK_SIZE;
+		if (!__builtin_constant_p(n))
+			check_object_size(from, n - over, true);
 		return __copy_tofrom_user(to, (__force void __user *)from,
 				n - over) + over;
 	}
@@ -372,6 +382,10 @@ static inline unsigned long __copy_from_user_inatomic(void *to,
 		if (ret == 0)
 			return 0;
 	}
+
+	if (!__builtin_constant_p(n))
+		check_object_size(to, n, false);
+
 	return __copy_tofrom_user((__force void __user *)to, from, n);
 }
 
@@ -398,6 +412,9 @@ static inline unsigned long __copy_to_user_inatomic(void __user *to,
 		if (ret == 0)
 			return 0;
 	}
+	if (!__builtin_constant_p(n))
+		check_object_size(from, n, true);
+
 	return __copy_tofrom_user(to, (__force const void __user *)from, n);
 }
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 07/11] powerpc/uaccess: Enable hardened usercopy
@ 2016-07-13 21:56   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:56 UTC (permalink / raw)
  To: linux-arm-kernel

Enables CONFIG_HARDENED_USERCOPY checks on powerpc.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/powerpc/Kconfig               |  1 +
 arch/powerpc/include/asm/uaccess.h | 21 +++++++++++++++++++--
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 01f7464d9fea..b7a18b2604be 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -164,6 +164,7 @@ config PPC
 	select ARCH_HAS_UBSAN_SANITIZE_ALL
 	select ARCH_SUPPORTS_DEFERRED_STRUCT_PAGE_INIT
 	select HAVE_LIVEPATCH if HAVE_DYNAMIC_FTRACE_WITH_REGS
+	select HAVE_ARCH_HARDENED_USERCOPY
 
 config GENERIC_CSUM
 	def_bool CPU_LITTLE_ENDIAN
diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index b7c20f0b8fbe..c1dc6c14deb8 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -310,10 +310,15 @@ static inline unsigned long copy_from_user(void *to,
 {
 	unsigned long over;
 
-	if (access_ok(VERIFY_READ, from, n))
+	if (access_ok(VERIFY_READ, from, n)) {
+		if (!__builtin_constant_p(n))
+			check_object_size(to, n, false);
 		return __copy_tofrom_user((__force void __user *)to, from, n);
+	}
 	if ((unsigned long)from < TASK_SIZE) {
 		over = (unsigned long)from + n - TASK_SIZE;
+		if (!__builtin_constant_p(n - over))
+			check_object_size(to, n - over, false);
 		return __copy_tofrom_user((__force void __user *)to, from,
 				n - over) + over;
 	}
@@ -325,10 +330,15 @@ static inline unsigned long copy_to_user(void __user *to,
 {
 	unsigned long over;
 
-	if (access_ok(VERIFY_WRITE, to, n))
+	if (access_ok(VERIFY_WRITE, to, n)) {
+		if (!__builtin_constant_p(n))
+			check_object_size(from, n, true);
 		return __copy_tofrom_user(to, (__force void __user *)from, n);
+	}
 	if ((unsigned long)to < TASK_SIZE) {
 		over = (unsigned long)to + n - TASK_SIZE;
+		if (!__builtin_constant_p(n))
+			check_object_size(from, n - over, true);
 		return __copy_tofrom_user(to, (__force void __user *)from,
 				n - over) + over;
 	}
@@ -372,6 +382,10 @@ static inline unsigned long __copy_from_user_inatomic(void *to,
 		if (ret == 0)
 			return 0;
 	}
+
+	if (!__builtin_constant_p(n))
+		check_object_size(to, n, false);
+
 	return __copy_tofrom_user((__force void __user *)to, from, n);
 }
 
@@ -398,6 +412,9 @@ static inline unsigned long __copy_to_user_inatomic(void __user *to,
 		if (ret == 0)
 			return 0;
 	}
+	if (!__builtin_constant_p(n))
+		check_object_size(from, n, true);
+
 	return __copy_tofrom_user(to, (__force const void __user *)from, n);
 }
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [kernel-hardening] [PATCH v2 07/11] powerpc/uaccess: Enable hardened usercopy
@ 2016-07-13 21:56   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:56 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Enables CONFIG_HARDENED_USERCOPY checks on powerpc.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/powerpc/Kconfig               |  1 +
 arch/powerpc/include/asm/uaccess.h | 21 +++++++++++++++++++--
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 01f7464d9fea..b7a18b2604be 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -164,6 +164,7 @@ config PPC
 	select ARCH_HAS_UBSAN_SANITIZE_ALL
 	select ARCH_SUPPORTS_DEFERRED_STRUCT_PAGE_INIT
 	select HAVE_LIVEPATCH if HAVE_DYNAMIC_FTRACE_WITH_REGS
+	select HAVE_ARCH_HARDENED_USERCOPY
 
 config GENERIC_CSUM
 	def_bool CPU_LITTLE_ENDIAN
diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index b7c20f0b8fbe..c1dc6c14deb8 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -310,10 +310,15 @@ static inline unsigned long copy_from_user(void *to,
 {
 	unsigned long over;
 
-	if (access_ok(VERIFY_READ, from, n))
+	if (access_ok(VERIFY_READ, from, n)) {
+		if (!__builtin_constant_p(n))
+			check_object_size(to, n, false);
 		return __copy_tofrom_user((__force void __user *)to, from, n);
+	}
 	if ((unsigned long)from < TASK_SIZE) {
 		over = (unsigned long)from + n - TASK_SIZE;
+		if (!__builtin_constant_p(n - over))
+			check_object_size(to, n - over, false);
 		return __copy_tofrom_user((__force void __user *)to, from,
 				n - over) + over;
 	}
@@ -325,10 +330,15 @@ static inline unsigned long copy_to_user(void __user *to,
 {
 	unsigned long over;
 
-	if (access_ok(VERIFY_WRITE, to, n))
+	if (access_ok(VERIFY_WRITE, to, n)) {
+		if (!__builtin_constant_p(n))
+			check_object_size(from, n, true);
 		return __copy_tofrom_user(to, (__force void __user *)from, n);
+	}
 	if ((unsigned long)to < TASK_SIZE) {
 		over = (unsigned long)to + n - TASK_SIZE;
+		if (!__builtin_constant_p(n))
+			check_object_size(from, n - over, true);
 		return __copy_tofrom_user(to, (__force void __user *)from,
 				n - over) + over;
 	}
@@ -372,6 +382,10 @@ static inline unsigned long __copy_from_user_inatomic(void *to,
 		if (ret == 0)
 			return 0;
 	}
+
+	if (!__builtin_constant_p(n))
+		check_object_size(to, n, false);
+
 	return __copy_tofrom_user((__force void __user *)to, from, n);
 }
 
@@ -398,6 +412,9 @@ static inline unsigned long __copy_to_user_inatomic(void __user *to,
 		if (ret == 0)
 			return 0;
 	}
+	if (!__builtin_constant_p(n))
+		check_object_size(from, n, true);
+
 	return __copy_tofrom_user(to, (__force const void __user *)from, n);
 }
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 08/11] sparc/uaccess: Enable hardened usercopy
  2016-07-13 21:55 ` Kees Cook
                     ` (3 preceding siblings ...)
  (?)
@ 2016-07-13 21:56   ` Kees Cook
  -1 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:56 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Enables CONFIG_HARDENED_USERCOPY checks on sparc.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/sparc/Kconfig                  |  1 +
 arch/sparc/include/asm/uaccess_32.h | 14 ++++++++++----
 arch/sparc/include/asm/uaccess_64.h | 11 +++++++++--
 3 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index 546293d9e6c5..59b09600dd32 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -43,6 +43,7 @@ config SPARC
 	select OLD_SIGSUSPEND
 	select ARCH_HAS_SG_CHAIN
 	select CPU_NO_EFFICIENT_FFS
+	select HAVE_ARCH_HARDENED_USERCOPY
 
 config SPARC32
 	def_bool !64BIT
diff --git a/arch/sparc/include/asm/uaccess_32.h b/arch/sparc/include/asm/uaccess_32.h
index 57aca2792d29..341a5a133f48 100644
--- a/arch/sparc/include/asm/uaccess_32.h
+++ b/arch/sparc/include/asm/uaccess_32.h
@@ -248,22 +248,28 @@ unsigned long __copy_user(void __user *to, const void __user *from, unsigned lon
 
 static inline unsigned long copy_to_user(void __user *to, const void *from, unsigned long n)
 {
-	if (n && __access_ok((unsigned long) to, n))
+	if (n && __access_ok((unsigned long) to, n)) {
+		if (!__builtin_constant_p(n))
+			check_object_size(from, n, true);
 		return __copy_user(to, (__force void __user *) from, n);
-	else
+	} else
 		return n;
 }
 
 static inline unsigned long __copy_to_user(void __user *to, const void *from, unsigned long n)
 {
+	if (!__builtin_constant_p(n))
+		check_object_size(from, n, true);
 	return __copy_user(to, (__force void __user *) from, n);
 }
 
 static inline unsigned long copy_from_user(void *to, const void __user *from, unsigned long n)
 {
-	if (n && __access_ok((unsigned long) from, n))
+	if (n && __access_ok((unsigned long) from, n)) {
+		if (!__builtin_constant_p(n))
+			check_object_size(to, n, false);
 		return __copy_user((__force void __user *) to, from, n);
-	else
+	} else
 		return n;
 }
 
diff --git a/arch/sparc/include/asm/uaccess_64.h b/arch/sparc/include/asm/uaccess_64.h
index e9a51d64974d..8bda94fab8e8 100644
--- a/arch/sparc/include/asm/uaccess_64.h
+++ b/arch/sparc/include/asm/uaccess_64.h
@@ -210,8 +210,12 @@ unsigned long copy_from_user_fixup(void *to, const void __user *from,
 static inline unsigned long __must_check
 copy_from_user(void *to, const void __user *from, unsigned long size)
 {
-	unsigned long ret = ___copy_from_user(to, from, size);
+	unsigned long ret;
 
+	if (!__builtin_constant_p(size))
+		check_object_size(to, size, false);
+
+	ret = ___copy_from_user(to, from, size);
 	if (unlikely(ret))
 		ret = copy_from_user_fixup(to, from, size);
 
@@ -227,8 +231,11 @@ unsigned long copy_to_user_fixup(void __user *to, const void *from,
 static inline unsigned long __must_check
 copy_to_user(void __user *to, const void *from, unsigned long size)
 {
-	unsigned long ret = ___copy_to_user(to, from, size);
+	unsigned long ret;
 
+	if (!__builtin_constant_p(size))
+		check_object_size(from, size, true);
+	ret = ___copy_to_user(to, from, size);
 	if (unlikely(ret))
 		ret = copy_to_user_fixup(to, from, size);
 	return ret;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread
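
For reference, the third argument to check_object_size() in the sparc hunks (and throughout the series) is the direction flag: true means the kernel object is the source of a copy_to_user(), so an oversized length would expose kernel memory; false means it is the destination of a copy_from_user(), so an oversized length would overwrite past the object. A minimal userspace model of that contract follows; the fixed 32-byte bound is a stand-in, not the kernel's real slab/stack lookup, and the message format is invented for the sketch.

#include <stdbool.h>
#include <stdio.h>

/* toy model of the interface; the real check derives the bound from
 * wherever the pointer actually lives (slab object, stack frame, etc.) */
static void check_object_size(const void *ptr, unsigned long n, bool to_user)
{
	unsigned long max = 32;		/* stand-in object size */

	(void)ptr;
	if (n > max)
		fprintf(stderr, "usercopy: %lu-byte %s exceeds %lu-byte object\n",
			n,
			to_user ? "exposure (copy_to_user source)"
				: "overwrite (copy_from_user destination)",
			max);
}

int main(void)
{
	char obj[32];

	check_object_size(obj, sizeof(obj), true);	/* within bounds: silent */
	check_object_size(obj, 64, false);		/* would be flagged */
	return 0;
}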

* [PATCH v2 08/11] sparc/uaccess: Enable hardened usercopy
@ 2016-07-13 21:56   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:56 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara

Enables CONFIG_HARDENED_USERCOPY checks on sparc.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/sparc/Kconfig                  |  1 +
 arch/sparc/include/asm/uaccess_32.h | 14 ++++++++++----
 arch/sparc/include/asm/uaccess_64.h | 11 +++++++++--
 3 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index 546293d9e6c5..59b09600dd32 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -43,6 +43,7 @@ config SPARC
 	select OLD_SIGSUSPEND
 	select ARCH_HAS_SG_CHAIN
 	select CPU_NO_EFFICIENT_FFS
+	select HAVE_ARCH_HARDENED_USERCOPY
 
 config SPARC32
 	def_bool !64BIT
diff --git a/arch/sparc/include/asm/uaccess_32.h b/arch/sparc/include/asm/uaccess_32.h
index 57aca2792d29..341a5a133f48 100644
--- a/arch/sparc/include/asm/uaccess_32.h
+++ b/arch/sparc/include/asm/uaccess_32.h
@@ -248,22 +248,28 @@ unsigned long __copy_user(void __user *to, const void __user *from, unsigned lon
 
 static inline unsigned long copy_to_user(void __user *to, const void *from, unsigned long n)
 {
-	if (n && __access_ok((unsigned long) to, n))
+	if (n && __access_ok((unsigned long) to, n)) {
+		if (!__builtin_constant_p(n))
+			check_object_size(from, n, true);
 		return __copy_user(to, (__force void __user *) from, n);
-	else
+	} else
 		return n;
 }
 
 static inline unsigned long __copy_to_user(void __user *to, const void *from, unsigned long n)
 {
+	if (!__builtin_constant_p(n))
+		check_object_size(from, n, true);
 	return __copy_user(to, (__force void __user *) from, n);
 }
 
 static inline unsigned long copy_from_user(void *to, const void __user *from, unsigned long n)
 {
-	if (n && __access_ok((unsigned long) from, n))
+	if (n && __access_ok((unsigned long) from, n)) {
+		if (!__builtin_constant_p(n))
+			check_object_size(to, n, false);
 		return __copy_user((__force void __user *) to, from, n);
-	else
+	} else
 		return n;
 }
 
diff --git a/arch/sparc/include/asm/uaccess_64.h b/arch/sparc/include/asm/uaccess_64.h
index e9a51d64974d..8bda94fab8e8 100644
--- a/arch/sparc/include/asm/uaccess_64.h
+++ b/arch/sparc/include/asm/uaccess_64.h
@@ -210,8 +210,12 @@ unsigned long copy_from_user_fixup(void *to, const void __user *from,
 static inline unsigned long __must_check
 copy_from_user(void *to, const void __user *from, unsigned long size)
 {
-	unsigned long ret = ___copy_from_user(to, from, size);
+	unsigned long ret;
 
+	if (!__builtin_constant_p(size))
+		check_object_size(to, size, false);
+
+	ret = ___copy_from_user(to, from, size);
 	if (unlikely(ret))
 		ret = copy_from_user_fixup(to, from, size);
 
@@ -227,8 +231,11 @@ unsigned long copy_to_user_fixup(void __user *to, const void *from,
 static inline unsigned long __must_check
 copy_to_user(void __user *to, const void *from, unsigned long size)
 {
-	unsigned long ret = ___copy_to_user(to, from, size);
+	unsigned long ret;
 
+	if (!__builtin_constant_p(size))
+		check_object_size(from, size, true);
+	ret = ___copy_to_user(to, from, size);
 	if (unlikely(ret))
 		ret = copy_to_user_fixup(to, from, size);
 	return ret;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 08/11] sparc/uaccess: Enable hardened usercopy
@ 2016-07-13 21:56   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:56 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Enables CONFIG_HARDENED_USERCOPY checks on sparc.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/sparc/Kconfig                  |  1 +
 arch/sparc/include/asm/uaccess_32.h | 14 ++++++++++----
 arch/sparc/include/asm/uaccess_64.h | 11 +++++++++--
 3 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index 546293d9e6c5..59b09600dd32 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -43,6 +43,7 @@ config SPARC
 	select OLD_SIGSUSPEND
 	select ARCH_HAS_SG_CHAIN
 	select CPU_NO_EFFICIENT_FFS
+	select HAVE_ARCH_HARDENED_USERCOPY
 
 config SPARC32
 	def_bool !64BIT
diff --git a/arch/sparc/include/asm/uaccess_32.h b/arch/sparc/include/asm/uaccess_32.h
index 57aca2792d29..341a5a133f48 100644
--- a/arch/sparc/include/asm/uaccess_32.h
+++ b/arch/sparc/include/asm/uaccess_32.h
@@ -248,22 +248,28 @@ unsigned long __copy_user(void __user *to, const void __user *from, unsigned lon
 
 static inline unsigned long copy_to_user(void __user *to, const void *from, unsigned long n)
 {
-	if (n && __access_ok((unsigned long) to, n))
+	if (n && __access_ok((unsigned long) to, n)) {
+		if (!__builtin_constant_p(n))
+			check_object_size(from, n, true);
 		return __copy_user(to, (__force void __user *) from, n);
-	else
+	} else
 		return n;
 }
 
 static inline unsigned long __copy_to_user(void __user *to, const void *from, unsigned long n)
 {
+	if (!__builtin_constant_p(n))
+		check_object_size(from, n, true);
 	return __copy_user(to, (__force void __user *) from, n);
 }
 
 static inline unsigned long copy_from_user(void *to, const void __user *from, unsigned long n)
 {
-	if (n && __access_ok((unsigned long) from, n))
+	if (n && __access_ok((unsigned long) from, n)) {
+		if (!__builtin_constant_p(n))
+			check_object_size(to, n, false);
 		return __copy_user((__force void __user *) to, from, n);
-	else
+	} else
 		return n;
 }
 
diff --git a/arch/sparc/include/asm/uaccess_64.h b/arch/sparc/include/asm/uaccess_64.h
index e9a51d64974d..8bda94fab8e8 100644
--- a/arch/sparc/include/asm/uaccess_64.h
+++ b/arch/sparc/include/asm/uaccess_64.h
@@ -210,8 +210,12 @@ unsigned long copy_from_user_fixup(void *to, const void __user *from,
 static inline unsigned long __must_check
 copy_from_user(void *to, const void __user *from, unsigned long size)
 {
-	unsigned long ret = ___copy_from_user(to, from, size);
+	unsigned long ret;
 
+	if (!__builtin_constant_p(size))
+		check_object_size(to, size, false);
+
+	ret = ___copy_from_user(to, from, size);
 	if (unlikely(ret))
 		ret = copy_from_user_fixup(to, from, size);
 
@@ -227,8 +231,11 @@ unsigned long copy_to_user_fixup(void __user *to, const void *from,
 static inline unsigned long __must_check
 copy_to_user(void __user *to, const void *from, unsigned long size)
 {
-	unsigned long ret = ___copy_to_user(to, from, size);
+	unsigned long ret;
 
+	if (!__builtin_constant_p(size))
+		check_object_size(from, size, true);
+	ret = ___copy_to_user(to, from, size);
 	if (unlikely(ret))
 		ret = copy_to_user_fixup(to, from, size);
 	return ret;
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 08/11] sparc/uaccess: Enable hardened usercopy
@ 2016-07-13 21:56   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:56 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Enables CONFIG_HARDENED_USERCOPY checks on sparc.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/sparc/Kconfig                  |  1 +
 arch/sparc/include/asm/uaccess_32.h | 14 ++++++++++----
 arch/sparc/include/asm/uaccess_64.h | 11 +++++++++--
 3 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index 546293d9e6c5..59b09600dd32 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -43,6 +43,7 @@ config SPARC
 	select OLD_SIGSUSPEND
 	select ARCH_HAS_SG_CHAIN
 	select CPU_NO_EFFICIENT_FFS
+	select HAVE_ARCH_HARDENED_USERCOPY
 
 config SPARC32
 	def_bool !64BIT
diff --git a/arch/sparc/include/asm/uaccess_32.h b/arch/sparc/include/asm/uaccess_32.h
index 57aca2792d29..341a5a133f48 100644
--- a/arch/sparc/include/asm/uaccess_32.h
+++ b/arch/sparc/include/asm/uaccess_32.h
@@ -248,22 +248,28 @@ unsigned long __copy_user(void __user *to, const void __user *from, unsigned lon
 
 static inline unsigned long copy_to_user(void __user *to, const void *from, unsigned long n)
 {
-	if (n && __access_ok((unsigned long) to, n))
+	if (n && __access_ok((unsigned long) to, n)) {
+		if (!__builtin_constant_p(n))
+			check_object_size(from, n, true);
 		return __copy_user(to, (__force void __user *) from, n);
-	else
+	} else
 		return n;
 }
 
 static inline unsigned long __copy_to_user(void __user *to, const void *from, unsigned long n)
 {
+	if (!__builtin_constant_p(n))
+		check_object_size(from, n, true);
 	return __copy_user(to, (__force void __user *) from, n);
 }
 
 static inline unsigned long copy_from_user(void *to, const void __user *from, unsigned long n)
 {
-	if (n && __access_ok((unsigned long) from, n))
+	if (n && __access_ok((unsigned long) from, n)) {
+		if (!__builtin_constant_p(n))
+			check_object_size(to, n, false);
 		return __copy_user((__force void __user *) to, from, n);
-	else
+	} else
 		return n;
 }
 
diff --git a/arch/sparc/include/asm/uaccess_64.h b/arch/sparc/include/asm/uaccess_64.h
index e9a51d64974d..8bda94fab8e8 100644
--- a/arch/sparc/include/asm/uaccess_64.h
+++ b/arch/sparc/include/asm/uaccess_64.h
@@ -210,8 +210,12 @@ unsigned long copy_from_user_fixup(void *to, const void __user *from,
 static inline unsigned long __must_check
 copy_from_user(void *to, const void __user *from, unsigned long size)
 {
-	unsigned long ret = ___copy_from_user(to, from, size);
+	unsigned long ret;
 
+	if (!__builtin_constant_p(size))
+		check_object_size(to, size, false);
+
+	ret = ___copy_from_user(to, from, size);
 	if (unlikely(ret))
 		ret = copy_from_user_fixup(to, from, size);
 
@@ -227,8 +231,11 @@ unsigned long copy_to_user_fixup(void __user *to, const void *from,
 static inline unsigned long __must_check
 copy_to_user(void __user *to, const void *from, unsigned long size)
 {
-	unsigned long ret = ___copy_to_user(to, from, size);
+	unsigned long ret;
 
+	if (!__builtin_constant_p(size))
+		check_object_size(from, size, true);
+	ret = ___copy_to_user(to, from, size);
 	if (unlikely(ret))
 		ret = copy_to_user_fixup(to, from, size);
 	return ret;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 08/11] sparc/uaccess: Enable hardened usercopy
@ 2016-07-13 21:56   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:56 UTC (permalink / raw)
  To: linux-arm-kernel

Enables CONFIG_HARDENED_USERCOPY checks on sparc.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/sparc/Kconfig                  |  1 +
 arch/sparc/include/asm/uaccess_32.h | 14 ++++++++++----
 arch/sparc/include/asm/uaccess_64.h | 11 +++++++++--
 3 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index 546293d9e6c5..59b09600dd32 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -43,6 +43,7 @@ config SPARC
 	select OLD_SIGSUSPEND
 	select ARCH_HAS_SG_CHAIN
 	select CPU_NO_EFFICIENT_FFS
+	select HAVE_ARCH_HARDENED_USERCOPY
 
 config SPARC32
 	def_bool !64BIT
diff --git a/arch/sparc/include/asm/uaccess_32.h b/arch/sparc/include/asm/uaccess_32.h
index 57aca2792d29..341a5a133f48 100644
--- a/arch/sparc/include/asm/uaccess_32.h
+++ b/arch/sparc/include/asm/uaccess_32.h
@@ -248,22 +248,28 @@ unsigned long __copy_user(void __user *to, const void __user *from, unsigned lon
 
 static inline unsigned long copy_to_user(void __user *to, const void *from, unsigned long n)
 {
-	if (n && __access_ok((unsigned long) to, n))
+	if (n && __access_ok((unsigned long) to, n)) {
+		if (!__builtin_constant_p(n))
+			check_object_size(from, n, true);
 		return __copy_user(to, (__force void __user *) from, n);
-	else
+	} else
 		return n;
 }
 
 static inline unsigned long __copy_to_user(void __user *to, const void *from, unsigned long n)
 {
+	if (!__builtin_constant_p(n))
+		check_object_size(from, n, true);
 	return __copy_user(to, (__force void __user *) from, n);
 }
 
 static inline unsigned long copy_from_user(void *to, const void __user *from, unsigned long n)
 {
-	if (n && __access_ok((unsigned long) from, n))
+	if (n && __access_ok((unsigned long) from, n)) {
+		if (!__builtin_constant_p(n))
+			check_object_size(to, n, false);
 		return __copy_user((__force void __user *) to, from, n);
-	else
+	} else
 		return n;
 }
 
diff --git a/arch/sparc/include/asm/uaccess_64.h b/arch/sparc/include/asm/uaccess_64.h
index e9a51d64974d..8bda94fab8e8 100644
--- a/arch/sparc/include/asm/uaccess_64.h
+++ b/arch/sparc/include/asm/uaccess_64.h
@@ -210,8 +210,12 @@ unsigned long copy_from_user_fixup(void *to, const void __user *from,
 static inline unsigned long __must_check
 copy_from_user(void *to, const void __user *from, unsigned long size)
 {
-	unsigned long ret = ___copy_from_user(to, from, size);
+	unsigned long ret;
 
+	if (!__builtin_constant_p(size))
+		check_object_size(to, size, false);
+
+	ret = ___copy_from_user(to, from, size);
 	if (unlikely(ret))
 		ret = copy_from_user_fixup(to, from, size);
 
@@ -227,8 +231,11 @@ unsigned long copy_to_user_fixup(void __user *to, const void *from,
 static inline unsigned long __must_check
 copy_to_user(void __user *to, const void *from, unsigned long size)
 {
-	unsigned long ret = ___copy_to_user(to, from, size);
+	unsigned long ret;
 
+	if (!__builtin_constant_p(size))
+		check_object_size(from, size, true);
+	ret = ___copy_to_user(to, from, size);
 	if (unlikely(ret))
 		ret = copy_to_user_fixup(to, from, size);
 	return ret;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [kernel-hardening] [PATCH v2 08/11] sparc/uaccess: Enable hardened usercopy
@ 2016-07-13 21:56   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:56 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Enables CONFIG_HARDENED_USERCOPY checks on sparc.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/sparc/Kconfig                  |  1 +
 arch/sparc/include/asm/uaccess_32.h | 14 ++++++++++----
 arch/sparc/include/asm/uaccess_64.h | 11 +++++++++--
 3 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index 546293d9e6c5..59b09600dd32 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -43,6 +43,7 @@ config SPARC
 	select OLD_SIGSUSPEND
 	select ARCH_HAS_SG_CHAIN
 	select CPU_NO_EFFICIENT_FFS
+	select HAVE_ARCH_HARDENED_USERCOPY
 
 config SPARC32
 	def_bool !64BIT
diff --git a/arch/sparc/include/asm/uaccess_32.h b/arch/sparc/include/asm/uaccess_32.h
index 57aca2792d29..341a5a133f48 100644
--- a/arch/sparc/include/asm/uaccess_32.h
+++ b/arch/sparc/include/asm/uaccess_32.h
@@ -248,22 +248,28 @@ unsigned long __copy_user(void __user *to, const void __user *from, unsigned lon
 
 static inline unsigned long copy_to_user(void __user *to, const void *from, unsigned long n)
 {
-	if (n && __access_ok((unsigned long) to, n))
+	if (n && __access_ok((unsigned long) to, n)) {
+		if (!__builtin_constant_p(n))
+			check_object_size(from, n, true);
 		return __copy_user(to, (__force void __user *) from, n);
-	else
+	} else
 		return n;
 }
 
 static inline unsigned long __copy_to_user(void __user *to, const void *from, unsigned long n)
 {
+	if (!__builtin_constant_p(n))
+		check_object_size(from, n, true);
 	return __copy_user(to, (__force void __user *) from, n);
 }
 
 static inline unsigned long copy_from_user(void *to, const void __user *from, unsigned long n)
 {
-	if (n && __access_ok((unsigned long) from, n))
+	if (n && __access_ok((unsigned long) from, n)) {
+		if (!__builtin_constant_p(n))
+			check_object_size(to, n, false);
 		return __copy_user((__force void __user *) to, from, n);
-	else
+	} else
 		return n;
 }
 
diff --git a/arch/sparc/include/asm/uaccess_64.h b/arch/sparc/include/asm/uaccess_64.h
index e9a51d64974d..8bda94fab8e8 100644
--- a/arch/sparc/include/asm/uaccess_64.h
+++ b/arch/sparc/include/asm/uaccess_64.h
@@ -210,8 +210,12 @@ unsigned long copy_from_user_fixup(void *to, const void __user *from,
 static inline unsigned long __must_check
 copy_from_user(void *to, const void __user *from, unsigned long size)
 {
-	unsigned long ret = ___copy_from_user(to, from, size);
+	unsigned long ret;
 
+	if (!__builtin_constant_p(size))
+		check_object_size(to, size, false);
+
+	ret = ___copy_from_user(to, from, size);
 	if (unlikely(ret))
 		ret = copy_from_user_fixup(to, from, size);
 
@@ -227,8 +231,11 @@ unsigned long copy_to_user_fixup(void __user *to, const void *from,
 static inline unsigned long __must_check
 copy_to_user(void __user *to, const void *from, unsigned long size)
 {
-	unsigned long ret = ___copy_to_user(to, from, size);
+	unsigned long ret;
 
+	if (!__builtin_constant_p(size))
+		check_object_size(from, size, true);
+	ret = ___copy_to_user(to, from, size);
 	if (unlikely(ret))
 		ret = copy_to_user_fixup(to, from, size);
 	return ret;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 09/11] s390/uaccess: Enable hardened usercopy
  2016-07-13 21:55 ` Kees Cook
                     ` (3 preceding siblings ...)
  (?)
@ 2016-07-13 21:56   ` Kees Cook
  -1 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:56 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Enables CONFIG_HARDENED_USERCOPY checks on s390.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/s390/Kconfig       | 1 +
 arch/s390/lib/uaccess.c | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index a8c259059adf..9f694311c9ed 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -122,6 +122,7 @@ config S390
 	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_EARLY_PFN_TO_NID
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_JUMP_LABEL
 	select CPU_NO_EFFICIENT_FFS if !HAVE_MARCH_Z9_109_FEATURES
 	select HAVE_ARCH_SECCOMP_FILTER
diff --git a/arch/s390/lib/uaccess.c b/arch/s390/lib/uaccess.c
index ae4de559e3a0..6986c20166f0 100644
--- a/arch/s390/lib/uaccess.c
+++ b/arch/s390/lib/uaccess.c
@@ -104,6 +104,7 @@ static inline unsigned long copy_from_user_mvcp(void *x, const void __user *ptr,
 
 unsigned long __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
+	check_object_size(to, n, false);
 	if (static_branch_likely(&have_mvcos))
 		return copy_from_user_mvcos(to, from, n);
 	return copy_from_user_mvcp(to, from, n);
@@ -177,6 +178,7 @@ static inline unsigned long copy_to_user_mvcs(void __user *ptr, const void *x,
 
 unsigned long __copy_to_user(void __user *to, const void *from, unsigned long n)
 {
+	check_object_size(from, n, true);
 	if (static_branch_likely(&have_mvcos))
 		return copy_to_user_mvcos(to, from, n);
 	return copy_to_user_mvcs(to, from, n);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread
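
Two small differences from the other architectures are visible in the s390 hunks: the checks land in out-of-line C functions in lib/uaccess.c rather than inline header wrappers, so there is no __builtin_constant_p() elision, and a single call at the common __copy_{from,to}_user() entry point covers both the MVCOS and MVCP copy paths. A rough userspace sketch of that shape follows; every body here is a stub, and a plain bool stands in for the kernel's static key.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool have_mvcos = true;		/* stand-in for static_branch_likely() */

static void check_object_size(const void *ptr, unsigned long n, bool to_user)
{
	(void)ptr;
	printf("checking %lu-byte %s object\n", n,
	       to_user ? "source" : "destination");
}

static unsigned long copy_from_user_mvcos(void *to, const void *from, unsigned long n)
{
	memcpy(to, from, n);	/* stub for the MVCOS-based copy */
	return 0;
}

static unsigned long copy_from_user_mvcp(void *to, const void *from, unsigned long n)
{
	memcpy(to, from, n);	/* stub for the MVCP-based copy */
	return 0;
}

static unsigned long __copy_from_user(void *to, const void *from, unsigned long n)
{
	check_object_size(to, n, false);	/* single check covers both paths */
	if (have_mvcos)
		return copy_from_user_mvcos(to, from, n);
	return copy_from_user_mvcp(to, from, n);
}

int main(void)
{
	char src[16] = "usercopy", dst[16];

	return (int)__copy_from_user(dst, src, sizeof(dst));
}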

* [PATCH v2 09/11] s390/uaccess: Enable hardened usercopy
@ 2016-07-13 21:56   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:56 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara

Enables CONFIG_HARDENED_USERCOPY checks on s390.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/s390/Kconfig       | 1 +
 arch/s390/lib/uaccess.c | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index a8c259059adf..9f694311c9ed 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -122,6 +122,7 @@ config S390
 	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_EARLY_PFN_TO_NID
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_JUMP_LABEL
 	select CPU_NO_EFFICIENT_FFS if !HAVE_MARCH_Z9_109_FEATURES
 	select HAVE_ARCH_SECCOMP_FILTER
diff --git a/arch/s390/lib/uaccess.c b/arch/s390/lib/uaccess.c
index ae4de559e3a0..6986c20166f0 100644
--- a/arch/s390/lib/uaccess.c
+++ b/arch/s390/lib/uaccess.c
@@ -104,6 +104,7 @@ static inline unsigned long copy_from_user_mvcp(void *x, const void __user *ptr,
 
 unsigned long __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
+	check_object_size(to, n, false);
 	if (static_branch_likely(&have_mvcos))
 		return copy_from_user_mvcos(to, from, n);
 	return copy_from_user_mvcp(to, from, n);
@@ -177,6 +178,7 @@ static inline unsigned long copy_to_user_mvcs(void __user *ptr, const void *x,
 
 unsigned long __copy_to_user(void __user *to, const void *from, unsigned long n)
 {
+	check_object_size(from, n, true);
 	if (static_branch_likely(&have_mvcos))
 		return copy_to_user_mvcos(to, from, n);
 	return copy_to_user_mvcs(to, from, n);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 09/11] s390/uaccess: Enable hardened usercopy
@ 2016-07-13 21:56   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:56 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Enables CONFIG_HARDENED_USERCOPY checks on s390.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/s390/Kconfig       | 1 +
 arch/s390/lib/uaccess.c | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index a8c259059adf..9f694311c9ed 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -122,6 +122,7 @@ config S390
 	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_EARLY_PFN_TO_NID
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_JUMP_LABEL
 	select CPU_NO_EFFICIENT_FFS if !HAVE_MARCH_Z9_109_FEATURES
 	select HAVE_ARCH_SECCOMP_FILTER
diff --git a/arch/s390/lib/uaccess.c b/arch/s390/lib/uaccess.c
index ae4de559e3a0..6986c20166f0 100644
--- a/arch/s390/lib/uaccess.c
+++ b/arch/s390/lib/uaccess.c
@@ -104,6 +104,7 @@ static inline unsigned long copy_from_user_mvcp(void *x, const void __user *ptr,
 
 unsigned long __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
+	check_object_size(to, n, false);
 	if (static_branch_likely(&have_mvcos))
 		return copy_from_user_mvcos(to, from, n);
 	return copy_from_user_mvcp(to, from, n);
@@ -177,6 +178,7 @@ static inline unsigned long copy_to_user_mvcs(void __user *ptr, const void *x,
 
 unsigned long __copy_to_user(void __user *to, const void *from, unsigned long n)
 {
+	check_object_size(from, n, true);
 	if (static_branch_likely(&have_mvcos))
 		return copy_to_user_mvcos(to, from, n);
 	return copy_to_user_mvcs(to, from, n);
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 09/11] s390/uaccess: Enable hardened usercopy
@ 2016-07-13 21:56   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:56 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Enables CONFIG_HARDENED_USERCOPY checks on s390.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/s390/Kconfig       | 1 +
 arch/s390/lib/uaccess.c | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index a8c259059adf..9f694311c9ed 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -122,6 +122,7 @@ config S390
 	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_EARLY_PFN_TO_NID
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_JUMP_LABEL
 	select CPU_NO_EFFICIENT_FFS if !HAVE_MARCH_Z9_109_FEATURES
 	select HAVE_ARCH_SECCOMP_FILTER
diff --git a/arch/s390/lib/uaccess.c b/arch/s390/lib/uaccess.c
index ae4de559e3a0..6986c20166f0 100644
--- a/arch/s390/lib/uaccess.c
+++ b/arch/s390/lib/uaccess.c
@@ -104,6 +104,7 @@ static inline unsigned long copy_from_user_mvcp(void *x, const void __user *ptr,
 
 unsigned long __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
+	check_object_size(to, n, false);
 	if (static_branch_likely(&have_mvcos))
 		return copy_from_user_mvcos(to, from, n);
 	return copy_from_user_mvcp(to, from, n);
@@ -177,6 +178,7 @@ static inline unsigned long copy_to_user_mvcs(void __user *ptr, const void *x,
 
 unsigned long __copy_to_user(void __user *to, const void *from, unsigned long n)
 {
+	check_object_size(from, n, true);
 	if (static_branch_likely(&have_mvcos))
 		return copy_to_user_mvcos(to, from, n);
 	return copy_to_user_mvcs(to, from, n);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 09/11] s390/uaccess: Enable hardened usercopy
@ 2016-07-13 21:56   ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:56 UTC (permalink / raw)
  To: linux-arm-kernel

Enables CONFIG_HARDENED_USERCOPY checks on s390.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/s390/Kconfig       | 1 +
 arch/s390/lib/uaccess.c | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index a8c259059adf..9f694311c9ed 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -122,6 +122,7 @@ config S390
 	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_EARLY_PFN_TO_NID
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_JUMP_LABEL
 	select CPU_NO_EFFICIENT_FFS if !HAVE_MARCH_Z9_109_FEATURES
 	select HAVE_ARCH_SECCOMP_FILTER
diff --git a/arch/s390/lib/uaccess.c b/arch/s390/lib/uaccess.c
index ae4de559e3a0..6986c20166f0 100644
--- a/arch/s390/lib/uaccess.c
+++ b/arch/s390/lib/uaccess.c
@@ -104,6 +104,7 @@ static inline unsigned long copy_from_user_mvcp(void *x, const void __user *ptr,
 
 unsigned long __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
+	check_object_size(to, n, false);
 	if (static_branch_likely(&have_mvcos))
 		return copy_from_user_mvcos(to, from, n);
 	return copy_from_user_mvcp(to, from, n);
@@ -177,6 +178,7 @@ static inline unsigned long copy_to_user_mvcs(void __user *ptr, const void *x,
 
 unsigned long __copy_to_user(void __user *to, const void *from, unsigned long n)
 {
+	check_object_size(from, n, true);
 	if (static_branch_likely(&have_mvcos))
 		return copy_to_user_mvcos(to, from, n);
 	return copy_to_user_mvcs(to, from, n);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 10/11] mm: SLAB hardened usercopy support
  2016-07-13 21:55 ` Kees Cook
                     ` (3 preceding siblings ...)
  (?)
@ 2016-07-13 21:56   ` Kees Cook
  -1 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:56 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
SLAB allocator to catch any copies that may span objects.

Based on code from PaX and grsecurity.
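
To make the failure mode concrete, here is a minimal, hypothetical
example of the kind of bug the check below catches; foo_dev and
foo_read are made-up names, not code from this series:

	#include <linux/slab.h>
	#include <linux/uaccess.h>

	struct foo_dev {
		char buf[64];	/* kmalloc()ed, so it sits in kmalloc-64 */
	};

	static long foo_read(struct foo_dev *f, char __user *ubuf,
			     unsigned long len)
	{
		/*
		 * If userspace can make len larger than 64, this copy runs
		 * past the end of the 64-byte slab object.  With the hook
		 * below in place, copy_to_user() reaches
		 * __check_heap_object(), n exceeds object_size - offset,
		 * the cache name is returned, and the bad copy is reported
		 * and refused instead of leaking whatever objects sit next
		 * to f->buf in the slab.
		 */
		if (copy_to_user(ubuf, f->buf, len))
			return -EFAULT;
		return len;
	}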

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 init/Kconfig |  1 +
 mm/slab.c    | 30 ++++++++++++++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/init/Kconfig b/init/Kconfig
index f755a602d4a1..798c2020ee7c 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1757,6 +1757,7 @@ choice
 
 config SLAB
 	bool "SLAB"
+	select HAVE_HARDENED_USERCOPY_ALLOCATOR
 	help
 	  The regular slab allocator that is established and known to work
 	  well in all environments. It organizes cache hot objects in
diff --git a/mm/slab.c b/mm/slab.c
index cc8bbc1e6bc9..5e2d5f349aca 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -4477,6 +4477,36 @@ static int __init slab_proc_init(void)
 module_init(slab_proc_init);
 #endif
 
+#ifdef CONFIG_HARDENED_USERCOPY
+/*
+ * Rejects objects that are incorrectly sized.
+ *
+ * Returns NULL if check passes, otherwise const char * to name of cache
+ * to indicate an error.
+ */
+const char *__check_heap_object(const void *ptr, unsigned long n,
+				struct page *page)
+{
+	struct kmem_cache *cachep;
+	unsigned int objnr;
+	unsigned long offset;
+
+	/* Find and validate object. */
+	cachep = page->slab_cache;
+	objnr = obj_to_index(cachep, page, (void *)ptr);
+	BUG_ON(objnr >= cachep->num);
+
+	/* Find offset within object. */
+	offset = ptr - index_to_obj(cachep, page, objnr) - obj_offset(cachep);
+
+	/* Allow address range falling entirely within object size. */
+	if (offset <= cachep->object_size && n <= cachep->object_size - offset)
+		return NULL;
+
+	return cachep->name;
+}
+#endif /* CONFIG_HARDENED_USERCOPY */
+
 /**
  * ksize - get the actual amount of memory allocated for a given object
  * @objp: Pointer to the object
-- 
2.7.4
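
One detail worth noting in the hunk above (the SLUB version later in the
series does the same): the bounds test is deliberately split in two.

	/*
	 * Writing it as "offset + n <= cachep->object_size" would be
	 * unsafe: offset and n are unsigned, so a huge n (say, a
	 * sign-extended -1 from a broken length calculation) can wrap
	 * the sum back under object_size.  Checking
	 * offset <= object_size first and then n <= object_size - offset
	 * keeps the subtraction in range and the comparison
	 * overflow-free.
	 */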

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* [PATCH v2 11/11] mm: SLUB hardened usercopy support
  2016-07-13 21:55 ` Kees Cook
                     ` (3 preceding siblings ...)
  (?)
@ 2016-07-13 21:56   ` Kees Cook
  -1 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 21:56 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
SLUB allocator to catch any copies that may span objects. Includes a
redzone handling fix from Michael Ellerman.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 init/Kconfig |  1 +
 mm/slub.c    | 36 ++++++++++++++++++++++++++++++++++++
 2 files changed, 37 insertions(+)

diff --git a/init/Kconfig b/init/Kconfig
index 798c2020ee7c..1c4711819dfd 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1765,6 +1765,7 @@ config SLAB
 
 config SLUB
 	bool "SLUB (Unqueued Allocator)"
+	select HAVE_HARDENED_USERCOPY_ALLOCATOR
 	help
 	   SLUB is a slab allocator that minimizes cache line usage
 	   instead of managing queues of cached objects (SLAB approach).
diff --git a/mm/slub.c b/mm/slub.c
index 825ff4505336..7dee3d9a5843 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3614,6 +3614,42 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 EXPORT_SYMBOL(__kmalloc_node);
 #endif
 
+#ifdef CONFIG_HARDENED_USERCOPY
+/*
+ * Rejects objects that are incorrectly sized.
+ *
+ * Returns NULL if check passes, otherwise const char * to name of cache
+ * to indicate an error.
+ */
+const char *__check_heap_object(const void *ptr, unsigned long n,
+				struct page *page)
+{
+	struct kmem_cache *s;
+	unsigned long offset;
+	size_t object_size;
+
+	/* Find object and usable object size. */
+	s = page->slab_cache;
+	object_size = slab_ksize(s);
+
+	/* Find offset within object. */
+	offset = (ptr - page_address(page)) % s->size;
+
+	/* Adjust for redzone and reject if within the redzone. */
+	if (kmem_cache_debug(s) && s->flags & SLAB_RED_ZONE) {
+		if (offset < s->red_left_pad)
+			return s->name;
+		offset -= s->red_left_pad;
+	}
+
+	/* Allow address range falling entirely within object size. */
+	if (offset <= object_size && n <= object_size - offset)
+		return NULL;
+
+	return s->name;
+}
+#endif /* CONFIG_HARDENED_USERCOPY */
+
 static size_t __ksize(const void *object)
 {
 	struct page *page;
-- 
2.7.4
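
A worked example of the offset handling above, using made-up cache
geometry (the numbers are illustrative, not taken from the patch):

	/*
	 * Assume a debug cache with SLAB_RED_ZONE set, where:
	 *
	 *   s->size         = 128  bytes per slot in the slab page
	 *   s->red_left_pad = 16   left redzone at the start of each slot
	 *   object_size     = 64   usable bytes, per slab_ksize()
	 *
	 * For a pointer 168 bytes into the slab page (40 bytes into the
	 * second slot):
	 *
	 *   offset = 168 % 128 = 40
	 *   40 >= red_left_pad, so the pointer is not inside the redzone
	 *   offset -= 16, leaving 24 bytes into the usable object
	 *   the copy passes only while n <= 64 - 24 = 40 bytes
	 *
	 * A longer n, or a pointer landing inside the left redzone, makes
	 * __check_heap_object() return s->name and the copy is rejected.
	 */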

^ permalink raw reply related	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 01/11] mm: Implement stack frame object validation
  2016-07-13 21:55   ` Kees Cook
                       ` (4 preceding siblings ...)
  (?)
@ 2016-07-13 22:01     ` Andy Lutomirski
  -1 siblings, 0 replies; 203+ messages in thread
From: Andy Lutomirski @ 2016-07-13 22:01 UTC (permalink / raw)
  To: Kees Cook
  Cc: linux-kernel, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, X86 ML,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Andy Lutomirski, Borislav Petkov, Mathias Krause,
	Jan Kara, Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov,
	Laura Abbott, linux-arm-kernel, linux-ia64, linuxppc-dev,
	sparclinux, linux-arch, linux-mm, kernel-hardening,
	Josh Poimboeuf

On Wed, Jul 13, 2016 at 2:55 PM, Kees Cook <keescook@chromium.org> wrote:
> This creates per-architecture function arch_within_stack_frames() that
> should validate if a given object is contained by a kernel stack frame.
> Initial implementation is on x86.
>
> This is based on code from PaX.
>

This, along with Josh's livepatch work, are two examples of unwinders
that matter for correctness instead of just debugging.  ISTM this
should just use Josh's code directly once it's been written.

--Andy

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 01/11] mm: Implement stack frame object validation
  2016-07-13 22:01     ` Andy Lutomirski
                         ` (4 preceding siblings ...)
  (?)
@ 2016-07-13 22:04       ` Kees Cook
  -1 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-13 22:04 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: linux-kernel, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, X86 ML,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Andy Lutomirski, Borislav Petkov, Mathias Krause,
	Jan Kara, Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov,
	Laura Abbott, linux-arm-kernel, linux-ia64, linuxppc-dev,
	sparclinux, linux-arch, linux-mm, kernel-hardening,
	Josh Poimboeuf

On Wed, Jul 13, 2016 at 3:01 PM, Andy Lutomirski <luto@amacapital.net> wrote:
> On Wed, Jul 13, 2016 at 2:55 PM, Kees Cook <keescook@chromium.org> wrote:
>> This creates per-architecture function arch_within_stack_frames() that
>> should validate if a given object is contained by a kernel stack frame.
>> Initial implementation is on x86.
>>
>> This is based on code from PaX.
>>
>
> This, along with Josh's livepatch work, are two examples of unwinders
> that matter for correctness instead of just debugging.  ISTM this
> should just use Josh's code directly once it's been written.

Do you have a URL for Josh's code? I'd love to see what's happening there.

In the meantime, usercopy can use this...

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 01/11] mm: Implement stack frame object validation
  2016-07-13 22:04       ` Kees Cook
                           ` (4 preceding siblings ...)
  (?)
@ 2016-07-14  5:48         ` Josh Poimboeuf
  -1 siblings, 0 replies; 203+ messages in thread
From: Josh Poimboeuf @ 2016-07-14  5:48 UTC (permalink / raw)
  To: Kees Cook
  Cc: Andy Lutomirski, linux-kernel, Rik van Riel, Casey Schaufler,
	PaX Team, Brad Spengler, Russell King, Catalin Marinas,
	Will Deacon, Ard Biesheuvel, Benjamin Herrenschmidt,
	Michael Ellerman, Tony Luck, Fenghua Yu, David S. Miller, X86 ML,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Andy Lutomirski, Borislav Petkov, Mathias Krause,
	Jan Kara, Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov,
	Laura Abbott, linux-arm-kernel, linux-ia64, linuxppc-dev,
	sparclinux, linux-arch, linux-mm, kernel-hardening

On Wed, Jul 13, 2016 at 03:04:26PM -0700, Kees Cook wrote:
> On Wed, Jul 13, 2016 at 3:01 PM, Andy Lutomirski <luto@amacapital.net> wrote:
> > On Wed, Jul 13, 2016 at 2:55 PM, Kees Cook <keescook@chromium.org> wrote:
> >> This creates per-architecture function arch_within_stack_frames() that
> >> should validate if a given object is contained by a kernel stack frame.
> >> Initial implementation is on x86.
> >>
> >> This is based on code from PaX.
> >>
> >
> > This, along with Josh's livepatch work, are two examples of unwinders
> > that matter for correctness instead of just debugging.  ISTM this
> > should just use Josh's code directly once it's been written.
> 
> Do you have a URL for Josh's code? I'd love to see what's happening there.

The code is actually going to be 100% different next time around, but
FWIW, here's the last attempt:

  https://lkml.kernel.org/r/4d34d452bf8f85c7d6d5f93db1d3eeb4cba335c7.1461875890.git.jpoimboe@redhat.com

In the meantime I've realized the need to rewrite the x86 core stack
walking code to something much more manageable so we don't need all
these unwinders everywhere.  I'll probably post the patches in the next
week or so.  I'll add you to the CC list.

With the new interface I think you'll be able to do something like:

	struct unwind_state state;

	unwind_start(&state, current, NULL, NULL);
	unwind_next_frame(&state);
	oldframe = unwind_get_stack_pointer(&state);

	unwind_next_frame(&state);
	frame = unwind_get_stack_pointer(&state);

	do {
		if (obj + len <= frame)
			return blah;
		oldframe = frame;
		frame = unwind_get_stack_pointer(&state);

	} while (unwind_next_frame(&state));

And then at the end there'll be some (still TBD) way to query whether it
reached the last syscall pt_regs frame, or if it instead encountered a
bogus frame pointer along the way and had to bail early.

-- 
Josh

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 01/11] mm: Implement stack frame object validation
@ 2016-07-14  5:48         ` Josh Poimboeuf
  0 siblings, 0 replies; 203+ messages in thread
From: Josh Poimboeuf @ 2016-07-14  5:48 UTC (permalink / raw)
  To: Kees Cook
  Cc: Andy Lutomirski, linux-kernel, Rik van Riel, Casey Schaufler,
	PaX Team, Brad Spengler, Russell King, Catalin Marinas,
	Will Deacon, Ard Biesheuvel, Benjamin Herrenschmidt,
	Michael Ellerman, Tony Luck, Fenghua Yu, David S. Miller, X86 ML,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim

On Wed, Jul 13, 2016 at 03:04:26PM -0700, Kees Cook wrote:
> On Wed, Jul 13, 2016 at 3:01 PM, Andy Lutomirski <luto@amacapital.net> wrote:
> > On Wed, Jul 13, 2016 at 2:55 PM, Kees Cook <keescook@chromium.org> wrote:
> >> This creates per-architecture function arch_within_stack_frames() that
> >> should validate if a given object is contained by a kernel stack frame.
> >> Initial implementation is on x86.
> >>
> >> This is based on code from PaX.
> >>
> >
> > This, along with Josh's livepatch work, are two examples of unwinders
> > that matter for correctness instead of just debugging.  ISTM this
> > should just use Josh's code directly once it's been written.
> 
> Do you have a URL for Josh's code? I'd love to see what's happening there.

The code is actually going to be 100% different next time around, but
FWIW, here's the last attempt:

  https://lkml.kernel.org/r/4d34d452bf8f85c7d6d5f93db1d3eeb4cba335c7.1461875890.git.jpoimboe@redhat.com

In the meantime I've realized the need to rewrite the x86 core stack
walking code to something much more manageable so we don't need all
these unwinders everywhere.  I'll probably post the patches in the next
week or so.  I'll add you to the CC list.

With the new interface I think you'll be able to do something like:

	struct unwind_state state;

	unwind_start(&state, current, NULL, NULL);
	unwind_next_frame(&state);
	oldframe = unwind_get_stack_pointer(&state);

	unwind_next_frame(&state);
	frame = unwind_get_stack_pointer(&state);

	do {
		if (obj + len <= frame)
			return blah;
		oldframe = frame;
		frame = unwind_get_stack_pointer(&state);

	} while (unwind_next_frame(&state));

And then at the end there'll be some (still TBD) way to query whether it
reached the last syscall pt_regs frame, or if it instead encountered a
bogus frame pointer along the way and had to bail early.

-- 
Josh


^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 01/11] mm: Implement stack frame object validation
@ 2016-07-14  5:48         ` Josh Poimboeuf
  0 siblings, 0 replies; 203+ messages in thread
From: Josh Poimboeuf @ 2016-07-14  5:48 UTC (permalink / raw)
  To: Kees Cook
  Cc: Andy Lutomirski, linux-kernel, Rik van Riel, Casey Schaufler,
	PaX Team, Brad Spengler, Russell King, Catalin Marinas,
	Will Deacon, Ard Biesheuvel, Benjamin Herrenschmidt,
	Michael Ellerman, Tony Luck, Fenghua Yu, David S. Miller, X86 ML,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Andy Lutomirski, Borislav Petkov, Mathias Krause,
	Jan Kara, Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov,
	Laura Abbott, linux-arm-kernel, linux-ia64, linuxppc-dev,
	sparclinux, linux-arch, linux-mm, kernel-hardening

On Wed, Jul 13, 2016 at 03:04:26PM -0700, Kees Cook wrote:
> On Wed, Jul 13, 2016 at 3:01 PM, Andy Lutomirski <luto@amacapital.net> wrote:
> > On Wed, Jul 13, 2016 at 2:55 PM, Kees Cook <keescook@chromium.org> wrote:
> >> This creates per-architecture function arch_within_stack_frames() that
> >> should validate if a given object is contained by a kernel stack frame.
> >> Initial implementation is on x86.
> >>
> >> This is based on code from PaX.
> >>
> >
> > This, along with Josh's livepatch work, are two examples of unwinders
> > that matter for correctness instead of just debugging.  ISTM this
> > should just use Josh's code directly once it's been written.
> 
> Do you have a URL for Josh's code? I'd love to see what's happening there.

The code is actually going to be 100% different next time around, but
FWIW, here's the last attempt:

  https://lkml.kernel.org/r/4d34d452bf8f85c7d6d5f93db1d3eeb4cba335c7.1461875890.git.jpoimboe@redhat.com

In the meantime I've realized the need to rewrite the x86 core stack
walking code to something much more manageable so we don't need all
these unwinders everywhere.  I'll probably post the patches in the next
week or so.  I'll add you to the CC list.

With the new interface I think you'll be able to do something like:

	struct unwind_state state;

	unwind_start(&state, current, NULL, NULL);
	unwind_next_frame(&state);
	oldframe = unwind_get_stack_pointer(&state);

	unwind_next_frame(&state);
	frame = unwind_get_stack_pointer(&state);

	do {
		if (obj + len <= frame)
			return blah;
		oldframe = frame;
		frame = unwind_get_stack_pointer(&state);

	} while (unwind_next_frame(&state));

And then at the end there'll be some (still TBD) way to query whether it
reached the last syscall pt_regs frame, or if it instead encountered a
bogus frame pointer along the way and had to bail early.

-- 
Josh

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 01/11] mm: Implement stack frame object validation
@ 2016-07-14  5:48         ` Josh Poimboeuf
  0 siblings, 0 replies; 203+ messages in thread
From: Josh Poimboeuf @ 2016-07-14  5:48 UTC (permalink / raw)
  To: Kees Cook
  Cc: Andy Lutomirski, linux-kernel, Rik van Riel, Casey Schaufler,
	PaX Team, Brad Spengler, Russell King, Catalin Marinas,
	Will Deacon, Ard Biesheuvel, Benjamin Herrenschmidt,
	Michael Ellerman, Tony Luck, Fenghua Yu, David S. Miller, X86 ML,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Andy Lutomirski, Borislav Petkov, Mathias Krause,
	Jan Kara, Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov,
	Laura Abbott, linux-arm-kernel, linux-ia64, linuxppc-dev,
	sparclinux, linux-arch, linux-mm, kernel-hardening

On Wed, Jul 13, 2016 at 03:04:26PM -0700, Kees Cook wrote:
> On Wed, Jul 13, 2016 at 3:01 PM, Andy Lutomirski <luto@amacapital.net> wrote:
> > On Wed, Jul 13, 2016 at 2:55 PM, Kees Cook <keescook@chromium.org> wrote:
> >> This creates per-architecture function arch_within_stack_frames() that
> >> should validate if a given object is contained by a kernel stack frame.
> >> Initial implementation is on x86.
> >>
> >> This is based on code from PaX.
> >>
> >
> > This, along with Josh's livepatch work, are two examples of unwinders
> > that matter for correctness instead of just debugging.  ISTM this
> > should just use Josh's code directly once it's been written.
> 
> Do you have a URL for Josh's code? I'd love to see what's happening there.

The code is actually going to be 100% different next time around, but
FWIW, here's the last attempt:

  https://lkml.kernel.org/r/4d34d452bf8f85c7d6d5f93db1d3eeb4cba335c7.1461875890.git.jpoimboe@redhat.com

In the meantime I've realized the need to rewrite the x86 core stack
walking code to something much more manageable so we don't need all
these unwinders everywhere.  I'll probably post the patches in the next
week or so.  I'll add you to the CC list.

With the new interface I think you'll be able to do something like:

	struct unwind_state state;

	unwind_start(&state, current, NULL, NULL);
	unwind_next_frame(&state);
	oldframe = unwind_get_stack_pointer(&state);

	unwind_next_frame(&state);
	frame = unwind_get_stack_pointer(&state);

	do {
		if (obj + len <= frame)
			return blah;
		oldframe = frame;
		frame = unwind_get_stack_pointer(&state);

	} while (unwind_next_frame(&state));

And then at the end there'll be some (still TBD) way to query whether it
reached the last syscall pt_regs frame, or if it instead encountered a
bogus frame pointer along the way and had to bail early.

-- 
Josh

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 01/11] mm: Implement stack frame object validation
@ 2016-07-14  5:48         ` Josh Poimboeuf
  0 siblings, 0 replies; 203+ messages in thread
From: Josh Poimboeuf @ 2016-07-14  5:48 UTC (permalink / raw)
  To: Kees Cook
  Cc: Andy Lutomirski, linux-kernel, Rik van Riel, Casey Schaufler,
	PaX Team, Brad Spengler, Russell King, Catalin Marinas,
	Will Deacon, Ard Biesheuvel, Benjamin Herrenschmidt,
	Michael Ellerman, Tony Luck, Fenghua Yu, David S. Miller, X86 ML,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Andy Lutomirski, Borislav Petkov, Mathias Krause,
	Jan Kara, Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov,
	Laura Abbott, linux-arm-kernel, linux-ia64, linuxppc-dev,
	sparclinux, linux-arch, linux-mm, kernel-hardening

On Wed, Jul 13, 2016 at 03:04:26PM -0700, Kees Cook wrote:
> On Wed, Jul 13, 2016 at 3:01 PM, Andy Lutomirski <luto@amacapital.net> wrote:
> > On Wed, Jul 13, 2016 at 2:55 PM, Kees Cook <keescook@chromium.org> wrote:
> >> This creates per-architecture function arch_within_stack_frames() that
> >> should validate if a given object is contained by a kernel stack frame.
> >> Initial implementation is on x86.
> >>
> >> This is based on code from PaX.
> >>
> >
> > This, along with Josh's livepatch work, are two examples of unwinders
> > that matter for correctness instead of just debugging.  ISTM this
> > should just use Josh's code directly once it's been written.
> 
> Do you have a URL for Josh's code? I'd love to see what's happening there.

The code is actually going to be 100% different next time around, but
FWIW, here's the last attempt:

  https://lkml.kernel.org/r/4d34d452bf8f85c7d6d5f93db1d3eeb4cba335c7.1461875890.git.jpoimboe@redhat.com

In the meantime I've realized the need to rewrite the x86 core stack
walking code to something much more manageable so we don't need all
these unwinders everywhere.  I'll probably post the patches in the next
week or so.  I'll add you to the CC list.

With the new interface I think you'll be able to do something like:

	struct unwind_state state;

	unwind_start(&state, current, NULL, NULL);
	unwind_next_frame(&state);
	oldframe = unwind_get_stack_pointer(&state);

	unwind_next_frame(&state);
	frame = unwind_get_stack_pointer(&state);

	do {
		if (obj + len <= frame)
			return blah;
		oldframe = frame;
		frame = unwind_get_stack_pointer(&state);

	} while (unwind_next_frame(&state));

And then at the end there'll be some (still TBD) way to query whether it
reached the last syscall pt_regs frame, or if it instead encountered a
bogus frame pointer along the way and had to bail early.

-- 
Josh


^ permalink raw reply	[flat|nested] 203+ messages in thread

* [PATCH v2 01/11] mm: Implement stack frame object validation
@ 2016-07-14  5:48         ` Josh Poimboeuf
  0 siblings, 0 replies; 203+ messages in thread
From: Josh Poimboeuf @ 2016-07-14  5:48 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jul 13, 2016 at 03:04:26PM -0700, Kees Cook wrote:
> On Wed, Jul 13, 2016 at 3:01 PM, Andy Lutomirski <luto@amacapital.net> wrote:
> > On Wed, Jul 13, 2016 at 2:55 PM, Kees Cook <keescook@chromium.org> wrote:
> >> This creates per-architecture function arch_within_stack_frames() that
> >> should validate if a given object is contained by a kernel stack frame.
> >> Initial implementation is on x86.
> >>
> >> This is based on code from PaX.
> >>
> >
> > This, along with Josh's livepatch work, are two examples of unwinders
> > that matter for correctness instead of just debugging.  ISTM this
> > should just use Josh's code directly once it's been written.
> 
> Do you have a URL for Josh's code? I'd love to see what's happening there.

The code is actually going to be 100% different next time around, but
FWIW, here's the last attempt:

  https://lkml.kernel.org/r/4d34d452bf8f85c7d6d5f93db1d3eeb4cba335c7.1461875890.git.jpoimboe@redhat.com

In the meantime I've realized the need to rewrite the x86 core stack
walking code to something much more manageable so we don't need all
these unwinders everywhere.  I'll probably post the patches in the next
week or so.  I'll add you to the CC list.

With the new interface I think you'll be able to do something like:

	struct unwind_state state;

	unwind_start(&state, current, NULL, NULL);
	unwind_next_frame(&state);
	oldframe = unwind_get_stack_pointer(&state);

	unwind_next_frame(&state);
	frame = unwind_get_stack_pointer(&state);

	do {
		if (obj + len <= frame)
			return blah;
		oldframe = frame;
		frame = unwind_get_stack_pointer(&state);

	} while (unwind_next_frame(&state));

And then at the end there'll be some (still TBD) way to query whether it
reached the last syscall pt_regs frame, or if it instead encountered a
bogus frame pointer along the way and had to bail early.

-- 
Josh

^ permalink raw reply	[flat|nested] 203+ messages in thread

* [kernel-hardening] Re: [PATCH v2 01/11] mm: Implement stack frame object validation
@ 2016-07-14  5:48         ` Josh Poimboeuf
  0 siblings, 0 replies; 203+ messages in thread
From: Josh Poimboeuf @ 2016-07-14  5:48 UTC (permalink / raw)
  To: Kees Cook
  Cc: Andy Lutomirski, linux-kernel, Rik van Riel, Casey Schaufler,
	PaX Team, Brad Spengler, Russell King, Catalin Marinas,
	Will Deacon, Ard Biesheuvel, Benjamin Herrenschmidt,
	Michael Ellerman, Tony Luck, Fenghua Yu, David S. Miller, X86 ML,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Andy Lutomirski, Borislav Petkov, Mathias Krause,
	Jan Kara, Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov,
	Laura Abbott, linux-arm-kernel, linux-ia64, linuxppc-dev,
	sparclinux, linux-arch, linux-mm, kernel-hardening

On Wed, Jul 13, 2016 at 03:04:26PM -0700, Kees Cook wrote:
> On Wed, Jul 13, 2016 at 3:01 PM, Andy Lutomirski <luto@amacapital.net> wrote:
> > On Wed, Jul 13, 2016 at 2:55 PM, Kees Cook <keescook@chromium.org> wrote:
> >> This creates per-architecture function arch_within_stack_frames() that
> >> should validate if a given object is contained by a kernel stack frame.
> >> Initial implementation is on x86.
> >>
> >> This is based on code from PaX.
> >>
> >
> > This, along with Josh's livepatch work, are two examples of unwinders
> > that matter for correctness instead of just debugging.  ISTM this
> > should just use Josh's code directly once it's been written.
> 
> Do you have a URL for Josh's code? I'd love to see what's happening there.

The code is actually going to be 100% different next time around, but
FWIW, here's the last attempt:

  https://lkml.kernel.org/r/4d34d452bf8f85c7d6d5f93db1d3eeb4cba335c7.1461875890.git.jpoimboe@redhat.com

In the meantime I've realized the need to rewrite the x86 core stack
walking code to something much more manageable so we don't need all
these unwinders everywhere.  I'll probably post the patches in the next
week or so.  I'll add you to the CC list.

With the new interface I think you'll be able to do something like:

	struct unwind_state state;

	unwind_start(&state, current, NULL, NULL);
	unwind_next_frame(&state);
	oldframe = unwind_get_stack_pointer(&state);

	unwind_next_frame(&state);
	frame = unwind_get_stack_pointer(&state);

	do {
		if (obj + len <= frame)
			return blah;
		oldframe = frame;
		frame = unwind_get_stack_pointer(&state);

	} while (unwind_next_frame(&state));

And then at the end there'll be some (still TBD) way to query whether it
reached the last syscall pt_regs frame, or if it instead encountered a
bogus frame pointer along the way and had to bail early.

-- 
Josh

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [kernel-hardening] [PATCH v2 11/11] mm: SLUB hardened usercopy support
  2016-07-13 21:56   ` Kees Cook
@ 2016-07-14 10:07     ` Michael Ellerman
  -1 siblings, 0 replies; 203+ messages in thread
From: Michael Ellerman @ 2016-07-14 10:07 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Tony Luck, Fenghua Yu,
	David S. Miller, x86, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Mathias Krause, Jan Kara, Vitaly Wool

Kees Cook <keescook@chromium.org> writes:

> Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
> SLUB allocator to catch any copies that may span objects. Includes a
> redzone handling fix from Michael Ellerman.

Actually I think you wrote the fix, I just pointed you in that
direction. But anyway, this works for me, so if you like:

Tested-by: Michael Ellerman <mpe@ellerman.id.au>

cheers
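
(A minimal sketch of the kind of check the quoted description refers to, with
simplified names and layout; this is not the actual SLUB implementation.)

	/*
	 * Sketch only: reject a usercopy whose [ptr, ptr + len) range
	 * does not fit entirely inside the object it starts in.
	 * copy_spans_object() and its parameters are illustrative,
	 * not an existing kernel API.
	 */
	static bool copy_spans_object(const void *object, unsigned long obj_size,
				      const void *ptr, unsigned long len)
	{
		unsigned long offset = (const char *)ptr - (const char *)object;

		return offset > obj_size || len > obj_size - offset;
	}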


^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 11/11] mm: SLUB hardened usercopy support
  2016-07-13 21:56   ` Kees Cook
                     ` (8 preceding siblings ...)
  (?)
@ 2016-07-14 10:07   ` Michael Ellerman
  -1 siblings, 0 replies; 203+ messages in thread
From: Michael Ellerman @ 2016-07-14 10:07 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Tony Luck, Fenghua Yu,
	David S. Miller, x86, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli

Kees Cook <keescook@chromium.org> writes:

> Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
> SLUB allocator to catch any copies that may span objects. Includes a
> redzone handling fix from Michael Ellerman.

Actually I think you wrote the fix, I just pointed you in that
direction. But anyway, this works for me, so if you like:

Tested-by: Michael Ellerman <mpe@ellerman.id.au>

cheers

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [kernel-hardening] [PATCH v2 11/11] mm: SLUB hardened usercopy support
  2016-07-13 21:56   ` Kees Cook
                     ` (7 preceding siblings ...)
  (?)
@ 2016-07-14 10:07   ` Michael Ellerman
  -1 siblings, 0 replies; 203+ messages in thread
From: Michael Ellerman @ 2016-07-14 10:07 UTC (permalink / raw)
  To: linux-kernel
  Cc: Jan Kara, kernel-hardening, Catalin Marinas, Will Deacon,
	linux-mm, sparclinux, linux-ia64, Christoph Lameter,
	Andrea Arcangeli, linux-arch, x86, Russell King,
	linux-arm-kernel, PaX Team, Borislav Petkov, Mathias Krause,
	Fenghua Yu, Rik van Riel, Kees Cook, David Rientjes, Tony Luck,
	Andy Lutomirski, Joonsoo Kim, Dmitry Vyukov, Laura Abbott,
	Brad Spengler, Ard

Kees Cook <keescook@chromium.org> writes:

> Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
> SLUB allocator to catch any copies that may span objects. Includes a
> redzone handling fix from Michael Ellerman.

Actually I think you wrote the fix, I just pointed you in that
direction. But anyway, this works for me, so if you like:

Tested-by: Michael Ellerman <mpe@ellerman.id.au>

cheers

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [kernel-hardening] [PATCH v2 11/11] mm: SLUB hardened usercopy support
  2016-07-13 21:56   ` Kees Cook
                     ` (6 preceding siblings ...)
  (?)
@ 2016-07-14 10:07   ` Michael Ellerman
  -1 siblings, 0 replies; 203+ messages in thread
From: Michael Ellerman @ 2016-07-14 10:07 UTC (permalink / raw)
  To: linux-kernel
  Cc: Jan Kara, kernel-hardening, Catalin Marinas, Will Deacon,
	linux-mm, sparclinux, linux-ia64, Christoph Lameter,
	Andrea Arcangeli, linux-arch, x86, Russell King,
	linux-arm-kernel, PaX Team, Borislav Petkov, Mathias Krause,
	Fenghua Yu, Rik van Riel, Kees Cook, David Rientjes, Tony Luck,
	Andy Lutomirski, Joonsoo Kim, Dmitry Vyukov, Laura Abbott,
	Brad Spengler, Ard

Kees Cook <keescook@chromium.org> writes:

> Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
> SLUB allocator to catch any copies that may span objects. Includes a
> redzone handling fix from Michael Ellerman.

Actually I think you wrote the fix, I just pointed you in that
direction. But anyway, this works for me, so if you like:

Tested-by: Michael Ellerman <mpe@ellerman.id.au>

cheers

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [kernel-hardening] [PATCH v2 11/11] mm: SLUB hardened usercopy support
@ 2016-07-14 10:07     ` Michael Ellerman
  0 siblings, 0 replies; 203+ messages in thread
From: Michael Ellerman @ 2016-07-14 10:07 UTC (permalink / raw)
  To: Kees Cook, linux-kernel
  Cc: Rik van Riel, Casey Schaufler, PaX Team, Brad Spengler,
	Russell King, Catalin Marinas, Will Deacon, Ard Biesheuvel,
	Benjamin Herrenschmidt, Tony Luck, Fenghua Yu, David S. Miller,
	x86, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Andy Lutomirski, Borislav Petkov,
	Mathias Krause, Jan Kara, Vitaly Wool, Andrea Arcangeli,
	Dmitry Vyukov, Laura Abbott, linux-arm-kernel, linux-ia64,
	linuxppc-dev, sparclinux, linux-arch, linux-mm, kernel-hardening

Kees Cook <keescook@chromium.org> writes:

> Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
> SLUB allocator to catch any copies that may span objects. Includes a
> redzone handling fix from Michael Ellerman.

Actually I think you wrote the fix, I just pointed you in that
direction. But anyway, this works for me, so if you like:

Tested-by: Michael Ellerman <mpe@ellerman.id.au>

cheers


^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [kernel-hardening] [PATCH v2 11/11] mm: SLUB hardened usercopy support
  2016-07-13 21:56   ` Kees Cook
@ 2016-07-14 10:07     ` Michael Ellerman
  -1 siblings, 0 replies; 203+ messages in thread
From: Michael Ellerman @ 2016-07-14 10:07 UTC (permalink / raw)
  To: Kees Cook, linux-kernel
  Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Tony Luck, Fenghua Yu,
	David S. Miller, x86, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Mathias Krause, Jan Kara, Vitaly Wool,
	Andrea Arcangeli, Dmitry Vyukov, Laura Abbott, lin

Kees Cook <keescook@chromium.org> writes:

> Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
> SLUB allocator to catch any copies that may span objects. Includes a
> redzone handling fix from Michael Ellerman.

Actually I think you wrote the fix, I just pointed you in that
direction. But anyway, this works for me, so if you like:

Tested-by: Michael Ellerman <mpe@ellerman.id.au>

cheers

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [kernel-hardening] [PATCH v2 11/11] mm: SLUB hardened usercopy support
@ 2016-07-14 10:07     ` Michael Ellerman
  0 siblings, 0 replies; 203+ messages in thread
From: Michael Ellerman @ 2016-07-14 10:07 UTC (permalink / raw)
  To: Kees Cook, linux-kernel
  Cc: Rik van Riel, Casey Schaufler, PaX Team, Brad Spengler,
	Russell King, Catalin Marinas, Will Deacon, Ard Biesheuvel,
	Benjamin Herrenschmidt, Tony Luck, Fenghua Yu, David S. Miller,
	x86, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Andy Lutomirski, Borislav Petkov,
	Mathias Krause, Jan Kara, Vitaly Wool, Andrea Arcangeli,
	Dmitry Vyukov, Laura Abbott, lin

Kees Cook <keescook@chromium.org> writes:

> Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
> SLUB allocator to catch any copies that may span objects. Includes a
> redzone handling fix from Michael Ellerman.

Actually I think you wrote the fix, I just pointed you in that
direction. But anyway, this works for me, so if you like:

Tested-by: Michael Ellerman <mpe@ellerman.id.au>

cheers

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 01/11] mm: Implement stack frame object validation
  2016-07-14  5:48         ` Josh Poimboeuf
                             ` (4 preceding siblings ...)
  (?)
@ 2016-07-14 18:10           ` Kees Cook
  -1 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-14 18:10 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Andy Lutomirski, linux-kernel, Rik van Riel, Casey Schaufler,
	PaX Team, Brad Spengler, Russell King, Catalin Marinas,
	Will Deacon, Ard Biesheuvel, Benjamin Herrenschmidt,
	Michael Ellerman, Tony Luck, Fenghua Yu, David S. Miller, X86 ML,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Andy Lutomirski, Borislav Petkov, Mathias Krause,
	Jan Kara, Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov,
	Laura Abbott, linux-arm-kernel, linux-ia64, linuxppc-dev,
	sparclinux, linux-arch, linux-mm, kernel-hardening

On Wed, Jul 13, 2016 at 10:48 PM, Josh Poimboeuf <jpoimboe@redhat.com> wrote:
> On Wed, Jul 13, 2016 at 03:04:26PM -0700, Kees Cook wrote:
>> On Wed, Jul 13, 2016 at 3:01 PM, Andy Lutomirski <luto@amacapital.net> wrote:
>> > On Wed, Jul 13, 2016 at 2:55 PM, Kees Cook <keescook@chromium.org> wrote:
>> >> This creates per-architecture function arch_within_stack_frames() that
>> >> should validate if a given object is contained by a kernel stack frame.
>> >> Initial implementation is on x86.
>> >>
>> >> This is based on code from PaX.
>> >>
>> >
>> > This, along with Josh's livepatch work, are two examples of unwinders
>> > that matter for correctness instead of just debugging.  ISTM this
>> > should just use Josh's code directly once it's been written.
>>
>> Do you have a URL for Josh's code? I'd love to see what's happening there.
>
> The code is actually going to be 100% different next time around, but
> FWIW, here's the last attempt:
>
>   https://lkml.kernel.org/r/4d34d452bf8f85c7d6d5f93db1d3eeb4cba335c7.1461875890.git.jpoimboe@redhat.com
>
> In the meantime I've realized the need to rewrite the x86 core stack
> walking code to something much more manageable so we don't need all
> these unwinders everywhere.  I'll probably post the patches in the next
> week or so.  I'll add you to the CC list.

Awesome!

> With the new interface I think you'll be able to do something like:
>
>         struct unwind_state state;
>
>         unwind_start(&state, current, NULL, NULL);
>         unwind_next_frame(&state);
>         oldframe = unwind_get_stack_pointer(&state);
>
>         unwind_next_frame(&state);
>         frame = unwind_get_stack_pointer(&state);
>
>         do {
>                 if (obj + len <= frame)
>                         return blah;
>                 oldframe = frame;
>                 frame = unwind_get_stack_pointer(&state);
>
>         } while (unwind_next_frame(&state));
>
> And then at the end there'll be some (still TBD) way to query whether it
> reached the last syscall pt_regs frame, or if it instead encountered a
> bogus frame pointer along the way and had to bail early.

Sounds good to me. Will there be any frame size information available?
Right now, the unwinder from PaX just drops 2 pointers (saved frame,
saved ip) from the delta of frame address to find the size of the
actual stack area used by the function. If I could shave things like
padding and possible stack canaries off the size too, that would be
great.
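
(As a rough sketch of that computation, not the actual PaX code: with oldframe
and frame being the addresses of two adjacent saved frame pointers, the usable
data area works out to something like the following; the function name is made
up for illustration.)

	/*
	 * Rough sketch only: the data area between two adjacent saved
	 * frame pointers is their distance minus the two saved words
	 * (previous frame pointer and return address).
	 */
	static inline unsigned long frame_data_size(unsigned long oldframe,
						    unsigned long frame)
	{
		return frame - oldframe - 2 * sizeof(void *);
	}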

Since I'm aiming the hardened usercopy series for 4.8, I figure I'll
just leave this unwinder in for now, and once yours lands, I can rip
it out again.

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 01/11] mm: Implement stack frame object validation
@ 2016-07-14 18:10           ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-14 18:10 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Andy Lutomirski, linux-kernel, Rik van Riel, Casey Schaufler,
	PaX Team, Brad Spengler, Russell King, Catalin Marinas,
	Will Deacon, Ard Biesheuvel, Benjamin Herrenschmidt,
	Michael Ellerman, Tony Luck, Fenghua Yu, David S. Miller, X86 ML,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Andy Lutomirski

On Wed, Jul 13, 2016 at 10:48 PM, Josh Poimboeuf <jpoimboe@redhat.com> wrote:
> On Wed, Jul 13, 2016 at 03:04:26PM -0700, Kees Cook wrote:
>> On Wed, Jul 13, 2016 at 3:01 PM, Andy Lutomirski <luto@amacapital.net> wrote:
>> > On Wed, Jul 13, 2016 at 2:55 PM, Kees Cook <keescook@chromium.org> wrote:
>> >> This creates per-architecture function arch_within_stack_frames() that
>> >> should validate if a given object is contained by a kernel stack frame.
>> >> Initial implementation is on x86.
>> >>
>> >> This is based on code from PaX.
>> >>
>> >
>> > This, along with Josh's livepatch work, are two examples of unwinders
>> > that matter for correctness instead of just debugging.  ISTM this
>> > should just use Josh's code directly once it's been written.
>>
>> Do you have a URL for Josh's code? I'd love to see what's happening there.
>
> The code is actually going to be 100% different next time around, but
> FWIW, here's the last attempt:
>
>   https://lkml.kernel.org/r/4d34d452bf8f85c7d6d5f93db1d3eeb4cba335c7.1461875890.git.jpoimboe@redhat.com
>
> In the meantime I've realized the need to rewrite the x86 core stack
> walking code to something much more manageable so we don't need all
> these unwinders everywhere.  I'll probably post the patches in the next
> week or so.  I'll add you to the CC list.

Awesome!

> With the new interface I think you'll be able to do something like:
>
>         struct unwind_state state;
>
>         unwind_start(&state, current, NULL, NULL);
>         unwind_next_frame(&state);
>         oldframe = unwind_get_stack_pointer(&state);
>
>         unwind_next_frame(&state);
>         frame = unwind_get_stack_pointer(&state);
>
>         do {
>                 if (obj + len <= frame)
>                         return blah;
>                 oldframe = frame;
>                 frame = unwind_get_stack_pointer(&state);
>
>         } while (unwind_next_frame(&state));
>
> And then at the end there'll be some (still TBD) way to query whether it
> reached the last syscall pt_regs frame, or if it instead encountered a
> bogus frame pointer along the way and had to bail early.

Sounds good to me. Will there be any frame size information available?
Right now, the unwinder from PaX just drops 2 pointers (saved frame,
saved ip) from the delta of frame address to find the size of the
actual stack area used by the function. If I could shave things like
padding and possible stack canaries off the size too, that would be
great.

Since I'm aiming the hardened usercopy series for 4.8, I figure I'll
just leave this unwinder in for now, and once yours lands, I can rip
it out again.

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security


^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 01/11] mm: Implement stack frame object validation
@ 2016-07-14 18:10           ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-14 18:10 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Andy Lutomirski, linux-kernel, Rik van Riel, Casey Schaufler,
	PaX Team, Brad Spengler, Russell King, Catalin Marinas,
	Will Deacon, Ard Biesheuvel, Benjamin Herrenschmidt,
	Michael Ellerman, Tony Luck, Fenghua Yu, David S. Miller, X86 ML,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Andy Lutomirski, Borislav Petkov, Mathias Krause,
	Jan Kara, Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov,
	Laura Abbott, linux-arm-kernel, linux-ia64, linuxppc-dev,
	sparclinux, linux-arch, linux-mm, kernel-hardening

On Wed, Jul 13, 2016 at 10:48 PM, Josh Poimboeuf <jpoimboe@redhat.com> wrote:
> On Wed, Jul 13, 2016 at 03:04:26PM -0700, Kees Cook wrote:
>> On Wed, Jul 13, 2016 at 3:01 PM, Andy Lutomirski <luto@amacapital.net> wrote:
>> > On Wed, Jul 13, 2016 at 2:55 PM, Kees Cook <keescook@chromium.org> wrote:
>> >> This creates per-architecture function arch_within_stack_frames() that
>> >> should validate if a given object is contained by a kernel stack frame.
>> >> Initial implementation is on x86.
>> >>
>> >> This is based on code from PaX.
>> >>
>> >
>> > This, along with Josh's livepatch work, are two examples of unwinders
>> > that matter for correctness instead of just debugging.  ISTM this
>> > should just use Josh's code directly once it's been written.
>>
>> Do you have a URL for Josh's code? I'd love to see what's happening there.
>
> The code is actually going to be 100% different next time around, but
> FWIW, here's the last attempt:
>
>   https://lkml.kernel.org/r/4d34d452bf8f85c7d6d5f93db1d3eeb4cba335c7.1461875890.git.jpoimboe@redhat.com
>
> In the meantime I've realized the need to rewrite the x86 core stack
> walking code to something much more manageable so we don't need all
> these unwinders everywhere.  I'll probably post the patches in the next
> week or so.  I'll add you to the CC list.

Awesome!

> With the new interface I think you'll be able to do something like:
>
>         struct unwind_state state;
>
>         unwind_start(&state, current, NULL, NULL);
>         unwind_next_frame(&state);
>         oldframe = unwind_get_stack_pointer(&state);
>
>         unwind_next_frame(&state);
>         frame = unwind_get_stack_pointer(&state);
>
>         do {
>                 if (obj + len <= frame)
>                         return blah;
>                 oldframe = frame;
>                 frame = unwind_get_stack_pointer(&state);
>
>         } while (unwind_next_frame(&state));
>
> And then at the end there'll be some (still TBD) way to query whether it
> reached the last syscall pt_regs frame, or if it instead encountered a
> bogus frame pointer along the way and had to bail early.

Sounds good to me. Will there be any frame size information available?
Right now, the unwinder from PaX just drops 2 pointers (saved frame,
saved ip) from the delta of frame address to find the size of the
actual stack area used by the function. If I could shave things like
padding and possible stack canaries off the size too, that would be
great.

Since I'm aiming the hardened usercopy series for 4.8, I figure I'll
just leave this unwinder in for now, and once yours lands, I can rip
it out again.

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 01/11] mm: Implement stack frame object validation
@ 2016-07-14 18:10           ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-14 18:10 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Andy Lutomirski, linux-kernel, Rik van Riel, Casey Schaufler,
	PaX Team, Brad Spengler, Russell King, Catalin Marinas,
	Will Deacon, Ard Biesheuvel, Benjamin Herrenschmidt,
	Michael Ellerman, Tony Luck, Fenghua Yu, David S. Miller, X86 ML,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Andy Lutomirski, Borislav Petkov, Mathias Krause,
	Jan Kara, Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov,
	Laura Abbott, linux-arm-kernel, linux-ia64, linuxppc-dev,
	sparclinux, linux-arch, linux-mm, kernel-hardening

On Wed, Jul 13, 2016 at 10:48 PM, Josh Poimboeuf <jpoimboe@redhat.com> wrote:
> On Wed, Jul 13, 2016 at 03:04:26PM -0700, Kees Cook wrote:
>> On Wed, Jul 13, 2016 at 3:01 PM, Andy Lutomirski <luto@amacapital.net> wrote:
>> > On Wed, Jul 13, 2016 at 2:55 PM, Kees Cook <keescook@chromium.org> wrote:
>> >> This creates per-architecture function arch_within_stack_frames() that
>> >> should validate if a given object is contained by a kernel stack frame.
>> >> Initial implementation is on x86.
>> >>
>> >> This is based on code from PaX.
>> >>
>> >
>> > This, along with Josh's livepatch work, are two examples of unwinders
>> > that matter for correctness instead of just debugging.  ISTM this
>> > should just use Josh's code directly once it's been written.
>>
>> Do you have a URL for Josh's code? I'd love to see what's happening there.
>
> The code is actually going to be 100% different next time around, but
> FWIW, here's the last attempt:
>
>   https://lkml.kernel.org/r/4d34d452bf8f85c7d6d5f93db1d3eeb4cba335c7.1461875890.git.jpoimboe@redhat.com
>
> In the meantime I've realized the need to rewrite the x86 core stack
> walking code to something much more manageable so we don't need all
> these unwinders everywhere.  I'll probably post the patches in the next
> week or so.  I'll add you to the CC list.

Awesome!

> With the new interface I think you'll be able to do something like:
>
>         struct unwind_state state;
>
>         unwind_start(&state, current, NULL, NULL);
>         unwind_next_frame(&state);
>         oldframe = unwind_get_stack_pointer(&state);
>
>         unwind_next_frame(&state);
>         frame = unwind_get_stack_pointer(&state);
>
>         do {
>                 if (obj + len <= frame)
>                         return blah;
>                 oldframe = frame;
>                 frame = unwind_get_stack_pointer(&state);
>
>         } while (unwind_next_frame(&state));
>
> And then at the end there'll be some (still TBD) way to query whether it
> reached the last syscall pt_regs frame, or if it instead encountered a
> bogus frame pointer along the way and had to bail early.

Sounds good to me. Will there be any frame size information available?
Right now, the unwinder from PaX just drops 2 pointers (saved frame,
saved ip) from the delta of frame address to find the size of the
actual stack area used by the function. If I could shave things like
padding and possible stack canaries off the size too, that would be
great.

Since I'm aiming the hardened usercopy series for 4.8, I figure I'll
just leave this unwinder in for now, and once yours lands, I can rip
it out again.

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 01/11] mm: Implement stack frame object validation
@ 2016-07-14 18:10           ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-14 18:10 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Andy Lutomirski, linux-kernel, Rik van Riel, Casey Schaufler,
	PaX Team, Brad Spengler, Russell King, Catalin Marinas,
	Will Deacon, Ard Biesheuvel, Benjamin Herrenschmidt,
	Michael Ellerman, Tony Luck, Fenghua Yu, David S. Miller, X86 ML,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Andy Lutomirski, Borislav Petkov, Mathias Krause,
	Jan Kara, Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov,
	Laura Abbott, linux-arm-kernel, linux-ia64, linuxppc-dev,
	sparclinux, linux-arch, linux-mm, kernel-hardening

On Wed, Jul 13, 2016 at 10:48 PM, Josh Poimboeuf <jpoimboe@redhat.com> wrote:
> On Wed, Jul 13, 2016 at 03:04:26PM -0700, Kees Cook wrote:
>> On Wed, Jul 13, 2016 at 3:01 PM, Andy Lutomirski <luto@amacapital.net> wrote:
>> > On Wed, Jul 13, 2016 at 2:55 PM, Kees Cook <keescook@chromium.org> wrote:
>> >> This creates per-architecture function arch_within_stack_frames() that
>> >> should validate if a given object is contained by a kernel stack frame.
>> >> Initial implementation is on x86.
>> >>
>> >> This is based on code from PaX.
>> >>
>> >
>> > This, along with Josh's livepatch work, are two examples of unwinders
>> > that matter for correctness instead of just debugging.  ISTM this
>> > should just use Josh's code directly once it's been written.
>>
>> Do you have a URL for Josh's code? I'd love to see what's happening there.
>
> The code is actually going to be 100% different next time around, but
> FWIW, here's the last attempt:
>
>   https://lkml.kernel.org/r/4d34d452bf8f85c7d6d5f93db1d3eeb4cba335c7.1461875890.git.jpoimboe@redhat.com
>
> In the meantime I've realized the need to rewrite the x86 core stack
> walking code to something much more manageable so we don't need all
> these unwinders everywhere.  I'll probably post the patches in the next
> week or so.  I'll add you to the CC list.

Awesome!

> With the new interface I think you'll be able to do something like:
>
>         struct unwind_state state;
>
>         unwind_start(&state, current, NULL, NULL);
>         unwind_next_frame(&state);
>         oldframe = unwind_get_stack_pointer(&state);
>
>         unwind_next_frame(&state);
>         frame = unwind_get_stack_pointer(&state);
>
>         do {
>                 if (obj + len <= frame)
>                         return blah;
>                 oldframe = frame;
>                 frame = unwind_get_stack_pointer(&state);
>
>         } while (unwind_next_frame(&state));
>
> And then at the end there'll be some (still TBD) way to query whether it
> reached the last syscall pt_regs frame, or if it instead encountered a
> bogus frame pointer along the way and had to bail early.

Sounds good to me. Will there be any frame size information available?
Right now, the unwinder from PaX just drops 2 pointers (saved frame,
saved ip) from the delta of frame address to find the size of the
actual stack area used by the function. If I could shave things like
padding and possible stack canaries off the size too, that would be
great.

Since I'm aiming the hardened usercopy series for 4.8, I figure I'll
just leave this unwinder in for now, and once yours lands, I can rip
it out again.

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security


^ permalink raw reply	[flat|nested] 203+ messages in thread

* [PATCH v2 01/11] mm: Implement stack frame object validation
@ 2016-07-14 18:10           ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-14 18:10 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jul 13, 2016 at 10:48 PM, Josh Poimboeuf <jpoimboe@redhat.com> wrote:
> On Wed, Jul 13, 2016 at 03:04:26PM -0700, Kees Cook wrote:
>> On Wed, Jul 13, 2016 at 3:01 PM, Andy Lutomirski <luto@amacapital.net> wrote:
>> > On Wed, Jul 13, 2016 at 2:55 PM, Kees Cook <keescook@chromium.org> wrote:
>> >> This creates per-architecture function arch_within_stack_frames() that
>> >> should validate if a given object is contained by a kernel stack frame.
>> >> Initial implementation is on x86.
>> >>
>> >> This is based on code from PaX.
>> >>
>> >
>> > This, along with Josh's livepatch work, are two examples of unwinders
>> > that matter for correctness instead of just debugging.  ISTM this
>> > should just use Josh's code directly once it's been written.
>>
>> Do you have a URL for Josh's code? I'd love to see what's happening there.
>
> The code is actually going to be 100% different next time around, but
> FWIW, here's the last attempt:
>
> >   https://lkml.kernel.org/r/4d34d452bf8f85c7d6d5f93db1d3eeb4cba335c7.1461875890.git.jpoimboe@redhat.com
>
> In the meantime I've realized the need to rewrite the x86 core stack
> walking code to something much more manageable so we don't need all
> these unwinders everywhere.  I'll probably post the patches in the next
> week or so.  I'll add you to the CC list.

Awesome!

> With the new interface I think you'll be able to do something like:
>
>         struct unwind_state state;
>
>         unwind_start(&state, current, NULL, NULL);
>         unwind_next_frame(&state);
>         oldframe = unwind_get_stack_pointer(&state);
>
>         unwind_next_frame(&state);
>         frame = unwind_get_stack_pointer(&state);
>
>         do {
>                 if (obj + len <= frame)
>                         return blah;
>                 oldframe = frame;
>                 frame = unwind_get_stack_pointer(&state);
>
>         } while (unwind_next_frame(&state));
>
> And then at the end there'll be some (still TBD) way to query whether it
> reached the last syscall pt_regs frame, or if it instead encountered a
> bogus frame pointer along the way and had to bail early.

Sounds good to me. Will there be any frame size information available?
Right now, the unwinder from PaX just drops 2 pointers (saved frame,
saved ip) from the delta of frame address to find the size of the
actual stack area used by the function. If I could shave things like
padding and possible stack canaries off the size too, that would be
great.

Since I'm aiming the hardened usercopy series for 4.8, I figure I'll
just leave this unwinder in for now, and once yours lands, I can rip
it out again.

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 203+ messages in thread

* [kernel-hardening] Re: [PATCH v2 01/11] mm: Implement stack frame object validation
@ 2016-07-14 18:10           ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-14 18:10 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Andy Lutomirski, linux-kernel, Rik van Riel, Casey Schaufler,
	PaX Team, Brad Spengler, Russell King, Catalin Marinas,
	Will Deacon, Ard Biesheuvel, Benjamin Herrenschmidt,
	Michael Ellerman, Tony Luck, Fenghua Yu, David S. Miller, X86 ML,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Andy Lutomirski, Borislav Petkov, Mathias Krause,
	Jan Kara, Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov,
	Laura Abbott, linux-arm-kernel, linux-ia64, linuxppc-dev,
	sparclinux, linux-arch, linux-mm, kernel-hardening

On Wed, Jul 13, 2016 at 10:48 PM, Josh Poimboeuf <jpoimboe@redhat.com> wrote:
> On Wed, Jul 13, 2016 at 03:04:26PM -0700, Kees Cook wrote:
>> On Wed, Jul 13, 2016 at 3:01 PM, Andy Lutomirski <luto@amacapital.net> wrote:
>> > On Wed, Jul 13, 2016 at 2:55 PM, Kees Cook <keescook@chromium.org> wrote:
>> >> This creates per-architecture function arch_within_stack_frames() that
>> >> should validate if a given object is contained by a kernel stack frame.
>> >> Initial implementation is on x86.
>> >>
>> >> This is based on code from PaX.
>> >>
>> >
>> > This, along with Josh's livepatch work, are two examples of unwinders
>> > that matter for correctness instead of just debugging.  ISTM this
>> > should just use Josh's code directly once it's been written.
>>
>> Do you have a URL for Josh's code? I'd love to see what's happening there.
>
> The code is actually going to be 100% different next time around, but
> FWIW, here's the last attempt:
>
>   https://lkml.kernel.org/r/4d34d452bf8f85c7d6d5f93db1d3eeb4cba335c7.1461875890.git.jpoimboe@redhat.com
>
> In the meantime I've realized the need to rewrite the x86 core stack
> walking code to something much more manageable so we don't need all
> these unwinders everywhere.  I'll probably post the patches in the next
> week or so.  I'll add you to the CC list.

Awesome!

> With the new interface I think you'll be able to do something like:
>
>         struct unwind_state state;
>
>         unwind_start(&state, current, NULL, NULL);
>         unwind_next_frame(&state);
>         oldframe = unwind_get_stack_pointer(&state);
>
>         unwind_next_frame(&state);
>         frame = unwind_get_stack_pointer(&state);
>
>         do {
>                 if (obj + len <= frame)
>                         return blah;
>                 oldframe = frame;
>                 frame = unwind_get_stack_pointer(&state);
>
>         } while (unwind_next_frame(&state));
>
> And then at the end there'll be some (still TBD) way to query whether it
> reached the last syscall pt_regs frame, or if it instead encountered a
> bogus frame pointer along the way and had to bail early.

Sounds good to me. Will there be any frame size information available?
Right now, the unwinder from PaX just drops 2 pointers (saved frame,
saved ip) from the delta of frame address to find the size of the
actual stack area used by the function. If I could shave things like
padding and possible stack canaries off the size too, that would be
great.

Since I'm aiming the hardened usercopy series for 4.8, I figure I'll
just leave this unwinder in for now, and once yours lands, I can rip
it out again.

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 01/11] mm: Implement stack frame object validation
  2016-07-14 18:10           ` Kees Cook
                               ` (4 preceding siblings ...)
  (?)
@ 2016-07-14 19:23             ` Josh Poimboeuf
  -1 siblings, 0 replies; 203+ messages in thread
From: Josh Poimboeuf @ 2016-07-14 19:23 UTC (permalink / raw)
  To: Kees Cook
  Cc: Andy Lutomirski, linux-kernel, Rik van Riel, Casey Schaufler,
	PaX Team, Brad Spengler, Russell King, Catalin Marinas,
	Will Deacon, Ard Biesheuvel, Benjamin Herrenschmidt,
	Michael Ellerman, Tony Luck, Fenghua Yu, David S. Miller, X86 ML,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Andy Lutomirski, Borislav Petkov, Mathias Krause,
	Jan Kara, Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov,
	Laura Abbott, linux-arm-kernel, linux-ia64, linuxppc-dev,
	sparclinux, linux-arch, linux-mm, kernel-hardening

On Thu, Jul 14, 2016 at 11:10:18AM -0700, Kees Cook wrote:
> On Wed, Jul 13, 2016 at 10:48 PM, Josh Poimboeuf <jpoimboe@redhat.com> wrote:
> > On Wed, Jul 13, 2016 at 03:04:26PM -0700, Kees Cook wrote:
> >> On Wed, Jul 13, 2016 at 3:01 PM, Andy Lutomirski <luto@amacapital.net> wrote:
> >> > On Wed, Jul 13, 2016 at 2:55 PM, Kees Cook <keescook@chromium.org> wrote:
> >> >> This creates per-architecture function arch_within_stack_frames() that
> >> >> should validate if a given object is contained by a kernel stack frame.
> >> >> Initial implementation is on x86.
> >> >>
> >> >> This is based on code from PaX.
> >> >>
> >> >
> >> > This, along with Josh's livepatch work, are two examples of unwinders
> >> > that matter for correctness instead of just debugging.  ISTM this
> >> > should just use Josh's code directly once it's been written.
> >>
> >> Do you have a URL for Josh's code? I'd love to see what's happening there.
> >
> > The code is actually going to be 100% different next time around, but
> > FWIW, here's the last attempt:
> >
> >   https://lkml.kernel.org/r/4d34d452bf8f85c7d6d5f93db1d3eeb4cba335c7.1461875890.git.jpoimboe@redhat.com
> >
> > In the meantime I've realized the need to rewrite the x86 core stack
> > walking code to something much more manageable so we don't need all
> > these unwinders everywhere.  I'll probably post the patches in the next
> > week or so.  I'll add you to the CC list.
> 
> Awesome!
> 
> > With the new interface I think you'll be able to do something like:
> >
> >         struct unwind_state state;
> >
> >         unwind_start(&state, current, NULL, NULL);
> >         unwind_next_frame(&state);
> >         oldframe = unwind_get_stack_pointer(&state);
> >
> >         unwind_next_frame(&state);
> >         frame = unwind_get_stack_pointer(&state);
> >
> >         do {
> >                 if (obj + len <= frame)
> >                         return blah;
> >                 oldframe = frame;
> >                 frame = unwind_get_stack_pointer(&state);
> >
> >         } while (unwind_next_frame(&state));
> >
> > And then at the end there'll be some (still TBD) way to query whether it
> > reached the last syscall pt_regs frame, or if it instead encountered a
> > bogus frame pointer along the way and had to bail early.
> 
> Sounds good to me. Will there be any frame size information available?
> Right now, the unwinder from PaX just drops 2 pointers (saved frame,
> saved ip) from the delta of frame address to find the size of the
> actual stack area used by the function. If I could shave things like
> padding and possible stack canaries off the size too, that would be
> great.

For x86, stacks are aligned at long word boundaries, so there's no real
stack padding.

Also the CC_STACKPROTECTOR stack canaries are created by a gcc feature
which only affects certain functions (and thus certain frames) and I
don't know of any reliable way to find them.

So with frame pointers, I think the best you can do is just assume that
the frame data area is always two words smaller than the total frame
size.
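
(Put concretely, and only as an illustration rather than an existing interface,
that assumption reduces the containment test to something like the sketch
below; obj_in_frame_data() is a made-up name.)

	/*
	 * Sketch only: an object lies within the data area of the frame
	 * delimited by oldframe and frame if it starts above the two
	 * saved words at the bottom and ends at or before the next
	 * saved frame pointer.
	 */
	static inline bool obj_in_frame_data(const void *obj, unsigned long len,
					     const void *oldframe, const void *frame)
	{
		const char *low = (const char *)oldframe + 2 * sizeof(void *);

		return (const char *)obj >= low &&
		       (const char *)obj + len <= (const char *)frame;
	}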

> Since I'm aiming the hardened usercopy series for 4.8, I figure I'll
> just leave this unwinder in for now, and once yours lands, I can rip
> it out again.

Sure, sounds fine to me.  If your code lands before I post mine, I can
convert it myself.

-- 
Josh

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 01/11] mm: Implement stack frame object validation
@ 2016-07-14 19:23             ` Josh Poimboeuf
  0 siblings, 0 replies; 203+ messages in thread
From: Josh Poimboeuf @ 2016-07-14 19:23 UTC (permalink / raw)
  To: Kees Cook
  Cc: Andy Lutomirski, linux-kernel, Rik van Riel, Casey Schaufler,
	PaX Team, Brad Spengler, Russell King, Catalin Marinas,
	Will Deacon, Ard Biesheuvel, Benjamin Herrenschmidt,
	Michael Ellerman, Tony Luck, Fenghua Yu, David S. Miller, X86 ML,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim

On Thu, Jul 14, 2016 at 11:10:18AM -0700, Kees Cook wrote:
> On Wed, Jul 13, 2016 at 10:48 PM, Josh Poimboeuf <jpoimboe@redhat.com> wrote:
> > On Wed, Jul 13, 2016 at 03:04:26PM -0700, Kees Cook wrote:
> >> On Wed, Jul 13, 2016 at 3:01 PM, Andy Lutomirski <luto@amacapital.net> wrote:
> >> > On Wed, Jul 13, 2016 at 2:55 PM, Kees Cook <keescook@chromium.org> wrote:
> >> >> This creates per-architecture function arch_within_stack_frames() that
> >> >> should validate if a given object is contained by a kernel stack frame.
> >> >> Initial implementation is on x86.
> >> >>
> >> >> This is based on code from PaX.
> >> >>
> >> >
> >> > This, along with Josh's livepatch work, are two examples of unwinders
> >> > that matter for correctness instead of just debugging.  ISTM this
> >> > should just use Josh's code directly once it's been written.
> >>
> >> Do you have a URL for Josh's code? I'd love to see what's happening there.
> >
> > The code is actually going to be 100% different next time around, but
> > FWIW, here's the last attempt:
> >
> >   https://lkml.kernel.org/r/4d34d452bf8f85c7d6d5f93db1d3eeb4cba335c7.1461875890.git.jpoimboe@redhat.com
> >
> > In the meantime I've realized the need to rewrite the x86 core stack
> > walking code to something much more manageable so we don't need all
> > these unwinders everywhere.  I'll probably post the patches in the next
> > week or so.  I'll add you to the CC list.
> 
> Awesome!
> 
> > With the new interface I think you'll be able to do something like:
> >
> >         struct unwind_state state;
> >
> >         unwind_start(&state, current, NULL, NULL);
> >         unwind_next_frame(&state);
> >         oldframe = unwind_get_stack_pointer(&state);
> >
> >         unwind_next_frame(&state);
> >         frame = unwind_get_stack_pointer(&state);
> >
> >         do {
> >                 if (obj + len <= frame)
> >                         return blah;
> >                 oldframe = frame;
> >                 frame = unwind_get_stack_pointer(&state);
> >
> >         } while (unwind_next_frame(&state));
> >
> > And then at the end there'll be some (still TBD) way to query whether it
> > reached the last syscall pt_regs frame, or if it instead encountered a
> > bogus frame pointer along the way and had to bail early.
> 
> Sounds good to me. Will there be any frame size information available?
> Right now, the unwinder from PaX just drops 2 pointers (saved frame,
> saved ip) from the delta of frame address to find the size of the
> actual stack area used by the function. If I could shave things like
> padding and possible stack canaries off the size too, that would be
> great.

For x86, stacks are aligned at long word boundaries, so there's no real
stack padding.

Also the CC_STACKPROTECTOR stack canaries are created by a gcc feature
which only affects certain functions (and thus certain frames) and I
don't know of any reliable way to find them.

So with frame pointers, I think the best you can do is just assume that
the frame data area is always two words smaller than the total frame
size.

> Since I'm aiming the hardened usercopy series for 4.8, I figure I'll
> just leave this unwinder in for now, and once yours lands, I can rip
> it out again.

Sure, sounds fine to me.  If your code lands before I post mine, I can
convert it myself.

-- 
Josh

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 01/11] mm: Implement stack frame object validation
  2016-07-14 19:23             ` Josh Poimboeuf
@ 2016-07-14 21:38               ` Kees Cook
  -1 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-14 21:38 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Andy Lutomirski, linux-kernel, Rik van Riel, Casey Schaufler,
	PaX Team, Brad Spengler, Russell King, Catalin Marinas,
	Will Deacon, Ard Biesheuvel, Benjamin Herrenschmidt,
	Michael Ellerman, Tony Luck, Fenghua Yu, David S. Miller, X86 ML,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Andy Lutomirski, Borislav Petkov, Mathias Krause,
	Jan Kara, Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov,
	Laura Abbott, linux-arm-kernel, linux-ia64, linuxppc-dev,
	sparclinux, linux-arch, linux-mm, kernel-hardening

On Thu, Jul 14, 2016 at 12:23 PM, Josh Poimboeuf <jpoimboe@redhat.com> wrote:
> On Thu, Jul 14, 2016 at 11:10:18AM -0700, Kees Cook wrote:
>> On Wed, Jul 13, 2016 at 10:48 PM, Josh Poimboeuf <jpoimboe@redhat.com> wrote:
>> > On Wed, Jul 13, 2016 at 03:04:26PM -0700, Kees Cook wrote:
>> >> On Wed, Jul 13, 2016 at 3:01 PM, Andy Lutomirski <luto@amacapital.net> wrote:
>> >> > On Wed, Jul 13, 2016 at 2:55 PM, Kees Cook <keescook@chromium.org> wrote:
>> >> >> This creates per-architecture function arch_within_stack_frames() that
>> >> >> should validate if a given object is contained by a kernel stack frame.
>> >> >> Initial implementation is on x86.
>> >> >>
>> >> >> This is based on code from PaX.
>> >> >>
>> >> >
>> >> > This and Josh's livepatch work are two examples of unwinders
>> >> > that matter for correctness instead of just debugging.  ISTM this
>> >> > should just use Josh's code directly once it's been written.
>> >>
>> >> Do you have a URL for Josh's code? I'd love to see what's happening there.
>> >
>> > The code is actually going to be 100% different next time around, but
>> > FWIW, here's the last attempt:
>> >
>> >   https://lkml.kernel.org/r/4d34d452bf8f85c7d6d5f93db1d3eeb4cba335c7.1461875890.git.jpoimboe@redhat.com
>> >
>> > In the meantime I've realized the need to rewrite the x86 core stack
>> > walking code to something much more manageable so we don't need all
>> > these unwinders everywhere.  I'll probably post the patches in the next
>> > week or so.  I'll add you to the CC list.
>>
>> Awesome!
>>
>> > With the new interface I think you'll be able to do something like:
>> >
>> >         struct unwind_state state;
>> >
>> >         unwind_start(&state, current, NULL, NULL);
>> >         unwind_next_frame(&state);
>> >         oldframe = unwind_get_stack_pointer(&state);
>> >
>> >         unwind_next_frame(&state);
>> >         frame = unwind_get_stack_pointer(&state);
>> >
>> >         do {
>> >                 if (obj + len <= frame)
>> >                         return blah;
>> >                 oldframe = frame;
>> >                 frame = unwind_get_stack_pointer(&state);
>> >
>> >         } while (unwind_next_frame(&state));
>> >
>> > And then at the end there'll be some (still TBD) way to query whether it
>> > reached the last syscall pt_regs frame, or if it instead encountered a
>> > bogus frame pointer along the way and had to bail early.
>>
>> Sounds good to me. Will there be any frame size information available?
>> Right now, the unwinder from PaX just drops 2 pointers (saved frame,
>> saved ip) from the delta between frame addresses to find the size of the
>> actual stack area used by the function. If I could shave things like
>> padding and possible stack canaries off the size too, that would be
>> great.
>
> For x86, stacks are aligned at long word boundaries, so there's no real
> stack padding.

Well, I guess I meant the possible padding between variables and the
aligned pointers, but that's a really minor concern in my mind (as far
as being a potential kernel memory exposure on a bad usercopy).

> Also the CC_STACKPROTECTOR stack canaries are created by a gcc feature
> which only affects certain functions (and thus certain frames) and I
> don't know of any reliable way to find them.

Okay, that's fine. I had a horrible idea to just have the unwinder
look at the value stored in front of the saved ip, and if it matches
the known canary (for current anyway), then reduce the frame size by
another long word. ;)
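
In code form it would be something like this (hand-waving: the helper name
is made up and the exact canary slot offset is a guess, just to illustrate
the idea):

	/* Shrink the usable frame size if the word next to the saved ip
	 * looks like current's stack canary (assumes CC_STACKPROTECTOR). */
	static unsigned long trim_canary_slot(unsigned long frame,
					      unsigned long size)
	{
		unsigned long word = *(unsigned long *)(frame - sizeof(void *));

		if (word == current->stack_canary)
			size -= sizeof(void *);
		return size;
	}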

> So with frame pointers, I think the best you can do is just assume that
> the frame data area is always two words smaller than the total frame
> size.

Yeah, that's what's happening here currently. Cool.

>> Since I'm aiming the hardened usercopy series for 4.8, I figure I'll
>> just leave this unwinder in for now, and once yours lands, I can rip
>> it out again.
>
> Sure, sounds fine to me.  If your code lands before I post mine, I can
> convert it myself.

Awesome, I'll keep you posted. Thanks!

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 02/11] mm: Hardened usercopy
  2016-07-13 21:55   ` Kees Cook
@ 2016-07-14 23:20     ` Balbir Singh
  -1 siblings, 0 replies; 203+ messages in thread
From: Balbir Singh @ 2016-07-14 23:20 UTC (permalink / raw)
  To: Kees Cook
  Cc: linux-kernel, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

On Wed, Jul 13, 2016 at 02:55:55PM -0700, Kees Cook wrote:
> This is the start of porting PAX_USERCOPY into the mainline kernel. This
> is the first set of features, controlled by CONFIG_HARDENED_USERCOPY. The
> work is based on code by PaX Team and Brad Spengler, and an earlier port
> from Casey Schaufler. Additional non-slab page tests are from Rik van Riel.
> 
> This patch contains the logic for validating several conditions when
> performing copy_to_user() and copy_from_user() on the kernel object
> being copied to/from:
> - address range doesn't wrap around
> - address range isn't NULL or zero-allocated (with a non-zero copy size)
> - if on the slab allocator:
>   - copy size must be less than or equal to the object's allocated size
>     (when the check is implemented in the allocator, which appears in
>     subsequent patches)
> - otherwise, object must not span page allocations
> - if on the stack
>   - object must not extend before/after the current process stack
>   - object must be contained by the current stack frame (when there is
>     arch/build support for identifying stack frames)
> - object must not overlap with kernel text
> 
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
>  arch/Kconfig                |   7 ++
>  include/linux/slab.h        |  12 +++
>  include/linux/thread_info.h |  15 +++
>  mm/Makefile                 |   4 +
>  mm/usercopy.c               | 219 ++++++++++++++++++++++++++++++++++++++++++++
>  security/Kconfig            |  27 ++++++
>  6 files changed, 284 insertions(+)
>  create mode 100644 mm/usercopy.c
> 
> diff --git a/arch/Kconfig b/arch/Kconfig
> index 5e2776562035..195ee4cc939a 100644
> --- a/arch/Kconfig
> +++ b/arch/Kconfig
> @@ -433,6 +433,13 @@ config HAVE_ARCH_WITHIN_STACK_FRAMES
>  	  and similar) by implementing an inline arch_within_stack_frames(),
>  	  which is used by CONFIG_HARDENED_USERCOPY.
>  
> +config HAVE_ARCH_LINEAR_KERNEL_MAPPING
> +	bool
> +	help
> +	  An architecture should select this if it has a secondary linear
> +	  mapping of the kernel text. This is used to verify that kernel
> +	  text exposures are not visible under CONFIG_HARDENED_USERCOPY.
> +
>  config HAVE_CONTEXT_TRACKING
>  	bool
>  	help
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index aeb3e6d00a66..96a16a3fb7cb 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -155,6 +155,18 @@ void kfree(const void *);
>  void kzfree(const void *);
>  size_t ksize(const void *);
>  
> +#ifdef CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR
> +const char *__check_heap_object(const void *ptr, unsigned long n,
> +				struct page *page);
> +#else
> +static inline const char *__check_heap_object(const void *ptr,
> +					      unsigned long n,
> +					      struct page *page)
> +{
> +	return NULL;
> +}
> +#endif
> +
>  /*
>   * Some archs want to perform DMA into kmalloc caches and need a guaranteed
>   * alignment larger than the alignment of a 64-bit integer.
> diff --git a/include/linux/thread_info.h b/include/linux/thread_info.h
> index 3d5c80b4391d..f24b99eac969 100644
> --- a/include/linux/thread_info.h
> +++ b/include/linux/thread_info.h
> @@ -155,6 +155,21 @@ static inline int arch_within_stack_frames(const void * const stack,
>  }
>  #endif
>  
> +#ifdef CONFIG_HARDENED_USERCOPY
> +extern void __check_object_size(const void *ptr, unsigned long n,
> +					bool to_user);
> +
> +static inline void check_object_size(const void *ptr, unsigned long n,
> +				     bool to_user)
> +{
> +	__check_object_size(ptr, n, to_user);
> +}
> +#else
> +static inline void check_object_size(const void *ptr, unsigned long n,
> +				     bool to_user)
> +{ }
> +#endif /* CONFIG_HARDENED_USERCOPY */
> +
>  #endif	/* __KERNEL__ */
>  
>  #endif /* _LINUX_THREAD_INFO_H */
> diff --git a/mm/Makefile b/mm/Makefile
> index 78c6f7dedb83..32d37247c7e5 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -21,6 +21,9 @@ KCOV_INSTRUMENT_memcontrol.o := n
>  KCOV_INSTRUMENT_mmzone.o := n
>  KCOV_INSTRUMENT_vmstat.o := n
>  
> +# Since __builtin_frame_address does work as used, disable the warning.
> +CFLAGS_usercopy.o += $(call cc-disable-warning, frame-address)
> +
>  mmu-y			:= nommu.o
>  mmu-$(CONFIG_MMU)	:= gup.o highmem.o memory.o mincore.o \
>  			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
> @@ -99,3 +102,4 @@ obj-$(CONFIG_USERFAULTFD) += userfaultfd.o
>  obj-$(CONFIG_IDLE_PAGE_TRACKING) += page_idle.o
>  obj-$(CONFIG_FRAME_VECTOR) += frame_vector.o
>  obj-$(CONFIG_DEBUG_PAGE_REF) += debug_page_ref.o
> +obj-$(CONFIG_HARDENED_USERCOPY) += usercopy.o
> diff --git a/mm/usercopy.c b/mm/usercopy.c
> new file mode 100644
> index 000000000000..4161a1fb1909
> --- /dev/null
> +++ b/mm/usercopy.c
> @@ -0,0 +1,219 @@
> +/*
> + * This implements the various checks for CONFIG_HARDENED_USERCOPY*,
> + * which are designed to protect kernel memory from needless exposure
> + * and overwrite under many unintended conditions. This code is based
> + * on PAX_USERCOPY, which is:
> + *
> + * Copyright (C) 2001-2016 PaX Team, Bradley Spengler, Open Source
> + * Security Inc.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + */
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +
> +#include <linux/mm.h>
> +#include <linux/slab.h>
> +#include <asm/sections.h>
> +
> +/*
> + * Checks if a given pointer and length is contained by the current
> + * stack frame (if possible).
> + *
> + *	0: not at all on the stack
> + *	1: fully within a valid stack frame
> + *	2: fully on the stack (when can't do frame-checking)
> + *	-1: error condition (invalid stack position or bad stack frame)

Can we use enums here? That would make these return values easier to read
and debug.
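
For reference, a minimal sketch of what that could look like; the names are
illustrative only, not from the posted patch:

enum stack_check {
	NOT_STACK = 0,		/* not at all on the stack */
	GOOD_FRAME = 1,		/* fully within a valid stack frame */
	GOOD_STACK = 2,		/* fully on the stack, frame unknown */
	BAD_STACK = -1,		/* invalid position or bad stack frame */
};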

> + */
> +static noinline int check_stack_object(const void *obj, unsigned long len)
> +{
> +	const void * const stack = task_stack_page(current);
> +	const void * const stackend = stack + THREAD_SIZE;
> +	int ret;
> +
> +	/* Object is not on the stack at all. */
> +	if (obj + len <= stack || stackend <= obj)
> +		return 0;
> +
> +	/*
> +	 * Reject: object partially overlaps the stack (passing the
> +	 * check above means at least one end is within the stack,
> +	 * so if this check fails, the other end is outside the stack).
> +	 */
> +	if (obj < stack || stackend < obj + len)
> +		return -1;
> +
> +	/* Check if object is safely within a valid frame. */
> +	ret = arch_within_stack_frames(stack, stackend, obj, len);
> +	if (ret)
> +		return ret;
> +
> +	return 2;
> +}
> +
> +static void report_usercopy(const void *ptr, unsigned long len,
> +			    bool to_user, const char *type)
> +{
> +	pr_emerg("kernel memory %s attempt detected %s %p (%s) (%lu bytes)\n",
> +		to_user ? "exposure" : "overwrite",
> +		to_user ? "from" : "to", ptr, type ? : "unknown", len);
> +	dump_stack();
> +	do_group_exit(SIGKILL);

SIGKILL -- should this be SIGBUS instead?

> +}
> +
> +/* Returns true if any portion of [ptr,ptr+n) overlaps with [low,high). */
> +static bool overlaps(const void *ptr, unsigned long n, unsigned long low,
> +		     unsigned long high)
> +{
> +	unsigned long check_low = (uintptr_t)ptr;
> +	unsigned long check_high = check_low + n;
> +
> +	/* Does not overlap if entirely above or entirely below. */
> +	if (check_low >= high || check_high < low)
> +		return false;
> +
> +	return true;
> +}
> +
> +/* Is this address range in the kernel text area? */
> +static inline const char *check_kernel_text_object(const void *ptr,
> +						   unsigned long n)
> +{
> +	unsigned long textlow = (unsigned long)_stext;
> +	unsigned long texthigh = (unsigned long)_etext;
> +
> +	if (overlaps(ptr, n, textlow, texthigh))
> +		return "<kernel text>";
> +
> +#ifdef CONFIG_HAVE_ARCH_LINEAR_KERNEL_MAPPING
> +	/* Check against linear mapping as well. */
> +	if (overlaps(ptr, n, (unsigned long)__va(__pa(textlow)),
> +		     (unsigned long)__va(__pa(texthigh))))
> +		return "<linear kernel text>";
> +#endif
> +
> +	return NULL;
> +}
> +
> +static inline const char *check_bogus_address(const void *ptr, unsigned long n)
> +{
> +	/* Reject if object wraps past end of memory. */
> +	if (ptr + n < ptr)
> +		return "<wrapped address>";
> +
> +	/* Reject if NULL or ZERO-allocation. */
> +	if (ZERO_OR_NULL_PTR(ptr))
> +		return "<null>";
> +
> +	return NULL;
> +}
> +
> +static inline const char *check_heap_object(const void *ptr, unsigned long n,
> +					    bool to_user)
> +{
> +	struct page *page, *endpage;
> +	const void *end = ptr + n - 1;
> +
> +	if (!virt_addr_valid(ptr))
> +		return NULL;
> +
> +	page = virt_to_head_page(ptr);
> +
> +	/* Check slab allocator for flags and size. */
> +	if (PageSlab(page))
> +		return __check_heap_object(ptr, n, page);
> +
> +	/*
> +	 * Sometimes the kernel data regions are not marked Reserved (see
> +	 * check below). And sometimes [_sdata,_edata) does not cover
> +	 * rodata and/or bss, so check each range explicitly.
> +	 */
> +
> +	/* Allow reads of kernel rodata region (if not marked as Reserved). */
> +	if (ptr >= (const void *)__start_rodata &&
> +	    end <= (const void *)__end_rodata) {
> +		if (!to_user)
> +			return "<rodata>";
> +		return NULL;
> +	}
> +
> +	/* Allow kernel data region (if not marked as Reserved). */
> +	if (ptr >= (const void *)_sdata && end <= (const void *)_edata)
> +		return NULL;
> +
> +	/* Allow kernel bss region (if not marked as Reserved). */
> +	if (ptr >= (const void *)__bss_start &&
> +	    end <= (const void *)__bss_stop)
> +		return NULL;
> +
> +	/* Is the object wholly within one base page? */
> +	if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
> +		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
> +		return NULL;
> +
> +	/* Allow if start and end are inside the same compound page. */
> +	endpage = virt_to_head_page(end);
> +	if (likely(endpage == page))
> +		return NULL;
> +
> +	/* Allow special areas, device memory, and sometimes kernel data. */
> +	if (PageReserved(page) && PageReserved(endpage))
> +		return NULL;

If we get here, it's likely that endpage > page. Is it enough to check
that only the first and last pages are Reserved? What about the pages
in the middle?
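
For reference, a sketch of the "check every page in the range" variant being
asked about; this is illustrative only (the helper name is made up) and not
part of the posted patch:

static const char *check_all_pages_reserved(const void *ptr, const void *end)
{
	const void *p;

	/* Require every page spanned by [ptr, end] to be Reserved. */
	for (p = (const void *)((unsigned long)ptr & PAGE_MASK);
	     p <= end; p += PAGE_SIZE) {
		if (!PageReserved(virt_to_head_page(p)))
			return "<spans multiple pages>";
	}

	return NULL;	/* every page in the range is Reserved */
}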


> +
> +	/* Uh oh. The "object" spans several independently allocated pages. */
> +	return "<spans multiple pages>";
> +}
> +
> +/*
> + * Validates that the given object is one of:
> + * - known safe heap object
> + * - known safe stack object
> + * - not in kernel text
> + */
> +void __check_object_size(const void *ptr, unsigned long n, bool to_user)
> +{
> +	const char *err;
> +
> +	/* Skip all tests if size is zero. */
> +	if (!n)
> +		return;
> +
> +	/* Check for invalid addresses. */
> +	err = check_bogus_address(ptr, n);
> +	if (err)
> +		goto report;
> +
> +	/* Check for bad heap object. */
> +	err = check_heap_object(ptr, n, to_user);
> +	if (err)
> +		goto report;
> +
> +	/* Check for bad stack object. */
> +	switch (check_stack_object(ptr, n)) {
> +	case 0:
> +		/* Object is not touching the current process stack. */
> +		break;
> +	case 1:
> +	case 2:
> +		/*
> +		 * Object is either in the correct frame (when it
> +		 * is possible to check) or just generally on the
> +		 * process stack (when frame checking not available).
> +		 */
> +		return;
> +	default:
> +		err = "<process stack>";
> +		goto report;
> +	}
> +
> +	/* Check for object in kernel to avoid text exposure. */
> +	err = check_kernel_text_object(ptr, n);
> +	if (!err)
> +		return;
> +
> +report:
> +	report_usercopy(ptr, n, to_user, err);
> +}

Looks good otherwise

Balbir Singh

^ permalink raw reply	[flat|nested] 203+ messages in thread

* [PATCH v2 02/11] mm: Hardened usercopy
@ 2016-07-14 23:20     ` Balbir Singh
  0 siblings, 0 replies; 203+ messages in thread
From: Balbir Singh @ 2016-07-14 23:20 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jul 13, 2016 at 02:55:55PM -0700, Kees Cook wrote:
> This is the start of porting PAX_USERCOPY into the mainline kernel. This
> is the first set of features, controlled by CONFIG_HARDENED_USERCOPY. The
> work is based on code by PaX Team and Brad Spengler, and an earlier port
> from Casey Schaufler. Additional non-slab page tests are from Rik van Riel.
> 
> This patch contains the logic for validating several conditions when
> performing copy_to_user() and copy_from_user() on the kernel object
> being copied to/from:
> - address range doesn't wrap around
> - address range isn't NULL or zero-allocated (with a non-zero copy size)
> - if on the slab allocator:
>   - object size must be less than or equal to copy size (when check is
>     implemented in the allocator, which appear in subsequent patches)
> - otherwise, object must not span page allocations
> - if on the stack
>   - object must not extend before/after the current process task
>   - object must be contained by the current stack frame (when there is
>     arch/build support for identifying stack frames)
> - object must not overlap with kernel text
> 
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
>  arch/Kconfig                |   7 ++
>  include/linux/slab.h        |  12 +++
>  include/linux/thread_info.h |  15 +++
>  mm/Makefile                 |   4 +
>  mm/usercopy.c               | 219 ++++++++++++++++++++++++++++++++++++++++++++
>  security/Kconfig            |  27 ++++++
>  6 files changed, 284 insertions(+)
>  create mode 100644 mm/usercopy.c
> 
> diff --git a/arch/Kconfig b/arch/Kconfig
> index 5e2776562035..195ee4cc939a 100644
> --- a/arch/Kconfig
> +++ b/arch/Kconfig
> @@ -433,6 +433,13 @@ config HAVE_ARCH_WITHIN_STACK_FRAMES
>  	  and similar) by implementing an inline arch_within_stack_frames(),
>  	  which is used by CONFIG_HARDENED_USERCOPY.
>  
> +config HAVE_ARCH_LINEAR_KERNEL_MAPPING
> +	bool
> +	help
> +	  An architecture should select this if it has a secondary linear
> +	  mapping of the kernel text. This is used to verify that kernel
> +	  text exposures are not visible under CONFIG_HARDENED_USERCOPY.
> +
>  config HAVE_CONTEXT_TRACKING
>  	bool
>  	help
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index aeb3e6d00a66..96a16a3fb7cb 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -155,6 +155,18 @@ void kfree(const void *);
>  void kzfree(const void *);
>  size_t ksize(const void *);
>  
> +#ifdef CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR
> +const char *__check_heap_object(const void *ptr, unsigned long n,
> +				struct page *page);
> +#else
> +static inline const char *__check_heap_object(const void *ptr,
> +					      unsigned long n,
> +					      struct page *page)
> +{
> +	return NULL;
> +}
> +#endif
> +
>  /*
>   * Some archs want to perform DMA into kmalloc caches and need a guaranteed
>   * alignment larger than the alignment of a 64-bit integer.
> diff --git a/include/linux/thread_info.h b/include/linux/thread_info.h
> index 3d5c80b4391d..f24b99eac969 100644
> --- a/include/linux/thread_info.h
> +++ b/include/linux/thread_info.h
> @@ -155,6 +155,21 @@ static inline int arch_within_stack_frames(const void * const stack,
>  }
>  #endif
>  
> +#ifdef CONFIG_HARDENED_USERCOPY
> +extern void __check_object_size(const void *ptr, unsigned long n,
> +					bool to_user);
> +
> +static inline void check_object_size(const void *ptr, unsigned long n,
> +				     bool to_user)
> +{
> +	__check_object_size(ptr, n, to_user);
> +}
> +#else
> +static inline void check_object_size(const void *ptr, unsigned long n,
> +				     bool to_user)
> +{ }
> +#endif /* CONFIG_HARDENED_USERCOPY */
> +
>  #endif	/* __KERNEL__ */
>  
>  #endif /* _LINUX_THREAD_INFO_H */
> diff --git a/mm/Makefile b/mm/Makefile
> index 78c6f7dedb83..32d37247c7e5 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -21,6 +21,9 @@ KCOV_INSTRUMENT_memcontrol.o := n
>  KCOV_INSTRUMENT_mmzone.o := n
>  KCOV_INSTRUMENT_vmstat.o := n
>  
> +# Since __builtin_frame_address does work as used, disable the warning.
> +CFLAGS_usercopy.o += $(call cc-disable-warning, frame-address)
> +
>  mmu-y			:= nommu.o
>  mmu-$(CONFIG_MMU)	:= gup.o highmem.o memory.o mincore.o \
>  			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
> @@ -99,3 +102,4 @@ obj-$(CONFIG_USERFAULTFD) += userfaultfd.o
>  obj-$(CONFIG_IDLE_PAGE_TRACKING) += page_idle.o
>  obj-$(CONFIG_FRAME_VECTOR) += frame_vector.o
>  obj-$(CONFIG_DEBUG_PAGE_REF) += debug_page_ref.o
> +obj-$(CONFIG_HARDENED_USERCOPY) += usercopy.o
> diff --git a/mm/usercopy.c b/mm/usercopy.c
> new file mode 100644
> index 000000000000..4161a1fb1909
> --- /dev/null
> +++ b/mm/usercopy.c
> @@ -0,0 +1,219 @@
> +/*
> + * This implements the various checks for CONFIG_HARDENED_USERCOPY*,
> + * which are designed to protect kernel memory from needless exposure
> + * and overwrite under many unintended conditions. This code is based
> + * on PAX_USERCOPY, which is:
> + *
> + * Copyright (C) 2001-2016 PaX Team, Bradley Spengler, Open Source
> + * Security Inc.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + */
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +
> +#include <linux/mm.h>
> +#include <linux/slab.h>
> +#include <asm/sections.h>
> +
> +/*
> + * Checks if a given pointer and length is contained by the current
> + * stack frame (if possible).
> + *
> + *	0: not at all on the stack
> + *	1: fully within a valid stack frame
> + *	2: fully on the stack (when can't do frame-checking)
> + *	-1: error condition (invalid stack position or bad stack frame)

Can we use enums? Makes it easier to read/debug

> + */
> +static noinline int check_stack_object(const void *obj, unsigned long len)
> +{
> +	const void * const stack = task_stack_page(current);
> +	const void * const stackend = stack + THREAD_SIZE;
> +	int ret;
> +
> +	/* Object is not on the stack at all. */
> +	if (obj + len <= stack || stackend <= obj)
> +		return 0;
> +
> +	/*
> +	 * Reject: object partially overlaps the stack (passing the
> +	 * the check above means at least one end is within the stack,
> +	 * so if this check fails, the other end is outside the stack).
> +	 */
> +	if (obj < stack || stackend < obj + len)
> +		return -1;
> +
> +	/* Check if object is safely within a valid frame. */
> +	ret = arch_within_stack_frames(stack, stackend, obj, len);
> +	if (ret)
> +		return ret;
> +
> +	return 2;
> +}
> +
> +static void report_usercopy(const void *ptr, unsigned long len,
> +			    bool to_user, const char *type)
> +{
> +	pr_emerg("kernel memory %s attempt detected %s %p (%s) (%lu bytes)\n",
> +		to_user ? "exposure" : "overwrite",
> +		to_user ? "from" : "to", ptr, type ? : "unknown", len);
> +	dump_stack();
> +	do_group_exit(SIGKILL);

SIGKILL -- SIGBUS?

> +}
> +
> +/* Returns true if any portion of [ptr,ptr+n) over laps with [low,high). */
> +static bool overlaps(const void *ptr, unsigned long n, unsigned long low,
> +		     unsigned long high)
> +{
> +	unsigned long check_low = (uintptr_t)ptr;
> +	unsigned long check_high = check_low + n;
> +
> +	/* Does not overlap if entirely above or entirely below. */
> +	if (check_low >= high || check_high < low)
> +		return false;
> +
> +	return true;
> +}
> +
> +/* Is this address range in the kernel text area? */
> +static inline const char *check_kernel_text_object(const void *ptr,
> +						   unsigned long n)
> +{
> +	unsigned long textlow = (unsigned long)_stext;
> +	unsigned long texthigh = (unsigned long)_etext;
> +
> +	if (overlaps(ptr, n, textlow, texthigh))
> +		return "<kernel text>";
> +
> +#ifdef HAVE_ARCH_LINEAR_KERNEL_MAPPING
> +	/* Check against linear mapping as well. */
> +	if (overlaps(ptr, n, (unsigned long)__va(__pa(textlow)),
> +		     (unsigned long)__va(__pa(texthigh))))
> +		return "<linear kernel text>";
> +#endif
> +
> +	return NULL;
> +}
> +
> +static inline const char *check_bogus_address(const void *ptr, unsigned long n)
> +{
> +	/* Reject if object wraps past end of memory. */
> +	if (ptr + n < ptr)
> +		return "<wrapped address>";
> +
> +	/* Reject if NULL or ZERO-allocation. */
> +	if (ZERO_OR_NULL_PTR(ptr))
> +		return "<null>";
> +
> +	return NULL;
> +}
> +
> +static inline const char *check_heap_object(const void *ptr, unsigned long n,
> +					    bool to_user)
> +{
> +	struct page *page, *endpage;
> +	const void *end = ptr + n - 1;
> +
> +	if (!virt_addr_valid(ptr))
> +		return NULL;
> +
> +	page = virt_to_head_page(ptr);
> +
> +	/* Check slab allocator for flags and size. */
> +	if (PageSlab(page))
> +		return __check_heap_object(ptr, n, page);
> +
> +	/*
> +	 * Sometimes the kernel data regions are not marked Reserved (see
> +	 * check below). And sometimes [_sdata,_edata) does not cover
> +	 * rodata and/or bss, so check each range explicitly.
> +	 */
> +
> +	/* Allow reads of kernel rodata region (if not marked as Reserved). */
> +	if (ptr >= (const void *)__start_rodata &&
> +	    end <= (const void *)__end_rodata) {
> +		if (!to_user)
> +			return "<rodata>";
> +		return NULL;
> +	}
> +
> +	/* Allow kernel data region (if not marked as Reserved). */
> +	if (ptr >= (const void *)_sdata && end <= (const void *)_edata)
> +		return NULL;
> +
> +	/* Allow kernel bss region (if not marked as Reserved). */
> +	if (ptr >= (const void *)__bss_start &&
> +	    end <= (const void *)__bss_stop)
> +		return NULL;
> +
> +	/* Is the object wholly within one base page? */
> +	if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
> +		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
> +		return NULL;
> +
> +	/* Allow if start and end are inside the same compound page. */
> +	endpage = virt_to_head_page(end);
> +	if (likely(endpage == page))
> +		return NULL;
> +
> +	/* Allow special areas, device memory, and sometimes kernel data. */
> +	if (PageReserved(page) && PageReserved(endpage))
> +		return NULL;

If we came here, it's likely that endpage > page. Is it enough to check
that only the first and last pages are Reserved? What about the ones in
the middle?


> +
> +	/* Uh oh. The "object" spans several independently allocated pages. */
> +	return "<spans multiple pages>";
> +}
> +
> +/*
> + * Validates that the given object is one of:
> + * - known safe heap object
> + * - known safe stack object
> + * - not in kernel text
> + */
> +void __check_object_size(const void *ptr, unsigned long n, bool to_user)
> +{
> +	const char *err;
> +
> +	/* Skip all tests if size is zero. */
> +	if (!n)
> +		return;
> +
> +	/* Check for invalid addresses. */
> +	err = check_bogus_address(ptr, n);
> +	if (err)
> +		goto report;
> +
> +	/* Check for bad heap object. */
> +	err = check_heap_object(ptr, n, to_user);
> +	if (err)
> +		goto report;
> +
> +	/* Check for bad stack object. */
> +	switch (check_stack_object(ptr, n)) {
> +	case 0:
> +		/* Object is not touching the current process stack. */
> +		break;
> +	case 1:
> +	case 2:
> +		/*
> +		 * Object is either in the correct frame (when it
> +		 * is possible to check) or just generally on the
> +		 * process stack (when frame checking not available).
> +		 */
> +		return;
> +	default:
> +		err = "<process stack>";
> +		goto report;
> +	}
> +
> +	/* Check for object in kernel to avoid text exposure. */
> +	err = check_kernel_text_object(ptr, n);
> +	if (!err)
> +		return;
> +
> +report:
> +	report_usercopy(ptr, n, to_user, err);
> +}

Looks good otherwise

Balbir Singh

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 02/11] mm: Hardened usercopy
  2016-07-14 23:20     ` Balbir Singh
                         ` (2 preceding siblings ...)
  (?)
@ 2016-07-15  1:04       ` Rik van Riel
  -1 siblings, 0 replies; 203+ messages in thread
From: Rik van Riel @ 2016-07-15  1:04 UTC (permalink / raw)
  To: bsingharora, Kees Cook
  Cc: linux-kernel, Casey Schaufler, PaX Team, Brad Spengler,
	Russell King, Catalin Marinas, Will Deacon, Ard Biesheuvel,
	Benjamin Herrenschmidt, Michael Ellerman, Tony Luck, Fenghua Yu,
	David S. Miller, x86, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Mathias Krause, Jan Kara, Vitaly Wool,
	Andrea Arcangeli, Dmitry Vyukov, Laura Abbott, linux-arm-kernel,
	linux-ia64, linuxppc-dev, sparclinux, linux-arch, linux-mm,
	kernel-hardening

On Fri, 2016-07-15 at 09:20 +1000, Balbir Singh wrote:

> > ==
> > +		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
> > +		return NULL;
> > +
> > +	/* Allow if start and end are inside the same compound page. */
> > +	endpage = virt_to_head_page(end);
> > +	if (likely(endpage == page))
> > +		return NULL;
> > +
> > +	/* Allow special areas, device memory, and sometimes kernel data. */
> > +	if (PageReserved(page) && PageReserved(endpage))
> > +		return NULL;
> 
> If we came here, it's likely that endpage > page, do we need to check
> that only the first and last pages are reserved? What about the ones
> in
> the middle?

I think this will be so rare, we can get away with just
checking the beginning and the end.

-- 

All Rights Reversed.

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 02/11] mm: Hardened usercopy
  2016-07-15  1:04       ` Rik van Riel
                           ` (3 preceding siblings ...)
  (?)
@ 2016-07-15  1:41         ` Balbir Singh
  -1 siblings, 0 replies; 203+ messages in thread
From: Balbir Singh @ 2016-07-15  1:41 UTC (permalink / raw)
  To: Rik van Riel
  Cc: bsingharora, Kees Cook, linux-kernel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

On Thu, Jul 14, 2016 at 09:04:18PM -0400, Rik van Riel wrote:
> On Fri, 2016-07-15 at 09:20 +1000, Balbir Singh wrote:
> 
> > > ==
> > > +		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
> > > +		return NULL;
> > > +
> > > +	/* Allow if start and end are inside the same compound page. */
> > > +	endpage = virt_to_head_page(end);
> > > +	if (likely(endpage == page))
> > > +		return NULL;
> > > +
> > > +	/* Allow special areas, device memory, and sometimes kernel data. */
> > > +	if (PageReserved(page) && PageReserved(endpage))
> > > +		return NULL;
> > 
> > If we came here, it's likely that endpage > page, do we need to check
> > that only the first and last pages are reserved? What about the ones
> > in
> > the middle?
> 
> I think this will be so rare, we can get away with just
> checking the beginning and the end.
>

But do we want to leave a hole where aware user space can try a longer
copy_* to avoid this check? If that case is so unlikely, should we just
bite the bullet and do the check for the entire range?

Balbir Singh. 

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 11/11] mm: SLUB hardened usercopy support
  2016-07-13 21:56   ` Kees Cook
                       ` (3 preceding siblings ...)
  (?)
@ 2016-07-15  2:05     ` Balbir Singh
  -1 siblings, 0 replies; 203+ messages in thread
From: Balbir Singh @ 2016-07-15  2:05 UTC (permalink / raw)
  To: Kees Cook
  Cc: linux-kernel, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, linux-mm, kernel-hardening

On Wed, Jul 13, 2016 at 02:56:04PM -0700, Kees Cook wrote:
> Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
> SLUB allocator to catch any copies that may span objects. Includes a
> redzone handling fix from Michael Ellerman.
> 
> Based on code from PaX and grsecurity.
> 
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
>  init/Kconfig |  1 +
>  mm/slub.c    | 36 ++++++++++++++++++++++++++++++++++++
>  2 files changed, 37 insertions(+)
> 
> diff --git a/init/Kconfig b/init/Kconfig
> index 798c2020ee7c..1c4711819dfd 100644
> --- a/init/Kconfig
> +++ b/init/Kconfig
> @@ -1765,6 +1765,7 @@ config SLAB
>  
>  config SLUB
>  	bool "SLUB (Unqueued Allocator)"
> +	select HAVE_HARDENED_USERCOPY_ALLOCATOR

Should this patch come in earlier from a build perspective? I think
patch 1 introduces and uses __check_heap_object.

Balbir Singh.

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 02/11] mm: Hardened usercopy
  2016-07-15  1:41         ` Balbir Singh
                             ` (4 preceding siblings ...)
  (?)
@ 2016-07-15  4:05           ` Kees Cook
  -1 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-15  4:05 UTC (permalink / raw)
  To: bsingharora
  Cc: Rik van Riel, LKML, Casey Schaufler, PaX Team, Brad Spengler,
	Russell King, Catalin Marinas, Will Deacon, Ard Biesheuvel,
	Benjamin Herrenschmidt, Michael Ellerman, Tony Luck, Fenghua Yu,
	David S. Miller, x86, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Mathias Krause, Jan Kara, Vitaly Wool,
	Andrea Arcangeli, Dmitry Vyukov, Laura Abbott, linux-arm-kernel,
	linux-ia64, linuxppc-dev, sparclinux, linux-arch, Linux-MM,
	kernel-hardening

On Thu, Jul 14, 2016 at 6:41 PM, Balbir Singh <bsingharora@gmail.com> wrote:
> On Thu, Jul 14, 2016 at 09:04:18PM -0400, Rik van Riel wrote:
>> On Fri, 2016-07-15 at 09:20 +1000, Balbir Singh wrote:
>>
>> > > ==
>> > > +            ((unsigned long)end & (unsigned long)PAGE_MASK)))
>> > > +         return NULL;
>> > > +
>> > > + /* Allow if start and end are inside the same compound page. */
>> > > + endpage = virt_to_head_page(end);
>> > > + if (likely(endpage == page))
>> > > +         return NULL;
>> > > +
>> > > + /* Allow special areas, device memory, and sometimes kernel data. */
>> > > + if (PageReserved(page) && PageReserved(endpage))
>> > > +         return NULL;
>> >
>> > If we came here, it's likely that endpage > page, do we need to check
>> > that only the first and last pages are reserved? What about the ones
>> > in
>> > the middle?
>>
>> I think this will be so rare, we can get away with just
>> checking the beginning and the end.
>>
>
> But do we want to leave a hole where an aware user space
> can try a longer copy_* to avoid this check? If it is unlikely
> should we just bite the bullet and do the check for the entire
> range?

I'd be okay with expanding the test -- it should be an extremely rare
situation already since the common Reserved areas (kernel data) will
have already been explicitly tested.

What's the best way to do "next page"? Should it just be:

for ( ; page <= endpage ; ptr += PAGE_SIZE, page = virt_to_head_page(ptr) ) {
    if (!PageReserved(page))
        return "<spans multiple pages>";
}

return NULL;

?
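
Or, spelled out a bit more as a rough sketch only (this assumes we keep
PageReserved() as the per-page test, reuses the ptr/end/page/endpage
values check_heap_object() already has, and the helper name is made up
purely for illustration, not something in the series):

static const char *check_all_pages_reserved(const void *ptr, const void *end,
					    struct page *page,
					    struct page *endpage)
{
	unsigned long p;

	/* The first and last pages must both be Reserved. */
	if (!PageReserved(page) || !PageReserved(endpage))
		return "<spans multiple pages>";

	/* Walk the base pages strictly between the first and the last. */
	for (p = ((unsigned long)ptr & PAGE_MASK) + PAGE_SIZE;
	     p < ((unsigned long)end & PAGE_MASK);
	     p += PAGE_SIZE) {
		if (!PageReserved(virt_to_head_page((void *)p)))
			return "<spans multiple pages>";
	}

	return NULL;
}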


-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 02/11] mm: Hardened usercopy
  2016-07-14 23:20     ` Balbir Singh
                         ` (4 preceding siblings ...)
  (?)
@ 2016-07-15  4:25       ` Kees Cook
  -1 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-15  4:25 UTC (permalink / raw)
  To: bsingharora
  Cc: LKML, Rik van Riel, Casey Schaufler, PaX Team, Brad Spengler,
	Russell King, Catalin Marinas, Will Deacon, Ard Biesheuvel,
	Benjamin Herrenschmidt, Michael Ellerman, Tony Luck, Fenghua Yu,
	David S. Miller, x86, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Mathias Krause, Jan Kara, Vitaly Wool,
	Andrea Arcangeli, Dmitry Vyukov, Laura Abbott, linux-arm-kernel,
	linux-ia64, linuxppc-dev, sparclinux, linux-arch, Linux-MM,
	kernel-hardening

On Thu, Jul 14, 2016 at 4:20 PM, Balbir Singh <bsingharora@gmail.com> wrote:
> On Wed, Jul 13, 2016 at 02:55:55PM -0700, Kees Cook wrote:
>> [...]
>> +++ b/mm/usercopy.c
>> @@ -0,0 +1,219 @@
>> [...]
>> +/*
>> + * Checks if a given pointer and length is contained by the current
>> + * stack frame (if possible).
>> + *
>> + *   0: not at all on the stack
>> + *   1: fully within a valid stack frame
>> + *   2: fully on the stack (when can't do frame-checking)
>> + *   -1: error condition (invalid stack position or bad stack frame)
>
> Can we use enums? Makes it easier to read/debug

Sure, I will update this.
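
Roughly something of this shape, say (a rough sketch only -- the names
are placeholders for illustration, not necessarily what will land):

enum stack_check {
	BAD_STACK = -1,		/* error: bad stack position or corrupt frame */
	NOT_STACK = 0,		/* not on the stack at all */
	GOOD_FRAME = 1,		/* fully within a valid stack frame */
	GOOD_STACK = 2,		/* on the stack, but frame checking unavailable */
};

so callers compare against named states instead of the magic 0/1/2/-1.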

>> [...]
>> +static void report_usercopy(const void *ptr, unsigned long len,
>> +                         bool to_user, const char *type)
>> +{
>> +     pr_emerg("kernel memory %s attempt detected %s %p (%s) (%lu bytes)\n",
>> +             to_user ? "exposure" : "overwrite",
>> +             to_user ? "from" : "to", ptr, type ? : "unknown", len);
>> +     dump_stack();
>> +     do_group_exit(SIGKILL);
>
> SIGKILL -- SIGBUS?

I'd like to keep SIGKILL since it indicates a process fiddling with a
kernel bug. The real problem here is that there doesn't seem to be an
arch-independent way to Oops the kernel and kill a process ("die()" is
closest, but it's defined on a per-arch basis with varying arguments).
This could be a BUG, but I'd rather not panic the entire kernel.

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 02/11] mm: Hardened usercopy
@ 2016-07-15  4:25       ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-15  4:25 UTC (permalink / raw)
  To: bsingharora
  Cc: LKML, Rik van Riel, Casey Schaufler, PaX Team, Brad Spengler,
	Russell King, Catalin Marinas, Will Deacon, Ard Biesheuvel,
	Benjamin Herrenschmidt, Michael Ellerman, Tony Luck, Fenghua Yu,
	David S. Miller, x86, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Mathias Krause

On Thu, Jul 14, 2016 at 4:20 PM, Balbir Singh <bsingharora@gmail.com> wrote:
> On Wed, Jul 13, 2016 at 02:55:55PM -0700, Kees Cook wrote:
>> [...]
>> +++ b/mm/usercopy.c
>> @@ -0,0 +1,219 @@
>> [...]
>> +/*
>> + * Checks if a given pointer and length is contained by the current
>> + * stack frame (if possible).
>> + *
>> + *   0: not at all on the stack
>> + *   1: fully within a valid stack frame
>> + *   2: fully on the stack (when can't do frame-checking)
>> + *   -1: error condition (invalid stack position or bad stack frame)
>
> Can we use enums? Makes it easier to read/debug

Sure, I will update this.

>> [...]
>> +static void report_usercopy(const void *ptr, unsigned long len,
>> +                         bool to_user, const char *type)
>> +{
>> +     pr_emerg("kernel memory %s attempt detected %s %p (%s) (%lu bytes)\n",
>> +             to_user ? "exposure" : "overwrite",
>> +             to_user ? "from" : "to", ptr, type ? : "unknown", len);
>> +     dump_stack();
>> +     do_group_exit(SIGKILL);
>
> SIGKILL -- SIGBUS?

I'd like to keep SIGKILL since it indicates a process fiddling with a
kernel bug. The real problem here is that there doesn't seem to be an
arch-independent way to Oops the kernel and kill a process ("die()" is
closest, but it's defined on a per-arch basis with varying arguments).
This could be a BUG, but I'd rather not panic the entire kernel.

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 02/11] mm: Hardened usercopy
@ 2016-07-15  4:25       ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-15  4:25 UTC (permalink / raw)
  To: bsingharora
  Cc: LKML, Rik van Riel, Casey Schaufler, PaX Team, Brad Spengler,
	Russell King, Catalin Marinas, Will Deacon, Ard Biesheuvel,
	Benjamin Herrenschmidt, Michael Ellerman, Tony Luck, Fenghua Yu,
	David S. Miller, x86, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Mathias Krause, Jan Kara, Vitaly Wool,
	Andrea Arcangeli, Dmitry Vyukov, Laura Abbott, linux-arm-kernel,
	linux-ia64, linuxppc-dev, sparclinux, linux-arch, Linux-MM,
	kernel-hardening

On Thu, Jul 14, 2016 at 4:20 PM, Balbir Singh <bsingharora@gmail.com> wrote:
> On Wed, Jul 13, 2016 at 02:55:55PM -0700, Kees Cook wrote:
>> [...]
>> +++ b/mm/usercopy.c
>> @@ -0,0 +1,219 @@
>> [...]
>> +/*
>> + * Checks if a given pointer and length is contained by the current
>> + * stack frame (if possible).
>> + *
>> + *   0: not at all on the stack
>> + *   1: fully within a valid stack frame
>> + *   2: fully on the stack (when can't do frame-checking)
>> + *   -1: error condition (invalid stack position or bad stack frame)
>
> Can we use enums? Makes it easier to read/debug

Sure, I will update this.

>> [...]
>> +static void report_usercopy(const void *ptr, unsigned long len,
>> +                         bool to_user, const char *type)
>> +{
>> +     pr_emerg("kernel memory %s attempt detected %s %p (%s) (%lu bytes)\n",
>> +             to_user ? "exposure" : "overwrite",
>> +             to_user ? "from" : "to", ptr, type ? : "unknown", len);
>> +     dump_stack();
>> +     do_group_exit(SIGKILL);
>
> SIGKILL -- SIGBUS?

I'd like to keep SIGKILL since it indicates a process fiddling with a
kernel bug. The real problem here is that there doesn't seem to be an
arch-independent way to Oops the kernel and kill a process ("die()" is
closest, but it's defined on a per-arch basis with varying arguments).
This could be a BUG, but I'd rather not panic the entire kernel.

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 02/11] mm: Hardened usercopy
@ 2016-07-15  4:25       ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-15  4:25 UTC (permalink / raw)
  To: bsingharora
  Cc: LKML, Rik van Riel, Casey Schaufler, PaX Team, Brad Spengler,
	Russell King, Catalin Marinas, Will Deacon, Ard Biesheuvel,
	Benjamin Herrenschmidt, Michael Ellerman, Tony Luck, Fenghua Yu,
	David S. Miller, x86, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Mathias Krause, Jan Kara, Vitaly Wool,
	Andrea Arcangeli, Dmitry Vyukov, Laura Abbott, linux-arm-kernel,
	linux-ia64, linuxppc-dev, sparclinux, linux-arch, Linux-MM,
	kernel-hardening

On Thu, Jul 14, 2016 at 4:20 PM, Balbir Singh <bsingharora@gmail.com> wrote:
> On Wed, Jul 13, 2016 at 02:55:55PM -0700, Kees Cook wrote:
>> [...]
>> +++ b/mm/usercopy.c
>> @@ -0,0 +1,219 @@
>> [...]
>> +/*
>> + * Checks if a given pointer and length is contained by the current
>> + * stack frame (if possible).
>> + *
>> + *   0: not at all on the stack
>> + *   1: fully within a valid stack frame
>> + *   2: fully on the stack (when can't do frame-checking)
>> + *   -1: error condition (invalid stack position or bad stack frame)
>
> Can we use enums? Makes it easier to read/debug

Sure, I will update this.

>> [...]
>> +static void report_usercopy(const void *ptr, unsigned long len,
>> +                         bool to_user, const char *type)
>> +{
>> +     pr_emerg("kernel memory %s attempt detected %s %p (%s) (%lu bytes)\n",
>> +             to_user ? "exposure" : "overwrite",
>> +             to_user ? "from" : "to", ptr, type ? : "unknown", len);
>> +     dump_stack();
>> +     do_group_exit(SIGKILL);
>
> SIGKILL -- SIGBUS?

I'd like to keep SIGKILL since it indicates a process fiddling with a
kernel bug. The real problem here is that there doesn't seem to be an
arch-independent way to Oops the kernel and kill a process ("die()" is
closest, but it's defined on a per-arch basis with varying arguments).
This could be a BUG, but I'd rather not panic the entire kernel.

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 02/11] mm: Hardened usercopy
@ 2016-07-15  4:25       ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-15  4:25 UTC (permalink / raw)
  To: bsingharora
  Cc: LKML, Rik van Riel, Casey Schaufler, PaX Team, Brad Spengler,
	Russell King, Catalin Marinas, Will Deacon, Ard Biesheuvel,
	Benjamin Herrenschmidt, Michael Ellerman, Tony Luck, Fenghua Yu,
	David S. Miller, x86, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Mathias Krause, Jan Kara, Vitaly Wool,
	Andrea Arcangeli, Dmitry Vyukov, Laura Abbott, linux-arm-kernel,
	linux-ia64, linuxppc-dev, sparclinux, linux-arch, Linux-MM,
	kernel-hardening

On Thu, Jul 14, 2016 at 4:20 PM, Balbir Singh <bsingharora@gmail.com> wrote:
> On Wed, Jul 13, 2016 at 02:55:55PM -0700, Kees Cook wrote:
>> [...]
>> +++ b/mm/usercopy.c
>> @@ -0,0 +1,219 @@
>> [...]
>> +/*
>> + * Checks if a given pointer and length is contained by the current
>> + * stack frame (if possible).
>> + *
>> + *   0: not at all on the stack
>> + *   1: fully within a valid stack frame
>> + *   2: fully on the stack (when can't do frame-checking)
>> + *   -1: error condition (invalid stack position or bad stack frame)
>
> Can we use enums? Makes it easier to read/debug

Sure, I will update this.

>> [...]
>> +static void report_usercopy(const void *ptr, unsigned long len,
>> +                         bool to_user, const char *type)
>> +{
>> +     pr_emerg("kernel memory %s attempt detected %s %p (%s) (%lu bytes)\n",
>> +             to_user ? "exposure" : "overwrite",
>> +             to_user ? "from" : "to", ptr, type ? : "unknown", len);
>> +     dump_stack();
>> +     do_group_exit(SIGKILL);
>
> SIGKILL -- SIGBUS?

I'd like to keep SIGKILL since it indicates a process fiddling with a
kernel bug. The real problem here is that there doesn't seem to be an
arch-independent way to Oops the kernel and kill a process ("die()" is
closest, but it's defined on a per-arch basis with varying arguments).
This could be a BUG, but I'd rather not panic the entire kernel.

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security


^ permalink raw reply	[flat|nested] 203+ messages in thread

* [PATCH v2 02/11] mm: Hardened usercopy
@ 2016-07-15  4:25       ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-15  4:25 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Jul 14, 2016 at 4:20 PM, Balbir Singh <bsingharora@gmail.com> wrote:
> On Wed, Jul 13, 2016 at 02:55:55PM -0700, Kees Cook wrote:
>> [...]
>> +++ b/mm/usercopy.c
>> @@ -0,0 +1,219 @@
>> [...]
>> +/*
>> + * Checks if a given pointer and length is contained by the current
>> + * stack frame (if possible).
>> + *
>> + *   0: not at all on the stack
>> + *   1: fully within a valid stack frame
>> + *   2: fully on the stack (when can't do frame-checking)
>> + *   -1: error condition (invalid stack position or bad stack frame)
>
> Can we use enums? Makes it easier to read/debug

Sure, I will update this.

>> [...]
>> +static void report_usercopy(const void *ptr, unsigned long len,
>> +                         bool to_user, const char *type)
>> +{
>> +     pr_emerg("kernel memory %s attempt detected %s %p (%s) (%lu bytes)\n",
>> +             to_user ? "exposure" : "overwrite",
>> +             to_user ? "from" : "to", ptr, type ? : "unknown", len);
>> +     dump_stack();
>> +     do_group_exit(SIGKILL);
>
> SIGKILL -- SIGBUS?

I'd like to keep SIGKILL since it indicates a process fiddling with a
kernel bug. The real problem here is that there doesn't seem to be an
arch-independent way to Oops the kernel and kill a process ("die()" is
closest, but it's defined on a per-arch basis with varying arguments).
This could be a BUG, but I'd rather not panic the entire kernel.

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 203+ messages in thread

* [kernel-hardening] Re: [PATCH v2 02/11] mm: Hardened usercopy
@ 2016-07-15  4:25       ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-15  4:25 UTC (permalink / raw)
  To: bsingharora
  Cc: LKML, Rik van Riel, Casey Schaufler, PaX Team, Brad Spengler,
	Russell King, Catalin Marinas, Will Deacon, Ard Biesheuvel,
	Benjamin Herrenschmidt, Michael Ellerman, Tony Luck, Fenghua Yu,
	David S. Miller, x86, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Mathias Krause, Jan Kara, Vitaly Wool,
	Andrea Arcangeli, Dmitry Vyukov, Laura Abbott, linux-arm-kernel,
	linux-ia64, linuxppc-dev, sparclinux, linux-arch, Linux-MM,
	kernel-hardening

On Thu, Jul 14, 2016 at 4:20 PM, Balbir Singh <bsingharora@gmail.com> wrote:
> On Wed, Jul 13, 2016 at 02:55:55PM -0700, Kees Cook wrote:
>> [...]
>> +++ b/mm/usercopy.c
>> @@ -0,0 +1,219 @@
>> [...]
>> +/*
>> + * Checks if a given pointer and length is contained by the current
>> + * stack frame (if possible).
>> + *
>> + *   0: not at all on the stack
>> + *   1: fully within a valid stack frame
>> + *   2: fully on the stack (when can't do frame-checking)
>> + *   -1: error condition (invalid stack position or bad stack frame)
>
> Can we use enums? Makes it easier to read/debug

Sure, I will update this.

>> [...]
>> +static void report_usercopy(const void *ptr, unsigned long len,
>> +                         bool to_user, const char *type)
>> +{
>> +     pr_emerg("kernel memory %s attempt detected %s %p (%s) (%lu bytes)\n",
>> +             to_user ? "exposure" : "overwrite",
>> +             to_user ? "from" : "to", ptr, type ? : "unknown", len);
>> +     dump_stack();
>> +     do_group_exit(SIGKILL);
>
> SIGKILL -- SIGBUS?

I'd like to keep SIGKILL since it indicates a process fiddling with a
kernel bug. The real problem here is that there doesn't seem to be an
arch-independent way to Oops the kernel and kill a process ("die()" is
closest, but it's defined on a per-arch basis with varying arguments).
This could be a BUG, but I'd rather not panic the entire kernel.

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 11/11] mm: SLUB hardened usercopy support
  2016-07-15  2:05     ` Balbir Singh
                         ` (4 preceding siblings ...)
  (?)
@ 2016-07-15  4:29       ` Kees Cook
  -1 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-15  4:29 UTC (permalink / raw)
  To: Balbir Singh
  Cc: LKML, Rik van Riel, Casey Schaufler, PaX Team, Brad Spengler,
	Russell King, Catalin Marinas, Will Deacon, Ard Biesheuvel,
	Benjamin Herrenschmidt, Michael Ellerman, Tony Luck, Fenghua Yu,
	David S. Miller, x86, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Mathias Krause, Jan Kara, Vitaly Wool,
	Andrea Arcangeli, Dmitry Vyukov, Laura Abbott, linux-arm-kernel,
	linux-ia64, linuxppc-dev, sparclinux, linux-arch, Linux-MM,
	kernel-hardening

On Thu, Jul 14, 2016 at 7:05 PM, Balbir Singh <bsingharora@gmail.com> wrote:
> On Wed, Jul 13, 2016 at 02:56:04PM -0700, Kees Cook wrote:
>> Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
>> SLUB allocator to catch any copies that may span objects. Includes a
>> redzone handling fix from Michael Ellerman.
>>
>> Based on code from PaX and grsecurity.
>>
>> Signed-off-by: Kees Cook <keescook@chromium.org>
>> ---
>>  init/Kconfig |  1 +
>>  mm/slub.c    | 36 ++++++++++++++++++++++++++++++++++++
>>  2 files changed, 37 insertions(+)
>>
>> diff --git a/init/Kconfig b/init/Kconfig
>> index 798c2020ee7c..1c4711819dfd 100644
>> --- a/init/Kconfig
>> +++ b/init/Kconfig
>> @@ -1765,6 +1765,7 @@ config SLAB
>>
>>  config SLUB
>>       bool "SLUB (Unqueued Allocator)"
>> +     select HAVE_HARDENED_USERCOPY_ALLOCATOR
>
> Should this patch come in earlier from a build perspective? I think
> patch 1 introduces and uses __check_heap_object.

__check_heap_object in patch 1 is protected by a check for
CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR.

It seemed better to me to do arch enablement first, and then add the
per-allocator heap object size check since it was a distinct piece.
I'm happy to rearrange things, though, if there's a good reason.
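
For reference, the guard in patch 1 has roughly this shape (a simplified
sketch of the declaration only, not the literal hunk):

#ifdef CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR
const char *__check_heap_object(const void *ptr, unsigned long n,
				struct page *page);
#else
static inline const char *__check_heap_object(const void *ptr,
					      unsigned long n,
					      struct page *page)
{
	/* No allocator support selected: nothing extra to reject. */
	return NULL;
}
#endif

so nothing in patch 2 calls into the allocator hook unless an allocator
actually selects the option.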

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 11/11] mm: SLUB hardened usercopy support
@ 2016-07-15  4:29       ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-15  4:29 UTC (permalink / raw)
  To: Balbir Singh
  Cc: LKML, Rik van Riel, Casey Schaufler, PaX Team, Brad Spengler,
	Russell King, Catalin Marinas, Will Deacon, Ard Biesheuvel,
	Benjamin Herrenschmidt, Michael Ellerman, Tony Luck, Fenghua Yu,
	David S. Miller, x86, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Mathias Krause, Ja

On Thu, Jul 14, 2016 at 7:05 PM, Balbir Singh <bsingharora@gmail.com> wrote:
> On Wed, Jul 13, 2016 at 02:56:04PM -0700, Kees Cook wrote:
>> Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
>> SLUB allocator to catch any copies that may span objects. Includes a
>> redzone handling fix from Michael Ellerman.
>>
>> Based on code from PaX and grsecurity.
>>
>> Signed-off-by: Kees Cook <keescook@chromium.org>
>> ---
>>  init/Kconfig |  1 +
>>  mm/slub.c    | 36 ++++++++++++++++++++++++++++++++++++
>>  2 files changed, 37 insertions(+)
>>
>> diff --git a/init/Kconfig b/init/Kconfig
>> index 798c2020ee7c..1c4711819dfd 100644
>> --- a/init/Kconfig
>> +++ b/init/Kconfig
>> @@ -1765,6 +1765,7 @@ config SLAB
>>
>>  config SLUB
>>       bool "SLUB (Unqueued Allocator)"
>> +     select HAVE_HARDENED_USERCOPY_ALLOCATOR
>
> Should this patch come in earlier from a build perspective? I think
> patch 1 introduces and uses __check_heap_object.

__check_heap_object in patch 1 is protected by a check for
CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR.

It seemed better to me to do arch enablement first, and then add the
per-allocator heap object size check since it was a distinct piece.
I'm happy to rearrange things, though, if there's a good reason.

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security


^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 11/11] mm: SLUB hardened usercopy support
@ 2016-07-15  4:29       ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-15  4:29 UTC (permalink / raw)
  To: Balbir Singh
  Cc: LKML, Rik van Riel, Casey Schaufler, PaX Team, Brad Spengler,
	Russell King, Catalin Marinas, Will Deacon, Ard Biesheuvel,
	Benjamin Herrenschmidt, Michael Ellerman, Tony Luck, Fenghua Yu,
	David S. Miller, x86, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Mathias Krause, Jan Kara, Vitaly Wool,
	Andrea Arcangeli, Dmitry Vyukov, Laura Abbott, linux-arm-kernel,
	linux-ia64, linuxppc-dev, sparclinux, linux-arch, Linux-MM,
	kernel-hardening

On Thu, Jul 14, 2016 at 7:05 PM, Balbir Singh <bsingharora@gmail.com> wrote:
> On Wed, Jul 13, 2016 at 02:56:04PM -0700, Kees Cook wrote:
>> Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
>> SLUB allocator to catch any copies that may span objects. Includes a
>> redzone handling fix from Michael Ellerman.
>>
>> Based on code from PaX and grsecurity.
>>
>> Signed-off-by: Kees Cook <keescook@chromium.org>
>> ---
>>  init/Kconfig |  1 +
>>  mm/slub.c    | 36 ++++++++++++++++++++++++++++++++++++
>>  2 files changed, 37 insertions(+)
>>
>> diff --git a/init/Kconfig b/init/Kconfig
>> index 798c2020ee7c..1c4711819dfd 100644
>> --- a/init/Kconfig
>> +++ b/init/Kconfig
>> @@ -1765,6 +1765,7 @@ config SLAB
>>
>>  config SLUB
>>       bool "SLUB (Unqueued Allocator)"
>> +     select HAVE_HARDENED_USERCOPY_ALLOCATOR
>
> Should this patch come in earlier from a build perspective? I think
> patch 1 introduces and uses __check_heap_object.

__check_heap_object in patch 1 is protected by a check for
CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR.

It seemed better to me to do arch enablement first, and then add the
per-allocator heap object size check since it was a distinct piece.
I'm happy to rearrange things, though, if there's a good reason.

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 11/11] mm: SLUB hardened usercopy support
@ 2016-07-15  4:29       ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-15  4:29 UTC (permalink / raw)
  To: Balbir Singh
  Cc: LKML, Rik van Riel, Casey Schaufler, PaX Team, Brad Spengler,
	Russell King, Catalin Marinas, Will Deacon, Ard Biesheuvel,
	Benjamin Herrenschmidt, Michael Ellerman, Tony Luck, Fenghua Yu,
	David S. Miller, x86, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Mathias Krause, Jan Kara, Vitaly Wool,
	Andrea Arcangeli, Dmitry Vyukov, Laura Abbott, linux-arm-kernel,
	linux-ia64, linuxppc-dev, sparclinux, linux-arch, Linux-MM,
	kernel-hardening

On Thu, Jul 14, 2016 at 7:05 PM, Balbir Singh <bsingharora@gmail.com> wrote:
> On Wed, Jul 13, 2016 at 02:56:04PM -0700, Kees Cook wrote:
>> Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
>> SLUB allocator to catch any copies that may span objects. Includes a
>> redzone handling fix from Michael Ellerman.
>>
>> Based on code from PaX and grsecurity.
>>
>> Signed-off-by: Kees Cook <keescook@chromium.org>
>> ---
>>  init/Kconfig |  1 +
>>  mm/slub.c    | 36 ++++++++++++++++++++++++++++++++++++
>>  2 files changed, 37 insertions(+)
>>
>> diff --git a/init/Kconfig b/init/Kconfig
>> index 798c2020ee7c..1c4711819dfd 100644
>> --- a/init/Kconfig
>> +++ b/init/Kconfig
>> @@ -1765,6 +1765,7 @@ config SLAB
>>
>>  config SLUB
>>       bool "SLUB (Unqueued Allocator)"
>> +     select HAVE_HARDENED_USERCOPY_ALLOCATOR
>
> Should this patch come in earlier from a build perspective? I think
> patch 1 introduces and uses __check_heap_object.

__check_heap_object in patch 1 is protected by a check for
CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR.

It seemed better to me to do arch enablement first, and then add the
per-allocator heap object size check since it was a distinct piece.
I'm happy to rearrange things, though, if there's a good reason.

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 11/11] mm: SLUB hardened usercopy support
@ 2016-07-15  4:29       ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-15  4:29 UTC (permalink / raw)
  To: Balbir Singh
  Cc: LKML, Rik van Riel, Casey Schaufler, PaX Team, Brad Spengler,
	Russell King, Catalin Marinas, Will Deacon, Ard Biesheuvel,
	Benjamin Herrenschmidt, Michael Ellerman, Tony Luck, Fenghua Yu,
	David S. Miller, x86, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Mathias Krause, Jan Kara, Vitaly Wool,
	Andrea Arcangeli, Dmitry Vyukov, Laura Abbott, linux-arm-kernel,
	linux-ia64, linuxppc-dev, sparclinux, linux-arch, Linux-MM,
	kernel-hardening

On Thu, Jul 14, 2016 at 7:05 PM, Balbir Singh <bsingharora@gmail.com> wrote:
> On Wed, Jul 13, 2016 at 02:56:04PM -0700, Kees Cook wrote:
>> Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
>> SLUB allocator to catch any copies that may span objects. Includes a
>> redzone handling fix from Michael Ellerman.
>>
>> Based on code from PaX and grsecurity.
>>
>> Signed-off-by: Kees Cook <keescook@chromium.org>
>> ---
>>  init/Kconfig |  1 +
>>  mm/slub.c    | 36 ++++++++++++++++++++++++++++++++++++
>>  2 files changed, 37 insertions(+)
>>
>> diff --git a/init/Kconfig b/init/Kconfig
>> index 798c2020ee7c..1c4711819dfd 100644
>> --- a/init/Kconfig
>> +++ b/init/Kconfig
>> @@ -1765,6 +1765,7 @@ config SLAB
>>
>>  config SLUB
>>       bool "SLUB (Unqueued Allocator)"
>> +     select HAVE_HARDENED_USERCOPY_ALLOCATOR
>
> Should this patch come in earlier from a build perspective? I think
> patch 1 introduces and uses __check_heap_object.

__check_heap_object in patch 1 is protected by a check for
CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR.

It seemed better to me to do arch enablement first, and then add the
per-allocator heap object size check since it was a distinct piece.
I'm happy to rearrange things, though, if there's a good reason.

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security


^ permalink raw reply	[flat|nested] 203+ messages in thread

* [PATCH v2 11/11] mm: SLUB hardened usercopy support
@ 2016-07-15  4:29       ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-15  4:29 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Jul 14, 2016 at 7:05 PM, Balbir Singh <bsingharora@gmail.com> wrote:
> On Wed, Jul 13, 2016 at 02:56:04PM -0700, Kees Cook wrote:
>> Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
>> SLUB allocator to catch any copies that may span objects. Includes a
>> redzone handling fix from Michael Ellerman.
>>
>> Based on code from PaX and grsecurity.
>>
>> Signed-off-by: Kees Cook <keescook@chromium.org>
>> ---
>>  init/Kconfig |  1 +
>>  mm/slub.c    | 36 ++++++++++++++++++++++++++++++++++++
>>  2 files changed, 37 insertions(+)
>>
>> diff --git a/init/Kconfig b/init/Kconfig
>> index 798c2020ee7c..1c4711819dfd 100644
>> --- a/init/Kconfig
>> +++ b/init/Kconfig
>> @@ -1765,6 +1765,7 @@ config SLAB
>>
>>  config SLUB
>>       bool "SLUB (Unqueued Allocator)"
>> +     select HAVE_HARDENED_USERCOPY_ALLOCATOR
>
> Should this patch come in earlier from a build perspective? I think
> patch 1 introduces and uses __check_heap_object.

__check_heap_object in patch 1 is protected by a check for
CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR.

It seemed better to me to do arch enablement first, and then add the
per-allocator heap object size check since it was a distinct piece.
I'm happy to rearrange things, though, if there's a good reason.

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 203+ messages in thread

* [kernel-hardening] Re: [PATCH v2 11/11] mm: SLUB hardened usercopy support
@ 2016-07-15  4:29       ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-15  4:29 UTC (permalink / raw)
  To: Balbir Singh
  Cc: LKML, Rik van Riel, Casey Schaufler, PaX Team, Brad Spengler,
	Russell King, Catalin Marinas, Will Deacon, Ard Biesheuvel,
	Benjamin Herrenschmidt, Michael Ellerman, Tony Luck, Fenghua Yu,
	David S. Miller, x86, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Mathias Krause, Jan Kara, Vitaly Wool,
	Andrea Arcangeli, Dmitry Vyukov, Laura Abbott, linux-arm-kernel,
	linux-ia64, linuxppc-dev, sparclinux, linux-arch, Linux-MM,
	kernel-hardening

On Thu, Jul 14, 2016 at 7:05 PM, Balbir Singh <bsingharora@gmail.com> wrote:
> On Wed, Jul 13, 2016 at 02:56:04PM -0700, Kees Cook wrote:
>> Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
>> SLUB allocator to catch any copies that may span objects. Includes a
>> redzone handling fix from Michael Ellerman.
>>
>> Based on code from PaX and grsecurity.
>>
>> Signed-off-by: Kees Cook <keescook@chromium.org>
>> ---
>>  init/Kconfig |  1 +
>>  mm/slub.c    | 36 ++++++++++++++++++++++++++++++++++++
>>  2 files changed, 37 insertions(+)
>>
>> diff --git a/init/Kconfig b/init/Kconfig
>> index 798c2020ee7c..1c4711819dfd 100644
>> --- a/init/Kconfig
>> +++ b/init/Kconfig
>> @@ -1765,6 +1765,7 @@ config SLAB
>>
>>  config SLUB
>>       bool "SLUB (Unqueued Allocator)"
>> +     select HAVE_HARDENED_USERCOPY_ALLOCATOR
>
> Should this patch come in earlier from a build perspective? I think
> patch 1 introduces and uses __check_heap_object.

__check_heap_object in patch 1 is protected by a check for
CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR.

It seemed better to me to do arch enablement first, and then add the
per-allocator heap object size check since it was a distinct piece.
I'm happy to rearrange things, though, if there's a good reason.

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 02/11] mm: Hardened usercopy
  2016-07-15  4:05           ` Kees Cook
                               ` (4 preceding siblings ...)
  (?)
@ 2016-07-15  4:53             ` Kees Cook
  -1 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-15  4:53 UTC (permalink / raw)
  To: Balbir Singh
  Cc: Rik van Riel, LKML, Casey Schaufler, PaX Team, Brad Spengler,
	Russell King, Catalin Marinas, Will Deacon, Ard Biesheuvel,
	Benjamin Herrenschmidt, Michael Ellerman, Tony Luck, Fenghua Yu,
	David S. Miller, x86, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Mathias Krause, Jan Kara, Vitaly Wool,
	Andrea Arcangeli, Dmitry Vyukov, Laura Abbott, linux-arm-kernel,
	linux-ia64, linuxppc-dev, sparclinux, linux-arch, Linux-MM,
	kernel-hardening

On Thu, Jul 14, 2016 at 9:05 PM, Kees Cook <keescook@chromium.org> wrote:
> On Thu, Jul 14, 2016 at 6:41 PM, Balbir Singh <bsingharora@gmail.com> wrote:
>> On Thu, Jul 14, 2016 at 09:04:18PM -0400, Rik van Riel wrote:
>>> On Fri, 2016-07-15 at 09:20 +1000, Balbir Singh wrote:
>>>
>>> > > ==
>>> > > +            ((unsigned long)end & (unsigned
>>> > > long)PAGE_MASK)))
>>> > > +         return NULL;
>>> > > +
>>> > > + /* Allow if start and end are inside the same compound
>>> > > page. */
>>> > > + endpage = virt_to_head_page(end);
>>> > > + if (likely(endpage == page))
>>> > > +         return NULL;
>>> > > +
>>> > > + /* Allow special areas, device memory, and sometimes
>>> > > kernel data. */
>>> > > + if (PageReserved(page) && PageReserved(endpage))
>>> > > +         return NULL;
>>> >
>>> > If we came here, it's likely that endpage > page, do we need to check
>>> > that only the first and last pages are reserved? What about the ones
>>> > in
>>> > the middle?
>>>
>>> I think this will be so rare, we can get away with just
>>> checking the beginning and the end.
>>>
>>
>> But do we want to leave a hole where an aware user space
>> can try a longer copy_* to avoid this check? If it is unlikely
>> should we just bite the bullet and do the check for the entire
>> range?
>
> I'd be okay with expanding the test -- it should be an extremely rare
> situation already since the common Reserved areas (kernel data) will
> have already been explicitly tested.
>
> What's the best way to do "next page"? Should it just be:
>
> for ( ; page <= endpage ; ptr += PAGE_SIZE, page = virt_to_head_page(ptr) ) {
>     if (!PageReserved(page))
>         return "<spans multiple pages>";
> }
>
> return NULL;
>
> ?

Er, I was testing the wrong thing. How about:

        /*
         * Reject if range is not Reserved (i.e. special or device memory),
         * since then the object spans several independently allocated pages.
         */
        for (; ptr <= end ; ptr += PAGE_SIZE, page = virt_to_head_page(ptr)) {
                if (!PageReserved(page))
                        return "<spans multiple pages>";
        }

        return NULL;
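
(Pulling the pieces together, the whole page-span check would then read
roughly like this -- just a sketch, the helper name is a placeholder and
the other special-case checks are left out:)

static const char *check_page_span(const void *ptr, unsigned long n)
{
	const void *end = ptr + n - 1;
	struct page *page, *endpage;

	if (!virt_addr_valid(ptr))
		return NULL;

	page = virt_to_head_page(ptr);

	/* Allow if start and end are inside the same compound page. */
	endpage = virt_to_head_page(end);
	if (likely(endpage == page))
		return NULL;

	/*
	 * Walk each page in the range and reject as soon as one is not
	 * Reserved, since the object would then span independently
	 * allocated pages.
	 */
	for (; ptr <= end; ptr += PAGE_SIZE, page = virt_to_head_page(ptr)) {
		if (!PageReserved(page))
			return "<spans multiple pages>";
	}

	return NULL;
}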



-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 02/11] mm: Hardened usercopy
@ 2016-07-15  4:53             ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-15  4:53 UTC (permalink / raw)
  To: Balbir Singh
  Cc: Rik van Riel, LKML, Casey Schaufler, PaX Team, Brad Spengler,
	Russell King, Catalin Marinas, Will Deacon, Ard Biesheuvel,
	Benjamin Herrenschmidt, Michael Ellerman, Tony Luck, Fenghua Yu,
	David S. Miller, x86, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Mathias Krause

On Thu, Jul 14, 2016 at 9:05 PM, Kees Cook <keescook@chromium.org> wrote:
> On Thu, Jul 14, 2016 at 6:41 PM, Balbir Singh <bsingharora@gmail.com> wrote:
>> On Thu, Jul 14, 2016 at 09:04:18PM -0400, Rik van Riel wrote:
>>> On Fri, 2016-07-15 at 09:20 +1000, Balbir Singh wrote:
>>>
>>> > > ==
>>> > > +            ((unsigned long)end & (unsigned
>>> > > long)PAGE_MASK)))
>>> > > +         return NULL;
>>> > > +
>>> > > + /* Allow if start and end are inside the same compound
>>> > > page. */
>>> > > + endpage = virt_to_head_page(end);
>>> > > + if (likely(endpage == page))
>>> > > +         return NULL;
>>> > > +
>>> > > + /* Allow special areas, device memory, and sometimes
>>> > > kernel data. */
>>> > > + if (PageReserved(page) && PageReserved(endpage))
>>> > > +         return NULL;
>>> >
>>> > If we came here, it's likely that endpage > page, do we need to check
>>> > that only the first and last pages are reserved? What about the ones
>>> > in
>>> > the middle?
>>>
>>> I think this will be so rare, we can get away with just
>>> checking the beginning and the end.
>>>
>>
>> But do we want to leave a hole where an aware user space
>> can try a longer copy_* to avoid this check? If it is unlikely
>> should we just bite the bullet and do the check for the entire
>> range?
>
> I'd be okay with expanding the test -- it should be an extremely rare
> situation already since the common Reserved areas (kernel data) will
> have already been explicitly tested.
>
> What's the best way to do "next page"? Should it just be:
>
> for ( ; page <= endpage ; ptr += PAGE_SIZE, page = virt_to_head_page(ptr) ) {
>     if (!PageReserved(page))
>         return "<spans multiple pages>";
> }
>
> return NULL;
>
> ?

Er, I was testing the wrong thing. How about:

        /*
         * Reject if range is not Reserved (i.e. special or device memory),
         * since then the object spans several independently allocated pages.
         */
        for (; ptr <= end ; ptr += PAGE_SIZE, page = virt_to_head_page(ptr)) {
                if (!PageReserved(page))
                        return "<spans multiple pages>";
        }

        return NULL;



-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 02/11] mm: Hardened usercopy
@ 2016-07-15  4:53             ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-15  4:53 UTC (permalink / raw)
  To: Balbir Singh
  Cc: Rik van Riel, LKML, Casey Schaufler, PaX Team, Brad Spengler,
	Russell King, Catalin Marinas, Will Deacon, Ard Biesheuvel,
	Benjamin Herrenschmidt, Michael Ellerman, Tony Luck, Fenghua Yu,
	David S. Miller, x86, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Mathias Krause, Jan Kara, Vitaly Wool,
	Andrea Arcangeli, Dmitry Vyukov, Laura Abbott, linux-arm-kernel,
	linux-ia64, linuxppc-dev, sparclinux, linux-arch, Linux-MM,
	kernel-hardening

On Thu, Jul 14, 2016 at 9:05 PM, Kees Cook <keescook@chromium.org> wrote:
> On Thu, Jul 14, 2016 at 6:41 PM, Balbir Singh <bsingharora@gmail.com> wrote:
>> On Thu, Jul 14, 2016 at 09:04:18PM -0400, Rik van Riel wrote:
>>> On Fri, 2016-07-15 at 09:20 +1000, Balbir Singh wrote:
>>>
>>> > > ==
>>> > > +            ((unsigned long)end & (unsigned
>>> > > long)PAGE_MASK)))
>>> > > +         return NULL;
>>> > > +
>>> > > + /* Allow if start and end are inside the same compound
>>> > > page. */
>>> > > + endpage = virt_to_head_page(end);
>>> > > + if (likely(endpage == page))
>>> > > +         return NULL;
>>> > > +
>>> > > + /* Allow special areas, device memory, and sometimes
>>> > > kernel data. */
>>> > > + if (PageReserved(page) && PageReserved(endpage))
>>> > > +         return NULL;
>>> >
>>> > If we came here, it's likely that endpage > page, do we need to check
>>> > that only the first and last pages are reserved? What about the ones
>>> > in
>>> > the middle?
>>>
>>> I think this will be so rare, we can get away with just
>>> checking the beginning and the end.
>>>
>>
>> But do we want to leave a hole where an aware user space
>> can try a longer copy_* to avoid this check? If it is unlikely
>> should we just bite the bullet and do the check for the entire
>> range?
>
> I'd be okay with expanding the test -- it should be an extremely rare
> situation already since the common Reserved areas (kernel data) will
> have already been explicitly tested.
>
> What's the best way to do "next page"? Should it just be:
>
> for ( ; page <= endpage ; ptr += PAGE_SIZE, page = virt_to_head_page(ptr) ) {
>     if (!PageReserved(page))
>         return "<spans multiple pages>";
> }
>
> return NULL;
>
> ?

Er, I was testing the wrong thing. How about:

        /*
         * Reject if range is not Reserved (i.e. special or device memory),
         * since then the object spans several independently allocated pages.
         */
        for (; ptr <= end ; ptr += PAGE_SIZE, page = virt_to_head_page(ptr)) {
                if (!PageReserved(page))
                        return "<spans multiple pages>";
        }

        return NULL;



-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 02/11] mm: Hardened usercopy
@ 2016-07-15  4:53             ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-15  4:53 UTC (permalink / raw)
  To: Balbir Singh
  Cc: Rik van Riel, LKML, Casey Schaufler, PaX Team, Brad Spengler,
	Russell King, Catalin Marinas, Will Deacon, Ard Biesheuvel,
	Benjamin Herrenschmidt, Michael Ellerman, Tony Luck, Fenghua Yu,
	David S. Miller, x86, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Mathias Krause, Jan Kara, Vitaly Wool,
	Andrea Arcangeli, Dmitry Vyukov, Laura Abbott, linux-arm-kernel,
	linux-ia64, linuxppc-dev, sparclinux, linux-arch, Linux-MM,
	kernel-hardening

On Thu, Jul 14, 2016 at 9:05 PM, Kees Cook <keescook@chromium.org> wrote:
> On Thu, Jul 14, 2016 at 6:41 PM, Balbir Singh <bsingharora@gmail.com> wrote:
>> On Thu, Jul 14, 2016 at 09:04:18PM -0400, Rik van Riel wrote:
>>> On Fri, 2016-07-15 at 09:20 +1000, Balbir Singh wrote:
>>>
>>> > > ==
>>> > > +            ((unsigned long)end & (unsigned
>>> > > long)PAGE_MASK)))
>>> > > +         return NULL;
>>> > > +
>>> > > + /* Allow if start and end are inside the same compound
>>> > > page. */
>>> > > + endpage = virt_to_head_page(end);
>>> > > + if (likely(endpage == page))
>>> > > +         return NULL;
>>> > > +
>>> > > + /* Allow special areas, device memory, and sometimes
>>> > > kernel data. */
>>> > > + if (PageReserved(page) && PageReserved(endpage))
>>> > > +         return NULL;
>>> >
>>> > If we came here, it's likely that endpage > page, do we need to check
>>> > that only the first and last pages are reserved? What about the ones
>>> > in
>>> > the middle?
>>>
>>> I think this will be so rare, we can get away with just
>>> checking the beginning and the end.
>>>
>>
>> But do we want to leave a hole where an aware user space
>> can try a longer copy_* to avoid this check? If it is unlikely
>> should we just bite the bullet and do the check for the entire
>> range?
>
> I'd be okay with expanding the test -- it should be an extremely rare
> situation already since the common Reserved areas (kernel data) will
> have already been explicitly tested.
>
> What's the best way to do "next page"? Should it just be:
>
> for ( ; page <= endpage ; ptr += PAGE_SIZE, page = virt_to_head_page(ptr) ) {
>     if (!PageReserved(page))
>         return "<spans multiple pages>";
> }
>
> return NULL;
>
> ?

Er, I was testing the wrong thing. How about:

        /*
         * Reject if range is not Reserved (i.e. special or device memory),
         * since then the object spans several independently allocated pages.
         */
        for (; ptr <= end ; ptr += PAGE_SIZE, page = virt_to_head_page(ptr)) {
                if (!PageReserved(page))
                        return "<spans multiple pages>";
        }

        return NULL;



-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 02/11] mm: Hardened usercopy
@ 2016-07-15  4:53             ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-15  4:53 UTC (permalink / raw)
  To: Balbir Singh
  Cc: Rik van Riel, LKML, Casey Schaufler, PaX Team, Brad Spengler,
	Russell King, Catalin Marinas, Will Deacon, Ard Biesheuvel,
	Benjamin Herrenschmidt, Michael Ellerman, Tony Luck, Fenghua Yu,
	David S. Miller, x86, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Mathias Krause, Jan Kara, Vitaly Wool,
	Andrea Arcangeli, Dmitry Vyukov, Laura Abbott, linux-arm-kernel,
	linux-ia64, linuxppc-dev, sparclinux, linux-arch, Linux-MM,
	kernel-hardening

On Thu, Jul 14, 2016 at 9:05 PM, Kees Cook <keescook@chromium.org> wrote:
> On Thu, Jul 14, 2016 at 6:41 PM, Balbir Singh <bsingharora@gmail.com> wrote:
>> On Thu, Jul 14, 2016 at 09:04:18PM -0400, Rik van Riel wrote:
>>> On Fri, 2016-07-15 at 09:20 +1000, Balbir Singh wrote:
>>>
>>> > > ==
>>> > > +            ((unsigned long)end & (unsigned
>>> > > long)PAGE_MASK)))
>>> > > +         return NULL;
>>> > > +
>>> > > + /* Allow if start and end are inside the same compound
>>> > > page. */
>>> > > + endpage = virt_to_head_page(end);
>>> > > + if (likely(endpage == page))
>>> > > +         return NULL;
>>> > > +
>>> > > + /* Allow special areas, device memory, and sometimes
>>> > > kernel data. */
>>> > > + if (PageReserved(page) && PageReserved(endpage))
>>> > > +         return NULL;
>>> >
>>> > If we came here, it's likely that endpage > page, do we need to check
>>> > that only the first and last pages are reserved? What about the ones
>>> > in
>>> > the middle?
>>>
>>> I think this will be so rare, we can get away with just
>>> checking the beginning and the end.
>>>
>>
>> But do we want to leave a hole where an aware user space
>> can try a longer copy_* to avoid this check? If it is unlikely
>> should we just bite the bullet and do the check for the entire
>> range?
>
> I'd be okay with expanding the test -- it should be an extremely rare
> situation already since the common Reserved areas (kernel data) will
> have already been explicitly tested.
>
> What's the best way to do "next page"? Should it just be:
>
> for ( ; page <= endpage ; ptr += PAGE_SIZE, page = virt_to_head_page(ptr) ) {
>     if (!PageReserved(page))
>         return "<spans multiple pages>";
> }
>
> return NULL;
>
> ?

Er, I was testing the wrong thing. How about:

        /*
         * Reject if range is not Reserved (i.e. special or device memory),
         * since then the object spans several independently allocated pages.
         */
        for (; ptr <= end ; ptr += PAGE_SIZE, page = virt_to_head_page(ptr)) {
                if (!PageReserved(page))
                        return "<spans multiple pages>";
        }

        return NULL;



-- 
Kees Cook
Chrome OS & Brillo Security


^ permalink raw reply	[flat|nested] 203+ messages in thread

* [PATCH v2 02/11] mm: Hardened usercopy
@ 2016-07-15  4:53             ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-15  4:53 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Jul 14, 2016 at 9:05 PM, Kees Cook <keescook@chromium.org> wrote:
> On Thu, Jul 14, 2016 at 6:41 PM, Balbir Singh <bsingharora@gmail.com> wrote:
>> On Thu, Jul 14, 2016 at 09:04:18PM -0400, Rik van Riel wrote:
>>> On Fri, 2016-07-15 at 09:20 +1000, Balbir Singh wrote:
>>>
>>> > > ==
>>> > > +            ((unsigned long)end & (unsigned
>>> > > long)PAGE_MASK)))
>>> > > +         return NULL;
>>> > > +
>>> > > + /* Allow if start and end are inside the same compound
>>> > > page. */
>>> > > + endpage = virt_to_head_page(end);
>>> > > + if (likely(endpage == page))
>>> > > +         return NULL;
>>> > > +
>>> > > + /* Allow special areas, device memory, and sometimes
>>> > > kernel data. */
>>> > > + if (PageReserved(page) && PageReserved(endpage))
>>> > > +         return NULL;
>>> >
>>> > If we came here, it's likely that endpage > page, do we need to check
>>> > that only the first and last pages are reserved? What about the ones
>>> > in
>>> > the middle?
>>>
>>> I think this will be so rare, we can get away with just
>>> checking the beginning and the end.
>>>
>>
>> But do we want to leave a hole where an aware user space
>> can try a longer copy_* to avoid this check? If it is unlikely
>> should we just bite the bullet and do the check for the entire
>> range?
>
> I'd be okay with expanding the test -- it should be an extremely rare
> situation already since the common Reserved areas (kernel data) will
> have already been explicitly tested.
>
> What's the best way to do "next page"? Should it just be:
>
> for ( ; page <= endpage ; ptr += PAGE_SIZE, page = virt_to_head_page(ptr) ) {
>     if (!PageReserved(page))
>         return "<spans multiple pages>";
> }
>
> return NULL;
>
> ?

Er, I was testing the wrong thing. How about:

        /*
         * Reject if range is not Reserved (i.e. special or device memory),
         * since then the object spans several independently allocated pages.
         */
        for (; ptr <= end ; ptr += PAGE_SIZE, page = virt_to_head_page(ptr)) {
                if (!PageReserved(page))
                        return "<spans multiple pages>";
        }

        return NULL;



-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 203+ messages in thread

* [kernel-hardening] Re: [PATCH v2 02/11] mm: Hardened usercopy
@ 2016-07-15  4:53             ` Kees Cook
  0 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-15  4:53 UTC (permalink / raw)
  To: Balbir Singh
  Cc: Rik van Riel, LKML, Casey Schaufler, PaX Team, Brad Spengler,
	Russell King, Catalin Marinas, Will Deacon, Ard Biesheuvel,
	Benjamin Herrenschmidt, Michael Ellerman, Tony Luck, Fenghua Yu,
	David S. Miller, x86, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Mathias Krause, Jan Kara, Vitaly Wool,
	Andrea Arcangeli, Dmitry Vyukov, Laura Abbott, linux-arm-kernel,
	linux-ia64, linuxppc-dev, sparclinux, linux-arch, Linux-MM,
	kernel-hardening

On Thu, Jul 14, 2016 at 9:05 PM, Kees Cook <keescook@chromium.org> wrote:
> On Thu, Jul 14, 2016 at 6:41 PM, Balbir Singh <bsingharora@gmail.com> wrote:
>> On Thu, Jul 14, 2016 at 09:04:18PM -0400, Rik van Riel wrote:
>>> On Fri, 2016-07-15 at 09:20 +1000, Balbir Singh wrote:
>>>
>>> > > ==
>>> > > +            ((unsigned long)end & (unsigned
>>> > > long)PAGE_MASK)))
>>> > > +         return NULL;
>>> > > +
>>> > > + /* Allow if start and end are inside the same compound
>>> > > page. */
>>> > > + endpage = virt_to_head_page(end);
>>> > > + if (likely(endpage == page))
>>> > > +         return NULL;
>>> > > +
>>> > > + /* Allow special areas, device memory, and sometimes
>>> > > kernel data. */
>>> > > + if (PageReserved(page) && PageReserved(endpage))
>>> > > +         return NULL;
>>> >
>>> > If we came here, it's likely that endpage > page, do we need to check
>>> > that only the first and last pages are reserved? What about the ones
>>> > in
>>> > the middle?
>>>
>>> I think this will be so rare, we can get away with just
>>> checking the beginning and the end.
>>>
>>
>> But do we want to leave a hole where an aware user space
>> can try a longer copy_* to avoid this check? If it is unlikely
>> should we just bite the bullet and do the check for the entire
>> range?
>
> I'd be okay with expanding the test -- it should be an extremely rare
> situation already since the common Reserved areas (kernel data) will
> have already been explicitly tested.
>
> What's the best way to do "next page"? Should it just be:
>
> for ( ; page <= endpage ; ptr += PAGE_SIZE, page = virt_to_head_page(ptr) ) {
>     if (!PageReserved(page))
>         return "<spans multiple pages>";
> }
>
> return NULL;
>
> ?

Er, I was testing the wrong thing. How about:

        /*
         * Reject if range is not Reserved (i.e. special or device memory),
         * since then the object spans several independently allocated pages.
         */
        for (; ptr <= end ; ptr += PAGE_SIZE, page = virt_to_head_page(ptr)) {
                if (!PageReserved(page))
                        return "<spans multiple pages>";
        }

        return NULL;



-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [PATCH v2 02/11] mm: Hardened usercopy
  2016-07-15  4:53             ` Kees Cook
                                 ` (4 preceding siblings ...)
  (?)
@ 2016-07-15 12:55               ` Balbir Singh
  -1 siblings, 0 replies; 203+ messages in thread
From: Balbir Singh @ 2016-07-15 12:55 UTC (permalink / raw)
  To: Kees Cook
  Cc: Balbir Singh, Rik van Riel, LKML, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, Linux-MM, kernel-hardening

On Thu, Jul 14, 2016 at 09:53:31PM -0700, Kees Cook wrote:
> On Thu, Jul 14, 2016 at 9:05 PM, Kees Cook <keescook@chromium.org> wrote:
> > On Thu, Jul 14, 2016 at 6:41 PM, Balbir Singh <bsingharora@gmail.com> wrote:
> >> On Thu, Jul 14, 2016 at 09:04:18PM -0400, Rik van Riel wrote:
> >>> On Fri, 2016-07-15 at 09:20 +1000, Balbir Singh wrote:
> >>>
> >>> > > ==
> >>> > > +            ((unsigned long)end & (unsigned
> >>> > > long)PAGE_MASK)))
> >>> > > +         return NULL;
> >>> > > +
> >>> > > + /* Allow if start and end are inside the same compound
> >>> > > page. */
> >>> > > + endpage = virt_to_head_page(end);
> >>> > > + if (likely(endpage == page))
> >>> > > +         return NULL;
> >>> > > +
> >>> > > + /* Allow special areas, device memory, and sometimes
> >>> > > kernel data. */
> >>> > > + if (PageReserved(page) && PageReserved(endpage))
> >>> > > +         return NULL;
> >>> >
> >>> > If we came here, it's likely that endpage > page, do we need to check
> >>> > that only the first and last pages are reserved? What about the ones
> >>> > in
> >>> > the middle?
> >>>
> >>> I think this will be so rare, we can get away with just
> >>> checking the beginning and the end.
> >>>
> >>
> >> But do we want to leave a hole where an aware user space
> >> can try a longer copy_* to avoid this check? If it is unlikely
> >> should we just bite the bullet and do the check for the entire
> >> range?
> >
> > I'd be okay with expanding the test -- it should be an extremely rare
> > situation already since the common Reserved areas (kernel data) will
> > have already been explicitly tested.
> >
> > What's the best way to do "next page"? Should it just be:
> >
> > for ( ; page <= endpage ; ptr += PAGE_SIZE, page = virt_to_head_page(ptr) ) {
> >     if (!PageReserved(page))
> >         return "<spans multiple pages>";
> > }
> >
> > return NULL;
> >
> > ?
> 
> Er, I was testing the wrong thing. How about:
> 
>         /*
>          * Reject if range is not Reserved (i.e. special or device memory),
>          * since then the object spans several independently allocated pages.
>          */
>         for (; ptr <= end ; ptr += PAGE_SIZE, page = virt_to_head_page(ptr)) {
>                 if (!PageReserved(page))
>                         return "<spans multiple pages>";
>         }
> 
>         return NULL;

That looks reasonable to me

Balbir

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [kernel-hardening] Re: [PATCH v2 02/11] mm: Hardened usercopy
  2016-07-15  4:25       ` Kees Cook
                           ` (3 preceding siblings ...)
  (?)
@ 2016-07-15 19:00         ` Daniel Micay
  -1 siblings, 0 replies; 203+ messages in thread
From: Daniel Micay @ 2016-07-15 19:00 UTC (permalink / raw)
  To: kernel-hardening, bsingharora
  Cc: LKML, Rik van Riel, Casey Schaufler, PaX Team, Brad Spengler,
	Russell King, Catalin Marinas, Will Deacon, Ard Biesheuvel,
	Benjamin Herrenschmidt, Michael Ellerman, Tony Luck, Fenghua Yu,
	David S. Miller, x86, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Mathias Krause, Jan Kara, Vitaly Wool,
	Andrea Arcangeli, Dmitry Vyukov, Laura Abbott, linux-arm-kernel,
	linux-ia64, linuxppc-dev, sparclinux, linux-arch, Linux-MM

[-- Attachment #1: Type: text/plain, Size: 760 bytes --]

> This could be a BUG, but I'd rather not panic the entire kernel.

It seems unlikely that it will panic without panic_on_oops and that's
an explicit opt-in to taking down the system on kernel logic errors
exactly like this. In grsecurity, it calls the kernel exploit handling
logic (panic if root, otherwise kill all processes of that user and ban
them until reboot) but that same logic is also called for BUG via oops
handling so there's only really a distinction with panic_on_oops=1.

Does it make sense to be less fatal for a fatal assertion that's more
likely to be security-related? Maybe you're worried about having some
false positives for the whitelisting portion, but I don't think those
will lurk around very long with the way this works.

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 851 bytes --]

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [kernel-hardening] Re: [PATCH v2 02/11] mm: Hardened usercopy
  2016-07-15 19:00         ` Daniel Micay
                             ` (3 preceding siblings ...)
  (?)
@ 2016-07-15 19:14           ` Kees Cook
  -1 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-15 19:14 UTC (permalink / raw)
  To: kernel-hardening
  Cc: Balbir Singh, LKML, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, Linux-MM

On Fri, Jul 15, 2016 at 12:00 PM, Daniel Micay <danielmicay@gmail.com> wrote:
>> This could be a BUG, but I'd rather not panic the entire kernel.
>
> It seems unlikely that it will panic without panic_on_oops and that's
> an explicit opt-in to taking down the system on kernel logic errors
> exactly like this. In grsecurity, it calls the kernel exploit handling
> logic (panic if root, otherwise kill all process of that user and ban
> them until reboot) but that same logic is also called for BUG via oops
> handling so there's only really a distinction with panic_on_oops=1.
>
> Does it make sense to be less fatal for a fatal assertion that's more
> likely to be security-related? Maybe you're worried about having some
> false positives for the whitelisting portion, but I don't think those
> will lurk around very long with the way this works.

I'd like it to dump stack and be fatal to the process involved, but
yeah, I guess BUG() would work. Creating an infrastructure for
handling security-related Oopses can be done separately from this (and
I'd like to see that added, since it's a nice bit of configurable
reactivity to possible attacks).

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 203+ messages in thread
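
A minimal sketch of the reaction Kees describes here (report the violation, dump the stack, and be fatal to the offending process rather than the whole kernel). The helper name report_usercopy() and the message format are assumptions for illustration; dump_stack() and do_group_exit() are existing kernel interfaces, but this is not the code from the series.

	#include <linux/printk.h>
	#include <linux/sched.h>

	/* Illustrative only: report, dump stack, kill the offending process. */
	static void report_usercopy(const void *ptr, unsigned long len,
				    const char *err)
	{
		pr_err("usercopy: kernel memory violation at %p (%lu bytes): %s\n",
		       ptr, len, err);
		dump_stack();
		/* Fatal to the calling process group, not to the whole kernel. */
		do_group_exit(SIGKILL);
	}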

* Re: [kernel-hardening] Re: [PATCH v2 02/11] mm: Hardened usercopy
  2016-07-15 19:14           ` Kees Cook
                               ` (3 preceding siblings ...)
  (?)
@ 2016-07-15 19:19             ` Daniel Micay
  -1 siblings, 0 replies; 203+ messages in thread
From: Daniel Micay @ 2016-07-15 19:19 UTC (permalink / raw)
  To: kernel-hardening
  Cc: Balbir Singh, LKML, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, Linux-MM

[-- Attachment #1: Type: text/plain, Size: 665 bytes --]

> I'd like it to dump stack and be fatal to the process involved, but
> yeah, I guess BUG() would work. Creating an infrastructure for
> handling security-related Oopses can be done separately from this
> (and
> I'd like to see that added, since it's a nice bit of configurable
> reactivity to possible attacks).

In grsecurity, the oops handling also uses do_group_exit instead of
do_exit but both that change (or at least the option to do it) and the
exploit handling could be done separately from this without actually
needing special treatment for USERCOPY. Could expose it as something
like panic_on_oops=2 as a balance between the existing options.

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 851 bytes --]

^ permalink raw reply	[flat|nested] 203+ messages in thread

* Re: [kernel-hardening] Re: [PATCH v2 02/11] mm: Hardened usercopy
  2016-07-15 19:19             ` Daniel Micay
                                 ` (3 preceding siblings ...)
  (?)
@ 2016-07-15 19:23               ` Kees Cook
  -1 siblings, 0 replies; 203+ messages in thread
From: Kees Cook @ 2016-07-15 19:23 UTC (permalink / raw)
  To: kernel-hardening
  Cc: Balbir Singh, LKML, Rik van Riel, Casey Schaufler, PaX Team,
	Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman,
	Tony Luck, Fenghua Yu, David S. Miller, x86, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
	Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott,
	linux-arm-kernel, linux-ia64, linuxppc-dev, sparclinux,
	linux-arch, Linux-MM

On Fri, Jul 15, 2016 at 12:19 PM, Daniel Micay <danielmicay@gmail.com> wrote:
>> I'd like it to dump stack and be fatal to the process involved, but
>> yeah, I guess BUG() would work. Creating an infrastructure for
>> handling security-related Oopses can be done separately from this
>> (and
>> I'd like to see that added, since it's a nice bit of configurable
>> reactivity to possible attacks).
>
> In grsecurity, the oops handling also uses do_group_exit instead of
> do_exit but both that change (or at least the option to do it) and the
> exploit handling could be done separately from this without actually
> needing special treatment for USERCOPY. Could expose is as something
> like panic_on_oops=2 as a balance between the existing options.

I'm also uncomfortable about BUG() being removed by unsetting
CONFIG_BUG, but that seems unlikely. :)

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 203+ messages in thread
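
If CONFIG_BUG=n is a concern, the check does not have to rely on BUG() alone. A hedged sketch of one way around it (purely illustrative; the helper name and structure are assumptions, not part of the series):

	#include <linux/bug.h>
	#include <linux/printk.h>
	#include <linux/sched.h>

	/* Illustrative fallback for when BUG() may be compiled out (CONFIG_BUG=n). */
	static void usercopy_abort_sketch(void)
	{
	#ifdef CONFIG_BUG
		BUG();
	#else
		pr_emerg("usercopy violation, killing offending process\n");
		dump_stack();
		do_group_exit(SIGKILL);
	#endif
	}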

end of thread, other threads:[~2016-07-15 19:23 UTC | newest]

Thread overview: 203+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-07-13 21:55 [PATCH v2 0/11] mm: Hardened usercopy Kees Cook
2016-07-13 21:55 ` [PATCH v2 01/11] mm: Implement stack frame object validation Kees Cook
2016-07-13 22:01   ` Andy Lutomirski
2016-07-13 22:04     ` Kees Cook
2016-07-14  5:48       ` Josh Poimboeuf
2016-07-14 18:10         ` Kees Cook
2016-07-14 19:23           ` Josh Poimboeuf
2016-07-14 21:38             ` Kees Cook
2016-07-13 21:55 ` [PATCH v2 02/11] mm: Hardened usercopy Kees Cook
2016-07-14 23:20   ` Balbir Singh
2016-07-15  1:04     ` Rik van Riel
2016-07-15  1:41       ` Balbir Singh
2016-07-15  4:05         ` Kees Cook
2016-07-15  4:53           ` Kees Cook
2016-07-15 12:55             ` Balbir Singh
2016-07-15  4:25     ` Kees Cook
2016-07-15 19:00       ` Daniel Micay
2016-07-15 19:14         ` Kees Cook
2016-07-15 19:19           ` Daniel Micay
2016-07-15 19:23             ` Kees Cook
2016-07-13 21:55 ` [PATCH v2 03/11] x86/uaccess: Enable hardened usercopy Kees Cook
2016-07-13 21:55 ` [PATCH v2 04/11] ARM: uaccess: " Kees Cook
2016-07-13 21:55 ` [PATCH v2 05/11] arm64/uaccess: " Kees Cook
2016-07-13 21:55 ` [PATCH v2 06/11] ia64/uaccess: " Kees Cook
2016-07-13 21:56 ` [PATCH v2 07/11] powerpc/uaccess: " Kees Cook
2016-07-13 21:56 ` [PATCH v2 08/11] sparc/uaccess: " Kees Cook
2016-07-13 21:56 ` [PATCH v2 09/11] s390/uaccess: " Kees Cook
2016-07-13 21:56 ` [PATCH v2 10/11] mm: SLAB hardened usercopy support Kees Cook
2016-07-13 21:56 ` [PATCH v2 11/11] mm: SLUB " Kees Cook
2016-07-14 10:07   ` Michael Ellerman
2016-07-15  2:05   ` Balbir Singh
2016-07-15  4:29     ` Kees Cook
