* [PATCH v7 0/9] Application Data Integrity feature introduced by SPARC M7
@ 2017-08-09 21:25 ` Khalid Aziz
From: Khalid Aziz @ 2017-08-09 21:25 UTC
  To: davem, dave.hansen
  Cc: Khalid Aziz, akpm, 0x7f454c46, aarcange, ak, allen.pais,
	aneesh.kumar, arnd, atish.patra, benh, bob.picco, bsingharora,
	chris.hyser, cmetcalf, corbet, dan.carpenter, dave.jiang, dja,
	eric.saint.etienne, geert, hannes, heiko.carstens, hillf.zj, hpa,
	hughd, imbrenda, jack, jmarchan, jroedel, kirill.shutemov,
	Liam.Howlett, lstoakes, mgorman, mhocko, mike.kravetz, minchan,
	mingo, mpe, nitin.m.gupta, pasha.tatashin, paul.gortmaker,
	paulus, peterz, rientjes, ross.zwisler, shli, steven.sistare,
	tglx, thomas.tai, tklauser, tom.hromatka, vegard.nossum,
	vijay.ac.kumar, viro, willy, x86, ying.huang, zhongjiang,
	sparclinux, linux-arch, linux-doc, linux-kernel, linux-mm,
	linuxppc-dev

The SPARC M7 processor adds metadata for the memory address space that
can be used to secure access to regions of memory. This metadata is
implemented as a 4-bit tag attached to each cacheline-sized block of
memory. A task can set a tag on any number of such blocks. Access to a
tagged block is granted only if the virtual address used to access it
carries the tag encoded in the uppermost 4 bits of the VA. Since the
SPARC processor does not implement all 64 bits of the VA, the top 4
bits are available for ADI tags. Any mismatch between the tag encoded
in the VA and the tag set on the memory block results in a trap. Tags
are verified against the VA presented to the MMU and are associated
with the physical page the VA maps onto. If a memory page is swapped
out and its page frame gets reused for another task, the tags are lost
and hence must be saved when swapping out or migrating the page.
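
For illustration, a version tag can be carried in a pointer like this
(adi_tag_ptr() is a hypothetical helper, not part of this series):

    /* Sketch: embed a 4-bit ADI version tag in VA bits 63:60 */
    static inline void *adi_tag_ptr(void *addr, unsigned long version)
    {
            unsigned long va = (unsigned long)addr & ~(0xfUL << 60);

            return (void *)(va | ((version & 0xfUL) << 60));
    }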

A userspace task enables ADI through mprotect(). This patch series adds
a page protection bit PROT_ADI and a corresponding VMA flag
VM_SPARC_ADI. VM_SPARC_ADI triggers setting the TTE.mcd bit in the
sparc pte, which enables ADI checking on the corresponding page. The
MMU validates the tag embedded in the VA for every page that has the
TTE.mcd bit set in its pte. After enabling ADI on a memory range, the
userspace task can set ADI version tags using the stxa instruction with
the ASI_MCD_PRIMARY or ASI_MCD_ST_BLKINIT_PRIMARY ASI.
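
Putting the pieces together, a minimal userspace sequence could look
like the sketch below. PROT_ADI comes from this series; the ASI number
(0x90 for ASI_MCD_PRIMARY) and the AT_ADI_BLKSZ auxiliary vector value
are assumptions based on the rest of the series, so treat the details
as illustrative:

    #include <stddef.h>
    #include <sys/auxv.h>
    #include <sys/mman.h>

    #ifndef PROT_ADI
    #define PROT_ADI        0x10    /* added by this series (sparc) */
    #endif
    #ifndef AT_ADI_BLKSZ
    #define AT_ADI_BLKSZ    48      /* assumed value, see patch 6/9 */
    #endif

    static void adi_enable_and_tag(char *addr, size_t len,
                                   unsigned long version)
    {
            /* ADI block size is reported via a new auxiliary vector */
            unsigned long blksz = getauxval(AT_ADI_BLKSZ);
            /* accesses must use a VA with the tag in bits 63:60 */
            char *tagged = (char *)((unsigned long)addr | (version << 60));
            size_t i;

            /* enable ADI checking on the range ... */
            mprotect(addr, len, PROT_READ | PROT_WRITE | PROT_ADI);

            /* ... then set the version tag on each ADI block:
             * stxa %version, [%tagged_addr] ASI_MCD_PRIMARY (0x90) */
            for (i = 0; i < len; i += blksz)
                    __asm__ __volatile__("stxa %0, [%1] 0x90"
                                         : : "r" (version), "r" (tagged + i)
                                         : "memory");
    }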

Once a userspace task calls mprotect() with PROT_ADI, the kernel takes
the following overall steps:

1. Find the VMAs covering the address range passed in to mprotect()
and set the VM_SPARC_ADI flag. If the address range covers only a
subset of a VMA, the VMA is split.

2. When a page is allocated for a VA and the VMA covering this VA has
the VM_SPARC_ADI flag set, set the TTE.mcd bit so the MMU will check
the version tag.

3. Userspace can now set version tags on the memory it has enabled ADI
on. Userspace accesses ADI-enabled memory using a virtual address that
has the version tag embedded in the high bits. The MMU validates this
version tag against the actual tag set on the memory. If the tags
match, the MMU performs the VA->PA translation and access is granted.
If there is a mismatch, the hypervisor sends a data access exception
or a precise memory corruption detected exception, depending upon
whether precise exceptions are enabled (controlled by the MCDPERR
register). The kernel sends SIGSEGV to the task with the appropriate
si_code; see the handler sketch after this list.

4. If a page is being swapped out or migrated, the kernel must save any
ADI tags set on the page. The kernel maintains a page worth of tag
storage descriptors. Each descriptor points to a tag storage space and
the address range it covers. If the page being swapped out or migrated
has ADI enabled on it, the kernel finds a tag storage descriptor that
covers the address range for the page, or allocates a new descriptor
if none of the existing descriptors covers that range. The kernel then
saves the tags from the page into the tag storage space the descriptor
points to.

5. When the page is swapped back in or reinstantiated after migration,
the kernel restores the version tags on the new physical page by
retrieving the original tags from the tag storage pointed to by the
tag storage descriptor for the virtual address range of the new page.
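
The si_codes mentioned in step 3 are the ones patch 1/9 introduces. A
sketch of a handler that distinguishes them (assuming headers that
carry this series' definitions):

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    /* install with sigaction() and SA_SIGINFO */
    static void segv_handler(int sig, siginfo_t *si, void *ctx)
    {
            switch (si->si_code) {
            case SEGV_ACCADI:       /* ADI not enabled for mapped object */
                    fprintf(stderr, "ADI access to non-ADI page at %p\n",
                            si->si_addr);
                    break;
            case SEGV_ADIDERR:      /* tag mismatch, disrupting MCD error */
            case SEGV_ADIPERR:      /* tag mismatch, precise MCD exception */
                    fprintf(stderr, "ADI tag mismatch at %p\n", si->si_addr);
                    break;
            default:
                    fprintf(stderr, "other SIGSEGV at %p\n", si->si_addr);
            }
            _exit(1);
    }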

A user task can disable ADI by calling mprotect() again on the memory
range with the PROT_ADI bit unset. The kernel clears the VM_SPARC_ADI
flag in the VMAs, merges adjacent VMAs if necessary, and clears the
TTE.mcd bit in the corresponding ptes.

The IOMMU does not support ADI checking. Any version tags embedded in
the top bits of a VA meant for the IOMMU are cleared and replaced with
a sign extension of the first non-version-tag bit (bit 59 for SPARC
M7) before the address is handed to the IOMMU.
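
That replacement amounts to sign-extending the address from bit 59,
roughly:

    /* Sketch: drop the 4-bit version tag by sign-extending from bit 59
     * (the first non-version-tag bit on SPARC M7).
     */
    static inline unsigned long adi_untag_addr(unsigned long va)
    {
            return (unsigned long)((long)(va << 4) >> 4);
    }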

This patch series adds support for this feature in 9 patches:

Patch 1/9
  A tag mismatch on access by a task results in a trap from the
  hypervisor, either a data access exception or a precise memory
  corruption detected exception. As part of handling these exceptions,
  the kernel sends a SIGSEGV to the user process with a special si_code
  to indicate which fault occurred. This patch adds three new si_codes
  to differentiate between the various mismatch errors.

Patch 2/9
  When a page is swapped out or migrated, metadata associated with the
  page must be saved so it can be restored later. This patch adds a new
  function that saves/restores this metadata when the pte is updated
  upon swap/migration.
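
Per the v5 changelog below, the resulting hooks are arch_do_swap_page()
and arch_unmap_one(), no-ops by default so an architecture can override
them. A sketch of the generic side (signatures per this series,
slightly simplified):

    /* include/asm-generic/pgtable.h (sketch) */
    #ifndef __HAVE_ARCH_DO_SWAP_PAGE
    static inline void arch_do_swap_page(struct mm_struct *mm,
                                         struct vm_area_struct *vma,
                                         unsigned long addr,
                                         pte_t pte, pte_t oldpte)
    {
            /* restore metadata when the page is swapped back in */
    }
    #endif

    #ifndef __HAVE_ARCH_UNMAP_ONE
    static inline int arch_unmap_one(struct mm_struct *mm,
                                     struct vm_area_struct *vma,
                                     unsigned long addr, pte_t orig_pte)
    {
            /* save metadata when the page is unmapped for swap/migration */
            return 0;
    }
    #endif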

Patch 3/9
  The SPARC M7 processor adds new fields to control registers to
  support the ADI feature. It also adds a new exception for precise
  traps on tag mismatch. This patch adds definitions for the new
  control register fields, new ASIs for ADI, and an exception handler
  for the precise trap on tag mismatch.

Patch 4/9
  New hypervisor fault types were added by the SPARC M7 processor to
  support the ADI feature. This patch adds code to handle these fault
  types in the data access exception handler.

Patch 5/9
  When ADI is in use for a page and a tag mismatch occurs, the
  processor raises a "Memory Corruption Detected" trap. This patch adds
  a handler for this trap.

Patch 6/9
  ADI usage is governed by the ADI properties of a platform. These
  properties are provided to the kernel by firmware. This patch adds
  new auxiliary vectors that pass these values on to userspace.
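
Userspace would read these properties with getauxval(). AT_ADI_BLKSZ
and AT_ADI_NBITS are the vectors this series adds; the numeric values
below are assumptions for illustration:

    #include <stdio.h>
    #include <sys/auxv.h>

    #ifndef AT_ADI_BLKSZ
    #define AT_ADI_BLKSZ    48      /* assumed values, per this series */
    #define AT_ADI_NBITS    49
    #endif

    int main(void)
    {
            printf("ADI block size: %lu bytes, tag bits: %lu\n",
                   getauxval(AT_ADI_BLKSZ), getauxval(AT_ADI_NBITS));
            return 0;
    }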

Patch 7/9
  arch_validate_prot() is used to validate the new protection bits
  asked for by a userspace app. Validating protection bits may need the
  context of the address space the bits are being applied to. One such
  example is the PROT_ADI bit on the SPARC processor, which enables ADI
  protection on an address range. ADI protection applies only to
  addresses covered by physical RAM, not to other PFN-mapped or device
  addresses. This patch adds an address parameter to
  arch_validate_prot() to provide that context.
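
A sketch of the resulting interface; the sparc body is illustrative of
the checks described above (addr_is_ram_backed() is a hypothetical
helper), not the exact patch:

    /* generic default (sketch): same check as before, address passed in */
    #ifndef arch_validate_prot
    static inline int arch_validate_prot(unsigned long prot,
                                         unsigned long addr)
    {
            return (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC |
                             PROT_SEM)) == 0;
    }
    #define arch_validate_prot arch_validate_prot
    #endif

    /* sparc override (sketch): allow PROT_ADI only where it can apply */
    static inline int sparc_validate_prot(unsigned long prot,
                                          unsigned long addr)
    {
            if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM |
                         PROT_ADI))
                    return 0;
            if (prot & PROT_ADI)
                    return addr_is_ram_backed(addr); /* hypothetical check */
            return 1;
    }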

Patch 8/9
  When protection bits are changed on a page, the kernel carries
  forward all protection bits except read/write/exec. Code was added to
  allow the kernel to clear PKEY bits on x86, but this requirement to
  clear other bits is not unique to x86. This patch extends the
  existing code to allow other architectures to clear other,
  architecture-specific protection bits as well on a protection change.
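
One plausible shape for this (names are assumptions, not necessarily
the exact patch):

    /* include/linux/mm.h (sketch): VM flags an arch wants dropped when
     * protection bits change via mprotect()
     */
    #if defined(CONFIG_SPARC64)
    # define VM_ARCH_CLEAR  VM_SPARC_ADI
    #else
    # define VM_ARCH_CLEAR  VM_NONE
    #endif

    /* mm/mprotect.c (sketch): clear them along with read/write/exec */
    mask_off_old_flags = VM_READ | VM_WRITE | VM_EXEC | VM_ARCH_CLEAR;
    newflags = calc_vm_prot_bits(prot, pkey) |
               (vma->vm_flags & ~mask_off_old_flags);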

Patch 9/9
  This patch adds support for a userspace task to enable ADI and turn
  on tag checking for subsets of its address space. As part of enabling
  this feature, it adds support for manipulating the precise exception
  for memory corruption detection, code to save and restore tags on
  page swap and migration, and code to handle ADI-tagged addresses for
  DMA.


Changelog v7:

	- Patch 1/9: No changes
	- Patch 2/9: Updated parameters to arch specific swap in/out
	  handlers
	- Patch 3/9: No changes
	- Patch 4/9: new patch split off from patch 4/4 in v6
	- Patch 5/9: new patch split off from patch 4/4 in v6
	- Patch 6/9: new patch split off from patch 4/4 in v6
	- Patch 7/9: new patch
	- Patch 8/9: new patch
	- Patch 9/9:
		- Enhanced arch_validate_prot() to enable ADI only on
		  writable addresses backed by physical RAM
		- Added support for saving/restoring ADI tags for each
		  ADI block size address range on a page on swap in/out
		- copy ADI tags on COW
		- Updated auxiliary vector values so they do not conflict
		  with values used by other architectures, avoiding a
		  clash in glibc
		- Disable same page merging on ADI enabled pages
		- Enable ADI only on writable addresses backed by
		  physical RAM
		- Split parts of patch off into separate patches

Changelog v6:
	- Patch 1/4: No changes
	- Patch 2/4: No changes
	- Patch 3/4: Added missing nop in the delay slot in
	  sun4v_mcd_detect_precise
	- Patch 4/4: Eliminated instructions to read and write PSTATE
	  as well as MCDPER and PMCDPER on every access to userspace
	  addresses by setting PSTATE and PMCDPER correctly upon entry
	  into kernel

Changelog v5:
	- Patch 1/4: No changes
	- Patch 2/4: Replaced set_swp_pte_at() with new architecture
	  functions arch_do_swap_page() and arch_unmap_one() that
	  support architecture-specific actions to be taken on page
	  swap and migration
	- Patch 3/4: Fixed indentation issues in assembly code
	- Patch 4/4:
		- Fixed indentation issues and incorrect instructions in
		  assembly code
		- Removed CONFIG_SPARC64 from mdesc.c
		- Changed to maintain state of MCDPER register in thread
		  info flags as opposed to in mm context. MCDPER is a
		  per-thread state and belongs in thread info flag as
		  opposed to mm context which is shared across threads.
		  Added comments to clarify this is a lazily maintained
		  state and must be updated on context switch and
		  copy_process() 
		- Updated code to use the new arch_do_swap_page() and
		  arch_unmap_one() functions

Testing:

- All functionality was tested with 8K normal pages as well as hugepages
  using malloc, mmap and shm.
- Multiple long duration stress tests were run using hugepages over 2+
  months. Normal pages were tested with shorter duration stress tests.
- Tested swapping with malloc and shm by reducing max memory and
  having active processes using ADI allocate three times the available
  system memory. Ran multiple hours-long runs of this test.
- Tested page migration with malloc and shm by migrating data pages of
  an active ADI test process back and forth between two nodes every few
  seconds, using migratepages, over an hour-long run. Verified page
  migration through /proc/<pid>/numa_maps.
- Tested COW support with a test that forks children that read from
  ADI-enabled pages shared with the parent and other children, and
  also write to them, forcing COW; a sketch follows.
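
A minimal sketch of such a COW test (hypothetical, for illustration;
in the real test the buffer would be ADI-enabled and tagged first, as
in the mprotect(PROT_ADI)/stxa sketch earlier):

    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NCHILD  4

    int main(void)
    {
            size_t len = 2 * 1024 * 1024;
            char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            int i;

            if (buf == MAP_FAILED)
                    return 1;
            memset(buf, 0xa5, len);
            for (i = 0; i < NCHILD; i++) {
                    if (fork() == 0) {
                            /* read the shared page, then write to force
                             * COW; the child's new page must carry the
                             * same ADI tags
                             */
                            char c = buf[i * 4096];

                            buf[i * 4096] = c + 1;
                            _exit(0);
                    }
            }
            while (wait(NULL) > 0)
                    ;
            return 0;
    }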


---------
Khalid Aziz (9):
  signals, sparc: Add signal codes for ADI violations
  mm, swap: Add infrastructure for saving page metadata as well on swap
  sparc64: Add support for ADI register fields, ASIs and traps
  sparc64: Add HV fault type handlers for ADI related faults
  sparc64: Add handler for "Memory Corruption Detected" trap
  sparc64: Add auxiliary vectors to report platform ADI properties
  mm: Add address parameter to arch_validate_prot()
  mm: Clear arch specific VM flags on protection change
  sparc64: Add support for ADI (Application Data Integrity)

 Documentation/sparc/adi.txt             | 272 +++++++++++++++++++++++
 arch/powerpc/include/asm/mman.h         |   2 +-
 arch/powerpc/kernel/syscalls.c          |   2 +-
 arch/sparc/include/asm/adi.h            |   6 +
 arch/sparc/include/asm/adi_64.h         |  45 ++++
 arch/sparc/include/asm/elf_64.h         |   8 +
 arch/sparc/include/asm/hypervisor.h     |   2 +
 arch/sparc/include/asm/mman.h           |  72 ++++++-
 arch/sparc/include/asm/mmu_64.h         |  17 ++
 arch/sparc/include/asm/mmu_context_64.h |  43 ++++
 arch/sparc/include/asm/page_64.h        |   4 +
 arch/sparc/include/asm/pgtable_64.h     |  48 +++++
 arch/sparc/include/asm/thread_info_64.h |   2 +-
 arch/sparc/include/asm/trap_block.h     |   2 +
 arch/sparc/include/asm/ttable.h         |  10 +
 arch/sparc/include/uapi/asm/asi.h       |   5 +
 arch/sparc/include/uapi/asm/auxvec.h    |  10 +
 arch/sparc/include/uapi/asm/mman.h      |   2 +
 arch/sparc/include/uapi/asm/pstate.h    |  10 +
 arch/sparc/kernel/Makefile              |   1 +
 arch/sparc/kernel/adi_64.c              | 367 ++++++++++++++++++++++++++++++++
 arch/sparc/kernel/entry.h               |   3 +
 arch/sparc/kernel/etrap_64.S            |  28 ++-
 arch/sparc/kernel/head_64.S             |   1 +
 arch/sparc/kernel/mdesc.c               |   2 +
 arch/sparc/kernel/process_64.c          |  25 +++
 arch/sparc/kernel/setup_64.c            |  11 +-
 arch/sparc/kernel/sun4v_mcd.S           |  17 ++
 arch/sparc/kernel/traps_64.c            | 142 +++++++++++-
 arch/sparc/kernel/ttable_64.S           |   6 +-
 arch/sparc/kernel/vmlinux.lds.S         |   5 +
 arch/sparc/mm/gup.c                     |  37 ++++
 arch/sparc/mm/hugetlbpage.c             |  14 +-
 arch/sparc/mm/init_64.c                 |  33 +++
 arch/sparc/mm/tsb.c                     |  21 ++
 arch/x86/kernel/signal_compat.c         |   2 +-
 include/asm-generic/pgtable.h           |  36 ++++
 include/linux/mm.h                      |   9 +
 include/linux/mman.h                    |   2 +-
 include/uapi/asm-generic/siginfo.h      |   5 +-
 mm/ksm.c                                |   4 +
 mm/memory.c                             |   1 +
 mm/mprotect.c                           |   4 +-
 mm/rmap.c                               |  13 ++
 44 files changed, 1334 insertions(+), 17 deletions(-)
 create mode 100644 Documentation/sparc/adi.txt
 create mode 100644 arch/sparc/include/asm/adi.h
 create mode 100644 arch/sparc/include/asm/adi_64.h
 create mode 100644 arch/sparc/kernel/adi_64.c
 create mode 100644 arch/sparc/kernel/sun4v_mcd.S

-- 
2.11.0

* [PATCH v7 1/9] signals, sparc: Add signal codes for ADI violations
@ 2017-08-09 21:25   ` Khalid Aziz
From: Khalid Aziz @ 2017-08-09 21:25 UTC
  To: arnd, davem, dave.hansen
  Cc: Khalid Aziz, hpa, 0x7f454c46, tglx, mingo, x86, jroedel,
	linux-kernel, linux-arch, sparclinux, Khalid Aziz

The SPARC M7 processor introduces a new feature - Application Data
Integrity (ADI). ADI allows the MMU to catch rogue accesses to memory.
When a rogue access occurs, the MMU blocks the access and raises an
exception. In response to the exception, the kernel sends the offending
task a SIGSEGV with an si_code that indicates the nature of the
exception. This patch adds three new signal codes specific to the ADI
feature:

1. ADI is not enabled for the address and the task attempted to access
   memory using ADI.
2. The task attempted to access memory using the wrong ADI tag and
   caused a deferred exception.
3. The task attempted to access memory using the wrong ADI tag and
   caused a precise exception.

Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
Cc: Khalid Aziz <khalid@gonehiking.org>
---
 arch/x86/kernel/signal_compat.c    | 2 +-
 include/uapi/asm-generic/siginfo.h | 5 ++++-
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/signal_compat.c b/arch/x86/kernel/signal_compat.c
index 71beb28600d4..e6e0b50230c3 100644
--- a/arch/x86/kernel/signal_compat.c
+++ b/arch/x86/kernel/signal_compat.c
@@ -26,7 +26,7 @@ static inline void signal_compat_build_tests(void)
 	 */
 	BUILD_BUG_ON(NSIGILL  != 8);
 	BUILD_BUG_ON(NSIGFPE  != 8);
-	BUILD_BUG_ON(NSIGSEGV != 4);
+	BUILD_BUG_ON(NSIGSEGV != 7);
 	BUILD_BUG_ON(NSIGBUS  != 5);
 	BUILD_BUG_ON(NSIGTRAP != 4);
 	BUILD_BUG_ON(NSIGCHLD != 6);
diff --git a/include/uapi/asm-generic/siginfo.h b/include/uapi/asm-generic/siginfo.h
index 1abaf62c86fc..24468643ee9d 100644
--- a/include/uapi/asm-generic/siginfo.h
+++ b/include/uapi/asm-generic/siginfo.h
@@ -213,7 +213,10 @@ typedef struct siginfo {
 #define SEGV_ACCERR	(__SI_FAULT|2)	/* invalid permissions for mapped object */
 #define SEGV_BNDERR	(__SI_FAULT|3)  /* failed address bound checks */
 #define SEGV_PKUERR	(__SI_FAULT|4)  /* failed protection key checks */
-#define NSIGSEGV	4
+#define SEGV_ACCADI	(__SI_FAULT|5)	/* ADI not enabled for mapped object */
+#define SEGV_ADIDERR	(__SI_FAULT|6)	/* Disrupting MCD error */
+#define SEGV_ADIPERR	(__SI_FAULT|7)	/* Precise MCD exception */
+#define NSIGSEGV	7
 
 /*
  * SIGBUS si_codes
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 86+ messages in thread

* [PATCH v7 2/9] mm, swap: Add infrastructure for saving page metadata on swap
  2017-08-09 21:25 ` Khalid Aziz
@ 2017-08-09 21:25   ` Khalid Aziz
  0 siblings, 0 replies; 86+ messages in thread
From: Khalid Aziz @ 2017-08-09 21:25 UTC (permalink / raw)
  To: akpm, davem, dave.hansen, arnd
  Cc: Khalid Aziz, kirill.shutemov, mhocko, jack, ross.zwisler,
	aneesh.kumar, dave.jiang, willy, hughd, minchan, hannes,
	hillf.zj, shli, mingo, jmarchan, lstoakes, linux-arch,
	linux-kernel, linux-mm, sparclinux, Khalid Aziz


If a processor supports special metadata for a page, for example ADI
version tags on SPARC M7, this metadata must be saved when the page is
swapped out. The same metadata must be restored when the page is swapped
back in. This patch adds two new architecture-specific functions -
arch_do_swap_page() to be called when a page is swapped in, and
arch_unmap_one() to be called when a page is being unmapped for swap
out. These architecture hooks allow page metadata to be saved if the
architecture supports it.
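
As a sketch of how an architecture opts in - only the two hook names
and signatures below come from this patch, the file placement is
illustrative - it defines the override macros in its asm/pgtable.h and
supplies real implementations:

	/* hypothetical arch/<arch>/include/asm/pgtable.h excerpt */
	#define __HAVE_ARCH_DO_SWAP_PAGE
	void arch_do_swap_page(struct mm_struct *mm, struct vm_area_struct *vma,
			       unsigned long addr, pte_t pte, pte_t oldpte);

	#define __HAVE_ARCH_UNMAP_ONE
	int arch_unmap_one(struct mm_struct *mm, struct vm_area_struct *vma,
			   unsigned long addr, pte_t orig_pte);

A negative return from arch_unmap_one() makes try_to_unmap_one()
restore the original pte and fail the unmap, as the mm/rmap.c hunks
below show.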

Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
Cc: Khalid Aziz <khalid@gonehiking.org>
Acked-by: Jerome Marchand <jmarchan@redhat.com>
---
v6:
	- Updated parameter list for arch_do_swap_page() and
	  arch_unmap_one()
v5:
	- Replaced set_swp_pte() function with new architecture
	  functions arch_do_swap_page() and arch_unmap_one()

 include/asm-generic/pgtable.h | 36 ++++++++++++++++++++++++++++++++++++
 mm/memory.c                   |  1 +
 mm/rmap.c                     | 13 +++++++++++++
 3 files changed, 50 insertions(+)

diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index 7dfa767dc680..15668c2470b4 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -392,6 +392,42 @@ static inline int pud_same(pud_t pud_a, pud_t pud_b)
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 #endif
 
+#ifndef __HAVE_ARCH_DO_SWAP_PAGE
+/*
+ * Some architectures support metadata associated with a page. When a
+ * page is being swapped out, this metadata must be saved so it can be
+ * restored when the page is swapped back in. SPARC M7 and newer
+ * processors support an ADI (Application Data Integrity) tag for the
+ * page as metadata for the page. arch_do_swap_page() can restore this
+ * metadata when a page is swapped back in.
+ */
+static inline void arch_do_swap_page(struct mm_struct *mm,
+				     struct vm_area_struct *vma,
+				     unsigned long addr,
+				     pte_t pte, pte_t oldpte)
+{
+
+}
+#endif
+
+#ifndef __HAVE_ARCH_UNMAP_ONE
+/*
+ * Some architectures support metadata associated with a page. When a
+ * page is being swapped out, this metadata must be saved so it can be
+ * restored when the page is swapped back in. SPARC M7 and newer
+ * processors support an ADI (Application Data Integrity) tag for the
+ * page as metadata for the page. arch_unmap_one() can save this
+ * metadata on a swap-out of a page.
+ */
+static inline int arch_unmap_one(struct mm_struct *mm,
+				  struct vm_area_struct *vma,
+				  unsigned long addr,
+				  pte_t orig_pte)
+{
+	return 0;
+}
+#endif
+
 #ifndef __HAVE_ARCH_PGD_OFFSET_GATE
 #define pgd_offset_gate(mm, addr)	pgd_offset(mm, addr)
 #endif
diff --git a/mm/memory.c b/mm/memory.c
index bb11c474857e..eb92e4f94d3b 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2798,6 +2798,7 @@ int do_swap_page(struct vm_fault *vmf)
 	if (pte_swp_soft_dirty(vmf->orig_pte))
 		pte = pte_mksoft_dirty(pte);
 	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
+	arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
 	vmf->orig_pte = pte;
 	if (page == swapcache) {
 		do_page_add_anon_rmap(page, vma, vmf->address, exclusive);
diff --git a/mm/rmap.c b/mm/rmap.c
index d405f0e0ee96..5ff2a7943c57 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1399,6 +1399,13 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 				(flags & TTU_MIGRATION)) {
 			swp_entry_t entry;
 			pte_t swp_pte;
+
+			if (arch_unmap_one(mm, vma, address, pteval) < 0) {
+				set_pte_at(mm, address, pvmw.pte, pteval);
+				ret = false;
+				page_vma_mapped_walk_done(&pvmw);
+				break;
+			}
 			/*
 			 * Store the pfn of the page in a special migration
 			 * pte. do_swap_page() will wait until the migration
@@ -1448,6 +1455,12 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 				page_vma_mapped_walk_done(&pvmw);
 				break;
 			}
+			if (arch_unmap_one(mm, vma, address, pteval) < 0) {
+				set_pte_at(mm, address, pvmw.pte, pteval);
+				ret = false;
+				page_vma_mapped_walk_done(&pvmw);
+				break;
+			}
 			if (list_empty(&mm->mmlist)) {
 				spin_lock(&mmlist_lock);
 				if (list_empty(&mm->mmlist))
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 86+ messages in thread

* [PATCH v7 3/9] sparc64: Add support for ADI register fields, ASIs and traps
  2017-08-09 21:25 ` Khalid Aziz
@ 2017-08-09 21:25   ` Khalid Aziz
  0 siblings, 0 replies; 86+ messages in thread
From: Khalid Aziz @ 2017-08-09 21:25 UTC (permalink / raw)
  To: davem, dave.hansen
  Cc: Khalid Aziz, mhocko, rientjes, mingo, Liam.Howlett,
	vegard.nossum, dan.carpenter, sparclinux, linux-kernel,
	Khalid Aziz

SPARC M7 processor adds new control register fields, ASIs and a new
trap to support the ADI (Application Data Integrity) feature. This
patch adds definitions for these register fields, ASIs and a handler
for the new precise memory corruption detected trap.
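
For orientation, once a mapping has TTE.mcd set, userspace stores a
version tag on each ADI block with a store to ASI_MCD_PRIMARY (0x90).
A rough sketch, not part of this patch; adi_blksz would come from the
platform (see the auxiliary vector patch later in this series):

	/* Illustrative only: tag each ADI block in [addr, addr + len). */
	static void set_adi_tags(char *addr, unsigned long len,
				 unsigned long version, unsigned long adi_blksz)
	{
		char *end = addr + len;

		while (addr < end) {
			/* stxa %version, [%addr] ASI_MCD_PRIMARY */
			__asm__ __volatile__("stxa %1, [%0] 0x90"
					     : : "r" (addr), "r" (version)
					     : "memory");
			addr += adi_blksz;
		}
	}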

Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
Cc: Khalid Aziz <khalid@gonehiking.org>
---
v7:
	- No change
v6:
	- Added a missing nop in the delay slot in sun4v_mcd_detect_precise

v5:
	- Fixed indentation issues in assembly code

v4:
	- Broke patch up into smaller patches

v3:
	- Removed CONFIG_SPARC_ADI
	- Replaced prctl commands with mprotect
	- Added auxiliary vectors for ADI parameters
	- Enabled ADI for swappable pages

v2:
	- Fixed a build error

 arch/sparc/include/asm/hypervisor.h  |  2 ++
 arch/sparc/include/asm/pgtable_64.h  |  2 ++
 arch/sparc/include/asm/ttable.h      | 10 +++++++
 arch/sparc/include/uapi/asm/asi.h    |  5 ++++
 arch/sparc/include/uapi/asm/pstate.h | 10 +++++++
 arch/sparc/kernel/entry.h            |  3 ++
 arch/sparc/kernel/head_64.S          |  1 +
 arch/sparc/kernel/sun4v_mcd.S        | 17 ++++++++++++
 arch/sparc/kernel/traps_64.c         | 54 ++++++++++++++++++++++++++++++++++++
 arch/sparc/kernel/ttable_64.S        |  6 ++--
 10 files changed, 108 insertions(+), 2 deletions(-)
 create mode 100644 arch/sparc/kernel/sun4v_mcd.S

diff --git a/arch/sparc/include/asm/hypervisor.h b/arch/sparc/include/asm/hypervisor.h
index 73cb8978df58..31782f7996b3 100644
--- a/arch/sparc/include/asm/hypervisor.h
+++ b/arch/sparc/include/asm/hypervisor.h
@@ -547,6 +547,8 @@ struct hv_fault_status {
 #define HV_FAULT_TYPE_RESV1	13
 #define HV_FAULT_TYPE_UNALIGNED	14
 #define HV_FAULT_TYPE_INV_PGSZ	15
+#define HV_FAULT_TYPE_MCD	17
+#define HV_FAULT_TYPE_MCD_DIS	18
 /* Values 16 --> -2 are reserved.  */
 #define HV_FAULT_TYPE_MULTIPLE	-1
 
diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index 6fbd931f0570..af045061f41e 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -163,6 +163,8 @@ bool kern_addr_valid(unsigned long addr);
 #define _PAGE_E_4V	  _AC(0x0000000000000800,UL) /* side-Effect          */
 #define _PAGE_CP_4V	  _AC(0x0000000000000400,UL) /* Cacheable in P-Cache */
 #define _PAGE_CV_4V	  _AC(0x0000000000000200,UL) /* Cacheable in V-Cache */
+/* On M7, bit 9 is used to enable MCD corruption detection instead */
+#define _PAGE_MCD_4V      _AC(0x0000000000000200,UL) /* Memory Corruption    */
 #define _PAGE_P_4V	  _AC(0x0000000000000100,UL) /* Privileged Page      */
 #define _PAGE_EXEC_4V	  _AC(0x0000000000000080,UL) /* Executable Page      */
 #define _PAGE_W_4V	  _AC(0x0000000000000040,UL) /* Writable             */
diff --git a/arch/sparc/include/asm/ttable.h b/arch/sparc/include/asm/ttable.h
index 82e7df296abc..d6510ab8fa4d 100644
--- a/arch/sparc/include/asm/ttable.h
+++ b/arch/sparc/include/asm/ttable.h
@@ -218,6 +218,16 @@
 	nop;						\
 	nop;
 
+#define SUN4V_MCD_PRECISE				\
+	ldxa	[%g0] ASI_SCRATCHPAD, %g2;		\
+	ldx	[%g2 + HV_FAULT_D_ADDR_OFFSET], %g4;	\
+	ldx	[%g2 + HV_FAULT_D_CTX_OFFSET], %g5;	\
+	ba,pt	%xcc, etrap;				\
+	 rd	%pc, %g7;				\
+	ba,pt	%xcc, sun4v_mcd_detect_precise;		\
+	 nop;						\
+	nop;
+
 /* Before touching these macros, you owe it to yourself to go and
  * see how arch/sparc64/kernel/winfixup.S works... -DaveM
  *
diff --git a/arch/sparc/include/uapi/asm/asi.h b/arch/sparc/include/uapi/asm/asi.h
index 7ad7203deaec..2bcdaa5321d3 100644
--- a/arch/sparc/include/uapi/asm/asi.h
+++ b/arch/sparc/include/uapi/asm/asi.h
@@ -144,6 +144,8 @@
  * ASIs, "(4V)" designates SUN4V specific ASIs.  "(NG4)" designates SPARC-T4
  * and later ASIs.
  */
+#define ASI_MCD_PRIV_PRIMARY	0x02 /* (NG7) Privileged MCD version VA	*/
+#define ASI_MCD_REAL		0x05 /* (NG7) Privileged MCD version PA	*/
 #define ASI_PHYS_USE_EC		0x14 /* PADDR, E-cachable		*/
 #define ASI_PHYS_BYPASS_EC_E	0x15 /* PADDR, E-bit			*/
 #define ASI_BLK_AIUP_4V		0x16 /* (4V) Prim, user, block ld/st	*/
@@ -244,6 +246,9 @@
 #define ASI_UDBL_CONTROL_R	0x7f /* External UDB control regs rd low*/
 #define ASI_INTR_R		0x7f /* IRQ vector dispatch read	*/
 #define ASI_INTR_DATAN_R	0x7f /* (III) In irq vector data reg N	*/
+#define ASI_MCD_PRIMARY		0x90 /* (NG7) MCD version load/store	*/
+#define ASI_MCD_ST_BLKINIT_PRIMARY	\
+				0x92 /* (NG7) MCD store BLKINIT primary	*/
 #define ASI_PIC			0xb0 /* (NG4) PIC registers		*/
 #define ASI_PST8_P		0xc0 /* Primary, 8 8-bit, partial	*/
 #define ASI_PST8_S		0xc1 /* Secondary, 8 8-bit, partial	*/
diff --git a/arch/sparc/include/uapi/asm/pstate.h b/arch/sparc/include/uapi/asm/pstate.h
index cf832e14aa05..d0521db9bb6f 100644
--- a/arch/sparc/include/uapi/asm/pstate.h
+++ b/arch/sparc/include/uapi/asm/pstate.h
@@ -10,7 +10,12 @@
  * -----------------------------------------------------------------------
  *  63  12  11   10    9     8    7   6   5     4     3     2     1    0
  */
+/* IG on V9 conflicts with MCDE on M7. PSTATE_MCDE will only be used on
+ * processors that support ADI which do not use IG, hence there is no
+ * functional conflict
+ */
 #define PSTATE_IG   _AC(0x0000000000000800,UL) /* Interrupt Globals.	*/
+#define PSTATE_MCDE _AC(0x0000000000000800,UL) /* MCD Enable		*/
 #define PSTATE_MG   _AC(0x0000000000000400,UL) /* MMU Globals.		*/
 #define PSTATE_CLE  _AC(0x0000000000000200,UL) /* Current Little Endian.*/
 #define PSTATE_TLE  _AC(0x0000000000000100,UL) /* Trap Little Endian.	*/
@@ -47,7 +52,12 @@
 #define TSTATE_ASI	_AC(0x00000000ff000000,UL) /* AddrSpace ID.	*/
 #define TSTATE_PIL	_AC(0x0000000000f00000,UL) /* %pil (Linux traps)*/
 #define TSTATE_PSTATE	_AC(0x00000000000fff00,UL) /* PSTATE.		*/
+/* IG on V9 conflicts with MCDE on M7. TSTATE_MCDE will only be used on
+ * processors that support ADI which do not support IG, hence there is
+ * no functional conflict
+ */
 #define TSTATE_IG	_AC(0x0000000000080000,UL) /* Interrupt Globals.*/
+#define TSTATE_MCDE	_AC(0x0000000000080000,UL) /* MCD enable.       */
 #define TSTATE_MG	_AC(0x0000000000040000,UL) /* MMU Globals.	*/
 #define TSTATE_CLE	_AC(0x0000000000020000,UL) /* CurrLittleEndian.	*/
 #define TSTATE_TLE	_AC(0x0000000000010000,UL) /* TrapLittleEndian.	*/
diff --git a/arch/sparc/kernel/entry.h b/arch/sparc/kernel/entry.h
index 0f679421b468..207846855a4d 100644
--- a/arch/sparc/kernel/entry.h
+++ b/arch/sparc/kernel/entry.h
@@ -159,6 +159,9 @@ void sun4v_resum_overflow(struct pt_regs *regs);
 void sun4v_nonresum_error(struct pt_regs *regs,
 			  unsigned long offset);
 void sun4v_nonresum_overflow(struct pt_regs *regs);
+void sun4v_mem_corrupt_detect_precise(struct pt_regs *regs,
+				      unsigned long addr,
+				      unsigned long context);
 
 extern unsigned long sun4v_err_itlb_vaddr;
 extern unsigned long sun4v_err_itlb_ctx;
diff --git a/arch/sparc/kernel/head_64.S b/arch/sparc/kernel/head_64.S
index 41a407328667..15fc979da1bf 100644
--- a/arch/sparc/kernel/head_64.S
+++ b/arch/sparc/kernel/head_64.S
@@ -877,6 +877,7 @@ sparc64_boot_end:
 #include "helpers.S"
 #include "hvcalls.S"
 #include "sun4v_tlb_miss.S"
+#include "sun4v_mcd.S"
 #include "sun4v_ivec.S"
 #include "ktlb.S"
 #include "tsb.S"
diff --git a/arch/sparc/kernel/sun4v_mcd.S b/arch/sparc/kernel/sun4v_mcd.S
new file mode 100644
index 000000000000..92afb6248dbc
--- /dev/null
+++ b/arch/sparc/kernel/sun4v_mcd.S
@@ -0,0 +1,17 @@
+/* sun4v_mcd.S: Sun4v memory corruption detected precise exception handler
+ *
+ * Copyright (C) 2015 Bob Picco <bob.picco@oracle.com>
+ * Copyright (C) 2015 Khalid Aziz <khalid.aziz@oracle.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.
+ */
+	.text
+	.align 32
+
+sun4v_mcd_detect_precise:
+	mov	%l4, %o1
+	mov 	%l5, %o2
+	call	sun4v_mem_corrupt_detect_precise
+	 add	%sp, PTREGS_OFF, %o0
+	ba,a,pt	%xcc, rtrap
+	 nop
diff --git a/arch/sparc/kernel/traps_64.c b/arch/sparc/kernel/traps_64.c
index 196ee5eb4d48..88eed0dc3faf 100644
--- a/arch/sparc/kernel/traps_64.c
+++ b/arch/sparc/kernel/traps_64.c
@@ -2605,6 +2605,60 @@ void sun4v_do_mna(struct pt_regs *regs, unsigned long addr, unsigned long type_c
 	force_sig_info(SIGBUS, &info, current);
 }
 
+/* sun4v_mem_corrupt_detect_precise() - Handle precise exception on an ADI
+ * tag mismatch.
+ *
+ * ADI version tag mismatch on a load from memory always results in a
+ * precise exception. Tag mismatch on a store to memory will result in
+ * precise exception if MCDPER or PMCDPER is set to 1.
+ */
+void sun4v_mem_corrupt_detect_precise(struct pt_regs *regs, unsigned long addr,
+				      unsigned long context)
+{
+	siginfo_t info;
+
+	if (notify_die(DIE_TRAP, "memory corruption precise exception", regs,
+		       0, 0x8, SIGSEGV) == NOTIFY_STOP)
+		return;
+
+	if (regs->tstate & TSTATE_PRIV) {
+		/* MCD exception could happen because the task was running
+		 * a system call with MCD enabled and passed a non-versioned
+		 * pointer or pointer with bad version tag to  the system
+		 * call.
+		 */
+		const struct exception_table_entry *entry;
+
+		entry = search_exception_tables(regs->tpc);
+		if (entry) {
+			/* Looks like a bad syscall parameter */
+#ifdef DEBUG_EXCEPTIONS
+			pr_emerg("Exception: PC<%016lx> faddr<UNKNOWN>\n",
+				 regs->tpc);
+			pr_emerg("EX_TABLE: insn<%016lx> fixup<%016lx>\n",
+				 regs->tpc, entry->fixup);
+#endif
+			regs->tpc = entry->fixup;
+			regs->tnpc = regs->tpc + 4;
+			return;
+		}
+		pr_emerg("sun4v_mem_corrupt_detect_precise: ADDR[%016lx] "
+			"CTX[%lx], going.\n", addr, context);
+		die_if_kernel("MCD precise", regs);
+	}
+
+	if (test_thread_flag(TIF_32BIT)) {
+		regs->tpc &= 0xffffffff;
+		regs->tnpc &= 0xffffffff;
+	}
+	info.si_signo = SIGSEGV;
+	info.si_code = SEGV_ADIPERR;
+	info.si_errno = 0;
+	info.si_addr = (void __user *) addr;
+	info.si_trapno = 0;
+	force_sig_info(SIGSEGV, &info, current);
+}
+
 void do_privop(struct pt_regs *regs)
 {
 	enum ctx_state prev_state = exception_enter();
diff --git a/arch/sparc/kernel/ttable_64.S b/arch/sparc/kernel/ttable_64.S
index efe93ab4a9c0..36a9708f93d9 100644
--- a/arch/sparc/kernel/ttable_64.S
+++ b/arch/sparc/kernel/ttable_64.S
@@ -25,8 +25,10 @@ tl0_ill:	membar #Sync
 		TRAP_7INSNS(do_illegal_instruction)
 tl0_privop:	TRAP(do_privop)
 tl0_resv012:	BTRAP(0x12) BTRAP(0x13) BTRAP(0x14) BTRAP(0x15) BTRAP(0x16) BTRAP(0x17)
-tl0_resv018:	BTRAP(0x18) BTRAP(0x19) BTRAP(0x1a) BTRAP(0x1b) BTRAP(0x1c) BTRAP(0x1d)
-tl0_resv01e:	BTRAP(0x1e) BTRAP(0x1f)
+tl0_resv018:	BTRAP(0x18) BTRAP(0x19)
+tl0_mcd:	SUN4V_MCD_PRECISE
+tl0_resv01b:	BTRAP(0x1b)
+tl0_resv01c:	BTRAP(0x1c) BTRAP(0x1d)	BTRAP(0x1e) BTRAP(0x1f)
 tl0_fpdis:	TRAP_NOSAVE(do_fpdis)
 tl0_fpieee:	TRAP_SAVEFPU(do_fpieee)
 tl0_fpother:	TRAP_NOSAVE(do_fpother_check_fitos)
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 86+ messages in thread

* [PATCH v7 4/9] sparc64: Add HV fault type handlers for ADI related faults
  2017-08-09 21:25 ` Khalid Aziz
@ 2017-08-09 21:25   ` Khalid Aziz
  0 siblings, 0 replies; 86+ messages in thread
From: Khalid Aziz @ 2017-08-09 21:25 UTC (permalink / raw)
  To: davem, dave.hansen
  Cc: Khalid Aziz, akpm, vegard.nossum, mingo, peterz, dan.carpenter,
	Liam.Howlett, paul.gortmaker, sparclinux, linux-kernel,
	Khalid Aziz

The ADI (Application Data Integrity) feature on M7 and newer processors
adds two new hypervisor fault types - invalid ASI and MCD disabled.
This patch expands the data access exception handler to handle these
faults, as sketched below.
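
To make the mapping concrete: with this patch, a tag store through
ASI_MCD_PRIMARY on memory whose TTE.mcd bit is clear comes back as an
invalid-ASI fault and hence SIGILL/ILL_ILLADR, while an access blocked
because MCD is disabled yields SIGSEGV/SEGV_ACCADI. A rough userspace
sketch of the first case (illustrative, not part of the patch):

	#include <sys/mman.h>

	static void provoke_inv_asi(void)
	{
		char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		/* No PROT_ADI on this mapping, so TTE.mcd stays clear;
		 * the tag store below should raise SIGILL/ILL_ILLADR.
		 */
		__asm__ __volatile__("stxa %1, [%0] 0x90"
				     : : "r" (p), "r" (1UL) : "memory");
	}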

Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
Cc: Khalid Aziz <khalid@gonehiking.org>
---
v7:
	- new patch split off from patch 4/4 in v6

 arch/sparc/kernel/traps_64.c | 29 ++++++++++++++++++++++++++---
 1 file changed, 26 insertions(+), 3 deletions(-)

diff --git a/arch/sparc/kernel/traps_64.c b/arch/sparc/kernel/traps_64.c
index 88eed0dc3faf..88c96d349f2f 100644
--- a/arch/sparc/kernel/traps_64.c
+++ b/arch/sparc/kernel/traps_64.c
@@ -352,12 +352,35 @@ void sun4v_data_access_exception(struct pt_regs *regs, unsigned long addr, unsig
 		regs->tpc &= 0xffffffff;
 		regs->tnpc &= 0xffffffff;
 	}
-	info.si_signo = SIGSEGV;
+	/* MCD (Memory Corruption Detection) disabled trap (TT=0x19) in HV
+	 * is vectored through data access exception trap with fault type
+	 * set to HV_FAULT_TYPE_MCD_DIS. Check for MCD disabled trap.
+	 * Accessing an address with invalid ASI for the address, for
+	 * example setting an ADI tag on an address with ASI_MCD_PRIMARY
+	 * when TTE.mcd is not set for the VA, is also vectored into
+	 * kernel by HV as data access exception with fault type set to
+	 * HV_FAULT_TYPE_INV_ASI.
+	 */
 	info.si_errno = 0;
-	info.si_code = SEGV_MAPERR;
 	info.si_addr = (void __user *) addr;
 	info.si_trapno = 0;
-	force_sig_info(SIGSEGV, &info, current);
+	switch (type) {
+	case HV_FAULT_TYPE_INV_ASI:
+		info.si_signo = SIGILL;
+		info.si_code = ILL_ILLADR;
+		force_sig_info(SIGILL, &info, current);
+		break;
+	case HV_FAULT_TYPE_MCD_DIS:
+		info.si_signo = SIGSEGV;
+		info.si_code = SEGV_ACCADI;
+		force_sig_info(SIGSEGV, &info, current);
+		break;
+	default:
+		info.si_signo = SIGSEGV;
+		info.si_code = SEGV_MAPERR;
+		force_sig_info(SIGSEGV, &info, current);
+		break;
+	}
 }
 
 void sun4v_data_access_exception_tl1(struct pt_regs *regs, unsigned long addr, unsigned long type_ctx)
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 86+ messages in thread

* [PATCH v7 5/9] sparc64: Add handler for "Memory Corruption Detected" trap
  2017-08-09 21:25 ` Khalid Aziz
@ 2017-08-09 21:25   ` Khalid Aziz
  0 siblings, 0 replies; 86+ messages in thread
From: Khalid Aziz @ 2017-08-09 21:25 UTC (permalink / raw)
  To: davem, dave.hansen
  Cc: Khalid Aziz, dan.carpenter, Liam.Howlett, mingo, paul.gortmaker,
	vegard.nossum, sparclinux, linux-kernel, Khalid Aziz

M7 and newer processors add a "Memory Corruption Detected" trap with
the addition of the ADI feature. This trap is vectored into the kernel
by HV through the resumable error trap, with the error attribute for
the resumable error set to 0x00000800.
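
In other words, there is no dedicated trap vector for the disrupting
case; sun4v_resum_error() recognizes the queue entry by its attribute
bit. A one-line decode, shown only for orientation (the constant is
added by the hunk below):

	/* Illustrative only: is this resumable error an MCD report? */
	static bool is_mcd_error(const struct sun4v_error_entry *ent)
	{
		return (ent->err_attrs & SUN4V_ERR_ATTRS_MCD) != 0;
	}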

Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
Cc: Khalid Aziz <khalid@gonehiking.org>
---
v7:
	- new patch split off from patch 4/4 in v6

 arch/sparc/kernel/traps_64.c | 59 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 59 insertions(+)

diff --git a/arch/sparc/kernel/traps_64.c b/arch/sparc/kernel/traps_64.c
index 88c96d349f2f..c14fa0a634b1 100644
--- a/arch/sparc/kernel/traps_64.c
+++ b/arch/sparc/kernel/traps_64.c
@@ -1825,6 +1825,7 @@ struct sun4v_error_entry {
 #define SUN4V_ERR_ATTRS_ASI		0x00000080
 #define SUN4V_ERR_ATTRS_PRIV_REG	0x00000100
 #define SUN4V_ERR_ATTRS_SPSTATE_MSK	0x00000600
+#define SUN4V_ERR_ATTRS_MCD		0x00000800
 #define SUN4V_ERR_ATTRS_SPSTATE_SHFT	9
 #define SUN4V_ERR_ATTRS_MODE_MSK	0x03000000
 #define SUN4V_ERR_ATTRS_MODE_SHFT	24
@@ -2022,6 +2023,56 @@ static void sun4v_log_error(struct pt_regs *regs, struct sun4v_error_entry *ent,
 	}
 }
 
+/* Handle memory corruption detected error which is vectored in
+ * through resumable error trap.
+ */
+void do_mcd_err(struct pt_regs *regs, struct sun4v_error_entry ent)
+{
+	siginfo_t info;
+
+	if (notify_die(DIE_TRAP, "MCD error", regs, 0, 0x34,
+		       SIGSEGV) == NOTIFY_STOP)
+		return;
+
+	if (regs->tstate & TSTATE_PRIV) {
+		/* MCD exception could happen because the task was
+		 * running a system call with MCD enabled and passed a
+		 * non-versioned pointer or pointer with bad version
+		 * tag to the system call. In such cases, hypervisor
+		 * places the address of offending instruction in the
+		 * resumable error report. This is a deferred error,
+		 * so the read/write that caused the trap was potentially
+		 * retired long time back and we may have no choice
+		 * but to send SIGSEGV to the process.
+		 */
+		const struct exception_table_entry *entry;
+
+		entry = search_exception_tables(regs->tpc);
+		if (entry) {
+			/* Looks like a bad syscall parameter */
+#ifdef DEBUG_EXCEPTIONS
+			pr_emerg("Exception: PC<%016lx> faddr<UNKNOWN>\n",
+				 regs->tpc);
+			pr_emerg("EX_TABLE: insn<%016lx> fixup<%016lx>\n",
+				 ent.err_raddr, entry->fixup);
+#endif
+			regs->tpc = entry->fixup;
+			regs->tnpc = regs->tpc + 4;
+			return;
+		}
+	}
+
+	/* Send SIGSEGV to the userspace process with the right signal
+	 * code
+	 */
+	info.si_signo = SIGSEGV;
+	info.si_errno = 0;
+	info.si_code = SEGV_ADIDERR;
+	info.si_addr = (void __user *)ent.err_raddr;
+	info.si_trapno = 0;
+	force_sig_info(SIGSEGV, &info, current);
+}
+
 /* We run with %pil set to PIL_NORMAL_MAX and PSTATE_IE enabled in %pstate.
  * Log the event and clear the first word of the entry.
  */
@@ -2059,6 +2110,14 @@ void sun4v_resum_error(struct pt_regs *regs, unsigned long offset)
 		goto out;
 	}
 
+	/* If this is a memory corruption detected error vectored in
+	 * by HV through resumable error trap, call the handler
+	 */
+	if (local_copy.err_attrs & SUN4V_ERR_ATTRS_MCD) {
+		do_mcd_err(regs, local_copy);
+		return;
+	}
+
 	sun4v_log_error(regs, &local_copy, cpu,
 			KERN_ERR "RESUMABLE ERROR",
 			&sun4v_resum_oflow_cnt);
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 86+ messages in thread

* [PATCH v7 6/9] sparc64: Add auxiliary vectors to report platform ADI properties
  2017-08-09 21:25 ` Khalid Aziz
@ 2017-08-09 21:25   ` Khalid Aziz
  0 siblings, 0 replies; 86+ messages in thread
From: Khalid Aziz @ 2017-08-09 21:25 UTC (permalink / raw)
  To: davem, dave.hansen
  Cc: Khalid Aziz, viro, eric.saint.etienne, allen.pais, chris.hyser,
	atish.patra, sparclinux, linux-kernel, Khalid Aziz

The ADI feature on M7 and newer processors has two important properties
relevant to userspace apps using ADI capabilities - (1) the size of the
block of memory an ADI version tag applies to, and (2) the number of
uppermost bits in the virtual address used to encode the ADI tag. The
kernel can retrieve these properties for a platform through the machine
description provided by the firmware. This patch adds code to retrieve
these properties and report them to userspace through auxiliary
vectors.
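
A userspace consumer would read these through the usual auxv interface,
for example (illustrative; assumes a libc that defines the new AT_
constants, otherwise they can be read out of /proc/self/auxv):

	#include <sys/auxv.h>
	#include <stdio.h>

	int main(void)
	{
		unsigned long blksz = getauxval(AT_ADI_BLKSZ);
		unsigned long nbits = getauxval(AT_ADI_NBITS);

		if (!blksz) {	/* 0 means ADI is not supported */
			puts("no ADI on this platform");
			return 1;
		}
		printf("ADI block size %lu bytes, %lu tag bits\n",
		       blksz, nbits);
		return 0;
	}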

Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
Cc: Khalid Aziz <khalid@gonehiking.org>
---
v7:
	- new patch split off from patch 4/4 in v6

 arch/sparc/include/asm/adi.h         |  6 +++
 arch/sparc/include/asm/adi_64.h      | 45 ++++++++++++++++++
 arch/sparc/include/asm/elf_64.h      |  8 ++++
 arch/sparc/include/uapi/asm/auxvec.h | 10 ++++
 arch/sparc/kernel/Makefile           |  1 +
 arch/sparc/kernel/adi_64.c           | 90 ++++++++++++++++++++++++++++++++++++
 arch/sparc/kernel/mdesc.c            |  2 +
 7 files changed, 162 insertions(+)
 create mode 100644 arch/sparc/include/asm/adi.h
 create mode 100644 arch/sparc/include/asm/adi_64.h
 create mode 100644 arch/sparc/kernel/adi_64.c

diff --git a/arch/sparc/include/asm/adi.h b/arch/sparc/include/asm/adi.h
new file mode 100644
index 000000000000..acad0d04e4c6
--- /dev/null
+++ b/arch/sparc/include/asm/adi.h
@@ -0,0 +1,6 @@
+#ifndef ___ASM_SPARC_ADI_H
+#define ___ASM_SPARC_ADI_H
+#if defined(__sparc__) && defined(__arch64__)
+#include <asm/adi_64.h>
+#endif
+#endif
diff --git a/arch/sparc/include/asm/adi_64.h b/arch/sparc/include/asm/adi_64.h
new file mode 100644
index 000000000000..03e99bd2ebbf
--- /dev/null
+++ b/arch/sparc/include/asm/adi_64.h
@@ -0,0 +1,45 @@
+/* adi_64.h: ADI related data structures
+ *
+ * Copyright (C) 2016 Khalid Aziz (khalid.aziz@oracle.com)
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.
+ */
+#ifndef __ASM_SPARC64_ADI_H
+#define __ASM_SPARC64_ADI_H
+
+#include <linux/types.h>
+
+#ifndef __ASSEMBLY__
+
+struct adi_caps {
+	__u64 blksz;
+	__u64 nbits;
+};
+
+struct adi_config {
+	bool enabled;
+	struct adi_caps caps;
+};
+
+extern struct adi_config adi_state;
+
+extern void mdesc_adi_init(void);
+
+static inline bool adi_capable(void)
+{
+	return adi_state.enabled;
+}
+
+static inline unsigned long adi_blksize(void)
+{
+	return adi_state.caps.blksz;
+}
+
+static inline unsigned long adi_nbits(void)
+{
+	return adi_state.caps.nbits;
+}
+
+#endif	/* __ASSEMBLY__ */
+
+#endif	/* !(__ASM_SPARC64_ADI_H) */
diff --git a/arch/sparc/include/asm/elf_64.h b/arch/sparc/include/asm/elf_64.h
index 3f2d403873bd..bcfa580db29e 100644
--- a/arch/sparc/include/asm/elf_64.h
+++ b/arch/sparc/include/asm/elf_64.h
@@ -9,6 +9,7 @@
 #include <asm/processor.h>
 #include <asm/extable_64.h>
 #include <asm/spitfire.h>
+#include <asm/adi.h>
 
 /*
  * Sparc section types
@@ -210,4 +211,11 @@ do {	if ((ex).e_ident[EI_CLASS] == ELFCLASS32)	\
 			(current->personality & (~PER_MASK)));	\
 } while (0)
 
+#define ARCH_DLINFO						\
+do {								\
+	extern struct adi_config adi_state;			\
+	NEW_AUX_ENT(AT_ADI_BLKSZ, adi_state.caps.blksz);	\
+	NEW_AUX_ENT(AT_ADI_NBITS, adi_state.caps.nbits);	\
+} while (0)
+
 #endif /* !(__ASM_SPARC64_ELF_H) */
diff --git a/arch/sparc/include/uapi/asm/auxvec.h b/arch/sparc/include/uapi/asm/auxvec.h
index ad6f360261f6..c8064dd2bb94 100644
--- a/arch/sparc/include/uapi/asm/auxvec.h
+++ b/arch/sparc/include/uapi/asm/auxvec.h
@@ -1,4 +1,14 @@
 #ifndef __ASMSPARC_AUXVEC_H
 #define __ASMSPARC_AUXVEC_H
 
+#ifdef CONFIG_SPARC64
+/* Avoid overlap with other AT_* values since they are consolidated in
+ * glibc and any overlaps can cause problems
+ */
+#define AT_ADI_BLKSZ	48
+#define AT_ADI_NBITS	49
+
+#define AT_VECTOR_SIZE_ARCH	2
+#endif
+
 #endif /* !(__ASMSPARC_AUXVEC_H) */
diff --git a/arch/sparc/kernel/Makefile b/arch/sparc/kernel/Makefile
index aac609889ee4..8149e175e899 100644
--- a/arch/sparc/kernel/Makefile
+++ b/arch/sparc/kernel/Makefile
@@ -67,6 +67,7 @@ obj-$(CONFIG_SPARC64)   += visemul.o
 obj-$(CONFIG_SPARC64)   += hvapi.o
 obj-$(CONFIG_SPARC64)   += sstate.o
 obj-$(CONFIG_SPARC64)   += mdesc.o
+obj-$(CONFIG_SPARC64)   += adi_64.o
 obj-$(CONFIG_SPARC64)	+= pcr.o
 obj-$(CONFIG_SPARC64)	+= nmi.o
 obj-$(CONFIG_SPARC64_SMP) += cpumap.o
diff --git a/arch/sparc/kernel/adi_64.c b/arch/sparc/kernel/adi_64.c
new file mode 100644
index 000000000000..9fbb5dd4a7bf
--- /dev/null
+++ b/arch/sparc/kernel/adi_64.c
@@ -0,0 +1,90 @@
+/* adi_64.c: support for ADI (Application Data Integrity) feature on
+ * sparc m7 and newer processors. This feature is also known as
+ * SSM (Silicon Secured Memory).
+ *
+ * Copyright (C) 2016 Khalid Aziz (khalid.aziz@oracle.com)
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.
+ */
+#include <linux/init.h>
+#include <asm/mdesc.h>
+#include <asm/adi_64.h>
+
+struct adi_config adi_state;
+
+/* mdesc_adi_init() : Parse machine description provided by the
+ *	hypervisor to detect ADI capabilities
+ *
+ * Hypervisor reports ADI capabilities of the platform in the
+ * "hwcap-list" property of the "cpu" node. If the platform supports
+ * ADI, "hwcap-list" contains the keyword "adp", and the "platform"
+ * node will contain "adp-blksz", "adp-nbits" and "ue-on-adp"
+ * properties to describe the ADI capabilities.
+ */
+void __init mdesc_adi_init(void)
+{
+	struct mdesc_handle *hp = mdesc_grab();
+	const char *prop;
+	u64 pn, *val;
+	int len;
+
+	if (!hp)
+		goto adi_not_found;
+
+	pn = mdesc_node_by_name(hp, MDESC_NODE_NULL, "cpu");
+	if (pn == MDESC_NODE_NULL)
+		goto adi_not_found;
+
+	prop = mdesc_get_property(hp, pn, "hwcap-list", &len);
+	if (!prop)
+		goto adi_not_found;
+
+	/*
+	 * Look for "adp" keyword in hwcap-list which would indicate
+	 * ADI support
+	 */
+	adi_state.enabled = false;
+	while (len) {
+		int plen;
+
+		if (!strcmp(prop, "adp")) {
+			adi_state.enabled = true;
+			break;
+		}
+
+		plen = strlen(prop) + 1;
+		prop += plen;
+		len -= plen;
+	}
+
+	if (!adi_state.enabled)
+		goto adi_not_found;
+
+	/* Find the ADI properties in the "platform" node. If any ADI
+	 * property is missing, ADI support is incomplete; do not
+	 * enable ADI in the kernel.
+	 */
+	pn = mdesc_node_by_name(hp, MDESC_NODE_NULL, "platform");
+	if (pn == MDESC_NODE_NULL)
+		goto adi_not_found;
+
+	val = (u64 *) mdesc_get_property(hp, pn, "adp-blksz", &len);
+	if (!val)
+		goto adi_not_found;
+	adi_state.caps.blksz = *val;
+
+	val = (u64 *) mdesc_get_property(hp, pn, "adp-nbits", &len);
+	if (!val)
+		goto adi_not_found;
+	adi_state.caps.nbits = *val;
+
+	mdesc_release(hp);
+	return;
+
+adi_not_found:
+	adi_state.enabled = false;
+	adi_state.caps.blksz = 0;
+	adi_state.caps.nbits = 0;
+	if (hp)
+		mdesc_release(hp);
+}
diff --git a/arch/sparc/kernel/mdesc.c b/arch/sparc/kernel/mdesc.c
index c0765bbf60ea..50b8bfb37530 100644
--- a/arch/sparc/kernel/mdesc.c
+++ b/arch/sparc/kernel/mdesc.c
@@ -20,6 +20,7 @@
 #include <linux/uaccess.h>
 #include <asm/oplib.h>
 #include <asm/smp.h>
+#include <asm/adi.h>
 
 /* Unlike the OBP device tree, the machine description is a full-on
  * DAG.  An arbitrary number of ARCs are possible from one
@@ -1104,5 +1105,6 @@ void __init sun4v_mdesc_init(void)
 
 	cur_mdesc = hp;
 
+	mdesc_adi_init();
 	report_platform_properties();
 }
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 86+ messages in thread

* [PATCH v7 7/9] mm: Add address parameter to arch_validate_prot()
  2017-08-09 21:25 ` Khalid Aziz
@ 2017-08-09 21:26   ` Khalid Aziz
  -1 siblings, 0 replies; 86+ messages in thread
From: Khalid Aziz @ 2017-08-09 21:26 UTC (permalink / raw)
  To: akpm, benh, paulus, mpe, davem, dave.hansen
  Cc: Khalid Aziz, bsingharora, dja, tglx, mgorman, aarcange,
	kirill.shutemov, heiko.carstens, ak, linuxppc-dev, linux-kernel,
	linux-mm, sparclinux, Khalid Aziz

A protection flag may not be valid across the entire address space, and
hence arch_validate_prot() may need the address a protection bit is
being set on to determine whether it is a valid flag there. For
example, sparc processors support the memory corruption detection flag
(part of the ADI feature) on addresses mapped onto physical RAM, but
not on PFN-mapped pages or addresses mapped onto devices. This patch
adds the address to the parameters passed to arch_validate_prot() so
protection bits can be validated in the relevant context.

Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
Cc: Khalid Aziz <khalid@gonehiking.org>
---
v7:
	- new patch

 arch/powerpc/include/asm/mman.h | 2 +-
 arch/powerpc/kernel/syscalls.c  | 2 +-
 include/linux/mman.h            | 2 +-
 mm/mprotect.c                   | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/mman.h b/arch/powerpc/include/asm/mman.h
index 30922f699341..bc74074304a2 100644
--- a/arch/powerpc/include/asm/mman.h
+++ b/arch/powerpc/include/asm/mman.h
@@ -40,7 +40,7 @@ static inline bool arch_validate_prot(unsigned long prot)
 		return false;
 	return true;
 }
-#define arch_validate_prot(prot) arch_validate_prot(prot)
+#define arch_validate_prot(prot, addr) arch_validate_prot(prot)
 
 #endif /* CONFIG_PPC64 */
 #endif	/* _ASM_POWERPC_MMAN_H */
diff --git a/arch/powerpc/kernel/syscalls.c b/arch/powerpc/kernel/syscalls.c
index a877bf8269fe..6d90ddbd2d11 100644
--- a/arch/powerpc/kernel/syscalls.c
+++ b/arch/powerpc/kernel/syscalls.c
@@ -48,7 +48,7 @@ static inline long do_mmap2(unsigned long addr, size_t len,
 {
 	long ret = -EINVAL;
 
-	if (!arch_validate_prot(prot))
+	if (!arch_validate_prot(prot, addr))
 		goto out;
 
 	if (shift) {
diff --git a/include/linux/mman.h b/include/linux/mman.h
index 634c4c51fe3a..1693d95a88ee 100644
--- a/include/linux/mman.h
+++ b/include/linux/mman.h
@@ -49,7 +49,7 @@ static inline void vm_unacct_memory(long pages)
  *
  * Returns true if the prot flags are valid
  */
-static inline bool arch_validate_prot(unsigned long prot)
+static inline bool arch_validate_prot(unsigned long prot, unsigned long addr)
 {
 	return (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM)) == 0;
 }
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 8edd0d576254..beac2dfbb5fa 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -396,7 +396,7 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 	end = start + len;
 	if (end <= start)
 		return -ENOMEM;
-	if (!arch_validate_prot(prot))
+	if (!arch_validate_prot(prot, start))
 		return -EINVAL;
 
 	reqprot = prot;
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 86+ messages in thread

* [PATCH v7 8/9] mm: Clear arch specific VM flags on protection change
  2017-08-09 21:25 ` Khalid Aziz
@ 2017-08-09 21:26   ` Khalid Aziz
  -1 siblings, 0 replies; 86+ messages in thread
From: Khalid Aziz @ 2017-08-09 21:26 UTC (permalink / raw)
  To: akpm, dave.hansen
  Cc: Khalid Aziz, mhocko, davem, jack, kirill.shutemov, ross.zwisler,
	lstoakes, dave.jiang, willy, hughd, ying.huang, tglx, mgorman,
	aarcange, ak, aneesh.kumar, linux-mm, linux-kernel, sparclinux,
	Khalid Aziz

When protection bits are changed on a VMA, some of the architecture
specific flags should be cleared as well. An example of this is the
PKEY flags on x86. This patch expands the current code that clears
PKEY flags for x86 to support similar functionality for other
architectures as well.
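
A minimal sketch of how an architecture opts in: define VM_ARCH_CLEAR
to the arch flags that must not survive an mprotect() call. The sparc
definition lands in include/linux/mm.h in patch 9/9 of this series;
the exact form below is assumed for illustration:

	#if defined(CONFIG_SPARC64)
	# define VM_SPARC_ADI	VM_ARCH_1	/* assumed flag mapping */
	# define VM_ARCH_CLEAR	VM_SPARC_ADI
	#endif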

Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
Cc: Khalid Aziz <khalid@gonehiking.org>
---
v7:
	- new patch

 include/linux/mm.h | 6 ++++++
 mm/mprotect.c      | 2 +-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6f543a47fc92..b7aa3932e6d4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -271,6 +271,12 @@ extern unsigned int kobjsize(const void *objp);
 /* This mask is used to clear all the VMA flags used by mlock */
 #define VM_LOCKED_CLEAR_MASK	(~(VM_LOCKED | VM_LOCKONFAULT))
 
+/* Arch-specific flags to clear when updating VM flags on protection change */
+#ifndef VM_ARCH_CLEAR
+# define VM_ARCH_CLEAR	VM_NONE
+#endif
+#define VM_FLAGS_CLEAR	(ARCH_VM_PKEY_FLAGS | VM_ARCH_CLEAR)
+
 /*
  * mapping from the currently active vm_flags protection bits (the
  * low four bits) to a page protection mask..
diff --git a/mm/mprotect.c b/mm/mprotect.c
index beac2dfbb5fa..b1ec9902dcd6 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -454,7 +454,7 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 		 * cleared from the VMA.
 		 */
 		mask_off_old_flags = VM_READ | VM_WRITE | VM_EXEC |
-					ARCH_VM_PKEY_FLAGS;
+					VM_FLAGS_CLEAR;
 
 		new_vma_pkey = arch_override_mprotect_pkey(vma, prot, pkey);
 		newflags = calc_vm_prot_bits(prot, new_vma_pkey);
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 86+ messages in thread

* [PATCH v7 9/9] sparc64: Add support for ADI (Application Data Integrity)
  2017-08-09 21:25 ` Khalid Aziz
@ 2017-08-09 21:26   ` Khalid Aziz
  -1 siblings, 0 replies; 86+ messages in thread
From: Khalid Aziz @ 2017-08-09 21:26 UTC (permalink / raw)
  To: davem, dave.hansen
  Cc: Khalid Aziz, corbet, bob.picco, steven.sistare, pasha.tatashin,
	mike.kravetz, mingo, nitin.m.gupta, kirill.shutemov,
	tom.hromatka, eric.saint.etienne, allen.pais, cmetcalf, akpm,
	geert, tklauser, atish.patra, vijay.ac.kumar, peterz, mhocko,
	jack, lstoakes, hughd, thomas.tai, paul.gortmaker, ross.zwisler,
	dave.jiang, willy, ying.huang, zhongjiang, minchan,
	vegard.nossum, imbrenda, aneesh.kumar, aarcange, linux-doc,
	linux-kernel, sparclinux, linux-mm, Khalid Aziz

ADI is a new feature supported on SPARC M7 and newer processors that
allows hardware to catch rogue accesses to memory. ADI is supported for
data fetches only, not instruction fetches. An app can enable ADI on its
data pages, set version tags on them and use versioned addresses to
access the data pages. The upper bits of the address contain the version
tag; on M7 processors, the upper four bits (bits 63-60) are used. If a
rogue app attempts to access ADI-enabled data pages, its access is
blocked and the processor generates an exception. Please see
Documentation/sparc/adi.txt for further details.

This patch extends mprotect to enable ADI (TSTATE.mcde), enable/disable
MCD (Memory Corruption Detection) on selected memory ranges, enable
TTE.mcd in PTEs, return ADI parameters to userspace and save/restore ADI
version tags on page swap out/in or migration. ADI is not enabled by
default for any task. A task must explicitly enable ADI on a memory
range and set version tag for ADI to be effective for the task.
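
As a taste of the userspace side, a versioned address is formed by
clearing the top adi_nbits bits of a pointer and installing the tag
there (a sketch that mirrors the sample program added in
Documentation/sparc/adi.txt below):

	/* canonicalize, then install the version tag in the top bits */
	unsigned long va    = (unsigned long)addr;
	unsigned long canon = (va << adi_nbits) >> adi_nbits;
	char *veraddr = (char *)(((unsigned long)version
				  << (64 - adi_nbits)) | canon);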

Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
Cc: Khalid Aziz <khalid@gonehiking.org>
---
v7:
	- Enhanced arch_validate_prot() to enable ADI only on writable
	  addresses backed by physical RAM
	- Added support for saving/restoring ADI tags for each ADI
	  block size address range on a page on swap in/out
	- Added code to copy ADI tags on COW
	- Updated values for auxiliary vectors to not conflict with
	  values on other architectures to avoid conflict in glibc. glibc
	  consolidates all auxiliary vectors into its headers and
	  duplicate values in consolidated header are problematic
	- Disable same page merging on ADI enabled pages since ADI tags
	  may not match on pages with identical data
	- Broke the patch up further into smaller patches

v6:
	- Eliminated instructions to read and write PSTATE as well as
	  MCDPER and PMCDPER on every access to userspace addresses
	  by setting PSTATE and PMCDPER correctly upon entry into
	  kernel. PSTATE.mcde and PMCDPER are set upon entry into
	  kernel when running on an M7 processor. PSTATE.mcde being
	  set only affects memory accesses that have TTE.mcd set.
	  PMCDPER being set only affects writes to memory addresses
	  that have TTE.mcd set. This ensures any faults caused by
	  ADI tag mismatch on a write are exposed before kernel returns
	  to userspace.

v5:
	- Fixed indentation issues and instructions in assembly code
	- Removed CONFIG_SPARC64 from mdesc.c
	- Changed to maintain state of MCDPER register in thread info
	  flags as opposed to in mm context. MCDPER is a per-thread
	  state and belongs in thread info flag as opposed to mm context
	  which is shared across threads. Added comments to clarify this
	  is a lazily maintained state and must be updated on context
	  switch and copy_process()
	- Updated code to use the new arch_do_swap_page() and
	  arch_unmap_one() functions

v4:
	- Broke patch up into smaller patches

v3:
	- Removed CONFIG_SPARC_ADI
	- Replaced prctl commands with mprotect
	- Added auxiliary vectors for ADI parameters
	- Enabled ADI for swappable pages

v2:
	- Fixed a build error

 Documentation/sparc/adi.txt             | 272 +++++++++++++++++++++++++++++++
 arch/sparc/include/asm/mman.h           |  72 ++++++++-
 arch/sparc/include/asm/mmu_64.h         |  17 ++
 arch/sparc/include/asm/mmu_context_64.h |  43 +++++
 arch/sparc/include/asm/page_64.h        |   4 +
 arch/sparc/include/asm/pgtable_64.h     |  46 ++++++
 arch/sparc/include/asm/thread_info_64.h |   2 +-
 arch/sparc/include/asm/trap_block.h     |   2 +
 arch/sparc/include/uapi/asm/mman.h      |   2 +
 arch/sparc/kernel/adi_64.c              | 277 ++++++++++++++++++++++++++++++++
 arch/sparc/kernel/etrap_64.S            |  28 +++-
 arch/sparc/kernel/process_64.c          |  25 +++
 arch/sparc/kernel/setup_64.c            |  11 +-
 arch/sparc/kernel/vmlinux.lds.S         |   5 +
 arch/sparc/mm/gup.c                     |  37 +++++
 arch/sparc/mm/hugetlbpage.c             |  14 +-
 arch/sparc/mm/init_64.c                 |  33 ++++
 arch/sparc/mm/tsb.c                     |  21 +++
 include/linux/mm.h                      |   3 +
 mm/ksm.c                                |   4 +
 20 files changed, 913 insertions(+), 5 deletions(-)
 create mode 100644 Documentation/sparc/adi.txt

diff --git a/Documentation/sparc/adi.txt b/Documentation/sparc/adi.txt
new file mode 100644
index 000000000000..383bc65fec1e
--- /dev/null
+++ b/Documentation/sparc/adi.txt
@@ -0,0 +1,272 @@
+Application Data Integrity (ADI)
+================================
+
+SPARC M7 processor adds the Application Data Integrity (ADI) feature.
+ADI allows a task to set version tags on any subset of its address
+space. Once ADI is enabled and version tags are set for ranges of
+address space of a task, the processor will compare the tag in pointers
+to memory in these ranges to the version set by the application
+previously. Access to memory is granted only if the tag in given pointer
+matches the tag set by the application. In case of mismatch, processor
+raises an exception.
+
+Following steps must be taken by a task to enable ADI fully:
+
+1. Set the user mode PSTATE.mcde bit. This acts as the master switch
+   that enables or disables ADI for the task's entire address space.
+
+2. Set TTE.mcd bit on any TLB entries that correspond to the range of
+   addresses ADI is being enabled on. The MMU checks the version tag
+   only on pages that have the TTE.mcd bit set.
+
+3. Set the version tag for virtual addresses using stxa instruction
+   and one of the MCD specific ASIs. Each stxa instruction sets the
+   given tag for one ADI-block-size worth of bytes. This step must
+   be repeated across the entire page to tag the whole page.
+
+ADI block size for the platform is provided by the hypervisor to kernel
+in machine description tables. Hypervisor also provides the number of
+top bits in the virtual address that specify the version tag.  Once
+version tag has been set for a memory location, the tag is stored in the
+physical memory and the same tag must be present in the ADI version tag
+bits of the virtual address being presented to the MMU. For example on
+SPARC M7 processor, the MMU uses bits 63-60 for version tags, and the
+ADI block size is the same as the cacheline size, which is 64 bytes. A
+task that sets ADI version to, say, 10 on a range of memory must access
+that memory using virtual addresses that contain 0xa in bits 63-60.
+
+ADI is enabled on a set of pages using mprotect() with PROT_ADI flag.
+When ADI is enabled on a set of pages by a task for the first time,
+kernel sets the PSTATE.mcde bit for the task. Version tags for memory
+addresses are set with an stxa instruction on the addresses using
+ASI_MCD_PRIMARY or ASI_MCD_ST_BLKINIT_PRIMARY. ADI block size is
+provided by the hypervisor to the kernel.  The kernel returns the ADI
+block size to userspace in an auxiliary vector along with other ADI
+info. The following auxiliary vectors are provided by the kernel:
+
+	AT_ADI_BLKSZ	ADI block size. This is the granularity and
+			alignment, in bytes, of ADI versioning.
+	AT_ADI_NBITS	Number of ADI version bits in the VA
+
+
+IMPORTANT NOTES:
+
+- Version tag values of 0x0 and 0xf are reserved.
+
+- Version tags are set on virtual addresses from userspace even though
+  tags are stored in physical memory. Tags are set on a physical page
+  after it has been allocated to a task and a pte has been created for
+  it.
+
+- When a task frees a memory page it had set version tags on, the page
+  goes back to the free page pool. When it is re-allocated to a task,
+  kernel clears the page using block initialization ASI which clears the
+  version tags as well for the page. If a page allocated to a task is
+  freed and allocated back to the same task, old version tags set by the
+  task on that page will no longer be present.
+
+- Kernel does not set any tags for user pages and it is entirely a
+  task's responsibility to set any version tags. Kernel does ensure the
+  version tags are preserved if a page is swapped out to the disk and
+  swapped back in. It also preserves that version tags if a page is
+  migrated.
+
+- ADI works with pages of any size. A userspace task need not be aware
+  of the page size when using ADI. It can simply select a virtual
+  address range, enable ADI on the range using mprotect() and set
+  version tags for the entire range. mprotect() ensures the range is
+  page aligned and its length is a multiple of the page size.
+
+
+
+ADI related traps
+-----------------
+
+With ADI enabled, following new traps may occur:
+
+Disrupting memory corruption
+
+	When a store accesses a memory location that has TTE.mcd=1,
+	the task is running with ADI enabled (PSTATE.mcde=1), and the ADI
+	tag in the address used (bits 63:60) does not match the tag set on
+	the corresponding cacheline, a memory corruption trap occurs. By
+	default, it is a disrupting trap and is sent to the hypervisor
+	first. Hypervisor creates a sun4v error report and sends a
+	resumable error (TT=0x7e) trap to the kernel. The kernel sends
+	a SIGSEGV to the task that resulted in this trap with the following
+	info:
+
+		siginfo.si_signo = SIGSEGV;
+		siginfo.errno = 0;
+		siginfo.si_code = SEGV_ADIDERR;
+		siginfo.si_addr = addr; /* PC where first mismatch occurred */
+		siginfo.si_trapno = 0;
+
+
+Precise memory corruption
+
+	When a store accesses a memory location that has TTE.mcd=1,
+	the task is running with ADI enabled (PSTATE.mcde=1), and the ADI
+	tag in the address used (bits 63:60) does not match the tag set on
+	the corresponding cacheline, a memory corruption trap occurs. If
+	MCD precise exception is enabled (MCDPERR=1), a precise
+	exception is sent to the kernel with TT=0x1a. The kernel sends
+	a SIGSEGV to the task that resulted in this trap with the following
+	info:
+
+		siginfo.si_signo = SIGSEGV;
+		siginfo.errno = 0;
+		siginfo.si_code = SEGV_ADIPERR;
+		siginfo.si_addr = addr;	/* address that caused trap */
+		siginfo.si_trapno = 0;
+
+	NOTE: ADI tag mismatch on a load always results in precise trap.
+
+
+MCD disabled
+
+	When a task has not enabled ADI and attempts to set ADI version
+	on a memory address, processor sends an MCD disabled trap. This
+	trap is handled by hypervisor first and the hypervisor vectors this
+	trap through to the kernel as Data Access Exception trap with
+	fault type set to 0xa (invalid ASI). When this occurs, the kernel
+	sends the task SIGSEGV signal with following info:
+
+		siginfo.si_signo = SIGSEGV;
+		siginfo.errno = 0;
+		siginfo.si_code = SEGV_ACCADI;
+		siginfo.si_addr = addr;	/* address that caused trap */
+		siginfo.si_trapno = 0;
+
+
+Sample program to use ADI
+-------------------------
+
+Following sample program is meant to illustrate how to use the ADI
+functionality.
+
+#include <unistd.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <elf.h>
+#include <sys/ipc.h>
+#include <sys/shm.h>
+#include <sys/mman.h>
+#include <asm/asi.h>
+
+#ifndef AT_ADI_BLKSZ
+#define AT_ADI_BLKSZ	48
+#endif
+#ifndef AT_ADI_NBITS
+#define AT_ADI_NBITS	49
+#endif
+
+#ifndef PROT_ADI
+#define PROT_ADI	0x10
+#endif
+
+#define BUFFER_SIZE     32*1024*1024UL
+
+int main(int argc, char* argv[], char* envp[])
+{
+        unsigned long i, mcde, adi_blksz, adi_nbits;
+        char *shmaddr, *tmp_addr, *end, *veraddr, *clraddr;
+        int shmid, version;
+	Elf64_auxv_t *auxv;
+
+	adi_blksz = 0;
+
+	while(*envp++ != NULL);
+	for (auxv = (Elf64_auxv_t *)envp; auxv->a_type != AT_NULL; auxv++) {
+		switch (auxv->a_type) {
+		case AT_ADI_BLKSZ:
+			adi_blksz = auxv->a_un.a_val;
+			break;
+		case AT_ADI_NBITS:
+			adi_nbits = auxv->a_un.a_val;
+			break;
+		}
+	}
+	if (adi_blksz == 0) {
+		fprintf(stderr, "Oops! ADI is not supported\n");
+		exit(1);
+	}
+
+	printf("ADI capabilities:\n");
+	printf("\tBlock size = %ld\n", adi_blksz);
+	printf("\tNumber of bits = %ld\n", adi_nbits);
+
+        if ((shmid = shmget(2, BUFFER_SIZE,
+                                IPC_CREAT | SHM_R | SHM_W)) < 0) {
+                perror("shmget failed");
+                exit(1);
+        }
+
+        shmaddr = shmat(shmid, NULL, 0);
+        if (shmaddr == (char *)-1) {
+                perror("shm attach failed");
+                shmctl(shmid, IPC_RMID, NULL);
+                exit(1);
+        }
+
+	if (mprotect(shmaddr, BUFFER_SIZE, PROT_READ|PROT_WRITE|PROT_ADI)) {
+		perror("mprotect failed");
+		goto err_out;
+	}
+
+        /* Set the ADI version tag on the shm segment
+         */
+        version = 10;
+        tmp_addr = shmaddr;
+        end = shmaddr + BUFFER_SIZE;
+        while (tmp_addr < end) {
+                asm volatile(
+                        "stxa %1, [%0]0x90\n\t"
+                        :
+                        : "r" (tmp_addr), "r" (version));
+                tmp_addr += adi_blksz;
+        }
+	asm volatile("membar #Sync\n\t");
+
+        /* Create a versioned address from the normal address by placing
+	 * version tag in the upper adi_nbits bits
+         */
+        tmp_addr = (void *) ((unsigned long)shmaddr << adi_nbits);
+        tmp_addr = (void *) ((unsigned long)tmp_addr >> adi_nbits);
+        veraddr = (void *) (((unsigned long)version << (64-adi_nbits))
+                        | (unsigned long)tmp_addr);
+
+        printf("Starting the writes:\n");
+        for (i = 0; i < BUFFER_SIZE; i++) {
+                veraddr[i] = (char)(i);
+                if (!(i % (1024 * 1024)))
+                        printf(".");
+        }
+        printf("\n");
+
+        printf("Verifying data...");
+	fflush(stdout);
+        for (i = 0; i < BUFFER_SIZE; i++)
+                if (veraddr[i] != (char)i)
+                        printf("\nIndex %lu mismatched\n", i);
+        printf("Done.\n");
+
+        /* Disable ADI and clean up
+         */
+	if (mprotect(shmaddr, BUFFER_SIZE, PROT_READ|PROT_WRITE)) {
+		perror("mprotect failed");
+		goto err_out;
+	}
+
+        if (shmdt((const void *)shmaddr) != 0)
+                perror("Detach failure");
+        shmctl(shmid, IPC_RMID, NULL);
+
+        exit(0);
+
+err_out:
+        if (shmdt((const void *)shmaddr) != 0)
+                perror("Detach failure");
+        shmctl(shmid, IPC_RMID, NULL);
+        exit(1);
+}
diff --git a/arch/sparc/include/asm/mman.h b/arch/sparc/include/asm/mman.h
index 59bb5938d852..b799796ad963 100644
--- a/arch/sparc/include/asm/mman.h
+++ b/arch/sparc/include/asm/mman.h
@@ -6,5 +6,75 @@
 #ifndef __ASSEMBLY__
 #define arch_mmap_check(addr,len,flags)	sparc_mmap_check(addr,len)
 int sparc_mmap_check(unsigned long addr, unsigned long len);
-#endif
+
+#ifdef CONFIG_SPARC64
+#include <asm/adi_64.h>
+
+#define arch_calc_vm_prot_bits(prot, pkey) sparc_calc_vm_prot_bits(prot)
+static inline unsigned long sparc_calc_vm_prot_bits(unsigned long prot)
+{
+	if (prot & PROT_ADI) {
+		struct pt_regs *regs;
+
+		if (!current->mm->context.adi) {
+			regs = task_pt_regs(current);
+			regs->tstate |= TSTATE_MCDE;
+			current->mm->context.adi = true;
+		}
+		return VM_SPARC_ADI;
+	} else {
+		return 0;
+	}
+}
+
+#define arch_vm_get_page_prot(vm_flags) sparc_vm_get_page_prot(vm_flags)
+static inline pgprot_t sparc_vm_get_page_prot(unsigned long vm_flags)
+{
+	return (vm_flags & VM_SPARC_ADI) ? __pgprot(_PAGE_MCD_4V) : __pgprot(0);
+}
+
+#define arch_validate_prot(prot, addr) sparc_validate_prot(prot, addr)
+static inline int sparc_validate_prot(unsigned long prot, unsigned long addr)
+{
+	if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_ADI))
+		return 0;
+	if (prot & PROT_ADI) {
+		if (!adi_capable())
+			return 0;
+
+		/* ADI tags can not be set on read-only memory, so it makes
+		 * sense to enable ADI on writable memory only.
+		 */
+		if (!(prot & PROT_WRITE))
+			return 0;
+
+		if (addr) {
+			struct vm_area_struct *vma;
+
+			vma = find_vma(current->mm, addr);
+			if (vma) {
+				/* ADI can not be enabled on PFN
+				 * mapped pages
+				 */
+				if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
+					return 0;
+
+				/* Mergeable pages can become unmergeable
+				 * if ADI is enabled on them even if they
+				 * have identical data on them. This can be
+				 * because ADI enabled pages with identical
+				 * data may still not have identical ADI
+				 * tags on them. Disallow ADI on mergeable
+				 * pages.
+				 */
+				if (vma->vm_flags & VM_MERGEABLE)
+					return 0;
+			}
+		}
+	}
+	return 1;
+}
+#endif /* CONFIG_SPARC64 */
+
+#endif /* __ASSEMBLY__ */
 #endif /* __SPARC_MMAN_H__ */
diff --git a/arch/sparc/include/asm/mmu_64.h b/arch/sparc/include/asm/mmu_64.h
index 83b36a5371ff..a65d51ebe00b 100644
--- a/arch/sparc/include/asm/mmu_64.h
+++ b/arch/sparc/include/asm/mmu_64.h
@@ -89,6 +89,20 @@ struct tsb_config {
 #define MM_NUM_TSBS	1
 #endif
 
+/* ADI tags are stored when a page is swapped out and the storage for
+ * tags is allocated dynamically. There is a tag storage descriptor
+ * associated with each set of tag storage pages. Tag storage descriptors
+ * are allocated dynamically. Since kernel will allocate a full page for
+ * each tag storage descriptor, we can store up to
+ * PAGE_SIZE/sizeof(tag storage descriptor) descriptors on that page.
+ */
+typedef struct {
+	unsigned long	start;		/* Start address for this tag storage */
+	unsigned long	end;		/* Last address for tag storage */
+	unsigned char	*tags;		/* Where the tags are */
+	unsigned long	tag_users;	/* number of references to descriptor */
+} tag_storage_desc_t;
+
 typedef struct {
 	spinlock_t		lock;
 	unsigned long		sparc64_ctx_val;
@@ -96,6 +110,9 @@ typedef struct {
 	unsigned long		thp_pte_count;
 	struct tsb_config	tsb_block[MM_NUM_TSBS];
 	struct hv_tsb_descr	tsb_descr[MM_NUM_TSBS];
+	bool			adi;
+	tag_storage_desc_t	*tag_store;
+	spinlock_t		tag_lock;
 } mm_context_t;
 
 #endif /* !__ASSEMBLY__ */
diff --git a/arch/sparc/include/asm/mmu_context_64.h b/arch/sparc/include/asm/mmu_context_64.h
index 2cddcda4f85f..68de059551f9 100644
--- a/arch/sparc/include/asm/mmu_context_64.h
+++ b/arch/sparc/include/asm/mmu_context_64.h
@@ -9,6 +9,7 @@
 #include <linux/mm_types.h>
 
 #include <asm/spitfire.h>
+#include <asm/adi_64.h>
 #include <asm-generic/mm_hooks.h>
 
 static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
@@ -129,6 +130,48 @@ static inline void switch_mm(struct mm_struct *old_mm, struct mm_struct *mm, str
 
 #define deactivate_mm(tsk,mm)	do { } while (0)
 #define activate_mm(active_mm, mm) switch_mm(active_mm, mm, NULL)
+
+#define  __HAVE_ARCH_START_CONTEXT_SWITCH
+static inline void arch_start_context_switch(struct task_struct *prev)
+{
+	/* Save the current state of MCDPER register for the process
+	 * we are switching from
+	 */
+	if (adi_capable()) {
+		register unsigned long tmp_mcdper;
+
+		__asm__ __volatile__(
+			".word 0x83438000\n\t"	/* rd  %mcdper, %g1 */
+			"mov %%g1, %0\n\t"
+			: "=r" (tmp_mcdper)
+			:
+			: "g1");
+		if (tmp_mcdper)
+			set_tsk_thread_flag(prev, TIF_MCDPER);
+		else
+			clear_tsk_thread_flag(prev, TIF_MCDPER);
+	}
+}
+
+#define finish_arch_post_lock_switch	finish_arch_post_lock_switch
+static inline void finish_arch_post_lock_switch(void)
+{
+	/* Restore the state of MCDPER register for the new process
+	 * just switched to.
+	 */
+	if (adi_capable()) {
+		register unsigned long tmp_mcdper;
+
+		tmp_mcdper = test_thread_flag(TIF_MCDPER);
+		__asm__ __volatile__(
+			"mov %0, %%g1\n\t"
+			".word 0x9d800001\n\t"	/* wr %g0, %g1, %mcdper" */
+			:
+			: "ir" (tmp_mcdper)
+			: "g1");
+	}
+}
+
 #endif /* !(__ASSEMBLY__) */
 
 #endif /* !(__SPARC64_MMU_CONTEXT_H) */
diff --git a/arch/sparc/include/asm/page_64.h b/arch/sparc/include/asm/page_64.h
index 5961b2d8398a..dc582c5611f8 100644
--- a/arch/sparc/include/asm/page_64.h
+++ b/arch/sparc/include/asm/page_64.h
@@ -46,6 +46,10 @@ struct page;
 void clear_user_page(void *addr, unsigned long vaddr, struct page *page);
 #define copy_page(X,Y)	memcpy((void *)(X), (void *)(Y), PAGE_SIZE)
 void copy_user_page(void *to, void *from, unsigned long vaddr, struct page *topage);
+#define __HAVE_ARCH_COPY_USER_HIGHPAGE
+struct vm_area_struct;
+void copy_user_highpage(struct page *to, struct page *from,
+			unsigned long vaddr, struct vm_area_struct *vma);
 
 /* Unlike sparc32, sparc64's parameter passing API is more
  * sane in that structures which as small enough are passed
diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index af045061f41e..51da342c392d 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -18,6 +18,7 @@
 #include <asm/types.h>
 #include <asm/spitfire.h>
 #include <asm/asi.h>
+#include <asm/adi.h>
 #include <asm/page.h>
 #include <asm/processor.h>
 
@@ -570,6 +571,18 @@ static inline pte_t pte_mkspecial(pte_t pte)
 	return pte;
 }
 
+static inline pte_t pte_mkmcd(pte_t pte)
+{
+	pte_val(pte) |= _PAGE_MCD_4V;
+	return pte;
+}
+
+static inline pte_t pte_mknotmcd(pte_t pte)
+{
+	pte_val(pte) &= ~_PAGE_MCD_4V;
+	return pte;
+}
+
 static inline unsigned long pte_young(pte_t pte)
 {
 	unsigned long mask;
@@ -1001,6 +1014,39 @@ int page_in_phys_avail(unsigned long paddr);
 int remap_pfn_range(struct vm_area_struct *, unsigned long, unsigned long,
 		    unsigned long, pgprot_t);
 
+void adi_restore_tags(struct mm_struct *mm, struct vm_area_struct *vma,
+		      unsigned long addr, pte_t pte);
+
+int adi_save_tags(struct mm_struct *mm, struct vm_area_struct *vma,
+		  unsigned long addr, pte_t oldpte);
+
+#define __HAVE_ARCH_DO_SWAP_PAGE
+static inline void arch_do_swap_page(struct mm_struct *mm,
+				     struct vm_area_struct *vma,
+				     unsigned long addr,
+				     pte_t pte, pte_t oldpte)
+{
+	/* If this is a new page being mapped in, there can be no
+	 * ADI tags stored away for this page. Skip looking for
+	 * stored tags
+	 */
+	if (pte_none(oldpte))
+		return;
+
+	if (adi_state.enabled && (pte_val(pte) & _PAGE_MCD_4V))
+		adi_restore_tags(mm, vma, addr, pte);
+}
+
+#define __HAVE_ARCH_UNMAP_ONE
+static inline int arch_unmap_one(struct mm_struct *mm,
+				 struct vm_area_struct *vma,
+				 unsigned long addr, pte_t oldpte)
+{
+	if (adi_state.enabled && (pte_val(oldpte) & _PAGE_MCD_4V))
+		return adi_save_tags(mm, vma, addr, oldpte);
+	return 0;
+}
+
 static inline int io_remap_pfn_range(struct vm_area_struct *vma,
 				     unsigned long from, unsigned long pfn,
 				     unsigned long size, pgprot_t prot)
diff --git a/arch/sparc/include/asm/thread_info_64.h b/arch/sparc/include/asm/thread_info_64.h
index 38a24f257b85..9c04acb1f9af 100644
--- a/arch/sparc/include/asm/thread_info_64.h
+++ b/arch/sparc/include/asm/thread_info_64.h
@@ -190,7 +190,7 @@ register struct thread_info *current_thread_info_reg asm("g6");
  *       in using in assembly, else we can't use the mask as
  *       an immediate value in instructions such as andcc.
  */
-/* flag bit 12 is available */
+#define TIF_MCDPER		12	/* Precise MCD exception */
 #define TIF_MEMDIE		13	/* is terminating due to OOM killer */
 #define TIF_POLLING_NRFLAG	14
 
diff --git a/arch/sparc/include/asm/trap_block.h b/arch/sparc/include/asm/trap_block.h
index ec9c04de3664..b283e940671a 100644
--- a/arch/sparc/include/asm/trap_block.h
+++ b/arch/sparc/include/asm/trap_block.h
@@ -72,6 +72,8 @@ struct sun4v_1insn_patch_entry {
 };
 extern struct sun4v_1insn_patch_entry __sun4v_1insn_patch,
 	__sun4v_1insn_patch_end;
+extern struct sun4v_1insn_patch_entry __sun_m7_1insn_patch,
+	__sun_m7_1insn_patch_end;
 
 struct sun4v_2insn_patch_entry {
 	unsigned int	addr;
diff --git a/arch/sparc/include/uapi/asm/mman.h b/arch/sparc/include/uapi/asm/mman.h
index 9765896ecb2c..a72c03397345 100644
--- a/arch/sparc/include/uapi/asm/mman.h
+++ b/arch/sparc/include/uapi/asm/mman.h
@@ -5,6 +5,8 @@
 
 /* SunOS'ified... */
 
+#define PROT_ADI	0x10		/* ADI enabled */
+
 #define MAP_RENAME      MAP_ANONYMOUS   /* In SunOS terminology */
 #define MAP_NORESERVE   0x40            /* don't reserve swap pages */
 #define MAP_INHERIT     0x80            /* SunOS doesn't do this, but... */
diff --git a/arch/sparc/kernel/adi_64.c b/arch/sparc/kernel/adi_64.c
index 9fbb5dd4a7bf..83c1e36ae5fa 100644
--- a/arch/sparc/kernel/adi_64.c
+++ b/arch/sparc/kernel/adi_64.c
@@ -7,10 +7,24 @@
  * This work is licensed under the terms of the GNU GPL, version 2.
  */
 #include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/mm_types.h>
 #include <asm/mdesc.h>
 #include <asm/adi_64.h>
+#include <asm/mmu_64.h>
+#include <asm/pgtable_64.h>
+
+/* Each page of storage for ADI tags can accommodate tags for 128
+ * pages. When ADI enabled pages are being swapped out, it would be
+ * prudent to allocate at least enough tag storage space to accommodate
+ * SWAPFILE_CLUSTER number of pages. Allocate enough tag storage to
+ * store tags for four SWAPFILE_CLUSTER pages to reduce need for
+ * further allocations for same vma.
+ */
+#define TAG_STORAGE_PAGES	8
 
 struct adi_config adi_state;
+EXPORT_SYMBOL(adi_state);
 
 /* mdesc_adi_init() : Parse machine description provided by the
  *	hypervisor to detect ADI capabilities
@@ -78,6 +92,19 @@ void __init mdesc_adi_init(void)
 		goto adi_not_found;
 	adi_state.caps.nbits = *val;
 
+	/* Some of the code to support swapping ADI tags is written with
+	 * the assumption that two ADI tags can fit inside one byte. If
+	 * this assumption is broken by a future architecture change,
+	 * that code will have to be revisited. If that were to happen,
+	 * disable ADI support so we do not get unpredictable results
+	 * with programs trying to use ADI and their pages getting
+	 * swapped out
+	 */
+	if (adi_state.caps.nbits > 4) {
+		pr_warn("WARNING: ADI tag size >4 on this platform. Disabling ADI support\n");
+		adi_state.enabled = false;
+	}
+
 	mdesc_release(hp);
 	return;
 
@@ -88,3 +115,253 @@ void __init mdesc_adi_init(void)
 	if (hp)
 		mdesc_release(hp);
 }
+
+tag_storage_desc_t *find_tag_store(struct mm_struct *mm,
+				   struct vm_area_struct *vma,
+				   unsigned long addr)
+{
+	tag_storage_desc_t *tag_desc = NULL;
+	unsigned long i, max_desc, flags;
+
+	/* Check if this vma already has tag storage descriptor
+	 * allocated for it.
+	 */
+	max_desc = PAGE_SIZE/sizeof(tag_storage_desc_t);
+	if (mm->context.tag_store) {
+		tag_desc = mm->context.tag_store;
+		spin_lock_irqsave(&mm->context.tag_lock, flags);
+		for (i = 0; i < max_desc; i++) {
+			if ((addr >= tag_desc->start) &&
+			    ((addr + PAGE_SIZE - 1) <= tag_desc->end))
+				break;
+			tag_desc++;
+		}
+		spin_unlock_irqrestore(&mm->context.tag_lock, flags);
+
+		/* If no matching entries were found, this must be a
+		 * freshly allocated page
+		 */
+		if (i >= max_desc)
+			tag_desc = NULL;
+	}
+
+	return tag_desc;
+}
+
+tag_storage_desc_t *alloc_tag_store(struct mm_struct *mm,
+				    struct vm_area_struct *vma,
+				    unsigned long addr)
+{
+	unsigned char *tags;
+	unsigned long i, size, max_desc, flags;
+	tag_storage_desc_t *tag_desc, *open_desc;
+	unsigned long end_addr, hole_start, hole_end;
+
+	max_desc = PAGE_SIZE/sizeof(tag_storage_desc_t);
+	open_desc = NULL;
+	hole_start = 0;
+	hole_end = ULONG_MAX;
+	end_addr = addr + PAGE_SIZE - 1;
+
+	/* Check if this vma already has tag storage descriptor
+	 * allocated for it.
+	 */
+	spin_lock_irqsave(&mm->context.tag_lock, flags);
+	if (mm->context.tag_store) {
+		tag_desc = mm->context.tag_store;
+
+		/* Look for a matching entry for this address. While doing
+		 * that, look for the first open slot as well and find
+		 * the hole in already allocated range where this request
+		 * will fit in.
+		 */
+		for (i = 0; i < max_desc; i++) {
+			if (tag_desc->tag_users == 0) {
+				if (open_desc == NULL)
+					open_desc = tag_desc;
+			} else {
+				if ((addr >= tag_desc->start) &&
+				    (tag_desc->end >= (addr + PAGE_SIZE - 1))) {
+					tag_desc->tag_users++;
+					goto out;
+				}
+			}
+			if ((tag_desc->start > end_addr) &&
+			    (tag_desc->start < hole_end))
+				hole_end = tag_desc->start;
+			if ((tag_desc->end < addr) &&
+			    (tag_desc->end > hole_start))
+				hole_start = tag_desc->end;
+			tag_desc++;
+		}
+
+	} else {
+		size = sizeof(tag_storage_desc_t)*max_desc;
+		mm->context.tag_store = kzalloc(size, GFP_NOIO|__GFP_NOWARN);
+		if (mm->context.tag_store == NULL) {
+			tag_desc = NULL;
+			goto out;
+		}
+		tag_desc = mm->context.tag_store;
+		for (i = 0; i < max_desc; i++, tag_desc++)
+			tag_desc->tag_users = 0;
+		open_desc = mm->context.tag_store;
+		i = 0;
+	}
+
+	/* Check if we ran out of tag storage descriptors */
+	if (open_desc == NULL) {
+		tag_desc = NULL;
+		goto out;
+	}
+
+	/* Mark this tag descriptor slot in use and then initialize it */
+	tag_desc = open_desc;
+	tag_desc->tag_users = 1;
+
+	/* Tag storage has not been allocated for this vma and space
+	 * is available in tag storage descriptor. Since this page is
+	 * being swapped out, there is high probability subsequent pages
+	 * in the VMA will be swapped out as well. Allocate pages to
+	 * store tags for as many pages in this vma as possible but not
+	 * more than TAG_STORAGE_PAGES. Each byte in tag space holds
+	 * two ADI tags since each ADI tag is 4 bits. Each ADI tag
+	 * covers adi_blksize() worth of addresses. Check if the hole is
+	 * big enough to accommodate full address range for using
+	 * TAG_STORAGE_PAGES number of tag pages.
+	 */
+	size = TAG_STORAGE_PAGES * PAGE_SIZE;
+	end_addr = addr + (size*2*adi_blksize()) - 1;
+	if (hole_end < end_addr) {
+		/* Available hole is too small on the upper end of
+		 * address. Can we expand the range towards the lower
+		 * address and maximize use of this slot?
+		 */
+		unsigned long tmp_addr;
+
+		end_addr = hole_end - 1;
+		tmp_addr = end_addr - (size*2*adi_blksize()) + 1;
+		if (tmp_addr < hole_start) {
+			/* Available hole is restricted on lower address
+			 * end as well
+			 */
+			tmp_addr = hole_start + 1;
+		}
+		addr = tmp_addr;
+		size = (end_addr + 1 - addr)/(2*adi_blksize());
+		size = (size + (PAGE_SIZE-adi_blksize()))/PAGE_SIZE;
+		size = size * PAGE_SIZE;
+	}
+	tags = kzalloc(size, GFP_NOIO|__GFP_NOWARN);
+	if (tags == NULL) {
+		tag_desc->tag_users = 0;
+		tag_desc = NULL;
+		goto out;
+	}
+	tag_desc->start = addr;
+	tag_desc->tags = tags;
+	tag_desc->end = end_addr;
+
+out:
+	spin_unlock_irqrestore(&mm->context.tag_lock, flags);
+	return tag_desc;
+}
+
+void del_tag_store(tag_storage_desc_t *tag_desc, struct mm_struct *mm)
+{
+	unsigned long flags;
+	unsigned char *tags = NULL;
+
+	spin_lock_irqsave(&mm->context.tag_lock, flags);
+	tag_desc->tag_users--;
+	if (tag_desc->tag_users == 0) {
+		tag_desc->start = tag_desc->end = 0;
+		/* Do not free up the tag storage space allocated
+		 * by the first descriptor. This is persistent
+		 * emergency tag storage space for the task.
+		 */
+		if (tag_desc != mm->context.tag_store) {
+			tags = tag_desc->tags;
+			tag_desc->tags = NULL;
+		}
+	}
+	spin_unlock_irqrestore(&mm->context.tag_lock, flags);
+	kfree(tags);
+}
+
+#define tag_start(addr, tag_desc)		\
+	((tag_desc)->tags + ((addr - (tag_desc)->start)/(2*adi_blksize())))
+
+/* Retrieve any saved ADI tags for the page being swapped back in and
+ * restore these tags to the newly allocated physical page.
+ */
+void adi_restore_tags(struct mm_struct *mm, struct vm_area_struct *vma,
+		      unsigned long addr, pte_t pte)
+{
+	unsigned char *tag;
+	tag_storage_desc_t *tag_desc;
+	unsigned long paddr, tmp, version1, version2;
+
+	/* Check if the swapped out page has an ADI version
+	 * saved. If yes, restore version tag to the newly
+	 * allocated page.
+	 */
+	tag_desc = find_tag_store(mm, vma, addr);
+	if (tag_desc == NULL)
+		return;
+
+	tag = tag_start(addr, tag_desc);
+	paddr = pte_val(pte) & _PAGE_PADDR_4V;
+	for (tmp = paddr; tmp < (paddr+PAGE_SIZE); tmp += adi_blksize()) {
+		version1 = (*tag) >> 4;
+		version2 = (*tag) & 0x0f;
+		*tag++ = 0;
+		asm volatile("stxa %0, [%1] %2\n\t"
+			:
+			: "r" (version1), "r" (tmp),
+			  "i" (ASI_MCD_REAL));
+		tmp += adi_blksize();
+		asm volatile("stxa %0, [%1] %2\n\t"
+			:
+			: "r" (version2), "r" (tmp),
+			  "i" (ASI_MCD_REAL));
+	}
+	asm volatile("membar #Sync\n\t");
+
+	/* Check and mark this tag space for release later if
+	 * the swapped in page was the last user of tag space
+	 */
+	del_tag_store(tag_desc, mm);
+}
+
+/* A page is about to be swapped out. Save any ADI tags associated with
+ * this physical page so they can be restored later when the page is swapped
+ * back in.
+ */
+int adi_save_tags(struct mm_struct *mm, struct vm_area_struct *vma,
+		  unsigned long addr, pte_t oldpte)
+{
+	unsigned char *tag;
+	tag_storage_desc_t *tag_desc;
+	unsigned long version1, version2, paddr, tmp;
+
+	tag_desc = alloc_tag_store(mm, vma, addr);
+	if (tag_desc == NULL)
+		return -1;
+
+	tag = tag_start(addr, tag_desc);
+	paddr = pte_val(oldpte) & _PAGE_PADDR_4V;
+	for (tmp = paddr; tmp < (paddr+PAGE_SIZE); tmp += adi_blksize()) {
+		asm volatile("ldxa [%1] %2, %0\n\t"
+				: "=r" (version1)
+				: "r" (tmp), "i" (ASI_MCD_REAL));
+		tmp += adi_blksize();
+		asm volatile("ldxa [%1] %2, %0\n\t"
+				: "=r" (version2)
+				: "r" (tmp), "i" (ASI_MCD_REAL));
+		*tag = (version1 << 4) | version2;
+		tag++;
+	}
+
+	return 0;
+}
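
As a quick sanity check of the coverage claim in the comment at the top
of this file, the arithmetic can be reproduced in a standalone sketch
(not kernel code; it assumes 8K pages and a 64-byte ADI block size as
on M7):

#include <stdio.h>

int main(void)
{
	unsigned long page_size = 8192;		/* assumed PAGE_SIZE */
	unsigned long adi_blksize = 64;		/* assumed ADI block size */

	/* one 4-bit tag per ADI block, two tags packed per byte */
	unsigned long tags_per_page = page_size / adi_blksize;	/* 128 */
	unsigned long tag_bytes_per_page = tags_per_page / 2;	/* 64 */

	/* data pages whose tags fit in one page of tag storage */
	printf("%lu\n", page_size / tag_bytes_per_page);	/* prints 128 */
	return 0;
}
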
diff --git a/arch/sparc/kernel/etrap_64.S b/arch/sparc/kernel/etrap_64.S
index 1276ca2567ba..7be33bf45cff 100644
--- a/arch/sparc/kernel/etrap_64.S
+++ b/arch/sparc/kernel/etrap_64.S
@@ -132,7 +132,33 @@ etrap_save:	save	%g2, -STACK_BIAS, %sp
 		stx	%g6, [%sp + PTREGS_OFF + PT_V9_G6]
 		stx	%g7, [%sp + PTREGS_OFF + PT_V9_G7]
 		or	%l7, %l0, %l7
-		sethi	%hi(TSTATE_TSO | TSTATE_PEF), %l0
+661:		sethi	%hi(TSTATE_TSO | TSTATE_PEF), %l0
+		/*
+		 * If userspace is using ADI, it could potentially pass
+		 * a pointer with version tag embedded in it. To maintain
+		 * the ADI security, we must enable PSTATE.mcde. Userspace
+		 * would have already set TTE.mcd in an earlier call to
+		 * kernel and set the version tag for the address being
+		 * dereferenced. Setting PSTATE.mcde would ensure any
+		 * access to userspace data through a system call honors
+		 * ADI and does not allow a rogue app to bypass ADI by
+		 * using system calls. Setting PSTATE.mcde only affects
+		 * accesses to virtual addresses that have TTE.mcd set.
+		 * Set PMCDPER to ensure any exceptions caused by ADI
+		 * version tag mismatch are exposed before system call
+		 * returns to userspace. Setting PMCDPER affects only
+		 * writes to virtual addresses that have TTE.mcd set and
+		 * have a version tag set as well.
+		 */
+		.section .sun_m7_1insn_patch, "ax"
+		.word	661b
+		sethi	%hi(TSTATE_TSO | TSTATE_PEF | TSTATE_MCDE), %l0
+		.previous
+661:		nop
+		.section .sun_m7_1insn_patch, "ax"
+		.word	661b
+		.word 0xaf902001	/* wrpr %g0, 1, %pmcdper */
+		.previous
 		or	%l7, %l0, %l7
 		wrpr	%l2, %tnpc
 		wrpr	%l7, (TSTATE_PRIV | TSTATE_IE), %tstate
diff --git a/arch/sparc/kernel/process_64.c b/arch/sparc/kernel/process_64.c
index b96104da5bd6..defa5723dfa6 100644
--- a/arch/sparc/kernel/process_64.c
+++ b/arch/sparc/kernel/process_64.c
@@ -664,6 +664,31 @@ int copy_thread(unsigned long clone_flags, unsigned long sp,
 	return 0;
 }
 
+/* TIF_MCDPER in thread info flags for current task is updated lazily upon
+ * a context switch. Update this flag in the current task's thread flags
+ * before dup so the dup'd task will inherit the current TIF_MCDPER flag.
+ */
+int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
+{
+	if (adi_capable()) {
+		register unsigned long tmp_mcdper;
+
+		__asm__ __volatile__(
+			".word 0x83438000\n\t"	/* rd  %mcdper, %g1 */
+			"mov %%g1, %0\n\t"
+			: "=r" (tmp_mcdper)
+			:
+			: "g1");
+		if (tmp_mcdper)
+			set_thread_flag(TIF_MCDPER);
+		else
+			clear_thread_flag(TIF_MCDPER);
+	}
+
+	*dst = *src;
+	return 0;
+}
+
 typedef struct {
 	union {
 		unsigned int	pr_regs[32];
diff --git a/arch/sparc/kernel/setup_64.c b/arch/sparc/kernel/setup_64.c
index 422b17880955..a9da205da394 100644
--- a/arch/sparc/kernel/setup_64.c
+++ b/arch/sparc/kernel/setup_64.c
@@ -240,6 +240,12 @@ void sun4v_patch_1insn_range(struct sun4v_1insn_patch_entry *start,
 	}
 }
 
+void sun_m7_patch_1insn_range(struct sun4v_1insn_patch_entry *start,
+			     struct sun4v_1insn_patch_entry *end)
+{
+	sun4v_patch_1insn_range(start, end);
+}
+
 void sun4v_patch_2insn_range(struct sun4v_2insn_patch_entry *start,
 			     struct sun4v_2insn_patch_entry *end)
 {
@@ -289,9 +295,12 @@ static void __init sun4v_patch(void)
 	sun4v_patch_2insn_range(&__sun4v_2insn_patch,
 				&__sun4v_2insn_patch_end);
 	if (sun4v_chip_type == SUN4V_CHIP_SPARC_M7 ||
-	    sun4v_chip_type == SUN4V_CHIP_SPARC_SN)
+	    sun4v_chip_type == SUN4V_CHIP_SPARC_SN) {
+		sun_m7_patch_1insn_range(&__sun_m7_1insn_patch,
+					 &__sun_m7_1insn_patch_end);
 		sun_m7_patch_2insn_range(&__sun_m7_2insn_patch,
 					 &__sun_m7_2insn_patch_end);
+	}
 
 	sun4v_hvapi_init();
 }
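
The alternative instructions patched in by etrap_64.S above land in the
new .sun_m7_1insn_patch section (added to vmlinux.lds.S below) and are
applied at boot by the loop this wrapper reuses. A minimal sketch of
that mechanism, assuming the addr/insn pair layout of
sun4v_1insn_patch_entry; the barrier and I-cache flush the real loop
performs are elided:

struct insn_patch_entry {	/* assumed layout of the patch entry */
	unsigned int	addr;	/* address of the instruction to patch */
	unsigned int	insn;	/* replacement instruction word */
};

static void patch_1insn_range(struct insn_patch_entry *p,
			      struct insn_patch_entry *end)
{
	while (p < end) {
		/* overwrite one instruction word in place */
		*(unsigned int *)(unsigned long)p->addr = p->insn;
		p++;
	}
}
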
diff --git a/arch/sparc/kernel/vmlinux.lds.S b/arch/sparc/kernel/vmlinux.lds.S
index 572db686f845..20a70682cce7 100644
--- a/arch/sparc/kernel/vmlinux.lds.S
+++ b/arch/sparc/kernel/vmlinux.lds.S
@@ -144,6 +144,11 @@ SECTIONS
 		*(.pause_3insn_patch)
 		__pause_3insn_patch_end = .;
 	}
+	.sun_m7_1insn_patch : {
+		__sun_m7_1insn_patch = .;
+		*(.sun_m7_1insn_patch)
+		__sun_m7_1insn_patch_end = .;
+	}
 	.sun_m7_2insn_patch : {
 		__sun_m7_2insn_patch = .;
 		*(.sun_m7_2insn_patch)
diff --git a/arch/sparc/mm/gup.c b/arch/sparc/mm/gup.c
index cd0e32bbcb1d..579f7ae75b35 100644
--- a/arch/sparc/mm/gup.c
+++ b/arch/sparc/mm/gup.c
@@ -11,6 +11,7 @@
 #include <linux/pagemap.h>
 #include <linux/rwsem.h>
 #include <asm/pgtable.h>
+#include <asm/adi.h>
 
 /*
  * The performance critical leaf functions are made noinline otherwise gcc
@@ -157,6 +158,24 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
 	pgd_t *pgdp;
 	int nr = 0;
 
+#ifdef CONFIG_SPARC64
+	if (adi_capable()) {
+		long addr = start;
+
+		/* If userspace has passed a versioned address, kernel
+		 * will not find it in the VMAs since it does not store
+		 * the version tags in the list of VMAs. Storing version
+		 * tags in list of VMAs is impractical since they can be
+		 * changed any time from userspace without dropping into
+		 * kernel. Any address search in VMAs will be done with
+		 * non-versioned addresses. Ensure the ADI version bits
+		 * are dropped here by sign extending the last bit before
+		 * ADI bits. IOMMU does not implement version tags.
+		 */
+		addr = (addr << (long)adi_nbits()) >> (long)adi_nbits();
+		start = addr;
+	}
+#endif
 	start &= PAGE_MASK;
 	addr = start;
 	len = (unsigned long) nr_pages << PAGE_SHIFT;
@@ -187,6 +206,24 @@ int get_user_pages_fast(unsigned long start, int nr_pages, int write,
 	pgd_t *pgdp;
 	int nr = 0;
 
+#ifdef CONFIG_SPARC64
+	if (adi_capable()) {
+		long addr = start;
+
+		/* If userspace has passed a versioned address, kernel
+		 * will not find it in the VMAs since it does not store
+		 * the version tags in the list of VMAs. Storing version
+		 * tags in list of VMAs is impractical since they can be
+		 * changed any time from userspace without dropping into
+		 * kernel. Any address search in VMAs will be done with
+		 * non-versioned addresses. Ensure the ADI version bits
+		 * are dropped here by sign extending the last bit before
+		 * ADI bits. IOMMU does not implement version tags.
+		 */
+		addr = (addr << (long)adi_nbits()) >> (long)adi_nbits();
+		start = addr;
+	}
+#endif
 	start &= PAGE_MASK;
 	addr = start;
 	len = (unsigned long) nr_pages << PAGE_SHIFT;
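
The sign-extension trick used in both hunks above can be tried in a
standalone sketch. Like the kernel code, it relies on arithmetic right
shift of a signed long; with the assumed 4 version bits, shifting left
and back replicates bit 59 into the top bits, which drops a user tag
while leaving untagged addresses unchanged:

#include <assert.h>

static long strip_adi_tag(long addr, unsigned int adi_nbits)
{
	return (addr << adi_nbits) >> adi_nbits;
}

int main(void)
{
	/* version 0xa in bits 63-60 of a hypothetical user address */
	long tagged = (long)0xa000000010000000UL;

	assert(strip_adi_tag(tagged, 4) == 0x10000000L);
	return 0;
}
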
diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c
index 88855e383b34..487ed1f1ce86 100644
--- a/arch/sparc/mm/hugetlbpage.c
+++ b/arch/sparc/mm/hugetlbpage.c
@@ -177,8 +177,20 @@ pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,
 			 struct page *page, int writeable)
 {
 	unsigned int shift = huge_page_shift(hstate_vma(vma));
+	pte_t pte;
 
-	return hugepage_shift_to_tte(entry, shift);
+	pte = hugepage_shift_to_tte(entry, shift);
+
+#ifdef CONFIG_SPARC64
+	/* If this vma has ADI enabled on it, turn on TTE.mcd
+	 */
+	if (vma->vm_flags & VM_SPARC_ADI)
+		return pte_mkmcd(pte);
+	else
+		return pte_mknotmcd(pte);
+#else
+	return pte;
+#endif
 }
 
 static unsigned int sun4v_huge_tte_to_shift(pte_t entry)
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 3c40ebd50f92..94854e7e833e 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -3087,3 +3087,36 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 		do_flush_tlb_kernel_range(start, end);
 	}
 }
+
+void copy_user_highpage(struct page *to, struct page *from,
+	unsigned long vaddr, struct vm_area_struct *vma)
+{
+	char *vfrom, *vto;
+
+	vfrom = kmap_atomic(from);
+	vto = kmap_atomic(to);
+	copy_user_page(vto, vfrom, vaddr, to);
+	kunmap_atomic(vto);
+	kunmap_atomic(vfrom);
+
+	/* If this page has ADI enabled, copy over any ADI tags
+	 * as well
+	 */
+	if (vma->vm_flags & VM_SPARC_ADI) {
+		unsigned long pfrom, pto, i, adi_tag;
+
+		pfrom = page_to_phys(from);
+		pto = page_to_phys(to);
+
+		for (i = pfrom; i < (pfrom + PAGE_SIZE); i += adi_blksize()) {
+			asm volatile("ldxa [%1] %2, %0\n\t"
+					: "=r" (adi_tag)
+					:  "r" (i), "i" (ASI_MCD_REAL));
+			asm volatile("stxa %0, [%1] %2\n\t"
+					:
+					: "r" (adi_tag), "r" (pto),
+					  "i" (ASI_MCD_REAL));
+			pto += adi_blksize();
+		}
+	}
+}
diff --git a/arch/sparc/mm/tsb.c b/arch/sparc/mm/tsb.c
index 0d4b998c7d7b..6518cc42056b 100644
--- a/arch/sparc/mm/tsb.c
+++ b/arch/sparc/mm/tsb.c
@@ -545,6 +545,9 @@ int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
 
 	mm->context.sparc64_ctx_val = 0UL;
 
+	mm->context.tag_store = NULL;
+	spin_lock_init(&mm->context.tag_lock);
+
 #if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE)
 	/* We reset them to zero because the fork() page copying
 	 * will re-increment the counters as the parent PTEs are
@@ -610,4 +613,22 @@ void destroy_context(struct mm_struct *mm)
 	}
 
 	spin_unlock_irqrestore(&ctx_alloc_lock, flags);
+
+	/* If ADI tag storage was allocated for this task, free it */
+	if (mm->context.tag_store) {
+		tag_storage_desc_t *tag_desc;
+		unsigned long max_desc;
+		unsigned char *tags;
+
+		tag_desc = mm->context.tag_store;
+		max_desc = PAGE_SIZE/sizeof(tag_storage_desc_t);
+		for (i = 0; i < max_desc; i++) {
+			tags = tag_desc->tags;
+			tag_desc->tags = NULL;
+			kfree(tags);
+			tag_desc++;
+		}
+		kfree(mm->context.tag_store);
+		mm->context.tag_store = NULL;
+	}
 }
diff --git a/include/linux/mm.h b/include/linux/mm.h
index b7aa3932e6d4..c0972114036f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -231,6 +231,9 @@ extern unsigned int kobjsize(const void *objp);
 # define VM_GROWSUP	VM_ARCH_1
 #elif defined(CONFIG_IA64)
 # define VM_GROWSUP	VM_ARCH_1
+#elif defined(CONFIG_SPARC64)
+# define VM_SPARC_ADI	VM_ARCH_1	/* Uses ADI tag for access control */
+# define VM_ARCH_CLEAR	VM_SPARC_ADI
 #elif !defined(CONFIG_MMU)
 # define VM_MAPPED_COPY	VM_ARCH_1	/* T if mapped copy of data (nommu mmap) */
 #endif
diff --git a/mm/ksm.c b/mm/ksm.c
index 216184af0e19..bb82399816ef 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1797,6 +1797,10 @@ int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
 		if (*vm_flags & VM_SAO)
 			return 0;
 #endif
+#ifdef VM_SPARC_ADI
+		if (*vm_flags & VM_SPARC_ADI)
+			return 0;
+#endif
 
 		if (!test_bit(MMF_VM_MERGEABLE, &mm->flags)) {
 			err = __ksm_enter(mm);
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 86+ messages in thread

* [PATCH v7 9/9] sparc64: Add support for ADI (Application Data Integrity)
@ 2017-08-09 21:26   ` Khalid Aziz
  0 siblings, 0 replies; 86+ messages in thread
From: Khalid Aziz @ 2017-08-09 21:26 UTC (permalink / raw)
  To: davem, dave.hansen
  Cc: Khalid Aziz, corbet, bob.picco, steven.sistare, pasha.tatashin,
	mike.kravetz, mingo, nitin.m.gupta, kirill.shutemov,
	tom.hromatka, eric.saint.etienne, allen.pais, cmetcalf, akpm,
	geert, tklauser, atish.patra, vijay.ac.kumar, peterz, mhocko,
	jack, lstoakes, hughd, thomas.tai, paul.gortmaker, ross.zwisler,
	dave.jiang, willy, ying.huang, zhongjiang, minchan,
	vegard.nossum, imbrenda, aneesh.kumar, aarcange, linux-doc,
	linux-kernel, sparclinux, linux-mm, Khalid Aziz

ADI is a new feature supported on SPARC M7 and newer processors to allow
hardware to catch rogue accesses to memory. ADI is supported for data
fetches only and not instruction fetches. An app can enable ADI on its
data pages, set version tags on them and use versioned addresses to
access the data pages. Upper bits of the address contain the version
tag. On M7 processors, upper four bits (bits 63-60) contain the version
tag. If a rogue app attempts to access ADI enabled data pages, its
access is blocked and processor generates an exception. Please see
Documentation/sparc/adi.txt for further details.
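
For illustration, an application builds such a versioned address by
placing the tag in the upper bits of an ordinary pointer. A
hypothetical helper, assuming adi_nbits version bits at the top of the
64-bit VA (4 bits, 63-60, on M7); the documentation sample below does
the equivalent shift arithmetic inline:

static void *versioned_addr(void *addr, unsigned long version,
			    unsigned long adi_nbits)
{
	unsigned long va = (unsigned long)addr;

	va &= (1UL << (64 - adi_nbits)) - 1;	/* drop any existing tag */
	va |= version << (64 - adi_nbits);	/* insert the new tag */
	return (void *)va;
}
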

This patch extends mprotect to enable ADI (TSTATE.mcde), enable/disable
MCD (Memory Corruption Detection) on selected memory ranges, enable
TTE.mcd in PTEs, return ADI parameters to userspace and save/restore ADI
version tags on page swap out/in or migration. ADI is not enabled by
default for any task. A task must explicitly enable ADI on a memory
range and set version tag for ADI to be effective for the task.

Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
Cc: Khalid Aziz <khalid@gonehiking.org>
---
v7:
	- Enhanced arch_validate_prot() to enable ADI only on writable
	  addresses backed by physical RAM
	- Added support for saving/restoring ADI tags for each ADI
	  block size address range on a page on swap in/out
	- Added code to copy ADI tags on COW
	- Updated values for auxiliary vectors to not conflict with
	  values on other architectures to avoid conflict in glibc. glibc
	  consolidates all auxiliary vectors into its headers and
	  duplicate values in consolidated header are problematic
	- Disable same page merging on ADI enabled pages since ADI tags
	  may not match on pages with identical data
	- Broke the patch up further into smaller patches

v6:
	- Eliminated instructions to read and write PSTATE as well as
	  MCDPER and PMCDPER on every access to userspace addresses
	  by setting PSTATE and PMCDPER correctly upon entry into
	  kernel. PSTATE.mcde and PMCDPER are set upon entry into
	  kernel when running on an M7 processor. PSTATE.mcde being
	  set only affects memory accesses that have TTE.mcd set.
	  PMCDPER being set only affects writes to memory addresses
	  that have TTE.mcd set. This ensures any faults caused by
	  ADI tag mismatch on a write are exposed before kernel returns
	  to userspace.

v5:
	- Fixed indentation issues and instructions in assembly code
	- Removed CONFIG_SPARC64 from mdesc.c
	- Changed to maintain state of MCDPER register in thread info
	  flags as opposed to in mm context. MCDPER is a per-thread
	  state and belongs in thread info flag as opposed to mm context
	  which is shared across threads. Added comments to clarify this
	  is a lazily maintained state and must be updated on context
	  switch and copy_process()
	- Updated code to use the new arch_do_swap_page() and
	  arch_unmap_one() functions

v4:
	- Broke patch up into smaller patches

v3:
	- Removed CONFIG_SPARC_ADI
	- Replaced prctl commands with mprotect
	- Added auxiliary vectors for ADI parameters
	- Enabled ADI for swappable pages

v2:
	- Fixed a build error

 Documentation/sparc/adi.txt             | 272 +++++++++++++++++++++++++++++++
 arch/sparc/include/asm/mman.h           |  72 ++++++++-
 arch/sparc/include/asm/mmu_64.h         |  17 ++
 arch/sparc/include/asm/mmu_context_64.h |  43 +++++
 arch/sparc/include/asm/page_64.h        |   4 +
 arch/sparc/include/asm/pgtable_64.h     |  46 ++++++
 arch/sparc/include/asm/thread_info_64.h |   2 +-
 arch/sparc/include/asm/trap_block.h     |   2 +
 arch/sparc/include/uapi/asm/mman.h      |   2 +
 arch/sparc/kernel/adi_64.c              | 277 ++++++++++++++++++++++++++++++++
 arch/sparc/kernel/etrap_64.S            |  28 +++-
 arch/sparc/kernel/process_64.c          |  25 +++
 arch/sparc/kernel/setup_64.c            |  11 +-
 arch/sparc/kernel/vmlinux.lds.S         |   5 +
 arch/sparc/mm/gup.c                     |  37 +++++
 arch/sparc/mm/hugetlbpage.c             |  14 +-
 arch/sparc/mm/init_64.c                 |  33 ++++
 arch/sparc/mm/tsb.c                     |  21 +++
 include/linux/mm.h                      |   3 +
 mm/ksm.c                                |   4 +
 20 files changed, 913 insertions(+), 5 deletions(-)
 create mode 100644 Documentation/sparc/adi.txt

diff --git a/Documentation/sparc/adi.txt b/Documentation/sparc/adi.txt
new file mode 100644
index 000000000000..383bc65fec1e
--- /dev/null
+++ b/Documentation/sparc/adi.txt
@@ -0,0 +1,272 @@
+Application Data Integrity (ADI)
+================================
+
+SPARC M7 processor adds the Application Data Integrity (ADI) feature.
+ADI allows a task to set version tags on any subset of its address
+space. Once ADI is enabled and version tags are set for ranges of
+address space of a task, the processor will compare the tag in pointers
+to memory in these ranges to the version set by the application
+previously. Access to memory is granted only if the tag in given pointer
+matches the tag set by the application. In case of mismatch, processor
+raises an exception.
+
+Following steps must be taken by a task to enable ADI fully:
+
+1. Set the user mode PSTATE.mcde bit. This acts as master switch for
+   the task's entire address space to enable/disable ADI for the task.
+
+2. Set TTE.mcd bit on any TLB entries that correspond to the range of
+   addresses ADI is being enabled on. MMU checks the version tag only
+   on the pages that have TTE.mcd bit set.
+
+3. Set the version tag for virtual addresses using stxa instruction
+   and one of the MCD specific ASIs. Each stxa instruction sets the
+   given tag for one ADI block size worth of bytes. This step must be
+   repeated for every block in the page to set tags for the entire page.
+
+ADI block size for the platform is provided by the hypervisor to kernel
+in machine description tables. Hypervisor also provides the number of
+top bits in the virtual address that specify the version tag.  Once
+version tag has been set for a memory location, the tag is stored in the
+physical memory and the same tag must be present in the ADI version tag
+bits of the virtual address being presented to the MMU. For example on
+SPARC M7 processor, MMU uses bits 63-60 for version tags and ADI block
+size is same as cacheline size which is 64 bytes. A task that sets ADI
+version to, say 10, on a range of memory, must access that memory using
+virtual addresses that contain 0xa in bits 63-60.
+
+ADI is enabled on a set of pages using mprotect() with PROT_ADI flag.
+When ADI is enabled on a set of pages by a task for the first time,
+kernel sets the PSTATE.mcde bit for the task. Version tags for memory
+addresses are set with an stxa instruction on the addresses using
+ASI_MCD_PRIMARY or ASI_MCD_ST_BLKINIT_PRIMARY. ADI block size is
+provided by the hypervisor to the kernel.  Kernel returns the value of
+ADI block size to userspace using auxiliary vector along with other ADI
+info. Following auxiliary vectors are provided by the kernel:
+
+	AT_ADI_BLKSZ	ADI block size. This is the granularity and
+			alignment, in bytes, of ADI versioning.
+	AT_ADI_NBITS	Number of ADI version bits in the VA
+
+
+IMPORTANT NOTES:
+
+- Version tag values of 0x0 and 0xf are reserved.
+
+- Version tags are set on virtual addresses from userspace even though
+  tags are stored in physical memory. Tags are set on a physical page
+  after it has been allocated to a task and a pte has been created for
+  it.
+
+- When a task frees a memory page it had set version tags on, the page
+  goes back to free page pool. When this page is re-allocated to a task,
+  kernel clears the page using block initialization ASI which clears the
+  version tags as well for the page. If a page allocated to a task is
+  freed and allocated back to the same task, old version tags set by the
+  task on that page will no longer be present.
+
+- Kernel does not set any tags for user pages and it is entirely a
+  task's responsibility to set any version tags. Kernel does ensure the
+  version tags are preserved if a page is swapped out to the disk and
+  swapped back in. It also preserves the version tags if a page is
+  migrated.
+
+- ADI works for any size pages. A userspace task need not be aware of
+  page size when using ADI. It can simply select a virtual address
+  range, enable ADI on the range using mprotect() and set version tags
+  for the entire range. mprotect() ensures range is aligned to page size
+  and is a multiple of page size.
+
+
+
+ADI related traps
+-----------------
+
+With ADI enabled, following new traps may occur:
+
+Disrupting memory corruption
+
+	When a store accesses a memory location that has TTE.mcd=1,
+	the task is running with ADI enabled (PSTATE.mcde=1), and the ADI
+	tag in the address used (bits 63:60) does not match the tag set on
+	the corresponding cacheline, a memory corruption trap occurs. By
+	default, it is a disrupting trap and is sent to the hypervisor
+	first. Hypervisor creates a sun4v error report and sends a
+	resumable error (TT=0x7e) trap to the kernel. The kernel sends
+	a SIGSEGV to the task that resulted in this trap with the following
+	info:
+
+		siginfo.si_signo = SIGSEGV;
+		siginfo.errno = 0;
+		siginfo.si_code = SEGV_ADIDERR;
+		siginfo.si_addr = addr; /* PC where first mismatch occurred */
+		siginfo.si_trapno = 0;
+
+
+Precise memory corruption
+
+	When a store accesses a memory location that has TTE.mcd=1,
+	the task is running with ADI enabled (PSTATE.mcde=1), and the ADI
+	tag in the address used (bits 63:60) does not match the tag set on
+	the corresponding cacheline, a memory corruption trap occurs. If
+	MCD precise exception is enabled (MCDPERR=1), a precise
+	exception is sent to the kernel with TT=0x1a. The kernel sends
+	a SIGSEGV to the task that resulted in this trap with the following
+	info:
+
+		siginfo.si_signo = SIGSEGV;
+		siginfo.errno = 0;
+		siginfo.si_code = SEGV_ADIPERR;
+		siginfo.si_addr = addr;	/* address that caused trap */
+		siginfo.si_trapno = 0;
+
+	NOTE: ADI tag mismatch on a load always results in precise trap.
+
+
+MCD disabled
+
+	When a task has not enabled ADI and attempts to set ADI version
+	on a memory address, processor sends an MCD disabled trap. This
+	trap is handled by hypervisor first and the hypervisor vectors this
+	trap through to the kernel as Data Access Exception trap with
+	fault type set to 0xa (invalid ASI). When this occurs, the kernel
+	sends the task SIGSEGV signal with following info:
+
+		siginfo.si_signo = SIGSEGV;
+		siginfo.errno = 0;
+		siginfo.si_code = SEGV_ACCADI;
+		siginfo.si_addr = addr;	/* address that caused trap */
+		siginfo.si_trapno = 0;
+
+
+Sample program to use ADI
+-------------------------
+
+Following sample program is meant to illustrate how to use the ADI
+functionality.
+
+#include <unistd.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <elf.h>
+#include <sys/ipc.h>
+#include <sys/shm.h>
+#include <sys/mman.h>
+#include <asm/asi.h>
+
+#ifndef AT_ADI_BLKSZ
+#define AT_ADI_BLKSZ	48
+#endif
+#ifndef AT_ADI_NBITS
+#define AT_ADI_NBITS	49
+#endif
+
+#ifndef PROT_ADI
+#define PROT_ADI	0x10
+#endif
+
+#define BUFFER_SIZE     32*1024*1024UL
+
+int main(int argc, char* argv[], char* envp[])
+{
+        unsigned long i, mcde, adi_blksz, adi_nbits;
+        char *shmaddr, *tmp_addr, *end, *veraddr, *clraddr;
+        int shmid, version;
+	Elf64_auxv_t *auxv;
+
+	adi_blksz = 0;
+
+	while(*envp++ != NULL);
+	for (auxv = (Elf64_auxv_t *)envp; auxv->a_type != AT_NULL; auxv++) {
+		switch (auxv->a_type) {
+		case AT_ADI_BLKSZ:
+			adi_blksz = auxv->a_un.a_val;
+			break;
+		case AT_ADI_NBITS:
+			adi_nbits = auxv->a_un.a_val;
+			break;
+		}
+	}
+	if (adi_blksz == 0) {
+		fprintf(stderr, "Oops! ADI is not supported\n");
+		exit(1);
+	}
+
+	printf("ADI capabilities:\n");
+	printf("\tBlock size = %ld\n", adi_blksz);
+	printf("\tNumber of bits = %ld\n", adi_nbits);
+
+        if ((shmid = shmget(2, BUFFER_SIZE,
+                                IPC_CREAT | SHM_R | SHM_W)) < 0) {
+                perror("shmget failed");
+                exit(1);
+        }
+
+        shmaddr = shmat(shmid, NULL, 0);
+        if (shmaddr == (char *)-1) {
+                perror("shm attach failed");
+                shmctl(shmid, IPC_RMID, NULL);
+                exit(1);
+        }
+
+	if (mprotect(shmaddr, BUFFER_SIZE, PROT_READ|PROT_WRITE|PROT_ADI)) {
+		perror("mprotect failed");
+		goto err_out;
+	}
+
+        /* Set the ADI version tag on the shm segment
+         */
+        version = 10;
+        tmp_addr = shmaddr;
+        end = shmaddr + BUFFER_SIZE;
+        while (tmp_addr < end) {
+                asm volatile(
+                        "stxa %1, [%0]0x90\n\t"
+                        :
+                        : "r" (tmp_addr), "r" (version));
+                tmp_addr += adi_blksz;
+        }
+	asm volatile("membar #Sync\n\t");
+
+        /* Create a versioned address from the normal address by placing
+	 * version tag in the upper adi_nbits bits
+         */
+        tmp_addr = (void *) ((unsigned long)shmaddr << adi_nbits);
+        tmp_addr = (void *) ((unsigned long)tmp_addr >> adi_nbits);
+        veraddr = (void *) (((unsigned long)version << (64-adi_nbits))
+                        | (unsigned long)tmp_addr);
+
+        printf("Starting the writes:\n");
+        for (i = 0; i < BUFFER_SIZE; i++) {
+                veraddr[i] = (char)(i);
+                if (!(i % (1024 * 1024)))
+                        printf(".");
+        }
+        printf("\n");
+
+        printf("Verifying data...");
+	fflush(stdout);
+        for (i = 0; i < BUFFER_SIZE; i++)
+                if (veraddr[i] != (char)i)
+                        printf("\nIndex %lu mismatched\n", i);
+        printf("Done.\n");
+
+        /* Disable ADI and clean up
+         */
+	if (mprotect(shmaddr, BUFFER_SIZE, PROT_READ|PROT_WRITE)) {
+		perror("mprotect failed");
+		goto err_out;
+	}
+
+        if (shmdt((const void *)shmaddr) != 0)
+                perror("Detach failure");
+        shmctl(shmid, IPC_RMID, NULL);
+
+        exit(0);
+
+err_out:
+        if (shmdt((const void *)shmaddr) != 0)
+                perror("Detach failure");
+        shmctl(shmid, IPC_RMID, NULL);
+        exit(1);
+}
diff --git a/arch/sparc/include/asm/mman.h b/arch/sparc/include/asm/mman.h
index 59bb5938d852..b799796ad963 100644
--- a/arch/sparc/include/asm/mman.h
+++ b/arch/sparc/include/asm/mman.h
@@ -6,5 +6,75 @@
 #ifndef __ASSEMBLY__
 #define arch_mmap_check(addr,len,flags)	sparc_mmap_check(addr,len)
 int sparc_mmap_check(unsigned long addr, unsigned long len);
-#endif
+
+#ifdef CONFIG_SPARC64
+#include <asm/adi_64.h>
+
+#define arch_calc_vm_prot_bits(prot, pkey) sparc_calc_vm_prot_bits(prot)
+static inline unsigned long sparc_calc_vm_prot_bits(unsigned long prot)
+{
+	if (prot & PROT_ADI) {
+		struct pt_regs *regs;
+
+		if (!current->mm->context.adi) {
+			regs = task_pt_regs(current);
+			regs->tstate |= TSTATE_MCDE;
+			current->mm->context.adi = true;
+		}
+		return VM_SPARC_ADI;
+	} else {
+		return 0;
+	}
+}
+
+#define arch_vm_get_page_prot(vm_flags) sparc_vm_get_page_prot(vm_flags)
+static inline pgprot_t sparc_vm_get_page_prot(unsigned long vm_flags)
+{
+	return (vm_flags & VM_SPARC_ADI) ? __pgprot(_PAGE_MCD_4V) : __pgprot(0);
+}
+
+#define arch_validate_prot(prot, addr) sparc_validate_prot(prot, addr)
+static inline int sparc_validate_prot(unsigned long prot, unsigned long addr)
+{
+	if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_ADI))
+		return 0;
+	if (prot & PROT_ADI) {
+		if (!adi_capable())
+			return 0;
+
+		/* ADI tags can not be set on read-only memory, so it makes
+		 * sense to enable ADI on writable memory only.
+		 */
+		if (!(prot & PROT_WRITE))
+			return 0;
+
+		if (addr) {
+			struct vm_area_struct *vma;
+
+			vma = find_vma(current->mm, addr);
+			if (vma) {
+				/* ADI can not be enabled on PFN
+				 * mapped pages
+				 */
+				if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
+					return 0;
+
+				/* Mergeable pages can become unmergeable
+				 * if ADI is enabled on them even if they
+				 * have identical data on them. This can be
+				 * because ADI enabled pages with identical
+				 * data may still not have identical ADI
+				 * tags on them. Disallow ADI on mergeable
+				 * pages.
+				 */
+				if (vma->vm_flags & VM_MERGEABLE)
+					return 0;
+			}
+		}
+	}
+	return 1;
+}
+#endif /* CONFIG_SPARC64 */
+
+#endif /* __ASSEMBLY__ */
 #endif /* __SPARC_MMAN_H__ */
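
Under the rules in sparc_validate_prot() above, PROT_ADI without
PROT_WRITE is rejected, so the failure is observable from userspace. A
small test sketch (assumes a kernel with this series applied and
defines PROT_ADI locally, as the documentation sample does; mprotect()
is expected to fail with EINVAL on the read-only range):

#include <errno.h>
#include <stdio.h>
#include <sys/mman.h>

#ifndef PROT_ADI
#define PROT_ADI	0x10
#endif

int main(void)
{
	size_t len = 8192;
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;
	/* no PROT_WRITE here, so enabling ADI should be refused */
	if (mprotect(p, len, PROT_READ | PROT_ADI) == -1)
		printf("rejected as expected, errno=%d\n", errno);
	munmap(p, len);
	return 0;
}
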
diff --git a/arch/sparc/include/asm/mmu_64.h b/arch/sparc/include/asm/mmu_64.h
index 83b36a5371ff..a65d51ebe00b 100644
--- a/arch/sparc/include/asm/mmu_64.h
+++ b/arch/sparc/include/asm/mmu_64.h
@@ -89,6 +89,20 @@ struct tsb_config {
 #define MM_NUM_TSBS	1
 #endif
 
+/* ADI tags are stored when a page is swapped out and the storage for
+ * tags is allocated dynamically. There is a tag storage descriptor
+ * associated with each set of tag storage pages. Tag storage descriptors
+ * are allocated dynamically. Since kernel will allocate a full page for
+ * each tag storage descriptor, we can store up to
+ * PAGE_SIZE/sizeof(tag storage descriptor) descriptors on that page.
+ */
+typedef struct {
+	unsigned long	start;		/* Start address for this tag storage */
+	unsigned long	end;		/* Last address for tag storage */
+	unsigned char	*tags;		/* Where the tags are */
+	unsigned long	tag_users;	/* number of references to descriptor */
+} tag_storage_desc_t;
+
 typedef struct {
 	spinlock_t		lock;
 	unsigned long		sparc64_ctx_val;
@@ -96,6 +110,9 @@ typedef struct {
 	unsigned long		thp_pte_count;
 	struct tsb_config	tsb_block[MM_NUM_TSBS];
 	struct hv_tsb_descr	tsb_descr[MM_NUM_TSBS];
+	bool			adi;
+	tag_storage_desc_t	*tag_store;
+	spinlock_t		tag_lock;
 } mm_context_t;
 
 #endif /* !__ASSEMBLY__ */
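
With the four 8-byte fields above, sizeof(tag_storage_desc_t) comes to
32 bytes, so the single page the kernel allocates for descriptors holds
256 of them. A sketch of that arithmetic (both the 8K page size and the
LP64 field sizes are assumptions of this sketch):

#include <stdio.h>

struct tag_storage_desc {	/* mirrors tag_storage_desc_t above */
	unsigned long	start;
	unsigned long	end;
	unsigned char	*tags;
	unsigned long	tag_users;
};

int main(void)
{
	unsigned long page_size = 8192;	/* assumed sparc64 PAGE_SIZE */

	/* max_desc, as computed in find_tag_store()/alloc_tag_store() */
	printf("%lu\n", page_size / sizeof(struct tag_storage_desc));
	return 0;				/* prints 256 */
}
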
diff --git a/arch/sparc/include/asm/mmu_context_64.h b/arch/sparc/include/asm/mmu_context_64.h
index 2cddcda4f85f..68de059551f9 100644
--- a/arch/sparc/include/asm/mmu_context_64.h
+++ b/arch/sparc/include/asm/mmu_context_64.h
@@ -9,6 +9,7 @@
 #include <linux/mm_types.h>
 
 #include <asm/spitfire.h>
+#include <asm/adi_64.h>
 #include <asm-generic/mm_hooks.h>
 
 static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
@@ -129,6 +130,48 @@ static inline void switch_mm(struct mm_struct *old_mm, struct mm_struct *mm, str
 
 #define deactivate_mm(tsk,mm)	do { } while (0)
 #define activate_mm(active_mm, mm) switch_mm(active_mm, mm, NULL)
+
+#define  __HAVE_ARCH_START_CONTEXT_SWITCH
+static inline void arch_start_context_switch(struct task_struct *prev)
+{
+	/* Save the current state of MCDPER register for the process
+	 * we are switching from
+	 */
+	if (adi_capable()) {
+		register unsigned long tmp_mcdper;
+
+		__asm__ __volatile__(
+			".word 0x83438000\n\t"	/* rd  %mcdper, %g1 */
+			"mov %%g1, %0\n\t"
+			: "=r" (tmp_mcdper)
+			:
+			: "g1");
+		if (tmp_mcdper)
+			set_tsk_thread_flag(prev, TIF_MCDPER);
+		else
+			clear_tsk_thread_flag(prev, TIF_MCDPER);
+	}
+}
+
+#define finish_arch_post_lock_switch	finish_arch_post_lock_switch
+static inline void finish_arch_post_lock_switch(void)
+{
+	/* Restore the state of MCDPER register for the new process
+	 * just switched to.
+	 */
+	if (adi_capable()) {
+		register unsigned long tmp_mcdper;
+
+		tmp_mcdper = test_thread_flag(TIF_MCDPER);
+		__asm__ __volatile__(
+			"mov %0, %%g1\n\t"
+			".word 0x9d800001\n\t"	/* wr %g0, %g1, %mcdper" */
+			:
+			: "ir" (tmp_mcdper)
+			: "g1");
+	}
+}
+
 #endif /* !(__ASSEMBLY__) */
 
 #endif /* !(__SPARC64_MMU_CONTEXT_H) */
diff --git a/arch/sparc/include/asm/page_64.h b/arch/sparc/include/asm/page_64.h
index 5961b2d8398a..dc582c5611f8 100644
--- a/arch/sparc/include/asm/page_64.h
+++ b/arch/sparc/include/asm/page_64.h
@@ -46,6 +46,10 @@ struct page;
 void clear_user_page(void *addr, unsigned long vaddr, struct page *page);
 #define copy_page(X,Y)	memcpy((void *)(X), (void *)(Y), PAGE_SIZE)
 void copy_user_page(void *to, void *from, unsigned long vaddr, struct page *topage);
+#define __HAVE_ARCH_COPY_USER_HIGHPAGE
+struct vm_area_struct;
+void copy_user_highpage(struct page *to, struct page *from,
+			unsigned long vaddr, struct vm_area_struct *vma);
 
 /* Unlike sparc32, sparc64's parameter passing API is more
  * sane in that structures which as small enough are passed
diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index af045061f41e..51da342c392d 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -18,6 +18,7 @@
 #include <asm/types.h>
 #include <asm/spitfire.h>
 #include <asm/asi.h>
+#include <asm/adi.h>
 #include <asm/page.h>
 #include <asm/processor.h>
 
@@ -570,6 +571,18 @@ static inline pte_t pte_mkspecial(pte_t pte)
 	return pte;
 }
 
+static inline pte_t pte_mkmcd(pte_t pte)
+{
+	pte_val(pte) |= _PAGE_MCD_4V;
+	return pte;
+}
+
+static inline pte_t pte_mknotmcd(pte_t pte)
+{
+	pte_val(pte) &= ~_PAGE_MCD_4V;
+	return pte;
+}
+
 static inline unsigned long pte_young(pte_t pte)
 {
 	unsigned long mask;
@@ -1001,6 +1014,39 @@ int page_in_phys_avail(unsigned long paddr);
 int remap_pfn_range(struct vm_area_struct *, unsigned long, unsigned long,
 		    unsigned long, pgprot_t);
 
+void adi_restore_tags(struct mm_struct *mm, struct vm_area_struct *vma,
+		      unsigned long addr, pte_t pte);
+
+int adi_save_tags(struct mm_struct *mm, struct vm_area_struct *vma,
+		  unsigned long addr, pte_t oldpte);
+
+#define __HAVE_ARCH_DO_SWAP_PAGE
+static inline void arch_do_swap_page(struct mm_struct *mm,
+				     struct vm_area_struct *vma,
+				     unsigned long addr,
+				     pte_t pte, pte_t oldpte)
+{
+	/* If this is a new page being mapped in, there can be no
+	 * ADI tags stored away for this page. Skip looking for
+	 * stored tags
+	 */
+	if (pte_none(oldpte))
+		return;
+
+	if (adi_state.enabled && (pte_val(pte) & _PAGE_MCD_4V))
+		adi_restore_tags(mm, vma, addr, pte);
+}
+
+#define __HAVE_ARCH_UNMAP_ONE
+static inline int arch_unmap_one(struct mm_struct *mm,
+				 struct vm_area_struct *vma,
+				 unsigned long addr, pte_t oldpte)
+{
+	if (adi_state.enabled && (pte_val(oldpte) & _PAGE_MCD_4V))
+		return adi_save_tags(mm, vma, addr, oldpte);
+	return 0;
+}
+
 static inline int io_remap_pfn_range(struct vm_area_struct *vma,
 				     unsigned long from, unsigned long pfn,
 				     unsigned long size, pgprot_t prot)
diff --git a/arch/sparc/include/asm/thread_info_64.h b/arch/sparc/include/asm/thread_info_64.h
index 38a24f257b85..9c04acb1f9af 100644
--- a/arch/sparc/include/asm/thread_info_64.h
+++ b/arch/sparc/include/asm/thread_info_64.h
@@ -190,7 +190,7 @@ register struct thread_info *current_thread_info_reg asm("g6");
  *       in using in assembly, else we can't use the mask as
  *       an immediate value in instructions such as andcc.
  */
-/* flag bit 12 is available */
+#define TIF_MCDPER		12	/* Precise MCD exception */
 #define TIF_MEMDIE		13	/* is terminating due to OOM killer */
 #define TIF_POLLING_NRFLAG	14
 
diff --git a/arch/sparc/include/asm/trap_block.h b/arch/sparc/include/asm/trap_block.h
index ec9c04de3664..b283e940671a 100644
--- a/arch/sparc/include/asm/trap_block.h
+++ b/arch/sparc/include/asm/trap_block.h
@@ -72,6 +72,8 @@ struct sun4v_1insn_patch_entry {
 };
 extern struct sun4v_1insn_patch_entry __sun4v_1insn_patch,
 	__sun4v_1insn_patch_end;
+extern struct sun4v_1insn_patch_entry __sun_m7_1insn_patch,
+	__sun_m7_1insn_patch_end;
 
 struct sun4v_2insn_patch_entry {
 	unsigned int	addr;
diff --git a/arch/sparc/include/uapi/asm/mman.h b/arch/sparc/include/uapi/asm/mman.h
index 9765896ecb2c..a72c03397345 100644
--- a/arch/sparc/include/uapi/asm/mman.h
+++ b/arch/sparc/include/uapi/asm/mman.h
@@ -5,6 +5,8 @@
 
 /* SunOS'ified... */
 
+#define PROT_ADI	0x10		/* ADI enabled */
+
 #define MAP_RENAME      MAP_ANONYMOUS   /* In SunOS terminology */
 #define MAP_NORESERVE   0x40            /* don't reserve swap pages */
 #define MAP_INHERIT     0x80            /* SunOS doesn't do this, but... */
diff --git a/arch/sparc/kernel/adi_64.c b/arch/sparc/kernel/adi_64.c
index 9fbb5dd4a7bf..83c1e36ae5fa 100644
--- a/arch/sparc/kernel/adi_64.c
+++ b/arch/sparc/kernel/adi_64.c
@@ -7,10 +7,24 @@
  * This work is licensed under the terms of the GNU GPL, version 2.
  */
 #include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/mm_types.h>
 #include <asm/mdesc.h>
 #include <asm/adi_64.h>
+#include <asm/mmu_64.h>
+#include <asm/pgtable_64.h>
+
+/* Each page of storage for ADI tags can accommodate tags for 128
+ * pages. When ADI enabled pages are being swapped out, it would be
+ * prudent to allocate at least enough tag storage space to accommodate
+ * SWAPFILE_CLUSTER number of pages. Allocate enough tag storage to
+ * store tags for four SWAPFILE_CLUSTER pages to reduce the need for
+ * further allocations for the same vma.
+ */
+#define TAG_STORAGE_PAGES	8
 
 struct adi_config adi_state;
+EXPORT_SYMBOL(adi_state);
 
 /* mdesc_adi_init() : Parse machine description provided by the
  *	hypervisor to detect ADI capabilities
@@ -78,6 +92,19 @@ void __init mdesc_adi_init(void)
 		goto adi_not_found;
 	adi_state.caps.nbits = *val;
 
+	/* Some of the code to support swapping ADI tags is written
+	 * with the assumption that two ADI tags can fit inside one
+	 * byte. If this assumption is broken by a future architecture
+	 * change, that code will have to be revisited. If that were to
+	 * happen, disable ADI support so we do not get unpredictable
+	 * results with programs trying to use ADI and their pages
+	 * getting swapped out.
+	 */
+	if (adi_state.caps.nbits > 4) {
+		pr_warn("WARNING: ADI tag size >4 on this platform. Disabling ADI support\n");
+		adi_state.enabled = false;
+	}
+
 	mdesc_release(hp);
 	return;
 
@@ -88,3 +115,253 @@ void __init mdesc_adi_init(void)
 	if (hp)
 		mdesc_release(hp);
 }
+
+tag_storage_desc_t *find_tag_store(struct mm_struct *mm,
+				   struct vm_area_struct *vma,
+				   unsigned long addr)
+{
+	tag_storage_desc_t *tag_desc = NULL;
+	unsigned long i, max_desc, flags;
+
+	/* Check if this vma already has tag storage descriptor
+	 * allocated for it.
+	 */
+	max_desc = PAGE_SIZE/sizeof(tag_storage_desc_t);
+	if (mm->context.tag_store) {
+		tag_desc = mm->context.tag_store;
+		spin_lock_irqsave(&mm->context.tag_lock, flags);
+		for (i = 0; i < max_desc; i++) {
+			if ((addr >= tag_desc->start) &&
+			    ((addr + PAGE_SIZE - 1) <= tag_desc->end))
+				break;
+			tag_desc++;
+		}
+		spin_unlock_irqrestore(&mm->context.tag_lock, flags);
+
+		/* If no matching entries were found, this must be a
+		 * freshly allocated page
+		 */
+		if (i >= max_desc)
+			tag_desc = NULL;
+	}
+
+	return tag_desc;
+}
+
+tag_storage_desc_t *alloc_tag_store(struct mm_struct *mm,
+				    struct vm_area_struct *vma,
+				    unsigned long addr)
+{
+	unsigned char *tags;
+	unsigned long i, size, max_desc, flags;
+	tag_storage_desc_t *tag_desc, *open_desc;
+	unsigned long end_addr, hole_start, hole_end;
+
+	max_desc = PAGE_SIZE/sizeof(tag_storage_desc_t);
+	open_desc = NULL;
+	hole_start = 0;
+	hole_end = ULONG_MAX;
+	end_addr = addr + PAGE_SIZE - 1;
+
+	/* Check if this vma already has tag storage descriptor
+	 * allocated for it.
+	 */
+	spin_lock_irqsave(&mm->context.tag_lock, flags);
+	if (mm->context.tag_store) {
+		tag_desc = mm->context.tag_store;
+
+		/* Look for a matching entry for this address. While doing
+		 * that, look for the first open slot as well and find
+		 * the hole in already allocated range where this request
+		 * will fit in.
+		 */
+		for (i = 0; i < max_desc; i++) {
+			if (tag_desc->tag_users == 0) {
+				if (open_desc == NULL)
+					open_desc = tag_desc;
+			} else {
+				if ((addr >= tag_desc->start) &&
+				    (tag_desc->end >= (addr + PAGE_SIZE - 1))) {
+					tag_desc->tag_users++;
+					goto out;
+				}
+			}
+			if ((tag_desc->start > end_addr) &&
+			    (tag_desc->start < hole_end))
+				hole_end = tag_desc->start;
+			if ((tag_desc->end < addr) &&
+			    (tag_desc->end > hole_start))
+				hole_start = tag_desc->end;
+			tag_desc++;
+		}
+
+	} else {
+		size = sizeof(tag_storage_desc_t)*max_desc;
+		mm->context.tag_store = kzalloc(size, GFP_NOIO|__GFP_NOWARN);
+		if (mm->context.tag_store == NULL) {
+			tag_desc = NULL;
+			goto out;
+		}
+		tag_desc = mm->context.tag_store;
+		for (i = 0; i < max_desc; i++, tag_desc++)
+			tag_desc->tag_users = 0;
+		open_desc = mm->context.tag_store;
+		i = 0;
+	}
+
+	/* Check if we ran out of tag storage descriptors */
+	if (open_desc == NULL) {
+		tag_desc = NULL;
+		goto out;
+	}
+
+	/* Mark this tag descriptor slot in use and then initialize it */
+	tag_desc = open_desc;
+	tag_desc->tag_users = 1;
+
+	/* Tag storage has not been allocated for this vma and space
+	 * is available in tag storage descriptor. Since this page is
+	 * being swapped out, there is high probability subsequent pages
+	 * in the VMA will be swapped out as well. Allocate pages to
+	 * store tags for as many pages in this vma as possible but not
+	 * more than TAG_STORAGE_PAGES. Each byte in tag space holds
+	 * two ADI tags since each ADI tag is 4 bits. Each ADI tag
+	 * covers adi_blksize() worth of addresses. Check if the hole is
+	 * big enough to accommodate full address range for using
+	 * TAG_STORAGE_PAGES number of tag pages.
+	 */
+	size = TAG_STORAGE_PAGES * PAGE_SIZE;
+	end_addr = addr + (size*2*adi_blksize()) - 1;
+	if (hole_end < end_addr) {
+		/* Available hole is too small on the upper end of
+		 * address. Can we expand the range towards the lower
+		 * address and maximize use of this slot?
+		 */
+		unsigned long tmp_addr;
+
+		end_addr = hole_end - 1;
+		tmp_addr = end_addr - (size*2*adi_blksize()) + 1;
+		if (tmp_addr < hole_start) {
+			/* Available hole is restricted on lower address
+			 * end as well
+			 */
+			tmp_addr = hole_start + 1;
+		}
+		addr = tmp_addr;
+		size = (end_addr + 1 - addr)/(2*adi_blksize());
+		size = (size + (PAGE_SIZE-adi_blksize()))/PAGE_SIZE;
+		size = size * PAGE_SIZE;
+	}
+	tags = kzalloc(size, GFP_NOIO|__GFP_NOWARN);
+	if (tags == NULL) {
+		tag_desc->tag_users = 0;
+		tag_desc = NULL;
+		goto out;
+	}
+	tag_desc->start = addr;
+	tag_desc->tags = tags;
+	tag_desc->end = end_addr;
+
+out:
+	spin_unlock_irqrestore(&mm->context.tag_lock, flags);
+	return tag_desc;
+}
+
+void del_tag_store(tag_storage_desc_t *tag_desc, struct mm_struct *mm)
+{
+	unsigned long flags;
+	unsigned char *tags = NULL;
+
+	spin_lock_irqsave(&mm->context.tag_lock, flags);
+	tag_desc->tag_users--;
+	if (tag_desc->tag_users == 0) {
+		tag_desc->start = tag_desc->end = 0;
+		/* Do not free up the tag storage space allocated
+		 * by the first descriptor. This is persistent
+		 * emergency tag storage space for the task.
+		 */
+		if (tag_desc != mm->context.tag_store) {
+			tags = tag_desc->tags;
+			tag_desc->tags = NULL;
+		}
+	}
+	spin_unlock_irqrestore(&mm->context.tag_lock, flags);
+	kfree(tags);
+}
+
+#define tag_start(addr, tag_desc)		\
+	((tag_desc)->tags + ((addr - (tag_desc)->start)/(2*adi_blksize())))
+
+/* Retrieve any saved ADI tags for the page being swapped back in and
+ * restore these tags to the newly allocated physical page.
+ */
+void adi_restore_tags(struct mm_struct *mm, struct vm_area_struct *vma,
+		      unsigned long addr, pte_t pte)
+{
+	unsigned char *tag;
+	tag_storage_desc_t *tag_desc;
+	unsigned long paddr, tmp, version1, version2;
+
+	/* Check if the swapped out page has an ADI version
+	 * saved. If yes, restore version tag to the newly
+	 * allocated page.
+	 */
+	tag_desc = find_tag_store(mm, vma, addr);
+	if (tag_desc == NULL)
+		return;
+
+	tag = tag_start(addr, tag_desc);
+	paddr = pte_val(pte) & _PAGE_PADDR_4V;
+	for (tmp = paddr; tmp < (paddr+PAGE_SIZE); tmp += adi_blksize()) {
+		version1 = (*tag) >> 4;
+		version2 = (*tag) & 0x0f;
+		*tag++ = 0;
+		asm volatile("stxa %0, [%1] %2\n\t"
+			:
+			: "r" (version1), "r" (tmp),
+			  "i" (ASI_MCD_REAL));
+		tmp += adi_blksize();
+		asm volatile("stxa %0, [%1] %2\n\t"
+			:
+			: "r" (version2), "r" (tmp),
+			  "i" (ASI_MCD_REAL));
+	}
+	asm volatile("membar #Sync\n\t");
+
+	/* Check and mark this tag space for release later if
+	 * the swapped in page was the last user of tag space
+	 */
+	del_tag_store(tag_desc, mm);
+}
+
+/* A page is about to be swapped out. Save any ADI tags associated with
+ * this physical page so they can be restored later when the page is swapped
+ * back in.
+ */
+int adi_save_tags(struct mm_struct *mm, struct vm_area_struct *vma,
+		  unsigned long addr, pte_t oldpte)
+{
+	unsigned char *tag;
+	tag_storage_desc_t *tag_desc;
+	unsigned long version1, version2, paddr, tmp;
+
+	tag_desc = alloc_tag_store(mm, vma, addr);
+	if (tag_desc == NULL)
+		return -1;
+
+	tag = tag_start(addr, tag_desc);
+	paddr = pte_val(oldpte) & _PAGE_PADDR_4V;
+	for (tmp = paddr; tmp < (paddr+PAGE_SIZE); tmp += adi_blksize()) {
+		asm volatile("ldxa [%1] %2, %0\n\t"
+				: "=r" (version1)
+				: "r" (tmp), "i" (ASI_MCD_REAL));
+		tmp += adi_blksize();
+		asm volatile("ldxa [%1] %2, %0\n\t"
+				: "=r" (version2)
+				: "r" (tmp), "i" (ASI_MCD_REAL));
+		*tag = (version1 << 4) | version2;
+		tag++;
+	}
+
+	return 0;
+}
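
The two loops above pack and unpack two 4-bit versions per tag byte,
high nibble first. A standalone sketch of just that packing, mirroring
the nibble arithmetic in adi_save_tags() and adi_restore_tags():

#include <assert.h>

static void save_pair(unsigned char *tag, unsigned int v1, unsigned int v2)
{
	/* first block's version in the high nibble, second in the low */
	*tag = ((v1 & 0x0f) << 4) | (v2 & 0x0f);
}

static void restore_pair(unsigned char tag, unsigned int *v1, unsigned int *v2)
{
	*v1 = tag >> 4;
	*v2 = tag & 0x0f;
}

int main(void)
{
	unsigned char t;
	unsigned int a, b;

	save_pair(&t, 10, 3);
	restore_pair(t, &a, &b);
	assert(a == 10 && b == 3);
	return 0;
}
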
diff --git a/arch/sparc/kernel/etrap_64.S b/arch/sparc/kernel/etrap_64.S
index 1276ca2567ba..7be33bf45cff 100644
--- a/arch/sparc/kernel/etrap_64.S
+++ b/arch/sparc/kernel/etrap_64.S
@@ -132,7 +132,33 @@ etrap_save:	save	%g2, -STACK_BIAS, %sp
 		stx	%g6, [%sp + PTREGS_OFF + PT_V9_G6]
 		stx	%g7, [%sp + PTREGS_OFF + PT_V9_G7]
 		or	%l7, %l0, %l7
-		sethi	%hi(TSTATE_TSO | TSTATE_PEF), %l0
+661:		sethi	%hi(TSTATE_TSO | TSTATE_PEF), %l0
+		/*
+		 * If userspace is using ADI, it could potentially pass
+		 * a pointer with version tag embedded in it. To maintain
+		 * the ADI security, we must enable PSTATE.mcde. Userspace
+		 * would have already set TTE.mcd in an earlier call to
+		 * kernel and set the version tag for the address being
+		 * dereferenced. Setting PSTATE.mcde would ensure any
+		 * access to userspace data through a system call honors
+		 * ADI and does not allow a rogue app to bypass ADI by
+		 * using system calls. Setting PSTATE.mcde only affects
+		 * accesses to virtual addresses that have TTE.mcd set.
+		 * Set PMCDPER to ensure any exceptions caused by ADI
+		 * version tag mismatch are exposed before system call
+		 * returns to userspace. Setting PMCDPER affects only
+		 * writes to virtual addresses that have TTE.mcd set and
+		 * have a version tag set as well.
+		 */
+		.section .sun_m7_1insn_patch, "ax"
+		.word	661b
+		sethi	%hi(TSTATE_TSO | TSTATE_PEF | TSTATE_MCDE), %l0
+		.previous
+661:		nop
+		.section .sun_m7_1insn_patch, "ax"
+		.word	661b
+		.word 0xaf902001	/* wrpr %g0, 1, %pmcdper */
+		.previous
 		or	%l7, %l0, %l7
 		wrpr	%l2, %tnpc
 		wrpr	%l7, (TSTATE_PRIV | TSTATE_IE), %tstate
diff --git a/arch/sparc/kernel/process_64.c b/arch/sparc/kernel/process_64.c
index b96104da5bd6..defa5723dfa6 100644
--- a/arch/sparc/kernel/process_64.c
+++ b/arch/sparc/kernel/process_64.c
@@ -664,6 +664,31 @@ int copy_thread(unsigned long clone_flags, unsigned long sp,
 	return 0;
 }
 
+/* TIF_MCDPER in thread info flags for current task is updated lazily upon
+ * a context switch. Update this flag in the current task's thread flags
+ * before dup so the dup'd task will inherit the current TIF_MCDPER flag.
+ */
+int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
+{
+	if (adi_capable()) {
+		register unsigned long tmp_mcdper;
+
+		__asm__ __volatile__(
+			".word 0x83438000\n\t"	/* rd  %mcdper, %g1 */
+			"mov %%g1, %0\n\t"
+			: "=r" (tmp_mcdper)
+			:
+			: "g1");
+		if (tmp_mcdper)
+			set_thread_flag(TIF_MCDPER);
+		else
+			clear_thread_flag(TIF_MCDPER);
+	}
+
+	*dst = *src;
+	return 0;
+}
+
 typedef struct {
 	union {
 		unsigned int	pr_regs[32];
diff --git a/arch/sparc/kernel/setup_64.c b/arch/sparc/kernel/setup_64.c
index 422b17880955..a9da205da394 100644
--- a/arch/sparc/kernel/setup_64.c
+++ b/arch/sparc/kernel/setup_64.c
@@ -240,6 +240,12 @@ void sun4v_patch_1insn_range(struct sun4v_1insn_patch_entry *start,
 	}
 }
 
+void sun_m7_patch_1insn_range(struct sun4v_1insn_patch_entry *start,
+			     struct sun4v_1insn_patch_entry *end)
+{
+	sun4v_patch_1insn_range(start, end);
+}
+
 void sun4v_patch_2insn_range(struct sun4v_2insn_patch_entry *start,
 			     struct sun4v_2insn_patch_entry *end)
 {
@@ -289,9 +295,12 @@ static void __init sun4v_patch(void)
 	sun4v_patch_2insn_range(&__sun4v_2insn_patch,
 				&__sun4v_2insn_patch_end);
 	if (sun4v_chip_type == SUN4V_CHIP_SPARC_M7 ||
-	    sun4v_chip_type == SUN4V_CHIP_SPARC_SN)
+	    sun4v_chip_type == SUN4V_CHIP_SPARC_SN) {
+		sun_m7_patch_1insn_range(&__sun_m7_1insn_patch,
+					 &__sun_m7_1insn_patch_end);
 		sun_m7_patch_2insn_range(&__sun_m7_2insn_patch,
 					 &__sun_m7_2insn_patch_end);
+	}
 
 	sun4v_hvapi_init();
 }
diff --git a/arch/sparc/kernel/vmlinux.lds.S b/arch/sparc/kernel/vmlinux.lds.S
index 572db686f845..20a70682cce7 100644
--- a/arch/sparc/kernel/vmlinux.lds.S
+++ b/arch/sparc/kernel/vmlinux.lds.S
@@ -144,6 +144,11 @@ SECTIONS
 		*(.pause_3insn_patch)
 		__pause_3insn_patch_end = .;
 	}
+	.sun_m7_1insn_patch : {
+		__sun_m7_1insn_patch = .;
+		*(.sun_m7_1insn_patch)
+		__sun_m7_1insn_patch_end = .;
+	}
 	.sun_m7_2insn_patch : {
 		__sun_m7_2insn_patch = .;
 		*(.sun_m7_2insn_patch)
diff --git a/arch/sparc/mm/gup.c b/arch/sparc/mm/gup.c
index cd0e32bbcb1d..579f7ae75b35 100644
--- a/arch/sparc/mm/gup.c
+++ b/arch/sparc/mm/gup.c
@@ -11,6 +11,7 @@
 #include <linux/pagemap.h>
 #include <linux/rwsem.h>
 #include <asm/pgtable.h>
+#include <asm/adi.h>
 
 /*
  * The performance critical leaf functions are made noinline otherwise gcc
@@ -157,6 +158,24 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
 	pgd_t *pgdp;
 	int nr = 0;
 
+#ifdef CONFIG_SPARC64
+	if (adi_capable()) {
+		long addr = start;
+
+		/* If userspace has passed a versioned address, kernel
+		 * will not find it in the VMAs since it does not store
+		 * the version tags in the list of VMAs. Storing version
+		 * tags in list of VMAs is impractical since they can be
+		 * changed any time from userspace without dropping into
+		 * kernel. Any address search in VMAs will be done with
+		 * non-versioned addresses. Ensure the ADI version bits
+		 * are dropped here by sign extending the last bit before
+		 * ADI bits. IOMMU does not implement version tags.
+		 */
+		addr = (addr << (long)adi_nbits()) >> (long)adi_nbits();
+		start = addr;
+	}
+#endif
 	start &= PAGE_MASK;
 	addr = start;
 	len = (unsigned long) nr_pages << PAGE_SHIFT;
@@ -187,6 +206,24 @@ int get_user_pages_fast(unsigned long start, int nr_pages, int write,
 	pgd_t *pgdp;
 	int nr = 0;
 
+#ifdef CONFIG_SPARC64
+	if (adi_capable()) {
+		long addr = start;
+
+		/* If userspace has passed a versioned address, kernel
+		 * will not find it in the VMAs since it does not store
+		 * the version tags in the list of VMAs. Storing version
+		 * tags in list of VMAs is impractical since they can be
+		 * changed any time from userspace without dropping into
+		 * kernel. Any address search in VMAs will be done with
+		 * non-versioned addresses. Ensure the ADI version bits
+		 * are dropped here by sign extending the last bit before
+		 * ADI bits. IOMMU does not implement version tags.
+		 */
+		addr = (addr << (long)adi_nbits()) >> (long)adi_nbits();
+		start = addr;
+	}
+#endif
 	start &= PAGE_MASK;
 	addr = start;
 	len = (unsigned long) nr_pages << PAGE_SHIFT;
diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c
index 88855e383b34..487ed1f1ce86 100644
--- a/arch/sparc/mm/hugetlbpage.c
+++ b/arch/sparc/mm/hugetlbpage.c
@@ -177,8 +177,20 @@ pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,
 			 struct page *page, int writeable)
 {
 	unsigned int shift = huge_page_shift(hstate_vma(vma));
+	pte_t pte;
 
-	return hugepage_shift_to_tte(entry, shift);
+	pte = hugepage_shift_to_tte(entry, shift);
+
+#ifdef CONFIG_SPARC64
+	/* If this vma has ADI enabled on it, turn on TTE.mcd
+	 */
+	if (vma->vm_flags & VM_SPARC_ADI)
+		return pte_mkmcd(pte);
+	else
+		return pte_mknotmcd(pte);
+#else
+	return pte;
+#endif
 }
 
 static unsigned int sun4v_huge_tte_to_shift(pte_t entry)
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 3c40ebd50f92..94854e7e833e 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -3087,3 +3087,36 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 		do_flush_tlb_kernel_range(start, end);
 	}
 }
+
+void copy_user_highpage(struct page *to, struct page *from,
+	unsigned long vaddr, struct vm_area_struct *vma)
+{
+	char *vfrom, *vto;
+
+	vfrom = kmap_atomic(from);
+	vto = kmap_atomic(to);
+	copy_user_page(vto, vfrom, vaddr, to);
+	kunmap_atomic(vto);
+	kunmap_atomic(vfrom);
+
+	/* If this page has ADI enabled, copy over any ADI tags
+	 * as well
+	 */
+	if (vma->vm_flags & VM_SPARC_ADI) {
+		unsigned long pfrom, pto, i, adi_tag;
+
+		pfrom = page_to_phys(from);
+		pto = page_to_phys(to);
+
+		for (i = pfrom; i < (pfrom + PAGE_SIZE); i += adi_blksize()) {
+			asm volatile("ldxa [%1] %2, %0\n\t"
+					: "=r" (adi_tag)
+					:  "r" (i), "i" (ASI_MCD_REAL));
+			asm volatile("stxa %0, [%1] %2\n\t"
+					:
+					: "r" (adi_tag), "r" (pto),
+					  "i" (ASI_MCD_REAL));
+			pto += adi_blksize();
+		}
+	}
+}
diff --git a/arch/sparc/mm/tsb.c b/arch/sparc/mm/tsb.c
index 0d4b998c7d7b..6518cc42056b 100644
--- a/arch/sparc/mm/tsb.c
+++ b/arch/sparc/mm/tsb.c
@@ -545,6 +545,9 @@ int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
 
 	mm->context.sparc64_ctx_val = 0UL;
 
+	mm->context.tag_store = NULL;
+	spin_lock_init(&mm->context.tag_lock);
+
 #if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE)
 	/* We reset them to zero because the fork() page copying
 	 * will re-increment the counters as the parent PTEs are
@@ -610,4 +613,22 @@ void destroy_context(struct mm_struct *mm)
 	}
 
 	spin_unlock_irqrestore(&ctx_alloc_lock, flags);
+
+	/* If ADI tag storage was allocated for this task, free it */
+	if (mm->context.tag_store) {
+		tag_storage_desc_t *tag_desc;
+		unsigned long max_desc;
+		unsigned char *tags;
+
+		tag_desc = mm->context.tag_store;
+		max_desc = PAGE_SIZE/sizeof(tag_storage_desc_t);
+		for (i = 0; i < max_desc; i++) {
+			tags = tag_desc->tags;
+			tag_desc->tags = NULL;
+			kfree(tags);
+			tag_desc++;
+		}
+		kfree(mm->context.tag_store);
+		mm->context.tag_store = NULL;
+	}
 }
diff --git a/include/linux/mm.h b/include/linux/mm.h
index b7aa3932e6d4..c0972114036f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -231,6 +231,9 @@ extern unsigned int kobjsize(const void *objp);
 # define VM_GROWSUP	VM_ARCH_1
 #elif defined(CONFIG_IA64)
 # define VM_GROWSUP	VM_ARCH_1
+#elif defined(CONFIG_SPARC64)
+# define VM_SPARC_ADI	VM_ARCH_1	/* Uses ADI tag for access control */
+# define VM_ARCH_CLEAR	VM_SPARC_ADI
 #elif !defined(CONFIG_MMU)
 # define VM_MAPPED_COPY	VM_ARCH_1	/* T if mapped copy of data (nommu mmap) */
 #endif
diff --git a/mm/ksm.c b/mm/ksm.c
index 216184af0e19..bb82399816ef 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1797,6 +1797,10 @@ int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
 		if (*vm_flags & VM_SAO)
 			return 0;
 #endif
+#ifdef VM_SPARC_ADI
+		if (*vm_flags & VM_SPARC_ADI)
+			return 0;
+#endif
 
 		if (!test_bit(MMF_VM_MERGEABLE, &mm->flags)) {
 			err = __ksm_enter(mm);
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 86+ messages in thread

* [PATCH v7 9/9] sparc64: Add support for ADI (Application Data Integrity)
@ 2017-08-09 21:26   ` Khalid Aziz
  0 siblings, 0 replies; 86+ messages in thread
From: Khalid Aziz @ 2017-08-09 21:26 UTC (permalink / raw)
  To: davem, dave.hansen
  Cc: Khalid Aziz, corbet, bob.picco, steven.sistare, pasha.tatashin,
	mike.kravetz, mingo, nitin.m.gupta, kirill.shutemov,
	tom.hromatka, eric.saint.etienne, allen.pais, cmetcalf, akpm,
	geert, tklauser, atish.patra, vijay.ac.kumar, peterz, mhocko,
	jack, lstoakes, hughd, thomas.tai, paul.gortmaker, ross.zwisler,
	dave.jiang, willy, ying.huang, zhongjiang, minchan,
	vegard.nossum, imbrenda, aneesh.kumar, aarcange, linux-doc,
	linux-kernel, sparclinux, linux-mm, Khalid Aziz

ADI is a new feature supported on SPARC M7 and newer processors to allow
hardware to catch rogue accesses to memory. ADI is supported for data
fetches only and not instruction fetches. An app can enable ADI on its
data pages, set version tags on them and use versioned addresses to
access the data pages. Upper bits of the address contain the version
tag. On M7 processors, upper four bits (bits 63-60) contain the version
tag. If a rogue app attempts to access ADI enabled data pages, its
access is blocked and processor generates an exception. Please see
Documentation/sparc/adi.txt for further details.

This patch extends mprotect to enable ADI (TSTATE.mcde), enable/disable
MCD (Memory Corruption Detection) on selected memory ranges, enable
TTE.mcd in PTEs, return ADI parameters to userspace and save/restore ADI
version tags on page swap out/in or migration. ADI is not enabled by
default for any task. A task must explicitly enable ADI on a memory
range and set version tag for ADI to be effective for the task.
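
A minimal sketch of the userspace side, assuming the PROT_ADI
definition from this series ('buf' and 'len' are placeholder names for
an existing writable mapping):

	if (mprotect(buf, len, PROT_READ | PROT_WRITE | PROT_ADI))
		perror("mprotect(PROT_ADI)");
	/* then set version tags with stxa through ASI_MCD_PRIMARY and
	 * access the range through versioned pointers, as described in
	 * Documentation/sparc/adi.txt
	 */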

Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
Cc: Khalid Aziz <khalid@gonehiking.org>
---
v7:
	- Enhanced arch_validate_prot() to enable ADI only on writable
	  addresses backed by physical RAM
	- Added support for saving/restoring ADI tags for each ADI
	  block size address range on a page on swap in/out
	- Added code to copy ADI tags on COW
	- Updated values for auxiliary vectors to not conflict with
	  values on other architectures to avoid conflict in glibc. glibc
	  consolidates all auxiliary vectors into its headers and
	  duplicate values in consolidated header are problematic
	- Disable same page merging on ADI enabled pages since ADI tags
	  may not match on pages with identical data
	- Broke the patch up further into smaller patches

v6:
	- Eliminated instructions to read and write PSTATE as well as
	  MCDPER and PMCDPER on every access to userspace addresses
	  by setting PSTATE and PMCDPER correctly upon entry into
	  kernel. PSTATE.mcde and PMCDPER are set upon entry into
	  kernel when running on an M7 processor. PSTATE.mcde being
	  set only affects memory accesses that have TTE.mcd set.
	  PMCDPER being set only affects writes to memory addresses
	  that have TTE.mcd set. This ensures any faults caused by
	  ADI tag mismatch on a write are exposed before kernel returns
	  to userspace.

v5:
	- Fixed indentation issues and instructions in assembly code
	- Removed CONFIG_SPARC64 from mdesc.c
	- Changed to maintain state of MCDPER register in thread info
	  flags as opposed to in mm context. MCDPER is a per-thread
	  state and belongs in thread info flag as opposed to mm context
	  which is shared across threads. Added comments to clarify this
	  is a lazily maintained state and must be updated on context
	  switch and copy_process()
	- Updated code to use the new arch_do_swap_page() and
	  arch_unmap_one() functions

v4:
	- Broke patch up into smaller patches

v3:
	- Removed CONFIG_SPARC_ADI
	- Replaced prctl commands with mprotect
	- Added auxiliary vectors for ADI parameters
	- Enabled ADI for swappable pages

v2:
	- Fixed a build error

 Documentation/sparc/adi.txt             | 272 +++++++++++++++++++++++++++++++
 arch/sparc/include/asm/mman.h           |  72 ++++++++-
 arch/sparc/include/asm/mmu_64.h         |  17 ++
 arch/sparc/include/asm/mmu_context_64.h |  43 +++++
 arch/sparc/include/asm/page_64.h        |   4 +
 arch/sparc/include/asm/pgtable_64.h     |  46 ++++++
 arch/sparc/include/asm/thread_info_64.h |   2 +-
 arch/sparc/include/asm/trap_block.h     |   2 +
 arch/sparc/include/uapi/asm/mman.h      |   2 +
 arch/sparc/kernel/adi_64.c              | 277 ++++++++++++++++++++++++++++++++
 arch/sparc/kernel/etrap_64.S            |  28 +++-
 arch/sparc/kernel/process_64.c          |  25 +++
 arch/sparc/kernel/setup_64.c            |  11 +-
 arch/sparc/kernel/vmlinux.lds.S         |   5 +
 arch/sparc/mm/gup.c                     |  37 +++++
 arch/sparc/mm/hugetlbpage.c             |  14 +-
 arch/sparc/mm/init_64.c                 |  33 ++++
 arch/sparc/mm/tsb.c                     |  21 +++
 include/linux/mm.h                      |   3 +
 mm/ksm.c                                |   4 +
 20 files changed, 913 insertions(+), 5 deletions(-)
 create mode 100644 Documentation/sparc/adi.txt

diff --git a/Documentation/sparc/adi.txt b/Documentation/sparc/adi.txt
new file mode 100644
index 000000000000..383bc65fec1e
--- /dev/null
+++ b/Documentation/sparc/adi.txt
@@ -0,0 +1,272 @@
+Application Data Integrity (ADI)
+================================
+
+SPARC M7 processor adds the Application Data Integrity (ADI) feature.
+ADI allows a task to set version tags on any subset of its address
+space. Once ADI is enabled and version tags are set for ranges of
+address space of a task, the processor will compare the tag in pointers
+to memory in these ranges to the version set by the application
+previously. Access to memory is granted only if the tag in given pointer
+matches the tag set by the application. In case of mismatch, processor
+raises an exception.
+
+Following steps must be taken by a task to enable ADI fully:
+
+1. Set the user mode PSTATE.mcde bit. This acts as master switch for
+   the task's entire address space to enable/disable ADI for the task.
+
+2. Set TTE.mcd bit on any TLB entries that correspond to the range of
+   addresses ADI is being enabled on. MMU checks the version tag only
+   on the pages that have TTE.mcd bit set.
+
+3. Set the version tag for virtual addresses using stxa instruction
+   and one of the MCD specific ASIs. Each stxa instruction sets the
+   given tag for one ADI block size worth of bytes. This step must
+   be repeated for the entire page to set tags for the whole page;
+   a one-instruction sketch follows this list.
+
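+As a minimal sketch (assuming ASI_MCD_PRIMARY is 0x90, as in the sample
+program at the end of this document), setting version tag 5 on one ADI
+block at 'addr', a pointer into an ADI-enabled mapping, could look like:
+
+	asm volatile("stxa %0, [%1] 0x90	! ASI_MCD_PRIMARY"
+		     : : "r" (5UL), "r" (addr) : "memory");
+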
+ADI block size for the platform is provided by the hypervisor to kernel
+in machine description tables. Hypervisor also provides the number of
+top bits in the virtual address that specify the version tag.  Once
+version tag has been set for a memory location, the tag is stored in the
+physical memory and the same tag must be present in the ADI version tag
+bits of the virtual address being presented to the MMU. For example on
+SPARC M7 processor, the MMU uses bits 63-60 for version tags and the
+ADI block size is the same as the cacheline size, which is 64 bytes. A
+task that sets ADI version to, say 10, on a range of memory, must
+access that memory using virtual addresses that contain 0xa in bits
+63-60.
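+
+A sketch of composing such a versioned pointer in C (the helper name
+here is made up, and the 4-bit/M7 tag layout is assumed):
+
+	static inline void *adi_tagged(void *p, unsigned long tag)
+	{
+		return (void *)(((unsigned long)p & ~(0xfUL << 60)) |
+				(tag << 60));
+	}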
+
+ADI is enabled on a set of pages using mprotect() with PROT_ADI flag.
+When ADI is enabled on a set of pages by a task for the first time,
+kernel sets the PSTATE.mcde bit for the task. Version tags for memory
+addresses are set with an stxa instruction on the addresses using
+ASI_MCD_PRIMARY or ASI_MCD_ST_BLKINIT_PRIMARY. ADI block size is
+provided by the hypervisor to the kernel.  Kernel returns the value of
+ADI block size to userspace using auxiliary vector along with other ADI
+info. Following auxiliary vectors are provided by the kernel:
+
+	AT_ADI_BLKSZ	ADI block size. This is the granularity and
+			alignment, in bytes, of ADI versioning.
+	AT_ADI_NBITS	Number of ADI version bits in the VA
+
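+Assuming a C library that exposes these constants (the sample program
+at the end of this document defines them by hand), getauxval(3) is a
+shorter way to read them than walking past envp:
+
+	#include <sys/auxv.h>
+
+	unsigned long adi_blksz = getauxval(AT_ADI_BLKSZ);
+	unsigned long adi_nbits = getauxval(AT_ADI_NBITS);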
+
+IMPORTANT NOTES:
+
+- Version tag values of 0x0 and 0xf are reserved.
+
+- Version tags are set on virtual addresses from userspace even though
+  tags are stored in physical memory. Tags are set on a physical page
+  after it has been allocated to a task and a pte has been created for
+  it.
+
+- When a task frees a memory page it had set version tags on, the page
+  goes back to free page pool. When this page is re-allocated to a task,
+  kernel clears the page using block initialization ASI which clears the
+  version tags as well for the page. If a page allocated to a task is
+  freed and allocated back to the same task, old version tags set by the
+  task on that page will no longer be present.
+
+- Kernel does not set any tags for user pages and it is entirely a
+  task's responsibility to set any version tags. Kernel does ensure the
+  version tags are preserved if a page is swapped out to the disk and
+  swapped back in. It also preserves the version tags if a page is
+  migrated.
+
+- ADI works for any size pages. A userspace task need not be aware of
+  page size when using ADI. It can simply select a virtual address
+  range, enable ADI on the range using mprotect() and set version tags
+  for the entire range. mprotect() ensures range is aligned to page size
+  and is a multiple of page size.
+
+
+
+ADI related traps
+-----------------
+
+With ADI enabled, following new traps may occur:
+
+Disrupting memory corruption
+
+	When a store accesses a memory location that has TTE.mcd=1,
+	the task is running with ADI enabled (PSTATE.mcde=1), and the ADI
+	tag in the address used (bits 63:60) does not match the tag set on
+	the corresponding cacheline, a memory corruption trap occurs. By
+	default, it is a disrupting trap and is sent to the hypervisor
+	first. Hypervisor creates a sun4v error report and sends a
+	resumable error (TT=0x7e) trap to the kernel. The kernel sends
+	a SIGSEGV to the task that resulted in this trap with the following
+	info:
+
+		siginfo.si_signo = SIGSEGV;
+		siginfo.si_errno = 0;
+		siginfo.si_code = SEGV_ADIDERR;
+		siginfo.si_addr = addr; /* PC where first mismatch occurred */
+		siginfo.si_trapno = 0;
+
+
+Precise memory corruption
+
+	When a store accesses a memory location that has TTE.mcd=1,
+	the task is running with ADI enabled (PSTATE.mcde=1), and the ADI
+	tag in the address used (bits 63:60) does not match the tag set on
+	the corresponding cacheline, a memory corruption trap occurs. If
+	MCD precise exception is enabled (MCDPERR=1), a precise
+	exception is sent to the kernel with TT=0x1a. The kernel sends
+	a SIGSEGV to the task that resulted in this trap with the following
+	info:
+
+		siginfo.si_signo = SIGSEGV;
+		siginfo.si_errno = 0;
+		siginfo.si_code = SEGV_ADIPERR;
+		siginfo.si_addr = addr;	/* address that caused trap */
+		siginfo.si_trapno = 0;
+
+	NOTE: ADI tag mismatch on a load always results in precise trap.
+
+
+MCD disabled
+
+	When a task has not enabled ADI and attempts to set ADI version
+	on a memory address, processor sends an MCD disabled trap. This
+	trap is handled by hypervisor first and the hypervisor vectors this
+	trap through to the kernel as Data Access Exception trap with
+	fault type set to 0xa (invalid ASI). When this occurs, the kernel
+	sends the task SIGSEGV signal with following info:
+
+		siginfo.si_signo = SIGSEGV;
+		siginfo.si_errno = 0;
+		siginfo.si_code = SEGV_ACCADI;
+		siginfo.si_addr = addr;	/* address that caused trap */
+		siginfo.si_trapno = 0;
+
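+A process that prefers to handle these traps itself can install a
+SIGSEGV handler and inspect si_code; a minimal sketch, assuming the
+SEGV_ADI* constants above are visible to userspace:
+
+	#include <signal.h>
+	#include <unistd.h>
+
+	static void segv_handler(int sig, siginfo_t *si, void *ctx)
+	{
+		if (si->si_code == SEGV_ADIDERR ||
+		    si->si_code == SEGV_ADIPERR)
+			write(2, "ADI tag mismatch\n", 17);
+		_exit(1);
+	}
+
+	/* registered with sigaction() using SA_SIGINFO */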
+
+Sample program to use ADI
+-------------------------
+
+Following sample program is meant to illustrate how to use the ADI
+functionality.
+
+#include <unistd.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <elf.h>
+#include <sys/ipc.h>
+#include <sys/shm.h>
+#include <sys/mman.h>
+#include <asm/asi.h>
+
+#ifndef AT_ADI_BLKSZ
+#define AT_ADI_BLKSZ	48
+#endif
+#ifndef AT_ADI_NBITS
+#define AT_ADI_NBITS	49
+#endif
+
+#ifndef PROT_ADI
+#define PROT_ADI	0x10
+#endif
+
+#define BUFFER_SIZE     32*1024*1024UL
+
+int main(int argc, char* argv[], char* envp[])
+{
+        unsigned long i, adi_blksz, adi_nbits;
+        char *shmaddr, *tmp_addr, *end, *veraddr;
+        int shmid, version;
+	Elf64_auxv_t *auxv;
+
+	adi_blksz = adi_nbits = 0;
+
+	while(*envp++ != NULL);
+	for (auxv = (Elf64_auxv_t *)envp; auxv->a_type != AT_NULL; auxv++) {
+		switch (auxv->a_type) {
+		case AT_ADI_BLKSZ:
+			adi_blksz = auxv->a_un.a_val;
+			break;
+		case AT_ADI_NBITS:
+			adi_nbits = auxv->a_un.a_val;
+			break;
+		}
+	}
+	if (adi_blksz == 0) {
+		fprintf(stderr, "Oops! ADI is not supported\n");
+		exit(1);
+	}
+
+	printf("ADI capabilities:\n");
+	printf("\tBlock size = %ld\n", adi_blksz);
+	printf("\tNumber of bits = %ld\n", adi_nbits);
+
+        if ((shmid = shmget(2, BUFFER_SIZE,
+                                IPC_CREAT | SHM_R | SHM_W)) < 0) {
+                perror("shmget failed");
+                exit(1);
+        }
+
+        shmaddr = shmat(shmid, NULL, 0);
+        if (shmaddr == (char *)-1) {
+                perror("shm attach failed");
+                shmctl(shmid, IPC_RMID, NULL);
+                exit(1);
+        }
+
+	if (mprotect(shmaddr, BUFFER_SIZE, PROT_READ|PROT_WRITE|PROT_ADI)) {
+		perror("mprotect failed");
+		goto err_out;
+	}
+
+        /* Set the ADI version tag on the shm segment
+         */
+        version = 10;
+        tmp_addr = shmaddr;
+        end = shmaddr + BUFFER_SIZE;
+        while (tmp_addr < end) {
+                asm volatile(
+                        "stxa %1, [%0]0x90\n\t"
+                        :
+                        : "r" (tmp_addr), "r" (version));
+                tmp_addr += adi_blksz;
+        }
+	asm volatile("membar #Sync\n\t");
+
+        /* Create a versioned address from the normal address by placing
+	 * version tag in the upper adi_nbits bits
+         */
+        tmp_addr = (void *) ((unsigned long)shmaddr << adi_nbits);
+        tmp_addr = (void *) ((unsigned long)tmp_addr >> adi_nbits);
+        veraddr = (void *) (((unsigned long)version << (64-adi_nbits))
+                        | (unsigned long)tmp_addr);
+
+        printf("Starting the writes:\n");
+        for (i = 0; i < BUFFER_SIZE; i++) {
+                veraddr[i] = (char)(i);
+                if (!(i % (1024 * 1024)))
+                        printf(".");
+        }
+        printf("\n");
+
+        printf("Verifying data...");
+	fflush(stdout);
+        for (i = 0; i < BUFFER_SIZE; i++)
+                if (veraddr[i] != (char)i)
+                        printf("\nIndex %lu mismatched\n", i);
+        printf("Done.\n");
+
+        /* Disable ADI and clean up
+         */
+	if (mprotect(shmaddr, BUFFER_SIZE, PROT_READ|PROT_WRITE)) {
+		perror("mprotect failed");
+		goto err_out;
+	}
+
+        if (shmdt((const void *)shmaddr) != 0)
+                perror("Detach failure");
+        shmctl(shmid, IPC_RMID, NULL);
+
+        exit(0);
+
+err_out:
+        if (shmdt((const void *)shmaddr) != 0)
+                perror("Detach failure");
+        shmctl(shmid, IPC_RMID, NULL);
+        exit(1);
+}
diff --git a/arch/sparc/include/asm/mman.h b/arch/sparc/include/asm/mman.h
index 59bb5938d852..b799796ad963 100644
--- a/arch/sparc/include/asm/mman.h
+++ b/arch/sparc/include/asm/mman.h
@@ -6,5 +6,75 @@
 #ifndef __ASSEMBLY__
 #define arch_mmap_check(addr,len,flags)	sparc_mmap_check(addr,len)
 int sparc_mmap_check(unsigned long addr, unsigned long len);
-#endif
+
+#ifdef CONFIG_SPARC64
+#include <asm/adi_64.h>
+
+#define arch_calc_vm_prot_bits(prot, pkey) sparc_calc_vm_prot_bits(prot)
+static inline unsigned long sparc_calc_vm_prot_bits(unsigned long prot)
+{
+	if (prot & PROT_ADI) {
+		struct pt_regs *regs;
+
+		if (!current->mm->context.adi) {
+			regs = task_pt_regs(current);
+			regs->tstate |= TSTATE_MCDE;
+			current->mm->context.adi = true;
+		}
+		return VM_SPARC_ADI;
+	} else {
+		return 0;
+	}
+}
+
+#define arch_vm_get_page_prot(vm_flags) sparc_vm_get_page_prot(vm_flags)
+static inline pgprot_t sparc_vm_get_page_prot(unsigned long vm_flags)
+{
+	return (vm_flags & VM_SPARC_ADI) ? __pgprot(_PAGE_MCD_4V) : __pgprot(0);
+}
+
+#define arch_validate_prot(prot, addr) sparc_validate_prot(prot, addr)
+static inline int sparc_validate_prot(unsigned long prot, unsigned long addr)
+{
+	if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_ADI))
+		return 0;
+	if (prot & PROT_ADI) {
+		if (!adi_capable())
+			return 0;
+
+		/* ADI tags can not be set on read-only memory, so it makes
+		 * sense to enable ADI on writable memory only.
+		 */
+		if (!(prot & PROT_WRITE))
+			return 0;
+
+		if (addr) {
+			struct vm_area_struct *vma;
+
+			vma = find_vma(current->mm, addr);
+			if (vma) {
+				/* ADI can not be enabled on PFN
+				 * mapped pages
+				 */
+				if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
+					return 0;
+
+				/* Mergeable pages can become unmergeable
+				 * if ADI is enabled on them even if they
+				 * have identical data on them. This can be
+				 * because ADI enabled pages with identical
+				 * data may still not have identical ADI
+				 * tags on them. Disallow ADI on mergeable
+				 * pages.
+				 */
+				if (vma->vm_flags & VM_MERGEABLE)
+					return 0;
+			}
+		}
+	}
+	return 1;
+}
+#endif /* CONFIG_SPARC64 */
+
+#endif /* __ASSEMBLY__ */
 #endif /* __SPARC_MMAN_H__ */
diff --git a/arch/sparc/include/asm/mmu_64.h b/arch/sparc/include/asm/mmu_64.h
index 83b36a5371ff..a65d51ebe00b 100644
--- a/arch/sparc/include/asm/mmu_64.h
+++ b/arch/sparc/include/asm/mmu_64.h
@@ -89,6 +89,20 @@ struct tsb_config {
 #define MM_NUM_TSBS	1
 #endif
 
+/* ADI tags are stored when a page is swapped out and the storage for
+ * tags is allocated dynamically. There is a tag storage descriptor
+ * associated with each set of tag storage pages. Tag storage descriptors
+ * are allocated dynamically. Since kernel will allocate a full page to
+ * hold tag storage descriptors, we can store up to
+ * PAGE_SIZE/sizeof(tag storage descriptor) descriptors on that page.
+ */
+typedef struct {
+	unsigned long	start;		/* Start address for this tag storage */
+	unsigned long	end;		/* Last address for tag storage */
+	unsigned char	*tags;		/* Where the tags are */
+	unsigned long	tag_users;	/* number of references to descriptor */
+} tag_storage_desc_t;
+
 typedef struct {
 	spinlock_t		lock;
 	unsigned long		sparc64_ctx_val;
@@ -96,6 +110,9 @@ typedef struct {
 	unsigned long		thp_pte_count;
 	struct tsb_config	tsb_block[MM_NUM_TSBS];
 	struct hv_tsb_descr	tsb_descr[MM_NUM_TSBS];
+	bool			adi;
+	tag_storage_desc_t	*tag_store;
+	spinlock_t		tag_lock;
 } mm_context_t;
 
 #endif /* !__ASSEMBLY__ */
diff --git a/arch/sparc/include/asm/mmu_context_64.h b/arch/sparc/include/asm/mmu_context_64.h
index 2cddcda4f85f..68de059551f9 100644
--- a/arch/sparc/include/asm/mmu_context_64.h
+++ b/arch/sparc/include/asm/mmu_context_64.h
@@ -9,6 +9,7 @@
 #include <linux/mm_types.h>
 
 #include <asm/spitfire.h>
+#include <asm/adi_64.h>
 #include <asm-generic/mm_hooks.h>
 
 static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
@@ -129,6 +130,48 @@ static inline void switch_mm(struct mm_struct *old_mm, struct mm_struct *mm, str
 
 #define deactivate_mm(tsk,mm)	do { } while (0)
 #define activate_mm(active_mm, mm) switch_mm(active_mm, mm, NULL)
+
+#define __HAVE_ARCH_START_CONTEXT_SWITCH
+static inline void arch_start_context_switch(struct task_struct *prev)
+{
+	/* Save the current state of MCDPER register for the process
+	 * we are switching from
+	 */
+	if (adi_capable()) {
+		register unsigned long tmp_mcdper;
+
+		__asm__ __volatile__(
+			".word 0x83438000\n\t"	/* rd  %mcdper, %g1 */
+			"mov %%g1, %0\n\t"
+			: "=r" (tmp_mcdper)
+			:
+			: "g1");
+		if (tmp_mcdper)
+			set_tsk_thread_flag(prev, TIF_MCDPER);
+		else
+			clear_tsk_thread_flag(prev, TIF_MCDPER);
+	}
+}
+
+#define finish_arch_post_lock_switch	finish_arch_post_lock_switch
+static inline void finish_arch_post_lock_switch(void)
+{
+	/* Restore the state of MCDPER register for the new process
+	 * just switched to.
+	 */
+	if (adi_capable()) {
+		register unsigned long tmp_mcdper;
+
+		tmp_mcdper = test_thread_flag(TIF_MCDPER);
+		__asm__ __volatile__(
+			"mov %0, %%g1\n\t"
+			".word 0x9d800001\n\t"	/* wr %g0, %g1, %mcdper" */
+			:
+			: "ir" (tmp_mcdper)
+			: "g1");
+	}
+}
+
 #endif /* !(__ASSEMBLY__) */
 
 #endif /* !(__SPARC64_MMU_CONTEXT_H) */
diff --git a/arch/sparc/include/asm/page_64.h b/arch/sparc/include/asm/page_64.h
index 5961b2d8398a..dc582c5611f8 100644
--- a/arch/sparc/include/asm/page_64.h
+++ b/arch/sparc/include/asm/page_64.h
@@ -46,6 +46,10 @@ struct page;
 void clear_user_page(void *addr, unsigned long vaddr, struct page *page);
 #define copy_page(X,Y)	memcpy((void *)(X), (void *)(Y), PAGE_SIZE)
 void copy_user_page(void *to, void *from, unsigned long vaddr, struct page *topage);
+#define __HAVE_ARCH_COPY_USER_HIGHPAGE
+struct vm_area_struct;
+void copy_user_highpage(struct page *to, struct page *from,
+			unsigned long vaddr, struct vm_area_struct *vma);
 
 /* Unlike sparc32, sparc64's parameter passing API is more
  * sane in that structures which as small enough are passed
diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index af045061f41e..51da342c392d 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -18,6 +18,7 @@
 #include <asm/types.h>
 #include <asm/spitfire.h>
 #include <asm/asi.h>
+#include <asm/adi.h>
 #include <asm/page.h>
 #include <asm/processor.h>
 
@@ -570,6 +571,18 @@ static inline pte_t pte_mkspecial(pte_t pte)
 	return pte;
 }
 
+static inline pte_t pte_mkmcd(pte_t pte)
+{
+	pte_val(pte) |= _PAGE_MCD_4V;
+	return pte;
+}
+
+static inline pte_t pte_mknotmcd(pte_t pte)
+{
+	pte_val(pte) &= ~_PAGE_MCD_4V;
+	return pte;
+}
+
 static inline unsigned long pte_young(pte_t pte)
 {
 	unsigned long mask;
@@ -1001,6 +1014,39 @@ int page_in_phys_avail(unsigned long paddr);
 int remap_pfn_range(struct vm_area_struct *, unsigned long, unsigned long,
 		    unsigned long, pgprot_t);
 
+void adi_restore_tags(struct mm_struct *mm, struct vm_area_struct *vma,
+		      unsigned long addr, pte_t pte);
+
+int adi_save_tags(struct mm_struct *mm, struct vm_area_struct *vma,
+		  unsigned long addr, pte_t oldpte);
+
+#define __HAVE_ARCH_DO_SWAP_PAGE
+static inline void arch_do_swap_page(struct mm_struct *mm,
+				     struct vm_area_struct *vma,
+				     unsigned long addr,
+				     pte_t pte, pte_t oldpte)
+{
+	/* If this is a new page being mapped in, there can be no
+	 * ADI tags stored away for this page. Skip looking for
+	 * stored tags
+	 */
+	if (pte_none(oldpte))
+		return;
+
+	if (adi_state.enabled && (pte_val(pte) & _PAGE_MCD_4V))
+		adi_restore_tags(mm, vma, addr, pte);
+}
+
+#define __HAVE_ARCH_UNMAP_ONE
+static inline int arch_unmap_one(struct mm_struct *mm,
+				 struct vm_area_struct *vma,
+				 unsigned long addr, pte_t oldpte)
+{
+	if (adi_state.enabled && (pte_val(oldpte) & _PAGE_MCD_4V))
+		return adi_save_tags(mm, vma, addr, oldpte);
+	return 0;
+}
+
 static inline int io_remap_pfn_range(struct vm_area_struct *vma,
 				     unsigned long from, unsigned long pfn,
 				     unsigned long size, pgprot_t prot)
diff --git a/arch/sparc/include/asm/thread_info_64.h b/arch/sparc/include/asm/thread_info_64.h
index 38a24f257b85..9c04acb1f9af 100644
--- a/arch/sparc/include/asm/thread_info_64.h
+++ b/arch/sparc/include/asm/thread_info_64.h
@@ -190,7 +190,7 @@ register struct thread_info *current_thread_info_reg asm("g6");
  *       in using in assembly, else we can't use the mask as
  *       an immediate value in instructions such as andcc.
  */
-/* flag bit 12 is available */
+#define TIF_MCDPER		12	/* Precise MCD exception */
 #define TIF_MEMDIE		13	/* is terminating due to OOM killer */
 #define TIF_POLLING_NRFLAG	14
 
diff --git a/arch/sparc/include/asm/trap_block.h b/arch/sparc/include/asm/trap_block.h
index ec9c04de3664..b283e940671a 100644
--- a/arch/sparc/include/asm/trap_block.h
+++ b/arch/sparc/include/asm/trap_block.h
@@ -72,6 +72,8 @@ struct sun4v_1insn_patch_entry {
 };
 extern struct sun4v_1insn_patch_entry __sun4v_1insn_patch,
 	__sun4v_1insn_patch_end;
+extern struct sun4v_1insn_patch_entry __sun_m7_1insn_patch,
+	__sun_m7_1insn_patch_end;
 
 struct sun4v_2insn_patch_entry {
 	unsigned int	addr;
diff --git a/arch/sparc/include/uapi/asm/mman.h b/arch/sparc/include/uapi/asm/mman.h
index 9765896ecb2c..a72c03397345 100644
--- a/arch/sparc/include/uapi/asm/mman.h
+++ b/arch/sparc/include/uapi/asm/mman.h
@@ -5,6 +5,8 @@
 
 /* SunOS'ified... */
 
+#define PROT_ADI	0x10		/* ADI enabled */
+
 #define MAP_RENAME      MAP_ANONYMOUS   /* In SunOS terminology */
 #define MAP_NORESERVE   0x40            /* don't reserve swap pages */
 #define MAP_INHERIT     0x80            /* SunOS doesn't do this, but... */
diff --git a/arch/sparc/kernel/adi_64.c b/arch/sparc/kernel/adi_64.c
index 9fbb5dd4a7bf..83c1e36ae5fa 100644
--- a/arch/sparc/kernel/adi_64.c
+++ b/arch/sparc/kernel/adi_64.c
@@ -7,10 +7,24 @@
  * This work is licensed under the terms of the GNU GPL, version 2.
  */
 #include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/mm_types.h>
 #include <asm/mdesc.h>
 #include <asm/adi_64.h>
+#include <asm/mmu_64.h>
+#include <asm/pgtable_64.h>
+
+/* Each page of storage for ADI tags can accommodate tags for 128
+ * pages. When ADI enabled pages are being swapped out, it would be
+ * prudent to allocate at least enough tag storage space to accommodate
+ * SWAPFILE_CLUSTER number of pages. Allocate enough tag storage to
+ * store tags for four SWAPFILE_CLUSTER pages to reduce need for
+ * further allocations for same vma.
+ */
+#define TAG_STORAGE_PAGES	8
 
 struct adi_config adi_state;
+EXPORT_SYMBOL(adi_state);
 
 /* mdesc_adi_init() : Parse machine description provided by the
  *	hypervisor to detect ADI capabilities
@@ -78,6 +92,19 @@ void __init mdesc_adi_init(void)
 		goto adi_not_found;
 	adi_state.caps.nbits = *val;
 
+	/* Some of the code to support swapping ADI tags is written
+	 * with the assumption that two ADI tags can fit in one byte. If
+	 * this assumption is broken by a future architecture change,
+	 * that code will have to be revisited. If that were to happen,
+	 * disable ADI support so we do not get unpredictable results
+	 * with programs trying to use ADI and their pages getting
+	 * swapped out
+	 */
+	if (adi_state.caps.nbits > 4) {
+		pr_warn("WARNING: ADI tag size >4 on this platform. Disabling AADI support\n");
+		adi_state.enabled = false;
+	}
+
 	mdesc_release(hp);
 	return;
 
@@ -88,3 +115,253 @@ void __init mdesc_adi_init(void)
 	if (hp)
 		mdesc_release(hp);
 }
+
+tag_storage_desc_t *find_tag_store(struct mm_struct *mm,
+				   struct vm_area_struct *vma,
+				   unsigned long addr)
+{
+	tag_storage_desc_t *tag_desc = NULL;
+	unsigned long i, max_desc, flags;
+
+	/* Check if this vma already has tag storage descriptor
+	 * allocated for it.
+	 */
+	max_desc = PAGE_SIZE/sizeof(tag_storage_desc_t);
+	if (mm->context.tag_store) {
+		tag_desc = mm->context.tag_store;
+		spin_lock_irqsave(&mm->context.tag_lock, flags);
+		for (i = 0; i < max_desc; i++) {
+			if ((addr >= tag_desc->start) &&
+			    ((addr + PAGE_SIZE - 1) <= tag_desc->end))
+				break;
+			tag_desc++;
+		}
+		spin_unlock_irqrestore(&mm->context.tag_lock, flags);
+
+		/* If no matching entries were found, this must be a
+		 * freshly allocated page
+		 */
+		if (i >= max_desc)
+			tag_desc = NULL;
+	}
+
+	return tag_desc;
+}
+
+tag_storage_desc_t *alloc_tag_store(struct mm_struct *mm,
+				    struct vm_area_struct *vma,
+				    unsigned long addr)
+{
+	unsigned char *tags;
+	unsigned long i, size, max_desc, flags;
+	tag_storage_desc_t *tag_desc, *open_desc;
+	unsigned long end_addr, hole_start, hole_end;
+
+	max_desc = PAGE_SIZE/sizeof(tag_storage_desc_t);
+	open_desc = NULL;
+	hole_start = 0;
+	hole_end = ULONG_MAX;
+	end_addr = addr + PAGE_SIZE - 1;
+
+	/* Check if this vma already has tag storage descriptor
+	 * allocated for it.
+	 */
+	spin_lock_irqsave(&mm->context.tag_lock, flags);
+	if (mm->context.tag_store) {
+		tag_desc = mm->context.tag_store;
+
+		/* Look for a matching entry for this address. While doing
+		 * that, look for the first open slot as well and find
+		 * the hole in already allocated range where this request
+		 * will fit in.
+		 */
+		for (i = 0; i < max_desc; i++) {
+			if (tag_desc->tag_users == 0) {
+				if (open_desc == NULL)
+					open_desc = tag_desc;
+			} else {
+				if ((addr >= tag_desc->start) &&
+				    (tag_desc->end >= (addr + PAGE_SIZE - 1))) {
+					tag_desc->tag_users++;
+					goto out;
+				}
+			}
+			if ((tag_desc->start > end_addr) &&
+			    (tag_desc->start < hole_end))
+				hole_end = tag_desc->start;
+			if ((tag_desc->end < addr) &&
+			    (tag_desc->end > hole_start))
+				hole_start = tag_desc->end;
+			tag_desc++;
+		}
+
+	} else {
+		size = sizeof(tag_storage_desc_t)*max_desc;
+		mm->context.tag_store = kzalloc(size, GFP_NOIO|__GFP_NOWARN);
+		if (mm->context.tag_store == NULL) {
+			tag_desc = NULL;
+			goto out;
+		}
+		tag_desc = mm->context.tag_store;
+		for (i = 0; i < max_desc; i++, tag_desc++)
+			tag_desc->tag_users = 0;
+		open_desc = mm->context.tag_store;
+		i = 0;
+	}
+
+	/* Check if we ran out of tag storage descriptors */
+	if (open_desc == NULL) {
+		tag_desc = NULL;
+		goto out;
+	}
+
+	/* Mark this tag descriptor slot in use and then initialize it */
+	tag_desc = open_desc;
+	tag_desc->tag_users = 1;
+
+	/* Tag storage has not been allocated for this vma and space
+	 * is available in tag storage descriptor. Since this page is
+	 * being swapped out, there is a high probability subsequent pages
+	 * in the VMA will be swapped out as well. Allocate pages to
+	 * store tags for as many pages in this vma as possible but not
+	 * more than TAG_STORAGE_PAGES. Each byte in tag space holds
+	 * two ADI tags since each ADI tag is 4 bits. Each ADI tag
+	 * covers adi_blksize() worth of addresses. Check if the hole is
+	 * big enough to accommodate full address range for using
+	 * TAG_STORAGE_PAGES number of tag pages.
+	 */
+	size = TAG_STORAGE_PAGES * PAGE_SIZE;
+	end_addr = addr + (size*2*adi_blksize()) - 1;
+	if (hole_end < end_addr) {
+		/* Available hole is too small on the upper end of
+		 * address. Can we expand the range towards the lower
+		 * address and maximize use of this slot?
+		 */
+		unsigned long tmp_addr;
+
+		end_addr = hole_end - 1;
+		tmp_addr = end_addr - (size*2*adi_blksize()) + 1;
+		if (tmp_addr < hole_start) {
+			/* Available hole is restricted on lower address
+			 * end as well
+			 */
+			tmp_addr = hole_start + 1;
+		}
+		addr = tmp_addr;
+		size = (end_addr + 1 - addr)/(2*adi_blksize());
+		size = (size + (PAGE_SIZE-adi_blksize()))/PAGE_SIZE;
+		size = size * PAGE_SIZE;
+	}
+	tags = kzalloc(size, GFP_NOIO|__GFP_NOWARN);
+	if (tags == NULL) {
+		tag_desc->tag_users = 0;
+		tag_desc = NULL;
+		goto out;
+	}
+	tag_desc->start = addr;
+	tag_desc->tags = tags;
+	tag_desc->end = end_addr;
+
+out:
+	spin_unlock_irqrestore(&mm->context.tag_lock, flags);
+	return tag_desc;
+}
+
+void del_tag_store(tag_storage_desc_t *tag_desc, struct mm_struct *mm)
+{
+	unsigned long flags;
+	unsigned char *tags = NULL;
+
+	spin_lock_irqsave(&mm->context.tag_lock, flags);
+	tag_desc->tag_users--;
+	if (tag_desc->tag_users == 0) {
+		tag_desc->start = tag_desc->end = 0;
+		/* Do not free up the tag storage space allocated
+		 * by the first descriptor. This is persistent
+		 * emergency tag storage space for the task.
+		 */
+		if (tag_desc != mm->context.tag_store) {
+			tags = tag_desc->tags;
+			tag_desc->tags = NULL;
+		}
+	}
+	spin_unlock_irqrestore(&mm->context.tag_lock, flags);
+	kfree(tags);
+}
+
+#define tag_start(addr, tag_desc)		\
+	((tag_desc)->tags + ((addr - (tag_desc)->start)/(2*adi_blksize())))
+
+/* Retrieve any saved ADI tags for the page being swapped back in and
+ * restore these tags to the newly allocated physical page.
+ */
+void adi_restore_tags(struct mm_struct *mm, struct vm_area_struct *vma,
+		      unsigned long addr, pte_t pte)
+{
+	unsigned char *tag;
+	tag_storage_desc_t *tag_desc;
+	unsigned long paddr, tmp, version1, version2;
+
+	/* Check if the swapped out page has an ADI version
+	 * saved. If yes, restore version tag to the newly
+	 * allocated page.
+	 */
+	tag_desc = find_tag_store(mm, vma, addr);
+	if (tag_desc == NULL)
+		return;
+
+	tag = tag_start(addr, tag_desc);
+	paddr = pte_val(pte) & _PAGE_PADDR_4V;
+	for (tmp = paddr; tmp < (paddr+PAGE_SIZE); tmp += adi_blksize()) {
+		version1 = (*tag) >> 4;
+		version2 = (*tag) & 0x0f;
+		*tag++ = 0;
+		asm volatile("stxa %0, [%1] %2\n\t"
+			:
+			: "r" (version1), "r" (tmp),
+			  "i" (ASI_MCD_REAL));
+		tmp += adi_blksize();
+		asm volatile("stxa %0, [%1] %2\n\t"
+			:
+			: "r" (version2), "r" (tmp),
+			  "i" (ASI_MCD_REAL));
+	}
+	asm volatile("membar #Sync\n\t");
+
+	/* Check and mark this tag space for release later if
+	 * the swapped in page was the last user of tag space
+	 */
+	del_tag_store(tag_desc, mm);
+}
+
+/* A page is about to be swapped out. Save any ADI tags associated with
+ * this physical page so they can be restored later when the page is swapped
+ * back in.
+ */
+int adi_save_tags(struct mm_struct *mm, struct vm_area_struct *vma,
+		  unsigned long addr, pte_t oldpte)
+{
+	unsigned char *tag;
+	tag_storage_desc_t *tag_desc;
+	unsigned long version1, version2, paddr, tmp;
+
+	tag_desc = alloc_tag_store(mm, vma, addr);
+	if (tag_desc == NULL)
+		return -1;
+
+	tag = tag_start(addr, tag_desc);
+	paddr = pte_val(oldpte) & _PAGE_PADDR_4V;
+	for (tmp = paddr; tmp < (paddr+PAGE_SIZE); tmp += adi_blksize()) {
+		asm volatile("ldxa [%1] %2, %0\n\t"
+				: "=r" (version1)
+				: "r" (tmp), "i" (ASI_MCD_REAL));
+		tmp += adi_blksize();
+		asm volatile("ldxa [%1] %2, %0\n\t"
+				: "=r" (version2)
+				: "r" (tmp), "i" (ASI_MCD_REAL));
+		*tag = (version1 << 4) | version2;
+		tag++;
+	}
+
+	return 0;
+}
diff --git a/arch/sparc/kernel/etrap_64.S b/arch/sparc/kernel/etrap_64.S
index 1276ca2567ba..7be33bf45cff 100644
--- a/arch/sparc/kernel/etrap_64.S
+++ b/arch/sparc/kernel/etrap_64.S
@@ -132,7 +132,33 @@ etrap_save:	save	%g2, -STACK_BIAS, %sp
 		stx	%g6, [%sp + PTREGS_OFF + PT_V9_G6]
 		stx	%g7, [%sp + PTREGS_OFF + PT_V9_G7]
 		or	%l7, %l0, %l7
-		sethi	%hi(TSTATE_TSO | TSTATE_PEF), %l0
+661:		sethi	%hi(TSTATE_TSO | TSTATE_PEF), %l0
+		/*
+		 * If userspace is using ADI, it could potentially pass
+		 * a pointer with version tag embedded in it. To maintain
+		 * the ADI security, we must enable PSTATE.mcde. Userspace
+		 * would have already set TTE.mcd in an earlier call to
+		 * kernel and set the version tag for the address being
+		 * dereferenced. Setting PSTATE.mcde would ensure any
+		 * access to userspace data through a system call honors
+		 * ADI and does not allow a rogue app to bypass ADI by
+		 * using system calls. Setting PSTATE.mcde only affects
+		 * accesses to virtual addresses that have TTE.mcd set.
+		 * Set PMCDPER to ensure any exceptions caused by ADI
+		 * version tag mismatch are exposed before system call
+		 * returns to userspace. Setting PMCDPER affects only
+		 * writes to virtual addresses that have TTE.mcd set and
+		 * have a version tag set as well.
+		 */
+		.section .sun_m7_1insn_patch, "ax"
+		.word	661b
+		sethi	%hi(TSTATE_TSO | TSTATE_PEF | TSTATE_MCDE), %l0
+		.previous
+661:		nop
+		.section .sun_m7_1insn_patch, "ax"
+		.word	661b
+		.word 0xaf902001	/* wrpr %g0, 1, %pmcdper */
+		.previous
 		or	%l7, %l0, %l7
 		wrpr	%l2, %tnpc
 		wrpr	%l7, (TSTATE_PRIV | TSTATE_IE), %tstate
diff --git a/arch/sparc/kernel/process_64.c b/arch/sparc/kernel/process_64.c
index b96104da5bd6..defa5723dfa6 100644
--- a/arch/sparc/kernel/process_64.c
+++ b/arch/sparc/kernel/process_64.c
@@ -664,6 +664,31 @@ int copy_thread(unsigned long clone_flags, unsigned long sp,
 	return 0;
 }
 
+/* TIF_MCDPER in thread info flags for current task is updated lazily upon
+ * a context switch. Update this flag in the current task's thread flags
+ * before dup so the dup'd task will inherit the current TIF_MCDPER flag.
+ */
+int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
+{
+	if (adi_capable()) {
+		register unsigned long tmp_mcdper;
+
+		__asm__ __volatile__(
+			".word 0x83438000\n\t"	/* rd  %mcdper, %g1 */
+			"mov %%g1, %0\n\t"
+			: "=r" (tmp_mcdper)
+			:
+			: "g1");
+		if (tmp_mcdper)
+			set_thread_flag(TIF_MCDPER);
+		else
+			clear_thread_flag(TIF_MCDPER);
+	}
+
+	*dst = *src;
+	return 0;
+}
+
 typedef struct {
 	union {
 		unsigned int	pr_regs[32];
diff --git a/arch/sparc/kernel/setup_64.c b/arch/sparc/kernel/setup_64.c
index 422b17880955..a9da205da394 100644
--- a/arch/sparc/kernel/setup_64.c
+++ b/arch/sparc/kernel/setup_64.c
@@ -240,6 +240,12 @@ void sun4v_patch_1insn_range(struct sun4v_1insn_patch_entry *start,
 	}
 }
 
+void sun_m7_patch_1insn_range(struct sun4v_1insn_patch_entry *start,
+			     struct sun4v_1insn_patch_entry *end)
+{
+	sun4v_patch_1insn_range(start, end);
+}
+
 void sun4v_patch_2insn_range(struct sun4v_2insn_patch_entry *start,
 			     struct sun4v_2insn_patch_entry *end)
 {
@@ -289,9 +295,12 @@ static void __init sun4v_patch(void)
 	sun4v_patch_2insn_range(&__sun4v_2insn_patch,
 				&__sun4v_2insn_patch_end);
 	if (sun4v_chip_type == SUN4V_CHIP_SPARC_M7 ||
-	    sun4v_chip_type == SUN4V_CHIP_SPARC_SN)
+	    sun4v_chip_type == SUN4V_CHIP_SPARC_SN) {
+		sun_m7_patch_1insn_range(&__sun_m7_1insn_patch,
+					 &__sun_m7_1insn_patch_end);
 		sun_m7_patch_2insn_range(&__sun_m7_2insn_patch,
 					 &__sun_m7_2insn_patch_end);
+		}
 
 	sun4v_hvapi_init();
 }
diff --git a/arch/sparc/kernel/vmlinux.lds.S b/arch/sparc/kernel/vmlinux.lds.S
index 572db686f845..20a70682cce7 100644
--- a/arch/sparc/kernel/vmlinux.lds.S
+++ b/arch/sparc/kernel/vmlinux.lds.S
@@ -144,6 +144,11 @@ SECTIONS
 		*(.pause_3insn_patch)
 		__pause_3insn_patch_end = .;
 	}
+	.sun_m7_1insn_patch : {
+		__sun_m7_1insn_patch = .;
+		*(.sun_m7_1insn_patch)
+		__sun_m7_1insn_patch_end = .;
+	}
 	.sun_m7_2insn_patch : {
 		__sun_m7_2insn_patch = .;
 		*(.sun_m7_2insn_patch)
diff --git a/arch/sparc/mm/gup.c b/arch/sparc/mm/gup.c
index cd0e32bbcb1d..579f7ae75b35 100644
--- a/arch/sparc/mm/gup.c
+++ b/arch/sparc/mm/gup.c
@@ -11,6 +11,7 @@
 #include <linux/pagemap.h>
 #include <linux/rwsem.h>
 #include <asm/pgtable.h>
+#include <asm/adi.h>
 
 /*
  * The performance critical leaf functions are made noinline otherwise gcc
@@ -157,6 +158,24 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
 	pgd_t *pgdp;
 	int nr = 0;
 
+#ifdef CONFIG_SPARC64
+	if (adi_capable()) {
+		long addr = start;
+
+		/* If userspace has passed a versioned address, kernel
+		 * will not find it in the VMAs since it does not store
+		 * the version tags in the list of VMAs. Storing version
+		 * tags in list of VMAs is impractical since they can be
+		 * changed any time from userspace without dropping into
+		 * kernel. Any address search in VMAs will be done with
+		 * non-versioned addresses. Ensure the ADI version bits
+		 * are dropped here by sign extending the last bit before
+		 * ADI bits. IOMMU does not implement version tags.
+		 */
+		addr = (addr << (long)adi_nbits()) >> (long)adi_nbits();
+		start = addr;
+	}
+#endif
 	start &= PAGE_MASK;
 	addr = start;
 	len = (unsigned long) nr_pages << PAGE_SHIFT;
@@ -187,6 +206,24 @@ int get_user_pages_fast(unsigned long start, int nr_pages, int write,
 	pgd_t *pgdp;
 	int nr = 0;
 
+#ifdef CONFIG_SPARC64
+	if (adi_capable()) {
+		long addr = start;
+
+		/* If userspace has passed a versioned address, kernel
+		 * will not find it in the VMAs since it does not store
+		 * the version tags in the list of VMAs. Storing version
+		 * tags in list of VMAs is impractical since they can be
+		 * changed any time from userspace without dropping into
+		 * kernel. Any address search in VMAs will be done with
+		 * non-versioned addresses. Ensure the ADI version bits
+		 * are dropped here by sign extending the last bit before
+		 * ADI bits. IOMMU does not implement version tags.
+		 */
+		addr = (addr << (long)adi_nbits()) >> (long)adi_nbits();
+		start = addr;
+	}
+#endif
 	start &= PAGE_MASK;
 	addr = start;
 	len = (unsigned long) nr_pages << PAGE_SHIFT;
diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c
index 88855e383b34..487ed1f1ce86 100644
--- a/arch/sparc/mm/hugetlbpage.c
+++ b/arch/sparc/mm/hugetlbpage.c
@@ -177,8 +177,20 @@ pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,
 			 struct page *page, int writeable)
 {
 	unsigned int shift = huge_page_shift(hstate_vma(vma));
+	pte_t pte;
 
-	return hugepage_shift_to_tte(entry, shift);
+	pte = hugepage_shift_to_tte(entry, shift);
+
+#ifdef CONFIG_SPARC64
+	/* If this vma has ADI enabled on it, turn on TTE.mcd
+	 */
+	if (vma->vm_flags & VM_SPARC_ADI)
+		return pte_mkmcd(pte);
+	else
+		return pte_mknotmcd(pte);
+#else
+	return pte;
+#endif
 }
 
 static unsigned int sun4v_huge_tte_to_shift(pte_t entry)
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 3c40ebd50f92..94854e7e833e 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -3087,3 +3087,36 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 		do_flush_tlb_kernel_range(start, end);
 	}
 }
+
+void copy_user_highpage(struct page *to, struct page *from,
+	unsigned long vaddr, struct vm_area_struct *vma)
+{
+	char *vfrom, *vto;
+
+	vfrom = kmap_atomic(from);
+	vto = kmap_atomic(to);
+	copy_user_page(vto, vfrom, vaddr, to);
+	kunmap_atomic(vto);
+	kunmap_atomic(vfrom);
+
+	/* If this page has ADI enabled, copy over any ADI tags
+	 * as well
+	 */
+	if (vma->vm_flags & VM_SPARC_ADI) {
+		unsigned long pfrom, pto, i, adi_tag;
+
+		pfrom = page_to_phys(from);
+		pto = page_to_phys(to);
+
+		for (i = pfrom; i < (pfrom + PAGE_SIZE); i += adi_blksize()) {
+			asm volatile("ldxa [%1] %2, %0\n\t"
+					: "=r" (adi_tag)
+					:  "r" (i), "i" (ASI_MCD_REAL));
+			asm volatile("stxa %0, [%1] %2\n\t"
+					:
+					: "r" (adi_tag), "r" (pto),
+					  "i" (ASI_MCD_REAL));
+			pto += adi_blksize();
+		}
+	}
+}
diff --git a/arch/sparc/mm/tsb.c b/arch/sparc/mm/tsb.c
index 0d4b998c7d7b..6518cc42056b 100644
--- a/arch/sparc/mm/tsb.c
+++ b/arch/sparc/mm/tsb.c
@@ -545,6 +545,9 @@ int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
 
 	mm->context.sparc64_ctx_val = 0UL;
 
+	mm->context.tag_store = NULL;
+	spin_lock_init(&mm->context.tag_lock);
+
 #if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE)
 	/* We reset them to zero because the fork() page copying
 	 * will re-increment the counters as the parent PTEs are
@@ -610,4 +613,22 @@ void destroy_context(struct mm_struct *mm)
 	}
 
 	spin_unlock_irqrestore(&ctx_alloc_lock, flags);
+
+	/* If ADI tag storage was allocated for this task, free it */
+	if (mm->context.tag_store) {
+		tag_storage_desc_t *tag_desc;
+		unsigned long max_desc;
+		unsigned char *tags;
+
+		tag_desc = mm->context.tag_store;
+		max_desc = PAGE_SIZE/sizeof(tag_storage_desc_t);
+		for (i = 0; i < max_desc; i++) {
+			tags = tag_desc->tags;
+			tag_desc->tags = NULL;
+			kfree(tags);
+			tag_desc++;
+		}
+		kfree(mm->context.tag_store);
+		mm->context.tag_store = NULL;
+	}
 }
diff --git a/include/linux/mm.h b/include/linux/mm.h
index b7aa3932e6d4..c0972114036f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -231,6 +231,9 @@ extern unsigned int kobjsize(const void *objp);
 # define VM_GROWSUP	VM_ARCH_1
 #elif defined(CONFIG_IA64)
 # define VM_GROWSUP	VM_ARCH_1
+#elif defined(CONFIG_SPARC64)
+# define VM_SPARC_ADI	VM_ARCH_1	/* Uses ADI tag for access control */
+# define VM_ARCH_CLEAR	VM_SPARC_ADI
 #elif !defined(CONFIG_MMU)
 # define VM_MAPPED_COPY	VM_ARCH_1	/* T if mapped copy of data (nommu mmap) */
 #endif
diff --git a/mm/ksm.c b/mm/ksm.c
index 216184af0e19..bb82399816ef 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1797,6 +1797,10 @@ int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
 		if (*vm_flags & VM_SAO)
 			return 0;
 #endif
+#ifdef VM_SPARC_ADI
+		if (*vm_flags & VM_SPARC_ADI)
+			return 0;
+#endif
 
 		if (!test_bit(MMF_VM_MERGEABLE, &mm->flags)) {
 			err = __ksm_enter(mm);
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 86+ messages in thread

* Re: [PATCH v7 7/9] mm: Add address parameter to arch_validate_prot()
  2017-08-09 21:26   ` Khalid Aziz
@ 2017-08-10 13:20     ` Michael Ellerman
  -1 siblings, 0 replies; 86+ messages in thread
From: Michael Ellerman @ 2017-08-10 13:20 UTC (permalink / raw)
  To: Khalid Aziz, akpm, benh, paulus, davem, dave.hansen
  Cc: Khalid Aziz, bsingharora, dja, tglx, mgorman, aarcange,
	kirill.shutemov, heiko.carstens, ak, linuxppc-dev, linux-kernel,
	linux-mm, sparclinux, Khalid Aziz

Khalid Aziz <khalid.aziz@oracle.com> writes:

> A protection flag may not be valid across entire address space and
> hence arch_validate_prot() might need the address a protection bit is
> being set on to ensure it is a valid protection flag. For example, sparc
> processors support memory corruption detection (as part of ADI feature)
> flag on memory addresses mapped on to physical RAM but not on PFN mapped
> pages or addresses mapped on to devices. This patch adds address to the
> parameters being passed to arch_validate_prot() so protection bits can
> be validated in the relevant context.
>
> Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
> Cc: Khalid Aziz <khalid@gonehiking.org>
> ---
> v7:
> 	- new patch
>
>  arch/powerpc/include/asm/mman.h | 2 +-
>  arch/powerpc/kernel/syscalls.c  | 2 +-
>  include/linux/mman.h            | 2 +-
>  mm/mprotect.c                   | 2 +-
>  4 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/mman.h b/arch/powerpc/include/asm/mman.h
> index 30922f699341..bc74074304a2 100644
> --- a/arch/powerpc/include/asm/mman.h
> +++ b/arch/powerpc/include/asm/mman.h
> @@ -40,7 +40,7 @@ static inline bool arch_validate_prot(unsigned long prot)
>  		return false;
>  	return true;
>  }
> -#define arch_validate_prot(prot) arch_validate_prot(prot)
> +#define arch_validate_prot(prot, addr) arch_validate_prot(prot)

This can be simpler, as just:

#define arch_validate_prot arch_validate_prot

cheers

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH v7 7/9] mm: Add address parameter to arch_validate_prot()
  2017-08-10 13:20     ` Michael Ellerman
@ 2017-08-10 14:41       ` Khalid Aziz
  -1 siblings, 0 replies; 86+ messages in thread
From: Khalid Aziz @ 2017-08-10 14:41 UTC (permalink / raw)
  To: Michael Ellerman, akpm, benh, paulus, davem, dave.hansen
  Cc: bsingharora, dja, tglx, mgorman, aarcange, kirill.shutemov,
	heiko.carstens, ak, linuxppc-dev, linux-kernel, linux-mm,
	sparclinux, Khalid Aziz

On 08/10/2017 07:20 AM, Michael Ellerman wrote:
> Khalid Aziz <khalid.aziz@oracle.com> writes:
> 
>> A protection flag may not be valid across entire address space and
>> hence arch_validate_prot() might need the address a protection bit is
>> being set on to ensure it is a valid protection flag. For example, sparc
>> processors support memory corruption detection (as part of ADI feature)
>> flag on memory addresses mapped on to physical RAM but not on PFN mapped
>> pages or addresses mapped on to devices. This patch adds address to the
>> parameters being passed to arch_validate_prot() so protection bits can
>> be validated in the relevant context.
>>
>> Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
>> Cc: Khalid Aziz <khalid@gonehiking.org>
>> ---
>> v7:
>> 	- new patch
>>
>>   arch/powerpc/include/asm/mman.h | 2 +-
>>   arch/powerpc/kernel/syscalls.c  | 2 +-
>>   include/linux/mman.h            | 2 +-
>>   mm/mprotect.c                   | 2 +-
>>   4 files changed, 4 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/mman.h b/arch/powerpc/include/asm/mman.h
>> index 30922f699341..bc74074304a2 100644
>> --- a/arch/powerpc/include/asm/mman.h
>> +++ b/arch/powerpc/include/asm/mman.h
>> @@ -40,7 +40,7 @@ static inline bool arch_validate_prot(unsigned long prot)
>>   		return false;
>>   	return true;
>>   }
>> -#define arch_validate_prot(prot) arch_validate_prot(prot)
>> +#define arch_validate_prot(prot, addr) arch_validate_prot(prot)
> 
> This can be simpler, as just:
> 
> #define arch_validate_prot arch_validate_prot
> 

Hi Michael,

Thanks for reviewing!

My patch expands the parameter list for arch_validate_prot() from one to two 
parameters. Existing powerpc version of arch_validate_prot() is written 
with one parameter. If I use the above #define, compilation fails with:

mm/mprotect.c: In function ‘do_mprotect_pkey’:
mm/mprotect.c:399: error: too many arguments to function 
‘arch_validate_prot’

Another way to solve it would be to add the new addr parameter to 
powerpc version of arch_validate_prot() but I chose the less disruptive 
solution of tackling it through #define and expanded the existing 
#define to include the new parameter. Make sense?
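
For reference, the shorter define Michael suggests implies giving the
powerpc function itself both parameters; a sketch of that shape (the
body reproduces the existing powerpc SAO check, with addr unused):

	static inline bool arch_validate_prot(unsigned long prot,
					      unsigned long addr)
	{
		if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC |
			     PROT_SEM | PROT_SAO))
			return false;
		if ((prot & PROT_SAO) && !cpu_has_feature(CPU_FTR_SAO))
			return false;
		return true;
	}
	#define arch_validate_prot arch_validate_prot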

Thanks,
Khalid

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH v7 7/9] mm: Add address parameter to arch_validate_prot()
  2017-08-10 14:41       ` Khalid Aziz
@ 2017-08-15  5:02         ` Michael Ellerman
  -1 siblings, 0 replies; 86+ messages in thread
From: Michael Ellerman @ 2017-08-15  5:02 UTC (permalink / raw)
  To: Khalid Aziz, akpm, benh, paulus, davem, dave.hansen
  Cc: bsingharora, dja, tglx, mgorman, aarcange, kirill.shutemov,
	heiko.carstens, ak, linuxppc-dev, linux-kernel, linux-mm,
	sparclinux, Khalid Aziz

Khalid Aziz <khalid.aziz@oracle.com> writes:

> On 08/10/2017 07:20 AM, Michael Ellerman wrote:
>> Khalid Aziz <khalid.aziz@oracle.com> writes:
>> 
>>> A protection flag may not be valid across entire address space and
>>> hence arch_validate_prot() might need the address a protection bit is
>>> being set on to ensure it is a valid protection flag. For example, sparc
>>> processors support memory corruption detection (as part of ADI feature)
>>> flag on memory addresses mapped on to physical RAM but not on PFN mapped
>>> pages or addresses mapped on to devices. This patch adds address to the
>>> parameters being passed to arch_validate_prot() so protection bits can
>>> be validated in the relevant context.
>>>
>>> Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
>>> Cc: Khalid Aziz <khalid@gonehiking.org>
>>> ---
>>> v7:
>>> 	- new patch
>>>
>>>   arch/powerpc/include/asm/mman.h | 2 +-
>>>   arch/powerpc/kernel/syscalls.c  | 2 +-
>>>   include/linux/mman.h            | 2 +-
>>>   mm/mprotect.c                   | 2 +-
>>>   4 files changed, 4 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/arch/powerpc/include/asm/mman.h b/arch/powerpc/include/asm/mman.h
>>> index 30922f699341..bc74074304a2 100644
>>> --- a/arch/powerpc/include/asm/mman.h
>>> +++ b/arch/powerpc/include/asm/mman.h
>>> @@ -40,7 +40,7 @@ static inline bool arch_validate_prot(unsigned long prot)
>>>   		return false;
>>>   	return true;
>>>   }
>>> -#define arch_validate_prot(prot) arch_validate_prot(prot)
>>> +#define arch_validate_prot(prot, addr) arch_validate_prot(prot)
>> 
>> This can be simpler, as just:
>> 
>> #define arch_validate_prot arch_validate_prot
>> 
>
> Hi Michael,
>
> Thanks for reviewing!
>
> My patch expands parameter list for arch_validate_prot() from one to two 
> parameters. Existing powerpc version of arch_validate_prot() is written 
> with one parameter. If I use the above #define, compilation fails with:
>
> mm/mprotect.c: In function ‘do_mprotect_pkey’:
> mm/mprotect.c:399: error: too many arguments to function 
> ‘arch_validate_prot’
>
> Another way to solve it would be to add the new addr parameter to 
> powerpc version of arch_validate_prot() but I chose the less disruptive 
> solution of tackling it through #define and expanded the existing 
> #define to include the new parameter. Make sense?

Yes, it makes sense. But it's a bit gross.

At first glance it looks like our arch_validate_prot() has an incorrect
signature.

I'd prefer you just updated it to have the correct signature, I think
you'll have to change one more line in do_mmap2(). So it's not very
intrusive.
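
Concretely, the suggested change might look like this (a sketch against
the 2017 powerpc code; the PROT_SAO checks are carried over unchanged):

/* arch/powerpc/include/asm/mman.h */
static inline bool arch_validate_prot(unsigned long prot, unsigned long addr)
{
	if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_SAO))
		return false;
	if ((prot & PROT_SAO) && !cpu_has_feature(CPU_FTR_SAO))
		return false;
	return true;
}
#define arch_validate_prot arch_validate_prot

/* arch/powerpc/kernel/syscalls.c, do_mmap2(): pass an address through */
if (!arch_validate_prot(prot, addr))
	goto out;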

cheers

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH v7 7/9] mm: Add address parameter to arch_validate_prot()
  2017-08-15  5:02         ` Michael Ellerman
@ 2017-08-15 14:32           ` Khalid Aziz
  -1 siblings, 0 replies; 86+ messages in thread
From: Khalid Aziz @ 2017-08-15 14:32 UTC (permalink / raw)
  To: Michael Ellerman, akpm, benh, paulus, davem, dave.hansen
  Cc: bsingharora, dja, tglx, mgorman, aarcange, kirill.shutemov,
	heiko.carstens, ak, linuxppc-dev, linux-kernel, linux-mm,
	sparclinux, Khalid Aziz

On 08/14/2017 11:02 PM, Michael Ellerman wrote:
> Khalid Aziz <khalid.aziz@oracle.com> writes:
> 
>> On 08/10/2017 07:20 AM, Michael Ellerman wrote:
>>> Khalid Aziz <khalid.aziz@oracle.com> writes:
>>>
>>>> A protection flag may not be valid across entire address space and
>>>> hence arch_validate_prot() might need the address a protection bit is
>>>> being set on to ensure it is a valid protection flag. For example, sparc
>>>> processors support memory corruption detection (as part of ADI feature)
>>>> flag on memory addresses mapped on to physical RAM but not on PFN mapped
>>>> pages or addresses mapped on to devices. This patch adds address to the
>>>> parameters being passed to arch_validate_prot() so protection bits can
>>>> be validated in the relevant context.
>>>>
>>>> Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
>>>> Cc: Khalid Aziz <khalid@gonehiking.org>
>>>> ---
>>>> v7:
>>>> 	- new patch
>>>>
>>>>    arch/powerpc/include/asm/mman.h | 2 +-
>>>>    arch/powerpc/kernel/syscalls.c  | 2 +-
>>>>    include/linux/mman.h            | 2 +-
>>>>    mm/mprotect.c                   | 2 +-
>>>>    4 files changed, 4 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/arch/powerpc/include/asm/mman.h b/arch/powerpc/include/asm/mman.h
>>>> index 30922f699341..bc74074304a2 100644
>>>> --- a/arch/powerpc/include/asm/mman.h
>>>> +++ b/arch/powerpc/include/asm/mman.h
>>>> @@ -40,7 +40,7 @@ static inline bool arch_validate_prot(unsigned long prot)
>>>>    		return false;
>>>>    	return true;
>>>>    }
>>>> -#define arch_validate_prot(prot) arch_validate_prot(prot)
>>>> +#define arch_validate_prot(prot, addr) arch_validate_prot(prot)
>>>
>>> This can be simpler, as just:
>>>
>>> #define arch_validate_prot arch_validate_prot
>>>
>>
>> Hi Michael,
>>
>> Thanks for reviewing!
>>
>> My patch expands parameter list for arch_validate_prot() from one to two
>> parameters. Existing powerpc version of arch_validate_prot() is written
>> with one parameter. If I use the above #define, compilation fails with:
>>
>> mm/mprotect.c: In function ‘do_mprotect_pkey’:
>> mm/mprotect.c:399: error: too many arguments to function
>> ‘arch_validate_prot’
>>
>> Another way to solve it would be to add the new addr parameter to
>> powerpc version of arch_validate_prot() but I chose the less disruptive
>> solution of tackling it through #define and expanded the existing
>> #define to include the new parameter. Make sense?
> 
> Yes, it makes sense. But it's a bit gross.
> 
> At first glance it looks like our arch_validate_prot() has an incorrect
> signature.
> 
> I'd prefer you just updated it to have the correct signature, I think
> you'll have to change one more line in do_mmap2(). So it's not very
> intrusive.

Thanks, Michael. I can do that.

--
Khalid

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH v7 2/9] mm, swap: Add infrastructure for saving page metadata on swap
  2017-08-09 21:25   ` Khalid Aziz
@ 2017-08-16  4:53     ` David Miller
  -1 siblings, 0 replies; 86+ messages in thread
From: David Miller @ 2017-08-16  4:53 UTC (permalink / raw)
  To: khalid.aziz
  Cc: akpm, dave.hansen, arnd, kirill.shutemov, mhocko, jack,
	ross.zwisler, aneesh.kumar, dave.jiang, willy, hughd, minchan,
	hannes, hillf.zj, shli, mingo, jmarchan, lstoakes, linux-arch,
	linux-kernel, linux-mm, sparclinux, khalid

From: Khalid Aziz <khalid.aziz@oracle.com>
Date: Wed,  9 Aug 2017 15:25:55 -0600

> @@ -1399,6 +1399,12 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>  				(flags & TTU_MIGRATION)) {
>  			swp_entry_t entry;
>  			pte_t swp_pte;
> +
> +			if (arch_unmap_one(mm, vma, address, pteval) < 0) {
> +				set_pte_at(mm, address, pvmw.pte, pteval);
> +				ret = false;
> +				page_vma_mapped_walk_done(&pvmw);
> +				break;
>  			/*
>  			 * Store the pfn of the page in a special migration
>  			 * pte. do_swap_page() will wait until the migration
> @@ -1410,6 +1416,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>  			if (pte_soft_dirty(pteval))
>  				swp_pte = pte_swp_mksoft_dirty(swp_pte);
>  			set_pte_at(mm, address, pvmw.pte, swp_pte);
> +			}

This basic block doesn't look right.  I think the new closing brace is
intended to be right after the new break; statement.  If not, at the
very least the indentation of the existing code in there needs to be
adjusted.

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH v7 9/9] sparc64: Add support for ADI (Application Data Integrity)
  2017-08-09 21:26   ` Khalid Aziz
@ 2017-08-16  4:58     ` David Miller
  -1 siblings, 0 replies; 86+ messages in thread
From: David Miller @ 2017-08-16  4:58 UTC (permalink / raw)
  To: khalid.aziz
  Cc: dave.hansen, corbet, bob.picco, steven.sistare, pasha.tatashin,
	mike.kravetz, mingo, nitin.m.gupta, kirill.shutemov,
	tom.hromatka, eric.saint.etienne, allen.pais, cmetcalf, akpm,
	geert, tklauser, atish.patra, vijay.ac.kumar, peterz, mhocko,
	jack, lstoakes, hughd, thomas.tai, paul.gortmaker, ross.zwisler,
	dave.jiang, willy, ying.huang, zhongjiang, minchan,
	vegard.nossum, imbrenda, aneesh.kumar, aarcange, linux-doc,
	linux-kernel, sparclinux, linux-mm, khalid

From: Khalid Aziz <khalid.aziz@oracle.com>
Date: Wed,  9 Aug 2017 15:26:02 -0600

> +void adi_restore_tags(struct mm_struct *mm, struct vm_area_struct *vma,
> +		      unsigned long addr, pte_t pte)
> +{
 ...
> +	tag = tag_start(addr, tag_desc);
> +	paddr = pte_val(pte) & _PAGE_PADDR_4V;
> +	for (tmp = paddr; tmp < (paddr+PAGE_SIZE); tmp += adi_blksize()) {
> +		version1 = (*tag) >> 4;
> +		version2 = (*tag) & 0x0f;
> +		*tag++ = 0;
> +		asm volatile("stxa %0, [%1] %2\n\t"
> +			:
> +			: "r" (version1), "r" (tmp),
> +			  "i" (ASI_MCD_REAL));
> +		tmp += adi_blksize();
> +		asm volatile("stxa %0, [%1] %2\n\t"
> +			:
> +			: "r" (version2), "r" (tmp),
> +			  "i" (ASI_MCD_REAL));
> +	}
> +	asm volatile("membar #Sync\n\t");

You do a membar here.

> +		for (i = pfrom; i < (pfrom + PAGE_SIZE); i += adi_blksize()) {
> +			asm volatile("ldxa [%1] %2, %0\n\t"
> +					: "=r" (adi_tag)
> +					:  "r" (i), "i" (ASI_MCD_REAL));
> +			asm volatile("stxa %0, [%1] %2\n\t"
> +					:
> +					: "r" (adi_tag), "r" (pto),
> +					  "i" (ASI_MCD_REAL));

But not here.

Is this OK?  I suspect you need to add a membar to this second piece
of MCD tag storing code.

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH v7 2/9] mm, swap: Add infrastructure for saving page metadata on swap
  2017-08-16  4:53     ` David Miller
@ 2017-08-16 14:34       ` Khalid Aziz
  -1 siblings, 0 replies; 86+ messages in thread
From: Khalid Aziz @ 2017-08-16 14:34 UTC (permalink / raw)
  To: David Miller
  Cc: akpm, dave.hansen, arnd, kirill.shutemov, mhocko, jack,
	ross.zwisler, aneesh.kumar, dave.jiang, willy, hughd, minchan,
	hannes, hillf.zj, shli, mingo, jmarchan, lstoakes, linux-arch,
	linux-kernel, linux-mm, sparclinux, khalid

On 08/15/2017 10:53 PM, David Miller wrote:
> From: Khalid Aziz <khalid.aziz@oracle.com>
> Date: Wed,  9 Aug 2017 15:25:55 -0600
> 
>> @@ -1399,6 +1399,12 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>>   				(flags & TTU_MIGRATION)) {
>>   			swp_entry_t entry;
>>   			pte_t swp_pte;
>> +
>> +			if (arch_unmap_one(mm, vma, address, pteval) < 0) {
>> +				set_pte_at(mm, address, pvmw.pte, pteval);
>> +				ret = false;
>> +				page_vma_mapped_walk_done(&pvmw);
>> +				break;
>>   			/*
>>   			 * Store the pfn of the page in a special migration
>>   			 * pte. do_swap_page() will wait until the migration
>> @@ -1410,6 +1416,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>>   			if (pte_soft_dirty(pteval))
>>   				swp_pte = pte_swp_mksoft_dirty(swp_pte);
>>   			set_pte_at(mm, address, pvmw.pte, swp_pte);
>> +			}
> 
> This basic block doesn't look right.  I think the new closing brace is
> intended to be right after the new break; statement.  If not, at the
> very least the indentation of the existing code in there needs to be
> adjusted.

Hi Dave,

Thanks. That brace needs to move up right after break. I will fix that.
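
The corrected hunk would then read (a sketch reflowing the quoted diff;
the stray closing brace added by the second hunk goes away):

if (arch_unmap_one(mm, vma, address, pteval) < 0) {
	set_pte_at(mm, address, pvmw.pte, pteval);
	ret = false;
	page_vma_mapped_walk_done(&pvmw);
	break;
}
/*
 * Store the pfn of the page in a special migration
 * pte. do_swap_page() will wait until the migration
 * pte is removed and then restart fault handling.
 */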

--
Khalid

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH v7 9/9] sparc64: Add support for ADI (Application Data Integrity)
  2017-08-16  4:58     ` David Miller
@ 2017-08-16 14:44       ` Khalid Aziz
  -1 siblings, 0 replies; 86+ messages in thread
From: Khalid Aziz @ 2017-08-16 14:44 UTC (permalink / raw)
  To: David Miller
  Cc: dave.hansen, corbet, bob.picco, steven.sistare, pasha.tatashin,
	mike.kravetz, mingo, nitin.m.gupta, kirill.shutemov,
	tom.hromatka, eric.saint.etienne, allen.pais, cmetcalf, akpm,
	geert, tklauser, atish.patra, vijay.ac.kumar, peterz, mhocko,
	jack, lstoakes, hughd, thomas.tai, paul.gortmaker, ross.zwisler,
	dave.jiang, willy, ying.huang, zhongjiang, minchan,
	vegard.nossum, imbrenda, aneesh.kumar, aarcange, linux-doc,
	linux-kernel, sparclinux, linux-mm, khalid

On 08/15/2017 10:58 PM, David Miller wrote:
> From: Khalid Aziz <khalid.aziz@oracle.com>
> Date: Wed,  9 Aug 2017 15:26:02 -0600
> 
>> +void adi_restore_tags(struct mm_struct *mm, struct vm_area_struct *vma,
>> +		      unsigned long addr, pte_t pte)
>> +{
>   ...
>> +	tag = tag_start(addr, tag_desc);
>> +	paddr = pte_val(pte) & _PAGE_PADDR_4V;
>> +	for (tmp = paddr; tmp < (paddr+PAGE_SIZE); tmp += adi_blksize()) {
>> +		version1 = (*tag) >> 4;
>> +		version2 = (*tag) & 0x0f;
>> +		*tag++ = 0;
>> +		asm volatile("stxa %0, [%1] %2\n\t"
>> +			:
>> +			: "r" (version1), "r" (tmp),
>> +			  "i" (ASI_MCD_REAL));
>> +		tmp += adi_blksize();
>> +		asm volatile("stxa %0, [%1] %2\n\t"
>> +			:
>> +			: "r" (version2), "r" (tmp),
>> +			  "i" (ASI_MCD_REAL));
>> +	}
>> +	asm volatile("membar #Sync\n\t");
> 
> You do a membar here.
> 
>> +		for (i = pfrom; i < (pfrom + PAGE_SIZE); i += adi_blksize()) {
>> +			asm volatile("ldxa [%1] %2, %0\n\t"
>> +					: "=r" (adi_tag)
>> +					:  "r" (i), "i" (ASI_MCD_REAL));
>> +			asm volatile("stxa %0, [%1] %2\n\t"
>> +					:
>> +					: "r" (adi_tag), "r" (pto),
>> +					  "i" (ASI_MCD_REAL));
> 
> But not here.
> 
> Is this OK?  I suspect you need to add a membar to this second piece
> of MCD tag storing code.

Hi Dave,

You are right. This tag storing code needs a membar as well. I will add that.
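
Presumably the fix mirrors the first loop (a sketch; the loop body is
abbreviated and only the trailing barrier is new):

for (i = pfrom; i < (pfrom + PAGE_SIZE); i += adi_blksize()) {
	asm volatile("ldxa [%1] %2, %0\n\t"
			: "=r" (adi_tag)
			: "r" (i), "i" (ASI_MCD_REAL));
	asm volatile("stxa %0, [%1] %2\n\t"
			:
			: "r" (adi_tag), "r" (pto),
			  "i" (ASI_MCD_REAL));
	/* ... */
}
asm volatile("membar #Sync\n\t");	/* the added barrier */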

Thanks,
Khalid

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH v7 9/9] sparc64: Add support for ADI (Application Data Integrity)
  2017-08-09 21:26   ` Khalid Aziz
@ 2017-08-25 22:31     ` Anthony Yznaga
  -1 siblings, 0 replies; 86+ messages in thread
From: Anthony Yznaga @ 2017-08-25 22:31 UTC (permalink / raw)
  To: Khalid Aziz
  Cc: David Miller, dave.hansen, corbet, Bob Picco, steven.sistare,
	pasha.tatashin, mike.kravetz, mingo, nitin.m.gupta,
	kirill.shutemov, tom.hromatka, eric.saint.etienne, allen.pais,
	cmetcalf, akpm, geert, tklauser, atish.patra, vijay.ac.kumar,
	peterz, mhocko, jack, lstoakes, hughd, thomas.tai,
	paul.gortmaker, ross.zwisler, dave.jiang, willy, ying.huang,
	zhongjiang, minchan, vegard.nossum, imbrenda, aneesh.kumar,
	aarcange, linux-doc, linux-kernel, sparclinux, linux-mm,
	Khalid Aziz


> On Aug 9, 2017, at 2:26 PM, Khalid Aziz <khalid.aziz@oracle.com> wrote:
> 
> ADI is a new feature supported on SPARC M7 and newer processors to allow
> hardware to catch rogue accesses to memory. ADI is supported for data
> fetches only and not instruction fetches. An app can enable ADI on its
> data pages, set version tags on them and use versioned addresses to
> access the data pages. Upper bits of the address contain the version
> tag. On M7 processors, upper four bits (bits 63-60) contain the version
> tag. If a rogue app attempts to access ADI enabled data pages, its
> access is blocked and processor generates an exception. Please see
> Documentation/sparc/adi.txt for further details.
> 
> This patch extends mprotect to enable ADI (TSTATE.mcde), enable/disable
> MCD (Memory Corruption Detection) on selected memory ranges, enable
> TTE.mcd in PTEs, return ADI parameters to userspace and save/restore ADI
> version tags on page swap out/in or migration. ADI is not enabled by
> default for any task. A task must explicitly enable ADI on a memory
> range and set version tag for ADI to be effective for the task.
> 
> Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
> Cc: Khalid Aziz <khalid@gonehiking.org>
> ---
> v7:
> 	- Enhanced arch_validate_prot() to enable ADI only on writable
> 	  addresses backed by physical RAM
> 	- Added support for saving/restoring ADI tags for each ADI
> 	  block size address range on a page on swap in/out
> 	- Added code to copy ADI tags on COW
> 	- Updated values for auxiliary vectors to not conflict with
> 	  values on other architectures to avoid conflict in glibc. glibc
> 	  consolidates all auxiliary vectors into its headers and
> 	  duplicate values in consolidated header are problematic
> 	- Disable same page merging on ADI enabled pages since ADI tags
> 	  may not match on pages with identical data
> 	- Broke the patch up further into smaller patches
> 
> v6:
> 	- Eliminated instructions to read and write PSTATE as well as
> 	  MCDPER and PMCDPER on every access to userspace addresses
> 	  by setting PSTATE and PMCDPER correctly upon entry into
> 	  kernel. PSTATE.mcde and PMCDPER are set upon entry into
> 	  kernel when running on an M7 processor. PSTATE.mcde being
> 	  set only affects memory accesses that have TTE.mcd set.
> 	  PMCDPER being set only affects writes to memory addresses
> 	  that have TTE.mcd set. This ensures any faults caused by
> 	  ADI tag mismatch on a write are exposed before kernel returns
> 	  to userspace.
> 
> v5:
> 	- Fixed indentation issues and instructions in assembly code
> 	- Removed CONFIG_SPARC64 from mdesc.c
> 	- Changed to maintain state of MCDPER register in thread info
> 	  flags as opposed to in mm context. MCDPER is a per-thread
> 	  state and belongs in thread info flag as opposed to mm context
> 	  which is shared across threads. Added comments to clarify this
> 	  is a lazily maintained state and must be updated on context
> 	  switch and copy_process()
> 	- Updated code to use the new arch_do_swap_page() and
> 	  arch_unmap_one() functions
> 
> v4:
> 	- Broke patch up into smaller patches
> 
> v3:
> 	- Removed CONFIG_SPARC_ADI
> 	- Replaced prctl commands with mprotect
> 	- Added auxiliary vectors for ADI parameters
> 	- Enabled ADI for swappable pages
> 
> v2:
> 	- Fixed a build error
> 
> Documentation/sparc/adi.txt             | 272 +++++++++++++++++++++++++++++++
> arch/sparc/include/asm/mman.h           |  72 ++++++++-
> arch/sparc/include/asm/mmu_64.h         |  17 ++
> arch/sparc/include/asm/mmu_context_64.h |  43 +++++
> arch/sparc/include/asm/page_64.h        |   4 +
> arch/sparc/include/asm/pgtable_64.h     |  46 ++++++
> arch/sparc/include/asm/thread_info_64.h |   2 +-
> arch/sparc/include/asm/trap_block.h     |   2 +
> arch/sparc/include/uapi/asm/mman.h      |   2 +
> arch/sparc/kernel/adi_64.c              | 277 ++++++++++++++++++++++++++++++++
> arch/sparc/kernel/etrap_64.S            |  28 +++-
> arch/sparc/kernel/process_64.c          |  25 +++
> arch/sparc/kernel/setup_64.c            |  11 +-
> arch/sparc/kernel/vmlinux.lds.S         |   5 +
> arch/sparc/mm/gup.c                     |  37 +++++
> arch/sparc/mm/hugetlbpage.c             |  14 +-
> arch/sparc/mm/init_64.c                 |  33 ++++
> arch/sparc/mm/tsb.c                     |  21 +++
> include/linux/mm.h                      |   3 +
> mm/ksm.c                                |   4 +
> 20 files changed, 913 insertions(+), 5 deletions(-)
> create mode 100644 Documentation/sparc/adi.txt
> 
> diff --git a/Documentation/sparc/adi.txt b/Documentation/sparc/adi.txt
> new file mode 100644
> index 000000000000..383bc65fec1e
> --- /dev/null
> +++ b/Documentation/sparc/adi.txt
> @@ -0,0 +1,272 @@
> +Application Data Integrity (ADI)
> +================================
> +
> +SPARC M7 processor adds the Application Data Integrity (ADI) feature.
> +ADI allows a task to set version tags on any subset of its address
> +space. Once ADI is enabled and version tags are set for ranges of
> +address space of a task, the processor will compare the tag in pointers
> +to memory in these ranges to the version set by the application
> +previously. Access to memory is granted only if the tag in the given
> +pointer matches the tag set by the application. In case of a mismatch,
> +the processor raises an exception.
> +
> +The following steps must be taken by a task to enable ADI fully:
> +
> +1. Set the user mode PSTATE.mcde bit. This acts as master switch for
> +   the task's entire address space to enable/disable ADI for the task.
> +
> +2. Set TTE.mcd bit on any TLB entries that correspond to the range of
> +   addresses ADI is being enabled on. MMU checks the version tag only
> +   on the pages that have TTE.mcd bit set.
> +
> +3. Set the version tag for virtual addresses using the stxa instruction
> +   and one of the MCD specific ASIs. Each stxa instruction sets the
> +   given tag on one ADI block size worth of bytes. This step must be
> +   repeated for every ADI block in a page to tag the entire page.
> +
> +The ADI block size for the platform is provided by the hypervisor to the
> +kernel in machine description tables. The hypervisor also provides the
> +number of top bits in the virtual address that specify the version tag.  Once
> +version tag has been set for a memory location, the tag is stored in the
> +physical memory and the same tag must be present in the ADI version tag
> +bits of the virtual address being presented to the MMU. For example, on
> +the SPARC M7 processor, the MMU uses bits 63-60 for version tags and the
> +ADI block size is the same as the cacheline size, which is 64 bytes. A
> +task that sets the ADI version to, say, 10 on a range of memory must
> +access that memory using virtual addresses with 0xa in bits 63-60.
> +
> +ADI is enabled on a set of pages using mprotect() with PROT_ADI flag.
> +When ADI is enabled on a set of pages by a task for the first time,
> +kernel sets the PSTATE.mcde bit for the task. Version tags for memory
> +addresses are set with an stxa instruction on the addresses using
> +ASI_MCD_PRIMARY or ASI_MCD_ST_BLKINIT_PRIMARY. ADI block size is
> +provided by the hypervisor to the kernel.  The kernel returns the value
> +of the ADI block size to userspace using an auxiliary vector along with
> +other ADI info. The following auxiliary vectors are provided by the kernel:
> +
> +	AT_ADI_BLKSZ	ADI block size. This is the granularity and
> +			alignment, in bytes, of ADI versioning.
> +	AT_ADI_NBITS	Number of ADI version bits in the VA

The previous patch series also defined AT_ADI_UEONADI.  Why was that
removed?

> +
> +
> +IMPORTANT NOTES:
> +
> +- Version tag values of 0x0 and 0xf are reserved.

The documentation should probably state more specifically that an
in-memory tag value of 0x0 or 0xf is treated as "match all" by the HW,
meaning that a mismatch exception will never be generated regardless
of the tag bits set in the VA accessing the memory.

> +
> +- Version tags are set on virtual addresses from userspace even though
> +  tags are stored in physical memory. Tags are set on a physical page
> +  after it has been allocated to a task and a pte has been created for
> +  it.
> +
> +- When a task frees a memory page it had set version tags on, the page
> +  goes back to the free page pool. When this page is re-allocated to a
> +  task, the kernel clears the page using the block initialization ASI,
> +  which clears the version tags for the page as well. If a page allocated
> +  to a task is freed and allocated back to the same task, old version
> +  tags set by the task on that page will no longer be present.

The specifics should be included here, too, so someone doesn't have
to guess what's going on if they make changes and the tags are no longer
cleared.  The HW clears the tag for a cacheline for block initializing
stores to 64-byte aligned addresses if PSTATE.mcde=0 or TTE.mcd=0.
PSTATE.mcde is set when executing in the kernel, but pages are cleared
using kernel physical mapping VAs which are mapped with TTE.mcd=0.

Another HW behavior that should be mentioned is that tag mismatches
are not detected for non-faulting loads.

> +
> +- The kernel does not set any tags for user pages and it is entirely a
> +  task's responsibility to set any version tags. The kernel does ensure
> +  the version tags are preserved if a page is swapped out to disk and
> +  swapped back in. It also preserves the version tags if a page is
> +  migrated.

I only have a cursory understanding of how page migration works, but
I could not see how the tags would be preserved if a page were migrated.
I figured the place to copy the tags would be migrate_page_copy(), but
I don't see changes there.
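
If tags do need to be copied at migration time, I'd expect something
roughly like the tag-copy loop in copy_user_highpage() below. An
untested sketch, where adi_copy_page_tags() is a hypothetical helper
that the migration path would need to call for ADI-tagged pages:

	static void adi_copy_page_tags(struct page *to, struct page *from)
	{
		unsigned long adi_tag, pfrom, pto, i;

		pfrom = page_to_phys(from);
		pto = page_to_phys(to);
		/* two tags per loop iteration would also work; one per
		 * ADI block keeps the sketch simple
		 */
		for (i = pfrom; i < (pfrom + PAGE_SIZE); i += adi_blksize()) {
			asm volatile("ldxa [%1] %2, %0\n\t"
					: "=r" (adi_tag)
					: "r" (i), "i" (ASI_MCD_REAL));
			asm volatile("stxa %0, [%1] %2\n\t"
					:
					: "r" (adi_tag), "r" (pto),
					  "i" (ASI_MCD_REAL));
			pto += adi_blksize();
		}
		asm volatile("membar #Sync\n\t");
	}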


> +
> +- ADI works with pages of any size. A userspace task need not be aware
> +  of the page size when using ADI. It can simply select a virtual address
> +  range, enable ADI on the range using mprotect(), and set version tags
> +  for the entire range. mprotect() ensures the range is page aligned and
> +  its length is a multiple of the page size.
> +
> +
> +
> +ADI related traps
> +-----------------
> +
> +With ADI enabled, the following new traps may occur:
> +
> +Disrupting memory corruption
> +
> +	When a store accesses a memory location that has TTE.mcd=1,
> +	the task is running with ADI enabled (PSTATE.mcde=1), and the ADI
> +	tag in the address used (bits 63:60) does not match the tag set on
> +	the corresponding cacheline, a memory corruption trap occurs. By
> +	default, it is a disrupting trap and is sent to the hypervisor
> +	first. Hypervisor creates a sun4v error report and sends a
> +	resumable error (TT=0x7e) trap to the kernel. The kernel sends
> +	a SIGSEGV to the task that resulted in this trap with the following
> +	info:
> +
> +		siginfo.si_signo = SIGSEGV;
> +		siginfo.errno = 0;
> +		siginfo.si_code = SEGV_ADIDERR;
> +		siginfo.si_addr = addr; /* PC where first mismatch occurred */
> +		siginfo.si_trapno = 0;
> +
> +
> +Precise memory corruption
> +
> +	When a store accesses a memory location that has TTE.mcd=1,
> +	the task is running with ADI enabled (PSTATE.mcde=1), and the ADI
> +	tag in the address used (bits 63:60) does not match the tag set on
> +	the corresponding cacheline, a memory corruption trap occurs. If
> +	MCD precise exception is enabled (MCDPERR=1), a precise
> +	exception is sent to the kernel with TT=0x1a. The kernel sends
> +	a SIGSEGV to the task that resulted in this trap with the following
> +	info:
> +
> +		siginfo.si_signo = SIGSEGV;
> +		siginfo.errno = 0;
> +		siginfo.si_code = SEGV_ADIPERR;
> +		siginfo.si_addr = addr;	/* address that caused trap */
> +		siginfo.si_trapno = 0;
> +
> +	NOTE: An ADI tag mismatch on a load always results in a precise trap.
> +
> +
> +MCD disabled
> +
> +	When a task has not enabled ADI and attempts to set an ADI version
> +	on a memory address, the processor sends an MCD disabled trap. This
> +	trap is handled by the hypervisor first, which vectors the
> +	trap through to the kernel as a Data Access Exception trap with
> +	fault type set to 0xa (invalid ASI). When this occurs, the kernel
> +	sends the task a SIGSEGV signal with the following info:
> +
> +		siginfo.si_signo = SIGSEGV;
> +		siginfo.errno = 0;
> +		siginfo.si_code = SEGV_ACCADI;
> +		siginfo.si_addr = addr;	/* address that caused trap */
> +		siginfo.si_trapno = 0;
> +
> +
> +Sample program to use ADI
> +-------------------------
> +
> +The following sample program illustrates how to use the ADI
> +functionality.
> +
> +#include <unistd.h>
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <elf.h>
> +#include <sys/ipc.h>
> +#include <sys/shm.h>
> +#include <sys/mman.h>
> +#include <asm/asi.h>
> +
> +#ifndef AT_ADI_BLKSZ
> +#define AT_ADI_BLKSZ	48
> +#endif
> +#ifndef AT_ADI_NBITS
> +#define AT_ADI_NBITS	49
> +#endif
> +
> +#ifndef PROT_ADI
> +#define PROT_ADI	0x10
> +#endif
> +
> +#define BUFFER_SIZE     32*1024*1024UL
> +
> +int main(int argc, char* argv[], char* envp[])
> +{
> +        unsigned long i, adi_blksz, adi_nbits;
> +        char *shmaddr, *tmp_addr, *end, *veraddr;
> +        int shmid, version;
> +	Elf64_auxv_t *auxv;
> +
> +	adi_blksz = 0;
> +
> +	while(*envp++ != NULL);
> +	for (auxv = (Elf64_auxv_t *)envp; auxv->a_type != AT_NULL; auxv++) {
> +		switch (auxv->a_type) {
> +		case AT_ADI_BLKSZ:
> +			adi_blksz = auxv->a_un.a_val;
> +			break;
> +		case AT_ADI_NBITS:
> +			adi_nbits = auxv->a_un.a_val;
> +			break;
> +		}
> +	}
> +	if (adi_blksz == 0) {
> +		fprintf(stderr, "Oops! ADI is not supported\n");
> +		exit(1);
> +	}
> +
> +	printf("ADI capabilities:\n");
> +	printf("\tBlock size = %ld\n", adi_blksz);
> +	printf("\tNumber of bits = %ld\n", adi_nbits);
> +
> +        if ((shmid = shmget(2, BUFFER_SIZE,
> +                                IPC_CREAT | SHM_R | SHM_W)) < 0) {
> +                perror("shmget failed");
> +                exit(1);
> +        }
> +
> +        shmaddr = shmat(shmid, NULL, 0);
> +        if (shmaddr == (char *)-1) {
> +                perror("shm attach failed");
> +                shmctl(shmid, IPC_RMID, NULL);
> +                exit(1);
> +        }
> +
> +	if (mprotect(shmaddr, BUFFER_SIZE, PROT_READ|PROT_WRITE|PROT_ADI)) {
> +		perror("mprotect failed");
> +		goto err_out;
> +	}
> +
> +        /* Set the ADI version tag on the shm segment
> +         */
> +        version = 10;
> +        tmp_addr = shmaddr;
> +        end = shmaddr + BUFFER_SIZE;
> +        while (tmp_addr < end) {
> +                asm volatile(
> +                        "stxa %1, [%0]0x90\n\t"
> +                        :
> +                        : "r" (tmp_addr), "r" (version));
> +                tmp_addr += adi_blksz;
> +        }
> +	asm volatile("membar #Sync\n\t");
> +
> +        /* Create a versioned address from the normal address by placing
> +	 * version tag in the upper adi_nbits bits
> +         */
> +        tmp_addr = (void *) ((unsigned long)shmaddr << adi_nbits);
> +        tmp_addr = (void *) ((unsigned long)tmp_addr >> adi_nbits);
> +        veraddr = (void *) (((unsigned long)version << (64-adi_nbits))
> +                        | (unsigned long)tmp_addr);
> +
> +        printf("Starting the writes:\n");
> +        for (i = 0; i < BUFFER_SIZE; i++) {
> +                veraddr[i] = (char)(i);
> +                if (!(i % (1024 * 1024)))
> +                        printf(".");
> +        }
> +        printf("\n");
> +
> +        printf("Verifying data...");
> +	fflush(stdout);
> +        for (i = 0; i < BUFFER_SIZE; i++)
> +                if (veraddr[i] != (char)i)
> +                        printf("\nIndex %lu mismatched\n", i);
> +        printf("Done.\n");
> +
> +        /* Disable ADI and clean up
> +         */
> +	if (mprotect(shmaddr, BUFFER_SIZE, PROT_READ|PROT_WRITE)) {
> +		perror("mprotect failed");
> +		goto err_out;
> +	}
> +
> +        if (shmdt((const void *)shmaddr) != 0)
> +                perror("Detach failure");
> +        shmctl(shmid, IPC_RMID, NULL);
> +
> +        exit(0);
> +
> +err_out:
> +        if (shmdt((const void *)shmaddr) != 0)
> +                perror("Detach failure");
> +        shmctl(shmid, IPC_RMID, NULL);
> +        exit(1);
> +}
> diff --git a/arch/sparc/include/asm/mman.h b/arch/sparc/include/asm/mman.h
> index 59bb5938d852..b799796ad963 100644
> --- a/arch/sparc/include/asm/mman.h
> +++ b/arch/sparc/include/asm/mman.h
> @@ -6,5 +6,75 @@
> #ifndef __ASSEMBLY__
> #define arch_mmap_check(addr,len,flags)	sparc_mmap_check(addr,len)
> int sparc_mmap_check(unsigned long addr, unsigned long len);
> -#endif
> +
> +#ifdef CONFIG_SPARC64
> +#include <asm/adi_64.h>
> +
> +#define arch_calc_vm_prot_bits(prot, pkey) sparc_calc_vm_prot_bits(prot)
> +static inline unsigned long sparc_calc_vm_prot_bits(unsigned long prot)
> +{
> +	if (prot & PROT_ADI) {
> +		struct pt_regs *regs;
> +
> +		if (!current->mm->context.adi) {
> +			regs = task_pt_regs(current);
> +			regs->tstate |= TSTATE_MCDE;
> +			current->mm->context.adi = true;

If a process is multi-threaded when it enables ADI on some memory for
the first time, TSTATE_MCDE will only be set for the calling thread
and it will not be possible to enable it for the other threads.
One possible way to handle this is to enable TSTATE_MCDE for all user
threads when they are initialized if adi_capable() returns true.
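
One (untested) sketch of that, using the names from this patch, where
"task" is the user thread being initialized:

	/* e.g. in the thread creation path: enable ADI for every
	 * user thread on ADI-capable processors
	 */
	if (adi_capable()) {
		struct pt_regs *regs = task_pt_regs(task);

		regs->tstate |= TSTATE_MCDE;
	}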


> +		}
> +		return VM_SPARC_ADI;
> +	} else {
> +		return 0;
> +	}
> +}
> +
> +#define arch_vm_get_page_prot(vm_flags) sparc_vm_get_page_prot(vm_flags)
> +static inline pgprot_t sparc_vm_get_page_prot(unsigned long vm_flags)
> +{
> +	return (vm_flags & VM_SPARC_ADI) ? __pgprot(_PAGE_MCD_4V) : __pgprot(0);
> +}
> +
> +#define arch_validate_prot(prot, addr) sparc_validate_prot(prot, addr)
> +static inline int sparc_validate_prot(unsigned long prot, unsigned long addr)
> +{
> +	if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_ADI))
> +		return 0;
> +	if (prot & PROT_ADI) {
> +		if (!adi_capable())
> +			return 0;
> +
> +		/* ADI tags can not be set on read-only memory, so it makes
> +		 * sense to enable ADI on writable memory only.
> +		 */
> +		if (!(prot & PROT_WRITE))
> +			return 0;

This prevents the use of ADI for the legitimate case where shared memory
is mapped read/write for a master process but mapped read-only for a
client process.  The master process could set the tags and communicate
the expected tag values to the client.
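
I.e. the client side of that scenario would want something like:

	/* read-only mapping of shared memory with ADI checking;
	 * currently rejected because PROT_WRITE is not set
	 */
	mprotect(shmaddr, BUFFER_SIZE, PROT_READ | PROT_ADI);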


> +
> +		if (addr) {
> +			struct vm_area_struct *vma;
> +
> +			vma = find_vma(current->mm, addr);
> +			if (vma) {
> +				/* ADI can not be enabled on PFN
> +				 * mapped pages
> +				 */
> +				if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
> +					return 0;
> +
> +				/* Mergeable pages can become unmergeable
> +				 * if ADI is enabled on them even if they
> +				 * have identical data on them. This can be
> +				 * because ADI enabled pages with identical
> +				 * data may still not have identical ADI
> +				 * tags on them. Disallow ADI on mergeable
> +				 * pages.
> +				 */
> +				if (vma->vm_flags & VM_MERGEABLE)
> +					return 0;
> +			}
> +		}
> +	}
> +	return 1;
> +}
> +#endif /* CONFIG_SPARC64 */
> +
> +#endif /* __ASSEMBLY__ */
> #endif /* __SPARC_MMAN_H__ */
> diff --git a/arch/sparc/include/asm/mmu_64.h b/arch/sparc/include/asm/mmu_64.h
> index 83b36a5371ff..a65d51ebe00b 100644
> --- a/arch/sparc/include/asm/mmu_64.h
> +++ b/arch/sparc/include/asm/mmu_64.h
> @@ -89,6 +89,20 @@ struct tsb_config {
> #define MM_NUM_TSBS	1
> #endif
> 
> +/* ADI tags are stored when a page is swapped out and the storage for
> + * tags is allocated dynamically. There is a tag storage descriptor
> + * associated with each set of tag storage pages, and the descriptors
> + * are also allocated dynamically. Since the kernel allocates a full
> + * page to hold tag storage descriptors, up to
> + * PAGE_SIZE/sizeof(tag storage descriptor) descriptors fit on that page.
> + */
> +typedef struct {
> +	unsigned long	start;		/* Start address for this tag storage */
> +	unsigned long	end;		/* Last address for tag storage */
> +	unsigned char	*tags;		/* Where the tags are */
> +	unsigned long	tag_users;	/* number of references to descriptor */
> +} tag_storage_desc_t;
> +
> typedef struct {
> 	spinlock_t		lock;
> 	unsigned long		sparc64_ctx_val;
> @@ -96,6 +110,9 @@ typedef struct {
> 	unsigned long		thp_pte_count;
> 	struct tsb_config	tsb_block[MM_NUM_TSBS];
> 	struct hv_tsb_descr	tsb_descr[MM_NUM_TSBS];
> +	bool			adi;
> +	tag_storage_desc_t	*tag_store;
> +	spinlock_t		tag_lock;
> } mm_context_t;
> 
> #endif /* !__ASSEMBLY__ */
> diff --git a/arch/sparc/include/asm/mmu_context_64.h b/arch/sparc/include/asm/mmu_context_64.h
> index 2cddcda4f85f..68de059551f9 100644
> --- a/arch/sparc/include/asm/mmu_context_64.h
> +++ b/arch/sparc/include/asm/mmu_context_64.h
> @@ -9,6 +9,7 @@
> #include <linux/mm_types.h>
> 
> #include <asm/spitfire.h>
> +#include <asm/adi_64.h>
> #include <asm-generic/mm_hooks.h>
> 
> static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
> @@ -129,6 +130,48 @@ static inline void switch_mm(struct mm_struct *old_mm, struct mm_struct *mm, str
> 
> #define deactivate_mm(tsk,mm)	do { } while (0)
> #define activate_mm(active_mm, mm) switch_mm(active_mm, mm, NULL)
> +
> +#define  __HAVE_ARCH_START_CONTEXT_SWITCH
> +static inline void arch_start_context_switch(struct task_struct *prev)
> +{
> +	/* Save the current state of MCDPER register for the process
> +	 * we are switching from
> +	 */
> +	if (adi_capable()) {
> +		register unsigned long tmp_mcdper;
> +
> +		__asm__ __volatile__(
> +			".word 0x83438000\n\t"	/* rd  %mcdper, %g1 */
> +			"mov %%g1, %0\n\t"
> +			: "=r" (tmp_mcdper)
> +			:
> +			: "g1");
> +		if (tmp_mcdper)
> +			set_tsk_thread_flag(prev, TIF_MCDPER);
> +		else
> +			clear_tsk_thread_flag(prev, TIF_MCDPER);
> +	}
> +}
> +
> +#define finish_arch_post_lock_switch	finish_arch_post_lock_switch
> +static inline void finish_arch_post_lock_switch(void)
> +{
> +	/* Restore the state of MCDPER register for the new process
> +	 * just switched to.
> +	 */
> +	if (adi_capable()) {
> +		register unsigned long tmp_mcdper;
> +
> +		tmp_mcdper = test_thread_flag(TIF_MCDPER);
> +		__asm__ __volatile__(
> +			"mov %0, %%g1\n\t"
> +			".word 0x9d800001\n\t"	/* wr %g0, %g1, %mcdper" */
> +			:
> +			: "ir" (tmp_mcdper)
> +			: "g1");
> +	}
> +}
> +
> #endif /* !(__ASSEMBLY__) */
> 
> #endif /* !(__SPARC64_MMU_CONTEXT_H) */
> diff --git a/arch/sparc/include/asm/page_64.h b/arch/sparc/include/asm/page_64.h
> index 5961b2d8398a..dc582c5611f8 100644
> --- a/arch/sparc/include/asm/page_64.h
> +++ b/arch/sparc/include/asm/page_64.h
> @@ -46,6 +46,10 @@ struct page;
> void clear_user_page(void *addr, unsigned long vaddr, struct page *page);
> #define copy_page(X,Y)	memcpy((void *)(X), (void *)(Y), PAGE_SIZE)
> void copy_user_page(void *to, void *from, unsigned long vaddr, struct page *topage);
> +#define __HAVE_ARCH_COPY_USER_HIGHPAGE
> +struct vm_area_struct;
> +void copy_user_highpage(struct page *to, struct page *from,
> +			unsigned long vaddr, struct vm_area_struct *vma);
> 
> /* Unlike sparc32, sparc64's parameter passing API is more
>  * sane in that structures which as small enough are passed
> diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
> index af045061f41e..51da342c392d 100644
> --- a/arch/sparc/include/asm/pgtable_64.h
> +++ b/arch/sparc/include/asm/pgtable_64.h
> @@ -18,6 +18,7 @@
> #include <asm/types.h>
> #include <asm/spitfire.h>
> #include <asm/asi.h>
> +#include <asm/adi.h>
> #include <asm/page.h>
> #include <asm/processor.h>
> 
> @@ -570,6 +571,18 @@ static inline pte_t pte_mkspecial(pte_t pte)
> 	return pte;
> }
> 
> +static inline pte_t pte_mkmcd(pte_t pte)
> +{
> +	pte_val(pte) |= _PAGE_MCD_4V;
> +	return pte;
> +}
> +
> +static inline pte_t pte_mknotmcd(pte_t pte)
> +{
> +	pte_val(pte) &= ~_PAGE_MCD_4V;
> +	return pte;
> +}
> +
> static inline unsigned long pte_young(pte_t pte)
> {
> 	unsigned long mask;
> @@ -1001,6 +1014,39 @@ int page_in_phys_avail(unsigned long paddr);
> int remap_pfn_range(struct vm_area_struct *, unsigned long, unsigned long,
> 		    unsigned long, pgprot_t);
> 
> +void adi_restore_tags(struct mm_struct *mm, struct vm_area_struct *vma,
> +		      unsigned long addr, pte_t pte);
> +
> +int adi_save_tags(struct mm_struct *mm, struct vm_area_struct *vma,
> +		  unsigned long addr, pte_t oldpte);
> +
> +#define __HAVE_ARCH_DO_SWAP_PAGE
> +static inline void arch_do_swap_page(struct mm_struct *mm,
> +				     struct vm_area_struct *vma,
> +				     unsigned long addr,
> +				     pte_t pte, pte_t oldpte)
> +{
> +	/* If this is a new page being mapped in, there can be no
> +	 * ADI tags stored away for this page. Skip looking for
> +	 * stored tags
> +	 */
> +	if (pte_none(oldpte))
> +		return;
> +
> +	if (adi_state.enabled && (pte_val(pte) & _PAGE_MCD_4V))
> +		adi_restore_tags(mm, vma, addr, pte);
> +}
> +
> +#define __HAVE_ARCH_UNMAP_ONE
> +static inline int arch_unmap_one(struct mm_struct *mm,
> +				 struct vm_area_struct *vma,
> +				 unsigned long addr, pte_t oldpte)
> +{
> +	if (adi_state.enabled && (pte_val(oldpte) & _PAGE_MCD_4V))
> +		return adi_save_tags(mm, vma, addr, oldpte);
> +	return 0;
> +}
> +
> static inline int io_remap_pfn_range(struct vm_area_struct *vma,
> 				     unsigned long from, unsigned long pfn,
> 				     unsigned long size, pgprot_t prot)
> diff --git a/arch/sparc/include/asm/thread_info_64.h b/arch/sparc/include/asm/thread_info_64.h
> index 38a24f257b85..9c04acb1f9af 100644
> --- a/arch/sparc/include/asm/thread_info_64.h
> +++ b/arch/sparc/include/asm/thread_info_64.h
> @@ -190,7 +190,7 @@ register struct thread_info *current_thread_info_reg asm("g6");
>  *       in using in assembly, else we can't use the mask as
>  *       an immediate value in instructions such as andcc.
>  */
> -/* flag bit 12 is available */
> +#define TIF_MCDPER		12	/* Precise MCD exception */
> #define TIF_MEMDIE		13	/* is terminating due to OOM killer */
> #define TIF_POLLING_NRFLAG	14
> 
> diff --git a/arch/sparc/include/asm/trap_block.h b/arch/sparc/include/asm/trap_block.h
> index ec9c04de3664..b283e940671a 100644
> --- a/arch/sparc/include/asm/trap_block.h
> +++ b/arch/sparc/include/asm/trap_block.h
> @@ -72,6 +72,8 @@ struct sun4v_1insn_patch_entry {
> };
> extern struct sun4v_1insn_patch_entry __sun4v_1insn_patch,
> 	__sun4v_1insn_patch_end;
> +extern struct sun4v_1insn_patch_entry __sun_m7_1insn_patch,
> +	__sun_m7_1insn_patch_end;
> 
> struct sun4v_2insn_patch_entry {
> 	unsigned int	addr;
> diff --git a/arch/sparc/include/uapi/asm/mman.h b/arch/sparc/include/uapi/asm/mman.h
> index 9765896ecb2c..a72c03397345 100644
> --- a/arch/sparc/include/uapi/asm/mman.h
> +++ b/arch/sparc/include/uapi/asm/mman.h
> @@ -5,6 +5,8 @@
> 
> /* SunOS'ified... */
> 
> +#define PROT_ADI	0x10		/* ADI enabled */
> +
> #define MAP_RENAME      MAP_ANONYMOUS   /* In SunOS terminology */
> #define MAP_NORESERVE   0x40            /* don't reserve swap pages */
> #define MAP_INHERIT     0x80            /* SunOS doesn't do this, but... */
> diff --git a/arch/sparc/kernel/adi_64.c b/arch/sparc/kernel/adi_64.c
> index 9fbb5dd4a7bf..83c1e36ae5fa 100644
> --- a/arch/sparc/kernel/adi_64.c
> +++ b/arch/sparc/kernel/adi_64.c
> @@ -7,10 +7,24 @@
>  * This work is licensed under the terms of the GNU GPL, version 2.
>  */
> #include <linux/init.h>
> +#include <linux/slab.h>
> +#include <linux/mm_types.h>
> #include <asm/mdesc.h>
> #include <asm/adi_64.h>
> +#include <asm/mmu_64.h>
> +#include <asm/pgtable_64.h>
> +
> +/* Each page of storage for ADI tags can accommodate tags for 128
> + * pages. When ADI enabled pages are being swapped out, it would be
> + * prudent to allocate at least enough tag storage space to accommodate
> + * SWAPFILE_CLUSTER number of pages. Allocate enough tag storage to
> + * store tags for four SWAPFILE_CLUSTER pages to reduce need for
> + * further allocations for same vma.
> + */
> +#define TAG_STORAGE_PAGES	8
> 
> struct adi_config adi_state;
> +EXPORT_SYMBOL(adi_state);
> 
> /* mdesc_adi_init() : Parse machine description provided by the
>  *	hypervisor to detect ADI capabilities
> @@ -78,6 +92,19 @@ void __init mdesc_adi_init(void)
> 		goto adi_not_found;
> 	adi_state.caps.nbits = *val;
> 
> +	/* Some of the code to support swapping ADI tags is written
> +	 * with the assumption that two ADI tags can fit inside one byte. If
> +	 * this assumption is broken by a future architecture change,
> +	 * that code will have to be revisited. If that were to happen,
> +	 * disable ADI support so we do not get unpredictable results
> +	 * with programs trying to use ADI and their pages getting
> +	 * swapped out
> +	 */
> +	if (adi_state.caps.nbits > 4) {
> +		pr_warn("WARNING: ADI tag size >4 on this platform. Disabling ADI support\n");
> +		adi_state.enabled = false;
> +	}
> +
> 	mdesc_release(hp);
> 	return;
> 
> @@ -88,3 +115,253 @@ void __init mdesc_adi_init(void)
> 	if (hp)
> 		mdesc_release(hp);
> }
> +
> +tag_storage_desc_t *find_tag_store(struct mm_struct *mm,
> +				   struct vm_area_struct *vma,
> +				   unsigned long addr)
> +{
> +	tag_storage_desc_t *tag_desc = NULL;
> +	unsigned long i, max_desc, flags;
> +
> +	/* Check if this vma already has tag storage descriptor
> +	 * allocated for it.
> +	 */
> +	max_desc = PAGE_SIZE/sizeof(tag_storage_desc_t);
> +	if (mm->context.tag_store) {
> +		tag_desc = mm->context.tag_store;
> +		spin_lock_irqsave(&mm->context.tag_lock, flags);
> +		for (i = 0; i < max_desc; i++) {
> +			if ((addr >= tag_desc->start) &&
> +			    ((addr + PAGE_SIZE - 1) <= tag_desc->end))
> +				break;
> +			tag_desc++;
> +		}
> +		spin_unlock_irqrestore(&mm->context.tag_lock, flags);
> +
> +		/* If no matching entries were found, this must be a
> +		 * freshly allocated page
> +		 */
> +		if (i >= max_desc)
> +			tag_desc = NULL;
> +	}
> +
> +	return tag_desc;
> +}
> +
> +tag_storage_desc_t *alloc_tag_store(struct mm_struct *mm,
> +				    struct vm_area_struct *vma,
> +				    unsigned long addr)
> +{
> +	unsigned char *tags;
> +	unsigned long i, size, max_desc, flags;
> +	tag_storage_desc_t *tag_desc, *open_desc;
> +	unsigned long end_addr, hole_start, hole_end;
> +
> +	max_desc = PAGE_SIZE/sizeof(tag_storage_desc_t);
> +	open_desc = NULL;
> +	hole_start = 0;
> +	hole_end = ULONG_MAX;
> +	end_addr = addr + PAGE_SIZE - 1;
> +
> +	/* Check if this vma already has tag storage descriptor
> +	 * allocated for it.
> +	 */
> +	spin_lock_irqsave(&mm->context.tag_lock, flags);
> +	if (mm->context.tag_store) {
> +		tag_desc = mm->context.tag_store;
> +
> +		/* Look for a matching entry for this address. While doing
> +		 * that, look for the first open slot as well and find
> +		 * the hole in already allocated range where this request
> +		 * will fit in.
> +		 */
> +		for (i = 0; i < max_desc; i++) {
> +			if (tag_desc->tag_users == 0) {
> +				if (open_desc == NULL)
> +					open_desc = tag_desc;
> +			} else {
> +				if ((addr >= tag_desc->start) &&
> +				    (tag_desc->end >= (addr + PAGE_SIZE - 1))) {
> +					tag_desc->tag_users++;
> +					goto out;
> +				}
> +			}
> +			if ((tag_desc->start > end_addr) &&
> +			    (tag_desc->start < hole_end))
> +				hole_end = tag_desc->start;
> +			if ((tag_desc->end < addr) &&
> +			    (tag_desc->end > hole_start))
> +				hole_start = tag_desc->end;
> +			tag_desc++;
> +		}
> +
> +	} else {
> +		size = sizeof(tag_storage_desc_t)*max_desc;
> +		mm->context.tag_store = kzalloc(size, GFP_NOIO|__GFP_NOWARN);

The spin_lock_irqsave() above means that all but level 15 interrupts
will be disabled when kzalloc() is called.  If kzalloc() can sleep
there's a risk of deadlock.
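
A rough sketch of one way to avoid it (tag_store here is a new local):
allocate before taking the lock, and discard the allocation if another
thread raced in first:

	size = sizeof(tag_storage_desc_t) * max_desc;
	tag_store = kzalloc(size, GFP_NOIO | __GFP_NOWARN);

	spin_lock_irqsave(&mm->context.tag_lock, flags);
	if (mm->context.tag_store == NULL) {
		mm->context.tag_store = tag_store;
		tag_store = NULL;		/* consumed */
	}
	spin_unlock_irqrestore(&mm->context.tag_lock, flags);

	kfree(tag_store);	/* no-op if the allocation was used */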


> +		if (mm->context.tag_store == NULL) {
> +			tag_desc = NULL;
> +			goto out;
> +		}
> +		tag_desc = mm->context.tag_store;
> +		for (i = 0; i < max_desc; i++, tag_desc++)
> +			tag_desc->tag_users = 0;
> +		open_desc = mm->context.tag_store;
> +		i = 0;
> +	}
> +
> +	/* Check if we ran out of tag storage descriptors */
> +	if (open_desc == NULL) {
> +		tag_desc = NULL;
> +		goto out;
> +	}
> +
> +	/* Mark this tag descriptor slot in use and then initialize it */
> +	tag_desc = open_desc;
> +	tag_desc->tag_users = 1;
> +
> +	/* Tag storage has not been allocated for this vma and space
> +	 * is available in tag storage descriptor. Since this page is
> +	 * being swapped out, there is a high probability that subsequent
> +	 * pages in the VMA will be swapped out as well. Allocate pages to
> +	 * store tags for as many pages in this vma as possible, but not
> +	 * more than TAG_STORAGE_PAGES. Each byte in tag space holds
> +	 * two ADI tags since each ADI tag is 4 bits. Each ADI tag
> +	 * covers adi_blksize() worth of addresses. Check if the hole is
> +	 * big enough to accommodate full address range for using
> +	 * TAG_STORAGE_PAGES number of tag pages.
> +	 */
> +	size = TAG_STORAGE_PAGES * PAGE_SIZE;
> +	end_addr = addr + (size*2*adi_blksize()) - 1;

Since size > PAGE_SIZE, end_addr could theoretically overflow.
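
E.g. (sketch):

	end_addr = addr + (size * 2 * adi_blksize()) - 1;
	if (end_addr < addr)	/* wrapped past ULONG_MAX */
		end_addr = ULONG_MAX;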


> +	if (hole_end < end_addr) {
> +		/* Available hole is too small on the upper end of
> +		 * address. Can we expand the range towards the lower
> +		 * address and maximize use of this slot?
> +		 */
> +		unsigned long tmp_addr;
> +
> +		end_addr = hole_end - 1;
> +		tmp_addr = end_addr - (size*2*adi_blksize()) + 1;

Similarly, tmp_addr may underflow.
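
Something along these lines (sketch, with a new local "span") would
avoid the wrap:

	span = size * 2 * adi_blksize();
	end_addr = hole_end - 1;
	if (end_addr + 1 < span)	/* subtraction would underflow */
		tmp_addr = hole_start + 1;
	else
		tmp_addr = end_addr - span + 1;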

> +		if (tmp_addr < hole_start) {
> +			/* Available hole is restricted on lower address
> +			 * end as well
> +			 */
> +			tmp_addr = hole_start + 1;
> +		}
> +		addr = tmp_addr;
> +		size = (end_addr + 1 - addr)/(2*adi_blksize());
> +		size = (size + (PAGE_SIZE-adi_blksize()))/PAGE_SIZE;
> +		size = size * PAGE_SIZE;
> +	}
> +	tags = kzalloc(size, GFP_NOIO|__GFP_NOWARN);

Potential deadlock due to PIL=14?


> +	if (tags == NULL) {
> +		tag_desc->tag_users = 0;
> +		tag_desc = NULL;
> +		goto out;
> +	}
> +	tag_desc->start = addr;
> +	tag_desc->tags = tags;
> +	tag_desc->end = end_addr;
> +
> +out:
> +	spin_unlock_irqrestore(&mm->context.tag_lock, flags);
> +	return tag_desc;
> +}
> +
> +void del_tag_store(tag_storage_desc_t *tag_desc, struct mm_struct *mm)
> +{
> +	unsigned long flags;
> +	unsigned char *tags = NULL;
> +
> +	spin_lock_irqsave(&mm->context.tag_lock, flags);
> +	tag_desc->tag_users--;
> +	if (tag_desc->tag_users == 0) {
> +		tag_desc->start = tag_desc->end = 0;
> +		/* Do not free up the tag storage space allocated
> +		 * by the first descriptor. This is persistent
> +		 * emergency tag storage space for the task.
> +		 */
> +		if (tag_desc != mm->context.tag_store) {
> +			tags = tag_desc->tags;
> +			tag_desc->tags = NULL;
> +		}
> +	}
> +	spin_unlock_irqrestore(&mm->context.tag_lock, flags);
> +	kfree(tags);
> +}
> +
> +#define tag_start(addr, tag_desc)		\
> +	((tag_desc)->tags + ((addr - (tag_desc)->start)/(2*adi_blksize())))
> +
> +/* Retrieve any saved ADI tags for the page being swapped back in and
> + * restore these tags to the newly allocated physical page.
> + */
> +void adi_restore_tags(struct mm_struct *mm, struct vm_area_struct *vma,
> +		      unsigned long addr, pte_t pte)
> +{
> +	unsigned char *tag;
> +	tag_storage_desc_t *tag_desc;
> +	unsigned long paddr, tmp, version1, version2;
> +
> +	/* Check if the swapped out page has an ADI version
> +	 * saved. If yes, restore version tag to the newly
> +	 * allocated page.
> +	 */
> +	tag_desc = find_tag_store(mm, vma, addr);
> +	if (tag_desc == NULL)
> +		return;
> +
> +	tag = tag_start(addr, tag_desc);
> +	paddr = pte_val(pte) & _PAGE_PADDR_4V;
> +	for (tmp = paddr; tmp < (paddr+PAGE_SIZE); tmp += adi_blksize()) {
> +		version1 = (*tag) >> 4;
> +		version2 = (*tag) & 0x0f;
> +		*tag++ = 0;
> +		asm volatile("stxa %0, [%1] %2\n\t"
> +			:
> +			: "r" (version1), "r" (tmp),
> +			  "i" (ASI_MCD_REAL));
> +		tmp += adi_blksize();
> +		asm volatile("stxa %0, [%1] %2\n\t"
> +			:
> +			: "r" (version2), "r" (tmp),
> +			  "i" (ASI_MCD_REAL));
> +	}
> +	asm volatile("membar #Sync\n\t");
> +
> +	/* Check and mark this tag space for release later if
> +	 * the swapped in page was the last user of tag space
> +	 */
> +	del_tag_store(tag_desc, mm);
> +}
> +
> +/* A page is about to be swapped out. Save any ADI tags associated with
> + * this physical page so they can be restored later when the page is swapped
> + * back in.
> + */
> +int adi_save_tags(struct mm_struct *mm, struct vm_area_struct *vma,
> +		  unsigned long addr, pte_t oldpte)
> +{
> +	unsigned char *tag;
> +	tag_storage_desc_t *tag_desc;
> +	unsigned long version1, version2, paddr, tmp;
> +
> +	tag_desc = alloc_tag_store(mm, vma, addr);
> +	if (tag_desc == NULL)
> +		return -1;
> +
> +	tag = tag_start(addr, tag_desc);
> +	paddr = pte_val(oldpte) & _PAGE_PADDR_4V;
> +	for (tmp = paddr; tmp < (paddr+PAGE_SIZE); tmp += adi_blksize()) {
> +		asm volatile("ldxa [%1] %2, %0\n\t"
> +				: "=r" (version1)
> +				: "r" (tmp), "i" (ASI_MCD_REAL));
> +		tmp += adi_blksize();
> +		asm volatile("ldxa [%1] %2, %0\n\t"
> +				: "=r" (version2)
> +				: "r" (tmp), "i" (ASI_MCD_REAL));
> +		*tag = (version1 << 4) | version2;
> +		tag++;
> +	}
> +
> +	return 0;
> +}
> diff --git a/arch/sparc/kernel/etrap_64.S b/arch/sparc/kernel/etrap_64.S
> index 1276ca2567ba..7be33bf45cff 100644
> --- a/arch/sparc/kernel/etrap_64.S
> +++ b/arch/sparc/kernel/etrap_64.S
> @@ -132,7 +132,33 @@ etrap_save:	save	%g2, -STACK_BIAS, %sp
> 		stx	%g6, [%sp + PTREGS_OFF + PT_V9_G6]
> 		stx	%g7, [%sp + PTREGS_OFF + PT_V9_G7]
> 		or	%l7, %l0, %l7
> -		sethi	%hi(TSTATE_TSO | TSTATE_PEF), %l0
> +661:		sethi	%hi(TSTATE_TSO | TSTATE_PEF), %l0
> +		/*
> +		 * If userspace is using ADI, it could potentially pass
> +		 * a pointer with version tag embedded in it. To maintain
> +		 * the ADI security, we must enable PSTATE.mcde. Userspace
> +		 * would have already set TTE.mcd in an earlier call to
> +		 * kernel and set the version tag for the address being
> +		 * dereferenced. Setting PSTATE.mcde would ensure any
> +		 * access to userspace data through a system call honors
> +		 * ADI and does not allow a rogue app to bypass ADI by
> +		 * using system calls. Setting PSTATE.mcde only affects
> +		 * accesses to virtual addresses that have TTE.mcd set.
> +		 * Set PMCDPER to ensure any exceptions caused by ADI
> +		 * version tag mismatch are exposed before system call
> +		 * returns to userspace. Setting PMCDPER affects only
> +		 * writes to virtual addresses that have TTE.mcd set and
> +		 * have a version tag set as well.
> +		 */
> +		.section .sun_m7_1insn_patch, "ax"
> +		.word	661b
> +		sethi	%hi(TSTATE_TSO | TSTATE_PEF | TSTATE_MCDE), %l0
> +		.previous
> +661:		nop
> +		.section .sun_m7_1insn_patch, "ax"
> +		.word	661b
> +		.word 0xaf902001	/* wrpr %g0, 1, %pmcdper */

I commented on this on the last patch series revision.  PMCDPER could be
set once when each CPU is configured rather than every time the kernel
is entered.  Since it's never cleared, setting it repeatedly unnecessarily
impacts the performance of etrap.

Also, there are places in rtrap where PSTATE is set before continuing
execution in the kernel.  These should also be patched to set TSTATE_MCDE.
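
An untested sketch of setting PMCDPER once per CPU instead, reusing
the same wrpr encoding this patch adds to etrap (placement in each
CPU's bringup path is my assumption):

	if (adi_capable())
		__asm__ __volatile__(
			".word 0xaf902001\n\t"	/* wrpr %g0, 1, %pmcdper */
			: : : "memory");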


> +		.previous
> 		or	%l7, %l0, %l7
> 		wrpr	%l2, %tnpc
> 		wrpr	%l7, (TSTATE_PRIV | TSTATE_IE), %tstate
> diff --git a/arch/sparc/kernel/process_64.c b/arch/sparc/kernel/process_64.c
> index b96104da5bd6..defa5723dfa6 100644
> --- a/arch/sparc/kernel/process_64.c
> +++ b/arch/sparc/kernel/process_64.c
> @@ -664,6 +664,31 @@ int copy_thread(unsigned long clone_flags, unsigned long sp,
> 	return 0;
> }
> 
> +/* TIF_MCDPER in thread info flags for current task is updated lazily upon
> + * a context switch. Update this flag in the current task's thread flags
> + * before dup so the dup'd task will inherit the current TIF_MCDPER flag.
> + */
> +int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
> +{
> +	if (adi_capable()) {
> +		register unsigned long tmp_mcdper;
> +
> +		__asm__ __volatile__(
> +			".word 0x83438000\n\t"	/* rd  %mcdper, %g1 */
> +			"mov %%g1, %0\n\t"
> +			: "=r" (tmp_mcdper)
> +			:
> +			: "g1");
> +		if (tmp_mcdper)
> +			set_thread_flag(TIF_MCDPER);
> +		else
> +			clear_thread_flag(TIF_MCDPER);
> +	}
> +
> +	*dst = *src;
> +	return 0;
> +}
> +
> typedef struct {
> 	union {
> 		unsigned int	pr_regs[32];
> diff --git a/arch/sparc/kernel/setup_64.c b/arch/sparc/kernel/setup_64.c
> index 422b17880955..a9da205da394 100644
> --- a/arch/sparc/kernel/setup_64.c
> +++ b/arch/sparc/kernel/setup_64.c
> @@ -240,6 +240,12 @@ void sun4v_patch_1insn_range(struct sun4v_1insn_patch_entry *start,
> 	}
> }
> 
> +void sun_m7_patch_1insn_range(struct sun4v_1insn_patch_entry *start,
> +			     struct sun4v_1insn_patch_entry *end)
> +{
> +	sun4v_patch_1insn_range(start, end);
> +}
> +
> void sun4v_patch_2insn_range(struct sun4v_2insn_patch_entry *start,
> 			     struct sun4v_2insn_patch_entry *end)
> {
> @@ -289,9 +295,12 @@ static void __init sun4v_patch(void)
> 	sun4v_patch_2insn_range(&__sun4v_2insn_patch,
> 				&__sun4v_2insn_patch_end);
> 	if (sun4v_chip_type == SUN4V_CHIP_SPARC_M7 ||
> -	    sun4v_chip_type == SUN4V_CHIP_SPARC_SN)
> +	    sun4v_chip_type == SUN4V_CHIP_SPARC_SN) {
> +		sun_m7_patch_1insn_range(&__sun_m7_1insn_patch,
> +					 &__sun_m7_1insn_patch_end);
> 		sun_m7_patch_2insn_range(&__sun_m7_2insn_patch,
> 					 &__sun_m7_2insn_patch_end);

Why not call sun4v_patch_1insn_range() and sun4v_patch_2insn_range()
here instead of adding new functions that just call these functions?
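
I.e. simply (sketch):

	sun4v_patch_1insn_range(&__sun_m7_1insn_patch,
				&__sun_m7_1insn_patch_end);
	sun4v_patch_2insn_range(&__sun_m7_2insn_patch,
				&__sun_m7_2insn_patch_end);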

Anthony

> +		}
> 
> 	sun4v_hvapi_init();
> }
> diff --git a/arch/sparc/kernel/vmlinux.lds.S b/arch/sparc/kernel/vmlinux.lds.S
> index 572db686f845..20a70682cce7 100644
> --- a/arch/sparc/kernel/vmlinux.lds.S
> +++ b/arch/sparc/kernel/vmlinux.lds.S
> @@ -144,6 +144,11 @@ SECTIONS
> 		*(.pause_3insn_patch)
> 		__pause_3insn_patch_end = .;
> 	}
> +	.sun_m7_1insn_patch : {
> +		__sun_m7_1insn_patch = .;
> +		*(.sun_m7_1insn_patch)
> +		__sun_m7_1insn_patch_end = .;
> +	}
> 	.sun_m7_2insn_patch : {
> 		__sun_m7_2insn_patch = .;
> 		*(.sun_m7_2insn_patch)
> diff --git a/arch/sparc/mm/gup.c b/arch/sparc/mm/gup.c
> index cd0e32bbcb1d..579f7ae75b35 100644
> --- a/arch/sparc/mm/gup.c
> +++ b/arch/sparc/mm/gup.c
> @@ -11,6 +11,7 @@
> #include <linux/pagemap.h>
> #include <linux/rwsem.h>
> #include <asm/pgtable.h>
> +#include <asm/adi.h>
> 
> /*
>  * The performance critical leaf functions are made noinline otherwise gcc
> @@ -157,6 +158,24 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
> 	pgd_t *pgdp;
> 	int nr = 0;
> 
> +#ifdef CONFIG_SPARC64
> +	if (adi_capable()) {
> +		long addr = start;
> +
> +		/* If userspace has passed a versioned address, kernel
> +		 * will not find it in the VMAs since it does not store
> +		 * the version tags in the list of VMAs. Storing version
> +		 * tags in list of VMAs is impractical since they can be
> +		 * changed any time from userspace without dropping into
> +		 * kernel. Any address search in VMAs will be done with
> +		 * non-versioned addresses. Ensure the ADI version bits
> +		 * are dropped here by sign extending the last bit before
> +		 * ADI bits. IOMMU does not implement version tags.
> +		 */
> +		addr = (addr << (long)adi_nbits()) >> (long)adi_nbits();
> +		start = addr;
> +	}
> +#endif
> 	start &= PAGE_MASK;
> 	addr = start;
> 	len = (unsigned long) nr_pages << PAGE_SHIFT;
> @@ -187,6 +206,24 @@ int get_user_pages_fast(unsigned long start, int nr_pages, int write,
> 	pgd_t *pgdp;
> 	int nr = 0;
> 
> +#ifdef CONFIG_SPARC64
> +	if (adi_capable()) {
> +		long addr = start;
> +
> +		/* If userspace has passed a versioned address, kernel
> +		 * will not find it in the VMAs since it does not store
> +		 * the version tags in the list of VMAs. Storing version
> +		 * tags in list of VMAs is impractical since they can be
> +		 * changed any time from userspace without dropping into
> +		 * kernel. Any address search in VMAs will be done with
> +		 * non-versioned addresses. Ensure the ADI version bits
> +		 * are dropped here by sign extending the last bit before
> +		 * ADI bits. IOMMU does not implement version tags.
> +		 */
> +		addr = (addr << (long)adi_nbits()) >> (long)adi_nbits();
> +		start = addr;
> +	}
> +#endif
> 	start &= PAGE_MASK;
> 	addr = start;
> 	len = (unsigned long) nr_pages << PAGE_SHIFT;
> diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c
> index 88855e383b34..487ed1f1ce86 100644
> --- a/arch/sparc/mm/hugetlbpage.c
> +++ b/arch/sparc/mm/hugetlbpage.c
> @@ -177,8 +177,20 @@ pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,
> 			 struct page *page, int writeable)
> {
> 	unsigned int shift = huge_page_shift(hstate_vma(vma));
> +	pte_t pte;
> 
> -	return hugepage_shift_to_tte(entry, shift);
> +	pte = hugepage_shift_to_tte(entry, shift);
> +
> +#ifdef CONFIG_SPARC64
> +	/* If this vma has ADI enabled on it, turn on TTE.mcd
> +	 */
> +	if (vma->vm_flags & VM_SPARC_ADI)
> +		return pte_mkmcd(pte);
> +	else
> +		return pte_mknotmcd(pte);
> +#else
> +	return pte;
> +#endif
> }
> 
> static unsigned int sun4v_huge_tte_to_shift(pte_t entry)
> diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
> index 3c40ebd50f92..94854e7e833e 100644
> --- a/arch/sparc/mm/init_64.c
> +++ b/arch/sparc/mm/init_64.c
> @@ -3087,3 +3087,36 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
> 		do_flush_tlb_kernel_range(start, end);
> 	}
> }
> +
> +void copy_user_highpage(struct page *to, struct page *from,
> +	unsigned long vaddr, struct vm_area_struct *vma)
> +{
> +	char *vfrom, *vto;
> +
> +	vfrom = kmap_atomic(from);
> +	vto = kmap_atomic(to);
> +	copy_user_page(vto, vfrom, vaddr, to);
> +	kunmap_atomic(vto);
> +	kunmap_atomic(vfrom);
> +
> +	/* If this page has ADI enabled, copy over any ADI tags
> +	 * as well
> +	 */
> +	if (vma->vm_flags & VM_SPARC_ADI) {
> +		unsigned long pfrom, pto, i, adi_tag;
> +
> +		pfrom = page_to_phys(from);
> +		pto = page_to_phys(to);
> +
> +		for (i = pfrom; i < (pfrom + PAGE_SIZE); i += adi_blksize()) {
> +			asm volatile("ldxa [%1] %2, %0\n\t"
> +					: "=r" (adi_tag)
> +					:  "r" (i), "i" (ASI_MCD_REAL));
> +			asm volatile("stxa %0, [%1] %2\n\t"
> +					:
> +					: "r" (adi_tag), "r" (pto),
> +					  "i" (ASI_MCD_REAL));
> +			pto += adi_blksize();
> +		}
> +	}
> +}
> diff --git a/arch/sparc/mm/tsb.c b/arch/sparc/mm/tsb.c
> index 0d4b998c7d7b..6518cc42056b 100644
> --- a/arch/sparc/mm/tsb.c
> +++ b/arch/sparc/mm/tsb.c
> @@ -545,6 +545,9 @@ int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
> 
> 	mm->context.sparc64_ctx_val = 0UL;
> 
> +	mm->context.tag_store = NULL;
> +	spin_lock_init(&mm->context.tag_lock);
> +
> #if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE)
> 	/* We reset them to zero because the fork() page copying
> 	 * will re-increment the counters as the parent PTEs are
> @@ -610,4 +613,22 @@ void destroy_context(struct mm_struct *mm)
> 	}
> 
> 	spin_unlock_irqrestore(&ctx_alloc_lock, flags);
> +
> +	/* If ADI tag storage was allocated for this task, free it */
> +	if (mm->context.tag_store) {
> +		tag_storage_desc_t *tag_desc;
> +		unsigned long max_desc;
> +		unsigned char *tags;
> +
> +		tag_desc = mm->context.tag_store;
> +		max_desc = PAGE_SIZE/sizeof(tag_storage_desc_t);
> +		for (i = 0; i < max_desc; i++) {
> +			tags = tag_desc->tags;
> +			tag_desc->tags = NULL;
> +			kfree(tags);
> +			tag_desc++;
> +		}
> +		kfree(mm->context.tag_store);
> +		mm->context.tag_store = NULL;
> +	}
> }
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index b7aa3932e6d4..c0972114036f 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -231,6 +231,9 @@ extern unsigned int kobjsize(const void *objp);
> # define VM_GROWSUP	VM_ARCH_1
> #elif defined(CONFIG_IA64)
> # define VM_GROWSUP	VM_ARCH_1
> +#elif defined(CONFIG_SPARC64)
> +# define VM_SPARC_ADI	VM_ARCH_1	/* Uses ADI tag for access control */
> +# define VM_ARCH_CLEAR	VM_SPARC_ADI
> #elif !defined(CONFIG_MMU)
> # define VM_MAPPED_COPY	VM_ARCH_1	/* T if mapped copy of data (nommu mmap) */
> #endif
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 216184af0e19..bb82399816ef 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -1797,6 +1797,10 @@ int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
> 		if (*vm_flags & VM_SAO)
> 			return 0;
> #endif
> +#ifdef VM_SPARC_ADI
> +		if (*vm_flags & VM_SPARC_ADI)
> +			return 0;
> +#endif
> 
> 		if (!test_bit(MMF_VM_MERGEABLE, &mm->flags)) {
> 			err = __ksm_enter(mm);
> -- 
> 2.11.0
> 
> --
> To unsubscribe from this list: send the line "unsubscribe sparclinux" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH v7 9/9] sparc64: Add support for ADI (Application Data Integrity)
@ 2017-08-25 22:31     ` Anthony Yznaga
  0 siblings, 0 replies; 86+ messages in thread
From: Anthony Yznaga @ 2017-08-25 22:31 UTC (permalink / raw)
  To: Khalid Aziz
  Cc: David Miller, dave.hansen, corbet, Bob Picco, steven.sistare,
	pasha.tatashin, mike.kravetz, mingo, nitin.m.gupta,
	kirill.shutemov, tom.hromatka, eric.saint.etienne, allen.pais,
	cmetcalf, akpm, geert, tklauser, atish.patra, vijay.ac.kumar,
	peterz, mhocko, jack, lstoakes, hughd, thomas.tai,
	paul.gortmaker, ross.zwisler, dave.jiang, willy, ying.huang,
	zhongjiang, minchan, vegard.nossum, imbrenda, aneesh.kumar,
	aarcange, linux-doc, linux-kernel, sparclinux, linux-mm,
	Khalid Aziz


> On Aug 9, 2017, at 2:26 PM, Khalid Aziz <khalid.aziz@oracle.com> wrote:
> 
> ADI is a new feature supported on SPARC M7 and newer processors to allow
> hardware to catch rogue accesses to memory. ADI is supported for data
> fetches only and not instruction fetches. An app can enable ADI on its
> data pages, set version tags on them and use versioned addresses to
> access the data pages. Upper bits of the address contain the version
> tag. On M7 processors, upper four bits (bits 63-60) contain the version
> tag. If a rogue app attempts to access ADI enabled data pages, its
> access is blocked and processor generates an exception. Please see
> Documentation/sparc/adi.txt for further details.
> 
> This patch extends mprotect to enable ADI (TSTATE.mcde), enable/disable
> MCD (Memory Corruption Detection) on selected memory ranges, enable
> TTE.mcd in PTEs, return ADI parameters to userspace and save/restore ADI
> version tags on page swap out/in or migration. ADI is not enabled by
> default for any task. A task must explicitly enable ADI on a memory
> range and set version tag for ADI to be effective for the task.
> 
> Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
> Cc: Khalid Aziz <khalid@gonehiking.org>
> ---
> v7:
> 	- Enhanced arch_validate_prot() to enable ADI only on writable
> 	  addresses backed by physical RAM
> 	- Added support for saving/restoring ADI tags for each ADI
> 	  block size address range on a page on swap in/out
> 	- Added code to copy ADI tags on COW
> 	- Updated values for auxiliary vectors to not conflict with
> 	  values on other architectures to avoid conflict in glibc. glibc
> 	  consolidates all auxiliary vectors into its headers and
> 	  duplicate values in consolidated header are problematic
> 	- Disable same page merging on ADI enabled pages since ADI tags
> 	  may not match on pages with identical data
> 	- Broke the patch up further into smaller patches
> 
> v6:
> 	- Eliminated instructions to read and write PSTATE as well as
> 	  MCDPER and PMCDPER on every access to userspace addresses
> 	  by setting PSTATE and PMCDPER correctly upon entry into
> 	  kernel. PSTATE.mcde and PMCDPER are set upon entry into
> 	  kernel when running on an M7 processor. PSTATE.mcde being
> 	  set only affects memory accesses that have TTE.mcd set.
> 	  PMCDPER being set only affects writes to memory addresses
> 	  that have TTE.mcd set. This ensures any faults caused by
> 	  ADI tag mismatch on a write are exposed before kernel returns
> 	  to userspace.
> 
> v5:
> 	- Fixed indentation issues and instrcuctions in assembly code
> 	- Removed CONFIG_SPARC64 from mdesc.c
> 	- Changed to maintain state of MCDPER register in thread info
> 	  flags as opposed to in mm context. MCDPER is a per-thread
> 	  state and belongs in thread info flag as opposed to mm context
> 	  which is shared across threads. Added comments to clarify this
> 	  is a lazily maintained state and must be updated on context
> 	  switch and copy_process()
> 	- Updated code to use the new arch_do_swap_page() and
> 	  arch_unmap_one() functions
> 
> v4:
> 	- Broke patch up into smaller patches
> 
> v3:
> 	- Removed CONFIG_SPARC_ADI
> 	- Replaced prctl commands with mprotect
> 	- Added auxiliary vectors for ADI parameters
> 	- Enabled ADI for swappable pages
> 
> v2:
> 	- Fixed a build error
> 
> Documentation/sparc/adi.txt             | 272 +++++++++++++++++++++++++++++++
> arch/sparc/include/asm/mman.h           |  72 ++++++++-
> arch/sparc/include/asm/mmu_64.h         |  17 ++
> arch/sparc/include/asm/mmu_context_64.h |  43 +++++
> arch/sparc/include/asm/page_64.h        |   4 +
> arch/sparc/include/asm/pgtable_64.h     |  46 ++++++
> arch/sparc/include/asm/thread_info_64.h |   2 +-
> arch/sparc/include/asm/trap_block.h     |   2 +
> arch/sparc/include/uapi/asm/mman.h      |   2 +
> arch/sparc/kernel/adi_64.c              | 277 ++++++++++++++++++++++++++++++++
> arch/sparc/kernel/etrap_64.S            |  28 +++-
> arch/sparc/kernel/process_64.c          |  25 +++
> arch/sparc/kernel/setup_64.c            |  11 +-
> arch/sparc/kernel/vmlinux.lds.S         |   5 +
> arch/sparc/mm/gup.c                     |  37 +++++
> arch/sparc/mm/hugetlbpage.c             |  14 +-
> arch/sparc/mm/init_64.c                 |  33 ++++
> arch/sparc/mm/tsb.c                     |  21 +++
> include/linux/mm.h                      |   3 +
> mm/ksm.c                                |   4 +
> 20 files changed, 913 insertions(+), 5 deletions(-)
> create mode 100644 Documentation/sparc/adi.txt
> 
> diff --git a/Documentation/sparc/adi.txt b/Documentation/sparc/adi.txt
> new file mode 100644
> index 000000000000..383bc65fec1e
> --- /dev/null
> +++ b/Documentation/sparc/adi.txt
> @@ -0,0 +1,272 @@
> +Application Data Integrity (ADI)
> +================
> +
> +SPARC M7 processor adds the Application Data Integrity (ADI) feature.
> +ADI allows a task to set version tags on any subset of its address
> +space. Once ADI is enabled and version tags are set for ranges of
> +address space of a task, the processor will compare the tag in pointers
> +to memory in these ranges to the version set by the application
> +previously. Access to memory is granted only if the tag in given pointer
> +matches the tag set by the application. In case of mismatch, processor
> +raises an exception.
> +
> +Following steps must be taken by a task to enable ADI fully:
> +
> +1. Set the user mode PSTATE.mcde bit. This acts as master switch for
> +   the task's entire address space to enable/disable ADI for the task.
> +
> +2. Set TTE.mcd bit on any TLB entries that correspond to the range of
> +   addresses ADI is being enabled on. MMU checks the version tag only
> +   on the pages that have TTE.mcd bit set.
> +
> +3. Set the version tag for virtual addresses using stxa instruction
> +   and one of the MCD specific ASIs. Each stxa instruction sets the
> +   given tag for one ADI block size number of bytes. This step must
> +   be repeated for entire page to set tags for entire page.
> +
> +ADI block size for the platform is provided by the hypervisor to kernel
> +in machine description tables. Hypervisor also provides the number of
> +top bits in the virtual address that specify the version tag.  Once
> +version tag has been set for a memory location, the tag is stored in the
> +physical memory and the same tag must be present in the ADI version tag
> +bits of the virtual address being presented to the MMU. For example on
> +SPARC M7 processor, MMU uses bits 63-60 for version tags and ADI block
> +size is same as cacheline size which is 64 bytes. A task that sets ADI
> +version to, say 10, on a range of memory, must access that memory using
> +virtual addresses that contain 0xa in bits 63-60.
> +
> +ADI is enabled on a set of pages using mprotect() with PROT_ADI flag.
> +When ADI is enabled on a set of pages by a task for the first time,
> +kernel sets the PSTATE.mcde bit fot the task. Version tags for memory
> +addresses are set with an stxa instruction on the addresses using
> +ASI_MCD_PRIMARY or ASI_MCD_ST_BLKINIT_PRIMARY. ADI block size is
> +provided by the hypervisor to the kernel.  Kernel returns the value of
> +ADI block size to userspace using auxiliary vector along with other ADI
> +info. Following auxiliary vectors are provided by the kernel:
> +
> +	AT_ADI_BLKSZ	ADI block size. This is the granularity and
> +			alignment, in bytes, of ADI versioning.
> +	AT_ADI_NBITS	Number of ADI version bits in the VA

The previous patch series also defined AT_ADI_UEONADI.  Why was that
removed?

> +
> +
> +IMPORTANT NOTES:
> +
> +- Version tag values of 0x0 and 0xf are reserved.

The documentation should probably state more specifically that an
in-memory tag value of 0x0 or 0xf is treated as "match all" by the HW
meaning that a mismatch exception will never be generated regardless
of the tag bits set in the VA accessing the memory.

> +
> +- Version tags are set on virtual addresses from userspace even though
> +  tags are stored in physical memory. Tags are set on a physical page
> +  after it has been allocated to a task and a pte has been created for
> +  it.
> +
> +- When a task frees a memory page it had set version tags on, the page
> +  goes back to free page pool. When this page is re-allocated to a task,
> +  kernel clears the page using block initialization ASI which clears the
> +  version tags as well for the page. If a page allocated to a task is
> +  freed and allocated back to the same task, old version tags set by the
> +  task on that page will no longer be present.

The specifics should be included here, too, so someone doesn't have
to guess what's going on if they make changes and the tags are no longer
cleared.  The HW clears the tag for a cacheline for block initializing
stores to 64-byte aligned addresses if PSTATE.mcde=0 or TTE.mcd=0.
PSTATE.mce is set when executing in the kernel, but pages are cleared
using kernel physical mapping VAs which are mapped with TTE.mcd=0.

Another HW behavior that should be mentioned is that tag mismatches
are not detected for non-faulting loads.

> +
> +- Kernel does not set any tags for user pages and it is entirely a
> +  task's responsibility to set any version tags. Kernel does ensure the
> +  version tags are preserved if a page is swapped out to the disk and
> +  swapped back in. It also preserves that version tags if a page is
> +  migrated.

I only have a cursory understanding of how page migration works, but
I could not see how the tags would be preserved if a page were migrated.
I figured the place to copy the tags would be migrate_page_copy(), but
I don't see changes there.


> +
> +- ADI works for any size pages. A userspace task need not be aware of
> +  page size when using ADI. It can simply select a virtual address
> +  range, enable ADI on the range using mprotect() and set version tags
> +  for the entire range. mprotect() ensures range is aligned to page size
> +  and is a multiple of page size.
> +
> +
> +
> +ADI related traps
> +-----------------
> +
> +With ADI enabled, following new traps may occur:
> +
> +Disrupting memory corruption
> +
> +	When a store accesses a memory localtion that has TTE.mcd=1,
> +	the task is running with ADI enabled (PSTATE.mcde=1), and the ADI
> +	tag in the address used (bits 63:60) does not match the tag set on
> +	the corresponding cacheline, a memory corruption trap occurs. By
> +	default, it is a disrupting trap and is sent to the hypervisor
> +	first. Hypervisor creates a sun4v error report and sends a
> +	resumable error (TT=0x7e) trap to the kernel. The kernel sends
> +	a SIGSEGV to the task that resulted in this trap with the following
> +	info:
> +
> +		siginfo.si_signo = SIGSEGV;
> +		siginfo.errno = 0;
> +		siginfo.si_code = SEGV_ADIDERR;
> +		siginfo.si_addr = addr; /* PC where first mismatch occurred */
> +		siginfo.si_trapno = 0;
> +
> +
> +Precise memory corruption
> +
> +	When a store accesses a memory location that has TTE.mcd=1,
> +	the task is running with ADI enabled (PSTATE.mcde=1), and the ADI
> +	tag in the address used (bits 63:60) does not match the tag set on
> +	the corresponding cacheline, a memory corruption trap occurs. If
> +	MCD precise exception is enabled (MCDPERR=1), a precise
> +	exception is sent to the kernel with TT=0x1a. The kernel sends
> +	a SIGSEGV to the task that resulted in this trap with the following
> +	info:
> +
> +		siginfo.si_signo = SIGSEGV;
> +		siginfo.errno = 0;
> +		siginfo.si_code = SEGV_ADIPERR;
> +		siginfo.si_addr = addr;	/* address that caused trap */
> +		siginfo.si_trapno = 0;
> +
> +	NOTE: ADI tag mismatch on a load always results in precise trap.
> +
> +
> +MCD disabled
> +
> +	When a task has not enabled ADI and attempts to set ADI version
> +	on a memory address, processor sends an MCD disabled trap. This
> +	trap is handled by hypervisor first and the hypervisor vectors this
> +	trap through to the kernel as Data Access Exception trap with
> +	fault type set to 0xa (invalid ASI). When this occurs, the kernel
> +	sends the task SIGSEGV signal with following info:
> +
> +		siginfo.si_signo = SIGSEGV;
> +		siginfo.errno = 0;
> +		siginfo.si_code = SEGV_ACCADI;
> +		siginfo.si_addr = addr;	/* address that caused trap */
> +		siginfo.si_trapno = 0;
> +
> +
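
As an aside, a userspace handler that tells the si_codes above apart
might look like this (illustrative sketch, not from the patch; installed
with sigaction() and SA_SIGINFO, and not async-signal-safe as written):

        static void segv_handler(int sig, siginfo_t *si, void *ctx)
        {
                switch (si->si_code) {
                case SEGV_ADIDERR:      /* disrupting tag mismatch */
                case SEGV_ADIPERR:      /* precise tag mismatch */
                        fprintf(stderr, "ADI mismatch at %p\n", si->si_addr);
                        break;
                case SEGV_ACCADI:       /* ADI not enabled on the range */
                        fprintf(stderr, "ADI disabled at %p\n", si->si_addr);
                        break;
                }
                _exit(1);
        }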
> +Sample program to use ADI
> +-------------------------
> +
> +The following sample program illustrates how to use the ADI
> +functionality.
> +
> +#include <unistd.h>
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <elf.h>
> +#include <sys/ipc.h>
> +#include <sys/shm.h>
> +#include <sys/mman.h>
> +#include <asm/asi.h>
> +
> +#ifndef AT_ADI_BLKSZ
> +#define AT_ADI_BLKSZ	48
> +#endif
> +#ifndef AT_ADI_NBITS
> +#define AT_ADI_NBITS	49
> +#endif
> +
> +#ifndef PROT_ADI
> +#define PROT_ADI	0x10
> +#endif
> +
> +#define BUFFER_SIZE     32*1024*1024UL
> +
> +int main(int argc, char* argv[], char* envp[])
> +{
> +        unsigned long i, mcde, adi_blksz, adi_nbits;
> +        char *shmaddr, *tmp_addr, *end, *veraddr, *clraddr;
> +        int shmid, version;
> +	Elf64_auxv_t *auxv;
> +
> +	adi_blksz = 0;
> +
> +	while(*envp++ != NULL);
> +	for (auxv = (Elf64_auxv_t *)envp; auxv->a_type != AT_NULL; auxv++) {
> +		switch (auxv->a_type) {
> +		case AT_ADI_BLKSZ:
> +			adi_blksz = auxv->a_un.a_val;
> +			break;
> +		case AT_ADI_NBITS:
> +			adi_nbits = auxv->a_un.a_val;
> +			break;
> +		}
> +	}
> +	if (adi_blksz == 0) {
> +		fprintf(stderr, "Oops! ADI is not supported\n");
> +		exit(1);
> +	}
> +
> +	printf("ADI capabilities:\n");
> +	printf("\tBlock size = %ld\n", adi_blksz);
> +	printf("\tNumber of bits = %ld\n", adi_nbits);
> +
> +        if ((shmid = shmget(2, BUFFER_SIZE,
> +                                IPC_CREAT | SHM_R | SHM_W)) < 0) {
> +                perror("shmget failed");
> +                exit(1);
> +        }
> +
> +        shmaddr = shmat(shmid, NULL, 0);
> +        if (shmaddr == (char *)-1) {
> +                perror("shm attach failed");
> +                shmctl(shmid, IPC_RMID, NULL);
> +                exit(1);
> +        }
> +
> +	if (mprotect(shmaddr, BUFFER_SIZE, PROT_READ|PROT_WRITE|PROT_ADI)) {
> +		perror("mprotect failed");
> +		goto err_out;
> +	}
> +
> +        /* Set the ADI version tag on the shm segment
> +         */
> +        version = 10;
> +        tmp_addr = shmaddr;
> +        end = shmaddr + BUFFER_SIZE;
> +        while (tmp_addr < end) {
> +                asm volatile(
> +                        "stxa %1, [%0]0x90\n\t"
> +                        :
> +                        : "r" (tmp_addr), "r" (version));
> +                tmp_addr += adi_blksz;
> +        }
> +	asm volatile("membar #Sync\n\t");
> +
> +        /* Create a versioned address from the normal address by placing
> +	 * version tag in the upper adi_nbits bits
> +         */
> +        tmp_addr = (void *) ((unsigned long)shmaddr << adi_nbits);
> +        tmp_addr = (void *) ((unsigned long)tmp_addr >> adi_nbits);
> +        veraddr = (void *) (((unsigned long)version << (64-adi_nbits))
> +                        | (unsigned long)tmp_addr);
> +
> +        printf("Starting the writes:\n");
> +        for (i = 0; i < BUFFER_SIZE; i++) {
> +                veraddr[i] = (char)(i);
> +                if (!(i % (1024 * 1024)))
> +                        printf(".");
> +        }
> +        printf("\n");
> +
> +        printf("Verifying data...");
> +	fflush(stdout);
> +        for (i = 0; i < BUFFER_SIZE; i++)
> +                if (veraddr[i] != (char)i)
> +                        printf("\nIndex %lu mismatched\n", i);
> +        printf("Done.\n");
> +
> +        /* Disable ADI and clean up
> +         */
> +	if (mprotect(shmaddr, BUFFER_SIZE, PROT_READ|PROT_WRITE)) {
> +		perror("mprotect failed");
> +		goto err_out;
> +	}
> +
> +        if (shmdt((const void *)shmaddr) != 0)
> +                perror("Detach failure");
> +        shmctl(shmid, IPC_RMID, NULL);
> +
> +        exit(0);
> +
> +err_out:
> +        if (shmdt((const void *)shmaddr) != 0)
> +                perror("Detach failure");
> +        shmctl(shmid, IPC_RMID, NULL);
> +        exit(1);
> +}
> diff --git a/arch/sparc/include/asm/mman.h b/arch/sparc/include/asm/mman.h
> index 59bb5938d852..b799796ad963 100644
> --- a/arch/sparc/include/asm/mman.h
> +++ b/arch/sparc/include/asm/mman.h
> @@ -6,5 +6,75 @@
> #ifndef __ASSEMBLY__
> #define arch_mmap_check(addr,len,flags)	sparc_mmap_check(addr,len)
> int sparc_mmap_check(unsigned long addr, unsigned long len);
> -#endif
> +
> +#ifdef CONFIG_SPARC64
> +#include <asm/adi_64.h>
> +
> +#define arch_calc_vm_prot_bits(prot, pkey) sparc_calc_vm_prot_bits(prot)
> +static inline unsigned long sparc_calc_vm_prot_bits(unsigned long prot)
> +{
> +	if (prot & PROT_ADI) {
> +		struct pt_regs *regs;
> +
> +		if (!current->mm->context.adi) {
> +			regs = task_pt_regs(current);
> +			regs->tstate |= TSTATE_MCDE;
> +			current->mm->context.adi = true;

If a process is multi-threaded when it enables ADI on some memory for
the first time, TSTATE_MCDE will only be set for the calling thread
and it will not be possible to enable it for the other threads.
One possible way to handle this is to enable TSTATE_MCDE for all user
threads when they are initialized if adi_capable() returns true.
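
Roughly (sketch under that assumption, where p is the new task being set
up in copy_thread()):

        /* Hypothetical: arm ADI for every new user thread on capable
         * hardware so a later mprotect(PROT_ADI) from any thread
         * behaves the same as from the first one.
         */
        if (adi_capable())
                task_pt_regs(p)->tstate |= TSTATE_MCDE;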


> +		}
> +		return VM_SPARC_ADI;
> +	} else {
> +		return 0;
> +	}
> +}
> +
> +#define arch_vm_get_page_prot(vm_flags) sparc_vm_get_page_prot(vm_flags)
> +static inline pgprot_t sparc_vm_get_page_prot(unsigned long vm_flags)
> +{
> +	return (vm_flags & VM_SPARC_ADI) ? __pgprot(_PAGE_MCD_4V) : __pgprot(0);
> +}
> +
> +#define arch_validate_prot(prot, addr) sparc_validate_prot(prot, addr)
> +static inline int sparc_validate_prot(unsigned long prot, unsigned long addr)
> +{
> +	if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_ADI))
> +		return 0;
> +	if (prot & PROT_ADI) {
> +		if (!adi_capable())
> +			return 0;
> +
> +		/* ADI tags can not be set on read-only memory, so it makes
> +		 * sense to enable ADI on writable memory only.
> +		 */
> +		if (!(prot & PROT_WRITE))
> +			return 0;

This prevents the use of ADI for the legitimate case where shared memory
is mapped read/write for a master process but mapped read-only for a
client process.  The master process could set the tags and communicate
the expected tag values to the client.
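
For instance (hypothetical flow; the read-only side is exactly what the
PROT_WRITE check above rejects):

        /* Master: writable mapping, enables ADI and sets the tags. */
        buf = shmat(shmid, NULL, 0);
        mprotect(buf, size, PROT_READ | PROT_WRITE | PROT_ADI);
        /* ... stxa tag stores as in the sample program ... */

        /* Client: read-only mapping.  With the check above this
         * mprotect() fails, even though the client only needs to read
         * through versioned pointers the master set up.
         */
        buf = shmat(shmid, NULL, SHM_RDONLY);
        mprotect(buf, size, PROT_READ | PROT_ADI);      /* rejected */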


> +
> +		if (addr) {
> +			struct vm_area_struct *vma;
> +
> +			vma = find_vma(current->mm, addr);
> +			if (vma) {
> +				/* ADI can not be enabled on PFN
> +				 * mapped pages
> +				 */
> +				if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
> +					return 0;
> +
> +				/* Mergeable pages can become unmergeable
> +				 * if ADI is enabled on them even if they
> +				 * have identical data on them. This can be
> +				 * because ADI enabled pages with identical
> +				 * data may still not have identical ADI
> +				 * tags on them. Disallow ADI on mergeable
> +				 * pages.
> +				 */
> +				if (vma->vm_flags & VM_MERGEABLE)
> +					return 0;
> +			}
> +		}
> +	}
> +	return 1;
> +}
> +#endif /* CONFIG_SPARC64 */
> +
> +#endif /* __ASSEMBLY__ */
> #endif /* __SPARC_MMAN_H__ */
> diff --git a/arch/sparc/include/asm/mmu_64.h b/arch/sparc/include/asm/mmu_64.h
> index 83b36a5371ff..a65d51ebe00b 100644
> --- a/arch/sparc/include/asm/mmu_64.h
> +++ b/arch/sparc/include/asm/mmu_64.h
> @@ -89,6 +89,20 @@ struct tsb_config {
> #define MM_NUM_TSBS	1
> #endif
> 
> +/* ADI tags are stored when a page is swapped out and the storage for
> + * tags is allocated dynamically. There is a tag storage descriptor
> + * associated with each set of tag storage pages. Tag storage descriptors
> + * are allocated dynamically. Since the kernel allocates a full page
> + * to hold tag storage descriptors, we can store up to
> + * PAGE_SIZE/sizeof(tag storage descriptor) descriptors on that page.
> + */
> +typedef struct {
> +	unsigned long	start;		/* Start address for this tag storage */
> +	unsigned long	end;		/* Last address for tag storage */
> +	unsigned char	*tags;		/* Where the tags are */
> +	unsigned long	tag_users;	/* number of references to descriptor */
> +} tag_storage_desc_t;
> +
> typedef struct {
> 	spinlock_t		lock;
> 	unsigned long		sparc64_ctx_val;
> @@ -96,6 +110,9 @@ typedef struct {
> 	unsigned long		thp_pte_count;
> 	struct tsb_config	tsb_block[MM_NUM_TSBS];
> 	struct hv_tsb_descr	tsb_descr[MM_NUM_TSBS];
> +	bool			adi;
> +	tag_storage_desc_t	*tag_store;
> +	spinlock_t		tag_lock;
> } mm_context_t;
> 
> #endif /* !__ASSEMBLY__ */
> diff --git a/arch/sparc/include/asm/mmu_context_64.h b/arch/sparc/include/asm/mmu_context_64.h
> index 2cddcda4f85f..68de059551f9 100644
> --- a/arch/sparc/include/asm/mmu_context_64.h
> +++ b/arch/sparc/include/asm/mmu_context_64.h
> @@ -9,6 +9,7 @@
> #include <linux/mm_types.h>
> 
> #include <asm/spitfire.h>
> +#include <asm/adi_64.h>
> #include <asm-generic/mm_hooks.h>
> 
> static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
> @@ -129,6 +130,48 @@ static inline void switch_mm(struct mm_struct *old_mm, struct mm_struct *mm, str
> 
> #define deactivate_mm(tsk,mm)	do { } while (0)
> #define activate_mm(active_mm, mm) switch_mm(active_mm, mm, NULL)
> +
> +#define  __HAVE_ARCH_START_CONTEXT_SWITCH
> +static inline void arch_start_context_switch(struct task_struct *prev)
> +{
> +	/* Save the current state of MCDPER register for the process
> +	 * we are switching from
> +	 */
> +	if (adi_capable()) {
> +		register unsigned long tmp_mcdper;
> +
> +		__asm__ __volatile__(
> +			".word 0x83438000\n\t"	/* rd  %mcdper, %g1 */
> +			"mov %%g1, %0\n\t"
> +			: "=r" (tmp_mcdper)
> +			:
> +			: "g1");
> +		if (tmp_mcdper)
> +			set_tsk_thread_flag(prev, TIF_MCDPER);
> +		else
> +			clear_tsk_thread_flag(prev, TIF_MCDPER);
> +	}
> +}
> +
> +#define finish_arch_post_lock_switch	finish_arch_post_lock_switch
> +static inline void finish_arch_post_lock_switch(void)
> +{
> +	/* Restore the state of MCDPER register for the new process
> +	 * just switched to.
> +	 */
> +	if (adi_capable()) {
> +		register unsigned long tmp_mcdper;
> +
> +		tmp_mcdper = test_thread_flag(TIF_MCDPER);
> +		__asm__ __volatile__(
> +			"mov %0, %%g1\n\t"
> +			".word 0x9d800001\n\t"	/* wr %g0, %g1, %mcdper" */
> +			:
> +			: "ir" (tmp_mcdper)
> +			: "g1");
> +	}
> +}
> +
> #endif /* !(__ASSEMBLY__) */
> 
> #endif /* !(__SPARC64_MMU_CONTEXT_H) */
> diff --git a/arch/sparc/include/asm/page_64.h b/arch/sparc/include/asm/page_64.h
> index 5961b2d8398a..dc582c5611f8 100644
> --- a/arch/sparc/include/asm/page_64.h
> +++ b/arch/sparc/include/asm/page_64.h
> @@ -46,6 +46,10 @@ struct page;
> void clear_user_page(void *addr, unsigned long vaddr, struct page *page);
> #define copy_page(X,Y)	memcpy((void *)(X), (void *)(Y), PAGE_SIZE)
> void copy_user_page(void *to, void *from, unsigned long vaddr, struct page *topage);
> +#define __HAVE_ARCH_COPY_USER_HIGHPAGE
> +struct vm_area_struct;
> +void copy_user_highpage(struct page *to, struct page *from,
> +			unsigned long vaddr, struct vm_area_struct *vma);
> 
> /* Unlike sparc32, sparc64's parameter passing API is more
>  * sane in that structures which as small enough are passed
> diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
> index af045061f41e..51da342c392d 100644
> --- a/arch/sparc/include/asm/pgtable_64.h
> +++ b/arch/sparc/include/asm/pgtable_64.h
> @@ -18,6 +18,7 @@
> #include <asm/types.h>
> #include <asm/spitfire.h>
> #include <asm/asi.h>
> +#include <asm/adi.h>
> #include <asm/page.h>
> #include <asm/processor.h>
> 
> @@ -570,6 +571,18 @@ static inline pte_t pte_mkspecial(pte_t pte)
> 	return pte;
> }
> 
> +static inline pte_t pte_mkmcd(pte_t pte)
> +{
> +	pte_val(pte) |= _PAGE_MCD_4V;
> +	return pte;
> +}
> +
> +static inline pte_t pte_mknotmcd(pte_t pte)
> +{
> +	pte_val(pte) &= ~_PAGE_MCD_4V;
> +	return pte;
> +}
> +
> static inline unsigned long pte_young(pte_t pte)
> {
> 	unsigned long mask;
> @@ -1001,6 +1014,39 @@ int page_in_phys_avail(unsigned long paddr);
> int remap_pfn_range(struct vm_area_struct *, unsigned long, unsigned long,
> 		    unsigned long, pgprot_t);
> 
> +void adi_restore_tags(struct mm_struct *mm, struct vm_area_struct *vma,
> +		      unsigned long addr, pte_t pte);
> +
> +int adi_save_tags(struct mm_struct *mm, struct vm_area_struct *vma,
> +		  unsigned long addr, pte_t oldpte);
> +
> +#define __HAVE_ARCH_DO_SWAP_PAGE
> +static inline void arch_do_swap_page(struct mm_struct *mm,
> +				     struct vm_area_struct *vma,
> +				     unsigned long addr,
> +				     pte_t pte, pte_t oldpte)
> +{
> +	/* If this is a new page being mapped in, there can be no
> +	 * ADI tags stored away for this page. Skip looking for
> +	 * stored tags
> +	 */
> +	if (pte_none(oldpte))
> +		return;
> +
> +	if (adi_state.enabled && (pte_val(pte) & _PAGE_MCD_4V))
> +		adi_restore_tags(mm, vma, addr, pte);
> +}
> +
> +#define __HAVE_ARCH_UNMAP_ONE
> +static inline int arch_unmap_one(struct mm_struct *mm,
> +				 struct vm_area_struct *vma,
> +				 unsigned long addr, pte_t oldpte)
> +{
> +	if (adi_state.enabled && (pte_val(oldpte) & _PAGE_MCD_4V))
> +		return adi_save_tags(mm, vma, addr, oldpte);
> +	return 0;
> +}
> +
> static inline int io_remap_pfn_range(struct vm_area_struct *vma,
> 				     unsigned long from, unsigned long pfn,
> 				     unsigned long size, pgprot_t prot)
> diff --git a/arch/sparc/include/asm/thread_info_64.h b/arch/sparc/include/asm/thread_info_64.h
> index 38a24f257b85..9c04acb1f9af 100644
> --- a/arch/sparc/include/asm/thread_info_64.h
> +++ b/arch/sparc/include/asm/thread_info_64.h
> @@ -190,7 +190,7 @@ register struct thread_info *current_thread_info_reg asm("g6");
>  *       in using in assembly, else we can't use the mask as
>  *       an immediate value in instructions such as andcc.
>  */
> -/* flag bit 12 is available */
> +#define TIF_MCDPER		12	/* Precise MCD exception */
> #define TIF_MEMDIE		13	/* is terminating due to OOM killer */
> #define TIF_POLLING_NRFLAG	14
> 
> diff --git a/arch/sparc/include/asm/trap_block.h b/arch/sparc/include/asm/trap_block.h
> index ec9c04de3664..b283e940671a 100644
> --- a/arch/sparc/include/asm/trap_block.h
> +++ b/arch/sparc/include/asm/trap_block.h
> @@ -72,6 +72,8 @@ struct sun4v_1insn_patch_entry {
> };
> extern struct sun4v_1insn_patch_entry __sun4v_1insn_patch,
> 	__sun4v_1insn_patch_end;
> +extern struct sun4v_1insn_patch_entry __sun_m7_1insn_patch,
> +	__sun_m7_1insn_patch_end;
> 
> struct sun4v_2insn_patch_entry {
> 	unsigned int	addr;
> diff --git a/arch/sparc/include/uapi/asm/mman.h b/arch/sparc/include/uapi/asm/mman.h
> index 9765896ecb2c..a72c03397345 100644
> --- a/arch/sparc/include/uapi/asm/mman.h
> +++ b/arch/sparc/include/uapi/asm/mman.h
> @@ -5,6 +5,8 @@
> 
> /* SunOS'ified... */
> 
> +#define PROT_ADI	0x10		/* ADI enabled */
> +
> #define MAP_RENAME      MAP_ANONYMOUS   /* In SunOS terminology */
> #define MAP_NORESERVE   0x40            /* don't reserve swap pages */
> #define MAP_INHERIT     0x80            /* SunOS doesn't do this, but... */
> diff --git a/arch/sparc/kernel/adi_64.c b/arch/sparc/kernel/adi_64.c
> index 9fbb5dd4a7bf..83c1e36ae5fa 100644
> --- a/arch/sparc/kernel/adi_64.c
> +++ b/arch/sparc/kernel/adi_64.c
> @@ -7,10 +7,24 @@
>  * This work is licensed under the terms of the GNU GPL, version 2.
>  */
> #include <linux/init.h>
> +#include <linux/slab.h>
> +#include <linux/mm_types.h>
> #include <asm/mdesc.h>
> #include <asm/adi_64.h>
> +#include <asm/mmu_64.h>
> +#include <asm/pgtable_64.h>
> +
> +/* Each page of storage for ADI tags can accommodate tags for 128
> + * pages. When ADI enabled pages are being swapped out, it would be
> + * prudent to allocate at least enough tag storage space to accommodate
> + * SWAPFILE_CLUSTER number of pages. Allocate enough tag storage to
> + * store tags for four SWAPFILE_CLUSTER pages to reduce need for
> + * further allocations for same vma.
> + */
> +#define TAG_STORAGE_PAGES	8
> 
> struct adi_config adi_state;
> +EXPORT_SYMBOL(adi_state);
> 
> /* mdesc_adi_init() : Parse machine description provided by the
>  *	hypervisor to detect ADI capabilities
> @@ -78,6 +92,19 @@ void __init mdesc_adi_init(void)
> 		goto adi_not_found;
> 	adi_state.caps.nbits = *val;
> 
> +	/* Some of the code to support swapping ADI tags is written
> +	 * the assumption that two ADI tags can fit inside one byte. If
> +	 * this assumption is broken by a future architecture change,
> +	 * that code will have to be revisited. If that were to happen,
> +	 * disable ADI support so we do not get unpredictable results
> +	 * with programs trying to use ADI and their pages getting
> +	 * swapped out
> +	 */
> +	if (adi_state.caps.nbits > 4) {
> +		pr_warn("WARNING: ADI tag size >4 on this platform. Disabling ADI support\n");
> +		adi_state.enabled = false;
> +	}
> +
> 	mdesc_release(hp);
> 	return;
> 
> @@ -88,3 +115,253 @@ void __init mdesc_adi_init(void)
> 	if (hp)
> 		mdesc_release(hp);
> }
> +
> +tag_storage_desc_t *find_tag_store(struct mm_struct *mm,
> +				   struct vm_area_struct *vma,
> +				   unsigned long addr)
> +{
> +	tag_storage_desc_t *tag_desc = NULL;
> +	unsigned long i, max_desc, flags;
> +
> +	/* Check if this vma already has tag storage descriptor
> +	 * allocated for it.
> +	 */
> +	max_desc = PAGE_SIZE/sizeof(tag_storage_desc_t);
> +	if (mm->context.tag_store) {
> +		tag_desc = mm->context.tag_store;
> +		spin_lock_irqsave(&mm->context.tag_lock, flags);
> +		for (i = 0; i < max_desc; i++) {
> +			if ((addr >= tag_desc->start) &&
> +			    ((addr + PAGE_SIZE - 1) <= tag_desc->end))
> +				break;
> +			tag_desc++;
> +		}
> +		spin_unlock_irqrestore(&mm->context.tag_lock, flags);
> +
> +		/* If no matching entries were found, this must be a
> +		 * freshly allocated page
> +		 */
> +		if (i >= max_desc)
> +			tag_desc = NULL;
> +	}
> +
> +	return tag_desc;
> +}
> +
> +tag_storage_desc_t *alloc_tag_store(struct mm_struct *mm,
> +				    struct vm_area_struct *vma,
> +				    unsigned long addr)
> +{
> +	unsigned char *tags;
> +	unsigned long i, size, max_desc, flags;
> +	tag_storage_desc_t *tag_desc, *open_desc;
> +	unsigned long end_addr, hole_start, hole_end;
> +
> +	max_desc = PAGE_SIZE/sizeof(tag_storage_desc_t);
> +	open_desc = NULL;
> +	hole_start = 0;
> +	hole_end = ULONG_MAX;
> +	end_addr = addr + PAGE_SIZE - 1;
> +
> +	/* Check if this vma already has tag storage descriptor
> +	 * allocated for it.
> +	 */
> +	spin_lock_irqsave(&mm->context.tag_lock, flags);
> +	if (mm->context.tag_store) {
> +		tag_desc = mm->context.tag_store;
> +
> +		/* Look for a matching entry for this address. While doing
> +		 * that, look for the first open slot as well and find
> +		 * the hole in already allocated range where this request
> +		 * will fit in.
> +		 */
> +		for (i = 0; i < max_desc; i++) {
> +			if (tag_desc->tag_users == 0) {
> +				if (open_desc == NULL)
> +					open_desc = tag_desc;
> +			} else {
> +				if ((addr >= tag_desc->start) &&
> +				    (tag_desc->end >= (addr + PAGE_SIZE - 1))) {
> +					tag_desc->tag_users++;
> +					goto out;
> +				}
> +			}
> +			if ((tag_desc->start > end_addr) &&
> +			    (tag_desc->start < hole_end))
> +				hole_end = tag_desc->start;
> +			if ((tag_desc->end < addr) &&
> +			    (tag_desc->end > hole_start))
> +				hole_start = tag_desc->end;
> +			tag_desc++;
> +		}
> +
> +	} else {
> +		size = sizeof(tag_storage_desc_t)*max_desc;
> +		mm->context.tag_store = kzalloc(size, GFP_NOIO|__GFP_NOWARN);

The spin_lock_irqsave() above means that all but level 15 interrupts
will be disabled when kzalloc() is called.  If kzalloc() can sleep
there's a risk of deadlock.
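
One conventional way around that, sketched below (allocate before taking
the lock, keeping GFP_NOIO for the reclaim path, publish under the lock,
and free the unused copy on a race):

        tag_storage_desc_t *new_store;

        new_store = kzalloc(size, GFP_NOIO | __GFP_NOWARN);
        spin_lock_irqsave(&mm->context.tag_lock, flags);
        if (!mm->context.tag_store) {
                mm->context.tag_store = new_store;
                new_store = NULL;
        }
        spin_unlock_irqrestore(&mm->context.tag_lock, flags);
        kfree(new_store);       /* no-op if it was published */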


> +		if (mm->context.tag_store == NULL) {
> +			tag_desc = NULL;
> +			goto out;
> +		}
> +		tag_desc = mm->context.tag_store;
> +		for (i = 0; i < max_desc; i++, tag_desc++)
> +			tag_desc->tag_users = 0;
> +		open_desc = mm->context.tag_store;
> +		i = 0;
> +	}
> +
> +	/* Check if we ran out of tag storage descriptors */
> +	if (open_desc == NULL) {
> +		tag_desc = NULL;
> +		goto out;
> +	}
> +
> +	/* Mark this tag descriptor slot in use and then initialize it */
> +	tag_desc = open_desc;
> +	tag_desc->tag_users = 1;
> +
> +	/* Tag storage has not been allocated for this vma and space
> +	 * is available in tag storage descriptor. Since this page is
> +	 * being swapped out, there is high probability subsequent pages
> +	 * in the VMA will be swapped out as well. Allocate pages to
> +	 * store tags for as many pages in this vma as possible but not
> +	 * more than TAG_STORAGE_PAGES. Each byte in tag space holds
> +	 * two ADI tags since each ADI tag is 4 bits. Each ADI tag
> +	 * covers adi_blksize() worth of addresses. Check if the hole is
> +	 * big enough to accommodate full address range for using
> +	 * TAG_STORAGE_PAGES number of tag pages.
> +	 */
> +	size = TAG_STORAGE_PAGES * PAGE_SIZE;
> +	end_addr = addr + (size*2*adi_blksize()) - 1;

Since size > PAGE_SIZE, end_addr could theoretically overflow.


> +	if (hole_end < end_addr) {
> +		/* Available hole is too small on the upper end of
> +		 * address. Can we expand the range towards the lower
> +		 * address and maximize use of this slot?
> +		 */
> +		unsigned long tmp_addr;
> +
> +		end_addr = hole_end - 1;
> +		tmp_addr = end_addr - (size*2*adi_blksize()) + 1;

Similarly, tmp_addr may underflow.
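
Guards along these lines would cover both ends (sketch):

        end_addr = addr + (size * 2 * adi_blksize()) - 1;
        if (end_addr < addr)            /* wrapped past the top of VA space */
                end_addr = ULONG_MAX;
        ...
        tmp_addr = end_addr - (size * 2 * adi_blksize()) + 1;
        if (tmp_addr > end_addr)        /* wrapped below zero */
                tmp_addr = 0;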

> +		if (tmp_addr < hole_start) {
> +			/* Available hole is restricted on lower address
> +			 * end as well
> +			 */
> +			tmp_addr = hole_start + 1;
> +		}
> +		addr = tmp_addr;
> +		size = (end_addr + 1 - addr)/(2*adi_blksize());
> +		size = (size + (PAGE_SIZE-adi_blksize()))/PAGE_SIZE;
> +		size = size * PAGE_SIZE;
> +	}
> +	tags = kzalloc(size, GFP_NOIO|__GFP_NOWARN);

Potential deadlock due to PIL=14?


> +	if (tags == NULL) {
> +		tag_desc->tag_users = 0;
> +		tag_desc = NULL;
> +		goto out;
> +	}
> +	tag_desc->start = addr;
> +	tag_desc->tags = tags;
> +	tag_desc->end = end_addr;
> +
> +out:
> +	spin_unlock_irqrestore(&mm->context.tag_lock, flags);
> +	return tag_desc;
> +}
> +
> +void del_tag_store(tag_storage_desc_t *tag_desc, struct mm_struct *mm)
> +{
> +	unsigned long flags;
> +	unsigned char *tags = NULL;
> +
> +	spin_lock_irqsave(&mm->context.tag_lock, flags);
> +	tag_desc->tag_users--;
> +	if (tag_desc->tag_users == 0) {
> +		tag_desc->start = tag_desc->end = 0;
> +		/* Do not free up the tag storage space allocated
> +		 * by the first descriptor. This is persistent
> +		 * emergency tag storage space for the task.
> +		 */
> +		if (tag_desc != mm->context.tag_store) {
> +			tags = tag_desc->tags;
> +			tag_desc->tags = NULL;
> +		}
> +	}
> +	spin_unlock_irqrestore(&mm->context.tag_lock, flags);
> +	kfree(tags);
> +}
> +
> +#define tag_start(addr, tag_desc)		\
> +	((tag_desc)->tags + ((addr - (tag_desc)->start)/(2*adi_blksize())))
> +
> +/* Retrieve any saved ADI tags for the page being swapped back in and
> + * restore these tags to the newly allocated physical page.
> + */
> +void adi_restore_tags(struct mm_struct *mm, struct vm_area_struct *vma,
> +		      unsigned long addr, pte_t pte)
> +{
> +	unsigned char *tag;
> +	tag_storage_desc_t *tag_desc;
> +	unsigned long paddr, tmp, version1, version2;
> +
> +	/* Check if the swapped out page has an ADI version
> +	 * saved. If yes, restore version tag to the newly
> +	 * allocated page.
> +	 */
> +	tag_desc = find_tag_store(mm, vma, addr);
> +	if (tag_desc == NULL)
> +		return;
> +
> +	tag = tag_start(addr, tag_desc);
> +	paddr = pte_val(pte) & _PAGE_PADDR_4V;
> +	for (tmp = paddr; tmp < (paddr+PAGE_SIZE); tmp += adi_blksize()) {
> +		version1 = (*tag) >> 4;
> +		version2 = (*tag) & 0x0f;
> +		*tag++ = 0;
> +		asm volatile("stxa %0, [%1] %2\n\t"
> +			:
> +			: "r" (version1), "r" (tmp),
> +			  "i" (ASI_MCD_REAL));
> +		tmp += adi_blksize();
> +		asm volatile("stxa %0, [%1] %2\n\t"
> +			:
> +			: "r" (version2), "r" (tmp),
> +			  "i" (ASI_MCD_REAL));
> +	}
> +	asm volatile("membar #Sync\n\t");
> +
> +	/* Check and mark this tag space for release later if
> +	 * the swapped in page was the last user of tag space
> +	 */
> +	del_tag_store(tag_desc, mm);
> +}
> +
> +/* A page is about to be swapped out. Save any ADI tags associated with
> + * this physical page so they can be restored later when the page is swapped
> + * back in.
> + */
> +int adi_save_tags(struct mm_struct *mm, struct vm_area_struct *vma,
> +		  unsigned long addr, pte_t oldpte)
> +{
> +	unsigned char *tag;
> +	tag_storage_desc_t *tag_desc;
> +	unsigned long version1, version2, paddr, tmp;
> +
> +	tag_desc = alloc_tag_store(mm, vma, addr);
> +	if (tag_desc == NULL)
> +		return -1;
> +
> +	tag = tag_start(addr, tag_desc);
> +	paddr = pte_val(oldpte) & _PAGE_PADDR_4V;
> +	for (tmp = paddr; tmp < (paddr+PAGE_SIZE); tmp += adi_blksize()) {
> +		asm volatile("ldxa [%1] %2, %0\n\t"
> +				: "=r" (version1)
> +				: "r" (tmp), "i" (ASI_MCD_REAL));
> +		tmp += adi_blksize();
> +		asm volatile("ldxa [%1] %2, %0\n\t"
> +				: "=r" (version2)
> +				: "r" (tmp), "i" (ASI_MCD_REAL));
> +		*tag = (version1 << 4) | version2;
> +		tag++;
> +	}
> +
> +	return 0;
> +}
> diff --git a/arch/sparc/kernel/etrap_64.S b/arch/sparc/kernel/etrap_64.S
> index 1276ca2567ba..7be33bf45cff 100644
> --- a/arch/sparc/kernel/etrap_64.S
> +++ b/arch/sparc/kernel/etrap_64.S
> @@ -132,7 +132,33 @@ etrap_save:	save	%g2, -STACK_BIAS, %sp
> 		stx	%g6, [%sp + PTREGS_OFF + PT_V9_G6]
> 		stx	%g7, [%sp + PTREGS_OFF + PT_V9_G7]
> 		or	%l7, %l0, %l7
> -		sethi	%hi(TSTATE_TSO | TSTATE_PEF), %l0
> +661:		sethi	%hi(TSTATE_TSO | TSTATE_PEF), %l0
> +		/*
> +		 * If userspace is using ADI, it could potentially pass
> +		 * a pointer with version tag embedded in it. To maintain
> +		 * the ADI security, we must enable PSTATE.mcde. Userspace
> +		 * would have already set TTE.mcd in an earlier call to
> +		 * kernel and set the version tag for the address being
> +		 * dereferenced. Setting PSTATE.mcde would ensure any
> +		 * access to userspace data through a system call honors
> +		 * ADI and does not allow a rogue app to bypass ADI by
> +		 * using system calls. Setting PSTATE.mcde only affects
> +		 * accesses to virtual addresses that have TTE.mcd set.
> +		 * Set PMCDPER to ensure any exceptions caused by ADI
> +		 * version tag mismatch are exposed before system call
> +		 * returns to userspace. Setting PMCDPER affects only
> +		 * writes to virtual addresses that have TTE.mcd set and
> +		 * have a version tag set as well.
> +		 */
> +		.section .sun_m7_1insn_patch, "ax"
> +		.word	661b
> +		sethi	%hi(TSTATE_TSO | TSTATE_PEF | TSTATE_MCDE), %l0
> +		.previous
> +661:		nop
> +		.section .sun_m7_1insn_patch, "ax"
> +		.word	661b
> +		.word 0xaf902001	/* wrpr %g0, 1, %pmcdper */

I commented on this on the last patch series revision.  PMCDPER could be
set once when each CPU is configured rather than every time the kernel
is entered.  Since it's never cleared, setting it repeatedly unnecessarily
impacts the performance of etrap.

Also, there are places in rtrap where PSTATE is set before continuing
execution in the kernel.  These should also be patched to set TSTATE_MCDE.
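
The per-CPU alternative would be something like this (sketch; placement
in the CPU bring-up path is an assumption, the opcode is the one the
patch already uses):

        /* Run once per CPU instead of on every etrap.
         * 0xaf902001 encodes "wrpr %g0, 1, %pmcdper".
         */
        if (adi_capable())
                __asm__ __volatile__(".word 0xaf902001");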


> +		.previous
> 		or	%l7, %l0, %l7
> 		wrpr	%l2, %tnpc
> 		wrpr	%l7, (TSTATE_PRIV | TSTATE_IE), %tstate
> diff --git a/arch/sparc/kernel/process_64.c b/arch/sparc/kernel/process_64.c
> index b96104da5bd6..defa5723dfa6 100644
> --- a/arch/sparc/kernel/process_64.c
> +++ b/arch/sparc/kernel/process_64.c
> @@ -664,6 +664,31 @@ int copy_thread(unsigned long clone_flags, unsigned long sp,
> 	return 0;
> }
> 
> +/* TIF_MCDPER in thread info flags for current task is updated lazily upon
> + * a context switch. Update this flag in the current task's thread flags
> + * before dup so the dup'd task will inherit the current TIF_MCDPER flag.
> + */
> +int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
> +{
> +	if (adi_capable()) {
> +		register unsigned long tmp_mcdper;
> +
> +		__asm__ __volatile__(
> +			".word 0x83438000\n\t"	/* rd  %mcdper, %g1 */
> +			"mov %%g1, %0\n\t"
> +			: "=r" (tmp_mcdper)
> +			:
> +			: "g1");
> +		if (tmp_mcdper)
> +			set_thread_flag(TIF_MCDPER);
> +		else
> +			clear_thread_flag(TIF_MCDPER);
> +	}
> +
> +	*dst = *src;
> +	return 0;
> +}
> +
> typedef struct {
> 	union {
> 		unsigned int	pr_regs[32];
> diff --git a/arch/sparc/kernel/setup_64.c b/arch/sparc/kernel/setup_64.c
> index 422b17880955..a9da205da394 100644
> --- a/arch/sparc/kernel/setup_64.c
> +++ b/arch/sparc/kernel/setup_64.c
> @@ -240,6 +240,12 @@ void sun4v_patch_1insn_range(struct sun4v_1insn_patch_entry *start,
> 	}
> }
> 
> +void sun_m7_patch_1insn_range(struct sun4v_1insn_patch_entry *start,
> +			     struct sun4v_1insn_patch_entry *end)
> +{
> +	sun4v_patch_1insn_range(start, end);
> +}
> +
> void sun4v_patch_2insn_range(struct sun4v_2insn_patch_entry *start,
> 			     struct sun4v_2insn_patch_entry *end)
> {
> @@ -289,9 +295,12 @@ static void __init sun4v_patch(void)
> 	sun4v_patch_2insn_range(&__sun4v_2insn_patch,
> 				&__sun4v_2insn_patch_end);
> 	if (sun4v_chip_type == SUN4V_CHIP_SPARC_M7 ||
> -	    sun4v_chip_type == SUN4V_CHIP_SPARC_SN)
> +	    sun4v_chip_type == SUN4V_CHIP_SPARC_SN) {
> +		sun_m7_patch_1insn_range(&__sun_m7_1insn_patch,
> +					 &__sun_m7_1insn_patch_end);
> 		sun_m7_patch_2insn_range(&__sun_m7_2insn_patch,
> 					 &__sun_m7_2insn_patch_end);

Why not call sun4v_patch_1insn_range() and sun4v_patch_2insn_range()
here instead of adding new functions that just call these functions?
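
i.e. the wrapper could go away and the call site could simply be:

        sun4v_patch_1insn_range(&__sun_m7_1insn_patch,
                                &__sun_m7_1insn_patch_end);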

Anthony

> +		}
> 
> 	sun4v_hvapi_init();
> }
> diff --git a/arch/sparc/kernel/vmlinux.lds.S b/arch/sparc/kernel/vmlinux.lds.S
> index 572db686f845..20a70682cce7 100644
> --- a/arch/sparc/kernel/vmlinux.lds.S
> +++ b/arch/sparc/kernel/vmlinux.lds.S
> @@ -144,6 +144,11 @@ SECTIONS
> 		*(.pause_3insn_patch)
> 		__pause_3insn_patch_end = .;
> 	}
> +	.sun_m7_1insn_patch : {
> +		__sun_m7_1insn_patch = .;
> +		*(.sun_m7_1insn_patch)
> +		__sun_m7_1insn_patch_end = .;
> +	}
> 	.sun_m7_2insn_patch : {
> 		__sun_m7_2insn_patch = .;
> 		*(.sun_m7_2insn_patch)
> diff --git a/arch/sparc/mm/gup.c b/arch/sparc/mm/gup.c
> index cd0e32bbcb1d..579f7ae75b35 100644
> --- a/arch/sparc/mm/gup.c
> +++ b/arch/sparc/mm/gup.c
> @@ -11,6 +11,7 @@
> #include <linux/pagemap.h>
> #include <linux/rwsem.h>
> #include <asm/pgtable.h>
> +#include <asm/adi.h>
> 
> /*
>  * The performance critical leaf functions are made noinline otherwise gcc
> @@ -157,6 +158,24 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
> 	pgd_t *pgdp;
> 	int nr = 0;
> 
> +#ifdef CONFIG_SPARC64
> +	if (adi_capable()) {
> +		long addr = start;
> +
> +		/* If userspace has passed a versioned address, kernel
> +		 * will not find it in the VMAs since it does not store
> +		 * the version tags in the list of VMAs. Storing version
> +		 * tags in list of VMAs is impractical since they can be
> +		 * changed any time from userspace without dropping into
> +		 * kernel. Any address search in VMAs will be done with
> +		 * non-versioned addresses. Ensure the ADI version bits
> +		 * are dropped here by sign extending the last bit before
> +		 * ADI bits. IOMMU does not implement version tags.
> +		 */
> +		addr = (addr << (long)adi_nbits()) >> (long)adi_nbits();
> +		start = addr;
> +	}
> +#endif
> 	start &= PAGE_MASK;
> 	addr = start;
> 	len = (unsigned long) nr_pages << PAGE_SHIFT;
> @@ -187,6 +206,24 @@ int get_user_pages_fast(unsigned long start, int nr_pages, int write,
> 	pgd_t *pgdp;
> 	int nr = 0;
> 
> +#ifdef CONFIG_SPARC64
> +	if (adi_capable()) {
> +		long addr = start;
> +
> +		/* If userspace has passed a versioned address, kernel
> +		 * will not find it in the VMAs since it does not store
> +		 * the version tags in the list of VMAs. Storing version
> +		 * tags in list of VMAs is impractical since they can be
> +		 * changed any time from userspace without dropping into
> +		 * kernel. Any address search in VMAs will be done with
> +		 * non-versioned addresses. Ensure the ADI version bits
> +		 * are dropped here by sign extending the last bit before
> +		 * ADI bits. IOMMU does not implement version tags.
> +		 */
> +		addr = (addr << (long)adi_nbits()) >> (long)adi_nbits();
> +		start = addr;
> +	}
> +#endif
> 	start &= PAGE_MASK;
> 	addr = start;
> 	len = (unsigned long) nr_pages << PAGE_SHIFT;
> diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c
> index 88855e383b34..487ed1f1ce86 100644
> --- a/arch/sparc/mm/hugetlbpage.c
> +++ b/arch/sparc/mm/hugetlbpage.c
> @@ -177,8 +177,20 @@ pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,
> 			 struct page *page, int writeable)
> {
> 	unsigned int shift = huge_page_shift(hstate_vma(vma));
> +	pte_t pte;
> 
> -	return hugepage_shift_to_tte(entry, shift);
> +	pte = hugepage_shift_to_tte(entry, shift);
> +
> +#ifdef CONFIG_SPARC64
> +	/* If this vma has ADI enabled on it, turn on TTE.mcd
> +	 */
> +	if (vma->vm_flags & VM_SPARC_ADI)
> +		return pte_mkmcd(pte);
> +	else
> +		return pte_mknotmcd(pte);
> +#else
> +	return pte;
> +#endif
> }
> 
> static unsigned int sun4v_huge_tte_to_shift(pte_t entry)
> diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
> index 3c40ebd50f92..94854e7e833e 100644
> --- a/arch/sparc/mm/init_64.c
> +++ b/arch/sparc/mm/init_64.c
> @@ -3087,3 +3087,36 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
> 		do_flush_tlb_kernel_range(start, end);
> 	}
> }
> +
> +void copy_user_highpage(struct page *to, struct page *from,
> +	unsigned long vaddr, struct vm_area_struct *vma)
> +{
> +	char *vfrom, *vto;
> +
> +	vfrom = kmap_atomic(from);
> +	vto = kmap_atomic(to);
> +	copy_user_page(vto, vfrom, vaddr, to);
> +	kunmap_atomic(vto);
> +	kunmap_atomic(vfrom);
> +
> +	/* If this page has ADI enabled, copy over any ADI tags
> +	 * as well
> +	 */
> +	if (vma->vm_flags & VM_SPARC_ADI) {
> +		unsigned long pfrom, pto, i, adi_tag;
> +
> +		pfrom = page_to_phys(from);
> +		pto = page_to_phys(to);
> +
> +		for (i = pfrom; i < (pfrom + PAGE_SIZE); i += adi_blksize()) {
> +			asm volatile("ldxa [%1] %2, %0\n\t"
> +					: "=r" (adi_tag)
> +					:  "r" (i), "i" (ASI_MCD_REAL));
> +			asm volatile("stxa %0, [%1] %2\n\t"
> +					:
> +					: "r" (adi_tag), "r" (pto),
> +					  "i" (ASI_MCD_REAL));
> +			pto += adi_blksize();
> +		}
> +	}
> +}
> diff --git a/arch/sparc/mm/tsb.c b/arch/sparc/mm/tsb.c
> index 0d4b998c7d7b..6518cc42056b 100644
> --- a/arch/sparc/mm/tsb.c
> +++ b/arch/sparc/mm/tsb.c
> @@ -545,6 +545,9 @@ int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
> 
> 	mm->context.sparc64_ctx_val = 0UL;
> 
> +	mm->context.tag_store = NULL;
> +	spin_lock_init(&mm->context.tag_lock);
> +
> #if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE)
> 	/* We reset them to zero because the fork() page copying
> 	 * will re-increment the counters as the parent PTEs are
> @@ -610,4 +613,22 @@ void destroy_context(struct mm_struct *mm)
> 	}
> 
> 	spin_unlock_irqrestore(&ctx_alloc_lock, flags);
> +
> +	/* If ADI tag storage was allocated for this task, free it */
> +	if (mm->context.tag_store) {
> +		tag_storage_desc_t *tag_desc;
> +		unsigned long max_desc;
> +		unsigned char *tags;
> +
> +		tag_desc = mm->context.tag_store;
> +		max_desc = PAGE_SIZE/sizeof(tag_storage_desc_t);
> +		for (i = 0; i < max_desc; i++) {
> +			tags = tag_desc->tags;
> +			tag_desc->tags = NULL;
> +			kfree(tags);
> +			tag_desc++;
> +		}
> +		kfree(mm->context.tag_store);
> +		mm->context.tag_store = NULL;
> +	}
> }
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index b7aa3932e6d4..c0972114036f 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -231,6 +231,9 @@ extern unsigned int kobjsize(const void *objp);
> # define VM_GROWSUP	VM_ARCH_1
> #elif defined(CONFIG_IA64)
> # define VM_GROWSUP	VM_ARCH_1
> +#elif defined(CONFIG_SPARC64)
> +# define VM_SPARC_ADI	VM_ARCH_1	/* Uses ADI tag for access control */
> +# define VM_ARCH_CLEAR	VM_SPARC_ADI
> #elif !defined(CONFIG_MMU)
> # define VM_MAPPED_COPY	VM_ARCH_1	/* T if mapped copy of data (nommu mmap) */
> #endif
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 216184af0e19..bb82399816ef 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -1797,6 +1797,10 @@ int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
> 		if (*vm_flags & VM_SAO)
> 			return 0;
> #endif
> +#ifdef VM_SPARC_ADI
> +		if (*vm_flags & VM_SPARC_ADI)
> +			return 0;
> +#endif
> 
> 		if (!test_bit(MMF_VM_MERGEABLE, &mm->flags)) {
> 			err = __ksm_enter(mm);
> -- 
> 2.11.0
> 
> --
> To unsubscribe from this list: send the line "unsubscribe sparclinux" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH v7 9/9] sparc64: Add support for ADI (Application Data Integrity)
@ 2017-08-25 22:31     ` Anthony Yznaga
  0 siblings, 0 replies; 86+ messages in thread
From: Anthony Yznaga @ 2017-08-25 22:31 UTC (permalink / raw)
  To: Khalid Aziz
  Cc: David Miller, dave.hansen, corbet, Bob Picco, steven.sistare,
	pasha.tatashin, mike.kravetz, mingo, nitin.m.gupta,
	kirill.shutemov, tom.hromatka, eric.saint.etienne, allen.pais,
	cmetcalf, akpm, geert, tklauser, atish.patra, vijay.ac.kumar,
	peterz, mhocko, jack, lstoakes, hughd, thomas.tai,
	paul.gortmaker, ross.zwisler, dave.jiang, willy, ying.huang,
	zhongjiang, minchan, vegard.nossum, imbrenda, aneesh.kumar,
	aarcange, linux-doc, linux-kernel, sparclinux, linux-mm,
	Khalid Aziz


> On Aug 9, 2017, at 2:26 PM, Khalid Aziz <khalid.aziz@oracle.com> wrote:
> 
> ADI is a new feature supported on SPARC M7 and newer processors to allow
> hardware to catch rogue accesses to memory. ADI is supported for data
> fetches only and not instruction fetches. An app can enable ADI on its
> data pages, set version tags on them and use versioned addresses to
> access the data pages. Upper bits of the address contain the version
> tag. On M7 processors, upper four bits (bits 63-60) contain the version
> tag. If a rogue app attempts to access ADI enabled data pages, its
> access is blocked and processor generates an exception. Please see
> Documentation/sparc/adi.txt for further details.
> 
> This patch extends mprotect to enable ADI (TSTATE.mcde), enable/disable
> MCD (Memory Corruption Detection) on selected memory ranges, enable
> TTE.mcd in PTEs, return ADI parameters to userspace and save/restore ADI
> version tags on page swap out/in or migration. ADI is not enabled by
> default for any task. A task must explicitly enable ADI on a memory
> range and set version tag for ADI to be effective for the task.
> 
> Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
> Cc: Khalid Aziz <khalid@gonehiking.org>
> ---
> v7:
> 	- Enhanced arch_validate_prot() to enable ADI only on writable
> 	  addresses backed by physical RAM
> 	- Added support for saving/restoring ADI tags for each ADI
> 	  block size address range on a page on swap in/out
> 	- Added code to copy ADI tags on COW
> 	- Updated values for auxiliary vectors to not conflict with
> 	  values on other architectures to avoid conflict in glibc. glibc
> 	  consolidates all auxiliary vectors into its headers and
> 	  duplicate values in consolidated header are problematic
> 	- Disable same page merging on ADI enabled pages since ADI tags
> 	  may not match on pages with identical data
> 	- Broke the patch up further into smaller patches
> 
> v6:
> 	- Eliminated instructions to read and write PSTATE as well as
> 	  MCDPER and PMCDPER on every access to userspace addresses
> 	  by setting PSTATE and PMCDPER correctly upon entry into
> 	  kernel. PSTATE.mcde and PMCDPER are set upon entry into
> 	  kernel when running on an M7 processor. PSTATE.mcde being
> 	  set only affects memory accesses that have TTE.mcd set.
> 	  PMCDPER being set only affects writes to memory addresses
> 	  that have TTE.mcd set. This ensures any faults caused by
> 	  ADI tag mismatch on a write are exposed before kernel returns
> 	  to userspace.
> 
> v5:
> 	- Fixed indentation issues and instructions in assembly code
> 	- Removed CONFIG_SPARC64 from mdesc.c
> 	- Changed to maintain state of MCDPER register in thread info
> 	  flags as opposed to in mm context. MCDPER is a per-thread
> 	  state and belongs in thread info flag as opposed to mm context
> 	  which is shared across threads. Added comments to clarify this
> 	  is a lazily maintained state and must be updated on context
> 	  switch and copy_process()
> 	- Updated code to use the new arch_do_swap_page() and
> 	  arch_unmap_one() functions
> 
> v4:
> 	- Broke patch up into smaller patches
> 
> v3:
> 	- Removed CONFIG_SPARC_ADI
> 	- Replaced prctl commands with mprotect
> 	- Added auxiliary vectors for ADI parameters
> 	- Enabled ADI for swappable pages
> 
> v2:
> 	- Fixed a build error
> 
> Documentation/sparc/adi.txt             | 272 +++++++++++++++++++++++++++++++
> arch/sparc/include/asm/mman.h           |  72 ++++++++-
> arch/sparc/include/asm/mmu_64.h         |  17 ++
> arch/sparc/include/asm/mmu_context_64.h |  43 +++++
> arch/sparc/include/asm/page_64.h        |   4 +
> arch/sparc/include/asm/pgtable_64.h     |  46 ++++++
> arch/sparc/include/asm/thread_info_64.h |   2 +-
> arch/sparc/include/asm/trap_block.h     |   2 +
> arch/sparc/include/uapi/asm/mman.h      |   2 +
> arch/sparc/kernel/adi_64.c              | 277 ++++++++++++++++++++++++++++++++
> arch/sparc/kernel/etrap_64.S            |  28 +++-
> arch/sparc/kernel/process_64.c          |  25 +++
> arch/sparc/kernel/setup_64.c            |  11 +-
> arch/sparc/kernel/vmlinux.lds.S         |   5 +
> arch/sparc/mm/gup.c                     |  37 +++++
> arch/sparc/mm/hugetlbpage.c             |  14 +-
> arch/sparc/mm/init_64.c                 |  33 ++++
> arch/sparc/mm/tsb.c                     |  21 +++
> include/linux/mm.h                      |   3 +
> mm/ksm.c                                |   4 +
> 20 files changed, 913 insertions(+), 5 deletions(-)
> create mode 100644 Documentation/sparc/adi.txt
> 
> diff --git a/Documentation/sparc/adi.txt b/Documentation/sparc/adi.txt
> new file mode 100644
> index 000000000000..383bc65fec1e
> --- /dev/null
> +++ b/Documentation/sparc/adi.txt
> @@ -0,0 +1,272 @@
> +Application Data Integrity (ADI)
> +================================
> +
> +SPARC M7 processor adds the Application Data Integrity (ADI) feature.
> +ADI allows a task to set version tags on any subset of its address
> +space. Once ADI is enabled and version tags are set for ranges of
> +address space of a task, the processor will compare the tag in pointers
> +to memory in these ranges to the version set by the application
> +previously. Access to memory is granted only if the tag in the given
> +pointer matches the tag set by the application. In case of a mismatch,
> +the processor raises an exception.
> +
> +Following steps must be taken by a task to enable ADI fully:
> +
> +1. Set the user mode PSTATE.mcde bit. This acts as master switch for
> +   the task's entire address space to enable/disable ADI for the task.
> +
> +2. Set TTE.mcd bit on any TLB entries that correspond to the range of
> +   addresses ADI is being enabled on. MMU checks the version tag only
> +   on the pages that have TTE.mcd bit set.
> +
> +3. Set the version tag for virtual addresses using stxa instruction
> +   and one of the MCD specific ASIs. Each stxa instruction sets the
> +   given tag for one ADI block size worth of bytes. This step must
> +   be repeated for the entire page to set tags for the whole page.
> +
> +ADI block size for the platform is provided by the hypervisor to kernel
> +in machine description tables. Hypervisor also provides the number of
> +top bits in the virtual address that specify the version tag.  Once
> +version tag has been set for a memory location, the tag is stored in the
> +physical memory and the same tag must be present in the ADI version tag
> +bits of the virtual address being presented to the MMU. For example on
> +SPARC M7 processor, MMU uses bits 63-60 for version tags and ADI block
> +size is the same as the cacheline size, which is 64 bytes. A task that sets ADI
> +version to, say 10, on a range of memory, must access that memory using
> +virtual addresses that contain 0xa in bits 63-60.
> +
> +ADI is enabled on a set of pages using mprotect() with PROT_ADI flag.
> +When ADI is enabled on a set of pages by a task for the first time,
> +kernel sets the PSTATE.mcde bit for the task. Version tags for memory
> +addresses are set with an stxa instruction on the addresses using
> +ASI_MCD_PRIMARY or ASI_MCD_ST_BLKINIT_PRIMARY. ADI block size is
> +provided by the hypervisor to the kernel.  Kernel returns the value of
> +ADI block size to userspace using auxiliary vector along with other ADI
> +info. Following auxiliary vectors are provided by the kernel:
> +
> +	AT_ADI_BLKSZ	ADI block size. This is the granularity and
> +			alignment, in bytes, of ADI versioning.
> +	AT_ADI_NBITS	Number of ADI version bits in the VA

The previous patch series also defined AT_ADI_UEONADI.  Why was that
removed?

> +
> +
> +IMPORTANT NOTES:
> +
> +- Version tag values of 0x0 and 0xf are reserved.

The documentation should probably state more specifically that an
in-memory tag value of 0x0 or 0xf is treated as "match all" by the HW,
meaning that a mismatch exception will never be generated regardless
of the tag bits set in the VA accessing the memory.
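
A tiny illustration (sketch, reusing the stxa/ASI 0x90 idiom from the
sample program; tmp_addr is assumed to point into an ADI-enabled
mapping):

        /* Store tag 0 ("match all") on one block; subsequent loads and
         * stores through ANY version bits in the VA then complete
         * without a mismatch trap.
         */
        asm volatile("stxa %%g0, [%0]0x90" : : "r" (tmp_addr));
        asm volatile("membar #Sync");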

> +
> +- Version tags are set on virtual addresses from userspace even though
> +  tags are stored in physical memory. Tags are set on a physical page
> +  after it has been allocated to a task and a pte has been created for
> +  it.
> +
> +- When a task frees a memory page it had set version tags on, the page
> +  goes back to free page pool. When this page is re-allocated to a task,
> +  kernel clears the page using block initialization ASI which clears the
> +  version tags as well for the page. If a page allocated to a task is
> +  freed and allocated back to the same task, old version tags set by the
> +  task on that page will no longer be present.

The specifics should be included here, too, so someone doesn't have
to guess what's going on if they make changes and the tags are no longer
cleared.  The HW clears the tag for a cacheline for block initializing
stores to 64-byte aligned addresses if PSTATE.mcde=0 or TTE.mcd=0.
PSTATE.mce is set when executing in the kernel, but pages are cleared
using kernel physical mapping VAs which are mapped with TTE.mcd=0.

Another HW behavior that should be mentioned is that tag mismatches
are not detected for non-faulting loads.

> +
> +- Kernel does not set any tags for user pages and it is entirely a
> +  task's responsibility to set any version tags. Kernel does ensure the
> +  version tags are preserved if a page is swapped out to the disk and
> +  swapped back in. It also preserves that version tags if a page is
> +  migrated.

I only have a cursory understanding of how page migration works, but
I could not see how the tags would be preserved if a page were migrated.
I figured the place to copy the tags would be migrate_page_copy(), but
I don't see changes there.


> +
> +- ADI works for any size pages. A userspace task need not be aware of
> +  page size when using ADI. It can simply select a virtual address
> +  range, enable ADI on the range using mprotect() and set version tags
> +  for the entire range. mprotect() ensures range is aligned to page size
> +  and is a multiple of page size.
> +
> +
> +
> +ADI related traps
> +-----------------
> +
> +With ADI enabled, following new traps may occur:
> +
> +Disrupting memory corruption
> +
> +	When a store accesses a memory localtion that has TTE.mcd=1,
> +	the task is running with ADI enabled (PSTATE.mcde=1), and the ADI
> +	tag in the address used (bits 63:60) does not match the tag set on
> +	the corresponding cacheline, a memory corruption trap occurs. By
> +	default, it is a disrupting trap and is sent to the hypervisor
> +	first. Hypervisor creates a sun4v error report and sends a
> +	resumable error (TT=0x7e) trap to the kernel. The kernel sends
> +	a SIGSEGV to the task that resulted in this trap with the following
> +	info:
> +
> +		siginfo.si_signo = SIGSEGV;
> +		siginfo.errno = 0;
> +		siginfo.si_code = SEGV_ADIDERR;
> +		siginfo.si_addr = addr; /* PC where first mismatch occurred */
> +		siginfo.si_trapno = 0;
> +
> +
> +Precise memory corruption
> +
> +	When a store accesses a memory location that has TTE.mcd=1,
> +	the task is running with ADI enabled (PSTATE.mcde=1), and the ADI
> +	tag in the address used (bits 63:60) does not match the tag set on
> +	the corresponding cacheline, a memory corruption trap occurs. If
> +	MCD precise exception is enabled (MCDPERR=1), a precise
> +	exception is sent to the kernel with TT=0x1a. The kernel sends
> +	a SIGSEGV to the task that resulted in this trap with the following
> +	info:
> +
> +		siginfo.si_signo = SIGSEGV;
> +		siginfo.errno = 0;
> +		siginfo.si_code = SEGV_ADIPERR;
> +		siginfo.si_addr = addr;	/* address that caused trap */
> +		siginfo.si_trapno = 0;
> +
> +	NOTE: ADI tag mismatch on a load always results in precise trap.
> +
> +
> +MCD disabled
> +
> +	When a task has not enabled ADI and attempts to set ADI version
> +	on a memory address, processor sends an MCD disabled trap. This
> +	trap is handled by hypervisor first and the hypervisor vectors this
> +	trap through to the kernel as Data Access Exception trap with
> +	fault type set to 0xa (invalid ASI). When this occurs, the kernel
> +	sends the task SIGSEGV signal with following info:
> +
> +		siginfo.si_signo = SIGSEGV;
> +		siginfo.errno = 0;
> +		siginfo.si_code = SEGV_ACCADI;
> +		siginfo.si_addr = addr;	/* address that caused trap */
> +		siginfo.si_trapno = 0;
> +
> +
> +Sample program to use ADI
> +-------------------------
> +
> +Following sample program is meant to illustrate how to use the ADI
> +functionality.
> +
> +#include <unistd.h>
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <elf.h>
> +#include <sys/ipc.h>
> +#include <sys/shm.h>
> +#include <sys/mman.h>
> +#include <asm/asi.h>
> +
> +#ifndef AT_ADI_BLKSZ
> +#define AT_ADI_BLKSZ	48
> +#endif
> +#ifndef AT_ADI_NBITS
> +#define AT_ADI_NBITS	49
> +#endif
> +
> +#ifndef PROT_ADI
> +#define PROT_ADI	0x10
> +#endif
> +
> +#define BUFFER_SIZE     32*1024*1024UL
> +
> +main(int argc, char* argv[], char* envp[])
> +{
> +        unsigned long i, mcde, adi_blksz, adi_nbits;
> +        char *shmaddr, *tmp_addr, *end, *veraddr, *clraddr;
> +        int shmid, version;
> +	Elf64_auxv_t *auxv;
> +
> +	adi_blksz = 0;
> +
> +	while(*envp++ != NULL);
> +	for (auxv = (Elf64_auxv_t *)envp; auxv->a_type != AT_NULL; auxv++) {
> +		switch (auxv->a_type) {
> +		case AT_ADI_BLKSZ:
> +			adi_blksz = auxv->a_un.a_val;
> +			break;
> +		case AT_ADI_NBITS:
> +			adi_nbits = auxv->a_un.a_val;
> +			break;
> +		}
> +	}
> +	if (adi_blksz == 0) {
> +		fprintf(stderr, "Oops! ADI is not supported\n");
> +		exit(1);
> +	}
> +
> +	printf("ADI capabilities:\n");
> +	printf("\tBlock size = %ld\n", adi_blksz);
> +	printf("\tNumber of bits = %ld\n", adi_nbits);
> +
> +        if ((shmid = shmget(2, BUFFER_SIZE,
> +                                IPC_CREAT | SHM_R | SHM_W)) < 0) {
> +                perror("shmget failed");
> +                exit(1);
> +        }
> +
> +        shmaddr = shmat(shmid, NULL, 0);
> +        if (shmaddr == (char *)-1) {
> +                perror("shm attach failed");
> +                shmctl(shmid, IPC_RMID, NULL);
> +                exit(1);
> +        }
> +
> +	if (mprotect(shmaddr, BUFFER_SIZE, PROT_READ|PROT_WRITE|PROT_ADI)) {
> +		perror("mprotect failed");
> +		goto err_out;
> +	}
> +
> +        /* Set the ADI version tag on the shm segment
> +         */
> +        version = 10;
> +        tmp_addr = shmaddr;
> +        end = shmaddr + BUFFER_SIZE;
> +        while (tmp_addr < end) {
> +                asm volatile(
> +                        "stxa %1, [%0]0x90\n\t"
> +                        :
> +                        : "r" (tmp_addr), "r" (version));
> +                tmp_addr += adi_blksz;
> +        }
> +	asm volatile("membar #Sync\n\t");
> +
> +        /* Create a versioned address from the normal address by placing
> +	 * version tag in the upper adi_nbits bits
> +         */
> +        tmp_addr = (void *) ((unsigned long)shmaddr << adi_nbits);
> +        tmp_addr = (void *) ((unsigned long)tmp_addr >> adi_nbits);
> +        veraddr = (void *) (((unsigned long)version << (64-adi_nbits))
> +                        | (unsigned long)tmp_addr);
> +
> +        printf("Starting the writes:\n");
> +        for (i = 0; i < BUFFER_SIZE; i++) {
> +                veraddr[i] = (char)(i);
> +                if (!(i % (1024 * 1024)))
> +                        printf(".");
> +        }
> +        printf("\n");
> +
> +        printf("Verifying data...");
> +	fflush(stdout);
> +        for (i = 0; i < BUFFER_SIZE; i++)
> +                if (veraddr[i] != (char)i)
> +                        printf("\nIndex %lu mismatched\n", i);
> +        printf("Done.\n");
> +
> +        /* Disable ADI and clean up
> +         */
> +	if (mprotect(shmaddr, BUFFER_SIZE, PROT_READ|PROT_WRITE)) {
> +		perror("mprotect failed");
> +		goto err_out;
> +	}
> +
> +        if (shmdt((const void *)shmaddr) != 0)
> +                perror("Detach failure");
> +        shmctl(shmid, IPC_RMID, NULL);
> +
> +        exit(0);
> +
> +err_out:
> +        if (shmdt((const void *)shmaddr) != 0)
> +                perror("Detach failure");
> +        shmctl(shmid, IPC_RMID, NULL);
> +        exit(1);
> +}
> diff --git a/arch/sparc/include/asm/mman.h b/arch/sparc/include/asm/mman.h
> index 59bb5938d852..b799796ad963 100644
> --- a/arch/sparc/include/asm/mman.h
> +++ b/arch/sparc/include/asm/mman.h
> @@ -6,5 +6,75 @@
> #ifndef __ASSEMBLY__
> #define arch_mmap_check(addr,len,flags)	sparc_mmap_check(addr,len)
> int sparc_mmap_check(unsigned long addr, unsigned long len);
> -#endif
> +
> +#ifdef CONFIG_SPARC64
> +#include <asm/adi_64.h>
> +
> +#define arch_calc_vm_prot_bits(prot, pkey) sparc_calc_vm_prot_bits(prot)
> +static inline unsigned long sparc_calc_vm_prot_bits(unsigned long prot)
> +{
> +	if (prot & PROT_ADI) {
> +		struct pt_regs *regs;
> +
> +		if (!current->mm->context.adi) {
> +			regs = task_pt_regs(current);
> +			regs->tstate |= TSTATE_MCDE;
> +			current->mm->context.adi = true;

If a process is multi-threaded when it enables ADI on some memory for
the first time, TSTATE_MCDE will only be set for the calling thread
and it will not be possible to enable it for the other threads.
One possible way to handle this is to enable TSTATE_MCDE for all user
threads when they are initialized if adi_capable() returns true.


> +		}
> +		return VM_SPARC_ADI;
> +	} else {
> +		return 0;
> +	}
> +}
> +
> +#define arch_vm_get_page_prot(vm_flags) sparc_vm_get_page_prot(vm_flags)
> +static inline pgprot_t sparc_vm_get_page_prot(unsigned long vm_flags)
> +{
> +	return (vm_flags & VM_SPARC_ADI) ? __pgprot(_PAGE_MCD_4V) : __pgprot(0);
> +}
> +
> +#define arch_validate_prot(prot, addr) sparc_validate_prot(prot, addr)
> +static inline int sparc_validate_prot(unsigned long prot, unsigned long addr)
> +{
> +	if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_ADI))
> +		return 0;
> +	if (prot & PROT_ADI) {
> +		if (!adi_capable())
> +			return 0;
> +
> +		/* ADI tags can not be set on read-only memory, so it makes
> +		 * sense to enable ADI on writable memory only.
> +		 */
> +		if (!(prot & PROT_WRITE))
> +			return 0;

This prevents the use of ADI for the legitimate case where shared memory
is mapped read/write for a master process but mapped read-only for a
client process.  The master process could set the tags and communicate
the expected tag values to the client.


> +
> +		if (addr) {
> +			struct vm_area_struct *vma;
> +
> +			vma = find_vma(current->mm, addr);
> +			if (vma) {
> +				/* ADI can not be enabled on PFN
> +				 * mapped pages
> +				 */
> +				if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
> +					return 0;
> +
> +				/* Mergeable pages can become unmergeable
> +				 * if ADI is enabled on them even if they
> +				 * have identical data on them. This can be
> +				 * because ADI enabled pages with identical
> +				 * data may still not have identical ADI
> +				 * tags on them. Disallow ADI on mergeable
> +				 * pages.
> +				 */
> +				if (vma->vm_flags & VM_MERGEABLE)
> +					return 0;
> +			}
> +		}
> +	}
> +	return 1;
> +}
> +#endif /* CONFIG_SPARC64 */
> +
> +#endif /* __ASSEMBLY__ */
> #endif /* __SPARC_MMAN_H__ */
> diff --git a/arch/sparc/include/asm/mmu_64.h b/arch/sparc/include/asm/mmu_64.h
> index 83b36a5371ff..a65d51ebe00b 100644
> --- a/arch/sparc/include/asm/mmu_64.h
> +++ b/arch/sparc/include/asm/mmu_64.h
> @@ -89,6 +89,20 @@ struct tsb_config {
> #define MM_NUM_TSBS	1
> #endif
> 
> +/* ADI tags are stored when a page is swapped out and the storage for
> + * tags is allocated dynamically. There is a tag storage descriptor
> + * associated with each set of tag storage pages. Tag storage descriptors
> + * are allocated dynamically. Since kernel will allocate a full page for
> + * each tag storage descriptor, we can store up to
> + * PAGE_SIZE/sizeof(tag storage descriptor) descriptors on that page.
> + */
> +typedef struct {
> +	unsigned long	start;		/* Start address for this tag storage */
> +	unsigned long	end;		/* Last address for tag storage */
> +	unsigned char	*tags;		/* Where the tags are */
> +	unsigned long	tag_users;	/* number of references to descriptor */
> +} tag_storage_desc_t;
> +
> typedef struct {
> 	spinlock_t		lock;
> 	unsigned long		sparc64_ctx_val;
> @@ -96,6 +110,9 @@ typedef struct {
> 	unsigned long		thp_pte_count;
> 	struct tsb_config	tsb_block[MM_NUM_TSBS];
> 	struct hv_tsb_descr	tsb_descr[MM_NUM_TSBS];
> +	bool			adi;
> +	tag_storage_desc_t	*tag_store;
> +	spinlock_t		tag_lock;
> } mm_context_t;
> 
> #endif /* !__ASSEMBLY__ */
> diff --git a/arch/sparc/include/asm/mmu_context_64.h b/arch/sparc/include/asm/mmu_context_64.h
> index 2cddcda4f85f..68de059551f9 100644
> --- a/arch/sparc/include/asm/mmu_context_64.h
> +++ b/arch/sparc/include/asm/mmu_context_64.h
> @@ -9,6 +9,7 @@
> #include <linux/mm_types.h>
> 
> #include <asm/spitfire.h>
> +#include <asm/adi_64.h>
> #include <asm-generic/mm_hooks.h>
> 
> static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
> @@ -129,6 +130,48 @@ static inline void switch_mm(struct mm_struct *old_mm, struct mm_struct *mm, str
> 
> #define deactivate_mm(tsk,mm)	do { } while (0)
> #define activate_mm(active_mm, mm) switch_mm(active_mm, mm, NULL)
> +
> +#define  __HAVE_ARCH_START_CONTEXT_SWITCH
> +static inline void arch_start_context_switch(struct task_struct *prev)
> +{
> +	/* Save the current state of MCDPER register for the process
> +	 * we are switching from
> +	 */
> +	if (adi_capable()) {
> +		register unsigned long tmp_mcdper;
> +
> +		__asm__ __volatile__(
> +			".word 0x83438000\n\t"	/* rd  %mcdper, %g1 */
> +			"mov %%g1, %0\n\t"
> +			: "=r" (tmp_mcdper)
> +			:
> +			: "g1");
> +		if (tmp_mcdper)
> +			set_tsk_thread_flag(prev, TIF_MCDPER);
> +		else
> +			clear_tsk_thread_flag(prev, TIF_MCDPER);
> +	}
> +}
> +
> +#define finish_arch_post_lock_switch	finish_arch_post_lock_switch
> +static inline void finish_arch_post_lock_switch(void)
> +{
> +	/* Restore the state of MCDPER register for the new process
> +	 * just switched to.
> +	 */
> +	if (adi_capable()) {
> +		register unsigned long tmp_mcdper;
> +
> +		tmp_mcdper = test_thread_flag(TIF_MCDPER);
> +		__asm__ __volatile__(
> +			"mov %0, %%g1\n\t"
> +			".word 0x9d800001\n\t"	/* wr %g0, %g1, %mcdper */
> +			:
> +			: "ir" (tmp_mcdper)
> +			: "g1");
> +	}
> +}
> +
> #endif /* !(__ASSEMBLY__) */
> 
> #endif /* !(__SPARC64_MMU_CONTEXT_H) */
> diff --git a/arch/sparc/include/asm/page_64.h b/arch/sparc/include/asm/page_64.h
> index 5961b2d8398a..dc582c5611f8 100644
> --- a/arch/sparc/include/asm/page_64.h
> +++ b/arch/sparc/include/asm/page_64.h
> @@ -46,6 +46,10 @@ struct page;
> void clear_user_page(void *addr, unsigned long vaddr, struct page *page);
> #define copy_page(X,Y)	memcpy((void *)(X), (void *)(Y), PAGE_SIZE)
> void copy_user_page(void *to, void *from, unsigned long vaddr, struct page *topage);
> +#define __HAVE_ARCH_COPY_USER_HIGHPAGE
> +struct vm_area_struct;
> +void copy_user_highpage(struct page *to, struct page *from,
> +			unsigned long vaddr, struct vm_area_struct *vma);
> 
> /* Unlike sparc32, sparc64's parameter passing API is more
>  * sane in that structures which are small enough are passed
> diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
> index af045061f41e..51da342c392d 100644
> --- a/arch/sparc/include/asm/pgtable_64.h
> +++ b/arch/sparc/include/asm/pgtable_64.h
> @@ -18,6 +18,7 @@
> #include <asm/types.h>
> #include <asm/spitfire.h>
> #include <asm/asi.h>
> +#include <asm/adi.h>
> #include <asm/page.h>
> #include <asm/processor.h>
> 
> @@ -570,6 +571,18 @@ static inline pte_t pte_mkspecial(pte_t pte)
> 	return pte;
> }
> 
> +static inline pte_t pte_mkmcd(pte_t pte)
> +{
> +	pte_val(pte) |= _PAGE_MCD_4V;
> +	return pte;
> +}
> +
> +static inline pte_t pte_mknotmcd(pte_t pte)
> +{
> +	pte_val(pte) &= ~_PAGE_MCD_4V;
> +	return pte;
> +}
> +
> static inline unsigned long pte_young(pte_t pte)
> {
> 	unsigned long mask;
> @@ -1001,6 +1014,39 @@ int page_in_phys_avail(unsigned long paddr);
> int remap_pfn_range(struct vm_area_struct *, unsigned long, unsigned long,
> 		    unsigned long, pgprot_t);
> 
> +void adi_restore_tags(struct mm_struct *mm, struct vm_area_struct *vma,
> +		      unsigned long addr, pte_t pte);
> +
> +int adi_save_tags(struct mm_struct *mm, struct vm_area_struct *vma,
> +		  unsigned long addr, pte_t oldpte);
> +
> +#define __HAVE_ARCH_DO_SWAP_PAGE
> +static inline void arch_do_swap_page(struct mm_struct *mm,
> +				     struct vm_area_struct *vma,
> +				     unsigned long addr,
> +				     pte_t pte, pte_t oldpte)
> +{
> +	/* If this is a new page being mapped in, there can be no
> +	 * ADI tags stored away for this page. Skip looking for
> +	 * stored tags
> +	 */
> +	if (pte_none(oldpte))
> +		return;
> +
> +	if (adi_state.enabled && (pte_val(pte) & _PAGE_MCD_4V))
> +		adi_restore_tags(mm, vma, addr, pte);
> +}
> +
> +#define __HAVE_ARCH_UNMAP_ONE
> +static inline int arch_unmap_one(struct mm_struct *mm,
> +				 struct vm_area_struct *vma,
> +				 unsigned long addr, pte_t oldpte)
> +{
> +	if (adi_state.enabled && (pte_val(oldpte) & _PAGE_MCD_4V))
> +		return adi_save_tags(mm, vma, addr, oldpte);
> +	return 0;
> +}
> +
> static inline int io_remap_pfn_range(struct vm_area_struct *vma,
> 				     unsigned long from, unsigned long pfn,
> 				     unsigned long size, pgprot_t prot)
> diff --git a/arch/sparc/include/asm/thread_info_64.h b/arch/sparc/include/asm/thread_info_64.h
> index 38a24f257b85..9c04acb1f9af 100644
> --- a/arch/sparc/include/asm/thread_info_64.h
> +++ b/arch/sparc/include/asm/thread_info_64.h
> @@ -190,7 +190,7 @@ register struct thread_info *current_thread_info_reg asm("g6");
>  *       in using in assembly, else we can't use the mask as
>  *       an immediate value in instructions such as andcc.
>  */
> -/* flag bit 12 is available */
> +#define TIF_MCDPER		12	/* Precise MCD exception */
> #define TIF_MEMDIE		13	/* is terminating due to OOM killer */
> #define TIF_POLLING_NRFLAG	14
> 
> diff --git a/arch/sparc/include/asm/trap_block.h b/arch/sparc/include/asm/trap_block.h
> index ec9c04de3664..b283e940671a 100644
> --- a/arch/sparc/include/asm/trap_block.h
> +++ b/arch/sparc/include/asm/trap_block.h
> @@ -72,6 +72,8 @@ struct sun4v_1insn_patch_entry {
> };
> extern struct sun4v_1insn_patch_entry __sun4v_1insn_patch,
> 	__sun4v_1insn_patch_end;
> +extern struct sun4v_1insn_patch_entry __sun_m7_1insn_patch,
> +	__sun_m7_1insn_patch_end;
> 
> struct sun4v_2insn_patch_entry {
> 	unsigned int	addr;
> diff --git a/arch/sparc/include/uapi/asm/mman.h b/arch/sparc/include/uapi/asm/mman.h
> index 9765896ecb2c..a72c03397345 100644
> --- a/arch/sparc/include/uapi/asm/mman.h
> +++ b/arch/sparc/include/uapi/asm/mman.h
> @@ -5,6 +5,8 @@
> 
> /* SunOS'ified... */
> 
> +#define PROT_ADI	0x10		/* ADI enabled */
> +
> #define MAP_RENAME      MAP_ANONYMOUS   /* In SunOS terminology */
> #define MAP_NORESERVE   0x40            /* don't reserve swap pages */
> #define MAP_INHERIT     0x80            /* SunOS doesn't do this, but... */
> diff --git a/arch/sparc/kernel/adi_64.c b/arch/sparc/kernel/adi_64.c
> index 9fbb5dd4a7bf..83c1e36ae5fa 100644
> --- a/arch/sparc/kernel/adi_64.c
> +++ b/arch/sparc/kernel/adi_64.c
> @@ -7,10 +7,24 @@
>  * This work is licensed under the terms of the GNU GPL, version 2.
>  */
> #include <linux/init.h>
> +#include <linux/slab.h>
> +#include <linux/mm_types.h>
> #include <asm/mdesc.h>
> #include <asm/adi_64.h>
> +#include <asm/mmu_64.h>
> +#include <asm/pgtable_64.h>
> +
> +/* Each page of storage for ADI tags can accommodate tags for 128
> + * pages. When ADI enabled pages are being swapped out, it would be
> + * prudent to allocate at least enough tag storage space to accommodate
> + * SWAPFILE_CLUSTER number of pages. Allocate enough tag storage to
> + * store tags for four SWAPFILE_CLUSTER pages to reduce need for
> + * further allocations for same vma.
> + */
> +#define TAG_STORAGE_PAGES	8
> 
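
(For concreteness, assuming 8 KB pages and a 64-byte ADI block size:
each tag byte covers 2 * 64 = 128 bytes, so one 8 KB tag page covers
8192 * 128 bytes = 1 MB, i.e. tags for 128 pages, and the eight pages of
TAG_STORAGE_PAGES cover 1024 pages, or four clusters with
SWAPFILE_CLUSTER = 256.)
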
> struct adi_config adi_state;
> +EXPORT_SYMBOL(adi_state);
> 
> /* mdesc_adi_init() : Parse machine description provided by the
>  *	hypervisor to detect ADI capabilities
> @@ -78,6 +92,19 @@ void __init mdesc_adi_init(void)
> 		goto adi_not_found;
> 	adi_state.caps.nbits = *val;
> 
> +	/* Some of the code to support swapping ADI tags is written with
> +	 * the assumption that two ADI tags can fit inside one byte. If
> +	 * this assumption is broken by a future architecture change,
> +	 * that code will have to be revisited. If that were to happen,
> +	 * disable ADI support so we do not get unpredictable results
> +	 * with programs trying to use ADI and their pages getting
> +	 * swapped out
> +	 */
> +	if (adi_state.caps.nbits > 4) {
> +		pr_warn("WARNING: ADI tag size >4 on this platform. Disabling ADI support\n");
> +		adi_state.enabled = false;
> +	}
> +
> 	mdesc_release(hp);
> 	return;
> 
> @@ -88,3 +115,253 @@ void __init mdesc_adi_init(void)
> 	if (hp)
> 		mdesc_release(hp);
> }
> +
> +tag_storage_desc_t *find_tag_store(struct mm_struct *mm,
> +				   struct vm_area_struct *vma,
> +				   unsigned long addr)
> +{
> +	tag_storage_desc_t *tag_desc = NULL;
> +	unsigned long i, max_desc, flags;
> +
> +	/* Check if this vma already has tag storage descriptor
> +	 * allocated for it.
> +	 */
> +	max_desc = PAGE_SIZE/sizeof(tag_storage_desc_t);
> +	if (mm->context.tag_store) {
> +		tag_desc = mm->context.tag_store;
> +		spin_lock_irqsave(&mm->context.tag_lock, flags);
> +		for (i = 0; i < max_desc; i++) {
> +			if ((addr >= tag_desc->start) &&
> +			    ((addr + PAGE_SIZE - 1) <= tag_desc->end))
> +				break;
> +			tag_desc++;
> +		}
> +		spin_unlock_irqrestore(&mm->context.tag_lock, flags);
> +
> +		/* If no matching entries were found, this must be a
> +		 * freshly allocated page
> +		 */
> +		if (i >= max_desc)
> +			tag_desc = NULL;
> +	}
> +
> +	return tag_desc;
> +}
> +
> +tag_storage_desc_t *alloc_tag_store(struct mm_struct *mm,
> +				    struct vm_area_struct *vma,
> +				    unsigned long addr)
> +{
> +	unsigned char *tags;
> +	unsigned long i, size, max_desc, flags;
> +	tag_storage_desc_t *tag_desc, *open_desc;
> +	unsigned long end_addr, hole_start, hole_end;
> +
> +	max_desc = PAGE_SIZE/sizeof(tag_storage_desc_t);
> +	open_desc = NULL;
> +	hole_start = 0;
> +	hole_end = ULONG_MAX;
> +	end_addr = addr + PAGE_SIZE - 1;
> +
> +	/* Check if this vma already has tag storage descriptor
> +	 * allocated for it.
> +	 */
> +	spin_lock_irqsave(&mm->context.tag_lock, flags);
> +	if (mm->context.tag_store) {
> +		tag_desc = mm->context.tag_store;
> +
> +		/* Look for a matching entry for this address. While doing
> +		 * that, look for the first open slot as well and find
> +		 * the hole in already allocated range where this request
> +		 * will fit in.
> +		 */
> +		for (i = 0; i < max_desc; i++) {
> +			if (tag_desc->tag_users == 0) {
> +				if (open_desc == NULL)
> +					open_desc = tag_desc;
> +			} else {
> +				if ((addr >= tag_desc->start) &&
> +				    (tag_desc->end >= (addr + PAGE_SIZE - 1))) {
> +					tag_desc->tag_users++;
> +					goto out;
> +				}
> +			}
> +			if ((tag_desc->start > end_addr) &&
> +			    (tag_desc->start < hole_end))
> +				hole_end = tag_desc->start;
> +			if ((tag_desc->end < addr) &&
> +			    (tag_desc->end > hole_start))
> +				hole_start = tag_desc->end;
> +			tag_desc++;
> +		}
> +
> +	} else {
> +		size = sizeof(tag_storage_desc_t)*max_desc;
> +		mm->context.tag_store = kzalloc(size, GFP_NOIO|__GFP_NOWARN);

The spin_lock_irqsave() above means that all but level 15 interrupts
will be disabled when kzalloc() is called.  If kzalloc() can sleep,
there's a risk of deadlock.
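
One hedged way to restructure this (a sketch, not what the patch does;
tag_store here is a new local) is to allocate before taking the lock and
discard the allocation if another thread won the race:

	size = sizeof(tag_storage_desc_t) * max_desc;
	tag_store = kzalloc(size, GFP_NOIO | __GFP_NOWARN);
	spin_lock_irqsave(&mm->context.tag_lock, flags);
	if (mm->context.tag_store)
		kfree(tag_store);	/* lost the race; kfree won't sleep */
	else
		mm->context.tag_store = tag_store;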


> +		if (mm->context.tag_store == NULL) {
> +			tag_desc = NULL;
> +			goto out;
> +		}
> +		tag_desc = mm->context.tag_store;
> +		for (i = 0; i < max_desc; i++, tag_desc++)
> +			tag_desc->tag_users = 0;
> +		open_desc = mm->context.tag_store;
> +		i = 0;
> +	}
> +
> +	/* Check if we ran out of tag storage descriptors */
> +	if (open_desc == NULL) {
> +		tag_desc = NULL;
> +		goto out;
> +	}
> +
> +	/* Mark this tag descriptor slot in use and then initialize it */
> +	tag_desc = open_desc;
> +	tag_desc->tag_users = 1;
> +
> +	/* Tag storage has not been allocated for this vma and space
> +	 * is available in tag storage descriptor. Since this page is
> +	 * being swapped out, there is a high probability subsequent pages
> +	 * in the VMA will be swapped out as well. Allocate pages to
> +	 * store tags for as many pages in this vma as possible but not
> +	 * more than TAG_STORAGE_PAGES. Each byte in tag space holds
> +	 * two ADI tags since each ADI tag is 4 bits. Each ADI tag
> +	 * covers adi_blksize() worth of addresses. Check if the hole is
> +	 * big enough to accommodate the full address range for using
> +	 * TAG_STORAGE_PAGES number of tag pages.
> +	 */
> +	size = TAG_STORAGE_PAGES * PAGE_SIZE;
> +	end_addr = addr + (size*2*adi_blksize()) - 1;

Since size > PAGE_SIZE, end_addr could theoretically overflow.
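
A sketch of one possible guard (an assumption, not in the patch):

	end_addr = addr + (size*2*adi_blksize()) - 1;
	if (end_addr < addr) {		/* wrapped past ULONG_MAX */
		tag_desc->tag_users = 0;
		tag_desc = NULL;
		goto out;
	}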


> +	if (hole_end < end_addr) {
> +		/* Available hole is too small on the upper end of
> +		 * address. Can we expand the range towards the lower
> +		 * address and maximize use of this slot?
> +		 */
> +		unsigned long tmp_addr;
> +
> +		end_addr = hole_end - 1;
> +		tmp_addr = end_addr - (size*2*adi_blksize()) + 1;

Similarly, tmp_addr may underflow.
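
Again as a sketch only, the subtraction could be guarded so it is never
performed when it would wrap:

	end_addr = hole_end - 1;
	if (end_addr < (size*2*adi_blksize()) - 1)
		tmp_addr = hole_start + 1;	/* range fully clamped */
	else
		tmp_addr = end_addr - (size*2*adi_blksize()) + 1;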

> +		if (tmp_addr < hole_start) {
> +			/* Available hole is restricted on lower address
> +			 * end as well
> +			 */
> +			tmp_addr = hole_start + 1;
> +		}
> +		addr = tmp_addr;
> +		size = (end_addr + 1 - addr)/(2*adi_blksize());
> +		size = (size + (PAGE_SIZE-adi_blksize()))/PAGE_SIZE;
> +		size = size * PAGE_SIZE;
> +	}
> +	tags = kzalloc(size, GFP_NOIO|__GFP_NOWARN);

Potential deadlock due to PIL=14?


> +	if (tags == NULL) {
> +		tag_desc->tag_users = 0;
> +		tag_desc = NULL;
> +		goto out;
> +	}
> +	tag_desc->start = addr;
> +	tag_desc->tags = tags;
> +	tag_desc->end = end_addr;
> +
> +out:
> +	spin_unlock_irqrestore(&mm->context.tag_lock, flags);
> +	return tag_desc;
> +}
> +
> +void del_tag_store(tag_storage_desc_t *tag_desc, struct mm_struct *mm)
> +{
> +	unsigned long flags;
> +	unsigned char *tags = NULL;
> +
> +	spin_lock_irqsave(&mm->context.tag_lock, flags);
> +	tag_desc->tag_users--;
> +	if (tag_desc->tag_users == 0) {
> +		tag_desc->start = tag_desc->end = 0;
> +		/* Do not free up the tag storage space allocated
> +		 * by the first descriptor. This is persistent
> +		 * emergency tag storage space for the task.
> +		 */
> +		if (tag_desc != mm->context.tag_store) {
> +			tags = tag_desc->tags;
> +			tag_desc->tags = NULL;
> +		}
> +	}
> +	spin_unlock_irqrestore(&mm->context.tag_lock, flags);
> +	kfree(tags);
> +}
> +
> +#define tag_start(addr, tag_desc)		\
> +	((tag_desc)->tags + ((addr - (tag_desc)->start)/(2*adi_blksize())))
> +
> +/* Retrieve any saved ADI tags for the page being swapped back in and
> + * restore these tags to the newly allocated physical page.
> + */
> +void adi_restore_tags(struct mm_struct *mm, struct vm_area_struct *vma,
> +		      unsigned long addr, pte_t pte)
> +{
> +	unsigned char *tag;
> +	tag_storage_desc_t *tag_desc;
> +	unsigned long paddr, tmp, version1, version2;
> +
> +	/* Check if the swapped out page has an ADI version
> +	 * saved. If yes, restore version tag to the newly
> +	 * allocated page.
> +	 */
> +	tag_desc = find_tag_store(mm, vma, addr);
> +	if (tag_desc == NULL)
> +		return;
> +
> +	tag = tag_start(addr, tag_desc);
> +	paddr = pte_val(pte) & _PAGE_PADDR_4V;
> +	for (tmp = paddr; tmp < (paddr+PAGE_SIZE); tmp += adi_blksize()) {
> +		version1 = (*tag) >> 4;
> +		version2 = (*tag) & 0x0f;
> +		*tag++ = 0;
> +		asm volatile("stxa %0, [%1] %2\n\t"
> +			:
> +			: "r" (version1), "r" (tmp),
> +			  "i" (ASI_MCD_REAL));
> +		tmp += adi_blksize();
> +		asm volatile("stxa %0, [%1] %2\n\t"
> +			:
> +			: "r" (version2), "r" (tmp),
> +			  "i" (ASI_MCD_REAL));
> +	}
> +	asm volatile("membar #Sync\n\t");
> +
> +	/* Check and mark this tag space for release later if
> +	 * the swapped in page was the last user of tag space
> +	 */
> +	del_tag_store(tag_desc, mm);
> +}
> +
> +/* A page is about to be swapped out. Save any ADI tags associated with
> + * this physical page so they can be restored later when the page is swapped
> + * back in.
> + */
> +int adi_save_tags(struct mm_struct *mm, struct vm_area_struct *vma,
> +		  unsigned long addr, pte_t oldpte)
> +{
> +	unsigned char *tag;
> +	tag_storage_desc_t *tag_desc;
> +	unsigned long version1, version2, paddr, tmp;
> +
> +	tag_desc = alloc_tag_store(mm, vma, addr);
> +	if (tag_desc == NULL)
> +		return -1;
> +
> +	tag = tag_start(addr, tag_desc);
> +	paddr = pte_val(oldpte) & _PAGE_PADDR_4V;
> +	for (tmp = paddr; tmp < (paddr+PAGE_SIZE); tmp += adi_blksize()) {
> +		asm volatile("ldxa [%1] %2, %0\n\t"
> +				: "=r" (version1)
> +				: "r" (tmp), "i" (ASI_MCD_REAL));
> +		tmp += adi_blksize();
> +		asm volatile("ldxa [%1] %2, %0\n\t"
> +				: "=r" (version2)
> +				: "r" (tmp), "i" (ASI_MCD_REAL));
> +		*tag = (version1 << 4) | version2;
> +		tag++;
> +	}
> +
> +	return 0;
> +}
> diff --git a/arch/sparc/kernel/etrap_64.S b/arch/sparc/kernel/etrap_64.S
> index 1276ca2567ba..7be33bf45cff 100644
> --- a/arch/sparc/kernel/etrap_64.S
> +++ b/arch/sparc/kernel/etrap_64.S
> @@ -132,7 +132,33 @@ etrap_save:	save	%g2, -STACK_BIAS, %sp
> 		stx	%g6, [%sp + PTREGS_OFF + PT_V9_G6]
> 		stx	%g7, [%sp + PTREGS_OFF + PT_V9_G7]
> 		or	%l7, %l0, %l7
> -		sethi	%hi(TSTATE_TSO | TSTATE_PEF), %l0
> +661:		sethi	%hi(TSTATE_TSO | TSTATE_PEF), %l0
> +		/*
> +		 * If userspace is using ADI, it could potentially pass
> +		 * a pointer with version tag embedded in it. To maintain
> +		 * the ADI security, we must enable PSTATE.mcde. Userspace
> +		 * would have already set TTE.mcd in an earlier call to
> +		 * kernel and set the version tag for the address being
> +		 * dereferenced. Setting PSTATE.mcde would ensure any
> +		 * access to userspace data through a system call honors
> +		 * ADI and does not allow a rogue app to bypass ADI by
> +		 * using system calls. Setting PSTATE.mcde only affects
> +		 * accesses to virtual addresses that have TTE.mcd set.
> +		 * Set PMCDPER to ensure any exceptions caused by ADI
> +		 * version tag mismatch are exposed before system call
> +		 * returns to userspace. Setting PMCDPER affects only
> +		 * writes to virtual addresses that have TTE.mcd set and
> +		 * have a version tag set as well.
> +		 */
> +		.section .sun_m7_1insn_patch, "ax"
> +		.word	661b
> +		sethi	%hi(TSTATE_TSO | TSTATE_PEF | TSTATE_MCDE), %l0
> +		.previous
> +661:		nop
> +		.section .sun_m7_1insn_patch, "ax"
> +		.word	661b
> +		.word 0xaf902001	/* wrpr %g0, 1, %pmcdper */

I commented on this on the last patch series revision.  PMCDPER could be
set once when each CPU is configured rather than every time the kernel
is entered.  Since it's never cleared, setting it repeatedly unnecessarily
impacts the performance of etrap.
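
A hedged sketch of that alternative (the hook point during CPU bringup
is an assumption; the opcode is the one the patch already uses):

	/* run once per CPU as it comes up, instead of in etrap */
	if (adi_capable())
		__asm__ __volatile__(
			".word 0xaf902001\n\t"	/* wrpr %g0, 1, %pmcdper */
			: : : "memory");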

Also, there are places in rtrap where PSTATE is set before continuing
execution in the kernel.  These should also be patched to set TSTATE_MCDE.


> +		.previous
> 		or	%l7, %l0, %l7
> 		wrpr	%l2, %tnpc
> 		wrpr	%l7, (TSTATE_PRIV | TSTATE_IE), %tstate
> diff --git a/arch/sparc/kernel/process_64.c b/arch/sparc/kernel/process_64.c
> index b96104da5bd6..defa5723dfa6 100644
> --- a/arch/sparc/kernel/process_64.c
> +++ b/arch/sparc/kernel/process_64.c
> @@ -664,6 +664,31 @@ int copy_thread(unsigned long clone_flags, unsigned long sp,
> 	return 0;
> }
> 
> +/* TIF_MCDPER in thread info flags for current task is updated lazily upon
> + * a context switch. Update this flag in the current task's thread flags
> + * before dup so the dup'd task will inherit the current TIF_MCDPER flag.
> + */
> +int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
> +{
> +	if (adi_capable()) {
> +		register unsigned long tmp_mcdper;
> +
> +		__asm__ __volatile__(
> +			".word 0x83438000\n\t"	/* rd  %mcdper, %g1 */
> +			"mov %%g1, %0\n\t"
> +			: "=r" (tmp_mcdper)
> +			:
> +			: "g1");
> +		if (tmp_mcdper)
> +			set_thread_flag(TIF_MCDPER);
> +		else
> +			clear_thread_flag(TIF_MCDPER);
> +	}
> +
> +	*dst = *src;
> +	return 0;
> +}
> +
> typedef struct {
> 	union {
> 		unsigned int	pr_regs[32];
> diff --git a/arch/sparc/kernel/setup_64.c b/arch/sparc/kernel/setup_64.c
> index 422b17880955..a9da205da394 100644
> --- a/arch/sparc/kernel/setup_64.c
> +++ b/arch/sparc/kernel/setup_64.c
> @@ -240,6 +240,12 @@ void sun4v_patch_1insn_range(struct sun4v_1insn_patch_entry *start,
> 	}
> }
> 
> +void sun_m7_patch_1insn_range(struct sun4v_1insn_patch_entry *start,
> +			     struct sun4v_1insn_patch_entry *end)
> +{
> +	sun4v_patch_1insn_range(start, end);
> +}
> +
> void sun4v_patch_2insn_range(struct sun4v_2insn_patch_entry *start,
> 			     struct sun4v_2insn_patch_entry *end)
> {
> @@ -289,9 +295,12 @@ static void __init sun4v_patch(void)
> 	sun4v_patch_2insn_range(&__sun4v_2insn_patch,
> 				&__sun4v_2insn_patch_end);
> 	if (sun4v_chip_type == SUN4V_CHIP_SPARC_M7 ||
> -	    sun4v_chip_type == SUN4V_CHIP_SPARC_SN)
> +	    sun4v_chip_type == SUN4V_CHIP_SPARC_SN) {
> +		sun_m7_patch_1insn_range(&__sun_m7_1insn_patch,
> +					 &__sun_m7_1insn_patch_end);
> 		sun_m7_patch_2insn_range(&__sun_m7_2insn_patch,
> 					 &__sun_m7_2insn_patch_end);

Why not call sun4v_patch_1insn_range() and sun4v_patch_2insn_range()
here instead of adding new functions that just call these functions?
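
i.e., something along these lines:

	if (sun4v_chip_type == SUN4V_CHIP_SPARC_M7 ||
	    sun4v_chip_type == SUN4V_CHIP_SPARC_SN) {
		sun4v_patch_1insn_range(&__sun_m7_1insn_patch,
					&__sun_m7_1insn_patch_end);
		sun4v_patch_2insn_range(&__sun_m7_2insn_patch,
					&__sun_m7_2insn_patch_end);
	}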

Anthony

> +		}
> 
> 	sun4v_hvapi_init();
> }
> diff --git a/arch/sparc/kernel/vmlinux.lds.S b/arch/sparc/kernel/vmlinux.lds.S
> index 572db686f845..20a70682cce7 100644
> --- a/arch/sparc/kernel/vmlinux.lds.S
> +++ b/arch/sparc/kernel/vmlinux.lds.S
> @@ -144,6 +144,11 @@ SECTIONS
> 		*(.pause_3insn_patch)
> 		__pause_3insn_patch_end = .;
> 	}
> +	.sun_m7_1insn_patch : {
> +		__sun_m7_1insn_patch = .;
> +		*(.sun_m7_1insn_patch)
> +		__sun_m7_1insn_patch_end = .;
> +	}
> 	.sun_m7_2insn_patch : {
> 		__sun_m7_2insn_patch = .;
> 		*(.sun_m7_2insn_patch)
> diff --git a/arch/sparc/mm/gup.c b/arch/sparc/mm/gup.c
> index cd0e32bbcb1d..579f7ae75b35 100644
> --- a/arch/sparc/mm/gup.c
> +++ b/arch/sparc/mm/gup.c
> @@ -11,6 +11,7 @@
> #include <linux/pagemap.h>
> #include <linux/rwsem.h>
> #include <asm/pgtable.h>
> +#include <asm/adi.h>
> 
> /*
>  * The performance critical leaf functions are made noinline otherwise gcc
> @@ -157,6 +158,24 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
> 	pgd_t *pgdp;
> 	int nr = 0;
> 
> +#ifdef CONFIG_SPARC64
> +	if (adi_capable()) {
> +		long addr = start;
> +
> +		/* If userspace has passed a versioned address, kernel
> +		 * will not find it in the VMAs since it does not store
> +		 * the version tags in the list of VMAs. Storing version
> +		 * tags in list of VMAs is impractical since they can be
> +		 * changed any time from userspace without dropping into
> +		 * kernel. Any address search in VMAs will be done with
> +		 * non-versioned addresses. Ensure the ADI version bits
> +		 * are dropped here by sign extending the last bit before
> +		 * ADI bits. IOMMU does not implement version tags.
> +		 */
> +		addr = (addr << (long)adi_nbits()) >> (long)adi_nbits();
> +		start = addr;
> +	}
> +#endif
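
(Worked example, assuming adi_nbits() == 4 and a 64-bit long: a user VA
0x0000000000001000 carrying version tag 0x5 arrives as
0x5000000000001000; (addr << 4) >> 4 on the signed long sign-extends
bit 59 (here 0) through the top four bits, giving back
0x0000000000001000.)
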
> 	start &= PAGE_MASK;
> 	addr = start;
> 	len = (unsigned long) nr_pages << PAGE_SHIFT;
> @@ -187,6 +206,24 @@ int get_user_pages_fast(unsigned long start, int nr_pages, int write,
> 	pgd_t *pgdp;
> 	int nr = 0;
> 
> +#ifdef CONFIG_SPARC64
> +	if (adi_capable()) {
> +		long addr = start;
> +
> +		/* If userspace has passed a versioned address, kernel
> +		 * will not find it in the VMAs since it does not store
> +		 * the version tags in the list of VMAs. Storing version
> +		 * tags in list of VMAs is impractical since they can be
> +		 * changed any time from userspace without dropping into
> +		 * kernel. Any address search in VMAs will be done with
> +		 * non-versioned addresses. Ensure the ADI version bits
> +		 * are dropped here by sign extending the last bit before
> +		 * ADI bits. IOMMU does not implement version tags.
> +		 */
> +		addr = (addr << (long)adi_nbits()) >> (long)adi_nbits();
> +		start = addr;
> +	}
> +#endif
> 	start &= PAGE_MASK;
> 	addr = start;
> 	len = (unsigned long) nr_pages << PAGE_SHIFT;
> diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c
> index 88855e383b34..487ed1f1ce86 100644
> --- a/arch/sparc/mm/hugetlbpage.c
> +++ b/arch/sparc/mm/hugetlbpage.c
> @@ -177,8 +177,20 @@ pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,
> 			 struct page *page, int writeable)
> {
> 	unsigned int shift = huge_page_shift(hstate_vma(vma));
> +	pte_t pte;
> 
> -	return hugepage_shift_to_tte(entry, shift);
> +	pte = hugepage_shift_to_tte(entry, shift);
> +
> +#ifdef CONFIG_SPARC64
> +	/* If this vma has ADI enabled on it, turn on TTE.mcd
> +	 */
> +	if (vma->vm_flags & VM_SPARC_ADI)
> +		return pte_mkmcd(pte);
> +	else
> +		return pte_mknotmcd(pte);
> +#else
> +	return pte;
> +#endif
> }
> 
> static unsigned int sun4v_huge_tte_to_shift(pte_t entry)
> diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
> index 3c40ebd50f92..94854e7e833e 100644
> --- a/arch/sparc/mm/init_64.c
> +++ b/arch/sparc/mm/init_64.c
> @@ -3087,3 +3087,36 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
> 		do_flush_tlb_kernel_range(start, end);
> 	}
> }
> +
> +void copy_user_highpage(struct page *to, struct page *from,
> +	unsigned long vaddr, struct vm_area_struct *vma)
> +{
> +	char *vfrom, *vto;
> +
> +	vfrom = kmap_atomic(from);
> +	vto = kmap_atomic(to);
> +	copy_user_page(vto, vfrom, vaddr, to);
> +	kunmap_atomic(vto);
> +	kunmap_atomic(vfrom);
> +
> +	/* If this page has ADI enabled, copy over any ADI tags
> +	 * as well
> +	 */
> +	if (vma->vm_flags & VM_SPARC_ADI) {
> +		unsigned long pfrom, pto, i, adi_tag;
> +
> +		pfrom = page_to_phys(from);
> +		pto = page_to_phys(to);
> +
> +		for (i = pfrom; i < (pfrom + PAGE_SIZE); i += adi_blksize()) {
> +			asm volatile("ldxa [%1] %2, %0\n\t"
> +					: "=r" (adi_tag)
> +					:  "r" (i), "i" (ASI_MCD_REAL));
> +			asm volatile("stxa %0, [%1] %2\n\t"
> +					:
> +					: "r" (adi_tag), "r" (pto),
> +					  "i" (ASI_MCD_REAL));
> +			pto += adi_blksize();
> +		}
> +	}
> +}
> diff --git a/arch/sparc/mm/tsb.c b/arch/sparc/mm/tsb.c
> index 0d4b998c7d7b..6518cc42056b 100644
> --- a/arch/sparc/mm/tsb.c
> +++ b/arch/sparc/mm/tsb.c
> @@ -545,6 +545,9 @@ int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
> 
> 	mm->context.sparc64_ctx_val = 0UL;
> 
> +	mm->context.tag_store = NULL;
> +	spin_lock_init(&mm->context.tag_lock);
> +
> #if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE)
> 	/* We reset them to zero because the fork() page copying
> 	 * will re-increment the counters as the parent PTEs are
> @@ -610,4 +613,22 @@ void destroy_context(struct mm_struct *mm)
> 	}
> 
> 	spin_unlock_irqrestore(&ctx_alloc_lock, flags);
> +
> +	/* If ADI tag storage was allocated for this task, free it */
> +	if (mm->context.tag_store) {
> +		tag_storage_desc_t *tag_desc;
> +		unsigned long max_desc;
> +		unsigned char *tags;
> +
> +		tag_desc = mm->context.tag_store;
> +		max_desc = PAGE_SIZE/sizeof(tag_storage_desc_t);
> +		for (i = 0; i < max_desc; i++) {
> +			tags = tag_desc->tags;
> +			tag_desc->tags = NULL;
> +			kfree(tags);
> +			tag_desc++;
> +		}
> +		kfree(mm->context.tag_store);
> +		mm->context.tag_store = NULL;
> +	}
> }
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index b7aa3932e6d4..c0972114036f 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -231,6 +231,9 @@ extern unsigned int kobjsize(const void *objp);
> # define VM_GROWSUP	VM_ARCH_1
> #elif defined(CONFIG_IA64)
> # define VM_GROWSUP	VM_ARCH_1
> +#elif defined(CONFIG_SPARC64)
> +# define VM_SPARC_ADI	VM_ARCH_1	/* Uses ADI tag for access control */
> +# define VM_ARCH_CLEAR	VM_SPARC_ADI
> #elif !defined(CONFIG_MMU)
> # define VM_MAPPED_COPY	VM_ARCH_1	/* T if mapped copy of data (nommu mmap) */
> #endif
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 216184af0e19..bb82399816ef 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -1797,6 +1797,10 @@ int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
> 		if (*vm_flags & VM_SAO)
> 			return 0;
> #endif
> +#ifdef VM_SPARC_ADI
> +		if (*vm_flags & VM_SPARC_ADI)
> +			return 0;
> +#endif
> 
> 		if (!test_bit(MMF_VM_MERGEABLE, &mm->flags)) {
> 			err = __ksm_enter(mm);
> -- 
> 2.11.0
> 

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH v7 9/9] sparc64: Add support for ADI (Application Data Integrity)
  2017-08-25 22:31     ` Anthony Yznaga
@ 2017-08-30 22:27       ` Khalid Aziz
  -1 siblings, 0 replies; 86+ messages in thread
From: Khalid Aziz @ 2017-08-30 22:27 UTC (permalink / raw)
  To: Anthony Yznaga
  Cc: David Miller, dave.hansen, corbet, Bob Picco, steven.sistare,
	pasha.tatashin, mike.kravetz, mingo, nitin.m.gupta,
	kirill.shutemov, tom.hromatka, eric.saint.etienne, allen.pais,
	cmetcalf, akpm, geert, tklauser, atish.patra, vijay.ac.kumar,
	peterz, mhocko, jack, lstoakes, hughd, thomas.tai,
	paul.gortmaker, ross.zwisler, dave.jiang, willy, ying.huang,
	zhongjiang, minchan, vegard.nossum, imbrenda, aneesh.kumar,
	aarcange, linux-doc, linux-kernel, sparclinux, linux-mm,
	Khalid Aziz

Hi Anthony,

Thanks for taking the time to provide feedback. My comments inline below.

On 08/25/2017 04:31 PM, Anthony Yznaga wrote:
> 
>> On Aug 9, 2017, at 2:26 PM, Khalid Aziz <khalid.aziz@oracle.com> wrote:
>> ......deleted......
>> +provided by the hypervisor to the kernel.  Kernel returns the value of
>> +ADI block size to userspace using auxiliary vector along with other ADI
>> +info. Following auxiliary vectors are provided by the kernel:
>> +
>> +	AT_ADI_BLKSZ	ADI block size. This is the granularity and
>> +			alignment, in bytes, of ADI versioning.
>> +	AT_ADI_NBITS	Number of ADI version bits in the VA
> 
> The previous patch series also defined AT_ADI_UEONADI.  Why was that
> removed?

This was based upon a conversation we had in which you mentioned that 
future processors may not implement this, or may change the way it is 
interpreted, and that any applications depending upon this value would 
break at that 
point. I removed it to eliminate building an unreliable dependency. If I 
misunderstood what you said, please let me know.

> 
>> +
>> +
>> +IMPORTANT NOTES:
>> +
>> +- Version tag values of 0x0 and 0xf are reserved.
> 
> The documentation should probably state more specifically that an
> in-memory tag value of 0x0 or 0xf is treated as "match all" by the HW
> meaning that a mismatch exception will never be generated regardless
> of the tag bits set in the VA accessing the memory.

Will do.

> 
>> +
>> +- Version tags are set on virtual addresses from userspace even though
>> +  tags are stored in physical memory. Tags are set on a physical page
>> +  after it has been allocated to a task and a pte has been created for
>> +  it.
>> +
>> +- When a task frees a memory page it had set version tags on, the page
>> +  goes back to free page pool. When this page is re-allocated to a task,
>> +  kernel clears the page using block initialization ASI which clears the
>> +  version tags as well for the page. If a page allocated to a task is
>> +  freed and allocated back to the same task, old version tags set by the
>> +  task on that page will no longer be present.
> 
> The specifics should be included here, too, so someone doesn't have
> to guess what's going on if they make changes and the tags are no longer
> cleared.  The HW clears the tag for a cacheline for block initializing
> stores to 64-byte aligned addresses if PSTATE.mcde=0 or TTE.mcd=0.
> PSTATE.mce is set when executing in the kernel, but pages are cleared
> using kernel physical mapping VAs which are mapped with TTE.mcd=0.
> 
> Another HW behavior that should be mentioned is that tag mismatches
> are not detected for non-faulting loads.

Sure, I can add that.

> 
>> +
>> +- Kernel does not set any tags for user pages and it is entirely a
>> +  task's responsibility to set any version tags. Kernel does ensure the
>> +  version tags are preserved if a page is swapped out to the disk and
>> +  swapped back in. It also preserves that version tags if a page is
>> +  migrated.
> 
> I only have a cursory understanding of how page migration works, but
> I could not see how the tags would be preserved if a page were migrated.
> I figured the place to copy the tags would be migrate_page_copy(), but
> I don't see changes there.
> 
> 

For migrating user pages, the way I understand the code, if the 
page is mapped (which is the only time ADI tags are even in place), 
try_to_unmap() is called with the TTU_MIGRATION flag set. try_to_unmap() 
will call arch_unmap_one(), which saves the tags from the currently 
mapped page. When the new page has been allocated, the contents of the 
old page are faulted in through do_swap_page(), which will call 
arch_do_swap_page(). arch_do_swap_page() then restores the ADI tags.
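
As a sketch of that flow (the call chain as described above; not
verified against migrate_page_copy()):

	migrate_pages()
	  try_to_unmap(page, TTU_MIGRATION)
	    arch_unmap_one()		/* adi_save_tags() */
	  ... new page allocated ...
	  do_swap_page()
	    arch_do_swap_page()		/* adi_restore_tags() */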


>> diff --git a/arch/sparc/include/asm/mman.h b/arch/sparc/include/asm/mman.h
>> index 59bb5938d852..b799796ad963 100644
>> --- a/arch/sparc/include/asm/mman.h
>> +++ b/arch/sparc/include/asm/mman.h
>> @@ -6,5 +6,75 @@
>> #ifndef __ASSEMBLY__
>> #define arch_mmap_check(addr,len,flags)	sparc_mmap_check(addr,len)
>> int sparc_mmap_check(unsigned long addr, unsigned long len);
>> -#endif
>> +
>> +#ifdef CONFIG_SPARC64
>> +#include <asm/adi_64.h>
>> +
>> +#define arch_calc_vm_prot_bits(prot, pkey) sparc_calc_vm_prot_bits(prot)
>> +static inline unsigned long sparc_calc_vm_prot_bits(unsigned long prot)
>> +{
>> +	if (prot & PROT_ADI) {
>> +		struct pt_regs *regs;
>> +
>> +		if (!current->mm->context.adi) {
>> +			regs = task_pt_regs(current);
>> +			regs->tstate |= TSTATE_MCDE;
>> +			current->mm->context.adi = true;
> 
> If a process is multi-threaded when it enables ADI on some memory for
> the first time, TSTATE_MCDE will only be set for the calling thread
> and it will not be possible to enable it for the other threads.
> One possible way to handle this is to enable TSTATE_MCDE for all user
> threads when they are initialized if adi_capable() returns true.
> 

Or set TSTATE_MCDE unconditionally here by removing "if 
(!current->mm->context.adi)"?
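
i.e., a sketch of that simplification:

	if (prot & PROT_ADI) {
		struct pt_regs *regs = task_pt_regs(current);

		regs->tstate |= TSTATE_MCDE;
		current->mm->context.adi = true;
		return VM_SPARC_ADI;
	}

(though this still modifies only the calling thread's saved tstate)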

> 
>> +		}
>> +		return VM_SPARC_ADI;
>> +	} else {
>> +		return 0;
>> +	}
>> +}
>> +
>> +#define arch_vm_get_page_prot(vm_flags) sparc_vm_get_page_prot(vm_flags)
>> +static inline pgprot_t sparc_vm_get_page_prot(unsigned long vm_flags)
>> +{
>> +	return (vm_flags & VM_SPARC_ADI) ? __pgprot(_PAGE_MCD_4V) : __pgprot(0);
>> +}
>> +
>> +#define arch_validate_prot(prot, addr) sparc_validate_prot(prot, addr)
>> +static inline int sparc_validate_prot(unsigned long prot, unsigned long addr)
>> +{
>> +	if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_ADI))
>> +		return 0;
>> +	if (prot & PROT_ADI) {
>> +		if (!adi_capable())
>> +			return 0;
>> +
>> +		/* ADI tags can not be set on read-only memory, so it makes
>> +		 * sense to enable ADI on writable memory only.
>> +		 */
>> +		if (!(prot & PROT_WRITE))
>> +			return 0;
> 
> This prevents the use of ADI for the legitimate case where shared memory
> is mapped read/write for a master process but mapped read-only for a
> client process.  The master process could set the tags and communicate
> the expected tag values to the client.

A non-writable mapping can access the shared memory using non-ADI tagged 
addresses if it does not enable ADI on its mappings, so it is 
superfluous to even allow enabling ADI. I can remove this if that helps 
any use cases that wouldn't work with the above condition.

>> +tag_storage_desc_t *alloc_tag_store(struct mm_struct *mm,
>> +				    struct vm_area_struct *vma,
>> +				    unsigned long addr)
>> +{
>> +	unsigned char *tags;
>> +	unsigned long i, size, max_desc, flags;
>> +	tag_storage_desc_t *tag_desc, *open_desc;
>> +	unsigned long end_addr, hole_start, hole_end;
>> +
>> +	max_desc = PAGE_SIZE/sizeof(tag_storage_desc_t);
>> +	open_desc = NULL;
>> +	hole_start = 0;
>> +	hole_end = ULONG_MAX;
>> +	end_addr = addr + PAGE_SIZE - 1;
>> +
>> +	/* Check if this vma already has tag storage descriptor
>> +	 * allocated for it.
>> +	 */
>> +	spin_lock_irqsave(&mm->context.tag_lock, flags);
>> +	if (mm->context.tag_store) {
>> +		tag_desc = mm->context.tag_store;
>> +
>> +		/* Look for a matching entry for this address. While doing
>> +		 * that, look for the first open slot as well and find
>> +		 * the hole in already allocated range where this request
>> +		 * will fit in.
>> +		 */
>> +		for (i = 0; i < max_desc; i++) {
>> +			if (tag_desc->tag_users == 0) {
>> +				if (open_desc == NULL)
>> +					open_desc = tag_desc;
>> +			} else {
>> +				if ((addr >= tag_desc->start) &&
>> +				    (tag_desc->end >= (addr + PAGE_SIZE - 1))) {
>> +					tag_desc->tag_users++;
>> +					goto out;
>> +				}
>> +			}
>> +			if ((tag_desc->start > end_addr) &&
>> +			    (tag_desc->start < hole_end))
>> +				hole_end = tag_desc->start;
>> +			if ((tag_desc->end < addr) &&
>> +			    (tag_desc->end > hole_start))
>> +				hole_start = tag_desc->end;
>> +			tag_desc++;
>> +		}
>> +
>> +	} else {
>> +		size = sizeof(tag_storage_desc_t)*max_desc;
>> +		mm->context.tag_store = kzalloc(size, GFP_NOIO|__GFP_NOWARN);
> 
> The spin_lock_irqsave() above means that all but level 15 interrupts
> will be disabled when kzalloc() is called.  If kzalloc() can sleep
> there's a risk of deadlock.

I could call kzalloc() with GFP_NOWAIT instead of GFP_NOIO. Would that 
address the risk of deadlock?
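
i.e. (sketch):

	mm->context.tag_store = kzalloc(size, GFP_NOWAIT | __GFP_NOWARN);

GFP_NOWAIT never sleeps, so the allocation can fail more often but
cannot deadlock under the spinlock.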

> 
> 
>> +		if (mm->context.tag_store == NULL) {
>> +			tag_desc = NULL;
>> +			goto out;
>> +		}
>> +		tag_desc = mm->context.tag_store;
>> +		for (i = 0; i < max_desc; i++, tag_desc++)
>> +			tag_desc->tag_users = 0;
>> +		open_desc = mm->context.tag_store;
>> +		i = 0;
>> +	}
>> +
>> +	/* Check if we ran out of tag storage descriptors */
>> +	if (open_desc == NULL) {
>> +		tag_desc = NULL;
>> +		goto out;
>> +	}
>> +
>> +	/* Mark this tag descriptor slot in use and then initialize it */
>> +	tag_desc = open_desc;
>> +	tag_desc->tag_users = 1;
>> +
>> +	/* Tag storage has not been allocated for this vma and space
>> +	 * is available in tag storage descriptor. Since this page is
>> +	 * being swapped out, there is high probability subsequent pages
>> +	 * in the VMA will be swapped out as well. Allocates pages to
>> +	 * store tags for as many pages in this vma as possible but not
>> +	 * more than TAG_STORAGE_PAGES. Each byte in tag space holds
>> +	 * two ADI tags since each ADI tag is 4 bits. Each ADI tag
>> +	 * covers adi_blksize() worth of addresses. Check if the hole is
>> +	 * big enough to accommodate full address range for using
>> +	 * TAG_STORAGE_PAGES number of tag pages.
>> +	 */
>> +	size = TAG_STORAGE_PAGES * PAGE_SIZE;
>> +	end_addr = addr + (size*2*adi_blksize()) - 1;
> 
> Since size > PAGE_SIZE, end_addr could theoretically overflow >
> 
>> +	if (hole_end < end_addr) {
>> +		/* Available hole is too small on the upper end of
>> +		 * address. Can we expand the range towards the lower
>> +		 * address and maximize use of this slot?
>> +		 */
>> +		unsigned long tmp_addr;
>> +
>> +		end_addr = hole_end - 1;
>> +		tmp_addr = end_addr - (size*2*adi_blksize()) + 1;
> 
> Similarily, tmp_addr may underflow.

I will add checks for these two.

> 
>> +		if (tmp_addr < hole_start) {
>> +			/* Available hole is restricted on lower address
>> +			 * end as well
>> +			 */
>> +			tmp_addr = hole_start + 1;
>> +		}
>> +		addr = tmp_addr;
>> +		size = (end_addr + 1 - addr)/(2*adi_blksize());
>> +		size = (size + (PAGE_SIZE-adi_blksize()))/PAGE_SIZE;
>> +		size = size * PAGE_SIZE;
>> +	}
>> +	tags = kzalloc(size, GFP_NOIO|__GFP_NOWARN);
> 
> Potential deadlock due to PIL=14?

Same as above - call kzalloc() with GFP_NOWAIT?

>> diff --git a/arch/sparc/kernel/etrap_64.S b/arch/sparc/kernel/etrap_64.S
>> index 1276ca2567ba..7be33bf45cff 100644
>> --- a/arch/sparc/kernel/etrap_64.S
>> +++ b/arch/sparc/kernel/etrap_64.S
>> @@ -132,7 +132,33 @@ etrap_save:	save	%g2, -STACK_BIAS, %sp
>> 		stx	%g6, [%sp + PTREGS_OFF + PT_V9_G6]
>> 		stx	%g7, [%sp + PTREGS_OFF + PT_V9_G7]
>> 		or	%l7, %l0, %l7
>> -		sethi	%hi(TSTATE_TSO | TSTATE_PEF), %l0
>> +661:		sethi	%hi(TSTATE_TSO | TSTATE_PEF), %l0
>> +		/*
>> +		 * If userspace is using ADI, it could potentially pass
>> +		 * a pointer with version tag embedded in it. To maintain
>> +		 * the ADI security, we must enable PSTATE.mcde. Userspace
>> +		 * would have already set TTE.mcd in an earlier call to
>> +		 * kernel and set the version tag for the address being
>> +		 * dereferenced. Setting PSTATE.mcde would ensure any
>> +		 * access to userspace data through a system call honors
>> +		 * ADI and does not allow a rogue app to bypass ADI by
>> +		 * using system calls. Setting PSTATE.mcde only affects
>> +		 * accesses to virtual addresses that have TTE.mcd set.
>> +		 * Set PMCDPER to ensure any exceptions caused by ADI
>> +		 * version tag mismatch are exposed before system call
>> +		 * returns to userspace. Setting PMCDPER affects only
>> +		 * writes to virtual addresses that have TTE.mcd set and
>> +		 * have a version tag set as well.
>> +		 */
>> +		.section .sun_m7_1insn_patch, "ax"
>> +		.word	661b
>> +		sethi	%hi(TSTATE_TSO | TSTATE_PEF | TSTATE_MCDE), %l0
>> +		.previous
>> +661:		nop
>> +		.section .sun_m7_1insn_patch, "ax"
>> +		.word	661b
>> +		.word 0xaf902001	/* wrpr %g0, 1, %pmcdper */
> 
> I commented on this on the last patch series revision.  PMCDPER could be
> set once when each CPU is configured rather than every time the kernel
> is entered.  Since it's never cleared, setting it repeatedly unnecessarily
> impacts the performance of etrap.

Yes, you did and I thought I had addressed it in that thread:

">> I considered that possibility. What made me uncomfortable with that 
is there is no way to prevent a driver/module or future code elsewhere 
in kernel from clearing PMCDPER with possibly good reason. If that were 
to happen, setting PMCDPER here ensures kernel will always see 
consistent behavior with system calls. It does come at a cost. Is that 
cost unacceptable to ensure consistent behavior?
> 
> Aren't you still at risk if the thread relinquishes the CPU while in the kernel and is then rescheduled on a CPU where PMCDPER has erroneously been left cleared?  You may need to save and restore PMCDPER as well as MCDPER on context switch, but I don't know if that will cover you completely.
> "

I should add setting PMCDPER to 1 in finish_arch_post_lock_switch() to 
address the possibility you had mentioned.
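
A sketch of that addition to the existing finish_arch_post_lock_switch()
(reusing the opcode from etrap; exact placement is an assumption):

	if (adi_capable()) {
		/* ... restore MCDPER from TIF_MCDPER as before ... */

		/* unconditionally re-assert PMCDPER on switch-in */
		__asm__ __volatile__(
			".word 0xaf902001\n\t"	/* wrpr %g0, 1, %pmcdper */
			: : : "memory");
	}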

> 
> Also, there are places in rtrap where PSTATE is set before continuing
> execution in the kernel.  These should also be patched to set TSTATE_MCDE.
> 

I will find and fix those.

>> diff --git a/arch/sparc/kernel/setup_64.c b/arch/sparc/kernel/setup_64.c
>> index 422b17880955..a9da205da394 100644
>> --- a/arch/sparc/kernel/setup_64.c
>> +++ b/arch/sparc/kernel/setup_64.c
>> @@ -240,6 +240,12 @@ void sun4v_patch_1insn_range(struct sun4v_1insn_patch_entry *start,
>> 	}
>> }
>>
>> +void sun_m7_patch_1insn_range(struct sun4v_1insn_patch_entry *start,
>> +			     struct sun4v_1insn_patch_entry *end)
>> +{
>> +	sun4v_patch_1insn_range(start, end);
>> +}
>> +
>> void sun4v_patch_2insn_range(struct sun4v_2insn_patch_entry *start,
>> 			     struct sun4v_2insn_patch_entry *end)
>> {
>> @@ -289,9 +295,12 @@ static void __init sun4v_patch(void)
>> 	sun4v_patch_2insn_range(&__sun4v_2insn_patch,
>> 				&__sun4v_2insn_patch_end);
>> 	if (sun4v_chip_type == SUN4V_CHIP_SPARC_M7 ||
>> -	    sun4v_chip_type == SUN4V_CHIP_SPARC_SN)
>> +	    sun4v_chip_type == SUN4V_CHIP_SPARC_SN) {
>> +		sun_m7_patch_1insn_range(&__sun_m7_1insn_patch,
>> +					 &__sun_m7_1insn_patch_end);
>> 		sun_m7_patch_2insn_range(&__sun_m7_2insn_patch,
>> 					 &__sun_m7_2insn_patch_end);
> 
> Why not call sun4v_patch_1insn_range() and sun4v_patch_2insn_range()
> here instead of adding new functions that just call these functions?

Sounds reasonable, I can change that.

Thanks,
Khalid

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH v7 9/9] sparc64: Add support for ADI (Application Data Integrity)
@ 2017-08-30 22:27       ` Khalid Aziz
  0 siblings, 0 replies; 86+ messages in thread
From: Khalid Aziz @ 2017-08-30 22:27 UTC (permalink / raw)
  To: Anthony Yznaga
  Cc: David Miller, dave.hansen, corbet, Bob Picco, steven.sistare,
	pasha.tatashin, mike.kravetz, mingo, nitin.m.gupta,
	kirill.shutemov, tom.hromatka, eric.saint.etienne, allen.pais,
	cmetcalf, akpm, geert, tklauser, atish.patra, vijay.ac.kumar,
	peterz, mhocko, jack, lstoakes, hughd, thomas.tai,
	paul.gortmaker, ross.zwisler, dave.jiang, willy, ying.huang,
	zhongjiang, minchan, vegard.nossum, imbrenda, aneesh.kumar,
	aarcange, linux-doc, linux-kernel, sparclinux, linux-mm,
	Khalid Aziz

Hi Anthony,

Thanks for taking the time to provide feedback. My comments inline below.

On 08/25/2017 04:31 PM, Anthony Yznaga wrote:
> 
>> On Aug 9, 2017, at 2:26 PM, Khalid Aziz <khalid.aziz@oracle.com> wrote:
>> ......deleted......
>> +provided by the hypervisor to the kernel.  Kernel returns the value of
>> +ADI block size to userspace using auxiliary vector along with other ADI
>> +info. Following auxiliary vectors are provided by the kernel:
>> +
>> +	AT_ADI_BLKSZ	ADI block size. This is the granularity and
>> +			alignment, in bytes, of ADI versioning.
>> +	AT_ADI_NBITS	Number of ADI version bits in the VA
> 
> The previous patch series also defined AT_ADI_UEONADI.  Why was that
> removed?

This was based upon a conversation we had when you mentioned future 
processors may not implement this or change the way this is interpreted 
and any applications depending upon this value would break at that 
point. I removed it to eliminate building an unreliable dependency. If I 
misunderstood what you said, please let me know.

> 
>> +
>> +
>> +IMPORTANT NOTES:
>> +
>> +- Version tag values of 0x0 and 0xf are reserved.
> 
> The documentation should probably state more specifically that an
> in-memory tag value of 0x0 or 0xf is treated as "match all" by the HW
> meaning that a mismatch exception will never be generated regardless
> of the tag bits set in the VA accessing the memory.

Will do.

> 
>> +
>> +- Version tags are set on virtual addresses from userspace even though
>> +  tags are stored in physical memory. Tags are set on a physical page
>> +  after it has been allocated to a task and a pte has been created for
>> +  it.
>> +
>> +- When a task frees a memory page it had set version tags on, the page
>> +  goes back to free page pool. When this page is re-allocated to a task,
>> +  kernel clears the page using block initialization ASI which clears the
>> +  version tags as well for the page. If a page allocated to a task is
>> +  freed and allocated back to the same task, old version tags set by the
>> +  task on that page will no longer be present.
> 
> The specifics should be included here, too, so someone doesn't have
> to guess what's going on if they make changes and the tags are no longer
> cleared.  The HW clears the tag for a cacheline for block initializing
> stores to 64-byte aligned addresses if PSTATE.mcde=0 or TTE.mcd=0.
> PSTATE.mce is set when executing in the kernel, but pages are cleared
> using kernel physical mapping VAs which are mapped with TTE.mcd=0.
> 
> Another HW behavior that should be mentioned is that tag mismatches
> are not detected for non-faulting loads.

Sure, I can add that.

> 
>> +
>> +- Kernel does not set any tags for user pages and it is entirely a
>> +  task's responsibility to set any version tags. Kernel does ensure the
>> +  version tags are preserved if a page is swapped out to the disk and
>> +  swapped back in. It also preserves that version tags if a page is
>> +  migrated.
> 
> I only have a cursory understanding of how page migration works, but
> I could not see how the tags would be preserved if a page were migrated.
> I figured the place to copy the tags would be migrate_page_copy(), but
> I don't see changes there.
> 
> 

For migrating user pages, the way I understand the code works is if the 
page is mapped (which is the only time ADI tags are even in place), 
try_to_unmap() is called with TTU_MIGRATION flag set. try_to_unmap() 
will call arch_unmap_one() which saves the tags from currently mapped 
page. When the new page has been allocated, contents of the old page are 
faulted in through do_swap_page() which will call arch_do_swap_page(). 
arch_do_swap_page() then restores the ADI tags.


>> diff --git a/arch/sparc/include/asm/mman.h b/arch/sparc/include/asm/mman.h
>> index 59bb5938d852..b799796ad963 100644
>> --- a/arch/sparc/include/asm/mman.h
>> +++ b/arch/sparc/include/asm/mman.h
>> @@ -6,5 +6,75 @@
>> #ifndef __ASSEMBLY__
>> #define arch_mmap_check(addr,len,flags)	sparc_mmap_check(addr,len)
>> int sparc_mmap_check(unsigned long addr, unsigned long len);
>> -#endif
>> +
>> +#ifdef CONFIG_SPARC64
>> +#include <asm/adi_64.h>
>> +
>> +#define arch_calc_vm_prot_bits(prot, pkey) sparc_calc_vm_prot_bits(prot)
>> +static inline unsigned long sparc_calc_vm_prot_bits(unsigned long prot)
>> +{
>> +	if (prot & PROT_ADI) {
>> +		struct pt_regs *regs;
>> +
>> +		if (!current->mm->context.adi) {
>> +			regs = task_pt_regs(current);
>> +			regs->tstate |= TSTATE_MCDE;
>> +			current->mm->context.adi = true;
> 
> If a process is multi-threaded when it enables ADI on some memory for
> the first time, TSTATE_MCDE will only be set for the calling thread
> and it will not be possible to enable it for the other threads.
> One possible way to handle this is to enable TSTATE_MCDE for all user
> threads when they are initialized if adi_capable() returns true.
> 

Or set TSTATE_MCDE unconditionally here by removing "if 
(!current->mm->context.adi)"?

> 
>> +		}
>> +		return VM_SPARC_ADI;
>> +	} else {
>> +		return 0;
>> +	}
>> +}
>> +
>> +#define arch_vm_get_page_prot(vm_flags) sparc_vm_get_page_prot(vm_flags)
>> +static inline pgprot_t sparc_vm_get_page_prot(unsigned long vm_flags)
>> +{
>> +	return (vm_flags & VM_SPARC_ADI) ? __pgprot(_PAGE_MCD_4V) : __pgprot(0);
>> +}
>> +
>> +#define arch_validate_prot(prot, addr) sparc_validate_prot(prot, addr)
>> +static inline int sparc_validate_prot(unsigned long prot, unsigned long addr)
>> +{
>> +	if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_ADI))
>> +		return 0;
>> +	if (prot & PROT_ADI) {
>> +		if (!adi_capable())
>> +			return 0;
>> +
>> +		/* ADI tags can not be set on read-only memory, so it makes
>> +		 * sense to enable ADI on writable memory only.
>> +		 */
>> +		if (!(prot & PROT_WRITE))
>> +			return 0;
> 
> This prevents the use of ADI for the legitimate case where shared memory
> is mapped read/write for a master process but mapped read-only for a
> client process.  The master process could set the tags and communicate
> the expected tag values to the client.

A non-writable mapping can access the shared memory using non-ADI tagged 
addresses if it does not enable ADI on its mappings, so it is 
superfluous to even allow enabling ADI. I can remove this if that helps 
any use cases that wouldn't work with above condition.

>> +tag_storage_desc_t *alloc_tag_store(struct mm_struct *mm,
>> +				    struct vm_area_struct *vma,
>> +				    unsigned long addr)
>> +{
>> +	unsigned char *tags;
>> +	unsigned long i, size, max_desc, flags;
>> +	tag_storage_desc_t *tag_desc, *open_desc;
>> +	unsigned long end_addr, hole_start, hole_end;
>> +
>> +	max_desc = PAGE_SIZE/sizeof(tag_storage_desc_t);
>> +	open_desc = NULL;
>> +	hole_start = 0;
>> +	hole_end = ULONG_MAX;
>> +	end_addr = addr + PAGE_SIZE - 1;
>> +
>> +	/* Check if this vma already has tag storage descriptor
>> +	 * allocated for it.
>> +	 */
>> +	spin_lock_irqsave(&mm->context.tag_lock, flags);
>> +	if (mm->context.tag_store) {
>> +		tag_desc = mm->context.tag_store;
>> +
>> +		/* Look for a matching entry for this address. While doing
>> +		 * that, look for the first open slot as well and find
>> +		 * the hole in already allocated range where this request
>> +		 * will fit in.
>> +		 */
>> +		for (i = 0; i < max_desc; i++) {
>> +			if (tag_desc->tag_users = 0) {
>> +				if (open_desc = NULL)
>> +					open_desc = tag_desc;
>> +			} else {
>> +				if ((addr >= tag_desc->start) &&
>> +				    (tag_desc->end >= (addr + PAGE_SIZE - 1))) {
>> +					tag_desc->tag_users++;
>> +					goto out;
>> +				}
>> +			}
>> +			if ((tag_desc->start > end_addr) &&
>> +			    (tag_desc->start < hole_end))
>> +				hole_end = tag_desc->start;
>> +			if ((tag_desc->end < addr) &&
>> +			    (tag_desc->end > hole_start))
>> +				hole_start = tag_desc->end;
>> +			tag_desc++;
>> +		}
>> +
>> +	} else {
>> +		size = sizeof(tag_storage_desc_t)*max_desc;
>> +		mm->context.tag_store = kzalloc(size, GFP_NOIO|__GFP_NOWARN);
> 
> The spin_lock_irqsave() above means that all but level 15 interrupts
> will be disabled when kzalloc() is called.  If kzalloc() can sleep
> there's a risk of deadlock.

I could call kzalloc() with GFP_NOWAIT instead of GFP_NOIO. Would that 
address the risk of deadlock?

> 
> 
>> +		if (mm->context.tag_store = NULL) {
>> +			tag_desc = NULL;
>> +			goto out;
>> +		}
>> +		tag_desc = mm->context.tag_store;
>> +		for (i = 0; i < max_desc; i++, tag_desc++)
>> +			tag_desc->tag_users = 0;
>> +		open_desc = mm->context.tag_store;
>> +		i = 0;
>> +	}
>> +
>> +	/* Check if we ran out of tag storage descriptors */
>> +	if (open_desc = NULL) {
>> +		tag_desc = NULL;
>> +		goto out;
>> +	}
>> +
>> +	/* Mark this tag descriptor slot in use and then initialize it */
>> +	tag_desc = open_desc;
>> +	tag_desc->tag_users = 1;
>> +
>> +	/* Tag storage has not been allocated for this vma and space
>> +	 * is available in tag storage descriptor. Since this page is
>> +	 * being swapped out, there is high probability subsequent pages
>> +	 * in the VMA will be swapped out as well. Allocates pages to
>> +	 * store tags for as many pages in this vma as possible but not
>> +	 * more than TAG_STORAGE_PAGES. Each byte in tag space holds
>> +	 * two ADI tags since each ADI tag is 4 bits. Each ADI tag
>> +	 * covers adi_blksize() worth of addresses. Check if the hole is
>> +	 * big enough to accommodate full address range for using
>> +	 * TAG_STORAGE_PAGES number of tag pages.
>> +	 */
>> +	size = TAG_STORAGE_PAGES * PAGE_SIZE;
>> +	end_addr = addr + (size*2*adi_blksize()) - 1;
> 
> Since size > PAGE_SIZE, end_addr could theoretically overflow >
> 
>> +	if (hole_end < end_addr) {
>> +		/* Available hole is too small on the upper end of
>> +		 * address. Can we expand the range towards the lower
>> +		 * address and maximize use of this slot?
>> +		 */
>> +		unsigned long tmp_addr;
>> +
>> +		end_addr = hole_end - 1;
>> +		tmp_addr = end_addr - (size*2*adi_blksize()) + 1;
> 
> Similarily, tmp_addr may underflow.

I will add checks for these two.

> 
>> +		if (tmp_addr < hole_start) {
>> +			/* Available hole is restricted on lower address
>> +			 * end as well
>> +			 */
>> +			tmp_addr = hole_start + 1;
>> +		}
>> +		addr = tmp_addr;
>> +		size = (end_addr + 1 - addr)/(2*adi_blksize());
>> +		size = (size + (PAGE_SIZE-adi_blksize()))/PAGE_SIZE;
>> +		size = size * PAGE_SIZE;
>> +	}
>> +	tags = kzalloc(size, GFP_NOIO|__GFP_NOWARN);
> 
> Potential deadlock due to PIL\x14?

Same as above - call kzalloc() with GFP_NOWAIT?

>> diff --git a/arch/sparc/kernel/etrap_64.S b/arch/sparc/kernel/etrap_64.S
>> index 1276ca2567ba..7be33bf45cff 100644
>> --- a/arch/sparc/kernel/etrap_64.S
>> +++ b/arch/sparc/kernel/etrap_64.S
>> @@ -132,7 +132,33 @@ etrap_save:	save	%g2, -STACK_BIAS, %sp
>> 		stx	%g6, [%sp + PTREGS_OFF + PT_V9_G6]
>> 		stx	%g7, [%sp + PTREGS_OFF + PT_V9_G7]
>> 		or	%l7, %l0, %l7
>> -		sethi	%hi(TSTATE_TSO | TSTATE_PEF), %l0
>> +661:		sethi	%hi(TSTATE_TSO | TSTATE_PEF), %l0
>> +		/*
>> +		 * If userspace is using ADI, it could potentially pass
>> +		 * a pointer with version tag embedded in it. To maintain
>> +		 * the ADI security, we must enable PSTATE.mcde. Userspace
>> +		 * would have already set TTE.mcd in an earlier call to
>> +		 * kernel and set the version tag for the address being
>> +		 * dereferenced. Setting PSTATE.mcde would ensure any
>> +		 * access to userspace data through a system call honors
>> +		 * ADI and does not allow a rogue app to bypass ADI by
>> +		 * using system calls. Setting PSTATE.mcde only affects
>> +		 * accesses to virtual addresses that have TTE.mcd set.
>> +		 * Set PMCDPER to ensure any exceptions caused by ADI
>> +		 * version tag mismatch are exposed before system call
>> +		 * returns to userspace. Setting PMCDPER affects only
>> +		 * writes to virtual addresses that have TTE.mcd set and
>> +		 * have a version tag set as well.
>> +		 */
>> +		.section .sun_m7_1insn_patch, "ax"
>> +		.word	661b
>> +		sethi	%hi(TSTATE_TSO | TSTATE_PEF | TSTATE_MCDE), %l0
>> +		.previous
>> +661:		nop
>> +		.section .sun_m7_1insn_patch, "ax"
>> +		.word	661b
>> +		.word 0xaf902001	/* wrpr %g0, 1, %pmcdper */
> 
> I commented on this on the last patch series revision.  PMCDPER could be
> set once when each CPU is configured rather than every time the kernel
> is entered.  Since it's never cleared, setting it repeatedly unnecessarily
> impacts the performance of etrap.

Yes, you did and I thought I had addressed it in that thread:

">> I considered that possibility. What made me uncomfortable with that 
is there is no way to prevent a driver/module or future code elsewhere 
in kernel from clearing PMCDPER with possibly good reason. If that were 
to happen, setting PMCDPER here ensures kernel will always see 
consistent behavior with system calls. It does come at a cost. Is that 
cost unacceptable to ensure consistent behavior?
> 
> Aren't you still at risk if the thread relinquishes the CPU while in the kernel and is then rescheduled on a CPU where PMCDPER has erroneously been left cleared?  You may need to save and restore PMCDPER as well as MCDPER on context switch, but I don't know if that will cover you completely.
> "

I should add setting PMCDPER to 1 in finish_arch_post_lock_switch() to 
address the possibility you had mentioned.

> 
> Also, there are places in rtrap where PSTATE is set before continuing
> execution in the kernel.  These should also be patched to set TSTATE_MCDE.
> 

I will find and fix those.

>> diff --git a/arch/sparc/kernel/setup_64.c b/arch/sparc/kernel/setup_64.c
>> index 422b17880955..a9da205da394 100644
>> --- a/arch/sparc/kernel/setup_64.c
>> +++ b/arch/sparc/kernel/setup_64.c
>> @@ -240,6 +240,12 @@ void sun4v_patch_1insn_range(struct sun4v_1insn_patch_entry *start,
>> 	}
>> }
>>
>> +void sun_m7_patch_1insn_range(struct sun4v_1insn_patch_entry *start,
>> +			     struct sun4v_1insn_patch_entry *end)
>> +{
>> +	sun4v_patch_1insn_range(start, end);
>> +}
>> +
>> void sun4v_patch_2insn_range(struct sun4v_2insn_patch_entry *start,
>> 			     struct sun4v_2insn_patch_entry *end)
>> {
>> @@ -289,9 +295,12 @@ static void __init sun4v_patch(void)
>> 	sun4v_patch_2insn_range(&__sun4v_2insn_patch,
>> 				&__sun4v_2insn_patch_end);
>> 	if (sun4v_chip_type == SUN4V_CHIP_SPARC_M7 ||
>> -	    sun4v_chip_type == SUN4V_CHIP_SPARC_SN)
>> +	    sun4v_chip_type == SUN4V_CHIP_SPARC_SN) {
>> +		sun_m7_patch_1insn_range(&__sun_m7_1insn_patch,
>> +					 &__sun_m7_1insn_patch_end);
>> 		sun_m7_patch_2insn_range(&__sun_m7_2insn_patch,
>> 					 &__sun_m7_2insn_patch_end);
> 
> Why not call sun4v_patch_1insn_range() and sun4v_patch_2insn_range()
> here instead of adding new functions that just call these functions?

Sounds reasonable, I can change that.

Thanks,
Khalid

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH v7 9/9] sparc64: Add support for ADI (Application Data Integrity)
@ 2017-08-30 22:27       ` Khalid Aziz
  0 siblings, 0 replies; 86+ messages in thread
From: Khalid Aziz @ 2017-08-30 22:27 UTC (permalink / raw)
  To: Anthony Yznaga
  Cc: David Miller, dave.hansen, corbet, Bob Picco, steven.sistare,
	pasha.tatashin, mike.kravetz, mingo, nitin.m.gupta,
	kirill.shutemov, tom.hromatka, eric.saint.etienne, allen.pais,
	cmetcalf, akpm, geert, tklauser, atish.patra, vijay.ac.kumar,
	peterz, mhocko, jack, lstoakes, hughd, thomas.tai,
	paul.gortmaker, ross.zwisler, dave.jiang, willy, ying.huang,
	zhongjiang, minchan, vegard.nossum, imbrenda, aneesh.kumar,
	aarcange, linux-doc, linux-kernel, sparclinux, linux-mm,
	Khalid Aziz

Hi Anthony,

Thanks for taking the time to provide feedback. My comments inline below.

On 08/25/2017 04:31 PM, Anthony Yznaga wrote:
> 
>> On Aug 9, 2017, at 2:26 PM, Khalid Aziz <khalid.aziz@oracle.com> wrote:
>> ......deleted......
>> +provided by the hypervisor to the kernel.  Kernel returns the value of
>> +ADI block size to userspace using auxiliary vector along with other ADI
>> +info. Following auxiliary vectors are provided by the kernel:
>> +
>> +	AT_ADI_BLKSZ	ADI block size. This is the granularity and
>> +			alignment, in bytes, of ADI versioning.
>> +	AT_ADI_NBITS	Number of ADI version bits in the VA
> 
> The previous patch series also defined AT_ADI_UEONADI.  Why was that
> removed?

This was based upon a conversation we had in which you mentioned that 
future processors may not implement this, or may change the way it is 
interpreted, and any applications depending upon this value would break 
at that point. I removed it to avoid building an unreliable dependency. 
If I misunderstood what you said, please let me know.

> 
>> +
>> +
>> +IMPORTANT NOTES:
>> +
>> +- Version tag values of 0x0 and 0xf are reserved.
> 
> The documentation should probably state more specifically that an
> in-memory tag value of 0x0 or 0xf is treated as "match all" by the HW
> meaning that a mismatch exception will never be generated regardless
> of the tag bits set in the VA accessing the memory.

Will do.

> 
>> +
>> +- Version tags are set on virtual addresses from userspace even though
>> +  tags are stored in physical memory. Tags are set on a physical page
>> +  after it has been allocated to a task and a pte has been created for
>> +  it.
>> +
>> +- When a task frees a memory page it had set version tags on, the page
>> +  goes back to free page pool. When this page is re-allocated to a task,
>> +  kernel clears the page using block initialization ASI which clears the
>> +  version tags as well for the page. If a page allocated to a task is
>> +  freed and allocated back to the same task, old version tags set by the
>> +  task on that page will no longer be present.
> 
> The specifics should be included here, too, so someone doesn't have
> to guess what's going on if they make changes and the tags are no longer
> cleared.  The HW clears the tag for a cacheline for block initializing
> stores to 64-byte aligned addresses if PSTATE.mcde=0 or TTE.mcd=0.
> PSTATE.mcde is set when executing in the kernel, but pages are cleared
> using kernel physical mapping VAs which are mapped with TTE.mcd=0.
> 
> Another HW behavior that should be mentioned is that tag mismatches
> are not detected for non-faulting loads.

Sure, I can add that.

> 
>> +
>> +- Kernel does not set any tags for user pages and it is entirely a
>> +  task's responsibility to set any version tags. Kernel does ensure the
>> +  version tags are preserved if a page is swapped out to the disk and
>> +  swapped back in. It also preserves that version tags if a page is
>> +  migrated.
> 
> I only have a cursory understanding of how page migration works, but
> I could not see how the tags would be preserved if a page were migrated.
> I figured the place to copy the tags would be migrate_page_copy(), but
> I don't see changes there.
> 
> 

For migrating user pages, the way I understand the code works is if the 
page is mapped (which is the only time ADI tags are even in place), 
try_to_unmap() is called with TTU_MIGRATION flag set. try_to_unmap() 
will call arch_unmap_one() which saves the tags from currently mapped 
page. When the new page has been allocated, contents of the old page are 
faulted in through do_swap_page() which will call arch_do_swap_page(). 
arch_do_swap_page() then restores the ADI tags.
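
In sketch form (hook signatures as used in this series; adi_save_tags()
and adi_restore_tags() stand in for the tag save/restore helpers, and
the bodies are illustrative only):

	/* try_to_unmap(TTU_MIGRATION) path: save the tags of the page
	 * being unmapped.
	 */
	int arch_unmap_one(struct mm_struct *mm, struct vm_area_struct *vma,
			   unsigned long addr, pte_t oldpte)
	{
		if (vma->vm_flags & VM_SPARC_ADI)
			return adi_save_tags(mm, vma, addr, oldpte);
		return 0;
	}

	/* do_swap_page() path: restore the saved tags into the newly
	 * faulted-in page.
	 */
	void arch_do_swap_page(struct mm_struct *mm, struct vm_area_struct *vma,
			       unsigned long addr, pte_t pte, pte_t oldpte)
	{
		if (vma->vm_flags & VM_SPARC_ADI)
			adi_restore_tags(mm, vma, addr, pte);
	}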


>> diff --git a/arch/sparc/include/asm/mman.h b/arch/sparc/include/asm/mman.h
>> index 59bb5938d852..b799796ad963 100644
>> --- a/arch/sparc/include/asm/mman.h
>> +++ b/arch/sparc/include/asm/mman.h
>> @@ -6,5 +6,75 @@
>> #ifndef __ASSEMBLY__
>> #define arch_mmap_check(addr,len,flags)	sparc_mmap_check(addr,len)
>> int sparc_mmap_check(unsigned long addr, unsigned long len);
>> -#endif
>> +
>> +#ifdef CONFIG_SPARC64
>> +#include <asm/adi_64.h>
>> +
>> +#define arch_calc_vm_prot_bits(prot, pkey) sparc_calc_vm_prot_bits(prot)
>> +static inline unsigned long sparc_calc_vm_prot_bits(unsigned long prot)
>> +{
>> +	if (prot & PROT_ADI) {
>> +		struct pt_regs *regs;
>> +
>> +		if (!current->mm->context.adi) {
>> +			regs = task_pt_regs(current);
>> +			regs->tstate |= TSTATE_MCDE;
>> +			current->mm->context.adi = true;
> 
> If a process is multi-threaded when it enables ADI on some memory for
> the first time, TSTATE_MCDE will only be set for the calling thread
> and it will not be possible to enable it for the other threads.
> One possible way to handle this is to enable TSTATE_MCDE for all user
> threads when they are initialized if adi_capable() returns true.
> 

Or set TSTATE_MCDE unconditionally here by removing "if 
(!current->mm->context.adi)"?
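
Roughly (sketch, same names as the quoted code):

	static inline unsigned long sparc_calc_vm_prot_bits(unsigned long prot)
	{
		if (prot & PROT_ADI) {
			/* Set TSTATE_MCDE for every caller, not just for
			 * the first thread in this mm to ask for PROT_ADI.
			 */
			task_pt_regs(current)->tstate |= TSTATE_MCDE;
			current->mm->context.adi = true;
			return VM_SPARC_ADI;
		}
		return 0;
	}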

> 
>> +		}
>> +		return VM_SPARC_ADI;
>> +	} else {
>> +		return 0;
>> +	}
>> +}
>> +
>> +#define arch_vm_get_page_prot(vm_flags) sparc_vm_get_page_prot(vm_flags)
>> +static inline pgprot_t sparc_vm_get_page_prot(unsigned long vm_flags)
>> +{
>> +	return (vm_flags & VM_SPARC_ADI) ? __pgprot(_PAGE_MCD_4V) : __pgprot(0);
>> +}
>> +
>> +#define arch_validate_prot(prot, addr) sparc_validate_prot(prot, addr)
>> +static inline int sparc_validate_prot(unsigned long prot, unsigned long addr)
>> +{
>> +	if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_ADI))
>> +		return 0;
>> +	if (prot & PROT_ADI) {
>> +		if (!adi_capable())
>> +			return 0;
>> +
>> +		/* ADI tags can not be set on read-only memory, so it makes
>> +		 * sense to enable ADI on writable memory only.
>> +		 */
>> +		if (!(prot & PROT_WRITE))
>> +			return 0;
> 
> This prevents the use of ADI for the legitimate case where shared memory
> is mapped read/write for a master process but mapped read-only for a
> client process.  The master process could set the tags and communicate
> the expected tag values to the client.

A non-writable mapping can access the shared memory using non-ADI tagged 
addresses if it does not enable ADI on its mappings, so it is 
superfluous to even allow enabling ADI. I can remove this if that helps 
any use cases that wouldn't work with the above condition.
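
Dropping the restriction would reduce sparc_validate_prot() to
something like this (sketch):

	static inline int sparc_validate_prot(unsigned long prot, unsigned long addr)
	{
		if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM |
			     PROT_ADI))
			return 0;
		/* PROT_ADI still requires ADI capable hardware, but no
		 * longer requires PROT_WRITE.
		 */
		if ((prot & PROT_ADI) && !adi_capable())
			return 0;
		return 1;
	}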

>> +tag_storage_desc_t *alloc_tag_store(struct mm_struct *mm,
>> +				    struct vm_area_struct *vma,
>> +				    unsigned long addr)
>> +{
>> +	unsigned char *tags;
>> +	unsigned long i, size, max_desc, flags;
>> +	tag_storage_desc_t *tag_desc, *open_desc;
>> +	unsigned long end_addr, hole_start, hole_end;
>> +
>> +	max_desc = PAGE_SIZE/sizeof(tag_storage_desc_t);
>> +	open_desc = NULL;
>> +	hole_start = 0;
>> +	hole_end = ULONG_MAX;
>> +	end_addr = addr + PAGE_SIZE - 1;
>> +
>> +	/* Check if this vma already has tag storage descriptor
>> +	 * allocated for it.
>> +	 */
>> +	spin_lock_irqsave(&mm->context.tag_lock, flags);
>> +	if (mm->context.tag_store) {
>> +		tag_desc = mm->context.tag_store;
>> +
>> +		/* Look for a matching entry for this address. While doing
>> +		 * that, look for the first open slot as well and find
>> +		 * the hole in already allocated range where this request
>> +		 * will fit in.
>> +		 */
>> +		for (i = 0; i < max_desc; i++) {
>> +			if (tag_desc->tag_users == 0) {
>> +				if (open_desc == NULL)
>> +					open_desc = tag_desc;
>> +			} else {
>> +				if ((addr >= tag_desc->start) &&
>> +				    (tag_desc->end >= (addr + PAGE_SIZE - 1))) {
>> +					tag_desc->tag_users++;
>> +					goto out;
>> +				}
>> +			}
>> +			if ((tag_desc->start > end_addr) &&
>> +			    (tag_desc->start < hole_end))
>> +				hole_end = tag_desc->start;
>> +			if ((tag_desc->end < addr) &&
>> +			    (tag_desc->end > hole_start))
>> +				hole_start = tag_desc->end;
>> +			tag_desc++;
>> +		}
>> +
>> +	} else {
>> +		size = sizeof(tag_storage_desc_t)*max_desc;
>> +		mm->context.tag_store = kzalloc(size, GFP_NOIO|__GFP_NOWARN);
> 
> The spin_lock_irqsave() above means that all but level 15 interrupts
> will be disabled when kzalloc() is called.  If kzalloc() can sleep
> there's a risk of deadlock.

I could call kzalloc() with GFP_NOWAIT instead of GFP_NOIO. Would that 
address the risk of deadlock?
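
That is, something like this at the call site (sketch; GFP_NOWAIT never
sleeps, so it is safe under spin_lock_irqsave() at the cost of failing
more readily):

	/* mm->context.tag_lock is held with interrupts disabled here,
	 * so this allocation must not sleep. A NULL return falls
	 * through to the existing "no descriptor" handling.
	 */
	mm->context.tag_store = kzalloc(size, GFP_NOWAIT | __GFP_NOWARN);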

> 
> 
>> +		if (mm->context.tag_store == NULL) {
>> +			tag_desc = NULL;
>> +			goto out;
>> +		}
>> +		tag_desc = mm->context.tag_store;
>> +		for (i = 0; i < max_desc; i++, tag_desc++)
>> +			tag_desc->tag_users = 0;
>> +		open_desc = mm->context.tag_store;
>> +		i = 0;
>> +	}
>> +
>> +	/* Check if we ran out of tag storage descriptors */
>> +	if (open_desc == NULL) {
>> +		tag_desc = NULL;
>> +		goto out;
>> +	}
>> +
>> +	/* Mark this tag descriptor slot in use and then initialize it */
>> +	tag_desc = open_desc;
>> +	tag_desc->tag_users = 1;
>> +
>> +	/* Tag storage has not been allocated for this vma and space
>> +	 * is available in tag storage descriptor. Since this page is
>> +	 * being swapped out, there is high probability subsequent pages
>> +	 * in the VMA will be swapped out as well. Allocates pages to
>> +	 * store tags for as many pages in this vma as possible but not
>> +	 * more than TAG_STORAGE_PAGES. Each byte in tag space holds
>> +	 * two ADI tags since each ADI tag is 4 bits. Each ADI tag
>> +	 * covers adi_blksize() worth of addresses. Check if the hole is
>> +	 * big enough to accommodate full address range for using
>> +	 * TAG_STORAGE_PAGES number of tag pages.
>> +	 */
>> +	size = TAG_STORAGE_PAGES * PAGE_SIZE;
>> +	end_addr = addr + (size*2*adi_blksize()) - 1;
> 
> Since size > PAGE_SIZE, end_addr could theoretically overflow.
> 
>> +	if (hole_end < end_addr) {
>> +		/* Available hole is too small on the upper end of
>> +		 * address. Can we expand the range towards the lower
>> +		 * address and maximize use of this slot?
>> +		 */
>> +		unsigned long tmp_addr;
>> +
>> +		end_addr = hole_end - 1;
>> +		tmp_addr = end_addr - (size*2*adi_blksize()) + 1;
> 
> Similarly, tmp_addr may underflow.

I will add checks for these two.
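
For example (sketch; "span" is a new local introduced here for
clarity):

	unsigned long span = size * 2 * adi_blksize();

	end_addr = addr + span - 1;
	if (end_addr < addr) {
		/* The addition wrapped past ULONG_MAX. Release the
		 * descriptor claimed above and fail the request.
		 */
		tag_desc->tag_users = 0;
		tag_desc = NULL;
		goto out;
	}

	/* ... and in the "hole_end < end_addr" branch: */
	end_addr = hole_end - 1;
	if (end_addr < span - 1)
		tmp_addr = hole_start + 1;	/* subtraction would wrap */
	else
		tmp_addr = end_addr - (span - 1);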

> 
>> +		if (tmp_addr < hole_start) {
>> +			/* Available hole is restricted on lower address
>> +			 * end as well
>> +			 */
>> +			tmp_addr = hole_start + 1;
>> +		}
>> +		addr = tmp_addr;
>> +		size = (end_addr + 1 - addr)/(2*adi_blksize());
>> +		size = (size + (PAGE_SIZE-adi_blksize()))/PAGE_SIZE;
>> +		size = size * PAGE_SIZE;
>> +	}
>> +	tags = kzalloc(size, GFP_NOIO|__GFP_NOWARN);
> 
> Potential deadlock due to PIL=14?

Same as above - call kzalloc() with GFP_NOWAIT?

>> diff --git a/arch/sparc/kernel/etrap_64.S b/arch/sparc/kernel/etrap_64.S
>> index 1276ca2567ba..7be33bf45cff 100644
>> --- a/arch/sparc/kernel/etrap_64.S
>> +++ b/arch/sparc/kernel/etrap_64.S
>> @@ -132,7 +132,33 @@ etrap_save:	save	%g2, -STACK_BIAS, %sp
>> 		stx	%g6, [%sp + PTREGS_OFF + PT_V9_G6]
>> 		stx	%g7, [%sp + PTREGS_OFF + PT_V9_G7]
>> 		or	%l7, %l0, %l7
>> -		sethi	%hi(TSTATE_TSO | TSTATE_PEF), %l0
>> +661:		sethi	%hi(TSTATE_TSO | TSTATE_PEF), %l0
>> +		/*
>> +		 * If userspace is using ADI, it could potentially pass
>> +		 * a pointer with version tag embedded in it. To maintain
>> +		 * the ADI security, we must enable PSTATE.mcde. Userspace
>> +		 * would have already set TTE.mcd in an earlier call to
>> +		 * kernel and set the version tag for the address being
>> +		 * dereferenced. Setting PSTATE.mcde would ensure any
>> +		 * access to userspace data through a system call honors
>> +		 * ADI and does not allow a rogue app to bypass ADI by
>> +		 * using system calls. Setting PSTATE.mcde only affects
>> +		 * accesses to virtual addresses that have TTE.mcd set.
>> +		 * Set PMCDPER to ensure any exceptions caused by ADI
>> +		 * version tag mismatch are exposed before system call
>> +		 * returns to userspace. Setting PMCDPER affects only
>> +		 * writes to virtual addresses that have TTE.mcd set and
>> +		 * have a version tag set as well.
>> +		 */
>> +		.section .sun_m7_1insn_patch, "ax"
>> +		.word	661b
>> +		sethi	%hi(TSTATE_TSO | TSTATE_PEF | TSTATE_MCDE), %l0
>> +		.previous
>> +661:		nop
>> +		.section .sun_m7_1insn_patch, "ax"
>> +		.word	661b
>> +		.word 0xaf902001	/* wrpr %g0, 1, %pmcdper */
> 
> I commented on this on the last patch series revision.  PMCDPER could be
> set once when each CPU is configured rather than every time the kernel
> is entered.  Since it's never cleared, setting it repeatedly unnecessarily
> impacts the performance of etrap.

Yes, you did and I thought I had addressed it in that thread:

">> I considered that possibility. What made me uncomfortable with that 
is there is no way to prevent a driver/module or future code elsewhere 
in kernel from clearing PMCDPER with possibly good reason. If that were 
to happen, setting PMCDPER here ensures kernel will always see 
consistent behavior with system calls. It does come at a cost. Is that 
cost unacceptable to ensure consistent behavior?
> 
> Aren't you still at risk if the thread relinquishes the CPU while in the kernel and is then rescheduled on a CPU where PMCDPER has erroneously been left cleared?  You may need to save and restore PMCDPER as well as MCDPER on context switch, but I don't know if that will cover you completely.
> "

I should add setting PMCDPER to 1 in finish_arch_post_lock_switch() to 
address the possibility you had mentioned.
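
Along these lines (sketch, reusing the opcode encoding from the etrap
hunk above and the adi_capable() check used elsewhere in the series):

	void finish_arch_post_lock_switch(void)
	{
		/* Re-assert PMCDPER for the incoming thread so that a
		 * CPU where it was erroneously cleared still raises
		 * precise ADI exceptions before returning to userspace.
		 */
		if (adi_capable())
			__asm__ __volatile__(
				".word 0xaf902001"  /* wrpr %g0, 1, %pmcdper */
				: : : "memory");
	}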

> 
> Also, there are places in rtrap where PSTATE is set before continuing
> execution in the kernel.  These should also be patched to set TSTATE_MCDE.
> 

I will find and fix those.

>> diff --git a/arch/sparc/kernel/setup_64.c b/arch/sparc/kernel/setup_64.c
>> index 422b17880955..a9da205da394 100644
>> --- a/arch/sparc/kernel/setup_64.c
>> +++ b/arch/sparc/kernel/setup_64.c
>> @@ -240,6 +240,12 @@ void sun4v_patch_1insn_range(struct sun4v_1insn_patch_entry *start,
>> 	}
>> }
>>
>> +void sun_m7_patch_1insn_range(struct sun4v_1insn_patch_entry *start,
>> +			     struct sun4v_1insn_patch_entry *end)
>> +{
>> +	sun4v_patch_1insn_range(start, end);
>> +}
>> +
>> void sun4v_patch_2insn_range(struct sun4v_2insn_patch_entry *start,
>> 			     struct sun4v_2insn_patch_entry *end)
>> {
>> @@ -289,9 +295,12 @@ static void __init sun4v_patch(void)
>> 	sun4v_patch_2insn_range(&__sun4v_2insn_patch,
>> 				&__sun4v_2insn_patch_end);
>> 	if (sun4v_chip_type == SUN4V_CHIP_SPARC_M7 ||
>> -	    sun4v_chip_type == SUN4V_CHIP_SPARC_SN)
>> +	    sun4v_chip_type == SUN4V_CHIP_SPARC_SN) {
>> +		sun_m7_patch_1insn_range(&__sun_m7_1insn_patch,
>> +					 &__sun_m7_1insn_patch_end);
>> 		sun_m7_patch_2insn_range(&__sun_m7_2insn_patch,
>> 					 &__sun_m7_2insn_patch_end);
> 
> Why not call sun4v_patch_1insn_range() and sun4v_patch_2insn_range()
> here instead of adding new functions that just call these functions?

Sounds reasonable, I can change that.

Thanks,
Khalid


^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH v7 9/9] sparc64: Add support for ADI (Application Data Integrity)
  2017-08-30 22:27       ` Khalid Aziz
@ 2017-08-30 22:38         ` David Miller
  -1 siblings, 0 replies; 86+ messages in thread
From: David Miller @ 2017-08-30 22:38 UTC (permalink / raw)
  To: khalid.aziz
  Cc: anthony.yznaga, dave.hansen, corbet, bob.picco, steven.sistare,
	pasha.tatashin, mike.kravetz, mingo, nitin.m.gupta,
	kirill.shutemov, tom.hromatka, eric.saint.etienne, allen.pais,
	cmetcalf, akpm, geert, tklauser, atish.patra, vijay.ac.kumar,
	peterz, mhocko, jack, lstoakes, hughd, thomas.tai,
	paul.gortmaker, ross.zwisler, dave.jiang, willy, ying.huang,
	zhongjiang, minchan, vegard.nossum, imbrenda, aneesh.kumar,
	aarcange, linux-doc, linux-kernel, sparclinux, linux-mm, khalid

From: Khalid Aziz <khalid.aziz@oracle.com>
Date: Wed, 30 Aug 2017 16:27:54 -0600

>>> +#define arch_calc_vm_prot_bits(prot, pkey)
>>> sparc_calc_vm_prot_bits(prot)
>>> +static inline unsigned long sparc_calc_vm_prot_bits(unsigned long
>>> prot)
>>> +{
>>> +	if (prot & PROT_ADI) {
>>> +		struct pt_regs *regs;
>>> +
>>> +		if (!current->mm->context.adi) {
>>> +			regs = task_pt_regs(current);
>>> +			regs->tstate |= TSTATE_MCDE;
>>> +			current->mm->context.adi = true;
>> If a process is multi-threaded when it enables ADI on some memory for
>> the first time, TSTATE_MCDE will only be set for the calling thread
>> and it will not be possible to enable it for the other threads.
>> One possible way to handle this is to enable TSTATE_MCDE for all user
>> threads when they are initialized if adi_capable() returns true.
>> 
> 
> Or set TSTATE_MCDE unconditionally here by removing "if
> (!current->mm->context.adi)"?

I think you have to make "ADI enabled" a property of the mm_struct.

Then you can broadcast to mm->cpu_vm_mask a per-cpu interrupt that
updates regs->tstate of any thread using 'mm' that is currently executing.

And in the context switch code you set TSTATE_MCDE if it's not set
already.

That should cover all threaded cases.
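
Roughly (sketch; the helper names are made up for illustration):

	/* Runs via IPI on every CPU in mm_cpumask(mm) and flips
	 * TSTATE_MCDE in the saved registers of the interrupted thread
	 * if that thread belongs to the mm that just enabled ADI.
	 */
	static void adi_set_mcde_ipi(void *info)
	{
		struct mm_struct *mm = info;

		if (current->mm == mm)
			task_pt_regs(current)->tstate |= TSTATE_MCDE;
	}

	static void adi_enable_for_mm(struct mm_struct *mm)
	{
		mm->context.adi = true;	/* per-address-space property */
		/* smp_call_function_many() skips the calling CPU, which
		 * updates its own pt_regs directly.
		 */
		task_pt_regs(current)->tstate |= TSTATE_MCDE;
		smp_call_function_many(mm_cpumask(mm), adi_set_mcde_ipi,
				       mm, 1);
	}

with the context switch code ORing TSTATE_MCDE into the saved tstate
whenever mm->context.adi is set.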

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH v7 9/9] sparc64: Add support for ADI (Application Data Integrity)
  2017-08-30 22:38         ` David Miller
@ 2017-08-30 23:23           ` Khalid Aziz
  -1 siblings, 0 replies; 86+ messages in thread
From: Khalid Aziz @ 2017-08-30 23:23 UTC (permalink / raw)
  To: David Miller
  Cc: anthony.yznaga, dave.hansen, corbet, bob.picco, steven.sistare,
	pasha.tatashin, mike.kravetz, mingo, nitin.m.gupta,
	kirill.shutemov, tom.hromatka, eric.saint.etienne, allen.pais,
	cmetcalf, akpm, geert, tklauser, atish.patra, vijay.ac.kumar,
	peterz, mhocko, jack, lstoakes, hughd, thomas.tai,
	paul.gortmaker, ross.zwisler, dave.jiang, willy, ying.huang,
	zhongjiang, minchan, vegard.nossum, imbrenda, aneesh.kumar,
	aarcange, linux-doc, linux-kernel, sparclinux, linux-mm, khalid

On 08/30/2017 04:38 PM, David Miller wrote:
> From: Khalid Aziz <khalid.aziz@oracle.com>
> Date: Wed, 30 Aug 2017 16:27:54 -0600
> 
>>>> +#define arch_calc_vm_prot_bits(prot, pkey)
>>>> sparc_calc_vm_prot_bits(prot)
>>>> +static inline unsigned long sparc_calc_vm_prot_bits(unsigned long
>>>> prot)
>>>> +{
>>>> +	if (prot & PROT_ADI) {
>>>> +		struct pt_regs *regs;
>>>> +
>>>> +		if (!current->mm->context.adi) {
>>>> +			regs = task_pt_regs(current);
>>>> +			regs->tstate |= TSTATE_MCDE;
>>>> +			current->mm->context.adi = true;
>>> If a process is multi-threaded when it enables ADI on some memory for
>>> the first time, TSTATE_MCDE will only be set for the calling thread
>>> and it will not be possible to enable it for the other threads.
>>> One possible way to handle this is to enable TSTATE_MCDE for all user
>>> threads when they are initialized if adi_capable() returns true.
>>>
>>
>> Or set TSTATE_MCDE unconditionally here by removing "if
>> (!current->mm->context.adi)"?
> 
> I think you have to make "ADI enabled" a property of the mm_struct.
> 
> Then you can broadcast to mm->cpu_vm_mask a per-cpu interrupt that
> updates regs->tstate of any thread using 'mm' that is currently executing.
> 
> And in the context switch code you set TSTATE_MCDE if it's not set
> already.
> 
> That should cover all threaded cases.

That is an interesting idea. This would enable TSTATE_MCDE on all 
threads of a process as soon as one thread enables it. If we consider 
the case where the parent creates a shared memory area and spawns a 
bunch of threads. These threads access the shared memory without ADI 
enabled. Now one of the threads decides to enable ADI on the shared 
memory. As soon as it does that, we enable TSTATE_MCDE across all 
threads and since threads are all using the same TTE for the shared 
memory, every thread becomes subject to ADI verification. If one of the 
other threads is in the middle of accessing the shared memory, it will 
get a SIGSEGV. If we did not enable TSTATE_MCDE across all threads, it 
could have continued execution without fault. In other words, updating 
TSTATE_MCDE across all threads will eliminate the option of running some 
threads with ADI enabled and some not while accessing the same shared 
memory. This could be necessary at least for short periods of time 
before threads can communicate with each other and all switch to 
accessing shared memory with ADI enabled using the same tag. Does that sound 
like a valid use case or am I off in the weeds here?

Thanks,
Khalid

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH v7 9/9] sparc64: Add support for ADI (Application Data Integrity)
  2017-08-30 23:23           ` Khalid Aziz
@ 2017-08-31  0:09             ` David Miller
  -1 siblings, 0 replies; 86+ messages in thread
From: David Miller @ 2017-08-31  0:09 UTC (permalink / raw)
  To: khalid.aziz
  Cc: anthony.yznaga, dave.hansen, corbet, bob.picco, steven.sistare,
	pasha.tatashin, mike.kravetz, mingo, nitin.m.gupta,
	kirill.shutemov, tom.hromatka, eric.saint.etienne, allen.pais,
	cmetcalf, akpm, geert, tklauser, atish.patra, vijay.ac.kumar,
	peterz, mhocko, jack, lstoakes, hughd, thomas.tai,
	paul.gortmaker, ross.zwisler, dave.jiang, willy, ying.huang,
	zhongjiang, minchan, vegard.nossum, imbrenda, aneesh.kumar,
	aarcange, linux-doc, linux-kernel, sparclinux, linux-mm, khalid

From: Khalid Aziz <khalid.aziz@oracle.com>
Date: Wed, 30 Aug 2017 17:23:37 -0600

> That is an interesting idea. This would enable TSTATE_MCDE on all
> threads of a process as soon as one thread enables it. If we consider
> the case where the parent creates a shared memory area and spawns a
> bunch of threads. These threads access the shared memory without ADI
> enabled. Now one of the threads decides to enable ADI on the shared
> memory. As soon as it does that, we enable TSTATE_MCDE across all
> threads and since threads are all using the same TTE for the shared
> memory, every thread becomes subject to ADI verification. If one of
> the other threads is in the middle of accessing the shared memory, it
> will get a SIGSEGV. If we did not enable TSTATE_MCDE across all
> threads, it could have continued execution without fault. In other
> words, updating TSTATE_MCDE across all threads will eliminate the
> option of running some threads with ADI enabled and some not while
> accessing the same shared memory. This could be necessary at least for
> short periods of time before threads can communicate with each other
> and all switch to accessing shared memory with ADI enabled using the
> same tag. Does that sound like a valid use case or am I off in the weeds
> here?

A threaded application needs to synchronize and properly orchestrate
access to shared memory.

When a change is made to a mapping, in this case setting ADI
attributes, it's being done for the address space not the thread.

And the address space is shared amongst threads.

Therefore ADI is not really a per-thread property but rather
a per-address-space property.

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH v7 9/9] sparc64: Add support for ADI (Application Data Integrity)
  2017-08-31  0:09             ` David Miller
@ 2017-08-31 16:38               ` Khalid Aziz
  -1 siblings, 0 replies; 86+ messages in thread
From: Khalid Aziz @ 2017-08-31 16:38 UTC (permalink / raw)
  To: David Miller
  Cc: anthony.yznaga, dave.hansen, corbet, bob.picco, steven.sistare,
	pasha.tatashin, mike.kravetz, mingo, nitin.m.gupta,
	kirill.shutemov, tom.hromatka, eric.saint.etienne, allen.pais,
	cmetcalf, akpm, geert, tklauser, atish.patra, vijay.ac.kumar,
	peterz, mhocko, jack, lstoakes, hughd, thomas.tai,
	paul.gortmaker, ross.zwisler, dave.jiang, willy, ying.huang,
	zhongjiang, minchan, vegard.nossum, imbrenda, aneesh.kumar,
	aarcange, linux-doc, linux-kernel, sparclinux, linux-mm, khalid

On 08/30/2017 06:09 PM, David Miller wrote:
> From: Khalid Aziz <khalid.aziz@oracle.com>
> Date: Wed, 30 Aug 2017 17:23:37 -0600
> 
>> That is an interesting idea. This would enable TSTATE_MCDE on all
>> threads of a process as soon as one thread enables it. If we consider
>> the case where the parent creates a shared memory area and spawns a
>> bunch of threads. These threads access the shared memory without ADI
>> enabled. Now one of the threads decides to enable ADI on the shared
>> memory. As soon as it does that, we enable TSTATE_MCDE across all
>> threads and since threads are all using the same TTE for the shared
>> memory, every thread becomes subject to ADI verification. If one of
>> the other threads is in the middle of accessing the shared memory, it
>> will get a SIGSEGV. If we did not enable TSTATE_MCDE across all
>> threads, it could have continued execution without fault. In other
>> words, updating TSTATE_MCDE across all threads will eliminate the
>> option of running some threads with ADI enabled and some not while
>> accessing the same shared memory. This could be necessary at least for
>> short periods of time before threads can communicate with each other
>> and all switch to accessing shared memory with ADI enabled using the
>> same tag. Does that sound like a valid use case or am I off in the weeds
>> here?
> 
> A threaded application needs to synchronize and properly orchestrate
> access to shared memory.
> 
> When a change is made to a mapping, in this case setting ADI
> attributes, it's being done for the address space not the thread.
> 
> And the address space is shared amongst threads.
> 
> Therefore ADI is not really a per-thread property but rather
> a per-address-space property.
> 

That does make sense.

Thanks,
Khalid

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH v7 9/9] sparc64: Add support for ADI (Application Data Integrity)
  2017-08-30 22:27       ` Khalid Aziz
@ 2017-09-01  5:38         ` Anthony Yznaga
  -1 siblings, 0 replies; 86+ messages in thread
From: Anthony Yznaga @ 2017-09-01  5:38 UTC (permalink / raw)
  To: Khalid Aziz
  Cc: David Miller, dave.hansen, corbet, Bob Picco, steven.sistare,
	pasha.tatashin, mike.kravetz, mingo, nitin.m.gupta,
	kirill.shutemov, tom.hromatka, eric.saint.etienne, allen.pais,
	cmetcalf, akpm, geert, tklauser, atish.patra, vijay.ac.kumar,
	peterz, mhocko, jack, lstoakes, hughd, thomas.tai,
	paul.gortmaker, ross.zwisler, dave.jiang, willy, ying.huang,
	zhongjiang, minchan, vegard.nossum, imbrenda, aneesh.kumar,
	aarcange, linux-doc, linux-kernel, sparclinux, linux-mm,
	Khalid Aziz

Hi Khalid,

> On Aug 30, 2017, at 3:27 PM, Khalid Aziz <khalid.aziz@Oracle.com> wrote:
> 
> Hi Anthony,
> 
> Thanks for taking the time to provide feedback. My comments inline below.
> 
> On 08/25/2017 04:31 PM, Anthony Yznaga wrote:
>>> On Aug 9, 2017, at 2:26 PM, Khalid Aziz <khalid.aziz@oracle.com> wrote:
>>> ......deleted......
>>> +provided by the hypervisor to the kernel.  Kernel returns the value of
>>> +ADI block size to userspace using auxiliary vector along with other ADI
>>> +info. Following auxiliary vectors are provided by the kernel:
>>> +
>>> +	AT_ADI_BLKSZ	ADI block size. This is the granularity and
>>> +			alignment, in bytes, of ADI versioning.
>>> +	AT_ADI_NBITS	Number of ADI version bits in the VA
>> The previous patch series also defined AT_ADI_UEONADI.  Why was that
>> removed?
> 
> This was based upon a conversation we had in which you mentioned that future processors may not implement this, or may change the way it is interpreted, and any applications depending upon this value would break at that point. I removed it to avoid building an unreliable dependency. If I misunderstood what you said, please let me know.

On M7 there is an array of versions maintained for cachelines in the L2
cache. If a UE is detected in this array it results in the flush of all
eight ways of the array.  Clean lines go away, but dirty lines are
written back to memory with the version forced to 0xE.  The ue-on-adp MD
property communicates this tag value that may result from a UE in order
to give the guest the opportunity to avoid using the tag value.  An
application that intentionally used ADI in a way that relied on ADI
exceptions for its functionality may not want to have to consider
whether the mismatch was legitimate or due to a UE.

On M8 the HW implementation is changed and a tag value will never be
forced to another value.  That said, I think the ue-on-adp property
value was unfortunately inadvertently carried forward to M8.

It could probably be argued that the likelihood of seeing the UE is so
low that SW can ignore the possibility, but including the information
in an auxvec shouldn't break anything.
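
For what it's worth, a userspace consumer would read all three vectors
the same way. A minimal sketch (the AT_ADI_* numbers below are
placeholders; the real values come from the sparc uapi auxvec header):

	#include <stdio.h>
	#include <sys/auxv.h>

	#ifndef AT_ADI_BLKSZ
	#define AT_ADI_BLKSZ	48	/* placeholder values */
	#define AT_ADI_NBITS	49
	#define AT_ADI_UEONADI	50
	#endif

	int main(void)
	{
		printf("ADI block size : %lu\n", getauxval(AT_ADI_BLKSZ));
		printf("ADI VA bits    : %lu\n", getauxval(AT_ADI_NBITS));
		printf("UE-on-ADI tag  : %lu\n", getauxval(AT_ADI_UEONADI));
		return 0;
	}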


> 
>>> +
>>> +
>>> +IMPORTANT NOTES:
>>> +
>>> +- Version tag values of 0x0 and 0xf are reserved.
>> The documentation should probably state more specifically that an
>> in-memory tag value of 0x0 or 0xf is treated as "match all" by the HW
>> meaning that a mismatch exception will never be generated regardless
>> of the tag bits set in the VA accessing the memory.
> 
> Will do.
> 
>>> +
>>> +- Version tags are set on virtual addresses from userspace even though
>>> +  tags are stored in physical memory. Tags are set on a physical page
>>> +  after it has been allocated to a task and a pte has been created for
>>> +  it.
>>> +
>>> +- When a task frees a memory page it had set version tags on, the page
>>> +  goes back to free page pool. When this page is re-allocated to a task,
>>> +  kernel clears the page using block initialization ASI which clears the
>>> +  version tags as well for the page. If a page allocated to a task is
>>> +  freed and allocated back to the same task, old version tags set by the
>>> +  task on that page will no longer be present.
>> The specifics should be included here, too, so someone doesn't have
>> to guess what's going on if they make changes and the tags are no longer
>> cleared.  The HW clears the tag for a cacheline for block initializing
>> stores to 64-byte aligned addresses if PSTATE.mcde=0 or TTE.mcd=0.
>> PSTATE.mcde is set when executing in the kernel, but pages are cleared
>> using kernel physical mapping VAs which are mapped with TTE.mcd=0.
>> Another HW behavior that should be mentioned is that tag mismatches
>> are not detected for non-faulting loads.
> 
> Sure, I can add that.
> 
>>> +
>>> +- Kernel does not set any tags for user pages and it is entirely a
>>> +  task's responsibility to set any version tags. Kernel does ensure the
>>> +  version tags are preserved if a page is swapped out to the disk and
>>> +  swapped back in. It also preserves the version tags if a page is
>>> +  migrated.
>> I only have a cursory understanding of how page migration works, but
>> I could not see how the tags would be preserved if a page were migrated.
>> I figured the place to copy the tags would be migrate_page_copy(), but
>> I don't see changes there.
> 
> For migrating user pages, the way I understand the code works is if
> the page is mapped (which is the only time ADI tags are even in
> place), try_to_unmap() is called with the TTU_MIGRATION flag set.
> try_to_unmap() will call arch_unmap_one() which saves the tags from
> the currently mapped page. When the new page has been allocated,
> contents of the old page are faulted in through do_swap_page() which
> will call arch_do_swap_page(). arch_do_swap_page() then restores the
> ADI tags.

My understanding from reading the code is that __unmap_and_move() calls
try_to_unmap() which unmaps the page and installs a migration pte.
move_to_new_page() is then called which copies the data.  Finally,
remove_migration_ptes() is called which removes the migration pte and
installs an updated regular pte.  If a fault on the page happens while
the migration pte is installed, do_swap_page() is called and the
faulting thread waits for the migration to complete before proceeding. 
However, if no fault happens before the migration completes, a regular
pte will be found by the next fault and do_swap_page() will not be
called.
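
A sketch of the alternative implied by the review, copying the tags at
data-copy time so no fault is required (adi_copy_tags() is a
hypothetical helper, not something this series provides):

    /* Hypothetical placement sketch: copy saved ADI tags alongside
     * the page contents during migration.
     */
    void migrate_page_copy(struct page *newpage, struct page *page)
    {
            copy_highpage(newpage, page);
            /* ... existing page flag and state copying ... */
            if (adi_capable())
                    adi_copy_tags(newpage, page);   /* hypothetical */
    }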


> 
> 
>>> diff --git a/arch/sparc/include/asm/mman.h b/arch/sparc/include/asm/mman.h
>>> index 59bb5938d852..b799796ad963 100644
>>> --- a/arch/sparc/include/asm/mman.h
>>> +++ b/arch/sparc/include/asm/mman.h
>>> @@ -6,5 +6,75 @@
>>> #ifndef __ASSEMBLY__
>>> #define arch_mmap_check(addr,len,flags)	sparc_mmap_check(addr,len)
>>> int sparc_mmap_check(unsigned long addr, unsigned long len);
>>> -#endif
>>> +
>>> +#ifdef CONFIG_SPARC64
>>> +#include <asm/adi_64.h>
>>> +
>>> +#define arch_calc_vm_prot_bits(prot, pkey) sparc_calc_vm_prot_bits(prot)
>>> +static inline unsigned long sparc_calc_vm_prot_bits(unsigned long prot)
>>> +{
>>> +	if (prot & PROT_ADI) {
>>> +		struct pt_regs *regs;
>>> +
>>> +		if (!current->mm->context.adi) {
>>> +			regs = task_pt_regs(current);
>>> +			regs->tstate |= TSTATE_MCDE;
>>> +			current->mm->context.adi = true;
>> If a process is multi-threaded when it enables ADI on some memory for
>> the first time, TSTATE_MCDE will only be set for the calling thread
>> and it will not be possible to enable it for the other threads.
>> One possible way to handle this is to enable TSTATE_MCDE for all user
>> threads when they are initialized if adi_capable() returns true.
> 
> Or set TSTATE_MCDE unconditionally here by removing "if (!current->mm->context.adi)"?
> 
>>> +		}
>>> +		return VM_SPARC_ADI;
>>> +	} else {
>>> +		return 0;
>>> +	}
>>> +}
>>> +
>>> +#define arch_vm_get_page_prot(vm_flags) sparc_vm_get_page_prot(vm_flags)
>>> +static inline pgprot_t sparc_vm_get_page_prot(unsigned long vm_flags)
>>> +{
>>> +	return (vm_flags & VM_SPARC_ADI) ? __pgprot(_PAGE_MCD_4V) : __pgprot(0);
>>> +}
>>> +
>>> +#define arch_validate_prot(prot, addr) sparc_validate_prot(prot, addr)
>>> +static inline int sparc_validate_prot(unsigned long prot, unsigned long addr)
>>> +{
>>> +	if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_ADI))
>>> +		return 0;
>>> +	if (prot & PROT_ADI) {
>>> +		if (!adi_capable())
>>> +			return 0;
>>> +
>>> +		/* ADI tags can not be set on read-only memory, so it makes
>>> +		 * sense to enable ADI on writable memory only.
>>> +		 */
>>> +		if (!(prot & PROT_WRITE))
>>> +			return 0;
>> This prevents the use of ADI for the legitimate case where shared memory
>> is mapped read/write for a master process but mapped read-only for a
>> client process.  The master process could set the tags and communicate
>> the expected tag values to the client.
> 
> A non-writable mapping can access the shared memory using non-ADI
> tagged addresses if it does not enable ADI on its mappings, so it is
> superfluous to even allow enabling ADI. I can remove this if that
> helps any use cases that wouldn't work with the above condition.

Allowing ADI to be enabled on read-only shared memory leaves the option
open to set up ADI in a way to detect unintended accesses that might
otherwise be missed.
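
For reference, the userspace call under discussion is a plain
mprotect(); a minimal sketch, assuming PROT_ADI is taken from the uapi
headers added by this series:

    #include <sys/mman.h>
    #include <asm/mman.h>   /* PROT_ADI, added by this series */

    /* Enable ADI checking on an existing mapping; the application
     * then sets version tags on the enabled range itself.
     */
    int enable_adi(void *addr, size_t len)
    {
            return mprotect(addr, len, PROT_READ | PROT_WRITE | PROT_ADI);
    }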


> 
>>> +tag_storage_desc_t *alloc_tag_store(struct mm_struct *mm,
>>> +				    struct vm_area_struct *vma,
>>> +				    unsigned long addr)
>>> +{
>>> +	unsigned char *tags;
>>> +	unsigned long i, size, max_desc, flags;
>>> +	tag_storage_desc_t *tag_desc, *open_desc;
>>> +	unsigned long end_addr, hole_start, hole_end;
>>> +
>>> +	max_desc = PAGE_SIZE/sizeof(tag_storage_desc_t);
>>> +	open_desc = NULL;
>>> +	hole_start = 0;
>>> +	hole_end = ULONG_MAX;
>>> +	end_addr = addr + PAGE_SIZE - 1;
>>> +
>>> +	/* Check if this vma already has tag storage descriptor
>>> +	 * allocated for it.
>>> +	 */
>>> +	spin_lock_irqsave(&mm->context.tag_lock, flags);
>>> +	if (mm->context.tag_store) {
>>> +		tag_desc = mm->context.tag_store;
>>> +
>>> +		/* Look for a matching entry for this address. While doing
>>> +		 * that, look for the first open slot as well and find
>>> +		 * the hole in already allocated range where this request
>>> +		 * will fit in.
>>> +		 */
>>> +		for (i = 0; i < max_desc; i++) {
>>> +			if (tag_desc->tag_users == 0) {
>>> +				if (open_desc == NULL)
>>> +					open_desc = tag_desc;
>>> +			} else {
>>> +				if ((addr >= tag_desc->start) &&
>>> +				    (tag_desc->end >= (addr + PAGE_SIZE - 1))) {
>>> +					tag_desc->tag_users++;
>>> +					goto out;
>>> +				}
>>> +			}
>>> +			if ((tag_desc->start > end_addr) &&
>>> +			    (tag_desc->start < hole_end))
>>> +				hole_end = tag_desc->start;
>>> +			if ((tag_desc->end < addr) &&
>>> +			    (tag_desc->end > hole_start))
>>> +				hole_start = tag_desc->end;
>>> +			tag_desc++;
>>> +		}
>>> +
>>> +	} else {
>>> +		size = sizeof(tag_storage_desc_t)*max_desc;
>>> +		mm->context.tag_store = kzalloc(size, GFP_NOIO|__GFP_NOWARN);
>> The spin_lock_irqsave() above means that all but level 15 interrupts
>> will be disabled when kzalloc() is called.  If kzalloc() can sleep
>> there's a risk of deadlock.
> 
> I could call kzalloc() with GFP_NOWAIT instead of GFP_NOIO. Would
> that address the risk of deadlock?

I think so.  It may also mean that allocation failures are likely to be
seen since available memory is low enough to cause swapping in the first
place.
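
A minimal sketch of the non-sleeping allocation being discussed
(illustrative, not the patch itself):

    /* Interrupts are disabled by spin_lock_irqsave(), so the
     * allocation must not sleep. GFP_NOWAIT fails fast instead of
     * blocking, trading a possible allocation failure for deadlock
     * safety.
     */
    spin_lock_irqsave(&mm->context.tag_lock, flags);
    ...
    mm->context.tag_store = kzalloc(size, GFP_NOWAIT | __GFP_NOWARN);
    if (mm->context.tag_store == NULL) {
            tag_desc = NULL;
            goto out;
    }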


> 
>>> +		if (mm->context.tag_store == NULL) {
>>> +			tag_desc = NULL;
>>> +			goto out;
>>> +		}
>>> +		tag_desc = mm->context.tag_store;
>>> +		for (i = 0; i < max_desc; i++, tag_desc++)
>>> +			tag_desc->tag_users = 0;
>>> +		open_desc = mm->context.tag_store;
>>> +		i = 0;
>>> +	}
>>> +
>>> +	/* Check if we ran out of tag storage descriptors */
>>> +	if (open_desc == NULL) {
>>> +		tag_desc = NULL;
>>> +		goto out;
>>> +	}
>>> +
>>> +	/* Mark this tag descriptor slot in use and then initialize it */
>>> +	tag_desc = open_desc;
>>> +	tag_desc->tag_users = 1;
>>> +
>>> +	/* Tag storage has not been allocated for this vma and space
>>> +	 * is available in tag storage descriptor. Since this page is
>>> +	 * being swapped out, there is high probability subsequent pages
>>> +	 * in the VMA will be swapped out as well. Allocate pages to
>>> +	 * store tags for as many pages in this vma as possible but not
>>> +	 * more than TAG_STORAGE_PAGES. Each byte in tag space holds
>>> +	 * two ADI tags since each ADI tag is 4 bits. Each ADI tag
>>> +	 * covers adi_blksize() worth of addresses. Check if the hole is
>>> +	 * big enough to accommodate full address range for using
>>> +	 * TAG_STORAGE_PAGES number of tag pages.
>>> +	 */
>>> +	size = TAG_STORAGE_PAGES * PAGE_SIZE;
>>> +	end_addr = addr + (size*2*adi_blksize()) - 1;
>> Since size > PAGE_SIZE, end_addr could theoretically overflow >
>>> +	if (hole_end < end_addr) {
>>> +		/* Available hole is too small on the upper end of
>>> +		 * address. Can we expand the range towards the lower
>>> +		 * address and maximize use of this slot?
>>> +		 */
>>> +		unsigned long tmp_addr;
>>> +
>>> +		end_addr = hole_end - 1;
>>> +		tmp_addr = end_addr - (size*2*adi_blksize()) + 1;
>> Similarly, tmp_addr may underflow.
> 
> I will add checks for these two.
> 
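
A sketch of the wrap checks being requested (illustrative only):

    /* Guard the hole-sizing arithmetic against wraparound. */
    unsigned long span = size * 2 * adi_blksize();

    if (addr > ULONG_MAX - (span - 1))      /* end_addr would overflow */
            end_addr = ULONG_MAX;
    else
            end_addr = addr + span - 1;
    ...
    if (end_addr < span - 1)                /* tmp_addr would underflow */
            tmp_addr = 0;
    else
            tmp_addr = end_addr - span + 1;
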
>>> +		if (tmp_addr < hole_start) {
>>> +			/* Available hole is restricted on lower address
>>> +			 * end as well
>>> +			 */
>>> +			tmp_addr = hole_start + 1;
>>> +		}
>>> +		addr = tmp_addr;
>>> +		size = (end_addr + 1 - addr)/(2*adi_blksize());
>>> +		size = (size + (PAGE_SIZE-adi_blksize()))/PAGE_SIZE;
>>> +		size = size * PAGE_SIZE;
>>> +	}
>>> +	tags = kzalloc(size, GFP_NOIO|__GFP_NOWARN);
>> Potential deadlock due to PIL=14?
> 
> Same as above - call kzalloc() with GFP_NOWAIT?
> 
>>> diff --git a/arch/sparc/kernel/etrap_64.S b/arch/sparc/kernel/etrap_64.S
>>> index 1276ca2567ba..7be33bf45cff 100644
>>> --- a/arch/sparc/kernel/etrap_64.S
>>> +++ b/arch/sparc/kernel/etrap_64.S
>>> @@ -132,7 +132,33 @@ etrap_save:	save	%g2, -STACK_BIAS, %sp
>>> 		stx	%g6, [%sp + PTREGS_OFF + PT_V9_G6]
>>> 		stx	%g7, [%sp + PTREGS_OFF + PT_V9_G7]
>>> 		or	%l7, %l0, %l7
>>> -		sethi	%hi(TSTATE_TSO | TSTATE_PEF), %l0
>>> +661:		sethi	%hi(TSTATE_TSO | TSTATE_PEF), %l0
>>> +		/*
>>> +		 * If userspace is using ADI, it could potentially pass
>>> +		 * a pointer with version tag embedded in it. To maintain
>>> +		 * the ADI security, we must enable PSTATE.mcde. Userspace
>>> +		 * would have already set TTE.mcd in an earlier call to
>>> +		 * kernel and set the version tag for the address being
>>> +		 * dereferenced. Setting PSTATE.mcde would ensure any
>>> +		 * access to userspace data through a system call honors
>>> +		 * ADI and does not allow a rogue app to bypass ADI by
>>> +		 * using system calls. Setting PSTATE.mcde only affects
>>> +		 * accesses to virtual addresses that have TTE.mcd set.
>>> +		 * Set PMCDPER to ensure any exceptions caused by ADI
>>> +		 * version tag mismatch are exposed before system call
>>> +		 * returns to userspace. Setting PMCDPER affects only
>>> +		 * writes to virtual addresses that have TTE.mcd set and
>>> +		 * have a version tag set as well.
>>> +		 */
>>> +		.section .sun_m7_1insn_patch, "ax"
>>> +		.word	661b
>>> +		sethi	%hi(TSTATE_TSO | TSTATE_PEF | TSTATE_MCDE), %l0
>>> +		.previous
>>> +661:		nop
>>> +		.section .sun_m7_1insn_patch, "ax"
>>> +		.word	661b
>>> +		.word 0xaf902001	/* wrpr %g0, 1, %pmcdper */
>> I commented on this on the last patch series revision.  PMCDPER could be
>> set once when each CPU is configured rather than every time the kernel
>> is entered.  Since it's never cleared, setting it repeatedly unnecessarily
>> impacts the performance of etrap.
> 
> Yes, you did and I thought I had addressed it in that thread:
> 
> ">> I considered that possibility. What made me uncomfortable with that is there is no way to prevent a driver/module or future code elsewhere in kernel from clearing PMCDPER with possibly good reason. If that were to happen, setting PMCDPER here ensures kernel will always see consistent behavior with system calls. It does come at a cost. Is that cost unacceptable to ensure consistent behavior?

Any driver/module has the ability to cause problems by writing any
privileged register of its choice.  It would be a bug to clear PMCDPER
and not restore it, and the consequence is that a mismatch detected in
privileged mode would result in a disrupting exception instead of a
precise exception.  Perhaps a warning could be logged if this unexpected
case occurs.

Anthony


>> Aren't you still at risk if the thread relinquishes the CPU while in
>> the kernel and is then rescheduled on a CPU where PMCDPER has
>> erroneously been left cleared?  You may need to save and restore
>> PMCDPER as well as MCDPER on context switch, but I don't know if
>> that will cover you completely.
>> "
> 
> I should add setting PMCDPER to 1 in finish_arch_post_lock_switch()
> to address the possibility you had mentioned.
> 
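
A sketch of that idea (illustrative; the instruction is emitted as a
raw .word, as elsewhere in the patch, since older assemblers do not
know the %pmcdper name):

    /* Hypothetical sketch: re-assert PMCDPER after every context
     * switch so a thread never runs with it erroneously cleared.
     */
    #define finish_arch_post_lock_switch finish_arch_post_lock_switch
    static inline void finish_arch_post_lock_switch(void)
    {
            if (adi_capable())
                    __asm__ __volatile__(
                            ".word 0xaf902001"  /* wrpr %g0, 1, %pmcdper */
                            : : : "memory");
    }
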
>> Also, there are places in rtrap where PSTATE is set before continuing
>> execution in the kernel.  These should also be patched to set TSTATE_MCDE.
> 
> I will find and fix those.
> 
>>> diff --git a/arch/sparc/kernel/setup_64.c b/arch/sparc/kernel/setup_64.c
>>> index 422b17880955..a9da205da394 100644
>>> --- a/arch/sparc/kernel/setup_64.c
>>> +++ b/arch/sparc/kernel/setup_64.c
>>> @@ -240,6 +240,12 @@ void sun4v_patch_1insn_range(struct sun4v_1insn_patch_entry *start,
>>> 	}
>>> }
>>> 
>>> +void sun_m7_patch_1insn_range(struct sun4v_1insn_patch_entry *start,
>>> +			     struct sun4v_1insn_patch_entry *end)
>>> +{
>>> +	sun4v_patch_1insn_range(start, end);
>>> +}
>>> +
>>> void sun4v_patch_2insn_range(struct sun4v_2insn_patch_entry *start,
>>> 			     struct sun4v_2insn_patch_entry *end)
>>> {
>>> @@ -289,9 +295,12 @@ static void __init sun4v_patch(void)
>>> 	sun4v_patch_2insn_range(&__sun4v_2insn_patch,
>>> 				&__sun4v_2insn_patch_end);
>>> 	if (sun4v_chip_type == SUN4V_CHIP_SPARC_M7 ||
>>> -	    sun4v_chip_type == SUN4V_CHIP_SPARC_SN)
>>> +	    sun4v_chip_type == SUN4V_CHIP_SPARC_SN) {
>>> +		sun_m7_patch_1insn_range(&__sun_m7_1insn_patch,
>>> +					 &__sun_m7_1insn_patch_end);
>>> 		sun_m7_patch_2insn_range(&__sun_m7_2insn_patch,
>>> 					 &__sun_m7_2insn_patch_end);
>> Why not call sun4v_patch_1insn_range() and sun4v_patch_2insn_range()
>> here instead of adding new functions that just call these functions?
> 
> Sounds reasonable, I can change that.
> 
> Thanks,
> Khalid
> 
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH v7 9/9] sparc64: Add support for ADI (Application Data Integrity)
  2017-08-09 21:26   ` Khalid Aziz
  (?)
@ 2017-09-04 16:25     ` Pavel Machek
  -1 siblings, 0 replies; 86+ messages in thread
From: Pavel Machek @ 2017-09-04 16:25 UTC (permalink / raw)
  To: Khalid Aziz
  Cc: davem, dave.hansen, corbet, bob.picco, steven.sistare,
	pasha.tatashin, mike.kravetz, mingo, nitin.m.gupta,
	kirill.shutemov, tom.hromatka, eric.saint.etienne, allen.pais,
	cmetcalf, akpm, geert, tklauser, atish.patra, vijay.ac.kumar,
	peterz, mhocko, jack, lstoakes, hughd, thomas.tai,
	paul.gortmaker, ross.zwisler, dave.jiang, willy, ying.huang,
	zhongjiang, minchan, vegard.nossum, imbrenda, aneesh.kumar,
	aarcange, linux-doc, linux-kernel, sparclinux, linux-mm,
	Khalid Aziz


Hi!

> ADI is a new feature supported on SPARC M7 and newer processors to allow
> hardware to catch rogue accesses to memory. ADI is supported for data
> fetches only and not instruction fetches. An app can enable ADI on its
> data pages, set version tags on them and use versioned addresses to
> access the data pages. Upper bits of the address contain the version
> tag. On M7 processors, upper four bits (bits 63-60) contain the version
> tag. If a rogue app attempts to access ADI enabled data pages, its
> access is blocked and processor generates an exception. Please see
> Documentation/sparc/adi.txt for further details.

I'm afraid I still don't understand what this is meant to prevent.

IOMMU ignores these, so this is not to prevent rogue DMA from doing
bad stuff.

Will gcc be able to compile code that uses these automatically? That
does not sound easy to me. Can libc automatically use this in malloc()
to prevent accessing freed data when buffers are overrun?

Is this for benefit of JITs?

Thanks,

									Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH v7 9/9] sparc64: Add support for ADI (Application Data Integrity)
  2017-09-04 16:25     ` Pavel Machek
@ 2017-09-05 21:44       ` David Miller
  -1 siblings, 0 replies; 86+ messages in thread
From: David Miller @ 2017-09-05 21:44 UTC (permalink / raw)
  To: pavel
  Cc: khalid.aziz, dave.hansen, corbet, bob.picco, steven.sistare,
	pasha.tatashin, mike.kravetz, mingo, nitin.m.gupta,
	kirill.shutemov, tom.hromatka, eric.saint.etienne, allen.pais,
	cmetcalf, akpm, geert, tklauser, atish.patra, vijay.ac.kumar,
	peterz, mhocko, jack, lstoakes, hughd, thomas.tai,
	paul.gortmaker, ross.zwisler, dave.jiang, willy, ying.huang,
	zhongjiang, minchan, vegard.nossum, imbrenda, aneesh.kumar,
	aarcange, linux-doc, linux-kernel, sparclinux, linux-mm, khalid

From: Pavel Machek <pavel@ucw.cz>
Date: Mon, 4 Sep 2017 18:25:30 +0200

> Will gcc be able to compile code that uses these automatically? That
> does not sound easy to me. Can libc automatically use this in malloc()
> to prevent accessing freed data when buffers are overrun?
> 
> Is this for the benefit of JITs?

Anything that can control mappings and the virtual address used to
access memory can use ADI.

malloc() is of course one such case.  It can map memory with ADI
enabled, and return buffer addresses to malloc() callers with the
proper virtual address bits set to satisfy the ADI key checks.

And by induction, anything using malloc() for its memory allocation
gets ADI protection as well.
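
As a concrete illustration, here is a minimal sketch of such an
allocator. It assumes the PROT_ADI mprotect() bit and the
ASI_MCD_PRIMARY (0x90) ASI from this series, plus the 64-byte tag
granularity and VA bits 63-60 described in the cover letter; the
adi_alloc() helper and the literal values are illustrative, not an
interface defined by these patches:

	#include <sys/mman.h>
	#include <stdint.h>
	#include <stddef.h>

	#define PROT_ADI	0x10	/* assumed value from this series */
	#define ADI_BLKSZ	64	/* one version tag per cacheline  */

	/* len is assumed to be a multiple of ADI_BLKSZ for brevity. */
	static void *adi_alloc(size_t len, uint64_t tag)
	{
		char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (buf == MAP_FAILED)
			return NULL;

		/* Enable ADI checking on the mapping (sets TTE.mcd). */
		if (mprotect(buf, len, PROT_READ | PROT_WRITE | PROT_ADI))
			return NULL;

		/* Store the version tag on every 64-byte block via
		 * ASI_MCD_PRIMARY, as in the adi.txt example. */
		for (size_t off = 0; off < len; off += ADI_BLKSZ)
			__asm__ __volatile__("stxa %0, [%1]0x90"
					     : : "r" (tag), "r" (buf + off)
					     : "memory");

		/* Hand back a versioned address: tag in VA bits 63-60. */
		return (void *)((uintptr_t)buf | (tag << 60));
	}

The matching free() path would mask the tag bits back off before
calling munmap().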

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH v7 9/9] sparc64: Add support for ADI (Application Data Integrity)
  2017-09-04 16:25     ` Pavel Machek
@ 2017-09-06 14:10       ` Khalid Aziz
  -1 siblings, 0 replies; 86+ messages in thread
From: Khalid Aziz @ 2017-09-06 14:10 UTC (permalink / raw)
  To: Pavel Machek
  Cc: davem, dave.hansen, corbet, bob.picco, steven.sistare,
	pasha.tatashin, mike.kravetz, mingo, nitin.m.gupta,
	kirill.shutemov, tom.hromatka, eric.saint.etienne, allen.pais,
	cmetcalf, akpm, geert, tklauser, atish.patra, vijay.ac.kumar,
	peterz, mhocko, jack, lstoakes, hughd, thomas.tai,
	paul.gortmaker, ross.zwisler, dave.jiang, willy, ying.huang,
	zhongjiang, minchan, vegard.nossum, imbrenda, aneesh.kumar,
	aarcange, linux-doc, linux-kernel, sparclinux, linux-mm,
	Khalid Aziz

On 09/04/2017 10:25 AM, Pavel Machek wrote:
> Hi!
> 
>> ADI is a new feature supported on SPARC M7 and newer processors to allow
>> hardware to catch rogue accesses to memory. ADI is supported for data
>> fetches only and not instruction fetches. An app can enable ADI on its
>> data pages, set version tags on them and use versioned addresses to
>> access the data pages. Upper bits of the address contain the version
>> tag. On M7 processors, upper four bits (bits 63-60) contain the version
>> tag. If a rogue app attempts to access ADI enabled data pages, its
>> access is blocked and processor generates an exception. Please see
>> Documentation/sparc/adi.txt for further details.
> 
> I'm afraid I still don't understand what this is meant to prevent.
> 
> IOMMU ignores these, so this is not to prevent rogue DMA from doing
> bad stuff.
> 
> Will gcc be able to compile code that uses these automatically? That
> does not sound easy to me. Can libc automatically use this in malloc()
> to prevent accessing freed data when buffers are overrun?
> 
> Is this for benefit of JITs?
> 

David explained it well. Yes, preventing buffer overflows is one of the
uses of ADI. Protecting critical data from wild writes caused by
programming errors is another. ADI can also be used as a debugging aid
during development.

Thanks,
Khalid

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH v7 9/9] sparc64: Add support for ADI (Application Data Integrity)
  2017-09-05 21:44       ` David Miller
@ 2017-09-06 22:32         ` Pavel Machek
  -1 siblings, 0 replies; 86+ messages in thread
From: Pavel Machek @ 2017-09-06 22:32 UTC (permalink / raw)
  To: David Miller
  Cc: khalid.aziz, dave.hansen, corbet, bob.picco, steven.sistare,
	pasha.tatashin, mike.kravetz, mingo, nitin.m.gupta,
	kirill.shutemov, tom.hromatka, eric.saint.etienne, allen.pais,
	cmetcalf, akpm, geert, tklauser, atish.patra, vijay.ac.kumar,
	peterz, mhocko, jack, lstoakes, hughd, thomas.tai,
	paul.gortmaker, ross.zwisler, dave.jiang, willy, ying.huang,
	zhongjiang, minchan, vegard.nossum, imbrenda, aneesh.kumar,
	aarcange, linux-doc, linux-kernel, sparclinux, linux-mm, khalid

On Tue 2017-09-05 14:44:56, David Miller wrote:
> From: Pavel Machek <pavel@ucw.cz>
> Date: Mon, 4 Sep 2017 18:25:30 +0200
> 
> > Will gcc be able to compile code that uses these automatically? That
> > does not sound easy to me. Can libc automatically use this in malloc()
> > to prevent accessing freed data when buffers are overrun?
> > 
> > Is this for the benefit of JITs?
> 
> Anything that can control mappings and the virtual address used to
> access memory can use ADI.
> 
> malloc() is of course one such case.  It can map memory with ADI
> enabled, and return buffer addresses to malloc() callers with the
> proper virtual address bits set to satisfy the ADI key checks.
> 
> And by induction, anything using malloc() for its memory allocation
> gets ADI protection as well.

I see; that's actually quite a nice trick.

I guess it does not protect against stack-based overflows, but should
help against heap-based overflows, so it improves security a bit, too.

Nice, thanks for explanation.
									Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH v7 9/9] sparc64: Add support for ADI (Application Data Integrity)
  2017-09-06 22:32         ` Pavel Machek
@ 2017-09-08 12:18           ` Steven Sistare
  -1 siblings, 0 replies; 86+ messages in thread
From: Steven Sistare @ 2017-09-08 12:18 UTC (permalink / raw)
  To: Pavel Machek, David Miller
  Cc: khalid.aziz, dave.hansen, corbet, bob.picco, pasha.tatashin,
	mike.kravetz, mingo, nitin.m.gupta, kirill.shutemov,
	tom.hromatka, eric.saint.etienne, allen.pais, cmetcalf, akpm,
	geert, tklauser, atish.patra, vijay.ac.kumar, peterz, mhocko,
	jack, lstoakes, hughd, thomas.tai, paul.gortmaker, ross.zwisler,
	dave.jiang, willy, ying.huang, zhongjiang, minchan,
	vegard.nossum, imbrenda, aneesh.kumar, aarcange, linux-doc,
	linux-kernel, sparclinux, linux-mm, khalid

On 9/6/2017 6:32 PM, Pavel Machek wrote:
> On Tue 2017-09-05 14:44:56, David Miller wrote:
>> From: Pavel Machek <pavel@ucw.cz>
>> Date: Mon, 4 Sep 2017 18:25:30 +0200
>>
>>> Will gcc be able to compile code that uses these automatically? That
>>> does not sound easy to me. Can libc automatically use this in malloc()
>>> to prevent accessing freed data when buffers are overrun?
>>>
>>> Is this for the benefit of JITs?
>>
>> Anything that can control mappings and the virtual address used to
>> access memory can use ADI.
>>
>> malloc() is of course one such case.  It can map memory with ADI
>> enabled, and return buffer addresses to malloc() callers with the
>> proper virtual address bits set to satisfy the ADI key checks.
>>
>> And by induction, anything using malloc() for its memory allocation
>> gets ADI protection as well.
> 
> I see; that's actually quite a nice trick.
> 
> I guess it does not protect against stack-based overflows, but should
> help against heap-based overflows, so it improves security a bit, too.
> 
> Nice, thanks for explanation.

ADI can also be used to protect the stack.  Modify the ADI versions for
a 64B-aligned portion of the register save area in the kernel spill
and fill handlers, and accidental or malicious access to that area
from userland will trap.  Other data on the stack can still be
corrupted, but one cannot linearly overflow into the next stack frame
without tripping over the ADI canary.  There are a few other details to
handle, such as setjmp/longjmp and JITs that modify the stack, but that
is the gist.  This is not part of the current patch, but it has been
implemented on Solaris.

ADI could protect other data on the stack, but that requires 
compiler code generation changes.
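
To make the mismatch concrete in userspace terms, here is a minimal
sketch reusing the hypothetical adi_alloc()/ADI_BLKSZ definitions from
the malloc() sketch earlier in this thread; the kernel spill/fill case
relies on the same tag-mismatch trap, just driven from the trap
handlers. As in the Documentation/sparc/adi.txt example, the retagging
store is assumed to go through the plain, untagged address:

	#include <stdint.h>

	void canary_demo(void)
	{
		/* Two adjacent 64-byte blocks, both version-tagged 5. */
		char *buf = adi_alloc(2 * ADI_BLKSZ, 5);
		/* Strip VA bits 63-60 to recover the untagged address. */
		char *raw = (char *)((uintptr_t)buf & ~(0xfUL << 60));
		uint64_t canary = 7;

		/* Retag the second block as a canary with version 7. */
		__asm__ __volatile__("stxa %0, [%1]0x90"
				     : : "r" (canary), "r" (raw + ADI_BLKSZ)
				     : "memory");

		buf[0] = 'x';		/* VA tag 5, memory tag 5: OK      */
		buf[ADI_BLKSZ] = 'x';	/* VA tag 5, memory tag 7: SIGSEGV */
	}

A linear overflow from the first block cannot cross the canary block
without faulting, which is the property the spill/fill trick buys for
stack frames.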

- Steve

^ permalink raw reply	[flat|nested] 86+ messages in thread

end of thread, other threads:[~2017-09-08 12:21 UTC | newest]

Thread overview: 86+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-08-09 21:25 [PATCH v7 0/9] Application Data Integrity feature introduced by SPARC M7 Khalid Aziz
2017-08-09 21:25 ` [PATCH v7 1/9] signals, sparc: Add signal codes for ADI violations Khalid Aziz
2017-08-09 21:25 ` [PATCH v7 2/9] mm, swap: Add infrastructure for saving page metadata on swap Khalid Aziz
2017-08-16  4:53   ` David Miller
2017-08-16 14:34     ` Khalid Aziz
2017-08-09 21:25 ` [PATCH v7 3/9] sparc64: Add support for ADI register fields, ASIs and traps Khalid Aziz
2017-08-09 21:25 ` [PATCH v7 4/9] sparc64: Add HV fault type handlers for ADI related faults Khalid Aziz
2017-08-09 21:25 ` [PATCH v7 5/9] sparc64: Add handler for "Memory Corruption Detected" trap Khalid Aziz
2017-08-09 21:25 ` [PATCH v7 6/9] sparc64: Add auxiliary vectors to report platform ADI properties Khalid Aziz
2017-08-09 21:26 ` [PATCH v7 7/9] mm: Add address parameter to arch_validate_prot() Khalid Aziz
2017-08-10 13:20   ` Michael Ellerman
2017-08-10 14:41     ` Khalid Aziz
2017-08-15  5:02       ` Michael Ellerman
2017-08-15 14:32         ` Khalid Aziz
2017-08-09 21:26 ` [PATCH v7 8/9] mm: Clear arch specific VM flags on protection change Khalid Aziz
2017-08-09 21:26 ` [PATCH v7 9/9] sparc64: Add support for ADI (Application Data Integrity) Khalid Aziz
2017-08-16  4:58   ` David Miller
2017-08-16 14:44     ` Khalid Aziz
2017-08-25 22:31   ` Anthony Yznaga
2017-08-30 22:27     ` Khalid Aziz
2017-08-30 22:38       ` David Miller
2017-08-30 23:23         ` Khalid Aziz
2017-08-31  0:09           ` David Miller
2017-08-31 16:38             ` Khalid Aziz
2017-09-01  5:38       ` Anthony Yznaga
2017-09-04 16:25   ` Pavel Machek
2017-09-05 21:44     ` David Miller
2017-09-06 22:32       ` Pavel Machek
2017-09-08 12:18         ` Steven Sistare
2017-09-06 14:10     ` Khalid Aziz

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.