* [RFC PATCH v1 00/28] x86: Secure Encrypted Virtualization (AMD)
@ 2016-08-22 23:23 ` Brijesh Singh
  0 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:23 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail, linux-kernel, mcgrof, linux-crypto, pbonzini, akpm,
	davem

This RFC series provides support for AMD's new Secure Encrypted
Virtualization (SEV) feature. This RFC builds upon the Secure Memory
Encryption (SME) RFC.

SEV is an extension to the AMD-V architecture which supports running
multiple VMs under the control of a hypervisor. When enabled, SEV
hardware tags all code and data with the VM's ASID, which indicates which
VM the data originated from or is intended for. This tag is kept with
the data at all times while inside the SoC, and prevents that data from
being used by anyone other than its owner. While the tag protects VM
data inside the SoC, 128-bit AES encryption protects data outside
the SoC. When data leaves or enters the SoC, the hardware encrypts or
decrypts it, respectively, with a key selected by the associated tag.

SEV guest VMs have the concept of private and shared memory. Private memory
is encrypted with a guest-specific key, while shared memory may be encrypted
with the hypervisor's key. Certain types of memory (namely instruction pages
and guest page tables) are always treated as private memory by the hardware.
For data memory, SEV guest VMs can choose which pages they would like to
be private. The choice is made through the standard CPU page tables using
the C-bit, and is fully controlled by the guest. For security reasons,
all DMA operations inside the guest must be performed on shared pages
(C-bit clear). Note that since the C-bit is only controllable by the guest OS
when it is operating in 64-bit or 32-bit PAE mode, in all other modes the
SEV hardware forces the C-bit to 1.
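As a rough illustration, the guest-side choice amounts to setting or
clearing one bit in a page table entry. The sketch below is not code from
this patch set: the helper names are made up, and the C-bit position is
a runtime property the guest would discover via CPUID Fn8000_001F
(EBX[5:0] per AMD's documentation) rather than hard-code.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helpers. The C-bit position varies by CPU generation and
 * is reported in CPUID Fn8000_001F_EBX[5:0]; 47 below is only an example. */
static uint64_t sev_mark_private(uint64_t pte, unsigned int c_bit)
{
	return pte | (1ULL << c_bit);   /* C-bit set: page is encrypted */
}

static uint64_t sev_mark_shared(uint64_t pte, unsigned int c_bit)
{
	return pte & ~(1ULL << c_bit);  /* C-bit clear: page usable for DMA */
}
```

A DMA buffer, for example, would have its PTEs passed through
sev_mark_shared() before being handed to a device.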

SEV is designed to protect guest VMs from a benign but vulnerable
(i.e. not fully malicious) hypervisor. In particular, it reduces the attack
surface of guest VMs and can prevent certain types of VM-escape bugs
(e.g. hypervisor read-anywhere) from being used to steal guest data.

The RFC series also includes a crypto driver (psp.ko) which communicates
with the SEV firmware running within the AMD Secure Processor and provides
a secure key management interface. The hypervisor uses this interface to
enable SEV for a guest and to perform common hypervisor activities
such as launching, running, snapshotting, migrating and debugging a
guest. A new ioctl (KVM_SEV_ISSUE_CMD) is introduced which enables
QEMU to send commands to the SEV firmware during the guest life cycle.
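From userspace, issuing a command through the new ioctl might look
roughly like the following. This is only a sketch: the struct layout,
request number, and field names are illustrative guesses, not the
definitions added by these patches (those live in the patched
include/uapi/linux/kvm.h).

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <sys/ioctl.h>

/* Hypothetical layout; the real one is defined by the patched uapi header. */
struct sev_issue_cmd {
	uint32_t cmd;    /* which SEV firmware command to run */
	uint64_t data;   /* userspace pointer to a command-specific struct */
	uint32_t error;  /* firmware error code, filled in on return */
};

/* Hypothetical request number standing in for KVM_SEV_ISSUE_CMD
 * (i.e. some _IOWR(KVMIO, nr, struct ...) value). */
#define KVM_SEV_ISSUE_CMD_DEMO 0xc018aeffUL

/* Returns 0 on success, -errno on failure. */
static int sev_issue_cmd(int kvm_fd, struct sev_issue_cmd *arg)
{
	if (ioctl(kvm_fd, KVM_SEV_ISSUE_CMD_DEMO, arg) < 0)
		return -errno;
	return 0;
}
```

QEMU would call such a helper once per life-cycle step (launch start,
launch update, launch finish, and so on).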

The RFC series also includes the patches required in the guest OS to
enable the SEV feature. A guest OS can check for SEV support using the
KVM_FEATURE CPUID instruction.

The following links provide additional details:

AMD Memory Encryption whitepaper:
    http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2013/12/AMD_Memory_Encryption_Whitepaper_v7-Public.pdf

AMD64 Architecture Programmer's Manual:
    http://support.amd.com/TechDocs/24593.pdf
    SME is section 7.10
    SEV is section 15.34

Secure Encrypted Virtualization Key Management:
    http://support.amd.com/TechDocs/55766_SEV-KM API_Spec.pdf

---

TODO:
- send qemu/seabios RFCs to the respective mailing lists
- integrate the psp driver with the CCP driver (they share PCI IDs)
- add SEV guest migration command support
- add SEV snapshotting command support
- determine how to ioremap physical memory with memory encryption enabled
  (e.g. ACPI tables)
- determine how to share guest memory with the hypervisor to support the
  pvclock driver

Brijesh Singh (11):
      crypto: add AMD Platform Security Processor driver
      KVM: SVM: prepare to reserve asid for SEV guest
      KVM: SVM: prepare for SEV guest management API support
      KVM: introduce KVM_SEV_ISSUE_CMD ioctl
      KVM: SVM: add SEV launch start command
      KVM: SVM: add SEV launch update command
      KVM: SVM: add SEV_LAUNCH_FINISH command
      KVM: SVM: add KVM_SEV_GUEST_STATUS command
      KVM: SVM: add KVM_SEV_DEBUG_DECRYPT command
      KVM: SVM: add KVM_SEV_DEBUG_ENCRYPT command
      KVM: SVM: add command to query SEV API version

Tom Lendacky (17):
      kvm: svm: Add support for additional SVM NPF error codes
      kvm: svm: Add kvm_fast_pio_in support
      kvm: svm: Use the hardware provided GPA instead of page walk
      x86: Secure Encrypted Virtualization (SEV) support
      KVM: SVM: prepare for new bit definition in nested_ctl
      KVM: SVM: Add SEV feature definitions to KVM
      x86: Do not encrypt memory areas if SEV is enabled
      Access BOOT related data encrypted with SEV active
      x86/efi: Access EFI data as encrypted when SEV is active
      x86: Change early_ioremap to early_memremap for BOOT data
      x86: Don't decrypt trampoline area if SEV is active
      x86: DMA support for SEV memory encryption
      iommu/amd: AMD IOMMU support for SEV
      x86: Don't set the SME MSR bit when SEV is active
      x86: Unroll string I/O when SEV is active
      x86: Add support to determine if running with SEV enabled
      KVM: SVM: Enable SEV by setting the SEV_ENABLE cpu feature


 arch/x86/boot/compressed/Makefile      |    2 
 arch/x86/boot/compressed/head_64.S     |   19 +
 arch/x86/boot/compressed/mem_encrypt.S |  123 ++++
 arch/x86/include/asm/io.h              |   26 +
 arch/x86/include/asm/kvm_emulate.h     |    3 
 arch/x86/include/asm/kvm_host.h        |   27 +
 arch/x86/include/asm/mem_encrypt.h     |    3 
 arch/x86/include/asm/svm.h             |    3 
 arch/x86/include/uapi/asm/hyperv.h     |    4 
 arch/x86/include/uapi/asm/kvm_para.h   |    4 
 arch/x86/kernel/acpi/boot.c            |    4 
 arch/x86/kernel/head64.c               |    4 
 arch/x86/kernel/mem_encrypt.S          |   44 ++
 arch/x86/kernel/mpparse.c              |   10 
 arch/x86/kernel/setup.c                |    7 
 arch/x86/kernel/x8664_ksyms_64.c       |    1 
 arch/x86/kvm/cpuid.c                   |    4 
 arch/x86/kvm/mmu.c                     |   20 +
 arch/x86/kvm/svm.c                     |  906 ++++++++++++++++++++++++++++++++
 arch/x86/kvm/x86.c                     |   73 +++
 arch/x86/mm/ioremap.c                  |    7 
 arch/x86/mm/mem_encrypt.c              |   50 ++
 arch/x86/platform/efi/efi_64.c         |   14 
 arch/x86/realmode/init.c               |   11 
 drivers/crypto/Kconfig                 |   11 
 drivers/crypto/Makefile                |    1 
 drivers/crypto/psp/Kconfig             |    8 
 drivers/crypto/psp/Makefile            |    3 
 drivers/crypto/psp/psp-dev.c           |  220 ++++++++
 drivers/crypto/psp/psp-dev.h           |   95 +++
 drivers/crypto/psp/psp-ops.c           |  454 ++++++++++++++++
 drivers/crypto/psp/psp-pci.c           |  376 +++++++++++++
 drivers/sfi/sfi_core.c                 |    6 
 include/linux/ccp-psp.h                |  833 +++++++++++++++++++++++++++++
 include/uapi/linux/Kbuild              |    1 
 include/uapi/linux/ccp-psp.h           |  182 ++++++
 include/uapi/linux/kvm.h               |  125 ++++
 37 files changed, 3643 insertions(+), 41 deletions(-)
 create mode 100644 arch/x86/boot/compressed/mem_encrypt.S
 create mode 100644 drivers/crypto/psp/Kconfig
 create mode 100644 drivers/crypto/psp/Makefile
 create mode 100644 drivers/crypto/psp/psp-dev.c
 create mode 100644 drivers/crypto/psp/psp-dev.h
 create mode 100644 drivers/crypto/psp/psp-ops.c
 create mode 100644 drivers/crypto/psp/psp-pci.c
 create mode 100644 include/linux/ccp-psp.h
 create mode 100644 include/uapi/linux/ccp-psp.h

-- 

Brijesh Singh

^ permalink raw reply	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 00/28] x86: Secure Encrypted Virtualization (AMD)
@ 2016-08-22 23:23 ` Brijesh Singh
  0 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:23 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounin

This RFC series provides support for AMD's new Secure Encrypted 
Virtualization (SEV) feature. This RFC is build upon Secure Memory 
Encryption (SME) RFC.

SEV is an extension to the AMD-V architecture which supports running 
multiple VMs under the control of a hypervisor. When enabled, SEV 
hardware tags all code and data with its VM ASID which indicates which 
VM the data originated from or is intended for. This tag is kept with 
the data at all times when inside the SOC, and prevents that data from 
being used by anyone other than the owner. While the tag protects VM 
data inside the SOC, AES with 128 bit encryption protects data outside 
the SOC. When data leaves or enters the SOC, it is encrypted/decrypted 
respectively by hardware with a key based on the associated tag.

SEV guest VMs have the concept of private and shared memory.  Private memory
is encrypted with the  guest-specific key, while shared memory may be encrypted
with hypervisor key.  Certain types of memory (namely instruction pages and
guest page tables) are always treated as private memory by the hardware.
For data memory, SEV guest VMs can choose which pages they would like to
be private. The choice is done using the standard CPU page tables using
the C-bit, and is fully controlled by the guest. Due to security reasons
all the DMA operations inside the  guest must be performed on shared pages
(C-bit clear).  Note that since C-bit is only controllable by the guest OS
when it is operating in 64-bit or 32-bit PAE mode, in all other modes the
SEV hardware forces the C-bit to a 1.

SEV is designed to protect guest VMs from a benign but vulnerable
(i.e. not fully malicious) hypervisor. In particular, it reduces the attack
surface of guest VMs and can prevent certain types of VM-escape bugs
(e.g. hypervisor read-anywhere) from being used to steal guest data.

The RFC series also includes a crypto driver (psp.ko) which communicates
with SEV firmware that runs within the AMD secure processor provides a
secure key management interfaces. The hypervisor uses this interface to 
enable SEV for secure guest and perform common hypervisor activities
such as launching, running, snapshotting , migrating and debugging a 
guest. A new ioctl (KVM_SEV_ISSUE_CMD) is introduced which will enable
Qemu to send commands to the SEV firmware during guest life cycle.

The RFC series also includes patches required in guest OS to enable SEV 
feature. A guest OS can check SEV support by calling KVM_FEATURE cpuid 
instruction.

The following links provide additional details:

AMD Memory Encryption whitepaper:
 
http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2013/12/AMD_Memory_Encryption_Whitepaper_v7-Public.pdf

AMD64 Architecture Programmer's Manual:
    http://support.amd.com/TechDocs/24593.pdf
    SME is section 7.10
    SEV is section 15.34

Secure Encrypted Virutualization Key Management:
http://support.amd.com/TechDocs/55766_SEV-KM API_Spec.pdf

---

TODO:
- send qemu/seabios RFC's on respective mailing list
- integrate the psp driver with CCP driver (they share the PCI id's)
- add SEV guest migration command support
- add SEV snapshotting command support
- determine how to do ioremap of physical memory with mem encryption enabled
  (e.g acpi tables)
- determine how to share the guest memory with hypervisor for to support
  pvclock driver

Brijesh Singh (11):
      crypto: add AMD Platform Security Processor driver
      KVM: SVM: prepare to reserve asid for SEV guest
      KVM: SVM: prepare for SEV guest management API support
      KVM: introduce KVM_SEV_ISSUE_CMD ioctl
      KVM: SVM: add SEV launch start command
      KVM: SVM: add SEV launch update command
      KVM: SVM: add SEV_LAUNCH_FINISH command
      KVM: SVM: add KVM_SEV_GUEST_STATUS command
      KVM: SVM: add KVM_SEV_DEBUG_DECRYPT command
      KVM: SVM: add KVM_SEV_DEBUG_ENCRYPT command
      KVM: SVM: add command to query SEV API version

Tom Lendacky (17):
      kvm: svm: Add support for additional SVM NPF error codes
      kvm: svm: Add kvm_fast_pio_in support
      kvm: svm: Use the hardware provided GPA instead of page walk
      x86: Secure Encrypted Virtualization (SEV) support
      KVM: SVM: prepare for new bit definition in nested_ctl
      KVM: SVM: Add SEV feature definitions to KVM
      x86: Do not encrypt memory areas if SEV is enabled
      Access BOOT related data encrypted with SEV active
      x86/efi: Access EFI data as encrypted when SEV is active
      x86: Change early_ioremap to early_memremap for BOOT data
      x86: Don't decrypt trampoline area if SEV is active
      x86: DMA support for SEV memory encryption
      iommu/amd: AMD IOMMU support for SEV
      x86: Don't set the SME MSR bit when SEV is active
      x86: Unroll string I/O when SEV is active
      x86: Add support to determine if running with SEV enabled
      KVM: SVM: Enable SEV by setting the SEV_ENABLE cpu feature


 arch/x86/boot/compressed/Makefile      |    2 
 arch/x86/boot/compressed/head_64.S     |   19 +
 arch/x86/boot/compressed/mem_encrypt.S |  123 ++++
 arch/x86/include/asm/io.h              |   26 +
 arch/x86/include/asm/kvm_emulate.h     |    3 
 arch/x86/include/asm/kvm_host.h        |   27 +
 arch/x86/include/asm/mem_encrypt.h     |    3 
 arch/x86/include/asm/svm.h             |    3 
 arch/x86/include/uapi/asm/hyperv.h     |    4 
 arch/x86/include/uapi/asm/kvm_para.h   |    4 
 arch/x86/kernel/acpi/boot.c            |    4 
 arch/x86/kernel/head64.c               |    4 
 arch/x86/kernel/mem_encrypt.S          |   44 ++
 arch/x86/kernel/mpparse.c              |   10 
 arch/x86/kernel/setup.c                |    7 
 arch/x86/kernel/x8664_ksyms_64.c       |    1 
 arch/x86/kvm/cpuid.c                   |    4 
 arch/x86/kvm/mmu.c                     |   20 +
 arch/x86/kvm/svm.c                     |  906 ++++++++++++++++++++++++++++++++
 arch/x86/kvm/x86.c                     |   73 +++
 arch/x86/mm/ioremap.c                  |    7 
 arch/x86/mm/mem_encrypt.c              |   50 ++
 arch/x86/platform/efi/efi_64.c         |   14 
 arch/x86/realmode/init.c               |   11 
 drivers/crypto/Kconfig                 |   11 
 drivers/crypto/Makefile                |    1 
 drivers/crypto/psp/Kconfig             |    8 
 drivers/crypto/psp/Makefile            |    3 
 drivers/crypto/psp/psp-dev.c           |  220 ++++++++
 drivers/crypto/psp/psp-dev.h           |   95 +++
 drivers/crypto/psp/psp-ops.c           |  454 ++++++++++++++++
 drivers/crypto/psp/psp-pci.c           |  376 +++++++++++++
 drivers/sfi/sfi_core.c                 |    6 
 include/linux/ccp-psp.h                |  833 +++++++++++++++++++++++++++++
 include/uapi/linux/Kbuild              |    1 
 include/uapi/linux/ccp-psp.h           |  182 ++++++
 include/uapi/linux/kvm.h               |  125 ++++
 37 files changed, 3643 insertions(+), 41 deletions(-)
 create mode 100644 arch/x86/boot/compressed/mem_encrypt.S
 create mode 100644 drivers/crypto/psp/Kconfig
 create mode 100644 drivers/crypto/psp/Makefile
 create mode 100644 drivers/crypto/psp/psp-dev.c
 create mode 100644 drivers/crypto/psp/psp-dev.h
 create mode 100644 drivers/crypto/psp/psp-ops.c
 create mode 100644 drivers/crypto/psp/psp-pci.c
 create mode 100644 include/linux/ccp-psp.h
 create mode 100644 include/uapi/linux/ccp-psp.h

-- 

Brijesh Singh

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 00/28] x86: Secure Encrypted Virtualization (AMD)
@ 2016-08-22 23:23 ` Brijesh Singh
  0 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:23 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounin

This RFC series provides support for AMD's new Secure Encrypted 
Virtualization (SEV) feature. This RFC is build upon Secure Memory 
Encryption (SME) RFC.

SEV is an extension to the AMD-V architecture which supports running 
multiple VMs under the control of a hypervisor. When enabled, SEV 
hardware tags all code and data with its VM ASID which indicates which 
VM the data originated from or is intended for. This tag is kept with 
the data at all times when inside the SOC, and prevents that data from 
being used by anyone other than the owner. While the tag protects VM 
data inside the SOC, AES with 128 bit encryption protects data outside 
the SOC. When data leaves or enters the SOC, it is encrypted/decrypted 
respectively by hardware with a key based on the associated tag.

SEV guest VMs have the concept of private and shared memory.  Private memory
is encrypted with the  guest-specific key, while shared memory may be encrypted
with hypervisor key.  Certain types of memory (namely instruction pages and
guest page tables) are always treated as private memory by the hardware.
For data memory, SEV guest VMs can choose which pages they would like to
be private. The choice is done using the standard CPU page tables using
the C-bit, and is fully controlled by the guest. Due to security reasons
all the DMA operations inside the  guest must be performed on shared pages
(C-bit clear).  Note that since C-bit is only controllable by the guest OS
when it is operating in 64-bit or 32-bit PAE mode, in all other modes the
SEV hardware forces the C-bit to a 1.

SEV is designed to protect guest VMs from a benign but vulnerable
(i.e. not fully malicious) hypervisor. In particular, it reduces the attack
surface of guest VMs and can prevent certain types of VM-escape bugs
(e.g. hypervisor read-anywhere) from being used to steal guest data.

The RFC series also includes a crypto driver (psp.ko) which communicates
with SEV firmware that runs within the AMD secure processor provides a
secure key management interfaces. The hypervisor uses this interface to 
enable SEV for secure guest and perform common hypervisor activities
such as launching, running, snapshotting , migrating and debugging a 
guest. A new ioctl (KVM_SEV_ISSUE_CMD) is introduced which will enable
Qemu to send commands to the SEV firmware during guest life cycle.

The RFC series also includes patches required in guest OS to enable SEV 
feature. A guest OS can check SEV support by calling KVM_FEATURE cpuid 
instruction.

The following links provide additional details:

AMD Memory Encryption whitepaper:
 
http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2013/12/AMD_Memory_Encryption_Whitepaper_v7-Public.pdf

AMD64 Architecture Programmer's Manual:
    http://support.amd.com/TechDocs/24593.pdf
    SME is section 7.10
    SEV is section 15.34

Secure Encrypted Virutualization Key Management:
http://support.amd.com/TechDocs/55766_SEV-KM API_Spec.pdf

---

TODO:
- send qemu/seabios RFC's on respective mailing list
- integrate the psp driver with CCP driver (they share the PCI id's)
- add SEV guest migration command support
- add SEV snapshotting command support
- determine how to do ioremap of physical memory with mem encryption enabled
  (e.g acpi tables)
- determine how to share the guest memory with hypervisor for to support
  pvclock driver

Brijesh Singh (11):
      crypto: add AMD Platform Security Processor driver
      KVM: SVM: prepare to reserve asid for SEV guest
      KVM: SVM: prepare for SEV guest management API support
      KVM: introduce KVM_SEV_ISSUE_CMD ioctl
      KVM: SVM: add SEV launch start command
      KVM: SVM: add SEV launch update command
      KVM: SVM: add SEV_LAUNCH_FINISH command
      KVM: SVM: add KVM_SEV_GUEST_STATUS command
      KVM: SVM: add KVM_SEV_DEBUG_DECRYPT command
      KVM: SVM: add KVM_SEV_DEBUG_ENCRYPT command
      KVM: SVM: add command to query SEV API version

Tom Lendacky (17):
      kvm: svm: Add support for additional SVM NPF error codes
      kvm: svm: Add kvm_fast_pio_in support
      kvm: svm: Use the hardware provided GPA instead of page walk
      x86: Secure Encrypted Virtualization (SEV) support
      KVM: SVM: prepare for new bit definition in nested_ctl
      KVM: SVM: Add SEV feature definitions to KVM
      x86: Do not encrypt memory areas if SEV is enabled
      Access BOOT related data encrypted with SEV active
      x86/efi: Access EFI data as encrypted when SEV is active
      x86: Change early_ioremap to early_memremap for BOOT data
      x86: Don't decrypt trampoline area if SEV is active
      x86: DMA support for SEV memory encryption
      iommu/amd: AMD IOMMU support for SEV
      x86: Don't set the SME MSR bit when SEV is active
      x86: Unroll string I/O when SEV is active
      x86: Add support to determine if running with SEV enabled
      KVM: SVM: Enable SEV by setting the SEV_ENABLE cpu feature


 arch/x86/boot/compressed/Makefile      |    2 
 arch/x86/boot/compressed/head_64.S     |   19 +
 arch/x86/boot/compressed/mem_encrypt.S |  123 ++++
 arch/x86/include/asm/io.h              |   26 +
 arch/x86/include/asm/kvm_emulate.h     |    3 
 arch/x86/include/asm/kvm_host.h        |   27 +
 arch/x86/include/asm/mem_encrypt.h     |    3 
 arch/x86/include/asm/svm.h             |    3 
 arch/x86/include/uapi/asm/hyperv.h     |    4 
 arch/x86/include/uapi/asm/kvm_para.h   |    4 
 arch/x86/kernel/acpi/boot.c            |    4 
 arch/x86/kernel/head64.c               |    4 
 arch/x86/kernel/mem_encrypt.S          |   44 ++
 arch/x86/kernel/mpparse.c              |   10 
 arch/x86/kernel/setup.c                |    7 
 arch/x86/kernel/x8664_ksyms_64.c       |    1 
 arch/x86/kvm/cpuid.c                   |    4 
 arch/x86/kvm/mmu.c                     |   20 +
 arch/x86/kvm/svm.c                     |  906 ++++++++++++++++++++++++++++++++
 arch/x86/kvm/x86.c                     |   73 +++
 arch/x86/mm/ioremap.c                  |    7 
 arch/x86/mm/mem_encrypt.c              |   50 ++
 arch/x86/platform/efi/efi_64.c         |   14 
 arch/x86/realmode/init.c               |   11 
 drivers/crypto/Kconfig                 |   11 
 drivers/crypto/Makefile                |    1 
 drivers/crypto/psp/Kconfig             |    8 
 drivers/crypto/psp/Makefile            |    3 
 drivers/crypto/psp/psp-dev.c           |  220 ++++++++
 drivers/crypto/psp/psp-dev.h           |   95 +++
 drivers/crypto/psp/psp-ops.c           |  454 ++++++++++++++++
 drivers/crypto/psp/psp-pci.c           |  376 +++++++++++++
 drivers/sfi/sfi_core.c                 |    6 
 include/linux/ccp-psp.h                |  833 +++++++++++++++++++++++++++++
 include/uapi/linux/Kbuild              |    1 
 include/uapi/linux/ccp-psp.h           |  182 ++++++
 include/uapi/linux/kvm.h               |  125 ++++
 37 files changed, 3643 insertions(+), 41 deletions(-)
 create mode 100644 arch/x86/boot/compressed/mem_encrypt.S
 create mode 100644 drivers/crypto/psp/Kconfig
 create mode 100644 drivers/crypto/psp/Makefile
 create mode 100644 drivers/crypto/psp/psp-dev.c
 create mode 100644 drivers/crypto/psp/psp-dev.h
 create mode 100644 drivers/crypto/psp/psp-ops.c
 create mode 100644 drivers/crypto/psp/psp-pci.c
 create mode 100644 include/linux/ccp-psp.h
 create mode 100644 include/uapi/linux/ccp-psp.h

-- 

Brijesh Singh

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 00/28] x86: Secure Encrypted Virtualization (AMD)
@ 2016-08-22 23:23 ` Brijesh Singh
  0 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:23 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail, linux-kernel, mcgrof, linux-crypto, pbonzini, akpm,
	davem

This RFC series provides support for AMD's new Secure Encrypted 
Virtualization (SEV) feature. This RFC is build upon Secure Memory 
Encryption (SME) RFC.

SEV is an extension to the AMD-V architecture which supports running 
multiple VMs under the control of a hypervisor. When enabled, SEV 
hardware tags all code and data with its VM ASID which indicates which 
VM the data originated from or is intended for. This tag is kept with 
the data at all times when inside the SOC, and prevents that data from 
being used by anyone other than the owner. While the tag protects VM 
data inside the SOC, AES with 128 bit encryption protects data outside 
the SOC. When data leaves or enters the SOC, it is encrypted/decrypted 
respectively by hardware with a key based on the associated tag.

SEV guest VMs have the concept of private and shared memory.  Private memory
is encrypted with the  guest-specific key, while shared memory may be encrypted
with hypervisor key.  Certain types of memory (namely instruction pages and
guest page tables) are always treated as private memory by the hardware.
For data memory, SEV guest VMs can choose which pages they would like to
be private. The choice is done using the standard CPU page tables using
the C-bit, and is fully controlled by the guest. Due to security reasons
all the DMA operations inside the  guest must be performed on shared pages
(C-bit clear).  Note that since C-bit is only controllable by the guest OS
when it is operating in 64-bit or 32-bit PAE mode, in all other modes the
SEV hardware forces the C-bit to a 1.

SEV is designed to protect guest VMs from a benign but vulnerable
(i.e. not fully malicious) hypervisor. In particular, it reduces the attack
surface of guest VMs and can prevent certain types of VM-escape bugs
(e.g. hypervisor read-anywhere) from being used to steal guest data.

The RFC series also includes a crypto driver (psp.ko) which communicates
with SEV firmware that runs within the AMD secure processor provides a
secure key management interfaces. The hypervisor uses this interface to 
enable SEV for secure guest and perform common hypervisor activities
such as launching, running, snapshotting , migrating and debugging a 
guest. A new ioctl (KVM_SEV_ISSUE_CMD) is introduced which will enable
Qemu to send commands to the SEV firmware during guest life cycle.

The RFC series also includes patches required in guest OS to enable SEV 
feature. A guest OS can check SEV support by calling KVM_FEATURE cpuid 
instruction.

The following links provide additional details:

AMD Memory Encryption whitepaper:
 
http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2013/12/AMD_Memory_Encryption_Whitepaper_v7-Public.pdf

AMD64 Architecture Programmer's Manual:
    http://support.amd.com/TechDocs/24593.pdf
    SME is section 7.10
    SEV is section 15.34

Secure Encrypted Virutualization Key Management:
http://support.amd.com/TechDocs/55766_SEV-KM API_Spec.pdf

---

TODO:
- send qemu/seabios RFC's on respective mailing list
- integrate the psp driver with CCP driver (they share the PCI id's)
- add SEV guest migration command support
- add SEV snapshotting command support
- determine how to do ioremap of physical memory with mem encryption enabled
  (e.g acpi tables)
- determine how to share the guest memory with hypervisor for to support
  pvclock driver

Brijesh Singh (11):
      crypto: add AMD Platform Security Processor driver
      KVM: SVM: prepare to reserve asid for SEV guest
      KVM: SVM: prepare for SEV guest management API support
      KVM: introduce KVM_SEV_ISSUE_CMD ioctl
      KVM: SVM: add SEV launch start command
      KVM: SVM: add SEV launch update command
      KVM: SVM: add SEV_LAUNCH_FINISH command
      KVM: SVM: add KVM_SEV_GUEST_STATUS command
      KVM: SVM: add KVM_SEV_DEBUG_DECRYPT command
      KVM: SVM: add KVM_SEV_DEBUG_ENCRYPT command
      KVM: SVM: add command to query SEV API version

Tom Lendacky (17):
      kvm: svm: Add support for additional SVM NPF error codes
      kvm: svm: Add kvm_fast_pio_in support
      kvm: svm: Use the hardware provided GPA instead of page walk
      x86: Secure Encrypted Virtualization (SEV) support
      KVM: SVM: prepare for new bit definition in nested_ctl
      KVM: SVM: Add SEV feature definitions to KVM
      x86: Do not encrypt memory areas if SEV is enabled
      Access BOOT related data encrypted with SEV active
      x86/efi: Access EFI data as encrypted when SEV is active
      x86: Change early_ioremap to early_memremap for BOOT data
      x86: Don't decrypt trampoline area if SEV is active
      x86: DMA support for SEV memory encryption
      iommu/amd: AMD IOMMU support for SEV
      x86: Don't set the SME MSR bit when SEV is active
      x86: Unroll string I/O when SEV is active
      x86: Add support to determine if running with SEV enabled
      KVM: SVM: Enable SEV by setting the SEV_ENABLE cpu feature


 arch/x86/boot/compressed/Makefile      |    2 
 arch/x86/boot/compressed/head_64.S     |   19 +
 arch/x86/boot/compressed/mem_encrypt.S |  123 ++++
 arch/x86/include/asm/io.h              |   26 +
 arch/x86/include/asm/kvm_emulate.h     |    3 
 arch/x86/include/asm/kvm_host.h        |   27 +
 arch/x86/include/asm/mem_encrypt.h     |    3 
 arch/x86/include/asm/svm.h             |    3 
 arch/x86/include/uapi/asm/hyperv.h     |    4 
 arch/x86/include/uapi/asm/kvm_para.h   |    4 
 arch/x86/kernel/acpi/boot.c            |    4 
 arch/x86/kernel/head64.c               |    4 
 arch/x86/kernel/mem_encrypt.S          |   44 ++
 arch/x86/kernel/mpparse.c              |   10 
 arch/x86/kernel/setup.c                |    7 
 arch/x86/kernel/x8664_ksyms_64.c       |    1 
 arch/x86/kvm/cpuid.c                   |    4 
 arch/x86/kvm/mmu.c                     |   20 +
 arch/x86/kvm/svm.c                     |  906 ++++++++++++++++++++++++++++++++
 arch/x86/kvm/x86.c                     |   73 +++
 arch/x86/mm/ioremap.c                  |    7 
 arch/x86/mm/mem_encrypt.c              |   50 ++
 arch/x86/platform/efi/efi_64.c         |   14 
 arch/x86/realmode/init.c               |   11 
 drivers/crypto/Kconfig                 |   11 
 drivers/crypto/Makefile                |    1 
 drivers/crypto/psp/Kconfig             |    8 
 drivers/crypto/psp/Makefile            |    3 
 drivers/crypto/psp/psp-dev.c           |  220 ++++++++
 drivers/crypto/psp/psp-dev.h           |   95 +++
 drivers/crypto/psp/psp-ops.c           |  454 ++++++++++++++++
 drivers/crypto/psp/psp-pci.c           |  376 +++++++++++++
 drivers/sfi/sfi_core.c                 |    6 
 include/linux/ccp-psp.h                |  833 +++++++++++++++++++++++++++++
 include/uapi/linux/Kbuild              |    1 
 include/uapi/linux/ccp-psp.h           |  182 ++++++
 include/uapi/linux/kvm.h               |  125 ++++
 37 files changed, 3643 insertions(+), 41 deletions(-)
 create mode 100644 arch/x86/boot/compressed/mem_encrypt.S
 create mode 100644 drivers/crypto/psp/Kconfig
 create mode 100644 drivers/crypto/psp/Makefile
 create mode 100644 drivers/crypto/psp/psp-dev.c
 create mode 100644 drivers/crypto/psp/psp-dev.h
 create mode 100644 drivers/crypto/psp/psp-ops.c
 create mode 100644 drivers/crypto/psp/psp-pci.c
 create mode 100644 include/linux/ccp-psp.h
 create mode 100644 include/uapi/linux/ccp-psp.h

-- 

Brijesh Singh

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href="mailto:dont@kvack.org">email@kvack.org</a>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 01/28] kvm: svm: Add support for additional SVM NPF error codes
  2016-08-22 23:23 ` Brijesh Singh
                   ` (3 preceding siblings ...)
  (?)
@ 2016-08-22 23:23 ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:23 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab

From: Tom Lendacky <thomas.lendacky@amd.com>

AMD hardware adds two additional bits to aid in nested page fault handling.

Bit 32 - NPF occurred while translating the guest's final physical address
Bit 33 - NPF occurred while translating the guest page tables

The guest page tables fault indicator can be used as an aid for nested
virtualization. Using V0 for the host, V1 for the first-level guest and
V2 for the second-level guest, when both V1 and V2 are using nested paging
there are currently a number of unnecessary instruction emulations. When
V2 is launched, shadow paging is used in V1 for the nested tables of V2.
As a result, KVM marks these pages as RO in the host nested page tables.
When V2 exits and we resume V1, these pages are still marked RO.

Every nested walk of a guest page table is treated as a user-level write
access, and this causes a lot of NPFs because the V1 page tables are marked
RO in the V0 nested tables. While executing V1, when these NPFs occur KVM
sees a write to a read-only page, emulates the V1 instruction and unprotects
the page (marking it RW). This patch looks for cases where we get an NPF due
to a guest page table walk where the page was marked RO. It immediately
unprotects the page and resumes the guest, leading to far fewer instruction
emulations when nested virtualization is used.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/include/asm/kvm_host.h |   11 ++++++++++-
 arch/x86/kvm/mmu.c              |   20 ++++++++++++++++++--
 arch/x86/kvm/svm.c              |    2 +-
 3 files changed, 29 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index c51c1cb..3f05d36 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -191,6 +191,8 @@ enum {
 #define PFERR_RSVD_BIT 3
 #define PFERR_FETCH_BIT 4
 #define PFERR_PK_BIT 5
+#define PFERR_GUEST_FINAL_BIT 32
+#define PFERR_GUEST_PAGE_BIT 33
 
 #define PFERR_PRESENT_MASK (1U << PFERR_PRESENT_BIT)
 #define PFERR_WRITE_MASK (1U << PFERR_WRITE_BIT)
@@ -198,6 +200,13 @@ enum {
 #define PFERR_RSVD_MASK (1U << PFERR_RSVD_BIT)
 #define PFERR_FETCH_MASK (1U << PFERR_FETCH_BIT)
 #define PFERR_PK_MASK (1U << PFERR_PK_BIT)
+#define PFERR_GUEST_FINAL_MASK (1ULL << PFERR_GUEST_FINAL_BIT)
+#define PFERR_GUEST_PAGE_MASK (1ULL << PFERR_GUEST_PAGE_BIT)
+
+#define PFERR_NESTED_GUEST_PAGE (PFERR_GUEST_PAGE_MASK |	\
+				 PFERR_USER_MASK |		\
+				 PFERR_WRITE_MASK |		\
+				 PFERR_PRESENT_MASK)
 
 /* apic attention bits */
 #define KVM_APIC_CHECK_VAPIC	0
@@ -1203,7 +1212,7 @@ void kvm_vcpu_deactivate_apicv(struct kvm_vcpu *vcpu);
 
 int kvm_emulate_hypercall(struct kvm_vcpu *vcpu);
 
-int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t gva, u32 error_code,
+int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t gva, u64 error_code,
 		       void *insn, int insn_len);
 void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva);
 void kvm_mmu_new_cr3(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index a7040f4..3b47a5d 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4512,7 +4512,7 @@ static void make_mmu_pages_available(struct kvm_vcpu *vcpu)
 	kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
 }
 
-int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u32 error_code,
+int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
 		       void *insn, int insn_len)
 {
 	int r, emulation_type = EMULTYPE_RETRY;
@@ -4531,12 +4531,28 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u32 error_code,
 			return r;
 	}
 
-	r = vcpu->arch.mmu.page_fault(vcpu, cr2, error_code, false);
+	r = vcpu->arch.mmu.page_fault(vcpu, cr2, lower_32_bits(error_code),
+				      false);
 	if (r < 0)
 		return r;
 	if (!r)
 		return 1;
 
+	/*
+	 * Before emulating the instruction, check if the error code
+	 * was due to a RO violation while translating the guest page.
+	 * This can occur when using nested virtualization with nested
+	 * paging in both guests. If true, we simply unprotect the page
+	 * and resume the guest.
+	 *
+	 * Note: AMD only (since it supports the PFERR_GUEST_PAGE_MASK used
+	 *       in PFERR_NESTED_GUEST_PAGE)
+	 */
+	if (error_code == PFERR_NESTED_GUEST_PAGE) {
+		kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(cr2));
+		return 1;
+	}
+
 	if (mmio_info_in_cache(vcpu, cr2, direct))
 		emulation_type = 0;
 emulate:
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 1e6b84b..d8b9c8c 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1935,7 +1935,7 @@ static void svm_set_dr7(struct kvm_vcpu *vcpu, unsigned long value)
 static int pf_interception(struct vcpu_svm *svm)
 {
 	u64 fault_address = svm->vmcb->control.exit_info_2;
-	u32 error_code;
+	u64 error_code;
 	int r = 1;
 
 	switch (svm->apf_reason) {

^ permalink raw reply related	[flat|nested] 255+ messages in thread


* [RFC PATCH v1 02/28] kvm: svm: Add kvm_fast_pio_in support
  2016-08-22 23:23 ` Brijesh Singh
                   ` (5 preceding siblings ...)
  (?)
@ 2016-08-22 23:23 ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:23 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab

From: Tom Lendacky <thomas.lendacky@amd.com>

Update the I/O interception support to add the kvm_fast_pio_in function,
which speeds up the IN instruction in the same way that kvm_fast_pio_out
speeds up the OUT instruction.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/include/asm/kvm_host.h |    1 +
 arch/x86/kvm/svm.c              |    5 +++--
 arch/x86/kvm/x86.c              |   43 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 47 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 3f05d36..c38f878 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1133,6 +1133,7 @@ int kvm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr);
 struct x86_emulate_ctxt;
 
 int kvm_fast_pio_out(struct kvm_vcpu *vcpu, int size, unsigned short port);
+int kvm_fast_pio_in(struct kvm_vcpu *vcpu, int size, unsigned short port);
 void kvm_emulate_cpuid(struct kvm_vcpu *vcpu);
 int kvm_emulate_halt(struct kvm_vcpu *vcpu);
 int kvm_vcpu_halt(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index d8b9c8c..fd5a9a8 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -2131,7 +2131,7 @@ static int io_interception(struct vcpu_svm *svm)
 	++svm->vcpu.stat.io_exits;
 	string = (io_info & SVM_IOIO_STR_MASK) != 0;
 	in = (io_info & SVM_IOIO_TYPE_MASK) != 0;
-	if (string || in)
+	if (string)
 		return emulate_instruction(vcpu, 0) == EMULATE_DONE;
 
 	port = io_info >> 16;
@@ -2139,7 +2139,8 @@ static int io_interception(struct vcpu_svm *svm)
 	svm->next_rip = svm->vmcb->control.exit_info_2;
 	skip_emulated_instruction(&svm->vcpu);
 
-	return kvm_fast_pio_out(vcpu, size, port);
+	return in ? kvm_fast_pio_in(vcpu, size, port)
+		  : kvm_fast_pio_out(vcpu, size, port);
 }
 
 static int nmi_interception(struct vcpu_svm *svm)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d432894..78295b0 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5579,6 +5579,49 @@ int kvm_fast_pio_out(struct kvm_vcpu *vcpu, int size, unsigned short port)
 }
 EXPORT_SYMBOL_GPL(kvm_fast_pio_out);
 
+static int complete_fast_pio_in(struct kvm_vcpu *vcpu)
+{
+	unsigned long val;
+
+	/* We should only ever be called with arch.pio.count equal to 1 */
+	BUG_ON(vcpu->arch.pio.count != 1);
+
+	/* For size less than 4 we merge, else we zero extend */
+	val = (vcpu->arch.pio.size < 4) ? kvm_register_read(vcpu, VCPU_REGS_RAX)
+					: 0;
+
+	/*
+	 * Since vcpu->arch.pio.count == 1 let emulator_pio_in_emulated perform
+	 * the copy and tracing
+	 */
+	emulator_pio_in_emulated(&vcpu->arch.emulate_ctxt, vcpu->arch.pio.size,
+				 vcpu->arch.pio.port, &val, 1);
+	kvm_register_write(vcpu, VCPU_REGS_RAX, val);
+
+	return 1;
+}
+
+int kvm_fast_pio_in(struct kvm_vcpu *vcpu, int size, unsigned short port)
+{
+	unsigned long val;
+	int ret;
+
+	/* For size less than 4 we merge, else we zero extend */
+	val = (size < 4) ? kvm_register_read(vcpu, VCPU_REGS_RAX) : 0;
+
+	ret = emulator_pio_in_emulated(&vcpu->arch.emulate_ctxt, size, port,
+				       &val, 1);
+	if (ret) {
+		kvm_register_write(vcpu, VCPU_REGS_RAX, val);
+		return ret;
+	}
+
+	vcpu->arch.complete_userspace_io = complete_fast_pio_in;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(kvm_fast_pio_in);
+
 static int kvmclock_cpu_down_prep(unsigned int cpu)
 {
 	__this_cpu_write(cpu_tsc_khz, 0);

* [RFC PATCH v1 03/28] kvm: svm: Use the hardware provided GPA instead of page walk
@ 2016-08-22 23:24 ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:24 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab

From: Tom Lendacky <thomas.lendacky@amd.com>

When a guest takes an NPF that requires emulation, KVM sometimes walks
the guest page tables to translate the GVA to a GPA. This is unnecessary
most of the time on AMD hardware, since the hardware provides the GPA in
EXITINFO2.

The only exceptions are string operations that use a rep prefix and
instructions that access two memory locations: with rep, the GPA reflects
only the initial NPF, and with two memory operands there is no way to know
which of the two addresses was translated into EXITINFO2.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/include/asm/kvm_emulate.h |    3 +++
 arch/x86/include/asm/kvm_host.h    |    3 +++
 arch/x86/kvm/svm.c                 |    2 ++
 arch/x86/kvm/x86.c                 |   17 ++++++++++++++++-
 4 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_emulate.h b/arch/x86/include/asm/kvm_emulate.h
index e9cd7be..2d1ac09 100644
--- a/arch/x86/include/asm/kvm_emulate.h
+++ b/arch/x86/include/asm/kvm_emulate.h
@@ -344,6 +344,9 @@ struct x86_emulate_ctxt {
 	struct read_cache mem_read;
 };
 
+/* String operation identifier (matches the definition in emulate.c) */
+#define CTXT_STRING_OP	(1 << 13)
+
 /* Repeat String Operation Prefix */
 #define REPE_PREFIX	0xf3
 #define REPNE_PREFIX	0xf2
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index c38f878..b1dd673 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -667,6 +667,9 @@ struct kvm_vcpu_arch {
 
 	int pending_ioapic_eoi;
 	int pending_external_vector;
+
+	/* GPA available (AMD only) */
+	bool gpa_available;
 };
 
 struct kvm_lpage_info {
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index fd5a9a8..9b2de7c 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -4055,6 +4055,8 @@ static int handle_exit(struct kvm_vcpu *vcpu)
 
 	trace_kvm_exit(exit_code, vcpu, KVM_ISA_SVM);
 
+	vcpu->arch.gpa_available = (exit_code == SVM_EXIT_NPF);
+
 	if (!is_cr_intercept(svm, INTERCEPT_CR0_WRITE))
 		vcpu->arch.cr0 = svm->vmcb->save.cr0;
 	if (npt_enabled)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 78295b0..d6f2f4b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4382,7 +4382,19 @@ static int vcpu_mmio_gva_to_gpa(struct kvm_vcpu *vcpu, unsigned long gva,
 		return 1;
 	}
 
-	*gpa = vcpu->arch.walk_mmu->gva_to_gpa(vcpu, gva, access, exception);
+	/*
+	 * If the exit was due to a NPF we may already have a GPA.
+	 * If the GPA is present, use it to avoid the GVA to GPA table
+	 * walk. Note, this cannot be used on string operations since
+	 * string operation using rep will only have the initial GPA
+	 * from when the NPF occurred.
+	 */
+	if (vcpu->arch.gpa_available &&
+	    !(vcpu->arch.emulate_ctxt.d & CTXT_STRING_OP))
+		*gpa = exception->address;
+	else
+		*gpa = vcpu->arch.walk_mmu->gva_to_gpa(vcpu, gva, access,
+						       exception);
 
 	if (*gpa == UNMAPPED_GVA)
 		return -1;
@@ -5504,6 +5516,9 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu,
 	}
 
 restart:
+	/* Save the faulting GPA (cr2) in the address field */
+	ctxt->exception.address = cr2;
+
 	r = x86_emulate_insn(ctxt);
 
 	if (r == EMULATION_INTERCEPTED)

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 03/28] kvm: svm: Use the hardware provided GPA instead of page walk
  2016-08-22 23:23 ` Brijesh Singh
  (?)
  (?)
@ 2016-08-22 23:24   ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:24 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail, linux-kernel, mcgrof, linux-crypto, pbonzini, akpm,
	davem

From: Tom Lendacky <thomas.lendacky@amd.com>

When a guest causes a NPF which requires emulation, KVM sometimes walks
the guest page tables to translate the GVA to a GPA. This is unnecessary
most of the time on AMD hardware since the hardware provides the GPA in
EXITINFO2.

The only exception cases involve string operations involving rep or
operations that use two memory locations. With rep, the GPA will only be
the value of the initial NPF and with dual memory locations we won't know
which memory address was translated into EXITINFO2.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/include/asm/kvm_emulate.h |    3 +++
 arch/x86/include/asm/kvm_host.h    |    3 +++
 arch/x86/kvm/svm.c                 |    2 ++
 arch/x86/kvm/x86.c                 |   17 ++++++++++++++++-
 4 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_emulate.h b/arch/x86/include/asm/kvm_emulate.h
index e9cd7be..2d1ac09 100644
--- a/arch/x86/include/asm/kvm_emulate.h
+++ b/arch/x86/include/asm/kvm_emulate.h
@@ -344,6 +344,9 @@ struct x86_emulate_ctxt {
 	struct read_cache mem_read;
 };
 
+/* String operation identifier (matches the definition in emulate.c) */
+#define CTXT_STRING_OP	(1 << 13)
+
 /* Repeat String Operation Prefix */
 #define REPE_PREFIX	0xf3
 #define REPNE_PREFIX	0xf2
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index c38f878..b1dd673 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -667,6 +667,9 @@ struct kvm_vcpu_arch {
 
 	int pending_ioapic_eoi;
 	int pending_external_vector;
+
+	/* GPA available (AMD only) */
+	bool gpa_available;
 };
 
 struct kvm_lpage_info {
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index fd5a9a8..9b2de7c 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -4055,6 +4055,8 @@ static int handle_exit(struct kvm_vcpu *vcpu)
 
 	trace_kvm_exit(exit_code, vcpu, KVM_ISA_SVM);
 
+	vcpu->arch.gpa_available = (exit_code == SVM_EXIT_NPF);
+
 	if (!is_cr_intercept(svm, INTERCEPT_CR0_WRITE))
 		vcpu->arch.cr0 = svm->vmcb->save.cr0;
 	if (npt_enabled)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 78295b0..d6f2f4b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4382,7 +4382,19 @@ static int vcpu_mmio_gva_to_gpa(struct kvm_vcpu *vcpu, unsigned long gva,
 		return 1;
 	}
 
-	*gpa = vcpu->arch.walk_mmu->gva_to_gpa(vcpu, gva, access, exception);
+	/*
+	 * If the exit was due to a NPF we may already have a GPA.
+	 * If the GPA is present, use it to avoid the GVA to GPA table
+	 * walk. Note, this cannot be used on string operations since
+	 * string operation using rep will only have the initial GPA
+	 * from when the NPF occurred.
+	 */
+	if (vcpu->arch.gpa_available &&
+	    !(vcpu->arch.emulate_ctxt.d & CTXT_STRING_OP))
+		*gpa = exception->address;
+	else
+		*gpa = vcpu->arch.walk_mmu->gva_to_gpa(vcpu, gva, access,
+						       exception);
 
 	if (*gpa == UNMAPPED_GVA)
 		return -1;
@@ -5504,6 +5516,9 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu,
 	}
 
 restart:
+	/* Save the faulting GPA (cr2) in the address field */
+	ctxt->exception.address = cr2;
+
 	r = x86_emulate_insn(ctxt);
 
 	if (r == EMULATION_INTERCEPTED)

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 03/28] kvm: svm: Use the hardware provided GPA instead of page walk
@ 2016-08-22 23:24   ` Brijesh Singh
  0 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:24 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounin

From: Tom Lendacky <thomas.lendacky@amd.com>

When a guest causes a NPF which requires emulation, KVM sometimes walks
the guest page tables to translate the GVA to a GPA. This is unnecessary
most of the time on AMD hardware since the hardware provides the GPA in
EXITINFO2.

The only exception cases involve string operations involving rep or
operations that use two memory locations. With rep, the GPA will only be
the value of the initial NPF and with dual memory locations we won't know
which memory address was translated into EXITINFO2.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/include/asm/kvm_emulate.h |    3 +++
 arch/x86/include/asm/kvm_host.h    |    3 +++
 arch/x86/kvm/svm.c                 |    2 ++
 arch/x86/kvm/x86.c                 |   17 ++++++++++++++++-
 4 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_emulate.h b/arch/x86/include/asm/kvm_emulate.h
index e9cd7be..2d1ac09 100644
--- a/arch/x86/include/asm/kvm_emulate.h
+++ b/arch/x86/include/asm/kvm_emulate.h
@@ -344,6 +344,9 @@ struct x86_emulate_ctxt {
 	struct read_cache mem_read;
 };
 
+/* String operation identifier (matches the definition in emulate.c) */
+#define CTXT_STRING_OP	(1 << 13)
+
 /* Repeat String Operation Prefix */
 #define REPE_PREFIX	0xf3
 #define REPNE_PREFIX	0xf2
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index c38f878..b1dd673 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -667,6 +667,9 @@ struct kvm_vcpu_arch {
 
 	int pending_ioapic_eoi;
 	int pending_external_vector;
+
+	/* GPA available (AMD only) */
+	bool gpa_available;
 };
 
 struct kvm_lpage_info {
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index fd5a9a8..9b2de7c 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -4055,6 +4055,8 @@ static int handle_exit(struct kvm_vcpu *vcpu)
 
 	trace_kvm_exit(exit_code, vcpu, KVM_ISA_SVM);
 
+	vcpu->arch.gpa_available = (exit_code == SVM_EXIT_NPF);
+
 	if (!is_cr_intercept(svm, INTERCEPT_CR0_WRITE))
 		vcpu->arch.cr0 = svm->vmcb->save.cr0;
 	if (npt_enabled)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 78295b0..d6f2f4b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4382,7 +4382,19 @@ static int vcpu_mmio_gva_to_gpa(struct kvm_vcpu *vcpu, unsigned long gva,
 		return 1;
 	}
 
-	*gpa = vcpu->arch.walk_mmu->gva_to_gpa(vcpu, gva, access, exception);
+	/*
+	 * If the exit was due to a NPF we may already have a GPA.
+	 * If the GPA is present, use it to avoid the GVA to GPA table
+	 * walk. Note, this cannot be used on string operations since
+	 * string operation using rep will only have the initial GPA
+	 * from when the NPF occurred.
+	 */
+	if (vcpu->arch.gpa_available &&
+	    !(vcpu->arch.emulate_ctxt.d & CTXT_STRING_OP))
+		*gpa = exception->address;
+	else
+		*gpa = vcpu->arch.walk_mmu->gva_to_gpa(vcpu, gva, access,
+						       exception);
 
 	if (*gpa == UNMAPPED_GVA)
 		return -1;
@@ -5504,6 +5516,9 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu,
 	}
 
 restart:
+	/* Save the faulting GPA (cr2) in the address field */
+	ctxt->exception.address = cr2;
+
 	r = x86_emulate_insn(ctxt);
 
 	if (r == EMULATION_INTERCEPTED)

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 03/28] kvm: svm: Use the hardware provided GPA instead of page walk
@ 2016-08-22 23:24   ` Brijesh Singh
  0 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:24 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounin

From: Tom Lendacky <thomas.lendacky@amd.com>

When a guest causes a NPF which requires emulation, KVM sometimes walks
the guest page tables to translate the GVA to a GPA. This is unnecessary
most of the time on AMD hardware since the hardware provides the GPA in
EXITINFO2.

The only exception cases involve string operations involving rep or
operations that use two memory locations. With rep, the GPA will only be
the value of the initial NPF and with dual memory locations we won't know
which memory address was translated into EXITINFO2.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/include/asm/kvm_emulate.h |    3 +++
 arch/x86/include/asm/kvm_host.h    |    3 +++
 arch/x86/kvm/svm.c                 |    2 ++
 arch/x86/kvm/x86.c                 |   17 ++++++++++++++++-
 4 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_emulate.h b/arch/x86/include/asm/kvm_emulate.h
index e9cd7be..2d1ac09 100644
--- a/arch/x86/include/asm/kvm_emulate.h
+++ b/arch/x86/include/asm/kvm_emulate.h
@@ -344,6 +344,9 @@ struct x86_emulate_ctxt {
 	struct read_cache mem_read;
 };
 
+/* String operation identifier (matches the definition in emulate.c) */
+#define CTXT_STRING_OP	(1 << 13)
+
 /* Repeat String Operation Prefix */
 #define REPE_PREFIX	0xf3
 #define REPNE_PREFIX	0xf2
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index c38f878..b1dd673 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -667,6 +667,9 @@ struct kvm_vcpu_arch {
 
 	int pending_ioapic_eoi;
 	int pending_external_vector;
+
+	/* GPA available (AMD only) */
+	bool gpa_available;
 };
 
 struct kvm_lpage_info {
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index fd5a9a8..9b2de7c 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -4055,6 +4055,8 @@ static int handle_exit(struct kvm_vcpu *vcpu)
 
 	trace_kvm_exit(exit_code, vcpu, KVM_ISA_SVM);
 
+	vcpu->arch.gpa_available = (exit_code == SVM_EXIT_NPF);
+
 	if (!is_cr_intercept(svm, INTERCEPT_CR0_WRITE))
 		vcpu->arch.cr0 = svm->vmcb->save.cr0;
 	if (npt_enabled)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 78295b0..d6f2f4b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4382,7 +4382,19 @@ static int vcpu_mmio_gva_to_gpa(struct kvm_vcpu *vcpu, unsigned long gva,
 		return 1;
 	}
 
-	*gpa = vcpu->arch.walk_mmu->gva_to_gpa(vcpu, gva, access, exception);
+	/*
+	 * If the exit was due to an NPF, we may already have a GPA.
+	 * If the GPA is present, use it to avoid the GVA-to-GPA table
+	 * walk. Note that this cannot be used for string operations,
+	 * since a string operation using rep will only have the initial
+	 * GPA from when the NPF occurred.
+	 */
+	if (vcpu->arch.gpa_available &&
+	    !(vcpu->arch.emulate_ctxt.d & CTXT_STRING_OP))
+		*gpa = exception->address;
+	else
+		*gpa = vcpu->arch.walk_mmu->gva_to_gpa(vcpu, gva, access,
+						       exception);
 
 	if (*gpa == UNMAPPED_GVA)
 		return -1;
@@ -5504,6 +5516,9 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu,
 	}
 
 restart:
+	/* Save the faulting GPA (cr2) in the address field */
+	ctxt->exception.address = cr2;
+
 	r = x86_emulate_insn(ctxt);
 
 	if (r == EMULATION_INTERCEPTED)


* [RFC PATCH v1 04/28] x86: Secure Encrypted Virtualization (SEV) support
  2016-08-22 23:23 ` Brijesh Singh
                   ` (8 preceding siblings ...)
  (?)
@ 2016-08-22 23:24 ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:24 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab

From: Tom Lendacky <thomas.lendacky@amd.com>

Provide support for Secure Encrypted Virtualization (SEV). This initial
support defines the SEV active flag, which lets the kernel determine
whether it is running with SEV active.
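
The intended use of the flag — skip the in-place kernel encryption step when SEV already provides an encrypted image — can be illustrated with a hedged C sketch. The globals and `should_encrypt_kernel()` below are plain user-space stand-ins, not the real kernel symbols or the assembly in sme_encrypt_kernel():

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for the kernel's sev_active flag and SME mask. */
static unsigned int sev_active;		/* nonzero in an SEV guest */
static unsigned long sme_me_mask;	/* nonzero when SME is active */

/*
 * Mirrors the early-boot decision sketched in the patch: under SEV the
 * guest image is already encrypted by hardware, so encrypt the kernel
 * in place only when SME (and not SEV) is active.
 */
static bool should_encrypt_kernel(void)
{
	if (sev_active)
		return false;	/* SEV: already encrypted */
	if (!sme_me_mask)
		return false;	/* SME not active: nothing to do */
	return true;
}
```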

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/include/asm/mem_encrypt.h |    3 +++
 arch/x86/kernel/mem_encrypt.S      |    8 ++++++++
 arch/x86/kernel/x8664_ksyms_64.c   |    1 +
 3 files changed, 12 insertions(+)

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index e395729..9c592d1 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -20,6 +20,7 @@
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 
 extern unsigned long sme_me_mask;
+extern unsigned int sev_active;
 
 u8 sme_get_me_loss(void);
 
@@ -50,6 +51,8 @@ void swiotlb_set_mem_dec(void *vaddr, unsigned long size);
 
 #define sme_me_mask		0UL
 
+#define sev_active		0
+
 static inline u8 sme_get_me_loss(void)
 {
 	return 0;
diff --git a/arch/x86/kernel/mem_encrypt.S b/arch/x86/kernel/mem_encrypt.S
index bf9f6a9..6a8cd18 100644
--- a/arch/x86/kernel/mem_encrypt.S
+++ b/arch/x86/kernel/mem_encrypt.S
@@ -96,6 +96,10 @@ ENDPROC(sme_enable)
 
 ENTRY(sme_encrypt_kernel)
 #ifdef CONFIG_AMD_MEM_ENCRYPT
+	/* If SEV is active then the kernel is already encrypted */
+	cmpl	$0, sev_active(%rip)
+	jnz	.Lencrypt_exit
+
 	/* If SME is not active then no need to encrypt the kernel */
 	cmpq	$0, sme_me_mask(%rip)
 	jz	.Lencrypt_exit
@@ -334,6 +338,10 @@ sme_me_loss:
 	.byte	0x00
 	.align	8
 
+ENTRY(sev_active)
+	.long	0x00000000
+	.align	8
+
 mem_encrypt_enable_option:
 	.asciz "mem_encrypt=on"
 	.align	8
diff --git a/arch/x86/kernel/x8664_ksyms_64.c b/arch/x86/kernel/x8664_ksyms_64.c
index 651c4c8..14bfc0b 100644
--- a/arch/x86/kernel/x8664_ksyms_64.c
+++ b/arch/x86/kernel/x8664_ksyms_64.c
@@ -88,4 +88,5 @@ EXPORT_SYMBOL(___preempt_schedule_notrace);
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 EXPORT_SYMBOL_GPL(sme_me_mask);
 EXPORT_SYMBOL_GPL(sme_get_me_loss);
+EXPORT_SYMBOL_GPL(sev_active);
 #endif


* [RFC PATCH v1 05/28] KVM: SVM: prepare for new bit definition in nested_ctl
  2016-08-22 23:23 ` Brijesh Singh
                   ` (11 preceding siblings ...)
  (?)
@ 2016-08-22 23:24 ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:24 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab

From: Tom Lendacky <thomas.lendacky@amd.com>

Currently the nested_ctl variable in the vmcb_control_area structure is
used to indicate nested paging support. The nested paging support field
is actually defined as bit 0 of this field. To support a new feature
flag, the users of nested_ctl and the nested paging support check must
be converted to operate on that single bit.
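
The conversion from treating nested_ctl as a boolean to treating it as a bit field can be shown with a small sketch. SVM_NESTED_CTL_NP_ENABLE matches the patch; the second flag and the helper functions are hypothetical, added only to show why OR-ing and mask-testing matter once more than one bit lives in the field:

```c
#include <assert.h>
#include <stdint.h>

#define BIT(n)	(1ULL << (n))

#define SVM_NESTED_CTL_NP_ENABLE	BIT(0)	/* nested paging (as in the patch) */
#define SVM_NESTED_CTL_NEW_FEATURE	BIT(1)	/* hypothetical future flag */

/*
 * Before the patch: control->nested_ctl = 1; would clobber any other
 * bits. After: OR in the NP bit and test it with a mask, so additional
 * feature bits can coexist in the same field.
 */
static uint64_t enable_np(uint64_t nested_ctl)
{
	return nested_ctl | SVM_NESTED_CTL_NP_ENABLE;
}

static int np_enabled(uint64_t nested_ctl)
{
	return !!(nested_ctl & SVM_NESTED_CTL_NP_ENABLE);
}
```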

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/include/asm/svm.h |    2 ++
 arch/x86/kvm/svm.c         |    7 ++++---
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index 14824fc..2aca535 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -136,6 +136,8 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
 #define SVM_VM_CR_SVM_LOCK_MASK 0x0008ULL
 #define SVM_VM_CR_SVM_DIS_MASK  0x0010ULL
 
+#define SVM_NESTED_CTL_NP_ENABLE	BIT(0)
+
 struct __attribute__ ((__packed__)) vmcb_seg {
 	u16 selector;
 	u16 attrib;
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 9b2de7c..9b59260 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1177,7 +1177,7 @@ static void init_vmcb(struct vcpu_svm *svm)
 
 	if (npt_enabled) {
 		/* Setup VMCB for Nested Paging */
-		control->nested_ctl = 1;
+		control->nested_ctl |= SVM_NESTED_CTL_NP_ENABLE;
 		clr_intercept(svm, INTERCEPT_INVLPG);
 		clr_exception_intercept(svm, PF_VECTOR);
 		clr_cr_intercept(svm, INTERCEPT_CR3_READ);
@@ -2701,7 +2701,8 @@ static bool nested_vmcb_checks(struct vmcb *vmcb)
 	if (vmcb->control.asid == 0)
 		return false;
 
-	if (vmcb->control.nested_ctl && !npt_enabled)
+	if ((vmcb->control.nested_ctl & SVM_NESTED_CTL_NP_ENABLE) &&
+	    !npt_enabled)
 		return false;
 
 	return true;
@@ -2776,7 +2777,7 @@ static bool nested_svm_vmrun(struct vcpu_svm *svm)
 	else
 		svm->vcpu.arch.hflags &= ~HF_HIF_MASK;
 
-	if (nested_vmcb->control.nested_ctl) {
+	if (nested_vmcb->control.nested_ctl & SVM_NESTED_CTL_NP_ENABLE) {
 		kvm_mmu_unload(&svm->vcpu);
 		svm->nested.nested_cr3 = nested_vmcb->control.nested_cr3;
 		nested_svm_init_mmu_context(&svm->vcpu);

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 05/28] KVM: SVM: prepare for new bit definition in nested_ctl
  2016-08-22 23:23 ` Brijesh Singh
  (?)
  (?)
@ 2016-08-22 23:24   ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:24 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail, linux-kernel, mcgrof, linux-crypto, pbonzini, akpm,
	davem

From: Tom Lendacky <thomas.lendacky@amd.com>

Currently the nested_ctl variable in the vmcb_control_area structure is
used to indicate nested paging support. The nested paging support field
is actually defined as bit 0 of the this field. In order to support a new
feature flag the usage of the nested_ctl and nested paging support must
be converted to operate on a single bit.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/include/asm/svm.h |    2 ++
 arch/x86/kvm/svm.c         |    7 ++++---
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index 14824fc..2aca535 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -136,6 +136,8 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
 #define SVM_VM_CR_SVM_LOCK_MASK 0x0008ULL
 #define SVM_VM_CR_SVM_DIS_MASK  0x0010ULL
 
+#define SVM_NESTED_CTL_NP_ENABLE	BIT(0)
+
 struct __attribute__ ((__packed__)) vmcb_seg {
 	u16 selector;
 	u16 attrib;
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 9b2de7c..9b59260 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1177,7 +1177,7 @@ static void init_vmcb(struct vcpu_svm *svm)
 
 	if (npt_enabled) {
 		/* Setup VMCB for Nested Paging */
-		control->nested_ctl = 1;
+		control->nested_ctl |= SVM_NESTED_CTL_NP_ENABLE;
 		clr_intercept(svm, INTERCEPT_INVLPG);
 		clr_exception_intercept(svm, PF_VECTOR);
 		clr_cr_intercept(svm, INTERCEPT_CR3_READ);
@@ -2701,7 +2701,8 @@ static bool nested_vmcb_checks(struct vmcb *vmcb)
 	if (vmcb->control.asid == 0)
 		return false;
 
-	if (vmcb->control.nested_ctl && !npt_enabled)
+	if ((vmcb->control.nested_ctl & SVM_NESTED_CTL_NP_ENABLE) &&
+	    !npt_enabled)
 		return false;
 
 	return true;
@@ -2776,7 +2777,7 @@ static bool nested_svm_vmrun(struct vcpu_svm *svm)
 	else
 		svm->vcpu.arch.hflags &= ~HF_HIF_MASK;
 
-	if (nested_vmcb->control.nested_ctl) {
+	if (nested_vmcb->control.nested_ctl & SVM_NESTED_CTL_NP_ENABLE) {
 		kvm_mmu_unload(&svm->vcpu);
 		svm->nested.nested_cr3 = nested_vmcb->control.nested_cr3;
 		nested_svm_init_mmu_context(&svm->vcpu);

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 05/28] KVM: SVM: prepare for new bit definition in nested_ctl
@ 2016-08-22 23:24   ` Brijesh Singh
  0 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:24 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounin

From: Tom Lendacky <thomas.lendacky@amd.com>

Currently the nested_ctl variable in the vmcb_control_area structure is
used to indicate nested paging support. The nested paging support field
is actually defined as bit 0 of the this field. In order to support a new
feature flag the usage of the nested_ctl and nested paging support must
be converted to operate on a single bit.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/include/asm/svm.h |    2 ++
 arch/x86/kvm/svm.c         |    7 ++++---
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index 14824fc..2aca535 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -136,6 +136,8 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
 #define SVM_VM_CR_SVM_LOCK_MASK 0x0008ULL
 #define SVM_VM_CR_SVM_DIS_MASK  0x0010ULL
 
+#define SVM_NESTED_CTL_NP_ENABLE	BIT(0)
+
 struct __attribute__ ((__packed__)) vmcb_seg {
 	u16 selector;
 	u16 attrib;
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 9b2de7c..9b59260 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1177,7 +1177,7 @@ static void init_vmcb(struct vcpu_svm *svm)
 
 	if (npt_enabled) {
 		/* Setup VMCB for Nested Paging */
-		control->nested_ctl = 1;
+		control->nested_ctl |= SVM_NESTED_CTL_NP_ENABLE;
 		clr_intercept(svm, INTERCEPT_INVLPG);
 		clr_exception_intercept(svm, PF_VECTOR);
 		clr_cr_intercept(svm, INTERCEPT_CR3_READ);
@@ -2701,7 +2701,8 @@ static bool nested_vmcb_checks(struct vmcb *vmcb)
 	if (vmcb->control.asid == 0)
 		return false;
 
-	if (vmcb->control.nested_ctl && !npt_enabled)
+	if ((vmcb->control.nested_ctl & SVM_NESTED_CTL_NP_ENABLE) &&
+	    !npt_enabled)
 		return false;
 
 	return true;
@@ -2776,7 +2777,7 @@ static bool nested_svm_vmrun(struct vcpu_svm *svm)
 	else
 		svm->vcpu.arch.hflags &= ~HF_HIF_MASK;
 
-	if (nested_vmcb->control.nested_ctl) {
+	if (nested_vmcb->control.nested_ctl & SVM_NESTED_CTL_NP_ENABLE) {
 		kvm_mmu_unload(&svm->vcpu);
 		svm->nested.nested_cr3 = nested_vmcb->control.nested_cr3;
 		nested_svm_init_mmu_context(&svm->vcpu);

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 05/28] KVM: SVM: prepare for new bit definition in nested_ctl
@ 2016-08-22 23:24   ` Brijesh Singh
  0 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:24 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounin

From: Tom Lendacky <thomas.lendacky@amd.com>

Currently the nested_ctl variable in the vmcb_control_area structure is
used to indicate nested paging support. The nested paging support field
is actually defined as bit 0 of the this field. In order to support a new
feature flag the usage of the nested_ctl and nested paging support must
be converted to operate on a single bit.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/include/asm/svm.h |    2 ++
 arch/x86/kvm/svm.c         |    7 ++++---
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index 14824fc..2aca535 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -136,6 +136,8 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
 #define SVM_VM_CR_SVM_LOCK_MASK 0x0008ULL
 #define SVM_VM_CR_SVM_DIS_MASK  0x0010ULL
 
+#define SVM_NESTED_CTL_NP_ENABLE	BIT(0)
+
 struct __attribute__ ((__packed__)) vmcb_seg {
 	u16 selector;
 	u16 attrib;
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 9b2de7c..9b59260 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1177,7 +1177,7 @@ static void init_vmcb(struct vcpu_svm *svm)
 
 	if (npt_enabled) {
 		/* Setup VMCB for Nested Paging */
-		control->nested_ctl = 1;
+		control->nested_ctl |= SVM_NESTED_CTL_NP_ENABLE;
 		clr_intercept(svm, INTERCEPT_INVLPG);
 		clr_exception_intercept(svm, PF_VECTOR);
 		clr_cr_intercept(svm, INTERCEPT_CR3_READ);
@@ -2701,7 +2701,8 @@ static bool nested_vmcb_checks(struct vmcb *vmcb)
 	if (vmcb->control.asid == 0)
 		return false;
 
-	if (vmcb->control.nested_ctl && !npt_enabled)
+	if ((vmcb->control.nested_ctl & SVM_NESTED_CTL_NP_ENABLE) &&
+	    !npt_enabled)
 		return false;
 
 	return true;
@@ -2776,7 +2777,7 @@ static bool nested_svm_vmrun(struct vcpu_svm *svm)
 	else
 		svm->vcpu.arch.hflags &= ~HF_HIF_MASK;
 
-	if (nested_vmcb->control.nested_ctl) {
+	if (nested_vmcb->control.nested_ctl & SVM_NESTED_CTL_NP_ENABLE) {
 		kvm_mmu_unload(&svm->vcpu);
 		svm->nested.nested_cr3 = nested_vmcb->control.nested_cr3;
 		nested_svm_init_mmu_context(&svm->vcpu);


* [RFC PATCH v1 06/28] KVM: SVM: Add SEV feature definitions to KVM
  2016-08-22 23:23 ` Brijesh Singh
                   ` (12 preceding siblings ...)
  (?)
@ 2016-08-22 23:24 ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:24 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab

From: Tom Lendacky <thomas.lendacky@amd.com>

Define a new KVM CPU feature for Secure Encrypted Virtualization (SEV).
The kernel will check for the presence of this feature to determine if
it is running with SEV active.

Define the SEV enable bit for the VMCB control structure. The hypervisor
will use this bit to enable SEV in the guest.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/include/asm/svm.h           |    1 +
 arch/x86/include/uapi/asm/kvm_para.h |    1 +
 2 files changed, 2 insertions(+)

diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index 2aca535..fba2a7b 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -137,6 +137,7 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
 #define SVM_VM_CR_SVM_DIS_MASK  0x0010ULL
 
 #define SVM_NESTED_CTL_NP_ENABLE	BIT(0)
+#define SVM_NESTED_CTL_SEV_ENABLE	BIT(1)
 
 struct __attribute__ ((__packed__)) vmcb_seg {
 	u16 selector;
diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
index 94dc8ca..67dd610f 100644
--- a/arch/x86/include/uapi/asm/kvm_para.h
+++ b/arch/x86/include/uapi/asm/kvm_para.h
@@ -24,6 +24,7 @@
 #define KVM_FEATURE_STEAL_TIME		5
 #define KVM_FEATURE_PV_EOI		6
 #define KVM_FEATURE_PV_UNHALT		7
+#define KVM_FEATURE_SEV			8
 
 /* The last 8 bits are used to indicate how to interpret the flags field
  * in pvclock structure. If no bits are set, all flags are ignored.


* [RFC PATCH v1 07/28] x86: Do not encrypt memory areas if SEV is enabled
  2016-08-22 23:23 ` Brijesh Singh
                   ` (15 preceding siblings ...)
  (?)
@ 2016-08-22 23:24 ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:24 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab

From: Tom Lendacky <thomas.lendacky@amd.com>

When running under SEV, some memory areas that were originally not
encrypted under SME are already encrypted. In these situations, do not
attempt to encrypt them.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/kernel/head64.c |    4 ++--
 arch/x86/kernel/setup.c  |    7 ++++---
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index 358d7bc..4a15def 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -114,7 +114,7 @@ static void __init create_unencrypted_mapping(void *address, unsigned long size)
 	unsigned long physaddr = (unsigned long)address - __PAGE_OFFSET;
 	pmdval_t pmd_flags, pmd;
 
-	if (!sme_me_mask)
+	if (!sme_me_mask || sev_active)
 		return;
 
 	/* Clear the encryption mask from the early_pmd_flags */
@@ -165,7 +165,7 @@ static void __init __clear_mapping(unsigned long address)
 
 static void __init clear_mapping(void *address, unsigned long size)
 {
-	if (!sme_me_mask)
+	if (!sme_me_mask || sev_active)
 		return;
 
 	do {
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index cec8a63..9c10383 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -380,10 +380,11 @@ static void __init reserve_initrd(void)
 
 	/*
 	 * This memory is marked encrypted by the kernel but the ramdisk
-	 * was loaded in the clear by the bootloader, so make sure that
-	 * the ramdisk image is encrypted.
+	 * was loaded in the clear by the bootloader (unless SEV is active),
+	 * so make sure that the ramdisk image is encrypted.
 	 */
-	sme_early_mem_enc(ramdisk_image, ramdisk_end - ramdisk_image);
+	if (!sev_active)
+		sme_early_mem_enc(ramdisk_image, ramdisk_end - ramdisk_image);
 
 	initrd_start = 0;
 


* [RFC PATCH v1 08/28] Access BOOT related data encrypted with SEV active
  2016-08-22 23:23 ` Brijesh Singh
                   ` (17 preceding siblings ...)
  (?)
@ 2016-08-22 23:25 ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:25 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab

From: Tom Lendacky <thomas.lendacky@amd.com>

When Secure Encrypted Virtualization (SEV) is active, BOOT data (such as
EFI-related data) is encrypted and needs to be accessed as such. Update the
architecture override in early_memremap to keep the encryption attribute
when mapping this data.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/mm/ioremap.c |    7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index e3bdc5a..2ea6deb 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -429,10 +429,11 @@ pgprot_t __init early_memremap_pgprot_adjust(resource_size_t phys_addr,
 					     pgprot_t prot)
 {
 	/*
-	 * If memory encryption is enabled and BOOT_DATA is being mapped
-	 * then remove the encryption bit.
+	 * If memory encryption is enabled, we are not running with
+	 * SEV active and BOOT_DATA is being mapped then remove the
+	 * encryption bit
 	 */
-	if (_PAGE_ENC && (owner == BOOT_DATA))
+	if (_PAGE_ENC && !sev_active && (owner == BOOT_DATA))
 		prot = __pgprot(pgprot_val(prot) & ~_PAGE_ENC);
 
 	return prot;

* [RFC PATCH v1 09/28] x86/efi: Access EFI data as encrypted when SEV is active
  2016-08-22 23:23 ` Brijesh Singh
                   ` (19 preceding siblings ...)
  (?)
@ 2016-08-22 23:25 ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:25 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab

From: Tom Lendacky <thomas.lendacky@amd.com>

EFI data is encrypted when the kernel runs under SEV. Update the
page-table references to ensure the EFI memory areas are accessed
encrypted.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/platform/efi/efi_64.c |   14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c
index 0871ea4..98363f3 100644
--- a/arch/x86/platform/efi/efi_64.c
+++ b/arch/x86/platform/efi/efi_64.c
@@ -213,7 +213,7 @@ void efi_sync_low_kernel_mappings(void)
 
 int __init efi_setup_page_tables(unsigned long pa_memmap, unsigned num_pages)
 {
-	unsigned long pfn, text;
+	unsigned long pfn, text, flags;
 	efi_memory_desc_t *md;
 	struct page *page;
 	unsigned npages;
@@ -230,6 +230,10 @@ int __init efi_setup_page_tables(unsigned long pa_memmap, unsigned num_pages)
 	efi_scratch.efi_pgt = (pgd_t *)__sme_pa(efi_pgd);
 	pgd = efi_pgd;
 
+	flags = _PAGE_NX | _PAGE_RW;
+	if (sev_active)
+		flags |= _PAGE_ENC;
+
 	/*
 	 * It can happen that the physical address of new_memmap lands in memory
 	 * which is not mapped in the EFI page table. Therefore we need to go
@@ -237,7 +241,7 @@ int __init efi_setup_page_tables(unsigned long pa_memmap, unsigned num_pages)
 	 * phys_efi_set_virtual_address_map().
 	 */
 	pfn = pa_memmap >> PAGE_SHIFT;
-	if (kernel_map_pages_in_pgd(pgd, pfn, pa_memmap, num_pages, _PAGE_NX | _PAGE_RW)) {
+	if (kernel_map_pages_in_pgd(pgd, pfn, pa_memmap, num_pages, flags)) {
 		pr_err("Error ident-mapping new memmap (0x%lx)!\n", pa_memmap);
 		return 1;
 	}
@@ -302,6 +306,9 @@ static void __init __map_region(efi_memory_desc_t *md, u64 va)
 	if (!(md->attribute & EFI_MEMORY_WB))
 		flags |= _PAGE_PCD;
 
+	if (sev_active)
+		flags |= _PAGE_ENC;
+
 	pfn = md->phys_addr >> PAGE_SHIFT;
 	if (kernel_map_pages_in_pgd(pgd, pfn, va, md->num_pages, flags))
 		pr_warn("Error mapping PA 0x%llx -> VA 0x%llx!\n",
@@ -426,6 +433,9 @@ void __init efi_runtime_update_mappings(void)
 			(md->type != EFI_RUNTIME_SERVICES_CODE))
 			pf |= _PAGE_RW;
 
+		if (sev_active)
+			pf |= _PAGE_ENC;
+
 		/* Update the 1:1 mapping */
 		pfn = md->phys_addr >> PAGE_SHIFT;
 		if (kernel_map_pages_in_pgd(pgd, pfn, md->phys_addr, md->num_pages, pf))

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 10/28] x86: Change early_ioremap to early_memremap for BOOT data
  2016-08-22 23:23 ` Brijesh Singh
                   ` (21 preceding siblings ...)
  (?)
@ 2016-08-22 23:25 ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:25 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab

From: Tom Lendacky <thomas.lendacky@amd.com>

The ACPI, MP-table and SFI code map BOOT data, which is normal memory
rather than MMIO. Convert these early_ioremap()/early_iounmap() calls to
early_memremap()/early_memunmap() with the BOOT_DATA owner tag so that
the mappings receive the correct encryption attribute.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/kernel/acpi/boot.c |    4 ++--
 arch/x86/kernel/mpparse.c   |   10 +++++-----
 drivers/sfi/sfi_core.c      |    6 +++---
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
index 1ad5fe2..4622ea2 100644
--- a/arch/x86/kernel/acpi/boot.c
+++ b/arch/x86/kernel/acpi/boot.c
@@ -120,7 +120,7 @@ char *__init __acpi_map_table(unsigned long phys, unsigned long size)
 	if (!phys || !size)
 		return NULL;
 
-	return early_ioremap(phys, size);
+	return early_memremap(phys, size, BOOT_DATA);
 }
 
 void __init __acpi_unmap_table(char *map, unsigned long size)
@@ -128,7 +128,7 @@ void __init __acpi_unmap_table(char *map, unsigned long size)
 	if (!map || !size)
 		return;
 
-	early_iounmap(map, size);
+	early_memunmap(map, size);
 }
 
 #ifdef CONFIG_X86_LOCAL_APIC
diff --git a/arch/x86/kernel/mpparse.c b/arch/x86/kernel/mpparse.c
index 0f8d204..04def9f 100644
--- a/arch/x86/kernel/mpparse.c
+++ b/arch/x86/kernel/mpparse.c
@@ -436,9 +436,9 @@ static unsigned long __init get_mpc_size(unsigned long physptr)
 	struct mpc_table *mpc;
 	unsigned long size;
 
-	mpc = early_ioremap(physptr, PAGE_SIZE);
+	mpc = early_memremap(physptr, PAGE_SIZE, BOOT_DATA);
 	size = mpc->length;
-	early_iounmap(mpc, PAGE_SIZE);
+	early_memunmap(mpc, PAGE_SIZE);
 	apic_printk(APIC_VERBOSE, "  mpc: %lx-%lx\n", physptr, physptr + size);
 
 	return size;
@@ -450,7 +450,7 @@ static int __init check_physptr(struct mpf_intel *mpf, unsigned int early)
 	unsigned long size;
 
 	size = get_mpc_size(mpf->physptr);
-	mpc = early_ioremap(mpf->physptr, size);
+	mpc = early_memremap(mpf->physptr, size, BOOT_DATA);
 	/*
 	 * Read the physical hardware table.  Anything here will
 	 * override the defaults.
@@ -461,10 +461,10 @@ static int __init check_physptr(struct mpf_intel *mpf, unsigned int early)
 #endif
 		pr_err("BIOS bug, MP table errors detected!...\n");
 		pr_cont("... disabling SMP support. (tell your hw vendor)\n");
-		early_iounmap(mpc, size);
+		early_memunmap(mpc, size);
 		return -1;
 	}
-	early_iounmap(mpc, size);
+	early_memunmap(mpc, size);
 
 	if (early)
 		return -1;
diff --git a/drivers/sfi/sfi_core.c b/drivers/sfi/sfi_core.c
index 296db7a..3078d35 100644
--- a/drivers/sfi/sfi_core.c
+++ b/drivers/sfi/sfi_core.c
@@ -92,7 +92,7 @@ static struct sfi_table_simple *syst_va __read_mostly;
 static u32 sfi_use_ioremap __read_mostly;
 
 /*
- * sfi_un/map_memory calls early_ioremap/iounmap which is a __init function
+ * sfi_un/map_memory calls early_memremap/memunmap which is a __init function
  * and introduces section mismatch. So use __ref to make it calm.
  */
 static void __iomem * __ref sfi_map_memory(u64 phys, u32 size)
@@ -103,7 +103,7 @@ static void __iomem * __ref sfi_map_memory(u64 phys, u32 size)
 	if (sfi_use_ioremap)
 		return ioremap_cache(phys, size);
 	else
-		return early_ioremap(phys, size);
+		return early_memremap(phys, size, BOOT_DATA);
 }
 
 static void __ref sfi_unmap_memory(void __iomem *virt, u32 size)
@@ -114,7 +114,7 @@ static void __ref sfi_unmap_memory(void __iomem *virt, u32 size)
 	if (sfi_use_ioremap)
 		iounmap(virt);
 	else
-		early_iounmap(virt, size);
+		early_memunmap(virt, size);
 }
 
 static void sfi_print_table_header(unsigned long long pa,

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 11/28] x86: Don't decrypt trampoline area if SEV is active
  2016-08-22 23:23 ` Brijesh Singh
@ 2016-08-22 23:25 ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:25 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab

From: Tom Lendacky <thomas.lendacky@amd.com>

When Secure Encrypted Virtualization (SEV) is active, instruction fetches are
always interpreted as being from encrypted memory, so the trampoline area
must remain encrypted.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/realmode/init.c |    9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
index c3edb49..f3207e5 100644
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -138,10 +138,13 @@ static void __init set_real_mode_permissions(void)
 	/*
 	 * If memory encryption is active, the trampoline area will need to
 	 * be in non-encrypted memory in order to bring up other processors
-	 * successfully.
+	 * successfully. This only applies to SME; SEV requires the trampoline
+	 * to be encrypted.
 	 */
-	sme_early_mem_dec(__pa(base), size);
-	sme_set_mem_dec(base, size);
+	if (!sev_active) {
+		sme_early_mem_dec(__pa(base), size);
+		sme_set_mem_dec(base, size);
+	}
 
 	set_memory_nx((unsigned long) base, size >> PAGE_SHIFT);
 	set_memory_ro((unsigned long) base, ro_size >> PAGE_SHIFT);

* [RFC PATCH v1 12/28] x86: DMA support for SEV memory encryption
  2016-08-22 23:23 ` Brijesh Singh
@ 2016-08-22 23:26 ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:26 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab

From: Tom Lendacky <thomas.lendacky@amd.com>

While SEV is active, DMA to or from memory mapped as encrypted cannot be
transparently encrypted during a device write or decrypted during a device
read. For DMA to work properly when SEV is active, the swiotlb bounce
buffers must be used.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/mm/mem_encrypt.c |   48 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 1154353..ce6e3ea 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -173,8 +173,52 @@ void __init sme_early_init(void)
 	/* Update the protection map with memory encryption mask */
 	for (i = 0; i < ARRAY_SIZE(protection_map); i++)
 		protection_map[i] = __pgprot(pgprot_val(protection_map[i]) | sme_me_mask);
+
+	if (sev_active)
+		swiotlb_force = 1;
 }
 
+static void *sme_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
+		       gfp_t gfp, unsigned long attrs)
+{
+	void *vaddr;
+
+	vaddr = x86_swiotlb_alloc_coherent(dev, size, dma_handle, gfp, attrs);
+	if (!vaddr)
+		return NULL;
+
+	/* Clear the SME encryption bit for DMA use */
+	sme_set_mem_dec(vaddr, size);
+
+	/* Remove the encryption bit from the DMA address */
+	*dma_handle &= ~sme_me_mask;
+
+	return vaddr;
+}
+
+static void sme_free(struct device *dev, size_t size, void *vaddr,
+		     dma_addr_t dma_handle, unsigned long attrs)
+{
+	/* Set the SME encryption bit for re-use as encrypted */
+	sme_set_mem_enc(vaddr, size);
+
+	x86_swiotlb_free_coherent(dev, size, vaddr, dma_handle, attrs);
+}
+
+static struct dma_map_ops sme_dma_ops = {
+	.alloc                  = sme_alloc,
+	.free                   = sme_free,
+	.map_page               = swiotlb_map_page,
+	.unmap_page             = swiotlb_unmap_page,
+	.map_sg                 = swiotlb_map_sg_attrs,
+	.unmap_sg               = swiotlb_unmap_sg_attrs,
+	.sync_single_for_cpu    = swiotlb_sync_single_for_cpu,
+	.sync_single_for_device = swiotlb_sync_single_for_device,
+	.sync_sg_for_cpu        = swiotlb_sync_sg_for_cpu,
+	.sync_sg_for_device     = swiotlb_sync_sg_for_device,
+	.mapping_error          = swiotlb_dma_mapping_error,
+};
+
 /* Architecture __weak replacement functions */
 void __init mem_encrypt_init(void)
 {
@@ -184,6 +228,10 @@ void __init mem_encrypt_init(void)
 	/* Make SWIOTLB use an unencrypted DMA area */
 	swiotlb_clear_encryption();
 
+	/* Use SEV DMA operations if SEV is active */
+	if (sev_active)
+		dma_ops = &sme_dma_ops;
+
 	pr_info("memory encryption active\n");
 }
 

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 12/28] x86: DMA support for SEV memory encryption
  2016-08-22 23:23 ` Brijesh Singh
  (?)
  (?)
@ 2016-08-22 23:26   ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:26 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail, linux-kernel, mcgrof, linux-crypto, pbonzini, akpm,
	davem

From: Tom Lendacky <thomas.lendacky@amd.com>

DMA access to memory mapped as encrypted while SEV is active can not be
encrypted during device write or decrypted during device read. In order
for DMA to properly work when SEV is active, the swiotlb bounce buffers
must be used.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/mm/mem_encrypt.c |   48 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 1154353..ce6e3ea 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -173,8 +173,52 @@ void __init sme_early_init(void)
 	/* Update the protection map with memory encryption mask */
 	for (i = 0; i < ARRAY_SIZE(protection_map); i++)
 		protection_map[i] = __pgprot(pgprot_val(protection_map[i]) | sme_me_mask);
+
+	if (sev_active)
+		swiotlb_force = 1;
 }
 
+static void *sme_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
+		       gfp_t gfp, unsigned long attrs)
+{
+	void *vaddr;
+
+	vaddr = x86_swiotlb_alloc_coherent(dev, size, dma_handle, gfp, attrs);
+	if (!vaddr)
+		return NULL;
+
+	/* Clear the SME encryption bit for DMA use */
+	sme_set_mem_dec(vaddr, size);
+
+	/* Remove the encryption bit from the DMA address */
+	*dma_handle &= ~sme_me_mask;
+
+	return vaddr;
+}
+
+static void sme_free(struct device *dev, size_t size, void *vaddr,
+		     dma_addr_t dma_handle, unsigned long attrs)
+{
+	/* Set the SME encryption bit for re-use as encrypted */
+	sme_set_mem_enc(vaddr, size);
+
+	x86_swiotlb_free_coherent(dev, size, vaddr, dma_handle, attrs);
+}
+
+static struct dma_map_ops sme_dma_ops = {
+	.alloc                  = sme_alloc,
+	.free                   = sme_free,
+	.map_page               = swiotlb_map_page,
+	.unmap_page             = swiotlb_unmap_page,
+	.map_sg                 = swiotlb_map_sg_attrs,
+	.unmap_sg               = swiotlb_unmap_sg_attrs,
+	.sync_single_for_cpu    = swiotlb_sync_single_for_cpu,
+	.sync_single_for_device = swiotlb_sync_single_for_device,
+	.sync_sg_for_cpu        = swiotlb_sync_sg_for_cpu,
+	.sync_sg_for_device     = swiotlb_sync_sg_for_device,
+	.mapping_error          = swiotlb_dma_mapping_error,
+};
+
 /* Architecture __weak replacement functions */
 void __init mem_encrypt_init(void)
 {
@@ -184,6 +228,10 @@ void __init mem_encrypt_init(void)
 	/* Make SWIOTLB use an unencrypted DMA area */
 	swiotlb_clear_encryption();
 
+	/* Use SEV DMA operations if SEV is active */
+	if (sev_active)
+		dma_ops = &sme_dma_ops;
+
 	pr_info("memory encryption active\n");
 }
 

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 12/28] x86: DMA support for SEV memory encryption
@ 2016-08-22 23:26   ` Brijesh Singh
  0 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:26 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounin

From: Tom Lendacky <thomas.lendacky@amd.com>

DMA access to memory mapped as encrypted while SEV is active can not be
encrypted during device write or decrypted during device read. In order
for DMA to properly work when SEV is active, the swiotlb bounce buffers
must be used.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/mm/mem_encrypt.c |   48 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 1154353..ce6e3ea 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -173,8 +173,52 @@ void __init sme_early_init(void)
 	/* Update the protection map with memory encryption mask */
 	for (i = 0; i < ARRAY_SIZE(protection_map); i++)
 		protection_map[i] = __pgprot(pgprot_val(protection_map[i]) | sme_me_mask);
+
+	if (sev_active)
+		swiotlb_force = 1;
 }
 
+static void *sme_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
+		       gfp_t gfp, unsigned long attrs)
+{
+	void *vaddr;
+
+	vaddr = x86_swiotlb_alloc_coherent(dev, size, dma_handle, gfp, attrs);
+	if (!vaddr)
+		return NULL;
+
+	/* Clear the SME encryption bit for DMA use */
+	sme_set_mem_dec(vaddr, size);
+
+	/* Remove the encryption bit from the DMA address */
+	*dma_handle &= ~sme_me_mask;
+
+	return vaddr;
+}
+
+static void sme_free(struct device *dev, size_t size, void *vaddr,
+		     dma_addr_t dma_handle, unsigned long attrs)
+{
+	/* Set the SME encryption bit for re-use as encrypted */
+	sme_set_mem_enc(vaddr, size);
+
+	x86_swiotlb_free_coherent(dev, size, vaddr, dma_handle, attrs);
+}
+
+static struct dma_map_ops sme_dma_ops = {
+	.alloc                  = sme_alloc,
+	.free                   = sme_free,
+	.map_page               = swiotlb_map_page,
+	.unmap_page             = swiotlb_unmap_page,
+	.map_sg                 = swiotlb_map_sg_attrs,
+	.unmap_sg               = swiotlb_unmap_sg_attrs,
+	.sync_single_for_cpu    = swiotlb_sync_single_for_cpu,
+	.sync_single_for_device = swiotlb_sync_single_for_device,
+	.sync_sg_for_cpu        = swiotlb_sync_sg_for_cpu,
+	.sync_sg_for_device     = swiotlb_sync_sg_for_device,
+	.mapping_error          = swiotlb_dma_mapping_error,
+};
+
 /* Architecture __weak replacement functions */
 void __init mem_encrypt_init(void)
 {
@@ -184,6 +228,10 @@ void __init mem_encrypt_init(void)
 	/* Make SWIOTLB use an unencrypted DMA area */
 	swiotlb_clear_encryption();
 
+	/* Use SEV DMA operations if SEV is active */
+	if (sev_active)
+		dma_ops = &sme_dma_ops;
+
 	pr_info("memory encryption active\n");
 }
 

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: dont@kvack.org

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 12/28] x86: DMA support for SEV memory encryption
@ 2016-08-22 23:26   ` Brijesh Singh
  0 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:26 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounin

From: Tom Lendacky <thomas.lendacky@amd.com>

When SEV is active, a device performing DMA to memory mapped as encrypted
cannot encrypt the data on a device write or decrypt it on a device read,
because the device does not have access to the guest's encryption key. For
DMA to work properly when SEV is active, the SWIOTLB bounce buffers must
be used.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/mm/mem_encrypt.c |   48 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 1154353..ce6e3ea 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -173,8 +173,52 @@ void __init sme_early_init(void)
 	/* Update the protection map with memory encryption mask */
 	for (i = 0; i < ARRAY_SIZE(protection_map); i++)
 		protection_map[i] = __pgprot(pgprot_val(protection_map[i]) | sme_me_mask);
+
+	if (sev_active)
+		swiotlb_force = 1;
 }
 
+static void *sme_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
+		       gfp_t gfp, unsigned long attrs)
+{
+	void *vaddr;
+
+	vaddr = x86_swiotlb_alloc_coherent(dev, size, dma_handle, gfp, attrs);
+	if (!vaddr)
+		return NULL;
+
+	/* Clear the SME encryption bit for DMA use */
+	sme_set_mem_dec(vaddr, size);
+
+	/* Remove the encryption bit from the DMA address */
+	*dma_handle &= ~sme_me_mask;
+
+	return vaddr;
+}
+
+static void sme_free(struct device *dev, size_t size, void *vaddr,
+		     dma_addr_t dma_handle, unsigned long attrs)
+{
+	/* Set the SME encryption bit for re-use as encrypted */
+	sme_set_mem_enc(vaddr, size);
+
+	x86_swiotlb_free_coherent(dev, size, vaddr, dma_handle, attrs);
+}
+
+static struct dma_map_ops sme_dma_ops = {
+	.alloc                  = sme_alloc,
+	.free                   = sme_free,
+	.map_page               = swiotlb_map_page,
+	.unmap_page             = swiotlb_unmap_page,
+	.map_sg                 = swiotlb_map_sg_attrs,
+	.unmap_sg               = swiotlb_unmap_sg_attrs,
+	.sync_single_for_cpu    = swiotlb_sync_single_for_cpu,
+	.sync_single_for_device = swiotlb_sync_single_for_device,
+	.sync_sg_for_cpu        = swiotlb_sync_sg_for_cpu,
+	.sync_sg_for_device     = swiotlb_sync_sg_for_device,
+	.mapping_error          = swiotlb_dma_mapping_error,
+};
+
 /* Architecture __weak replacement functions */
 void __init mem_encrypt_init(void)
 {
@@ -184,6 +228,10 @@ void __init mem_encrypt_init(void)
 	/* Make SWIOTLB use an unencrypted DMA area */
 	swiotlb_clear_encryption();
 
+	/* Use SEV DMA operations if SEV is active */
+	if (sev_active)
+		dma_ops = &sme_dma_ops;
+
 	pr_info("memory encryption active\n");
 }
 

--

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 13/28] iommu/amd: AMD IOMMU support for SEV
  2016-08-22 23:23 ` Brijesh Singh
                   ` (27 preceding siblings ...)
  (?)
@ 2016-08-22 23:26 ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:26 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab

From: Tom Lendacky <thomas.lendacky@amd.com>

When running with SEV active, DMA must be performed to memory that is not
mapped encrypted. Therefore, when SEV is active, do not return the
encryption mask to the IOMMU.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/mm/mem_encrypt.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index ce6e3ea..d6e9f96 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -237,7 +237,7 @@ void __init mem_encrypt_init(void)
 
 unsigned long amd_iommu_get_me_mask(void)
 {
-	return sme_me_mask;
+	return sev_active ? 0 : sme_me_mask;
 }
 
 unsigned long swiotlb_get_me_mask(void)

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 14/28] x86: Don't set the SME MSR bit when SEV is active
  2016-08-22 23:23 ` Brijesh Singh
                   ` (28 preceding siblings ...)
  (?)
@ 2016-08-22 23:26 ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:26 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab

From: Tom Lendacky <thomas.lendacky@amd.com>

When SEV is active, the virtual machine cannot set the SME MSR, so do not
set the trampoline flag that enables SME.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/realmode/init.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
index f3207e5..391d8ba 100644
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -102,7 +102,7 @@ static void __init setup_real_mode(void)
 	*trampoline_cr4_features = mmu_cr4_features;
 
 	trampoline_header->flags = 0;
-	if (sme_me_mask)
+	if (sme_me_mask && !sev_active)
 		trampoline_header->flags |= TH_FLAGS_SME_ENABLE;
 
 	trampoline_pgd = (u64 *) __va(real_mode_header->trampoline_pgd);

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 15/28] x86: Unroll string I/O when SEV is active
  2016-08-22 23:23 ` Brijesh Singh
                   ` (30 preceding siblings ...)
  (?)
@ 2016-08-22 23:26 ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:26 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab

From: Tom Lendacky <thomas.lendacky@amd.com>

Secure Encrypted Virtualization (SEV) does not support string I/O, so
unroll the string I/O operation into a loop operating on one element at
a time.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/include/asm/io.h |   26 ++++++++++++++++++++++----
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index de25aad..130b3e2 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -303,14 +303,32 @@ static inline unsigned type in##bwl##_p(int port)			\
 									\
 static inline void outs##bwl(int port, const void *addr, unsigned long count) \
 {									\
-	asm volatile("rep; outs" #bwl					\
-		     : "+S"(addr), "+c"(count) : "d"(port));		\
+	if (sev_active) {						\
+		unsigned type *value = (unsigned type *)addr;		\
+		while (count) {						\
+			out##bwl(*value, port);				\
+			value++;					\
+			count--;					\
+		}							\
+	} else {							\
+		asm volatile("rep; outs" #bwl				\
+			     : "+S"(addr), "+c"(count) : "d"(port));	\
+	}								\
 }									\
 									\
 static inline void ins##bwl(int port, void *addr, unsigned long count)	\
 {									\
-	asm volatile("rep; ins" #bwl					\
-		     : "+D"(addr), "+c"(count) : "d"(port));		\
+	if (sev_active) {						\
+		unsigned type *value = (unsigned type *)addr;		\
+		while (count) {						\
+			*value = in##bwl(port);				\
+			value++;					\
+			count--;					\
+		}							\
+	} else {							\
+		asm volatile("rep; ins" #bwl				\
+			     : "+D"(addr), "+c"(count) : "d"(port));	\
+	}								\
 }
 
 BUILDIO(b, b, char)

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 15/28] x86: Unroll string I/O when SEV is active
  2016-08-22 23:23 ` Brijesh Singh
  (?)
  (?)
@ 2016-08-22 23:26   ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:26 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail, linux-kernel, mcgrof, linux-crypto, pbonzini, akpm,
	davem

From: Tom Lendacky <thomas.lendacky@amd.com>

Secure Encrypted Virtualization (SEV) does not support string I/O, so
unroll the string I/O operation into a loop operating on one element at
a time.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/include/asm/io.h |   26 ++++++++++++++++++++++----
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index de25aad..130b3e2 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -303,14 +303,32 @@ static inline unsigned type in##bwl##_p(int port)			\
 									\
 static inline void outs##bwl(int port, const void *addr, unsigned long count) \
 {									\
-	asm volatile("rep; outs" #bwl					\
-		     : "+S"(addr), "+c"(count) : "d"(port));		\
+	if (sev_active) {						\
+		unsigned type *value = (unsigned type *)addr;		\
+		while (count) {						\
+			out##bwl(*value, port);				\
+			value++;					\
+			count--;					\
+		}							\
+	} else {							\
+		asm volatile("rep; outs" #bwl				\
+			     : "+S"(addr), "+c"(count) : "d"(port));	\
+	}								\
 }									\
 									\
 static inline void ins##bwl(int port, void *addr, unsigned long count)	\
 {									\
-	asm volatile("rep; ins" #bwl					\
-		     : "+D"(addr), "+c"(count) : "d"(port));		\
+	if (sev_active) {						\
+		unsigned type *value = (unsigned type *)addr;		\
+		while (count) {						\
+			*value = in##bwl(port);				\
+			value++;					\
+			count--;					\
+		}							\
+	} else {							\
+		asm volatile("rep; ins" #bwl				\
+			     : "+D"(addr), "+c"(count) : "d"(port));	\
+	}								\
 }
 
 BUILDIO(b, b, char)

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 15/28] x86: Unroll string I/O when SEV is active
@ 2016-08-22 23:26   ` Brijesh Singh
  0 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:26 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounin

From: Tom Lendacky <thomas.lendacky@amd.com>

Secure Encrypted Virtualization (SEV) does not support string I/O, so
unroll the string I/O operation into a loop operating on one element at
a time.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/include/asm/io.h |   26 ++++++++++++++++++++++----
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index de25aad..130b3e2 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -303,14 +303,32 @@ static inline unsigned type in##bwl##_p(int port)			\
 									\
 static inline void outs##bwl(int port, const void *addr, unsigned long count) \
 {									\
-	asm volatile("rep; outs" #bwl					\
-		     : "+S"(addr), "+c"(count) : "d"(port));		\
+	if (sev_active) {						\
+		unsigned type *value = (unsigned type *)addr;		\
+		while (count) {						\
+			out##bwl(*value, port);				\
+			value++;					\
+			count--;					\
+		}							\
+	} else {							\
+		asm volatile("rep; outs" #bwl				\
+			     : "+S"(addr), "+c"(count) : "d"(port));	\
+	}								\
 }									\
 									\
 static inline void ins##bwl(int port, void *addr, unsigned long count)	\
 {									\
-	asm volatile("rep; ins" #bwl					\
-		     : "+D"(addr), "+c"(count) : "d"(port));		\
+	if (sev_active) {						\
+		unsigned type *value = (unsigned type *)addr;		\
+		while (count) {						\
+			*value = in##bwl(port);				\
+			value++;					\
+			count--;					\
+		}							\
+	} else {							\
+		asm volatile("rep; ins" #bwl				\
+			     : "+D"(addr), "+c"(count) : "d"(port));	\
+	}								\
 }
 
 BUILDIO(b, b, char)

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 15/28] x86: Unroll string I/O when SEV is active
@ 2016-08-22 23:26   ` Brijesh Singh
  0 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:26 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounin

From: Tom Lendacky <thomas.lendacky@amd.com>

Secure Encrypted Virtualization (SEV) does not support string I/O, so
unroll the string I/O operation into a loop operating on one element at
a time.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/include/asm/io.h |   26 ++++++++++++++++++++++----
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index de25aad..130b3e2 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -303,14 +303,32 @@ static inline unsigned type in##bwl##_p(int port)			\
 									\
 static inline void outs##bwl(int port, const void *addr, unsigned long count) \
 {									\
-	asm volatile("rep; outs" #bwl					\
-		     : "+S"(addr), "+c"(count) : "d"(port));		\
+	if (sev_active) {						\
+		unsigned type *value = (unsigned type *)addr;		\
+		while (count) {						\
+			out##bwl(*value, port);				\
+			value++;					\
+			count--;					\
+		}							\
+	} else {							\
+		asm volatile("rep; outs" #bwl				\
+			     : "+S"(addr), "+c"(count) : "d"(port));	\
+	}								\
 }									\
 									\
 static inline void ins##bwl(int port, void *addr, unsigned long count)	\
 {									\
-	asm volatile("rep; ins" #bwl					\
-		     : "+D"(addr), "+c"(count) : "d"(port));		\
+	if (sev_active) {						\
+		unsigned type *value = (unsigned type *)addr;		\
+		while (count) {						\
+			*value = in##bwl(port);				\
+			value++;					\
+			count--;					\
+		}							\
+	} else {							\
+		asm volatile("rep; ins" #bwl				\
+			     : "+D"(addr), "+c"(count) : "d"(port));	\
+	}								\
 }
 
 BUILDIO(b, b, char)

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href="mailto:dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 16/28] x86: Add support to determine if running with SEV enabled
  2016-08-22 23:23 ` Brijesh Singh
                   ` (32 preceding siblings ...)
  (?)
@ 2016-08-22 23:26 ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:26 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab

From: Tom Lendacky <thomas.lendacky@amd.com>

Early in the boot process, add a check to determine if the kernel is
running with Secure Encrypted Virtualization (SEV) enabled. If SEV is
active, the kernel performs the steps necessary to ensure proper
kernel initialization.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/boot/compressed/Makefile      |    2 +
 arch/x86/boot/compressed/head_64.S     |   19 +++++
 arch/x86/boot/compressed/mem_encrypt.S |  123 ++++++++++++++++++++++++++++++++
 arch/x86/include/uapi/asm/hyperv.h     |    4 +
 arch/x86/include/uapi/asm/kvm_para.h   |    3 +
 arch/x86/kernel/mem_encrypt.S          |   36 +++++++++
 6 files changed, 187 insertions(+)
 create mode 100644 arch/x86/boot/compressed/mem_encrypt.S

diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 536ccfc..4888df9 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -73,6 +73,8 @@ vmlinux-objs-y := $(obj)/vmlinux.lds $(obj)/head_$(BITS).o $(obj)/misc.o \
 	$(obj)/string.o $(obj)/cmdline.o $(obj)/error.o \
 	$(obj)/piggy.o $(obj)/cpuflags.o
 
+vmlinux-objs-$(CONFIG_X86_64) += $(obj)/mem_encrypt.o
+
 vmlinux-objs-$(CONFIG_EARLY_PRINTK) += $(obj)/early_serial_console.o
 vmlinux-objs-$(CONFIG_RANDOMIZE_BASE) += $(obj)/kaslr.o
 ifdef CONFIG_X86_64
diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 0d80a7a..acb907a 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -131,6 +131,19 @@ ENTRY(startup_32)
  /*
   * Build early 4G boot pagetable
   */
+	/*
+	 * If SEV is active, set the encryption mask in the page tables. This
+	 * ensures that the kernel remains encrypted when it is copied and
+	 * decompressed.
+	 */
+	call	sev_active
+	xorl	%edx, %edx
+	testl	%eax, %eax
+	jz	1f
+	subl	$32, %eax	/* Encryption bit is always above bit 31 */
+	bts	%eax, %edx	/* Set encryption mask for page tables */
+1:
+
 	/* Initialize Page tables to 0 */
 	leal	pgtable(%ebx), %edi
 	xorl	%eax, %eax
@@ -141,12 +154,14 @@ ENTRY(startup_32)
 	leal	pgtable + 0(%ebx), %edi
 	leal	0x1007 (%edi), %eax
 	movl	%eax, 0(%edi)
+	addl	%edx, 4(%edi)
 
 	/* Build Level 3 */
 	leal	pgtable + 0x1000(%ebx), %edi
 	leal	0x1007(%edi), %eax
 	movl	$4, %ecx
 1:	movl	%eax, 0x00(%edi)
+	addl	%edx, 0x04(%edi)
 	addl	$0x00001000, %eax
 	addl	$8, %edi
 	decl	%ecx
@@ -157,6 +172,7 @@ ENTRY(startup_32)
 	movl	$0x00000183, %eax
 	movl	$2048, %ecx
 1:	movl	%eax, 0(%edi)
+	addl	%edx, 4(%edi)
 	addl	$0x00200000, %eax
 	addl	$8, %edi
 	decl	%ecx
@@ -344,6 +360,9 @@ preferred_addr:
 	subl	$_end, %ebx
 	addq	%rbp, %rbx
 
+	/* Check for SEV and adjust page tables as necessary */
+	call	sev_adjust
+
 	/* Set up the stack */
 	leaq	boot_stack_end(%rbx), %rsp
 
diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
new file mode 100644
index 0000000..56e19f6
--- /dev/null
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -0,0 +1,123 @@
+/*
+ * AMD Memory Encryption Support
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/linkage.h>
+
+#include <asm/processor-flags.h>
+#include <asm/msr.h>
+#include <asm/asm-offsets.h>
+#include <uapi/asm/kvm_para.h>
+
+	.text
+	.code32
+ENTRY(sev_active)
+	xor	%eax, %eax
+
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+	push	%ebx
+	push	%ecx
+	push	%edx
+
+	/* Check if running under a hypervisor */
+	movl	$0x40000000, %eax
+	cpuid
+	cmpl	$0x40000001, %eax
+	jb	.Lno_sev
+
+	movl	$0x40000001, %eax
+	cpuid
+	bt	$KVM_FEATURE_SEV, %eax
+	jnc	.Lno_sev
+
+	/*
+	 * Check for memory encryption feature:
+	 *   CPUID Fn8000_001F[EAX] - Bit 0
+	 */
+	movl	$0x8000001f, %eax
+	cpuid
+	bt	$0, %eax
+	jnc	.Lno_sev
+
+	/*
+	 * Get memory encryption information:
+	 *   CPUID Fn8000_001F[EBX] - Bits 5:0
+	 *     Pagetable bit position used to indicate encryption
+	 */
+	movl	%ebx, %eax
+	andl	$0x3f, %eax
+	jmp	.Lsev_exit
+
+.Lno_sev:
+	xor	%eax, %eax
+
+.Lsev_exit:
+	pop	%edx
+	pop	%ecx
+	pop	%ebx
+
+#endif	/* CONFIG_AMD_MEM_ENCRYPT */
+
+	ret
+ENDPROC(sev_active)
+
+	.code64
+ENTRY(sev_adjust)
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+	push	%rax
+	push	%rbx
+	push	%rcx
+	push	%rdx
+
+	/* Check if running under a hypervisor */
+	movl	$0x40000000, %eax
+	cpuid
+	cmpl	$0x40000001, %eax
+	jb	.Lno_adjust
+
+	movl	$0x40000001, %eax
+	cpuid
+	bt	$KVM_FEATURE_SEV, %eax
+	jnc	.Lno_adjust
+
+	/*
+	 * Check for memory encryption feature:
+	 *   CPUID Fn8000_001F[EAX] - Bit 0
+	 */
+	movl	$0x8000001f, %eax
+	cpuid
+	bt	$0, %eax
+	jnc	.Lno_adjust
+
+	/*
+	 * Get memory encryption information:
+	 *   CPUID Fn8000_001F[EBX] - Bits 5:0
+	 *     Pagetable bit position used to indicate encryption
+	 */
+	movl	%ebx, %ecx
+	andl	$0x3f, %ecx
+	jz	.Lno_adjust
+
+	/*
+	 * Adjust/verify the page table entries to include the encryption
+	 * mask for the area where the compressed kernel is copied and
+	 * the area the kernel is decompressed into
+	 */
+
+.Lno_adjust:
+	pop	%rdx
+	pop	%rcx
+	pop	%rbx
+	pop	%rax
+#endif	/* CONFIG_AMD_MEM_ENCRYPT */
+
+	ret
+ENDPROC(sev_adjust)
diff --git a/arch/x86/include/uapi/asm/hyperv.h b/arch/x86/include/uapi/asm/hyperv.h
index 9b1a918..8278161 100644
--- a/arch/x86/include/uapi/asm/hyperv.h
+++ b/arch/x86/include/uapi/asm/hyperv.h
@@ -3,6 +3,8 @@
 
 #include <linux/types.h>
 
+#ifndef __ASSEMBLY__
+
 /*
  * The below CPUID leaves are present if VersionAndFeatures.HypervisorPresent
  * is set by CPUID(HvCpuIdFunctionVersionAndFeatures).
@@ -363,4 +365,6 @@ struct hv_timer_message_payload {
 #define HV_STIMER_AUTOENABLE		(1ULL << 3)
 #define HV_STIMER_SINT(config)		(__u8)(((config) >> 16) & 0x0F)
 
+#endif	/* __ASSEMBLY__ */
+
 #endif
diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
index 67dd610f..5788561 100644
--- a/arch/x86/include/uapi/asm/kvm_para.h
+++ b/arch/x86/include/uapi/asm/kvm_para.h
@@ -26,6 +26,8 @@
 #define KVM_FEATURE_PV_UNHALT		7
 #define KVM_FEATURE_SEV			8
 
+#ifndef __ASSEMBLY__
+
 /* The last 8 bits are used to indicate how to interpret the flags field
  * in pvclock structure. If no bits are set, all flags are ignored.
  */
@@ -98,5 +100,6 @@ struct kvm_vcpu_pv_apf_data {
 #define KVM_PV_EOI_ENABLED KVM_PV_EOI_MASK
 #define KVM_PV_EOI_DISABLED 0x0
 
+#endif	/* __ASSEMBLY__ */
 
 #endif /* _UAPI_ASM_X86_KVM_PARA_H */
diff --git a/arch/x86/kernel/mem_encrypt.S b/arch/x86/kernel/mem_encrypt.S
index 6a8cd18..78fc608 100644
--- a/arch/x86/kernel/mem_encrypt.S
+++ b/arch/x86/kernel/mem_encrypt.S
@@ -17,11 +17,47 @@
 #include <asm/page.h>
 #include <asm/msr.h>
 #include <asm/asm-offsets.h>
+#include <uapi/asm/kvm_para.h>
 
 	.text
 	.code64
 ENTRY(sme_enable)
 #ifdef CONFIG_AMD_MEM_ENCRYPT
+	/* Check if running under a hypervisor */
+	movl	$0x40000000, %eax
+	cpuid
+	cmpl	$0x40000001, %eax
+	jb	.Lno_hyp
+
+	movl	$0x40000001, %eax
+	cpuid
+	bt	$KVM_FEATURE_SEV, %eax
+	jnc	.Lno_mem_encrypt
+
+	/*
+	 * Check for memory encryption feature:
+	 *   CPUID Fn8000_001F[EAX] - Bit 0
+	 */
+	movl	$0x8000001f, %eax
+	cpuid
+	bt	$0, %eax
+	jnc	.Lno_mem_encrypt
+
+	/*
+	 * Get memory encryption information:
+	 *   CPUID Fn8000_001F[EBX] - Bits 5:0
+	 *     Pagetable bit position used to indicate encryption
+	 */
+	movl	%ebx, %ecx
+	andl	$0x3f, %ecx
+	jz	.Lno_mem_encrypt
+	bts	%ecx, sme_me_mask(%rip)
+
+	/* Indicate that SEV is active */
+	movl	$1, sev_active(%rip)
+	jmp	.Lmem_encrypt_exit
+
+.Lno_hyp:
 	/* Check for AMD processor */
 	xorl	%eax, %eax
 	cpuid

^ permalink raw reply related	[flat|nested] 255+ messages in thread

+	cpuid
+	cmpl	$0x40000001, %eax
+	jb	.Lno_sev
+
+	movl	$0x40000001, %eax
+	cpuid
+	bt	$KVM_FEATURE_SEV, %eax
+	jnc	.Lno_sev
+
+	/*
+	 * Check for memory encryption feature:
+	 *   CPUID Fn8000_001F[EAX] - Bit 0
+	 */
+	movl	$0x8000001f, %eax
+	cpuid
+	bt	$0, %eax
+	jnc	.Lno_sev
+
+	/*
+	 * Get memory encryption information:
+	 *   CPUID Fn8000_001F[EBX] - Bits 5:0
+	 *     Pagetable bit position used to indicate encryption
+	 */
+	movl	%ebx, %eax
+	andl	$0x3f, %eax
+	jmp	.Lsev_exit
+
+.Lno_sev:
+	xor	%eax, %eax
+
+.Lsev_exit:
+	pop	%edx
+	pop	%ecx
+	pop	%ebx
+
+#endif	/* CONFIG_AMD_MEM_ENCRYPT */
+
+	ret
+ENDPROC(sev_active)
+
+	.code64
+ENTRY(sev_adjust)
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+	push	%rax
+	push	%rbx
+	push	%rcx
+	push	%rdx
+
+	/* Check if running under a hypervisor */
+	movl	$0x40000000, %eax
+	cpuid
+	cmpl	$0x40000001, %eax
+	jb	.Lno_adjust
+
+	movl	$0x40000001, %eax
+	cpuid
+	bt	$KVM_FEATURE_SEV, %eax
+	jnc	.Lno_adjust
+
+	/*
+	 * Check for memory encryption feature:
+	 *   CPUID Fn8000_001F[EAX] - Bit 0
+	 */
+	movl	$0x8000001f, %eax
+	cpuid
+	bt	$0, %eax
+	jnc	.Lno_adjust
+
+	/*
+	 * Get memory encryption information:
+	 *   CPUID Fn8000_001F[EBX] - Bits 5:0
+	 *     Pagetable bit position used to indicate encryption
+	 */
+	movl	%ebx, %ecx
+	andl	$0x3f, %ecx
+	jz	.Lno_adjust
+
+	/*
+	 * Adjust/verify the page table entries to include the encryption
+	 * mask for the area where the compressed kernel is copied and
+	 * the area the kernel is decompressed into
+	 */
+
+.Lno_adjust:
+	pop	%rdx
+	pop	%rcx
+	pop	%rbx
+	pop	%rax
+#endif	/* CONFIG_AMD_MEM_ENCRYPT */
+
+	ret
+ENDPROC(sev_adjust)
diff --git a/arch/x86/include/uapi/asm/hyperv.h b/arch/x86/include/uapi/asm/hyperv.h
index 9b1a918..8278161 100644
--- a/arch/x86/include/uapi/asm/hyperv.h
+++ b/arch/x86/include/uapi/asm/hyperv.h
@@ -3,6 +3,8 @@
 
 #include <linux/types.h>
 
+#ifndef __ASSEMBLY__
+
 /*
  * The below CPUID leaves are present if VersionAndFeatures.HypervisorPresent
  * is set by CPUID(HvCpuIdFunctionVersionAndFeatures).
@@ -363,4 +365,6 @@ struct hv_timer_message_payload {
 #define HV_STIMER_AUTOENABLE		(1ULL << 3)
 #define HV_STIMER_SINT(config)		(__u8)(((config) >> 16) & 0x0F)
 
+#endif	/* __ASSEMBLY__ */
+
 #endif
diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
index 67dd610f..5788561 100644
--- a/arch/x86/include/uapi/asm/kvm_para.h
+++ b/arch/x86/include/uapi/asm/kvm_para.h
@@ -26,6 +26,8 @@
 #define KVM_FEATURE_PV_UNHALT		7
 #define KVM_FEATURE_SEV			8
 
+#ifndef __ASSEMBLY__
+
 /* The last 8 bits are used to indicate how to interpret the flags field
  * in pvclock structure. If no bits are set, all flags are ignored.
  */
@@ -98,5 +100,6 @@ struct kvm_vcpu_pv_apf_data {
 #define KVM_PV_EOI_ENABLED KVM_PV_EOI_MASK
 #define KVM_PV_EOI_DISABLED 0x0
 
+#endif	/* __ASSEMBLY__ */
 
 #endif /* _UAPI_ASM_X86_KVM_PARA_H */
diff --git a/arch/x86/kernel/mem_encrypt.S b/arch/x86/kernel/mem_encrypt.S
index 6a8cd18..78fc608 100644
--- a/arch/x86/kernel/mem_encrypt.S
+++ b/arch/x86/kernel/mem_encrypt.S
@@ -17,11 +17,47 @@
 #include <asm/page.h>
 #include <asm/msr.h>
 #include <asm/asm-offsets.h>
+#include <uapi/asm/kvm_para.h>
 
 	.text
 	.code64
 ENTRY(sme_enable)
 #ifdef CONFIG_AMD_MEM_ENCRYPT
+	/* Check if running under a hypervisor */
+	movl	$0x40000000, %eax
+	cpuid
+	cmpl	$0x40000001, %eax
+	jb	.Lno_hyp
+
+	movl	$0x40000001, %eax
+	cpuid
+	bt	$KVM_FEATURE_SEV, %eax
+	jnc	.Lno_mem_encrypt
+
+	/*
+	 * Check for memory encryption feature:
+	 *   CPUID Fn8000_001F[EAX] - Bit 0
+	 */
+	movl	$0x8000001f, %eax
+	cpuid
+	bt	$0, %eax
+	jnc	.Lno_mem_encrypt
+
+	/*
+	 * Get memory encryption information:
+	 *   CPUID Fn8000_001F[EBX] - Bits 5:0
+	 *     Pagetable bit position used to indicate encryption
+	 */
+	movl	%ebx, %ecx
+	andl	$0x3f, %ecx
+	jz	.Lno_mem_encrypt
+	bts	%ecx, sme_me_mask(%rip)
+
+	/* Indicate that SEV is active */
+	movl	$1, sev_active(%rip)
+	jmp	.Lmem_encrypt_exit
+
+.Lno_hyp:
 	/* Check for AMD processor */
 	xorl	%eax, %eax
 	cpuid

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 16/28] x86: Add support to determine if running with SEV enabled
@ 2016-08-22 23:26   ` Brijesh Singh
  0 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:26 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail, linux-kernel, mcgrof, linux-crypto, pbonzini, akpm,
	davem

From: Tom Lendacky <thomas.lendacky@amd.com>

Early in the boot process, add a check to determine whether the kernel is
running with Secure Encrypted Virtualization (SEV) enabled. If SEV is
active, the kernel performs the additional steps necessary to ensure
proper kernel initialization.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/boot/compressed/Makefile      |    2 +
 arch/x86/boot/compressed/head_64.S     |   19 +++++
 arch/x86/boot/compressed/mem_encrypt.S |  123 ++++++++++++++++++++++++++++++++
 arch/x86/include/uapi/asm/hyperv.h     |    4 +
 arch/x86/include/uapi/asm/kvm_para.h   |    3 +
 arch/x86/kernel/mem_encrypt.S          |   36 +++++++++
 6 files changed, 187 insertions(+)
 create mode 100644 arch/x86/boot/compressed/mem_encrypt.S

diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 536ccfc..4888df9 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -73,6 +73,8 @@ vmlinux-objs-y := $(obj)/vmlinux.lds $(obj)/head_$(BITS).o $(obj)/misc.o \
 	$(obj)/string.o $(obj)/cmdline.o $(obj)/error.o \
 	$(obj)/piggy.o $(obj)/cpuflags.o
 
+vmlinux-objs-$(CONFIG_X86_64) += $(obj)/mem_encrypt.o
+
 vmlinux-objs-$(CONFIG_EARLY_PRINTK) += $(obj)/early_serial_console.o
 vmlinux-objs-$(CONFIG_RANDOMIZE_BASE) += $(obj)/kaslr.o
 ifdef CONFIG_X86_64
diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 0d80a7a..acb907a 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -131,6 +131,19 @@ ENTRY(startup_32)
  /*
   * Build early 4G boot pagetable
   */
+	/*
+	 * If SEV is active set the encryption mask in the page tables. This
+	 * will insure that when the kernel is copied and decompressed it
+	 * will be done so encrypted.
+	 */
+	call	sev_active
+	xorl	%edx, %edx
+	testl	%eax, %eax
+	jz	1f
+	subl	$32, %eax	/* Encryption bit is always above bit 31 */
+	bts	%eax, %edx	/* Set encryption mask for page tables */
+1:
+
 	/* Initialize Page tables to 0 */
 	leal	pgtable(%ebx), %edi
 	xorl	%eax, %eax
@@ -141,12 +154,14 @@ ENTRY(startup_32)
 	leal	pgtable + 0(%ebx), %edi
 	leal	0x1007 (%edi), %eax
 	movl	%eax, 0(%edi)
+	addl	%edx, 4(%edi)
 
 	/* Build Level 3 */
 	leal	pgtable + 0x1000(%ebx), %edi
 	leal	0x1007(%edi), %eax
 	movl	$4, %ecx
 1:	movl	%eax, 0x00(%edi)
+	addl	%edx, 0x04(%edi)
 	addl	$0x00001000, %eax
 	addl	$8, %edi
 	decl	%ecx
@@ -157,6 +172,7 @@ ENTRY(startup_32)
 	movl	$0x00000183, %eax
 	movl	$2048, %ecx
 1:	movl	%eax, 0(%edi)
+	addl	%edx, 4(%edi)
 	addl	$0x00200000, %eax
 	addl	$8, %edi
 	decl	%ecx
@@ -344,6 +360,9 @@ preferred_addr:
 	subl	$_end, %ebx
 	addq	%rbp, %rbx
 
+	/* Check for SEV and adjust page tables as necessary */
+	call	sev_adjust
+
 	/* Set up the stack */
 	leaq	boot_stack_end(%rbx), %rsp
 
diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
new file mode 100644
index 0000000..56e19f6
--- /dev/null
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -0,0 +1,123 @@
+/*
+ * AMD Memory Encryption Support
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/linkage.h>
+
+#include <asm/processor-flags.h>
+#include <asm/msr.h>
+#include <asm/asm-offsets.h>
+#include <uapi/asm/kvm_para.h>
+
+	.text
+	.code32
+ENTRY(sev_active)
+	xor	%eax, %eax
+
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+	push	%ebx
+	push	%ecx
+	push	%edx
+
+	/* Check if running under a hypervisor */
+	movl	$0x40000000, %eax
+	cpuid
+	cmpl	$0x40000001, %eax
+	jb	.Lno_sev
+
+	movl	$0x40000001, %eax
+	cpuid
+	bt	$KVM_FEATURE_SEV, %eax
+	jnc	.Lno_sev
+
+	/*
+	 * Check for memory encryption feature:
+	 *   CPUID Fn8000_001F[EAX] - Bit 0
+	 */
+	movl	$0x8000001f, %eax
+	cpuid
+	bt	$0, %eax
+	jnc	.Lno_sev
+
+	/*
+	 * Get memory encryption information:
+	 *   CPUID Fn8000_001F[EBX] - Bits 5:0
+	 *     Pagetable bit position used to indicate encryption
+	 */
+	movl	%ebx, %eax
+	andl	$0x3f, %eax
+	jmp	.Lsev_exit
+
+.Lno_sev:
+	xor	%eax, %eax
+
+.Lsev_exit:
+	pop	%edx
+	pop	%ecx
+	pop	%ebx
+
+#endif	/* CONFIG_AMD_MEM_ENCRYPT */
+
+	ret
+ENDPROC(sev_active)
+
+	.code64
+ENTRY(sev_adjust)
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+	push	%rax
+	push	%rbx
+	push	%rcx
+	push	%rdx
+
+	/* Check if running under a hypervisor */
+	movl	$0x40000000, %eax
+	cpuid
+	cmpl	$0x40000001, %eax
+	jb	.Lno_adjust
+
+	movl	$0x40000001, %eax
+	cpuid
+	bt	$KVM_FEATURE_SEV, %eax
+	jnc	.Lno_adjust
+
+	/*
+	 * Check for memory encryption feature:
+	 *   CPUID Fn8000_001F[EAX] - Bit 0
+	 */
+	movl	$0x8000001f, %eax
+	cpuid
+	bt	$0, %eax
+	jnc	.Lno_adjust
+
+	/*
+	 * Get memory encryption information:
+	 *   CPUID Fn8000_001F[EBX] - Bits 5:0
+	 *     Pagetable bit position used to indicate encryption
+	 */
+	movl	%ebx, %ecx
+	andl	$0x3f, %ecx
+	jz	.Lno_adjust
+
+	/*
+	 * Adjust/verify the page table entries to include the encryption
+	 * mask for the area where the compressed kernel is copied and
+	 * the area the kernel is decompressed into
+	 */
+
+.Lno_adjust:
+	pop	%rdx
+	pop	%rcx
+	pop	%rbx
+	pop	%rax
+#endif	/* CONFIG_AMD_MEM_ENCRYPT */
+
+	ret
+ENDPROC(sev_adjust)
diff --git a/arch/x86/include/uapi/asm/hyperv.h b/arch/x86/include/uapi/asm/hyperv.h
index 9b1a918..8278161 100644
--- a/arch/x86/include/uapi/asm/hyperv.h
+++ b/arch/x86/include/uapi/asm/hyperv.h
@@ -3,6 +3,8 @@
 
 #include <linux/types.h>
 
+#ifndef __ASSEMBLY__
+
 /*
  * The below CPUID leaves are present if VersionAndFeatures.HypervisorPresent
  * is set by CPUID(HvCpuIdFunctionVersionAndFeatures).
@@ -363,4 +365,6 @@ struct hv_timer_message_payload {
 #define HV_STIMER_AUTOENABLE		(1ULL << 3)
 #define HV_STIMER_SINT(config)		(__u8)(((config) >> 16) & 0x0F)
 
+#endif	/* __ASSEMBLY__ */
+
 #endif
diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
index 67dd610f..5788561 100644
--- a/arch/x86/include/uapi/asm/kvm_para.h
+++ b/arch/x86/include/uapi/asm/kvm_para.h
@@ -26,6 +26,8 @@
 #define KVM_FEATURE_PV_UNHALT		7
 #define KVM_FEATURE_SEV			8
 
+#ifndef __ASSEMBLY__
+
 /* The last 8 bits are used to indicate how to interpret the flags field
  * in pvclock structure. If no bits are set, all flags are ignored.
  */
@@ -98,5 +100,6 @@ struct kvm_vcpu_pv_apf_data {
 #define KVM_PV_EOI_ENABLED KVM_PV_EOI_MASK
 #define KVM_PV_EOI_DISABLED 0x0
 
+#endif	/* __ASSEMBLY__ */
 
 #endif /* _UAPI_ASM_X86_KVM_PARA_H */
diff --git a/arch/x86/kernel/mem_encrypt.S b/arch/x86/kernel/mem_encrypt.S
index 6a8cd18..78fc608 100644
--- a/arch/x86/kernel/mem_encrypt.S
+++ b/arch/x86/kernel/mem_encrypt.S
@@ -17,11 +17,47 @@
 #include <asm/page.h>
 #include <asm/msr.h>
 #include <asm/asm-offsets.h>
+#include <uapi/asm/kvm_para.h>
 
 	.text
 	.code64
 ENTRY(sme_enable)
 #ifdef CONFIG_AMD_MEM_ENCRYPT
+	/* Check if running under a hypervisor */
+	movl	$0x40000000, %eax
+	cpuid
+	cmpl	$0x40000001, %eax
+	jb	.Lno_hyp
+
+	movl	$0x40000001, %eax
+	cpuid
+	bt	$KVM_FEATURE_SEV, %eax
+	jnc	.Lno_mem_encrypt
+
+	/*
+	 * Check for memory encryption feature:
+	 *   CPUID Fn8000_001F[EAX] - Bit 0
+	 */
+	movl	$0x8000001f, %eax
+	cpuid
+	bt	$0, %eax
+	jnc	.Lno_mem_encrypt
+
+	/*
+	 * Get memory encryption information:
+	 *   CPUID Fn8000_001F[EBX] - Bits 5:0
+	 *     Pagetable bit position used to indicate encryption
+	 */
+	movl	%ebx, %ecx
+	andl	$0x3f, %ecx
+	jz	.Lno_mem_encrypt
+	bts	%ecx, sme_me_mask(%rip)
+
+	/* Indicate that SEV is active */
+	movl	$1, sev_active(%rip)
+	jmp	.Lmem_encrypt_exit
+
+.Lno_hyp:
 	/* Check for AMD processor */
 	xorl	%eax, %eax
 	cpuid

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 17/28] KVM: SVM: Enable SEV by setting the SEV_ENABLE cpu feature
  2016-08-22 23:23 ` Brijesh Singh
@ 2016-08-22 23:27 ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:27 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab

From: Tom Lendacky <thomas.lendacky@amd.com>

Modify the SVM cpuid update function to set the SEV KVM CPU features bit
when Secure Encrypted Virtualization (SEV) is active.  SEV is active
when Secure Memory Encryption is active in the host and the SEV_ENABLE
bit of the VMCB is set.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/kvm/cpuid.c |    4 +++-
 arch/x86/kvm/svm.c   |   18 ++++++++++++++++++
 2 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 3235e0f..d34faea 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -583,7 +583,7 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
 		entry->edx = 0;
 		break;
 	case 0x80000000:
-		entry->eax = min(entry->eax, 0x8000001a);
+		entry->eax = min(entry->eax, 0x8000001f);
 		break;
 	case 0x80000001:
 		entry->edx &= kvm_cpuid_8000_0001_edx_x86_features;
@@ -616,6 +616,8 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
 		break;
 	case 0x8000001d:
 		break;
+	case 0x8000001f:
+		break;
 	/*Add support for Centaur's CPUID instruction*/
 	case 0xC0000000:
 		/*Just support up to 0xC0000004 now*/
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 9b59260..211be94 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -43,6 +43,7 @@
 #include <asm/kvm_para.h>
 
 #include <asm/virtext.h>
+#include <asm/mem_encrypt.h>
 #include "trace.h"
 
 #define __ex(x) __kvm_handle_fault_on_reboot(x)
@@ -4677,10 +4678,27 @@ static void svm_cpuid_update(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct kvm_cpuid_entry2 *entry;
+	struct vmcb_control_area *ca = &svm->vmcb->control;
+	struct kvm_cpuid_entry2 *features, *sev_info;
 
 	/* Update nrips enabled cache */
 	svm->nrips_enabled = !!guest_cpuid_has_nrips(&svm->vcpu);
 
+	/* Check for Secure Encrypted Virtualization support */
+	features = kvm_find_cpuid_entry(vcpu, KVM_CPUID_FEATURES, 0);
+	if (!features)
+		return;
+
+	sev_info = kvm_find_cpuid_entry(vcpu, 0x8000001f, 0);
+	if (!sev_info)
+		return;
+
+	if (ca->nested_ctl & SVM_NESTED_CTL_SEV_ENABLE) {
+		features->eax |= (1 << KVM_FEATURE_SEV);
+		cpuid(0x8000001f, &sev_info->eax, &sev_info->ebx,
+		      &sev_info->ecx, &sev_info->edx);
+	}
+
 	if (!kvm_vcpu_apicv_active(vcpu))
 		return;
 

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 18/28] crypto: add AMD Platform Security Processor driver
  2016-08-22 23:23 ` Brijesh Singh
@ 2016-08-22 23:27 ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:27 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab

Add a driver to communicate with the Secure Encrypted Virtualization (SEV)
firmware running within the AMD secure processor, providing a secure
key-management interface for SEV guests.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 drivers/crypto/Kconfig       |   11 +
 drivers/crypto/Makefile      |    1 
 drivers/crypto/psp/Kconfig   |    8 
 drivers/crypto/psp/Makefile  |    3 
 drivers/crypto/psp/psp-dev.c |  220 +++++++++++
 drivers/crypto/psp/psp-dev.h |   95 +++++
 drivers/crypto/psp/psp-ops.c |  454 +++++++++++++++++++++++
 drivers/crypto/psp/psp-pci.c |  376 +++++++++++++++++++
 include/linux/ccp-psp.h      |  833 ++++++++++++++++++++++++++++++++++++++++++
 include/uapi/linux/Kbuild    |    1 
 include/uapi/linux/ccp-psp.h |  182 +++++++++
 11 files changed, 2184 insertions(+)
 create mode 100644 drivers/crypto/psp/Kconfig
 create mode 100644 drivers/crypto/psp/Makefile
 create mode 100644 drivers/crypto/psp/psp-dev.c
 create mode 100644 drivers/crypto/psp/psp-dev.h
 create mode 100644 drivers/crypto/psp/psp-ops.c
 create mode 100644 drivers/crypto/psp/psp-pci.c
 create mode 100644 include/linux/ccp-psp.h
 create mode 100644 include/uapi/linux/ccp-psp.h

diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index 1af94e2..3bdbc51 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -464,6 +464,17 @@ if CRYPTO_DEV_CCP
 	source "drivers/crypto/ccp/Kconfig"
 endif
 
+config CRYPTO_DEV_PSP
+	bool "Support for AMD Platform Security Processor"
+	depends on X86 && PCI
+	help
+	  The AMD Platform Security Processor provides hardware
+	  key-management services for Secure Encrypted Virtualization
+	  (SEV) encrypted memory.
+
+if CRYPTO_DEV_PSP
+	source "drivers/crypto/psp/Kconfig"
+endif
+
 config CRYPTO_DEV_MXS_DCP
 	tristate "Support for Freescale MXS DCP"
 	depends on (ARCH_MXS || ARCH_MXC)
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 3c6432d..1ea1e08 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -3,6 +3,7 @@ obj-$(CONFIG_CRYPTO_DEV_ATMEL_SHA) += atmel-sha.o
 obj-$(CONFIG_CRYPTO_DEV_ATMEL_TDES) += atmel-tdes.o
 obj-$(CONFIG_CRYPTO_DEV_BFIN_CRC) += bfin_crc.o
 obj-$(CONFIG_CRYPTO_DEV_CCP) += ccp/
+obj-$(CONFIG_CRYPTO_DEV_PSP) += psp/
 obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM) += caam/
 obj-$(CONFIG_CRYPTO_DEV_GEODE) += geode-aes.o
 obj-$(CONFIG_CRYPTO_DEV_HIFN_795X) += hifn_795x.o
diff --git a/drivers/crypto/psp/Kconfig b/drivers/crypto/psp/Kconfig
new file mode 100644
index 0000000..acd9b87
--- /dev/null
+++ b/drivers/crypto/psp/Kconfig
@@ -0,0 +1,8 @@
+config CRYPTO_DEV_PSP_DD
+	tristate "PSP Key Management device driver"
+	depends on CRYPTO_DEV_PSP
+	default m
+	help
+	  Provides the interface to use the AMD PSP key management APIs
+	  for use with AMD Secure Encrypted Virtualization (SEV). If you
+	  choose 'M' here, this module will be called psp.
diff --git a/drivers/crypto/psp/Makefile b/drivers/crypto/psp/Makefile
new file mode 100644
index 0000000..1b7d00c
--- /dev/null
+++ b/drivers/crypto/psp/Makefile
@@ -0,0 +1,3 @@
+obj-$(CONFIG_CRYPTO_DEV_PSP_DD) += psp.o
+psp-objs := psp-dev.o psp-ops.o
+psp-$(CONFIG_PCI) += psp-pci.o
diff --git a/drivers/crypto/psp/psp-dev.c b/drivers/crypto/psp/psp-dev.c
new file mode 100644
index 0000000..65d5c7e
--- /dev/null
+++ b/drivers/crypto/psp/psp-dev.c
@@ -0,0 +1,220 @@
+/*
+ * AMD Cryptographic Coprocessor (CCP) driver
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/interrupt.h>
+#include <linux/dma-mapping.h>
+#include <linux/string.h>
+#include <linux/wait.h>
+
+#include "psp-dev.h"
+
+MODULE_AUTHOR("Advanced Micro Devices, Inc.");
+MODULE_LICENSE("GPL");
+MODULE_VERSION("0.1.0");
+MODULE_DESCRIPTION("AMD Secure Encrypted Virtualization (SEV) key-management driver prototype");
+
+static struct psp_device *psp_master;
+
+static LIST_HEAD(psp_devs);
+static DEFINE_SPINLOCK(psp_devs_lock);
+
+static atomic_t psp_id;
+
+static void psp_add_device(struct psp_device *psp)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&psp_devs_lock, flags);
+
+	list_add_tail(&psp->entry, &psp_devs);
+	psp_master = psp->get_master(&psp_devs);
+
+	spin_unlock_irqrestore(&psp_devs_lock, flags);
+}
+
+static void psp_del_device(struct psp_device *psp)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&psp_devs_lock, flags);
+
+	list_del(&psp->entry);
+	if (psp == psp_master)
+		psp_master = NULL;
+
+	spin_unlock_irqrestore(&psp_devs_lock, flags);
+}
+
+static void psp_check_support(struct psp_device *psp)
+{
+	if (ioread32(psp->io_regs + PSP_CMDRESP))
+		psp->sev_enabled = 1;
+}
+
+/**
+ * psp_get_master_device - returns a pointer to the PSP master device structure
+ *
+ * Returns NULL if a PSP master device is not present, PSP device structure
+ * otherwise.
+ */
+struct psp_device *psp_get_master_device(void)
+{
+	return psp_master;
+}
+EXPORT_SYMBOL_GPL(psp_get_master_device);
+
+/**
+ * psp_get_device - returns a pointer to the PSP device structure
+ *
+ * Returns NULL if a PSP device is not present, PSP device structure otherwise.
+ */
+struct psp_device *psp_get_device(void)
+{
+	struct psp_device *psp = NULL;
+	unsigned long flags;
+
+	spin_lock_irqsave(&psp_devs_lock, flags);
+
+	if (list_empty(&psp_devs))
+		goto unlock;
+
+	psp = list_first_entry(&psp_devs, struct psp_device, entry);
+
+unlock:
+	spin_unlock_irqrestore(&psp_devs_lock, flags);
+
+	return psp;
+}
+EXPORT_SYMBOL_GPL(psp_get_device);
+
+/**
+ * psp_alloc_struct - allocate and initialize the psp_device struct
+ *
+ * @dev: device struct of the PSP
+ */
+struct psp_device *psp_alloc_struct(struct device *dev)
+{
+	struct psp_device *psp;
+
+	psp = devm_kzalloc(dev, sizeof(*psp), GFP_KERNEL);
+	if (!psp)
+		return NULL;
+	psp->dev = dev;
+
+	psp->id = atomic_inc_return(&psp_id);
+	snprintf(psp->name, sizeof(psp->name), "psp%u", psp->id);
+
+	init_waitqueue_head(&psp->int_queue);
+
+	return psp;
+}
+
+/**
+ * psp_init - initialize the PSP device
+ *
+ * @psp: psp_device struct
+ */
+int psp_init(struct psp_device *psp)
+{
+	int ret;
+
+	psp_check_support(psp);
+
+	/* Disable and clear interrupts until ready */
+	iowrite32(0, psp->io_regs + PSP_P2CMSG_INTEN);
+	iowrite32(0xffffffff, psp->io_regs + PSP_P2CMSG_INTSTS);
+
+	/* Request an irq */
+	ret = psp->get_irq(psp);
+	if (ret) {
+		dev_err(psp->dev, "unable to allocate IRQ\n");
+		return ret;
+	}
+
+	/* Make the device struct available */
+	psp_add_device(psp);
+
+	/* Enable interrupts */
+	iowrite32(1 << PSP_CMD_COMPLETE_REG, psp->io_regs + PSP_P2CMSG_INTEN);
+
+	ret = psp_ops_init(psp);
+	if (ret)
+		dev_err(psp->dev, "psp_ops_init returned %d\n", ret);
+
+	return 0;
+}
+
+/**
+ * psp_destroy - tear down the PSP device
+ *
+ * @psp: psp_device struct
+ */
+void psp_destroy(struct psp_device *psp)
+{
+	psp_ops_exit(psp);
+
+	/* Remove general access to the device struct */
+	psp_del_device(psp);
+
+	psp->free_irq(psp);
+}
+
+/**
+ * psp_irq_handler - handle interrupts generated by the PSP device
+ *
+ * @irq: the irq associated with the interrupt
+ * @data: the data value supplied when the irq was created
+ */
+irqreturn_t psp_irq_handler(int irq, void *data)
+{
+	struct device *dev = data;
+	struct psp_device *psp = dev_get_drvdata(dev);
+	unsigned int status;
+
+	status = ioread32(psp->io_regs + PSP_P2CMSG_INTSTS);
+	if (status & (1 << PSP_CMD_COMPLETE_REG)) {
+		int reg;
+
+		reg = ioread32(psp->io_regs + PSP_CMDRESP);
+		if (reg & PSP_CMDRESP_RESP) {
+			psp->int_rcvd = 1;
+			wake_up_interruptible(&psp->int_queue);
+		}
+	}
+
+	iowrite32(status, psp->io_regs + PSP_P2CMSG_INTSTS);
+
+	return IRQ_HANDLED;
+}
+
+static int __init psp_mod_init(void)
+{
+	return psp_pci_init();
+}
+module_init(psp_mod_init);
+
+static void __exit psp_mod_exit(void)
+{
+	psp_pci_exit();
+}
+module_exit(psp_mod_exit);
diff --git a/drivers/crypto/psp/psp-dev.h b/drivers/crypto/psp/psp-dev.h
new file mode 100644
index 0000000..bb75ca2
--- /dev/null
+++ b/drivers/crypto/psp/psp-dev.h
@@ -0,0 +1,95 @@
+
+#ifndef __PSP_DEV_H__
+#define __PSP_DEV_H__
+
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/spinlock.h>
+#include <linux/mutex.h>
+#include <linux/list.h>
+#include <linux/wait.h>
+#include <linux/dmapool.h>
+#include <linux/hw_random.h>
+#include <linux/interrupt.h>
+#include <linux/miscdevice.h>
+
+#define PSP_P2CMSG_INTEN		0x0110
+#define PSP_P2CMSG_INTSTS		0x0114
+
+#define PSP_C2PMSG_ATTR_0		0x0118
+#define PSP_C2PMSG_ATTR_1		0x011c
+#define PSP_C2PMSG_ATTR_2		0x0120
+#define PSP_C2PMSG_ATTR_3		0x0124
+#define PSP_P2CMSG_ATTR_0		0x0128
+
+#define PSP_C2PMSG(_num)		((_num) << 2)
+#define PSP_CMDRESP			PSP_C2PMSG(32)
+#define PSP_CMDBUFF_ADDR_LO		PSP_C2PMSG(56)
+#define PSP_CMDBUFF_ADDR_HI		PSP_C2PMSG(57)
+
+#define PSP_P2CMSG(_num)		((_num) << 2)
+#define PSP_CMD_COMPLETE_REG		1
+#define PSP_CMD_COMPLETE		PSP_P2CMSG(PSP_CMD_COMPLETE_REG)
+
+#define PSP_CMDRESP_CMD_SHIFT		16
+#define PSP_CMDRESP_IOC			BIT(0)
+#define PSP_CMDRESP_RESP		BIT(31)
+#define PSP_CMDRESP_ERR_MASK		0xffff
+
+#define PSP_DRIVER_NAME			"psp"
+
+struct psp_device {
+	struct list_head entry;
+
+	struct device *dev;
+
+	unsigned int id;
+	char name[32];
+
+	struct dentry *debugfs;
+	struct miscdevice misc;
+
+	unsigned int sev_enabled;
+
+	/*
+	 * Bus-specific device information
+	 */
+	void *dev_specific;
+	int (*get_irq)(struct psp_device *);
+	void (*free_irq)(struct psp_device *);
+	unsigned int irq;
+	struct psp_device *(*get_master)(struct list_head *list);
+
+	/*
+	 * I/O area used for device communication. Writing to the
+	 * mailbox registers generates an interrupt on the PSP.
+	 */
+	void __iomem *io_map;
+	void __iomem *io_regs;
+
+	/* Interrupt wait queue */
+	wait_queue_head_t int_queue;
+	unsigned int int_rcvd;
+};
+
+struct psp_device *psp_get_master_device(void);
+struct psp_device *psp_get_device(void);
+
+#ifdef CONFIG_PCI
+int psp_pci_init(void);
+void psp_pci_exit(void);
+#else
+static inline int psp_pci_init(void) { return 0; }
+static inline void psp_pci_exit(void) { }
+#endif
+
+struct psp_device *psp_alloc_struct(struct device *dev);
+int psp_init(struct psp_device *psp);
+void psp_destroy(struct psp_device *psp);
+
+int psp_ops_init(struct psp_device *psp);
+void psp_ops_exit(struct psp_device *psp);
+
+irqreturn_t psp_irq_handler(int irq, void *data);
+
+#endif /* __PSP_DEV_H__ */
diff --git a/drivers/crypto/psp/psp-ops.c b/drivers/crypto/psp/psp-ops.c
new file mode 100644
index 0000000..81e8dc8
--- /dev/null
+++ b/drivers/crypto/psp/psp-ops.c
@@ -0,0 +1,454 @@
+
+#include <linux/module.h>
+#include <linux/fs.h>
+#include <linux/miscdevice.h>
+#include <linux/delay.h>
+#include <linux/jiffies.h>
+#include <linux/wait.h>
+#include <linux/mutex.h>
+#include <linux/ccp-psp.h>
+
+#include "psp-dev.h"
+
+static unsigned int psp_poll;
+module_param(psp_poll, uint, 0444);
+MODULE_PARM_DESC(psp_poll, "Poll for command completion - any non-zero value");
+
+#define PSP_DEFAULT_TIMEOUT	2
+
+DEFINE_MUTEX(psp_cmd_mutex);
+
+static int psp_wait_cmd_poll(struct psp_device *psp, unsigned int timeout,
+			     unsigned int *reg)
+{
+	int wait = timeout * 10;	/* 100ms sleep => timeout * 10 */
+
+	while (--wait) {
+		msleep(100);
+
+		*reg = ioread32(psp->io_regs + PSP_CMDRESP);
+		if (*reg & PSP_CMDRESP_RESP)
+			break;
+	}
+
+	if (!wait) {
+		dev_err(psp->dev, "psp command timed out\n");
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}
+
+static int psp_wait_cmd_ioc(struct psp_device *psp, unsigned int timeout,
+			    unsigned int *reg)
+{
+	unsigned long jiffie_timeout = timeout;
+	long ret;
+
+	jiffie_timeout *= HZ;
+
+	ret = wait_event_interruptible_timeout(psp->int_queue, psp->int_rcvd,
+					       jiffie_timeout);
+	if (ret <= 0) {
+		dev_err(psp->dev, "psp command timed out\n");
+		return -ETIMEDOUT;
+	}
+
+	psp->int_rcvd = 0;
+
+	*reg = ioread32(psp->io_regs + PSP_CMDRESP);
+
+	return 0;
+}
+
+static int psp_wait_cmd(struct psp_device *psp, unsigned int timeout,
+			unsigned int *reg)
+{
+	return (*reg & PSP_CMDRESP_IOC) ? psp_wait_cmd_ioc(psp, timeout, reg)
+					: psp_wait_cmd_poll(psp, timeout, reg);
+}
+
+static int psp_issue_cmd(enum psp_cmd cmd, void *data, unsigned int timeout,
+			 int *psp_ret)
+{
+	struct psp_device *psp = psp_get_master_device();
+	unsigned int phys_lsb, phys_msb;
+	unsigned int reg;
+	int ret;
+
+	if (psp_ret)
+		*psp_ret = 0;
+
+	if (!psp)
+		return -ENODEV;
+
+	if (!psp->sev_enabled)
+		return -ENOTSUPP;
+
+	/* Set the physical address for the PSP */
+	phys_lsb = data ? lower_32_bits(__psp_pa(data)) : 0;
+	phys_msb = data ? upper_32_bits(__psp_pa(data)) : 0;
+
+	/* Only one command at a time... */
+	mutex_lock(&psp_cmd_mutex);
+
+	iowrite32(phys_lsb, psp->io_regs + PSP_CMDBUFF_ADDR_LO);
+	iowrite32(phys_msb, psp->io_regs + PSP_CMDBUFF_ADDR_HI);
+	wmb();
+
+	reg = cmd;
+	reg <<= PSP_CMDRESP_CMD_SHIFT;
+	reg |= psp_poll ? 0 : PSP_CMDRESP_IOC;
+	iowrite32(reg, psp->io_regs + PSP_CMDRESP);
+
+	ret = psp_wait_cmd(psp, timeout, &reg);
+	if (ret)
+		goto unlock;
+
+	if (psp_ret)
+		*psp_ret = reg & PSP_CMDRESP_ERR_MASK;
+
+	if (reg & PSP_CMDRESP_ERR_MASK) {
+		dev_err(psp->dev, "psp command %u failed (%#010x)\n",
+			cmd, reg & PSP_CMDRESP_ERR_MASK);
+		ret = -EIO;
+	}
+
+unlock:
+	mutex_unlock(&psp_cmd_mutex);
+
+	return ret;
+}
+
+int psp_platform_init(struct psp_data_init *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_INIT, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_init);
+
+int psp_platform_shutdown(int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_SHUTDOWN, NULL, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_shutdown);
+
+int psp_platform_status(struct psp_data_status *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PLATFORM_STATUS, data,
+			     PSP_DEFAULT_TIMEOUT, psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_status);
+
+int psp_guest_launch_start(struct psp_data_launch_start *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_LAUNCH_START, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_launch_start);
+
+int psp_guest_launch_update(struct psp_data_launch_update *data,
+			    unsigned int timeout, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_LAUNCH_UPDATE, data, timeout, psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_launch_update);
+
+int psp_guest_launch_finish(struct psp_data_launch_finish *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_LAUNCH_FINISH, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_launch_finish);
+
+int psp_guest_activate(struct psp_data_activate *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_ACTIVATE, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_activate);
+
+int psp_guest_deactivate(struct psp_data_deactivate *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_DEACTIVATE, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_deactivate);
+
+int psp_guest_df_flush(int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_DF_FLUSH, NULL, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_df_flush);
+
+int psp_guest_decommission(struct psp_data_decommission *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_DECOMMISSION, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_decommission);
+
+int psp_guest_status(struct psp_data_guest_status *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_GUEST_STATUS, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_status);
+
+int psp_dbg_decrypt(struct psp_data_dbg *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_DBG_DECRYPT, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_dbg_decrypt);
+
+int psp_dbg_encrypt(struct psp_data_dbg *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_DBG_ENCRYPT, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_dbg_encrypt);
+
+int psp_guest_receive_start(struct psp_data_receive_start *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_RECEIVE_START, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_receive_start);
+
+int psp_guest_receive_update(struct psp_data_receive_update *data,
+			    unsigned int timeout, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_RECEIVE_UPDATE, data, timeout, psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_receive_update);
+
+int psp_guest_receive_finish(struct psp_data_receive_finish *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_RECEIVE_FINISH, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_receive_finish);
+
+int psp_guest_send_start(struct psp_data_send_start *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_SEND_START, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_send_start);
+
+int psp_guest_send_update(struct psp_data_send_update *data,
+			    unsigned int timeout, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_SEND_UPDATE, data, timeout, psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_send_update);
+
+int psp_guest_send_finish(struct psp_data_send_finish *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_SEND_FINISH, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_send_finish);
+
+int psp_platform_pdh_gen(int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PDH_GEN, NULL, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_pdh_gen);
+
+int psp_platform_pdh_cert_export(struct psp_data_pdh_cert_export *data,
+				int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PDH_CERT_EXPORT, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_pdh_cert_export);
+
+int psp_platform_pek_gen(int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PEK_GEN, NULL, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_pek_gen);
+
+int psp_platform_pek_cert_import(struct psp_data_pek_cert_import *data,
+				 int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PEK_CERT_IMPORT, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_pek_cert_import);
+
+int psp_platform_pek_csr(struct psp_data_pek_csr *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PEK_CSR, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_pek_csr);
+
+int psp_platform_factory_reset(int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_FACTORY_RESET, NULL, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_factory_reset);
+
+static int psp_copy_to_user(void __user *argp, void *data, size_t size)
+{
+	int ret = 0;
+
+	if (copy_to_user(argp, data, size))
+		ret = -EFAULT;
+	free_pages_exact(data, size);
+
+	return ret;
+}
+
+static void *psp_copy_from_user(void __user *argp, size_t *size)
+{
+	u32 buffer_len;
+	void *data;
+
+	if (copy_from_user(&buffer_len, argp, sizeof(buffer_len)))
+		return ERR_PTR(-EFAULT);
+
+	data = alloc_pages_exact(buffer_len, GFP_KERNEL | __GFP_ZERO);
+	if (!data)
+		return ERR_PTR(-ENOMEM);
+	*size = buffer_len;
+
+	if (copy_from_user(data, argp, buffer_len)) {
+		free_pages_exact(data, *size);
+		return ERR_PTR(-EFAULT);
+	}
+
+	return data;
+}
+
+static long psp_ioctl(struct file *file, unsigned int ioctl, unsigned long arg)
+{
+	int ret = -EFAULT;
+	void *data = NULL;
+	size_t buffer_len = 0;
+	void __user *argp = (void __user *)arg;
+	struct psp_issue_cmd input;
+
+	if (ioctl != PSP_ISSUE_CMD)
+		return -EINVAL;
+
+	/* get input parameters */
+	if (copy_from_user(&input, argp, sizeof(struct psp_issue_cmd)))
+		return -EFAULT;
+
+	if (input.cmd > PSP_CMD_MAX)
+		return -EINVAL;
+
+	switch (input.cmd) {
+
+	case PSP_CMD_INIT: {
+		struct psp_data_init *init;
+
+		data = psp_copy_from_user((void __user *)input.opaque,
+					  &buffer_len);
+		if (IS_ERR(data))
+			break;
+
+		init = data;
+		ret = psp_platform_init(init, &input.psp_ret);
+		break;
+	}
+	case PSP_CMD_SHUTDOWN: {
+		ret = psp_platform_shutdown(&input.psp_ret);
+		break;
+	}
+	case PSP_CMD_FACTORY_RESET: {
+		ret = psp_platform_factory_reset(&input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PLATFORM_STATUS: {
+		struct psp_data_status *status;
+
+		data = psp_copy_from_user((void __user *)input.opaque,
+					  &buffer_len);
+		if (IS_ERR(data))
+			break;
+
+		status = data;
+		ret = psp_platform_status(status, &input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PEK_GEN: {
+		ret = psp_platform_pek_gen(&input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PEK_CSR: {
+		struct psp_data_pek_csr *pek_csr;
+
+		data = psp_copy_from_user((void __user *)input.opaque,
+					  &buffer_len);
+		if (IS_ERR(data))
+			break;
+
+		pek_csr = data;
+		ret = psp_platform_pek_csr(pek_csr, &input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PEK_CERT_IMPORT: {
+		struct psp_data_pek_cert_import *import;
+
+		data = psp_copy_from_user((void __user *)input.opaque,
+					  &buffer_len);
+		if (IS_ERR(data))
+			break;
+
+		import = data;
+		ret = psp_platform_pek_cert_import(import, &input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PDH_GEN: {
+		ret = psp_platform_pdh_gen(&input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PDH_CERT_EXPORT: {
+		struct psp_data_pdh_cert_export *export;
+
+		data = psp_copy_from_user((void __user *)input.opaque,
+					  &buffer_len);
+		if (IS_ERR(data))
+			break;
+
+		export = data;
+		ret = psp_platform_pdh_cert_export(export, &input.psp_ret);
+		break;
+	}
+	default:
+		ret = -EINVAL;
+	}
+
+	if (data && psp_copy_to_user((void __user *)input.opaque, data,
+				     buffer_len))
+		ret = -EFAULT;
+
+	if (copy_to_user(argp, &input, sizeof(struct psp_issue_cmd)))
+		ret = -EFAULT;
+
+	return ret;
+}
+
+static const struct file_operations fops = {
+	.owner = THIS_MODULE,
+	.unlocked_ioctl = psp_ioctl,
+};
+
+int psp_ops_init(struct psp_device *psp)
+{
+	struct miscdevice *misc = &psp->misc;
+
+	misc->minor = MISC_DYNAMIC_MINOR;
+	misc->name = psp->name;
+	misc->fops = &fops;
+
+	return misc_register(misc);
+}
+
+void psp_ops_exit(struct psp_device *psp)
+{
+	misc_deregister(&psp->misc);
+}
diff --git a/drivers/crypto/psp/psp-pci.c b/drivers/crypto/psp/psp-pci.c
new file mode 100644
index 0000000..2b4c379
--- /dev/null
+++ b/drivers/crypto/psp/psp-pci.c
@@ -0,0 +1,376 @@
+/*
+ * AMD Cryptographic Coprocessor (CCP) driver
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/pci_ids.h>
+#include <linux/dma-mapping.h>
+#include <linux/sched.h>
+#include <linux/interrupt.h>
+
+#include "psp-dev.h"
+
+#define IO_BAR				2
+#define IO_OFFSET			0x10500
+
+#define MSIX_VECTORS			2
+
+struct psp_msix {
+	u32 vector;
+	char name[16];
+};
+
+struct psp_pci {
+	struct pci_dev *pdev;
+	int msix_count;
+	struct psp_msix msix[MSIX_VECTORS];
+};
+
+static int psp_get_msix_irqs(struct psp_device *psp)
+{
+	struct psp_pci *psp_pci = psp->dev_specific;
+	struct device *dev = psp->dev;
+	struct pci_dev *pdev = container_of(dev, struct pci_dev, dev);
+	struct msix_entry msix_entry[MSIX_VECTORS];
+	unsigned int name_len = sizeof(psp_pci->msix[0].name) - 1;
+	int v, ret;
+
+	for (v = 0; v < ARRAY_SIZE(msix_entry); v++)
+		msix_entry[v].entry = v;
+
+	ret = pci_enable_msix_range(pdev, msix_entry, 1, v);
+	if (ret < 0)
+		return ret;
+
+	psp_pci->msix_count = ret;
+	for (v = 0; v < psp_pci->msix_count; v++) {
+		/* Set the interrupt names and request the irqs */
+		snprintf(psp_pci->msix[v].name, name_len, "%s-%u", psp->name, v);
+		psp_pci->msix[v].vector = msix_entry[v].vector;
+		ret = request_irq(psp_pci->msix[v].vector, psp_irq_handler,
+				  0, psp_pci->msix[v].name, dev);
+		if (ret) {
+			dev_notice(dev, "unable to allocate MSI-X IRQ (%d)\n",
+				   ret);
+			goto e_irq;
+		}
+	}
+
+	return 0;
+
+e_irq:
+	while (v--)
+		free_irq(psp_pci->msix[v].vector, dev);
+	pci_disable_msix(pdev);
+	psp_pci->msix_count = 0;
+
+	return ret;
+}
+
+static int psp_get_msi_irq(struct psp_device *psp)
+{
+	struct device *dev = psp->dev;
+	struct pci_dev *pdev = container_of(dev, struct pci_dev, dev);
+	int ret;
+
+	ret = pci_enable_msi(pdev);
+	if (ret)
+		return ret;
+
+	psp->irq = pdev->irq;
+	ret = request_irq(psp->irq, psp_irq_handler, 0, psp->name, dev);
+	if (ret) {
+		dev_notice(dev, "unable to allocate MSI IRQ (%d)\n", ret);
+		goto e_msi;
+	}
+
+	return 0;
+
+e_msi:
+	pci_disable_msi(pdev);
+
+	return ret;
+}
+
+static int psp_get_irqs(struct psp_device *psp)
+{
+	struct device *dev = psp->dev;
+	int ret;
+
+	ret = psp_get_msix_irqs(psp);
+	if (!ret)
+		return 0;
+
+	/* Couldn't get MSI-X vectors, try MSI */
+	dev_notice(dev, "could not enable MSI-X (%d), trying MSI\n", ret);
+	ret = psp_get_msi_irq(psp);
+	if (!ret)
+		return 0;
+
+	/* Couldn't get an MSI interrupt either */
+	dev_notice(dev, "could not enable MSI (%d)\n", ret);
+
+	return ret;
+}
+
+void psp_free_irqs(struct psp_device *psp)
+{
+	struct psp_pci *psp_pci = psp->dev_specific;
+	struct device *dev = psp->dev;
+	struct pci_dev *pdev = container_of(dev, struct pci_dev, dev);
+
+	if (psp_pci->msix_count) {
+		while (psp_pci->msix_count--)
+			free_irq(psp_pci->msix[psp_pci->msix_count].vector,
+				 dev);
+		pci_disable_msix(pdev);
+	} else {
+		free_irq(psp->irq, dev);
+		pci_disable_msi(pdev);
+	}
+}
+
+static bool psp_is_master(struct psp_device *cur, struct psp_device *new)
+{
+	struct psp_pci *psp_pci_cur, *psp_pci_new;
+	struct pci_dev *pdev_cur, *pdev_new;
+
+	psp_pci_cur = cur->dev_specific;
+	psp_pci_new = new->dev_specific;
+
+	pdev_cur = psp_pci_cur->pdev;
+	pdev_new = psp_pci_new->pdev;
+
+	/* Order by bus number first, then by devfn (slot/function) */
+	if (pdev_new->bus->number != pdev_cur->bus->number)
+		return pdev_new->bus->number < pdev_cur->bus->number;
+
+	return pdev_new->devfn < pdev_cur->devfn;
+}
+
+static struct psp_device *psp_get_master(struct list_head *list)
+{
+	struct psp_device *psp, *tmp;
+
+	psp = NULL;
+	list_for_each_entry(tmp, list, entry) {
+		if (!psp || psp_is_master(psp, tmp))
+			psp = tmp;
+	}
+
+	return psp;
+}
+
+static int psp_find_mmio_area(struct psp_device *psp)
+{
+	struct device *dev = psp->dev;
+	struct pci_dev *pdev = container_of(dev, struct pci_dev, dev);
+	unsigned long io_flags;
+
+	io_flags = pci_resource_flags(pdev, IO_BAR);
+	if (io_flags & IORESOURCE_MEM)
+		return IO_BAR;
+
+	return -EIO;
+}
+
+static int psp_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+	struct psp_device *psp;
+	struct psp_pci *psp_pci;
+	struct device *dev = &pdev->dev;
+	unsigned int bar;
+	int ret;
+
+	ret = -ENOMEM;
+	psp = psp_alloc_struct(dev);
+	if (!psp)
+		goto e_err;
+
+	psp_pci = devm_kzalloc(dev, sizeof(*psp_pci), GFP_KERNEL);
+	if (!psp_pci) {
+		ret = -ENOMEM;
+		goto e_err;
+	}
+	psp_pci->pdev = pdev;
+	psp->dev_specific = psp_pci;
+	psp->get_irq = psp_get_irqs;
+	psp->free_irq = psp_free_irqs;
+	psp->get_master = psp_get_master;
+
+	ret = pci_request_regions(pdev, PSP_DRIVER_NAME);
+	if (ret) {
+		dev_err(dev, "pci_request_regions failed (%d)\n", ret);
+		goto e_err;
+	}
+
+	ret = pci_enable_device(pdev);
+	if (ret) {
+		dev_err(dev, "pci_enable_device failed (%d)\n", ret);
+		goto e_regions;
+	}
+
+	pci_set_master(pdev);
+
+	ret = psp_find_mmio_area(psp);
+	if (ret < 0)
+		goto e_device;
+	bar = ret;
+
+	ret = -EIO;
+	psp->io_map = pci_iomap(pdev, bar, 0);
+	if (!psp->io_map) {
+		dev_err(dev, "pci_iomap failed\n");
+		goto e_device;
+	}
+	psp->io_regs = psp->io_map + IO_OFFSET;
+
+	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(48));
+	if (ret) {
+		ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
+		if (ret) {
+			dev_err(dev, "dma_set_mask_and_coherent failed (%d)\n",
+				ret);
+			goto e_iomap;
+		}
+	}
+
+	dev_set_drvdata(dev, psp);
+
+	ret = psp_init(psp);
+	if (ret)
+		goto e_iomap;
+
+	dev_notice(dev, "enabled\n");
+
+	return 0;
+
+e_iomap:
+	pci_iounmap(pdev, psp->io_map);
+
+e_device:
+	pci_disable_device(pdev);
+
+e_regions:
+	pci_release_regions(pdev);
+
+e_err:
+	dev_notice(dev, "initialization failed\n");
+	return ret;
+}
+
+static void psp_pci_remove(struct pci_dev *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct psp_device *psp = dev_get_drvdata(dev);
+
+	if (!psp)
+		return;
+
+	psp_destroy(psp);
+
+	pci_iounmap(pdev, psp->io_map);
+
+	pci_disable_device(pdev);
+
+	pci_release_regions(pdev);
+
+	dev_notice(dev, "disabled\n");
+}
+
+#if 0
+#ifdef CONFIG_PM
+static int ccp_pci_suspend(struct pci_dev *pdev, pm_message_t state)
+{
+	struct device *dev = &pdev->dev;
+	struct ccp_device *ccp = dev_get_drvdata(dev);
+	unsigned long flags;
+	unsigned int i;
+
+	spin_lock_irqsave(&ccp->cmd_lock, flags);
+
+	ccp->suspending = 1;
+
+	/* Wake all the queue kthreads to prepare for suspend */
+	for (i = 0; i < ccp->cmd_q_count; i++)
+		wake_up_process(ccp->cmd_q[i].kthread);
+
+	spin_unlock_irqrestore(&ccp->cmd_lock, flags);
+
+	/* Wait for all queue kthreads to say they're done */
+	while (!ccp_queues_suspended(ccp))
+		wait_event_interruptible(ccp->suspend_queue,
+					 ccp_queues_suspended(ccp));
+
+	return 0;
+}
+
+static int ccp_pci_resume(struct pci_dev *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct ccp_device *ccp = dev_get_drvdata(dev);
+	unsigned long flags;
+	unsigned int i;
+
+	spin_lock_irqsave(&ccp->cmd_lock, flags);
+
+	ccp->suspending = 0;
+
+	/* Wake up all the kthreads */
+	for (i = 0; i < ccp->cmd_q_count; i++) {
+		ccp->cmd_q[i].suspended = 0;
+		wake_up_process(ccp->cmd_q[i].kthread);
+	}
+
+	spin_unlock_irqrestore(&ccp->cmd_lock, flags);
+
+	return 0;
+}
+#endif
+#endif
+
+static const struct pci_device_id psp_pci_table[] = {
+	{ PCI_VDEVICE(AMD, 0x1456), },
+	/* Last entry must be zero */
+	{ 0, }
+};
+MODULE_DEVICE_TABLE(pci, psp_pci_table);
+
+static struct pci_driver psp_pci_driver = {
+	.name = PSP_DRIVER_NAME,
+	.id_table = psp_pci_table,
+	.probe = psp_pci_probe,
+	.remove = psp_pci_remove,
+#if 0
+#ifdef CONFIG_PM
+	.suspend = ccp_pci_suspend,
+	.resume = ccp_pci_resume,
+#endif
+#endif
+};
+
+int psp_pci_init(void)
+{
+	return pci_register_driver(&psp_pci_driver);
+}
+
+void psp_pci_exit(void)
+{
+	pci_unregister_driver(&psp_pci_driver);
+}
diff --git a/include/linux/ccp-psp.h b/include/linux/ccp-psp.h
new file mode 100644
index 0000000..b5e791c
--- /dev/null
+++ b/include/linux/ccp-psp.h
@@ -0,0 +1,833 @@
+/*
+ * AMD Secure Processor (PSP) driver
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __CCP_PSP_H__
+#define __CCP_PSP_H__
+
+#include <uapi/linux/ccp-psp.h>
+
+#ifdef CONFIG_X86
+#include <asm/mem_encrypt.h>
+
+#define __psp_pa(x)	__sme_pa(x)
+#else
+#define __psp_pa(x)	__pa(x)
+#endif
+
+/**
+ * struct psp_data_activate - PSP ACTIVATE command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to activate
+ * @asid: asid assigned to the VM
+ */
+struct __attribute__ ((__packed__)) psp_data_activate {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u32 asid;				/* In */
+};
+
+/**
+ * struct psp_data_deactivate - PSP DEACTIVATE command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to deactivate
+ */
+struct __attribute__ ((__packed__)) psp_data_deactivate {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+};
+
+/**
+ * struct psp_data_launch_start - PSP LAUNCH_START command parameters
+ * @hdr: command header
+ * @handle: handle assigned to the VM
+ * @flags: configuration flags for the VM
+ * @policy: policy information for the VM
+ * @dh_pub_qx: the Qx parameter of the VM owner's ECDH public key
+ * @dh_pub_qy: the Qy parameter of the VM owner's ECDH public key
+ * @nonce: nonce generated by the VM owner
+ */
+struct __attribute__ ((__packed__)) psp_data_launch_start {
+	struct psp_data_header hdr;
+	u32 handle;				/* In/Out */
+	u32 flags;				/* In */
+	u32 policy;				/* In */
+	u8  dh_pub_qx[32];			/* In */
+	u8  dh_pub_qy[32];			/* In */
+	u8  nonce[16];				/* In */
+};
+
+/**
+ * struct psp_data_launch_update - PSP LAUNCH_UPDATE command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to update
+ * @length: length of memory to be encrypted
+ * @address: physical address of memory region to encrypt
+ */
+struct __attribute__ ((__packed__)) psp_data_launch_update {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u64 address;				/* In */
+	u32 length;				/* In */
+};
+
+/**
+ * struct psp_data_launch_vcpus - PSP LAUNCH_FINISH VCPU state information
+ * @state_length: length of the VCPU state information to measure
+ * @state_mask_addr: mask of the bytes within the VCPU state information
+ *                   to use in the measurement
+ * @state_count: number of VCPUs to measure
+ * @state_addr: physical address of the VCPU state (VMCB)
+ */
+struct __attribute__ ((__packed__)) psp_data_launch_vcpus {
+	u32 state_length;			/* In */
+	u64 state_mask_addr;			/* In */
+	u32 state_count;			/* In */
+	u64 state_addr[];			/* In */
+};
+
+/**
+ * struct psp_data_launch_finish - PSP LAUNCH_FINISH command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to process
+ * @measurement: the measurement of the encrypted VM memory areas
+ * @vcpus: the VCPU state information to include in the measurement
+ */
+struct __attribute__ ((__packed__)) psp_data_launch_finish {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u8  measurement[32];			/* In/Out */
+	struct psp_data_launch_vcpus vcpus;	/* In */
+};
+
+/**
+ * struct psp_data_decommission - PSP DECOMMISSION command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to decommission
+ */
+struct __attribute__ ((__packed__)) psp_data_decommission {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+};
+
+/**
+ * struct psp_data_guest_status - PSP GUEST_STATUS command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to retrieve status
+ * @policy: policy information for the VM
+ * @asid: current ASID of the VM
+ * @state: current state of the VM
+ */
+struct __attribute__ ((__packed__)) psp_data_guest_status {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u32 policy;				/* Out */
+	u32 asid;				/* Out */
+	u8 state;				/* Out */
+};
+
+/**
+ * struct psp_data_dbg - PSP DBG_ENCRYPT/DBG_DECRYPT command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to perform debug operation
+ * @src_addr: source address of data to operate on
+ * @dst_addr: destination address of data to operate on
+ * @length: length of data to operate on
+ */
+struct __attribute__ ((__packed__)) psp_data_dbg {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u64 src_addr;				/* In */
+	u64 dst_addr;				/* In */
+	u32 length;				/* In */
+};
+
+/**
+ * struct psp_data_receive_start - PSP RECEIVE_START command parameters
+ *
+ * @hdr: command header
+ * @handle: handle of the VM receiving the guest
+ * @flags: flags for the receive process
+ * @policy: guest policy flags
+ * @policy_meas: HMAC of the policy keyed with the TIK
+ * @wrapped_tek: wrapped transport encryption key
+ * @wrapped_tik: wrapped transport integrity key
+ * @ten: transport encryption nonce
+ * @dh_pub_qx: the Qx parameter of the origin's ECDH public key
+ * @dh_pub_qy: the Qy parameter of the origin's ECDH public key
+ * @nonce: nonce generated by the origin
+ */
+struct __attribute__((__packed__)) psp_data_receive_start {
+	struct psp_data_header hdr;	/* In/Out */
+	u32 handle;			/* In/Out */
+	u32 flags;			/* In */
+	u32 policy;			/* In */
+	u8 policy_meas[32];		/* In */
+	u8 wrapped_tek[24];		/* In */
+	u8 reserved1[8];
+	u8 wrapped_tik[24];		/* In */
+	u8 reserved2[8];
+	u8 ten[16];			/* In */
+	u8 dh_pub_qx[32];		/* In */
+	u8 dh_pub_qy[32];		/* In */
+	u8 nonce[16];			/* In */
+};
+
+/**
+ * struct psp_data_receive_update - PSP RECEIVE_UPDATE command parameters
+ *
+ * @hdr: command header
+ * @handle: handle of the VM receiving the guest
+ * @iv: initialization vector for this blob of memory
+ * @address: physical address of the memory region to encrypt
+ * @length: length of the memory region to encrypt
+ *
+ */
+struct __attribute__((__packed__)) psp_data_receive_update {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u8 iv[16];				/* In */
+	u64 address;				/* In */
+	u32 length;				/* In */
+};
+
+/**
+ * struct psp_data_receive_finish - PSP RECEIVE_FINISH command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to process
+ * @measurement: the measurement of the transported guest
+ */
+struct __attribute__ ((__packed__)) psp_data_receive_finish {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u8  measurement[32];			/* In */
+};
+
+/**
+ * struct psp_data_send_start - PSP SEND_START command parameters
+ * @hdr: command header
+ * @nonce: nonce generated by firmware
+ * @policy: guest policy flags
+ * @policy_meas: HMAC of policy keyed with TIK
+ * @wrapped_tek: wrapped transport encryption key
+ * @wrapped_tik: wrapped transport integrity key
+ * @ten: transport encryption nonce
+ * @iv: the IV of the transport encryption block
+ * @handle: handle of the VM to process
+ * @flags: flags for send command
+ * @major: API major number
+ * @minor: API minor number
+ * @serial: platform serial number
+ * @dh_pub_qx: the Qx parameter of the target DH public key
+ * @dh_pub_qy: the Qy parameter of the target DH public key
+ * @pek_sig_r: the r component of the PEK signature
+ * @pek_sig_s: the s component of the PEK signature
+ * @cek_sig_r: the r component of the CEK signature
+ * @cek_sig_s: the s component of the CEK signature
+ * @cek_pub_qx: the Qx parameter of the CEK public key
+ * @cek_pub_qy: the Qy parameter of the CEK public key
+ * @ask_sig_r: the r component of the ASK signature
+ * @ask_sig_s: the s component of the ASK signature
+ * @ncerts: number of certificates in certificate chain
+ * @cert_length: length of the certificates
+ * @certs: certificates in the chain
+ */
+
+struct __attribute__((__packed__)) psp_data_send_start {
+	struct psp_data_header hdr;			/* In/Out */
+	u8 nonce[16];					/* Out */
+	u32 policy;					/* Out */
+	u8 policy_meas[32];				/* Out */
+	u8 wrapped_tek[24];				/* Out */
+	u8 reserved1[8];
+	u8 wrapped_tik[24];				/* Out */
+	u8 reserved2[8];
+	u8 ten[16];					/* Out */
+	u8 iv[16];					/* Out */
+	u32 handle;					/* In */
+	u32 flags;					/* In */
+	u8 api_major;					/* In */
+	u8 api_minor;					/* In */
+	u8 reserved3[2];
+	u32 serial;					/* In */
+	u8 dh_pub_qx[32];				/* In */
+	u8 dh_pub_qy[32];				/* In */
+	u8 pek_sig_r[32];				/* In */
+	u8 pek_sig_s[32];				/* In */
+	u8 cek_sig_r[32];				/* In */
+	u8 cek_sig_s[32];				/* In */
+	u8 cek_pub_qx[32];				/* In */
+	u8 cek_pub_qy[32];				/* In */
+	u8 ask_sig_r[32];				/* In */
+	u8 ask_sig_s[32];				/* In */
+	u32 ncerts;					/* In */
+	u32 cert_length;				/* In */
+	u8 certs[];					/* In */
+};
+
+/**
+ * struct psp_data_send_update - PSP SEND_UPDATE command parameters
+ *
+ * @hdr: command header
+ * @handle: handle of the VM being sent
+ * @src_addr: physical address of the memory region to encrypt from
+ * @dst_addr: physical address of the memory region to encrypt to
+ * @length: length of the memory region to encrypt
+ */
+struct __attribute__((__packed__)) psp_data_send_update {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u64 src_addr;				/* In */
+	u64 dst_addr;				/* In */
+	u32 length;				/* In */
+};
+
+/**
+ * struct psp_data_send_finish - PSP SEND_FINISH command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to process
+ * @measurement: the measurement of the transported guest
+ */
+struct __attribute__ ((__packed__)) psp_data_send_finish {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u8  measurement[32];			/* Out */
+};
+
+#if defined(CONFIG_CRYPTO_DEV_PSP_DD) || \
+	defined(CONFIG_CRYPTO_DEV_PSP_DD_MODULE)
+
+/**
+ * psp_platform_init - perform PSP INIT command
+ *
+ * @init: psp_data_init structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_init(struct psp_data_init *init, int *psp_ret);
+
+/**
+ * psp_platform_shutdown - perform PSP SHUTDOWN command
+ *
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_shutdown(int *psp_ret);
+
+/**
+ * psp_platform_status - perform PSP PLATFORM_STATUS command
+ *
+ * @status: psp_data_status structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_status(struct psp_data_status *status, int *psp_ret);
+
+/**
+ * psp_guest_launch_start - perform PSP LAUNCH_START command
+ *
+ * @start: psp_data_launch_start structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_launch_start(struct psp_data_launch_start *start, int *psp_ret);
+
+/**
+ * psp_guest_launch_update - perform PSP LAUNCH_UPDATE command
+ *
+ * @update: psp_data_launch_update structure to be processed
+ * @timeout: command timeout
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_launch_update(struct psp_data_launch_update *update,
+			    unsigned int timeout, int *psp_ret);
+
+/**
+ * psp_guest_launch_finish - perform PSP LAUNCH_FINISH command
+ *
+ * @finish: psp_data_launch_finish structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_launch_finish(struct psp_data_launch_finish *finish, int *psp_ret);
+
+/**
+ * psp_guest_activate - perform PSP ACTIVATE command
+ *
+ * @activate: psp_data_activate structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_activate(struct psp_data_activate *activate, int *psp_ret);
+
+/**
+ * psp_guest_deactivate - perform PSP DEACTIVATE command
+ *
+ * @deactivate: psp_data_deactivate structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_deactivate(struct psp_data_deactivate *deactivate, int *psp_ret);
+
+/**
+ * psp_guest_df_flush - perform PSP DF_FLUSH command
+ *
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_df_flush(int *psp_ret);
+
+/**
+ * psp_guest_decommission - perform PSP DECOMMISSION command
+ *
+ * @decommission: psp_data_decommission structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_decommission(struct psp_data_decommission *decommission,
+			   int *psp_ret);
+
+/**
+ * psp_guest_status - perform PSP GUEST_STATUS command
+ *
+ * @status: psp_data_guest_status structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_status(struct psp_data_guest_status *status, int *psp_ret);
+
+/**
+ * psp_dbg_decrypt - perform PSP DBG_DECRYPT command
+ *
+ * @dbg: psp_data_dbg structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_dbg_decrypt(struct psp_data_dbg *dbg, int *psp_ret);
+
+/**
+ * psp_dbg_encrypt - perform PSP DBG_ENCRYPT command
+ *
+ * @dbg: psp_data_dbg structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_dbg_encrypt(struct psp_data_dbg *dbg, int *psp_ret);
+
+/**
+ * psp_guest_receive_start - perform PSP RECEIVE_START command
+ *
+ * @start: psp_data_receive_start structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_receive_start(struct psp_data_receive_start *start, int *psp_ret);
+
+/**
+ * psp_guest_receive_update - perform PSP RECEIVE_UPDATE command
+ *
+ * @update: psp_data_receive_update structure to be processed
+ * @timeout: command timeout
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_receive_update(struct psp_data_receive_update *update,
+			    unsigned int timeout, int *psp_ret);
+
+/**
+ * psp_guest_receive_finish - perform PSP RECEIVE_FINISH command
+ *
+ * @finish: psp_data_receive_finish structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_receive_finish(struct psp_data_receive_finish *finish,
+			     int *psp_ret);
+
+/**
+ * psp_guest_send_start - perform PSP SEND_START command
+ *
+ * @start: psp_data_send_start structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_send_start(struct psp_data_send_start *start, int *psp_ret);
+
+/**
+ * psp_guest_send_update - perform PSP SEND_UPDATE command
+ *
+ * @update: psp_data_send_update structure to be processed
+ * @timeout: command timeout
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_send_update(struct psp_data_send_update *update,
+			    unsigned int timeout, int *psp_ret);
+
+/**
+ * psp_guest_send_finish - perform PSP SEND_FINISH command
+ *
+ * @finish: psp_data_send_finish structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_send_finish(struct psp_data_send_finish *finish,
+			     int *psp_ret);
+
+/**
+ * psp_platform_pdh_gen - perform PSP PDH_GEN command
+ *
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_pdh_gen(int *psp_ret);
+
+/**
+ * psp_platform_pdh_cert_export - perform PSP PDH_CERT_EXPORT command
+ *
+ * @data: psp_data_platform_pdh_cert_export structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_pdh_cert_export(struct psp_data_pdh_cert_export *data,
+				int *psp_ret);
+
+/**
+ * psp_platform_pek_gen - perform PSP PEK_GEN command
+ *
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_pek_gen(int *psp_ret);
+
+/**
+ * psp_platform_pek_cert_import - perform PSP PEK_CERT_IMPORT command
+ *
+ * @data: psp_data_platform_pek_cert_import structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_pek_cert_import(struct psp_data_pek_cert_import *data,
+				int *psp_ret);
+
+/**
+ * psp_platform_pek_csr - perform PSP PEK_CSR command
+ *
+ * @data: psp_data_platform_pek_csr structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_pek_csr(struct psp_data_pek_csr *data, int *psp_ret);
+
+/**
+ * psp_platform_factory_reset - perform PSP FACTORY_RESET command
+ *
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_factory_reset(int *psp_ret);
+
+#else	/* CONFIG_CRYPTO_DEV_PSP_DD is not enabled */
+
+static inline int psp_platform_status(struct psp_data_status *status,
+				      int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_init(struct psp_data_init *init, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_shutdown(int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_launch_start(struct psp_data_launch_start *start,
+					 int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_launch_update(struct psp_data_launch_update *update,
+					  unsigned int timeout, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_launch_finish(struct psp_data_launch_finish *finish,
+					  int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_activate(struct psp_data_activate *activate,
+				     int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_deactivate(struct psp_data_deactivate *deactivate,
+				       int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_df_flush(int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_decommission(struct psp_data_decommission *decommission,
+					 int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_status(struct psp_data_guest_status *status,
+				   int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_dbg_decrypt(struct psp_data_dbg *dbg, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_dbg_encrypt(struct psp_data_dbg *dbg, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_receive_start(struct psp_data_receive_start *start,
+					 int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_receive_update(struct psp_data_receive_update *update,
+					  unsigned int timeout, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_receive_finish(struct psp_data_receive_finish *finish,
+					  int *psp_ret)
+{
+	return -ENODEV;
+}
+static inline int psp_guest_send_start(struct psp_data_send_start *start,
+					 int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_send_update(struct psp_data_send_update *update,
+					  unsigned int timeout, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_send_finish(struct psp_data_send_finish *finish,
+					  int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_pdh_gen(int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_pdh_cert_export(struct psp_data_pdh_cert_export *data,
+				int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_pek_gen(int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_pek_cert_import(struct psp_data_pek_cert_import *data,
+				int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_pek_csr(struct psp_data_pek_csr *data, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_factory_reset(int *psp_ret)
+{
+	return -ENODEV;
+}
+
+#endif	/* CONFIG_CRYPTO_DEV_PSP_DD */
+
+#endif	/* __CCP_PSP_H__ */
diff --git a/include/uapi/linux/Kbuild b/include/uapi/linux/Kbuild
index 185f8ea..af2511a 100644
--- a/include/uapi/linux/Kbuild
+++ b/include/uapi/linux/Kbuild
@@ -470,3 +470,4 @@ header-y += xilinx-v4l2-controls.h
 header-y += zorro.h
 header-y += zorro_ids.h
 header-y += userfaultfd.h
+header-y += ccp-psp.h
diff --git a/include/uapi/linux/ccp-psp.h b/include/uapi/linux/ccp-psp.h
new file mode 100644
index 0000000..e780b46
--- /dev/null
+++ b/include/uapi/linux/ccp-psp.h
@@ -0,0 +1,182 @@
+#ifndef _UAPI_LINUX_CCP_PSP_
+#define _UAPI_LINUX_CCP_PSP_
+
+/*
+ * Userspace interface to communicate with the CCP-PSP driver.
+ */
+
+#include <linux/types.h>
+#include <linux/ioctl.h>
+
+/**
+ * struct psp_data_header - Common PSP communication header
+ * @buffer_len: length of the buffer supplied to the PSP
+ */
+
+struct __attribute__ ((__packed__)) psp_data_header {
+	__u32 buffer_len;				/* In/Out */
+};
+
+/**
+ * struct psp_data_init - PSP INIT command parameters
+ * @hdr: command header
+ * @flags: processing flags
+ */
+struct __attribute__ ((__packed__)) psp_data_init {
+	struct psp_data_header hdr;
+	__u32 flags;				/* In */
+};
+
+/**
+ * struct psp_data_status - PSP PLATFORM_STATUS command parameters
+ * @hdr: command header
+ * @api_major: major API version
+ * @api_minor: minor API version
+ * @state: platform state
+ * @cert_status: bit fields describing certificate status
+ * @flags: platform flags
+ * @guest_count: number of active guests
+ */
+struct __attribute__ ((__packed__)) psp_data_status {
+	struct psp_data_header hdr;
+	__u8 api_major;				/* Out */
+	__u8 api_minor;				/* Out */
+	__u8 state;				/* Out */
+	__u8 cert_status;			/* Out */
+	__u32 flags;				/* Out */
+	__u32 guest_count;			/* Out */
+};
+
+/**
+ * struct psp_data_pek_csr - PSP PEK_CSR command parameters
+ * @hdr: command header
+ * @csr: certificate signing request in PKCS format
+ */
+struct __attribute__((__packed__)) psp_data_pek_csr {
+	struct psp_data_header hdr;			/* In/Out */
+	__u8 csr[];					/* Out */
+};
+
+/**
+ * struct psp_data_pek_cert_import - PSP PEK_CERT_IMPORT command parameters
+ * @hdr: command header
+ * @ncerts: number of certificates in the chain
+ * @cert_len: length of certificates
+ * @certs: certificate chain starting with the PEK and ending with the CA certificate
+ */
+struct __attribute__((__packed__)) psp_data_pek_cert_import {
+	struct psp_data_header hdr;			/* In/Out */
+	__u32 ncerts;					/* In */
+	__u32 cert_len;					/* In */
+	__u8 certs[];					/* In */
+};
+
+/**
+ * struct psp_data_pdh_cert_export - PSP PDH_CERT_EXPORT command parameters
+ * @hdr: command header
+ * @api_major: API major number
+ * @api_minor: API minor number
+ * @serial: platform serial number
+ * @pdh_pub_qx: the Qx parameter of the target PDH public key
+ * @pdh_pub_qy: the Qy parameter of the target PDH public key
+ * @pek_sig_r: the r component of the PEK signature
+ * @pek_sig_s: the s component of the PEK signature
+ * @cek_sig_r: the r component of the CEK signature
+ * @cek_sig_s: the s component of the CEK signature
+ * @cek_pub_qx: the Qx parameter of the CEK public key
+ * @cek_pub_qy: the Qy parameter of the CEK public key
+ * @ncerts: number of certificates in certificate chain
+ * @cert_len: length of certificates
+ * @certs: certificate chain starting with the PEK and ending with the CA certificate
+ */
+struct __attribute__((__packed__)) psp_data_pdh_cert_export {
+	struct psp_data_header hdr;			/* In/Out */
+	__u8 api_major;					/* Out */
+	__u8 api_minor;					/* Out */
+	__u8 reserved1[2];
+	__u32 serial;					/* Out */
+	__u8 pdh_pub_qx[32];				/* Out */
+	__u8 pdh_pub_qy[32];				/* Out */
+	__u8 pek_sig_r[32];				/* Out */
+	__u8 pek_sig_s[32];				/* Out */
+	__u8 cek_sig_r[32];				/* Out */
+	__u8 cek_sig_s[32];				/* Out */
+	__u8 cek_pub_qx[32];				/* Out */
+	__u8 cek_pub_qy[32];				/* Out */
+	__u32 ncerts;					/* Out */
+	__u32 cert_len;					/* Out */
+	__u8 certs[];					/* Out */
+};
+
+/**
+ * platform and management commands
+ */
+enum psp_cmd {
+	PSP_CMD_INIT = 1,
+	PSP_CMD_LAUNCH_START,
+	PSP_CMD_LAUNCH_UPDATE,
+	PSP_CMD_LAUNCH_FINISH,
+	PSP_CMD_ACTIVATE,
+	PSP_CMD_DF_FLUSH,
+	PSP_CMD_SHUTDOWN,
+	PSP_CMD_FACTORY_RESET,
+	PSP_CMD_PLATFORM_STATUS,
+	PSP_CMD_PEK_GEN,
+	PSP_CMD_PEK_CSR,
+	PSP_CMD_PEK_CERT_IMPORT,
+	PSP_CMD_PDH_GEN,
+	PSP_CMD_PDH_CERT_EXPORT,
+	PSP_CMD_SEND_START,
+	PSP_CMD_SEND_UPDATE,
+	PSP_CMD_SEND_FINISH,
+	PSP_CMD_RECEIVE_START,
+	PSP_CMD_RECEIVE_UPDATE,
+	PSP_CMD_RECEIVE_FINISH,
+	PSP_CMD_GUEST_STATUS,
+	PSP_CMD_DEACTIVATE,
+	PSP_CMD_DECOMMISSION,
+	PSP_CMD_DBG_DECRYPT,
+	PSP_CMD_DBG_ENCRYPT,
+	PSP_CMD_MAX,
+};
+
+/**
+ * status code returned by the commands
+ */
+enum psp_ret_code {
+	PSP_RET_SUCCESS = 0,
+	PSP_RET_INVALID_PLATFORM_STATE,
+	PSP_RET_INVALID_GUEST_STATE,
+	PSP_RET_INVALID_CONFIG,
+	PSP_RET_CMDBUF_TOO_SMALL,
+	PSP_RET_ALREADY_OWNED,
+	PSP_RET_INVALID_CERTIFICATE,
+	PSP_RET_POLICY_FAILURE,
+	PSP_RET_INACTIVE,
+	PSP_RET_INVALID_ADDRESS,
+	PSP_RET_BAD_SIGNATURE,
+	PSP_RET_BAD_MEASUREMENT,
+	PSP_RET_ASID_OWNED,
+	PSP_RET_INVALID_ASID,
+	PSP_RET_WBINVD_REQUIRED,
+	PSP_RET_DFFLUSH_REQUIRED,
+	PSP_RET_INVALID_GUEST,
+};
+
+/**
+ * struct psp_issue_cmd - PSP ioctl parameters
+ * @cmd: PSP command to execute
+ * @opaque: pointer to the command structure
+ * @psp_ret: PSP return code on failure
+ */
+struct psp_issue_cmd {
+	__u32 cmd;					/* In */
+	__u64 opaque;					/* In */
+	__u32 psp_ret;					/* Out */
+};
+
+#define PSP_IOC_TYPE		'P'
+#define PSP_ISSUE_CMD	_IOWR(PSP_IOC_TYPE, 0x0, struct psp_issue_cmd)
+
+#endif /* _UAPI_LINUX_CCP_PSP_ */
+

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 18/28] crypto: add AMD Platform Security Processor driver
  2016-08-22 23:23 ` Brijesh Singh
  (?)
  (?)
@ 2016-08-22 23:27   ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:27 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail, linux-kernel, mcgrof, linux-crypto, pbonzini, akpm,
	davem

Add a driver to communicate with the Secure Encrypted Virtualization
(SEV) firmware running within the AMD Secure Processor. The driver
provides a secure key management interface for SEV guests.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 drivers/crypto/Kconfig       |   11 +
 drivers/crypto/Makefile      |    1 
 drivers/crypto/psp/Kconfig   |    8 
 drivers/crypto/psp/Makefile  |    3 
 drivers/crypto/psp/psp-dev.c |  220 +++++++++++
 drivers/crypto/psp/psp-dev.h |   95 +++++
 drivers/crypto/psp/psp-ops.c |  454 +++++++++++++++++++++++
 drivers/crypto/psp/psp-pci.c |  376 +++++++++++++++++++
 include/linux/ccp-psp.h      |  833 ++++++++++++++++++++++++++++++++++++++++++
 include/uapi/linux/Kbuild    |    1 
 include/uapi/linux/ccp-psp.h |  182 +++++++++
 11 files changed, 2184 insertions(+)
 create mode 100644 drivers/crypto/psp/Kconfig
 create mode 100644 drivers/crypto/psp/Makefile
 create mode 100644 drivers/crypto/psp/psp-dev.c
 create mode 100644 drivers/crypto/psp/psp-dev.h
 create mode 100644 drivers/crypto/psp/psp-ops.c
 create mode 100644 drivers/crypto/psp/psp-pci.c
 create mode 100644 include/linux/ccp-psp.h
 create mode 100644 include/uapi/linux/ccp-psp.h

diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index 1af94e2..3bdbc51 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -464,6 +464,17 @@ if CRYPTO_DEV_CCP
 	source "drivers/crypto/ccp/Kconfig"
 endif
 
+config CRYPTO_DEV_PSP
+	bool "Support for AMD Platform Security Processor"
+	depends on X86 && PCI
+	help
+	  The AMD Platform Security Processor provides hardware key-
+	  management services for VMGuard encrypted memory.
+
+if CRYPTO_DEV_PSP
+	source "drivers/crypto/psp/Kconfig"
+endif
+
 config CRYPTO_DEV_MXS_DCP
 	tristate "Support for Freescale MXS DCP"
 	depends on (ARCH_MXS || ARCH_MXC)
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 3c6432d..1ea1e08 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -3,6 +3,7 @@ obj-$(CONFIG_CRYPTO_DEV_ATMEL_SHA) += atmel-sha.o
 obj-$(CONFIG_CRYPTO_DEV_ATMEL_TDES) += atmel-tdes.o
 obj-$(CONFIG_CRYPTO_DEV_BFIN_CRC) += bfin_crc.o
 obj-$(CONFIG_CRYPTO_DEV_CCP) += ccp/
+obj-$(CONFIG_CRYPTO_DEV_PSP) += psp/
 obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM) += caam/
 obj-$(CONFIG_CRYPTO_DEV_GEODE) += geode-aes.o
 obj-$(CONFIG_CRYPTO_DEV_HIFN_795X) += hifn_795x.o
diff --git a/drivers/crypto/psp/Kconfig b/drivers/crypto/psp/Kconfig
new file mode 100644
index 0000000..acd9b87
--- /dev/null
+++ b/drivers/crypto/psp/Kconfig
@@ -0,0 +1,8 @@
+config CRYPTO_DEV_PSP_DD
+	tristate "PSP Key Management device driver"
+	depends on CRYPTO_DEV_PSP
+	default m
+	help
+	  Provides the interface to use the AMD PSP key management APIs
+	  for use with AMD Secure Encrypted Virtualization. If you
+	  choose 'M' here, this module will be called psp.
diff --git a/drivers/crypto/psp/Makefile b/drivers/crypto/psp/Makefile
new file mode 100644
index 0000000..1b7d00c
--- /dev/null
+++ b/drivers/crypto/psp/Makefile
@@ -0,0 +1,3 @@
+obj-$(CONFIG_CRYPTO_DEV_PSP_DD) += psp.o
+psp-objs := psp-dev.o psp-ops.o
+psp-$(CONFIG_PCI) += psp-pci.o
diff --git a/drivers/crypto/psp/psp-dev.c b/drivers/crypto/psp/psp-dev.c
new file mode 100644
index 0000000..65d5c7e
--- /dev/null
+++ b/drivers/crypto/psp/psp-dev.c
@@ -0,0 +1,220 @@
+/*
+ * AMD Cryptographic Coprocessor (CCP) driver
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/interrupt.h>
+#include <linux/dma-mapping.h>
+#include <linux/string.h>
+#include <linux/wait.h>
+
+#include "psp-dev.h"
+
+MODULE_AUTHOR("Advanced Micro Devices, Inc.");
+MODULE_LICENSE("GPL");
+MODULE_VERSION("0.1.0");
+MODULE_DESCRIPTION("AMD VMGuard key-management driver prototype");
+
+static struct psp_device *psp_master;
+
+static LIST_HEAD(psp_devs);
+static DEFINE_SPINLOCK(psp_devs_lock);
+
+static atomic_t psp_id;
+
+static void psp_add_device(struct psp_device *psp)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&psp_devs_lock, flags);
+
+	list_add_tail(&psp->entry, &psp_devs);
+	psp_master = psp->get_master(&psp_devs);
+
+	spin_unlock_irqrestore(&psp_devs_lock, flags);
+}
+
+static void psp_del_device(struct psp_device *psp)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&psp_devs_lock, flags);
+
+	list_del(&psp->entry);
+	if (psp == psp_master)
+		psp_master = NULL;
+
+	spin_unlock_irqrestore(&psp_devs_lock, flags);
+}
+
+static void psp_check_support(struct psp_device *psp)
+{
+	if (ioread32(psp->io_regs + PSP_CMDRESP))
+		psp->sev_enabled = 1;
+}
+
+/**
+ * psp_get_master_device - returns a pointer to the PSP master device structure
+ *
+ * Returns NULL if a PSP master device is not present, PSP device structure
+ * otherwise.
+ */
+struct psp_device *psp_get_master_device(void)
+{
+	return psp_master;
+}
+EXPORT_SYMBOL_GPL(psp_get_master_device);
+
+/**
+ * psp_get_device - returns a pointer to the PSP device structure
+ *
+ * Returns NULL if a PSP device is not present, PSP device structure otherwise.
+ */
+struct psp_device *psp_get_device(void)
+{
+	struct psp_device *psp = NULL;
+	unsigned long flags;
+
+	spin_lock_irqsave(&psp_devs_lock, flags);
+
+	if (list_empty(&psp_devs))
+		goto unlock;
+
+	psp = list_first_entry(&psp_devs, struct psp_device, entry);
+
+unlock:
+	spin_unlock_irqrestore(&psp_devs_lock, flags);
+
+	return psp;
+}
+EXPORT_SYMBOL_GPL(psp_get_device);
+
+/**
+ * psp_alloc_struct - allocate and initialize the psp_device struct
+ *
+ * @dev: device struct of the PSP
+ */
+struct psp_device *psp_alloc_struct(struct device *dev)
+{
+	struct psp_device *psp;
+
+	psp = devm_kzalloc(dev, sizeof(*psp), GFP_KERNEL);
+	if (psp == NULL) {
+		dev_err(dev, "unable to allocate device struct\n");
+		return NULL;
+	}
+	psp->dev = dev;
+
+	psp->id = atomic_inc_return(&psp_id);
+	snprintf(psp->name, sizeof(psp->name), "psp%u", psp->id);
+
+	init_waitqueue_head(&psp->int_queue);
+
+	return psp;
+}
+
+/**
+ * psp_init - initialize the PSP device
+ *
+ * @psp: psp_device struct
+ */
+int psp_init(struct psp_device *psp)
+{
+	int ret;
+
+	psp_check_support(psp);
+
+	/* Disable and clear interrupts until ready */
+	iowrite32(0, psp->io_regs + PSP_P2CMSG_INTEN);
+	iowrite32(0xffffffff, psp->io_regs + PSP_P2CMSG_INTSTS);
+
+	/* Request an irq */
+	ret = psp->get_irq(psp);
+	if (ret) {
+		dev_err(psp->dev, "unable to allocate IRQ\n");
+		return ret;
+	}
+
+	/* Make the device struct available */
+	psp_add_device(psp);
+
+	/* Enable interrupts */
+	iowrite32(1 << PSP_CMD_COMPLETE_REG, psp->io_regs + PSP_P2CMSG_INTEN);
+
+	ret = psp_ops_init(psp);
+	if (ret)
+		dev_err(psp->dev, "psp_ops_init returned %d\n", ret);
+
+	return ret;
+}
+
+/**
+ * psp_destroy - tear down the PSP device
+ *
+ * @psp: psp_device struct
+ */
+void psp_destroy(struct psp_device *psp)
+{
+	psp_ops_exit(psp);
+
+	/* Remove general access to the device struct */
+	psp_del_device(psp);
+
+	psp->free_irq(psp);
+}
+
+/**
+ * psp_irq_handler - handle interrupts generated by the PSP device
+ *
+ * @irq: the irq associated with the interrupt
+ * @data: the data value supplied when the irq was created
+ */
+irqreturn_t psp_irq_handler(int irq, void *data)
+{
+	struct device *dev = data;
+	struct psp_device *psp = dev_get_drvdata(dev);
+	unsigned int status;
+
+	status = ioread32(psp->io_regs + PSP_P2CMSG_INTSTS);
+	if (status & (1 << PSP_CMD_COMPLETE_REG)) {
+		int reg;
+
+		reg = ioread32(psp->io_regs + PSP_CMDRESP);
+		if (reg & PSP_CMDRESP_RESP) {
+			psp->int_rcvd = 1;
+			wake_up_interruptible(&psp->int_queue);
+		}
+	}
+
+	iowrite32(status, psp->io_regs + PSP_P2CMSG_INTSTS);
+
+	return IRQ_HANDLED;
+}
+
+static int __init psp_mod_init(void)
+{
+	int ret;
+
+	ret = psp_pci_init();
+	if (ret)
+		return ret;
+
+	return 0;
+}
+module_init(psp_mod_init);
+
+static void __exit psp_mod_exit(void)
+{
+	psp_pci_exit();
+}
+module_exit(psp_mod_exit);
diff --git a/drivers/crypto/psp/psp-dev.h b/drivers/crypto/psp/psp-dev.h
new file mode 100644
index 0000000..bb75ca2
--- /dev/null
+++ b/drivers/crypto/psp/psp-dev.h
@@ -0,0 +1,95 @@
+
+#ifndef __PSP_DEV_H__
+#define __PSP_DEV_H__
+
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/spinlock.h>
+#include <linux/mutex.h>
+#include <linux/list.h>
+#include <linux/wait.h>
+#include <linux/dmapool.h>
+#include <linux/hw_random.h>
+#include <linux/interrupt.h>
+#include <linux/miscdevice.h>
+
+#define PSP_P2CMSG_INTEN		0x0110
+#define PSP_P2CMSG_INTSTS		0x0114
+
+#define PSP_C2PMSG_ATTR_0		0x0118
+#define PSP_C2PMSG_ATTR_1		0x011c
+#define PSP_C2PMSG_ATTR_2		0x0120
+#define PSP_C2PMSG_ATTR_3		0x0124
+#define PSP_P2CMSG_ATTR_0		0x0128
+
+#define PSP_C2PMSG(_num)		((_num) << 2)
+#define PSP_CMDRESP			PSP_C2PMSG(32)
+#define PSP_CMDBUFF_ADDR_LO		PSP_C2PMSG(56)
+#define PSP_CMDBUFF_ADDR_HI 		PSP_C2PMSG(57)
+
+#define PSP_P2CMSG(_num)		((_num) << 2)
+#define PSP_CMD_COMPLETE_REG		1
+#define PSP_CMD_COMPLETE		PSP_P2CMSG(PSP_CMD_COMPLETE_REG)
+
+#define PSP_CMDRESP_CMD_SHIFT		16
+#define PSP_CMDRESP_IOC			BIT(0)
+#define PSP_CMDRESP_RESP		BIT(31)
+#define PSP_CMDRESP_ERR_MASK		0xffff
+
+#define PSP_DRIVER_NAME			"psp"
+
+struct psp_device {
+	struct list_head entry;
+
+	struct device *dev;
+
+	unsigned int id;
+	char name[32];
+
+	struct dentry *debugfs;
+	struct miscdevice misc;
+
+	unsigned int sev_enabled;
+
+	/*
+	 * Bus-specific device information
+	 */
+	void *dev_specific;
+	int (*get_irq)(struct psp_device *);
+	void (*free_irq)(struct psp_device *);
+	unsigned int irq;
+	struct psp_device *(*get_master)(struct list_head *list);
+
+	/*
+	 * I/O area used for device communication. Writing to the
+	 * mailbox registers generates an interrupt on the PSP.
+	 */
+	void __iomem *io_map;
+	void __iomem *io_regs;
+
+	/* Interrupt wait queue */
+	wait_queue_head_t int_queue;
+	unsigned int int_rcvd;
+};
+
+struct psp_device *psp_get_master_device(void);
+struct psp_device *psp_get_device(void);
+
+#ifdef CONFIG_PCI
+int psp_pci_init(void);
+void psp_pci_exit(void);
+#else
+static inline int psp_pci_init(void) { return 0; }
+static inline void psp_pci_exit(void) { }
+#endif
+
+struct psp_device *psp_alloc_struct(struct device *dev);
+int psp_init(struct psp_device *psp);
+void psp_destroy(struct psp_device *psp);
+
+int psp_ops_init(struct psp_device *psp);
+void psp_ops_exit(struct psp_device *psp);
+
+irqreturn_t psp_irq_handler(int irq, void *data);
+
+#endif /* __PSP_DEV_H__ */
diff --git a/drivers/crypto/psp/psp-ops.c b/drivers/crypto/psp/psp-ops.c
new file mode 100644
index 0000000..81e8dc8
--- /dev/null
+++ b/drivers/crypto/psp/psp-ops.c
@@ -0,0 +1,454 @@
+
+#include <linux/module.h>
+#include <linux/fs.h>
+#include <linux/miscdevice.h>
+#include <linux/delay.h>
+#include <linux/jiffies.h>
+#include <linux/wait.h>
+#include <linux/mutex.h>
+#include <linux/ccp-psp.h>
+
+#include "psp-dev.h"
+
+static unsigned int psp_poll;
+module_param(psp_poll, uint, 0444);
+MODULE_PARM_DESC(psp_poll, "Poll for command completion - any non-zero value");
+
+#define PSP_DEFAULT_TIMEOUT	2
+
+static DEFINE_MUTEX(psp_cmd_mutex);
+
+static int psp_wait_cmd_poll(struct psp_device *psp, unsigned int timeout,
+			     unsigned int *reg)
+{
+	int wait = timeout * 10;	/* 100ms sleep => timeout * 10 */
+
+	while (--wait) {
+		msleep(100);
+
+		*reg = ioread32(psp->io_regs + PSP_CMDRESP);
+		if (*reg & PSP_CMDRESP_RESP)
+			break;
+	}
+
+	if (!wait) {
+		dev_err(psp->dev, "psp command timed out\n");
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}
+
+static int psp_wait_cmd_ioc(struct psp_device *psp, unsigned int timeout,
+			    unsigned int *reg)
+{
+	unsigned long jiffie_timeout = timeout;
+	long ret;
+
+	jiffie_timeout *= HZ;
+
+	ret = wait_event_interruptible_timeout(psp->int_queue, psp->int_rcvd,
+					       jiffie_timeout);
+	if (ret <= 0) {
+		dev_err(psp->dev, "psp command timed out\n");
+		return -ETIMEDOUT;
+	}
+
+	psp->int_rcvd = 0;
+
+	*reg = ioread32(psp->io_regs + PSP_CMDRESP);
+
+	return 0;
+}
+
+static int psp_wait_cmd(struct psp_device *psp, unsigned int timeout,
+			unsigned int *reg)
+{
+	return (*reg & PSP_CMDRESP_IOC) ? psp_wait_cmd_ioc(psp, timeout, reg)
+					: psp_wait_cmd_poll(psp, timeout, reg);
+}
+
+static int psp_issue_cmd(enum psp_cmd cmd, void *data, unsigned int timeout,
+			 int *psp_ret)
+{
+	struct psp_device *psp = psp_get_master_device();
+	unsigned int phys_lsb, phys_msb;
+	unsigned int reg, ret;
+
+	if (psp_ret)
+		*psp_ret = 0;
+
+	if (!psp)
+		return -ENODEV;
+
+	if (!psp->sev_enabled)
+		return -ENOTSUPP;
+
+	/* Set the physical address for the PSP */
+	phys_lsb = data ? lower_32_bits(__psp_pa(data)) : 0;
+	phys_msb = data ? upper_32_bits(__psp_pa(data)) : 0;
+
+	/* Only one command at a time... */
+	mutex_lock(&psp_cmd_mutex);
+
+	iowrite32(phys_lsb, psp->io_regs + PSP_CMDBUFF_ADDR_LO);
+	iowrite32(phys_msb, psp->io_regs + PSP_CMDBUFF_ADDR_HI);
+	wmb();
+
+	reg = cmd;
+	reg <<= PSP_CMDRESP_CMD_SHIFT;
+	reg |= psp_poll ? 0 : PSP_CMDRESP_IOC;
+	iowrite32(reg, psp->io_regs + PSP_CMDRESP);
+
+	ret = psp_wait_cmd(psp, timeout, &reg);
+	if (ret)
+		goto unlock;
+
+	if (psp_ret)
+		*psp_ret = reg & PSP_CMDRESP_ERR_MASK;
+
+	if (reg & PSP_CMDRESP_ERR_MASK) {
+		dev_err(psp->dev, "psp command %u failed (%#010x)\n", cmd, reg & PSP_CMDRESP_ERR_MASK);
+		ret = -EIO;
+	}
+
+unlock:
+	mutex_unlock(&psp_cmd_mutex);
+
+	return ret;
+}
+
+int psp_platform_init(struct psp_data_init *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_INIT, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_init);
+
+int psp_platform_shutdown(int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_SHUTDOWN, NULL, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_shutdown);
+
+int psp_platform_status(struct psp_data_status *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PLATFORM_STATUS, data,
+			     PSP_DEFAULT_TIMEOUT, psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_status);
+
+int psp_guest_launch_start(struct psp_data_launch_start *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_LAUNCH_START, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_launch_start);
+
+int psp_guest_launch_update(struct psp_data_launch_update *data,
+			    unsigned int timeout, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_LAUNCH_UPDATE, data, timeout, psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_launch_update);
+
+int psp_guest_launch_finish(struct psp_data_launch_finish *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_LAUNCH_FINISH, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_launch_finish);
+
+int psp_guest_activate(struct psp_data_activate *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_ACTIVATE, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_activate);
+
+int psp_guest_deactivate(struct psp_data_deactivate *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_DEACTIVATE, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_deactivate);
+
+int psp_guest_df_flush(int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_DF_FLUSH, NULL, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_df_flush);
+
+int psp_guest_decommission(struct psp_data_decommission *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_DECOMMISSION, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_decommission);
+
+int psp_guest_status(struct psp_data_guest_status *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_GUEST_STATUS, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_status);
+
+int psp_dbg_decrypt(struct psp_data_dbg *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_DBG_DECRYPT, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_dbg_decrypt);
+
+int psp_dbg_encrypt(struct psp_data_dbg *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_DBG_ENCRYPT, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_dbg_encrypt);
+
+int psp_guest_receive_start(struct psp_data_receive_start *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_RECEIVE_START, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_receive_start);
+
+int psp_guest_receive_update(struct psp_data_receive_update *data,
+			    unsigned int timeout, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_RECEIVE_UPDATE, data, timeout, psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_receive_update);
+
+int psp_guest_receive_finish(struct psp_data_receive_finish *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_RECEIVE_FINISH, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_receive_finish);
+
+int psp_guest_send_start(struct psp_data_send_start *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_SEND_START, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_send_start);
+
+int psp_guest_send_update(struct psp_data_send_update *data,
+			    unsigned int timeout, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_SEND_UPDATE, data, timeout, psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_send_update);
+
+int psp_guest_send_finish(struct psp_data_send_finish *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_SEND_FINISH, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_send_finish);
+
+int psp_platform_pdh_gen(int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PDH_GEN, NULL, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_pdh_gen);
+
+int psp_platform_pdh_cert_export(struct psp_data_pdh_cert_export *data,
+				int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PDH_CERT_EXPORT, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_pdh_cert_export);
+
+int psp_platform_pek_gen(int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PEK_GEN, NULL, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_pek_gen);
+
+int psp_platform_pek_cert_import(struct psp_data_pek_cert_import *data,
+				 int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PEK_CERT_IMPORT, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_pek_cert_import);
+
+int psp_platform_pek_csr(struct psp_data_pek_csr *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PEK_CSR, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_pek_csr);
+
+int psp_platform_factory_reset(int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_FACTORY_RESET, NULL, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_factory_reset);
+
+static int psp_copy_to_user(void __user *argp, void *data, size_t size)
+{
+	int ret = 0;
+
+	if (copy_to_user(argp, data, size))
+		ret = -EFAULT;
+	free_pages_exact(data, size);
+
+	return ret;
+}
+
+static void *psp_copy_from_user(void __user *argp, size_t *size)
+{
+	u32 buffer_len;
+	void *data;
+
+	if (copy_from_user(&buffer_len, argp, sizeof(buffer_len)))
+		return ERR_PTR(-EFAULT);
+
+	data = alloc_pages_exact(buffer_len, GFP_KERNEL | __GFP_ZERO);
+	if (!data)
+		return ERR_PTR(-ENOMEM);
+	*size = buffer_len;
+
+	if (copy_from_user(data, argp, buffer_len)) {
+		free_pages_exact(data, *size);
+		return ERR_PTR(-EFAULT);
+	}
+
+	return data;
+}
+
+static long psp_ioctl(struct file *file, unsigned int ioctl, unsigned long arg)
+{
+	int ret = -EFAULT;
+	void *data = NULL;
+	size_t buffer_len = 0;
+	void __user *argp = (void __user *)arg;
+	struct psp_issue_cmd input;
+
+	if (ioctl != PSP_ISSUE_CMD)
+		return -EINVAL;
+
+	/* get input parameters */
+	if (copy_from_user(&input, argp, sizeof(struct psp_issue_cmd)))
+		return -EFAULT;
+
+	if (input.cmd > PSP_CMD_MAX)
+		return -EINVAL;
+
+	switch (input.cmd) {
+
+	case PSP_CMD_INIT: {
+		struct psp_data_init *init;
+
+		data = psp_copy_from_user((void __user *)input.opaque, &buffer_len);
+		if (IS_ERR(data))
+			break;
+
+		init = data;
+		ret = psp_platform_init(init, &input.psp_ret);
+		break;
+	}
+	case PSP_CMD_SHUTDOWN: {
+		ret = psp_platform_shutdown(&input.psp_ret);
+		break;
+	}
+	case PSP_CMD_FACTORY_RESET: {
+		ret = psp_platform_factory_reset(&input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PLATFORM_STATUS: {
+		struct psp_data_status *status;
+
+		data = psp_copy_from_user((void __user *)input.opaque, &buffer_len);
+		if (IS_ERR(data))
+			break;
+
+		status = data;
+		ret = psp_platform_status(status, &input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PEK_GEN: {
+		ret = psp_platform_pek_gen(&input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PEK_CSR: {
+		struct psp_data_pek_csr *pek_csr;
+
+		data = psp_copy_from_user((void __user *)input.opaque, &buffer_len);
+		if (IS_ERR(data))
+			break;
+
+		pek_csr = data;
+		ret = psp_platform_pek_csr(pek_csr, &input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PEK_CERT_IMPORT: {
+		struct psp_data_pek_cert_import *import;
+
+		data = psp_copy_from_user((void __user *)input.opaque, &buffer_len);
+		if (IS_ERR(data))
+			break;
+
+		import = data;
+		ret = psp_platform_pek_cert_import(import, &input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PDH_GEN: {
+		ret = psp_platform_pdh_gen(&input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PDH_CERT_EXPORT: {
+		struct psp_data_pdh_cert_export *export;
+
+		data = psp_copy_from_user((void __user *)input.opaque, &buffer_len);
+		if (IS_ERR(data))
+			break;
+
+		export = data;
+		ret = psp_platform_pdh_cert_export(export, &input.psp_ret);
+		break;
+	}
+	default:
+		ret = -EINVAL;
+	}
+
+	if (!IS_ERR_OR_NULL(data) && psp_copy_to_user((void __user *)input.opaque,
+				data, buffer_len))
+		ret = -EFAULT;
+
+	if (copy_to_user(argp, &input, sizeof(struct psp_issue_cmd)))
+		ret = -EFAULT;
+
+	return ret;
+}
+
+static const struct file_operations fops = {
+	.owner = THIS_MODULE,
+	.unlocked_ioctl = psp_ioctl,
+};
+
+int psp_ops_init(struct psp_device *psp)
+{
+	struct miscdevice *misc = &psp->misc;
+
+	misc->minor = MISC_DYNAMIC_MINOR;
+	misc->name = psp->name;
+	misc->fops = &fops;
+
+	return misc_register(misc);
+}
+
+void psp_ops_exit(struct psp_device *psp)
+{
+	misc_deregister(&psp->misc);
+}
diff --git a/drivers/crypto/psp/psp-pci.c b/drivers/crypto/psp/psp-pci.c
new file mode 100644
index 0000000..2b4c379
--- /dev/null
+++ b/drivers/crypto/psp/psp-pci.c
@@ -0,0 +1,376 @@
+/*
+ * AMD Cryptographic Coprocessor (CCP) driver
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/pci_ids.h>
+#include <linux/dma-mapping.h>
+#include <linux/sched.h>
+#include <linux/interrupt.h>
+
+#include "psp-dev.h"
+
+#define IO_BAR				2
+#define IO_OFFSET			0x10500
+
+#define MSIX_VECTORS			2
+
+struct psp_msix {
+	u32 vector;
+	char name[16];
+};
+
+struct psp_pci {
+	struct pci_dev *pdev;
+	int msix_count;
+	struct psp_msix msix[MSIX_VECTORS];
+};
+
+static int psp_get_msix_irqs(struct psp_device *psp)
+{
+	struct psp_pci *psp_pci = psp->dev_specific;
+	struct device *dev = psp->dev;
+	struct pci_dev *pdev = container_of(dev, struct pci_dev, dev);
+	struct msix_entry msix_entry[MSIX_VECTORS];
+	unsigned int name_len = sizeof(psp_pci->msix[0].name) - 1;
+	int v, ret;
+
+	for (v = 0; v < ARRAY_SIZE(msix_entry); v++)
+		msix_entry[v].entry = v;
+
+	ret = pci_enable_msix_range(pdev, msix_entry, 1, v);
+	if (ret < 0)
+		return ret;
+
+	psp_pci->msix_count = ret;
+	for (v = 0; v < psp_pci->msix_count; v++) {
+		/* Set the interrupt names and request the irqs */
+		snprintf(psp_pci->msix[v].name, name_len, "%s-%u", psp->name, v);
+		psp_pci->msix[v].vector = msix_entry[v].vector;
+		ret = request_irq(psp_pci->msix[v].vector, psp_irq_handler,
+				  0, psp_pci->msix[v].name, dev);
+		if (ret) {
+			dev_notice(dev, "unable to allocate MSI-X IRQ (%d)\n",
+				   ret);
+			goto e_irq;
+		}
+	}
+
+	return 0;
+
+e_irq:
+	while (v--)
+		free_irq(psp_pci->msix[v].vector, dev);
+	pci_disable_msix(pdev);
+	psp_pci->msix_count = 0;
+
+	return ret;
+}
+
+static int psp_get_msi_irq(struct psp_device *psp)
+{
+	struct device *dev = psp->dev;
+	struct pci_dev *pdev = container_of(dev, struct pci_dev, dev);
+	int ret;
+
+	ret = pci_enable_msi(pdev);
+	if (ret)
+		return ret;
+
+	psp->irq = pdev->irq;
+	ret = request_irq(psp->irq, psp_irq_handler, 0, psp->name, dev);
+	if (ret) {
+		dev_notice(dev, "unable to allocate MSI IRQ (%d)\n", ret);
+		goto e_msi;
+	}
+
+	return 0;
+
+e_msi:
+	pci_disable_msi(pdev);
+
+	return ret;
+}
+
+static int psp_get_irqs(struct psp_device *psp)
+{
+	struct device *dev = psp->dev;
+	int ret;
+
+	ret = psp_get_msix_irqs(psp);
+	if (!ret)
+		return 0;
+
+	/* Couldn't get MSI-X vectors, try MSI */
+	dev_notice(dev, "could not enable MSI-X (%d), trying MSI\n", ret);
+	ret = psp_get_msi_irq(psp);
+	if (!ret)
+		return 0;
+
+	/* Couldn't get MSI interrupt */
+	dev_notice(dev, "could not enable MSI (%d), trying PCI\n", ret);
+
+	return ret;
+}
+
+void psp_free_irqs(struct psp_device *psp)
+{
+	struct psp_pci *psp_pci = psp->dev_specific;
+	struct device *dev = psp->dev;
+	struct pci_dev *pdev = container_of(dev, struct pci_dev, dev);
+
+	if (psp_pci->msix_count) {
+		while (psp_pci->msix_count--)
+			free_irq(psp_pci->msix[psp_pci->msix_count].vector,
+				 dev);
+		pci_disable_msix(pdev);
+	} else {
+		free_irq(psp->irq, dev);
+		pci_disable_msi(pdev);
+	}
+}
+
+static bool psp_is_master(struct psp_device *cur, struct psp_device *new)
+{
+	struct psp_pci *psp_pci_cur, *psp_pci_new;
+	struct pci_dev *pdev_cur, *pdev_new;
+
+	psp_pci_cur = cur->dev_specific;
+	psp_pci_new = new->dev_specific;
+
+	pdev_cur = psp_pci_cur->pdev;
+	pdev_new = psp_pci_new->pdev;
+
+	if (pdev_new->bus->number < pdev_cur->bus->number)
+		return true;
+
+	if (PCI_SLOT(pdev_new->devfn) < PCI_SLOT(pdev_cur->devfn))
+		return true;
+
+	if (PCI_FUNC(pdev_new->devfn) < PCI_FUNC(pdev_cur->devfn))
+		return true;
+
+	return false;
+}
+
+static struct psp_device *psp_get_master(struct list_head *list)
+{
+	struct psp_device *psp, *tmp;
+
+	psp = NULL;
+	list_for_each_entry(tmp, list, entry) {
+		if (!psp || psp_is_master(psp, tmp))
+			psp = tmp;
+	}
+
+	return psp;
+}
+
+static int psp_find_mmio_area(struct psp_device *psp)
+{
+	struct device *dev = psp->dev;
+	struct pci_dev *pdev = container_of(dev, struct pci_dev, dev);
+	unsigned long io_flags;
+
+	io_flags = pci_resource_flags(pdev, IO_BAR);
+	if (io_flags & IORESOURCE_MEM)
+		return IO_BAR;
+
+	return -EIO;
+}
+
+static int psp_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+	struct psp_device *psp;
+	struct psp_pci *psp_pci;
+	struct device *dev = &pdev->dev;
+	unsigned int bar;
+	int ret;
+
+	ret = -ENOMEM;
+	psp = psp_alloc_struct(dev);
+	if (!psp)
+		goto e_err;
+
+	psp_pci = devm_kzalloc(dev, sizeof(*psp_pci), GFP_KERNEL);
+	if (!psp_pci) {
+		ret = -ENOMEM;
+		goto e_err;
+	}
+	psp_pci->pdev = pdev;
+	psp->dev_specific = psp_pci;
+	psp->get_irq = psp_get_irqs;
+	psp->free_irq = psp_free_irqs;
+	psp->get_master = psp_get_master;
+
+	ret = pci_request_regions(pdev, PSP_DRIVER_NAME);
+	if (ret) {
+		dev_err(dev, "pci_request_regions failed (%d)\n", ret);
+		goto e_err;
+	}
+
+	ret = pci_enable_device(pdev);
+	if (ret) {
+		dev_err(dev, "pci_enable_device failed (%d)\n", ret);
+		goto e_regions;
+	}
+
+	pci_set_master(pdev);
+
+	ret = psp_find_mmio_area(psp);
+	if (ret < 0)
+		goto e_device;
+	bar = ret;
+
+	ret = -EIO;
+	psp->io_map = pci_iomap(pdev, bar, 0);
+	if (!psp->io_map) {
+		dev_err(dev, "pci_iomap failed\n");
+		goto e_device;
+	}
+	psp->io_regs = psp->io_map + IO_OFFSET;
+
+	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(48));
+	if (ret) {
+		ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
+		if (ret) {
+			dev_err(dev, "dma_set_mask_and_coherent failed (%d)\n",
+				ret);
+			goto e_iomap;
+		}
+	}
+
+	dev_set_drvdata(dev, psp);
+
+	ret = psp_init(psp);
+	if (ret)
+		goto e_iomap;
+
+	dev_notice(dev, "enabled\n");
+
+	return 0;
+
+e_iomap:
+	pci_iounmap(pdev, psp->io_map);
+
+e_device:
+	pci_disable_device(pdev);
+
+e_regions:
+	pci_release_regions(pdev);
+
+e_err:
+	dev_notice(dev, "initialization failed\n");
+	return ret;
+}
+
+static void psp_pci_remove(struct pci_dev *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct psp_device *psp = dev_get_drvdata(dev);
+
+	if (!psp)
+		return;
+
+	psp_destroy(psp);
+
+	pci_iounmap(pdev, psp->io_map);
+
+	pci_disable_device(pdev);
+
+	pci_release_regions(pdev);
+
+	dev_notice(dev, "disabled\n");
+}
+
+#if 0
+#ifdef CONFIG_PM
+static int ccp_pci_suspend(struct pci_dev *pdev, pm_message_t state)
+{
+	struct device *dev = &pdev->dev;
+	struct ccp_device *ccp = dev_get_drvdata(dev);
+	unsigned long flags;
+	unsigned int i;
+
+	spin_lock_irqsave(&ccp->cmd_lock, flags);
+
+	ccp->suspending = 1;
+
+	/* Wake all the queue kthreads to prepare for suspend */
+	for (i = 0; i < ccp->cmd_q_count; i++)
+		wake_up_process(ccp->cmd_q[i].kthread);
+
+	spin_unlock_irqrestore(&ccp->cmd_lock, flags);
+
+	/* Wait for all queue kthreads to say they're done */
+	while (!ccp_queues_suspended(ccp))
+		wait_event_interruptible(ccp->suspend_queue,
+					 ccp_queues_suspended(ccp));
+
+	return 0;
+}
+
+static int ccp_pci_resume(struct pci_dev *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct ccp_device *ccp = dev_get_drvdata(dev);
+	unsigned long flags;
+	unsigned int i;
+
+	spin_lock_irqsave(&ccp->cmd_lock, flags);
+
+	ccp->suspending = 0;
+
+	/* Wake up all the kthreads */
+	for (i = 0; i < ccp->cmd_q_count; i++) {
+		ccp->cmd_q[i].suspended = 0;
+		wake_up_process(ccp->cmd_q[i].kthread);
+	}
+
+	spin_unlock_irqrestore(&ccp->cmd_lock, flags);
+
+	return 0;
+}
+#endif
+#endif
+
+static const struct pci_device_id psp_pci_table[] = {
+	{ PCI_VDEVICE(AMD, 0x1456), },
+	/* Last entry must be zero */
+	{ 0, }
+};
+MODULE_DEVICE_TABLE(pci, psp_pci_table);
+
+static struct pci_driver psp_pci_driver = {
+	.name = PSP_DRIVER_NAME,
+	.id_table = psp_pci_table,
+	.probe = psp_pci_probe,
+	.remove = psp_pci_remove,
+#if 0
+#ifdef CONFIG_PM
+	.suspend = ccp_pci_suspend,
+	.resume = ccp_pci_resume,
+#endif
+#endif
+};
+
+int psp_pci_init(void)
+{
+	return pci_register_driver(&psp_pci_driver);
+}
+
+void psp_pci_exit(void)
+{
+	pci_unregister_driver(&psp_pci_driver);
+}
diff --git a/include/linux/ccp-psp.h b/include/linux/ccp-psp.h
new file mode 100644
index 0000000..b5e791c
--- /dev/null
+++ b/include/linux/ccp-psp.h
@@ -0,0 +1,833 @@
+/*
+ * AMD Secure Processor (PSP) driver
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __CCP_PSP_H__
+#define __CCP_PSP_H__
+
+#include <uapi/linux/ccp-psp.h>
+
+#ifdef CONFIG_X86
+#include <asm/mem_encrypt.h>
+
+#define __psp_pa(x)	__sme_pa(x)
+#else
+#define __psp_pa(x)	__pa(x)
+#endif
+
+/**
+ * struct psp_data_activate - PSP ACTIVATE command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to activate
+ * @asid: asid assigned to the VM
+ */
+struct __attribute__ ((__packed__)) psp_data_activate {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u32 asid;				/* In */
+};
+
+/**
+ * struct psp_data_deactivate - PSP DEACTIVATE command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to deactivate
+ */
+struct __attribute__ ((__packed__)) psp_data_deactivate {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+};
+
+/**
+ * struct psp_data_launch_start - PSP LAUNCH_START command parameters
+ * @hdr: command header
+ * @handle: handle assigned to the VM
+ * @flags: configuration flags for the VM
+ * @policy: policy information for the VM
+ * @dh_pub_qx: the Qx parameter of the VM owner's ECDH public key
+ * @dh_pub_qy: the Qy parameter of the VM owner's ECDH public key
+ * @nonce: nonce generated by the VM owner
+ */
+struct __attribute__ ((__packed__)) psp_data_launch_start {
+	struct psp_data_header hdr;
+	u32 handle;				/* In/Out */
+	u32 flags;				/* In */
+	u32 policy;				/* In */
+	u8  dh_pub_qx[32];			/* In */
+	u8  dh_pub_qy[32];			/* In */
+	u8  nonce[16];				/* In */
+};
+
+/**
+ * struct psp_data_launch_update - PSP LAUNCH_UPDATE command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to update
+ * @length: length of memory to be encrypted
+ * @address: physical address of memory region to encrypt
+ */
+struct __attribute__ ((__packed__)) psp_data_launch_update {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u64 address;				/* In */
+	u32 length;				/* In */
+};
+
+/**
+ * struct psp_data_launch_vcpus - PSP LAUNCH_FINISH VCPU state information
+ * @state_length: length of the VCPU state information to measure
+ * @state_mask_addr: mask of the bytes within the VCPU state information
+ *                   to use in the measurement
+ * @state_count: number of VCPUs to measure
+ * @state_addr: physical address of the VCPU state (VMCB)
+ */
+struct __attribute__ ((__packed__)) psp_data_launch_vcpus {
+	u32 state_length;			/* In */
+	u64 state_mask_addr;			/* In */
+	u32 state_count;			/* In */
+	u64 state_addr[];			/* In */
+};
+
+/**
+ * struct psp_data_launch_finish - PSP LAUNCH_FINISH command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to process
+ * @measurement: the measurement of the encrypted VM memory areas
+ * @vcpus: the VCPU state information to include in the measurement
+ */
+struct __attribute__ ((__packed__)) psp_data_launch_finish {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u8  measurement[32];			/* In/Out */
+	struct psp_data_launch_vcpus vcpus;	/* In */
+};
+
+/**
+ * struct psp_data_decommission - PSP DECOMMISSION command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to decommission
+ */
+struct __attribute__ ((__packed__)) psp_data_decommission {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+};
+
+/**
+ * struct psp_data_guest_status - PSP GUEST_STATUS command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to retrieve status
+ * @policy: policy information for the VM
+ * @asid: current ASID of the VM
+ * @state: current state of the VM
+ */
+struct __attribute__ ((__packed__)) psp_data_guest_status {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u32 policy;				/* Out */
+	u32 asid;				/* Out */
+	u8 state;				/* Out */
+};
+
+/**
+ * struct psp_data_dbg - PSP DBG_ENCRYPT/DBG_DECRYPT command parameters
+ * @hdr: command header
+ * @handle: handle of the VM on which to perform the debug operation
+ * @src_addr: source address of data to operate on
+ * @dst_addr: destination address of data to operate on
+ * @length: length of data to operate on
+ */
+struct __attribute__ ((__packed__)) psp_data_dbg {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u64 src_addr;				/* In */
+	u64 dst_addr;				/* In */
+	u32 length;				/* In */
+};
+
+/**
+ * struct psp_data_receive_start - PSP RECEIVE_START command parameters
+ *
+ * @hdr: command header
+ * @handle: handle of the VM receiving the guest
+ * @flags: flags for the receive process
+ * @policy: guest policy flags
+ * @policy_meas: HMAC of policy keyed with TIK
+ * @wrapped_tek: wrapped transport encryption key
+ * @wrapped_tik: wrapped transport integrity key
+ * @ten: transport encryption nonce
+ * @dh_pub_qx: qx parameter of the origin's ECDH public key
+ * @dh_pub_qy: qy parameter of the origin's ECDH public key
+ * @nonce: nonce generated by the origin
+ */
+struct __attribute__((__packed__)) psp_data_receive_start {
+	struct psp_data_header hdr;	/* In/Out */
+	u32 handle;			/* In/Out */
+	u32 flags;			/* In */
+	u32 policy;			/* In */
+	u8 policy_meas[32];		/* In */
+	u8 wrapped_tek[24];		/* In */
+	u8 reserved1[8];
+	u8 wrapped_tik[24];		/* In */
+	u8 reserved2[8];
+	u8 ten[16];			/* In */
+	u8 dh_pub_qx[32];		/* In */
+	u8 dh_pub_qy[32];		/* In */
+	u8 nonce[16];			/* In */
+};
+
+/**
+ * struct psp_data_receive_update - PSP RECEIVE_UPDATE command parameters
+ *
+ * @hdr: command header
+ * @handle: handle of the VM receiving the guest
+ * @iv: initialization vector for this blob of memory
+ * @address: physical address of memory region to encrypt
+ * @length: length of memory to be encrypted
+ */
+struct __attribute__((__packed__)) psp_data_receive_update {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u8 iv[16];				/* In */
+	u64 address;				/* In */
+	u32 length;				/* In */
+};
+
+/**
+ * struct psp_data_receive_finish - PSP RECEIVE_FINISH command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to process
+ * @measurement: the measurement of the transported guest
+ */
+struct __attribute__ ((__packed__)) psp_data_receive_finish {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u8  measurement[32];			/* In */
+};
+
+/**
+ * struct psp_data_send_start - PSP SEND_START command parameters
+ * @hdr: command header
+ * @nonce: nonce generated by firmware
+ * @policy: guest policy flags
+ * @policy_meas: HMAC of policy keyed with TIK
+ * @wrapped_tek: wrapped transport encryption key
+ * @wrapped_tik: wrapped transport integrity key
+ * @ten: transport encryption nonce
+ * @iv: the IV of transport encryption block
+ * @handle: handle of the VM to process
+ * @flags: flags for send command
+ * @api_major: API major number
+ * @api_minor: API minor number
+ * @serial: platform serial number
+ * @dh_pub_qx: the Qx parameter of the target DH public key
+ * @dh_pub_qy: the Qy parameter of the target DH public key
+ * @pek_sig_r: the r component of the PEK signature
+ * @pek_sig_s: the s component of the PEK signature
+ * @cek_sig_r: the r component of the CEK signature
+ * @cek_sig_s: the s component of the CEK signature
+ * @cek_pub_qx: the Qx parameter of the CEK public key
+ * @cek_pub_qy: the Qy parameter of the CEK public key
+ * @ask_sig_r: the r component of the ASK signature
+ * @ask_sig_s: the s component of the ASK signature
+ * @ncerts: number of certificates in certificate chain
+ * @cert_length: length of certificates
+ * @certs: certificates in the chain
+ */
+struct __attribute__((__packed__)) psp_data_send_start {
+	struct psp_data_header hdr;			/* In/Out */
+	u8 nonce[16];					/* Out */
+	u32 policy;					/* Out */
+	u8 policy_meas[32];				/* Out */
+	u8 wrapped_tek[24];				/* Out */
+	u8 reserved1[8];
+	u8 wrapped_tik[24];				/* Out */
+	u8 reserved2[8];
+	u8 ten[16];					/* Out */
+	u8 iv[16];					/* Out */
+	u32 handle;					/* In */
+	u32 flags;					/* In */
+	u8 api_major;					/* In */
+	u8 api_minor;					/* In */
+	u8 reserved3[2];
+	u32 serial;					/* In */
+	u8 dh_pub_qx[32];				/* In */
+	u8 dh_pub_qy[32];				/* In */
+	u8 pek_sig_r[32];				/* In */
+	u8 pek_sig_s[32];				/* In */
+	u8 cek_sig_r[32];				/* In */
+	u8 cek_sig_s[32];				/* In */
+	u8 cek_pub_qx[32];				/* In */
+	u8 cek_pub_qy[32];				/* In */
+	u8 ask_sig_r[32];				/* In */
+	u8 ask_sig_s[32];				/* In */
+	u32 ncerts;					/* In */
+	u32 cert_length;				/* In */
+	u8 certs[];					/* In */
+};
+
+/**
+ * struct psp_data_send_update - PSP SEND_UPDATE command parameters
+ *
+ * @hdr: command header
+ * @handle: handle of the VM being sent
+ * @length: length of memory region to encrypt
+ * @src_addr: physical address of memory region to encrypt from
+ * @dst_addr: physical address of memory region to encrypt to
+ */
+struct __attribute__((__packed__)) psp_data_send_update {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u64 src_addr;				/* In */
+	u64 dst_addr;				/* In */
+	u32 length;				/* In */
+};
+
+/**
+ * struct psp_data_send_finish - PSP SEND_FINISH command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to process
+ * @measurement: the measurement of the transported guest
+ */
+struct __attribute__ ((__packed__)) psp_data_send_finish {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u8  measurement[32];			/* Out */
+};
+
+#if defined(CONFIG_CRYPTO_DEV_PSP_DD) || \
+	defined(CONFIG_CRYPTO_DEV_PSP_DD_MODULE)
+
+/**
+ * psp_platform_init - perform PSP INIT command
+ *
+ * @init: psp_data_init structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_init(struct psp_data_init *init, int *psp_ret);
+
+/**
+ * psp_platform_shutdown - perform PSP SHUTDOWN command
+ *
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_shutdown(int *psp_ret);
+
+/**
+ * psp_platform_status - perform PSP PLATFORM_STATUS command
+ *
+ * @status: psp_data_status structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_status(struct psp_data_status *status, int *psp_ret);
+
+/**
+ * psp_guest_launch_start - perform PSP LAUNCH_START command
+ *
+ * @start: psp_data_launch_start structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_launch_start(struct psp_data_launch_start *start, int *psp_ret);
+
+/**
+ * psp_guest_launch_update - perform PSP LAUNCH_UPDATE command
+ *
+ * @update: psp_data_launch_update structure to be processed
+ * @timeout: command timeout
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_launch_update(struct psp_data_launch_update *update,
+			    unsigned int timeout, int *psp_ret);
+
+/**
+ * psp_guest_launch_finish - perform PSP LAUNCH_FINISH command
+ *
+ * @finish: psp_data_launch_finish structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_launch_finish(struct psp_data_launch_finish *finish, int *psp_ret);
+
+/**
+ * psp_guest_activate - perform PSP ACTIVATE command
+ *
+ * @activate: psp_data_activate structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_activate(struct psp_data_activate *activate, int *psp_ret);
+
+/**
+ * psp_guest_deactivate - perform PSP DEACTIVATE command
+ *
+ * @deactivate: psp_data_deactivate structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_deactivate(struct psp_data_deactivate *deactivate, int *psp_ret);
+
+/**
+ * psp_guest_df_flush - perform PSP DF_FLUSH command
+ *
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_df_flush(int *psp_ret);
+
+/**
+ * psp_guest_decommission - perform PSP DECOMMISSION command
+ *
+ * @decommission: psp_data_decommission structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_decommission(struct psp_data_decommission *decommission,
+			   int *psp_ret);
+
+/**
+ * psp_guest_status - perform PSP GUEST_STATUS command
+ *
+ * @status: psp_data_guest_status structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_status(struct psp_data_guest_status *status, int *psp_ret);
+
+/**
+ * psp_dbg_decrypt - perform PSP DBG_DECRYPT command
+ *
+ * @dbg: psp_data_dbg structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_dbg_decrypt(struct psp_data_dbg *dbg, int *psp_ret);
+
+/**
+ * psp_dbg_encrypt - perform PSP DBG_ENCRYPT command
+ *
+ * @dbg: psp_data_dbg structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_dbg_encrypt(struct psp_data_dbg *dbg, int *psp_ret);
+
+/**
+ * psp_guest_receive_start - perform PSP RECEIVE_START command
+ *
+ * @start: psp_data_receive_start structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_receive_start(struct psp_data_receive_start *start, int *psp_ret);
+
+/**
+ * psp_guest_receive_update - perform PSP RECEIVE_UPDATE command
+ *
+ * @update: psp_data_receive_update structure to be processed
+ * @timeout: command timeout
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_receive_update(struct psp_data_receive_update *update,
+			    unsigned int timeout, int *psp_ret);
+
+/**
+ * psp_guest_receive_finish - perform PSP RECEIVE_FINISH command
+ *
+ * @finish: psp_data_receive_finish structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_receive_finish(struct psp_data_receive_finish *finish,
+			     int *psp_ret);
+
+/**
+ * psp_guest_send_start - perform PSP SEND_START command
+ *
+ * @start: psp_data_send_start structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_send_start(struct psp_data_send_start *start, int *psp_ret);
+
+/**
+ * psp_guest_send_update - perform PSP SEND_UPDATE command
+ *
+ * @update: psp_data_send_update structure to be processed
+ * @timeout: command timeout
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_send_update(struct psp_data_send_update *update,
+			    unsigned int timeout, int *psp_ret);
+
+/**
+ * psp_guest_send_finish - perform PSP SEND_FINISH command
+ *
+ * @finish: psp_data_send_finish structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_send_finish(struct psp_data_send_finish *finish,
+			     int *psp_ret);
+
+/**
+ * psp_platform_pdh_gen - perform PSP PDH_GEN command
+ *
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_pdh_gen(int *psp_ret);
+
+/**
+ * psp_platform_pdh_cert_export - perform PSP PDH_CERT_EXPORT command
+ *
+ * @data: psp_data_pdh_cert_export structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_pdh_cert_export(struct psp_data_pdh_cert_export *data,
+				int *psp_ret);
+
+/**
+ * psp_platform_pek_gen - perform PSP PEK_GEN command
+ *
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_pek_gen(int *psp_ret);
+
+/**
+ * psp_platform_pek_cert_import - perform PSP PEK_CERT_IMPORT command
+ *
+ * @data: psp_data_pek_cert_import structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_pek_cert_import(struct psp_data_pek_cert_import *data,
+				int *psp_ret);
+
+/**
+ * psp_platform_pek_csr - perform PSP PEK_CSR command
+ *
+ * @data: psp_data_pek_csr structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_pek_csr(struct psp_data_pek_csr *data, int *psp_ret);
+
+/**
+ * psp_platform_factory_reset - perform PSP FACTORY_RESET command
+ *
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_factory_reset(int *psp_ret);
+
+#else	/* CONFIG_CRYPTO_DEV_PSP_DD is not enabled */
+
+static inline int psp_platform_status(struct psp_data_status *status,
+				      int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_init(struct psp_data_init *init, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_shutdown(int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_launch_start(struct psp_data_launch_start *start,
+					 int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_launch_update(struct psp_data_launch_update *update,
+					  unsigned int timeout, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_launch_finish(struct psp_data_launch_finish *finish,
+					  int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_activate(struct psp_data_activate *activate,
+				     int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_deactivate(struct psp_data_deactivate *deactivate,
+				       int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_df_flush(int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_decommission(struct psp_data_decommission *decommission,
+					 int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_status(struct psp_data_guest_status *status,
+				   int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_dbg_decrypt(struct psp_data_dbg *dbg, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_dbg_encrypt(struct psp_data_dbg *dbg, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_receive_start(struct psp_data_receive_start *start,
+					 int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_receive_update(struct psp_data_receive_update *update,
+					  unsigned int timeout, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_receive_finish(struct psp_data_receive_finish *finish,
+					  int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_send_start(struct psp_data_send_start *start,
+					 int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_send_update(struct psp_data_send_update *update,
+					  unsigned int timeout, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_send_finish(struct psp_data_send_finish *finish,
+					  int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_pdh_gen(int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_pdh_cert_export(struct psp_data_pdh_cert_export *data,
+					       int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_pek_gen(int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_pek_cert_import(struct psp_data_pek_cert_import *data,
+					       int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_pek_csr(struct psp_data_pek_csr *data, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_factory_reset(int *psp_ret)
+{
+	return -ENODEV;
+}
+
+#endif	/* CONFIG_CRYPTO_DEV_PSP_DD */
+
+#endif	/* __CCP_PSP_H__ */
diff --git a/include/uapi/linux/Kbuild b/include/uapi/linux/Kbuild
index 185f8ea..af2511a 100644
--- a/include/uapi/linux/Kbuild
+++ b/include/uapi/linux/Kbuild
@@ -470,3 +470,4 @@ header-y += xilinx-v4l2-controls.h
 header-y += zorro.h
 header-y += zorro_ids.h
 header-y += userfaultfd.h
+header-y += ccp-psp.h
diff --git a/include/uapi/linux/ccp-psp.h b/include/uapi/linux/ccp-psp.h
new file mode 100644
index 0000000..e780b46
--- /dev/null
+++ b/include/uapi/linux/ccp-psp.h
@@ -0,0 +1,182 @@
+#ifndef _UAPI_LINUX_CCP_PSP_H
+#define _UAPI_LINUX_CCP_PSP_H
+
+/*
+ * Userspace interface to communicate with the CCP-PSP driver.
+ */
+
+#include <linux/types.h>
+#include <linux/ioctl.h>
+
+/**
+ * struct psp_data_header - Common PSP communication header
+ * @buffer_len: length of the buffer supplied to the PSP
+ */
+struct __attribute__ ((__packed__)) psp_data_header {
+	__u32 buffer_len;				/* In/Out */
+};
+
+/**
+ * struct psp_data_init - PSP INIT command parameters
+ * @hdr: command header
+ * @flags: processing flags
+ */
+struct __attribute__ ((__packed__)) psp_data_init {
+	struct psp_data_header hdr;
+	__u32 flags;				/* In */
+};
+
+/**
+ * struct psp_data_status - PSP PLATFORM_STATUS command parameters
+ * @hdr: command header
+ * @api_major: major API version
+ * @api_minor: minor API version
+ * @state: platform state
+ * @cert_status: bit fields describing certificate status
+ * @flags: platform flags
+ * @guest_count: number of active guests
+ */
+struct __attribute__ ((__packed__)) psp_data_status {
+	struct psp_data_header hdr;
+	__u8 api_major;				/* Out */
+	__u8 api_minor;				/* Out */
+	__u8 state;				/* Out */
+	__u8 cert_status;			/* Out */
+	__u32 flags;				/* Out */
+	__u32 guest_count;			/* Out */
+};
+
+/**
+ * struct psp_data_pek_csr - PSP PEK_CSR command parameters
+ * @hdr: command header
+ * @csr: certificate signing request in PKCS format
+ */
+struct __attribute__((__packed__)) psp_data_pek_csr {
+	struct psp_data_header hdr;			/* In/Out */
+	__u8 csr[];					/* Out */
+};
+
+/**
+ * struct psp_data_pek_cert_import - PSP PEK_CERT_IMPORT command parameters
+ * @hdr: command header
+ * @ncerts: number of certificates in the chain
+ * @cert_len: length of certificates
+ * @certs: certificate chain starting with the PEK and ending with the CA certificate
+ */
+struct __attribute__((__packed__)) psp_data_pek_cert_import {
+	struct psp_data_header hdr;			/* In/Out */
+	__u32 ncerts;					/* In */
+	__u32 cert_len;					/* In */
+	__u8 certs[];					/* In */
+};
+
+/**
+ * struct psp_data_pdh_cert_export - PSP PDH_CERT_EXPORT command parameters
+ * @hdr: command header
+ * @api_major: API major number
+ * @api_minor: API minor number
+ * @serial: platform serial number
+ * @pdh_pub_qx: the Qx parameter of the target PDH public key
+ * @pdh_pub_qy: the Qy parameter of the target PDH public key
+ * @pek_sig_r: the r component of the PEK signature
+ * @pek_sig_s: the s component of the PEK signature
+ * @cek_sig_r: the r component of the CEK signature
+ * @cek_sig_s: the s component of the CEK signature
+ * @cek_pub_qx: the Qx parameter of the CEK public key
+ * @cek_pub_qy: the Qy parameter of the CEK public key
+ * @ncerts: number of certificates in certificate chain
+ * @cert_len: length of certificates
+ * @certs: certificate chain starting with the PEK and ending with the CA certificate
+ */
+struct __attribute__((__packed__)) psp_data_pdh_cert_export {
+	struct psp_data_header hdr;			/* In/Out */
+	__u8 api_major;					/* Out */
+	__u8 api_minor;					/* Out */
+	__u8 reserved1[2];
+	__u32 serial;					/* Out */
+	__u8 pdh_pub_qx[32];				/* Out */
+	__u8 pdh_pub_qy[32];				/* Out */
+	__u8 pek_sig_r[32];				/* Out */
+	__u8 pek_sig_s[32];				/* Out */
+	__u8 cek_sig_r[32];				/* Out */
+	__u8 cek_sig_s[32];				/* Out */
+	__u8 cek_pub_qx[32];				/* Out */
+	__u8 cek_pub_qy[32];				/* Out */
+	__u32 ncerts;					/* Out */
+	__u32 cert_len;					/* Out */
+	__u8 certs[];					/* Out */
+};
+
+/*
+ * Platform and management commands
+ */
+enum psp_cmd {
+	PSP_CMD_INIT = 1,
+	PSP_CMD_LAUNCH_START,
+	PSP_CMD_LAUNCH_UPDATE,
+	PSP_CMD_LAUNCH_FINISH,
+	PSP_CMD_ACTIVATE,
+	PSP_CMD_DF_FLUSH,
+	PSP_CMD_SHUTDOWN,
+	PSP_CMD_FACTORY_RESET,
+	PSP_CMD_PLATFORM_STATUS,
+	PSP_CMD_PEK_GEN,
+	PSP_CMD_PEK_CSR,
+	PSP_CMD_PEK_CERT_IMPORT,
+	PSP_CMD_PDH_GEN,
+	PSP_CMD_PDH_CERT_EXPORT,
+	PSP_CMD_SEND_START,
+	PSP_CMD_SEND_UPDATE,
+	PSP_CMD_SEND_FINISH,
+	PSP_CMD_RECEIVE_START,
+	PSP_CMD_RECEIVE_UPDATE,
+	PSP_CMD_RECEIVE_FINISH,
+	PSP_CMD_GUEST_STATUS,
+	PSP_CMD_DEACTIVATE,
+	PSP_CMD_DECOMMISSION,
+	PSP_CMD_DBG_DECRYPT,
+	PSP_CMD_DBG_ENCRYPT,
+	PSP_CMD_MAX,
+};
+
+/*
+ * Status codes returned by the commands
+ */
+enum psp_ret_code {
+	PSP_RET_SUCCESS = 0,
+	PSP_RET_INVALID_PLATFORM_STATE,
+	PSP_RET_INVALID_GUEST_STATE,
+	PSP_RET_INVALID_CONFIG,
+	PSP_RET_CMDBUF_TOO_SMALL,
+	PSP_RET_ALREADY_OWNED,
+	PSP_RET_INVALID_CERTIFICATE,
+	PSP_RET_POLICY_FAILURE,
+	PSP_RET_INACTIVE,
+	PSP_RET_INVALID_ADDRESS,
+	PSP_RET_BAD_SIGNATURE,
+	PSP_RET_BAD_MEASUREMENT,
+	PSP_RET_ASID_OWNED,
+	PSP_RET_INVALID_ASID,
+	PSP_RET_WBINVD_REQUIRED,
+	PSP_RET_DFFLUSH_REQUIRED,
+	PSP_RET_INVALID_GUEST,
+};
+
+/**
+ * struct psp_issue_cmd - PSP ioctl parameters
+ * @cmd: PSP command to execute
+ * @opaque: pointer to the command structure
+ * @psp_ret: PSP return code on failure
+ */
+struct psp_issue_cmd {
+	__u32 cmd;					/* In */
+	__u64 opaque;					/* In */
+	__u32 psp_ret;					/* Out */
+};
+
+#define PSP_IOC_TYPE		'P'
+#define PSP_ISSUE_CMD	_IOWR(PSP_IOC_TYPE, 0x0, struct psp_issue_cmd)
+
+#endif /* _UAPI_LINUX_CCP_PSP_H */
+

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 18/28] crypto: add AMD Platform Security Processor driver
@ 2016-08-22 23:27   ` Brijesh Singh
  0 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:27 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounin

Add a driver to communicate with the Secure Encrypted Virtualization (SEV)
firmware running within the AMD secure processor, which provides a secure
key management interface for SEV guests.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 drivers/crypto/Kconfig       |   11 +
 drivers/crypto/Makefile      |    1 
 drivers/crypto/psp/Kconfig   |    8 
 drivers/crypto/psp/Makefile  |    3 
 drivers/crypto/psp/psp-dev.c |  220 +++++++++++
 drivers/crypto/psp/psp-dev.h |   95 +++++
 drivers/crypto/psp/psp-ops.c |  454 +++++++++++++++++++++++
 drivers/crypto/psp/psp-pci.c |  376 +++++++++++++++++++
 include/linux/ccp-psp.h      |  833 ++++++++++++++++++++++++++++++++++++++++++
 include/uapi/linux/Kbuild    |    1 
 include/uapi/linux/ccp-psp.h |  182 +++++++++
 11 files changed, 2184 insertions(+)
 create mode 100644 drivers/crypto/psp/Kconfig
 create mode 100644 drivers/crypto/psp/Makefile
 create mode 100644 drivers/crypto/psp/psp-dev.c
 create mode 100644 drivers/crypto/psp/psp-dev.h
 create mode 100644 drivers/crypto/psp/psp-ops.c
 create mode 100644 drivers/crypto/psp/psp-pci.c
 create mode 100644 include/linux/ccp-psp.h
 create mode 100644 include/uapi/linux/ccp-psp.h

diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index 1af94e2..3bdbc51 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -464,6 +464,17 @@ if CRYPTO_DEV_CCP
 	source "drivers/crypto/ccp/Kconfig"
 endif
 
+config CRYPTO_DEV_PSP
+	bool "Support for AMD Platform Security Processor"
+	depends on X86 && PCI
+	help
+	  The AMD Platform Security Processor provides hardware
+	  key-management services for VMGuard encrypted memory.
+
+if CRYPTO_DEV_PSP
+	source "drivers/crypto/psp/Kconfig"
+endif
+
 config CRYPTO_DEV_MXS_DCP
 	tristate "Support for Freescale MXS DCP"
 	depends on (ARCH_MXS || ARCH_MXC)
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 3c6432d..1ea1e08 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -3,6 +3,7 @@ obj-$(CONFIG_CRYPTO_DEV_ATMEL_SHA) += atmel-sha.o
 obj-$(CONFIG_CRYPTO_DEV_ATMEL_TDES) += atmel-tdes.o
 obj-$(CONFIG_CRYPTO_DEV_BFIN_CRC) += bfin_crc.o
 obj-$(CONFIG_CRYPTO_DEV_CCP) += ccp/
+obj-$(CONFIG_CRYPTO_DEV_PSP) += psp/
 obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM) += caam/
 obj-$(CONFIG_CRYPTO_DEV_GEODE) += geode-aes.o
 obj-$(CONFIG_CRYPTO_DEV_HIFN_795X) += hifn_795x.o
diff --git a/drivers/crypto/psp/Kconfig b/drivers/crypto/psp/Kconfig
new file mode 100644
index 0000000..acd9b87
--- /dev/null
+++ b/drivers/crypto/psp/Kconfig
@@ -0,0 +1,8 @@
+config CRYPTO_DEV_PSP_DD
+	tristate "PSP Key Management device driver"
+	depends on CRYPTO_DEV_PSP
+	default m
+	help
+	  Provides the interface to use the AMD PSP key management APIs
+	  for use with AMD Secure Encrypted Virtualization (SEV). If you
+	  choose 'M' here, this module will be called psp.
diff --git a/drivers/crypto/psp/Makefile b/drivers/crypto/psp/Makefile
new file mode 100644
index 0000000..1b7d00c
--- /dev/null
+++ b/drivers/crypto/psp/Makefile
@@ -0,0 +1,3 @@
+obj-$(CONFIG_CRYPTO_DEV_PSP_DD) += psp.o
+psp-objs := psp-dev.o psp-ops.o
+psp-$(CONFIG_PCI) += psp-pci.o
diff --git a/drivers/crypto/psp/psp-dev.c b/drivers/crypto/psp/psp-dev.c
new file mode 100644
index 0000000..65d5c7e
--- /dev/null
+++ b/drivers/crypto/psp/psp-dev.c
@@ -0,0 +1,220 @@
+/*
+ * AMD Platform Security Processor (PSP) driver
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/interrupt.h>
+#include <linux/dma-mapping.h>
+#include <linux/string.h>
+#include <linux/wait.h>
+
+#include "psp-dev.h"
+
+MODULE_AUTHOR("Advanced Micro Devices, Inc.");
+MODULE_LICENSE("GPL");
+MODULE_VERSION("0.1.0");
+MODULE_DESCRIPTION("AMD PSP key-management driver prototype");
+
+static struct psp_device *psp_master;
+
+static LIST_HEAD(psp_devs);
+static DEFINE_SPINLOCK(psp_devs_lock);
+
+static atomic_t psp_id;
+
+static void psp_add_device(struct psp_device *psp)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&psp_devs_lock, flags);
+
+	list_add_tail(&psp->entry, &psp_devs);
+	psp_master = psp->get_master(&psp_devs);
+
+	spin_unlock_irqrestore(&psp_devs_lock, flags);
+}
+
+static void psp_del_device(struct psp_device *psp)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&psp_devs_lock, flags);
+
+	list_del(&psp->entry);
+	if (psp == psp_master)
+		psp_master = NULL;
+
+	spin_unlock_irqrestore(&psp_devs_lock, flags);
+}
+
+static void psp_check_support(struct psp_device *psp)
+{
+	if (ioread32(psp->io_regs + PSP_CMDRESP))
+		psp->sev_enabled = 1;
+}
+
+/**
+ * psp_get_master_device - returns a pointer to the PSP master device structure
+ *
+ * Returns NULL if a PSP master device is not present, PSP device structure
+ * otherwise.
+ */
+struct psp_device *psp_get_master_device(void)
+{
+	return psp_master;
+}
+EXPORT_SYMBOL_GPL(psp_get_master_device);
+
+/**
+ * psp_get_device - returns a pointer to the PSP device structure
+ *
+ * Returns NULL if a PSP device is not present, PSP device structure otherwise.
+ */
+struct psp_device *psp_get_device(void)
+{
+	struct psp_device *psp = NULL;
+	unsigned long flags;
+
+	spin_lock_irqsave(&psp_devs_lock, flags);
+
+	if (list_empty(&psp_devs))
+		goto unlock;
+
+	psp = list_first_entry(&psp_devs, struct psp_device, entry);
+
+unlock:
+	spin_unlock_irqrestore(&psp_devs_lock, flags);
+
+	return psp;
+}
+EXPORT_SYMBOL_GPL(psp_get_device);
+
+/**
+ * psp_alloc_struct - allocate and initialize the psp_device struct
+ *
+ * @dev: device struct of the PSP
+ */
+struct psp_device *psp_alloc_struct(struct device *dev)
+{
+	struct psp_device *psp;
+
+	psp = devm_kzalloc(dev, sizeof(*psp), GFP_KERNEL);
+	if (!psp)
+		return NULL;
+	psp->dev = dev;
+
+	psp->id = atomic_inc_return(&psp_id);
+	snprintf(psp->name, sizeof(psp->name), "psp%u", psp->id);
+
+	init_waitqueue_head(&psp->int_queue);
+
+	return psp;
+}
+
+/**
+ * psp_init - initialize the PSP device
+ *
+ * @psp: psp_device struct
+ */
+int psp_init(struct psp_device *psp)
+{
+	int ret;
+
+	psp_check_support(psp);
+
+	/* Disable and clear interrupts until ready */
+	iowrite32(0, psp->io_regs + PSP_P2CMSG_INTEN);
+	iowrite32(0xffffffff, psp->io_regs + PSP_P2CMSG_INTSTS);
+
+	/* Request an irq */
+	ret = psp->get_irq(psp);
+	if (ret) {
+		dev_err(psp->dev, "unable to allocate IRQ\n");
+		return ret;
+	}
+
+	/* Make the device struct available */
+	psp_add_device(psp);
+
+	/* Enable interrupts */
+	iowrite32(1 << PSP_CMD_COMPLETE_REG, psp->io_regs + PSP_P2CMSG_INTEN);
+
+	ret = psp_ops_init(psp);
+	if (ret) {
+		dev_err(psp->dev, "psp_ops_init failed (%d)\n", ret);
+		psp_del_device(psp);
+		psp->free_irq(psp);
+		return ret;
+	}
+
+	return 0;
+}
+
+/**
+ * psp_destroy - tear down the PSP device
+ *
+ * @psp: psp_device struct
+ */
+void psp_destroy(struct psp_device *psp)
+{
+	psp_ops_exit(psp);
+
+	/* Remove general access to the device struct */
+	psp_del_device(psp);
+
+	psp->free_irq(psp);
+}
+
+/**
+ * psp_irq_handler - handle interrupts generated by the PSP device
+ *
+ * @irq: the irq associated with the interrupt
+ * @data: the data value supplied when the irq was created
+ */
+irqreturn_t psp_irq_handler(int irq, void *data)
+{
+	struct device *dev = data;
+	struct psp_device *psp = dev_get_drvdata(dev);
+	unsigned int status;
+
+	status = ioread32(psp->io_regs + PSP_P2CMSG_INTSTS);
+	if (status & (1 << PSP_CMD_COMPLETE_REG)) {
+		int reg;
+
+		reg = ioread32(psp->io_regs + PSP_CMDRESP);
+		if (reg & PSP_CMDRESP_RESP) {
+			psp->int_rcvd = 1;
+			wake_up_interruptible(&psp->int_queue);
+		}
+	}
+
+	iowrite32(status, psp->io_regs + PSP_P2CMSG_INTSTS);
+
+	return IRQ_HANDLED;
+}
+
+static int __init psp_mod_init(void)
+{
+	return psp_pci_init();
+}
+module_init(psp_mod_init);
+
+static void __exit psp_mod_exit(void)
+{
+	psp_pci_exit();
+}
+module_exit(psp_mod_exit);
diff --git a/drivers/crypto/psp/psp-dev.h b/drivers/crypto/psp/psp-dev.h
new file mode 100644
index 0000000..bb75ca2
--- /dev/null
+++ b/drivers/crypto/psp/psp-dev.h
@@ -0,0 +1,95 @@
+#ifndef __PSP_DEV_H__
+#define __PSP_DEV_H__
+
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/spinlock.h>
+#include <linux/mutex.h>
+#include <linux/list.h>
+#include <linux/wait.h>
+#include <linux/dmapool.h>
+#include <linux/hw_random.h>
+#include <linux/interrupt.h>
+#include <linux/miscdevice.h>
+
+#define PSP_P2CMSG_INTEN		0x0110
+#define PSP_P2CMSG_INTSTS		0x0114
+
+#define PSP_C2PMSG_ATTR_0		0x0118
+#define PSP_C2PMSG_ATTR_1		0x011c
+#define PSP_C2PMSG_ATTR_2		0x0120
+#define PSP_C2PMSG_ATTR_3		0x0124
+#define PSP_P2CMSG_ATTR_0		0x0128
+
+#define PSP_C2PMSG(_num)		((_num) << 2)
+#define PSP_CMDRESP			PSP_C2PMSG(32)
+#define PSP_CMDBUFF_ADDR_LO		PSP_C2PMSG(56)
+#define PSP_CMDBUFF_ADDR_HI		PSP_C2PMSG(57)
+
+#define PSP_P2CMSG(_num)		((_num) << 2)
+#define PSP_CMD_COMPLETE_REG		1
+#define PSP_CMD_COMPLETE		PSP_P2CMSG(PSP_CMD_COMPLETE_REG)
+
+#define PSP_CMDRESP_CMD_SHIFT		16
+#define PSP_CMDRESP_IOC			BIT(0)
+#define PSP_CMDRESP_RESP		BIT(31)
+#define PSP_CMDRESP_ERR_MASK		0xffff
+
+#define PSP_DRIVER_NAME			"psp"
+
+struct psp_device {
+	struct list_head entry;
+
+	struct device *dev;
+
+	unsigned int id;
+	char name[32];
+
+	struct dentry *debugfs;
+	struct miscdevice misc;
+
+	unsigned int sev_enabled;
+
+	/*
+	 * Bus-specific device information
+	 */
+	void *dev_specific;
+	int (*get_irq)(struct psp_device *);
+	void (*free_irq)(struct psp_device *);
+	unsigned int irq;
+	struct psp_device *(*get_master)(struct list_head *list);
+
+	/*
+	 * I/O area used for device communication. Writing to the
+	 * mailbox registers generates an interrupt on the PSP.
+	 */
+	void __iomem *io_map;
+	void __iomem *io_regs;
+
+	/* Interrupt wait queue */
+	wait_queue_head_t int_queue;
+	unsigned int int_rcvd;
+};
+
+struct psp_device *psp_get_master_device(void);
+struct psp_device *psp_get_device(void);
+
+#ifdef CONFIG_PCI
+int psp_pci_init(void);
+void psp_pci_exit(void);
+#else
+static inline int psp_pci_init(void) { return 0; }
+static inline void psp_pci_exit(void) { }
+#endif
+
+struct psp_device *psp_alloc_struct(struct device *dev);
+int psp_init(struct psp_device *psp);
+void psp_destroy(struct psp_device *psp);
+
+int psp_ops_init(struct psp_device *psp);
+void psp_ops_exit(struct psp_device *psp);
+
+irqreturn_t psp_irq_handler(int irq, void *data);
+
+#endif /* __PSP_DEV_H__ */
diff --git a/drivers/crypto/psp/psp-ops.c b/drivers/crypto/psp/psp-ops.c
new file mode 100644
index 0000000..81e8dc8
--- /dev/null
+++ b/drivers/crypto/psp/psp-ops.c
@@ -0,0 +1,454 @@
+#include <linux/module.h>
+#include <linux/fs.h>
+#include <linux/miscdevice.h>
+#include <linux/delay.h>
+#include <linux/jiffies.h>
+#include <linux/wait.h>
+#include <linux/mutex.h>
+#include <linux/ccp-psp.h>
+
+#include "psp-dev.h"
+
+static unsigned int psp_poll;
+module_param(psp_poll, uint, 0444);
+MODULE_PARM_DESC(psp_poll, "Poll for command completion - any non-zero value");
+
+#define PSP_DEFAULT_TIMEOUT	2	/* seconds */
+
+static DEFINE_MUTEX(psp_cmd_mutex);
+
+static int psp_wait_cmd_poll(struct psp_device *psp, unsigned int timeout,
+			     unsigned int *reg)
+{
+	int wait = timeout * 10;	/* 100ms sleep => timeout * 10 */
+
+	while (--wait) {
+		msleep(100);
+
+		*reg = ioread32(psp->io_regs + PSP_CMDRESP);
+		if (*reg & PSP_CMDRESP_RESP)
+			break;
+	}
+
+	if (!wait) {
+		dev_err(psp->dev, "psp command timed out\n");
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}
+
+static int psp_wait_cmd_ioc(struct psp_device *psp, unsigned int timeout,
+			    unsigned int *reg)
+{
+	unsigned long jiffie_timeout = timeout;
+	long ret;
+
+	jiffie_timeout *= HZ;
+
+	ret = wait_event_interruptible_timeout(psp->int_queue, psp->int_rcvd,
+					       jiffie_timeout);
+	if (ret < 0)
+		return ret;	/* interrupted by a signal */
+	if (!ret) {
+		dev_err(psp->dev, "psp command timed out\n");
+		return -ETIMEDOUT;
+	}
+
+	psp->int_rcvd = 0;
+
+	*reg = ioread32(psp->io_regs + PSP_CMDRESP);
+
+	return 0;
+}
+
+static int psp_wait_cmd(struct psp_device *psp, unsigned int timeout,
+			unsigned int *reg)
+{
+	return (*reg & PSP_CMDRESP_IOC) ? psp_wait_cmd_ioc(psp, timeout, reg)
+					: psp_wait_cmd_poll(psp, timeout, reg);
+}
+
+static int psp_issue_cmd(enum psp_cmd cmd, void *data, unsigned int timeout,
+			 int *psp_ret)
+{
+	struct psp_device *psp = psp_get_master_device();
+	unsigned int phys_lsb, phys_msb;
+	unsigned int reg;
+	int ret;
+
+	if (psp_ret)
+		*psp_ret = 0;
+
+	if (!psp)
+		return -ENODEV;
+
+	if (!psp->sev_enabled)
+		return -ENOTSUPP;
+
+	/* Set the physical address for the PSP */
+	phys_lsb = data ? lower_32_bits(__psp_pa(data)) : 0;
+	phys_msb = data ? upper_32_bits(__psp_pa(data)) : 0;
+
+	/* Only one command at a time... */
+	mutex_lock(&psp_cmd_mutex);
+
+	iowrite32(phys_lsb, psp->io_regs + PSP_CMDBUFF_ADDR_LO);
+	iowrite32(phys_msb, psp->io_regs + PSP_CMDBUFF_ADDR_HI);
+	wmb();
+
+	reg = cmd;
+	reg <<= PSP_CMDRESP_CMD_SHIFT;
+	reg |= psp_poll ? 0 : PSP_CMDRESP_IOC;
+	iowrite32(reg, psp->io_regs + PSP_CMDRESP);
+
+	ret = psp_wait_cmd(psp, timeout, &reg);
+	if (ret)
+		goto unlock;
+
+	if (psp_ret)
+		*psp_ret = reg & PSP_CMDRESP_ERR_MASK;
+
+	if (reg & PSP_CMDRESP_ERR_MASK) {
+		dev_err(psp->dev, "psp command %u failed (%#010x)\n",
+			cmd, reg & PSP_CMDRESP_ERR_MASK);
+		ret = -EIO;
+	}
+
+unlock:
+	mutex_unlock(&psp_cmd_mutex);
+
+	return ret;
+}
+
+int psp_platform_init(struct psp_data_init *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_INIT, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_init);
+
+int psp_platform_shutdown(int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_SHUTDOWN, NULL, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_shutdown);
+
+int psp_platform_status(struct psp_data_status *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PLATFORM_STATUS, data,
+			     PSP_DEFAULT_TIMEOUT, psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_status);
+
+int psp_guest_launch_start(struct psp_data_launch_start *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_LAUNCH_START, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_launch_start);
+
+int psp_guest_launch_update(struct psp_data_launch_update *data,
+			    unsigned int timeout, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_LAUNCH_UPDATE, data, timeout, psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_launch_update);
+
+int psp_guest_launch_finish(struct psp_data_launch_finish *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_LAUNCH_FINISH, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_launch_finish);
+
+int psp_guest_activate(struct psp_data_activate *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_ACTIVATE, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_activate);
+
+int psp_guest_deactivate(struct psp_data_deactivate *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_DEACTIVATE, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_deactivate);
+
+int psp_guest_df_flush(int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_DF_FLUSH, NULL, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_df_flush);
+
+int psp_guest_decommission(struct psp_data_decommission *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_DECOMMISSION, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_decommission);
+
+int psp_guest_status(struct psp_data_guest_status *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_GUEST_STATUS, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_status);
+
+int psp_dbg_decrypt(struct psp_data_dbg *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_DBG_DECRYPT, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_dbg_decrypt);
+
+int psp_dbg_encrypt(struct psp_data_dbg *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_DBG_ENCRYPT, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_dbg_encrypt);
+
+int psp_guest_receive_start(struct psp_data_receive_start *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_RECEIVE_START, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_receive_start);
+
+int psp_guest_receive_update(struct psp_data_receive_update *data,
+			    unsigned int timeout, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_RECEIVE_UPDATE, data, timeout, psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_receive_update);
+
+int psp_guest_receive_finish(struct psp_data_receive_finish *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_RECEIVE_FINISH, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_receive_finish);
+
+int psp_guest_send_start(struct psp_data_send_start *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_SEND_START, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_send_start);
+
+int psp_guest_send_update(struct psp_data_send_update *data,
+			    unsigned int timeout, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_SEND_UPDATE, data, timeout, psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_send_update);
+
+int psp_guest_send_finish(struct psp_data_send_finish *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_SEND_FINISH, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_send_finish);
+
+int psp_platform_pdh_gen(int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PDH_GEN, NULL, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_pdh_gen);
+
+int psp_platform_pdh_cert_export(struct psp_data_pdh_cert_export *data,
+				int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PDH_CERT_EXPORT, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_pdh_cert_export);
+
+int psp_platform_pek_gen(int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PEK_GEN, NULL, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_pek_gen);
+
+int psp_platform_pek_cert_import(struct psp_data_pek_cert_import *data,
+				 int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PEK_CERT_IMPORT, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_pek_cert_import);
+
+int psp_platform_pek_csr(struct psp_data_pek_csr *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PEK_CSR, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_pek_csr);
+
+int psp_platform_factory_reset(int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_FACTORY_RESET, NULL, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_factory_reset);
+
+static int psp_copy_to_user(void __user *argp, void *data, size_t size)
+{
+	int ret = 0;
+
+	if (copy_to_user(argp, data, size))
+		ret = -EFAULT;
+	free_pages_exact(data, size);
+
+	return ret;
+}
+
+static void *psp_copy_from_user(void __user *argp, size_t *size)
+{
+	u32 buffer_len;
+	void *data;
+
+	if (copy_from_user(&buffer_len, argp, sizeof(buffer_len)))
+		return ERR_PTR(-EFAULT);
+
+	/* Sanity-check the user-supplied length before allocating */
+	if (buffer_len < sizeof(buffer_len) ||
+	    buffer_len > (PAGE_SIZE << (MAX_ORDER - 1)))
+		return ERR_PTR(-EINVAL);
+
+	data = alloc_pages_exact(buffer_len, GFP_KERNEL | __GFP_ZERO);
+	if (!data)
+		return ERR_PTR(-ENOMEM);
+	*size = buffer_len;
+
+	if (copy_from_user(data, argp, buffer_len)) {
+		free_pages_exact(data, *size);
+		return ERR_PTR(-EFAULT);
+	}
+
+	return data;
+}
+
+static long psp_ioctl(struct file *file, unsigned int ioctl, unsigned long arg)
+{
+	int ret = -EFAULT;
+	void *data = NULL;
+	size_t buffer_len = 0;
+	void __user *argp = (void __user *)arg;
+	struct psp_issue_cmd input;
+
+	if (ioctl != PSP_ISSUE_CMD)
+		return -ENOTTY;
+
+	/* get input parameters */
+	if (copy_from_user(&input, argp, sizeof(struct psp_issue_cmd)))
+	       return -EFAULT;
+
+	if (input.cmd > PSP_CMD_MAX)
+		return -EINVAL;
+
+	switch (input.cmd) {
+
+	case PSP_CMD_INIT: {
+		struct psp_data_init *init;
+
+		data = psp_copy_from_user((void __user *)input.opaque,
+					  &buffer_len);
+		if (IS_ERR(data))
+			break;
+
+		init = data;
+		ret = psp_platform_init(init, &input.psp_ret);
+		break;
+	}
+	case PSP_CMD_SHUTDOWN: {
+		ret = psp_platform_shutdown(&input.psp_ret);
+		break;
+	}
+	case PSP_CMD_FACTORY_RESET: {
+		ret = psp_platform_factory_reset(&input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PLATFORM_STATUS: {
+		struct psp_data_status *status;
+
+		data = psp_copy_from_user((void __user *)input.opaque,
+					  &buffer_len);
+		if (IS_ERR(data))
+			break;
+
+		status = data;
+		ret = psp_platform_status(status, &input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PEK_GEN: {
+		ret = psp_platform_pek_gen(&input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PEK_CSR: {
+		struct psp_data_pek_csr *pek_csr;
+
+		data = psp_copy_from_user((void __user *)input.opaque,
+					  &buffer_len);
+		if (IS_ERR(data))
+			break;
+
+		pek_csr = data;
+		ret = psp_platform_pek_csr(pek_csr, &input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PEK_CERT_IMPORT: {
+		struct psp_data_pek_cert_import *import;
+
+		data = psp_copy_from_user((void __user *)input.opaque,
+					  &buffer_len);
+		if (IS_ERR(data))
+			break;
+
+		import = data;
+		ret = psp_platform_pek_cert_import(import, &input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PDH_GEN: {
+		ret = psp_platform_pdh_gen(&input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PDH_CERT_EXPORT: {
+		struct psp_data_pdh_cert_export *export;
+
+		data = psp_copy_from_user((void __user *)input.opaque,
+					  &buffer_len);
+		if (IS_ERR(data))
+			break;
+
+		export = data;
+		ret = psp_platform_pdh_cert_export(export, &input.psp_ret);
+		break;
+	}
+	default:
+		ret = -EINVAL;
+	}
+
+	if (IS_ERR(data))
+		ret = PTR_ERR(data);
+	else if (data && psp_copy_to_user((void __user *)input.opaque,
+					  data, buffer_len))
+		ret = -EFAULT;
+
+	if (copy_to_user(argp, &input, sizeof(struct psp_issue_cmd)))
+		ret = -EFAULT;
+
+	return ret;
+}
+
+static const struct file_operations fops = {
+	.owner = THIS_MODULE,
+	.unlocked_ioctl = psp_ioctl,
+};
+
+int psp_ops_init(struct psp_device *psp)
+{
+	struct miscdevice *misc = &psp->misc;
+
+	misc->minor = MISC_DYNAMIC_MINOR;
+	misc->name = psp->name;
+	misc->fops = &fops;
+
+	return misc_register(misc);
+}
+
+void psp_ops_exit(struct psp_device *psp)
+{
+	misc_deregister(&psp->misc);
+}
diff --git a/drivers/crypto/psp/psp-pci.c b/drivers/crypto/psp/psp-pci.c
new file mode 100644
index 0000000..2b4c379
--- /dev/null
+++ b/drivers/crypto/psp/psp-pci.c
@@ -0,0 +1,376 @@
+/*
+ * AMD Platform Security Processor (PSP) driver
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/pci_ids.h>
+#include <linux/dma-mapping.h>
+#include <linux/sched.h>
+#include <linux/interrupt.h>
+
+#include "psp-dev.h"
+
+#define IO_BAR				2
+#define IO_OFFSET			0x10500
+
+#define MSIX_VECTORS			2
+
+struct psp_msix {
+	u32 vector;
+	char name[16];
+};
+
+struct psp_pci {
+	struct pci_dev *pdev;
+	int msix_count;
+	struct psp_msix msix[MSIX_VECTORS];
+};
+
+static int psp_get_msix_irqs(struct psp_device *psp)
+{
+	struct psp_pci *psp_pci = psp->dev_specific;
+	struct device *dev = psp->dev;
+	struct pci_dev *pdev = to_pci_dev(dev);
+	struct msix_entry msix_entry[MSIX_VECTORS];
+	unsigned int name_len = sizeof(psp_pci->msix[0].name);
+	int v, ret;
+
+	for (v = 0; v < ARRAY_SIZE(msix_entry); v++)
+		msix_entry[v].entry = v;
+
+	ret = pci_enable_msix_range(pdev, msix_entry, 1, v);
+	if (ret < 0)
+		return ret;
+
+	psp_pci->msix_count = ret;
+	for (v = 0; v < psp_pci->msix_count; v++) {
+		/* Set the interrupt names and request the irqs */
+		snprintf(psp_pci->msix[v].name, name_len, "%s-%u", psp->name, v);
+		psp_pci->msix[v].vector = msix_entry[v].vector;
+		ret = request_irq(psp_pci->msix[v].vector, psp_irq_handler,
+				  0, psp_pci->msix[v].name, dev);
+		if (ret) {
+			dev_notice(dev, "unable to allocate MSI-X IRQ (%d)\n",
+				   ret);
+			goto e_irq;
+		}
+	}
+
+	return 0;
+
+e_irq:
+	while (v--)
+		free_irq(psp_pci->msix[v].vector, dev);
+	pci_disable_msix(pdev);
+	psp_pci->msix_count = 0;
+
+	return ret;
+}
+
+static int psp_get_msi_irq(struct psp_device *psp)
+{
+	struct device *dev = psp->dev;
+	struct pci_dev *pdev = to_pci_dev(dev);
+	int ret;
+
+	ret = pci_enable_msi(pdev);
+	if (ret)
+		return ret;
+
+	psp->irq = pdev->irq;
+	ret = request_irq(psp->irq, psp_irq_handler, 0, psp->name, dev);
+	if (ret) {
+		dev_notice(dev, "unable to allocate MSI IRQ (%d)\n", ret);
+		goto e_msi;
+	}
+
+	return 0;
+
+e_msi:
+	pci_disable_msi(pdev);
+
+	return ret;
+}
+
+static int psp_get_irqs(struct psp_device *psp)
+{
+	struct device *dev = psp->dev;
+	int ret;
+
+	ret = psp_get_msix_irqs(psp);
+	if (!ret)
+		return 0;
+
+	/* Couldn't get MSI-X vectors, try MSI */
+	dev_notice(dev, "could not enable MSI-X (%d), trying MSI\n", ret);
+	ret = psp_get_msi_irq(psp);
+	if (!ret)
+		return 0;
+
+	/* Couldn't get an MSI interrupt either, give up */
+	dev_notice(dev, "could not enable MSI (%d)\n", ret);
+
+	return ret;
+}
+
+static void psp_free_irqs(struct psp_device *psp)
+{
+	struct psp_pci *psp_pci = psp->dev_specific;
+	struct device *dev = psp->dev;
+	struct pci_dev *pdev = to_pci_dev(dev);
+
+	if (psp_pci->msix_count) {
+		while (psp_pci->msix_count--)
+			free_irq(psp_pci->msix[psp_pci->msix_count].vector,
+				 dev);
+		pci_disable_msix(pdev);
+	} else {
+		free_irq(psp->irq, dev);
+		pci_disable_msi(pdev);
+	}
+}
+
+static bool psp_is_master(struct psp_device *cur, struct psp_device *new)
+{
+	struct psp_pci *psp_pci_cur, *psp_pci_new;
+	struct pci_dev *pdev_cur, *pdev_new;
+
+	psp_pci_cur = cur->dev_specific;
+	psp_pci_new = new->dev_specific;
+
+	pdev_cur = psp_pci_cur->pdev;
+	pdev_new = psp_pci_new->pdev;
+
+	if (pdev_new->bus->number != pdev_cur->bus->number)
+		return pdev_new->bus->number < pdev_cur->bus->number;
+
+	return pdev_new->devfn < pdev_cur->devfn;
+}
+
+static struct psp_device *psp_get_master(struct list_head *list)
+{
+	struct psp_device *psp, *tmp;
+
+	psp = NULL;
+	list_for_each_entry(tmp, list, entry) {
+		if (!psp || psp_is_master(psp, tmp))
+			psp = tmp;
+	}
+
+	return psp;
+}
+
+static int psp_find_mmio_area(struct psp_device *psp)
+{
+	struct device *dev = psp->dev;
+	struct pci_dev *pdev = to_pci_dev(dev);
+	unsigned long io_flags;
+
+	io_flags = pci_resource_flags(pdev, IO_BAR);
+	if (io_flags & IORESOURCE_MEM)
+		return IO_BAR;
+
+	return -EIO;
+}
+
+static int psp_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+	struct psp_device *psp;
+	struct psp_pci *psp_pci;
+	struct device *dev = &pdev->dev;
+	unsigned int bar;
+	int ret;
+
+	ret = -ENOMEM;
+	psp = psp_alloc_struct(dev);
+	if (!psp)
+		goto e_err;
+
+	psp_pci = devm_kzalloc(dev, sizeof(*psp_pci), GFP_KERNEL);
+	if (!psp_pci) {
+		ret = -ENOMEM;
+		goto e_err;
+	}
+	psp_pci->pdev = pdev;
+	psp->dev_specific = psp_pci;
+	psp->get_irq = psp_get_irqs;
+	psp->free_irq = psp_free_irqs;
+	psp->get_master = psp_get_master;
+
+	ret = pci_enable_device(pdev);
+	if (ret) {
+		dev_err(dev, "pci_enable_device failed (%d)\n", ret);
+		goto e_err;
+	}
+
+	ret = pci_request_regions(pdev, PSP_DRIVER_NAME);
+	if (ret) {
+		dev_err(dev, "pci_request_regions failed (%d)\n", ret);
+		goto e_device;
+	}
+
+	pci_set_master(pdev);
+
+	ret = psp_find_mmio_area(psp);
+	if (ret < 0)
+		goto e_regions;
+	bar = ret;
+
+	ret = -EIO;
+	psp->io_map = pci_iomap(pdev, bar, 0);
+	if (!psp->io_map) {
+		dev_err(dev, "pci_iomap failed\n");
+		goto e_regions;
+	}
+	psp->io_regs = psp->io_map + IO_OFFSET;
+
+	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(48));
+	if (ret) {
+		ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
+		if (ret) {
+			dev_err(dev, "dma_set_mask_and_coherent failed (%d)\n",
+				ret);
+			goto e_iomap;
+		}
+	}
+
+	dev_set_drvdata(dev, psp);
+
+	ret = psp_init(psp);
+	if (ret)
+		goto e_iomap;
+
+	dev_notice(dev, "enabled\n");
+
+	return 0;
+
+e_iomap:
+	pci_iounmap(pdev, psp->io_map);
+
+e_regions:
+	pci_release_regions(pdev);
+
+e_device:
+	pci_disable_device(pdev);
+
+e_err:
+	dev_notice(dev, "initialization failed\n");
+	return ret;
+}
+
+static void psp_pci_remove(struct pci_dev *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct psp_device *psp = dev_get_drvdata(dev);
+
+	if (!psp)
+		return;
+
+	psp_destroy(psp);
+
+	pci_iounmap(pdev, psp->io_map);
+
+	pci_release_regions(pdev);
+
+	pci_disable_device(pdev);
+
+	dev_notice(dev, "disabled\n");
+}
+
+static const struct pci_device_id psp_pci_table[] = {
+	{ PCI_VDEVICE(AMD, 0x1456), },
+	/* Last entry must be zero */
+	{ 0, }
+};
+MODULE_DEVICE_TABLE(pci, psp_pci_table);
+
+static struct pci_driver psp_pci_driver = {
+	.name = PSP_DRIVER_NAME,
+	.id_table = psp_pci_table,
+	.probe = psp_pci_probe,
+	.remove = psp_pci_remove,
+};
+
+int psp_pci_init(void)
+{
+	return pci_register_driver(&psp_pci_driver);
+}
+
+void psp_pci_exit(void)
+{
+	pci_unregister_driver(&psp_pci_driver);
+}
diff --git a/include/linux/ccp-psp.h b/include/linux/ccp-psp.h
new file mode 100644
index 0000000..b5e791c
--- /dev/null
+++ b/include/linux/ccp-psp.h
@@ -0,0 +1,833 @@
+/*
+ * AMD Platform Security Processor (PSP) driver
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __CCP_PSP_H__
+#define __CCP_PSP_H__
+
+#include <uapi/linux/ccp-psp.h>
+
+#ifdef CONFIG_X86
+#include <asm/mem_encrypt.h>
+
+#define __psp_pa(x)	__sme_pa(x)
+#else
+#define __psp_pa(x)	__pa(x)
+#endif
+
+/**
+ * struct psp_data_activate - PSP ACTIVATE command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to activate
+ * @asid: asid assigned to the VM
+ */
+struct __attribute__ ((__packed__)) psp_data_activate {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u32 asid;				/* In */
+};
+
+/**
+ * struct psp_data_deactivate - PSP DEACTIVATE command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to deactivate
+ */
+struct __attribute__ ((__packed__)) psp_data_deactivate {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+};
+
+/**
+ * struct psp_data_launch_start - PSP LAUNCH_START command parameters
+ * @hdr: command header
+ * @handle: handle assigned to the VM
+ * @flags: configuration flags for the VM
+ * @policy: policy information for the VM
+ * @dh_pub_qx: the Qx parameter of the VM owner's ECDH public key
+ * @dh_pub_qy: the Qy parameter of the VM owner's ECDH public key
+ * @nonce: nonce generated by the VM owner
+ */
+struct __attribute__ ((__packed__)) psp_data_launch_start {
+	struct psp_data_header hdr;
+	u32 handle;				/* In/Out */
+	u32 flags;				/* In */
+	u32 policy;				/* In */
+	u8  dh_pub_qx[32];			/* In */
+	u8  dh_pub_qy[32];			/* In */
+	u8  nonce[16];				/* In */
+};
+
+/**
+ * struct psp_data_launch_update - PSP LAUNCH_UPDATE command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to update
+ * @length: length of memory to be encrypted
+ * @address: physical address of memory region to encrypt
+ */
+struct __attribute__ ((__packed__)) psp_data_launch_update {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u64 address;				/* In */
+	u32 length;				/* In */
+};
+
+/**
+ * struct psp_data_launch_vcpus - PSP LAUNCH_FINISH VCPU state information
+ * @state_length: length of the VCPU state information to measure
+ * @state_mask_addr: mask of the bytes within the VCPU state information
+ *                   to use in the measurement
+ * @state_count: number of VCPUs to measure
+ * @state_addr: physical address of the VCPU state (VMCB)
+ */
+struct __attribute__ ((__packed__)) psp_data_launch_vcpus {
+	u32 state_length;			/* In */
+	u64 state_mask_addr;			/* In */
+	u32 state_count;			/* In */
+	u64 state_addr[];			/* In */
+};
+
+/**
+ * struct psp_data_launch_finish - PSP LAUNCH_FINISH command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to process
+ * @measurement: the measurement of the encrypted VM memory areas
+ * @vcpus: the VCPU state information to include in the measurement
+ */
+struct __attribute__ ((__packed__)) psp_data_launch_finish {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u8  measurement[32];			/* In/Out */
+	struct psp_data_launch_vcpus vcpus;	/* In */
+};
+
+/**
+ * struct psp_data_decommission - PSP DECOMMISSION command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to decommission
+ */
+struct __attribute__ ((__packed__)) psp_data_decommission {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+};
+
+/**
+ * struct psp_data_guest_status - PSP GUEST_STATUS command parameters
+ * @hdr: command header
+ * @handle: handle of the VM whose status is being retrieved
+ * @policy: policy information for the VM
+ * @asid: current ASID of the VM
+ * @state: current state of the VM
+ */
+struct __attribute__ ((__packed__)) psp_data_guest_status {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u32 policy;				/* Out */
+	u32 asid;				/* Out */
+	u8 state;				/* Out */
+};
+
+/**
+ * struct psp_data_dbg - PSP DBG_ENCRYPT/DBG_DECRYPT command parameters
+ * @hdr: command header
+ * @handle: handle of the VM on which to perform the debug operation
+ * @src_addr: source address of data to operate on
+ * @dst_addr: destination address of data to operate on
+ * @length: length of data to operate on
+ */
+struct __attribute__ ((__packed__)) psp_data_dbg {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u64 src_addr;				/* In */
+	u64 dst_addr;				/* In */
+	u32 length;				/* In */
+};
+
+/**
+ * struct psp_data_receive_start - PSP RECEIVE_START command parameters
+ *
+ * @hdr: command header
+ * @handle: handle of the VM receiving the guest
+ * @flags: flags for the receive process
+ * @policy: guest policy flags
+ * @policy_meas: HMAC of the policy keyed with the TIK
+ * @wrapped_tek: wrapped transport encryption key
+ * @wrapped_tik: wrapped transport integrity key
+ * @ten: transport encryption nonce
+ * @dh_pub_qx: qx parameter of the origin's ECDH public key
+ * @dh_pub_qy: qy parameter of the origin's ECDH public key
+ * @nonce: nonce generated by the origin
+ */
+struct __attribute__((__packed__)) psp_data_receive_start {
+	struct psp_data_header hdr;	/* In/Out */
+	u32 handle;			/* In/Out */
+	u32 flags;			/* In */
+	u32 policy;			/* In */
+	u8 policy_meas[32];		/* In */
+	u8 wrapped_tek[24];		/* In */
+	u8 reserved1[8];
+	u8 wrapped_tik[24];		/* In */
+	u8 reserved2[8];
+	u8 ten[16];			/* In */
+	u8 dh_pub_qx[32];		/* In */
+	u8 dh_pub_qy[32];		/* In */
+	u8 nonce[16];			/* In */
+};
+
+/**
+ * struct psp_data_receive_update - PSP RECEIVE_UPDATE command parameters
+ *
+ * @hdr: command header
+ * @handle: handle of the VM receiving the guest
+ * @iv: initialization vector for this blob of memory
+ * @length: length of memory to be encrypted
+ * @address: physical address of memory region to encrypt
+ */
+struct __attribute__((__packed__)) psp_data_receive_update {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u8 iv[16];				/* In */
+	u64 address;				/* In */
+	u32 length;				/* In */
+};
+
+/**
+ * struct psp_data_receive_finish - PSP RECEIVE_FINISH command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to process
+ * @measurement: the measurement of the transported guest
+ */
+struct __attribute__ ((__packed__)) psp_data_receive_finish {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u8  measurement[32];			/* In */
+};
+
+/**
+ * struct psp_data_send_start - PSP SEND_START command parameters
+ * @hdr: command header
+ * @nonce: nonce generated by firmware
+ * @policy: guest policy flags
+ * @policy_meas: HMAC of policy keyed with TIK
+ * @wrapped_tek: wrapped transport encryption key
+ * @wrapped_tik: wrapped transport integrity key
+ * @ten: transport encryption nonce
+ * @iv: the IV of transport encryption block
+ * @handle: handle of the VM to process
+ * @flags: flags for send command
+ * @api_major: API major number
+ * @api_minor: API minor number
+ * @serial: platform serial number
+ * @dh_pub_qx: the Qx parameter of the target DH public key
+ * @dh_pub_qy: the Qy parameter of the target DH public key
+ * @pek_sig_r: the r component of the PEK signature
+ * @pek_sig_s: the s component of the PEK signature
+ * @cek_sig_r: the r component of the CEK signature
+ * @cek_sig_s: the s component of the CEK signature
+ * @cek_pub_qx: the Qx parameter of the CEK public key
+ * @cek_pub_qy: the Qy parameter of the CEK public key
+ * @ask_sig_r: the r component of the ASK signature
+ * @ask_sig_s: the s component of the ASK signature
+ * @ncerts: number of certificates in certificate chain
+ * @cert_length: length of the certificate chain
+ * @certs: the certificate chain
+ */
+struct __attribute__((__packed__)) psp_data_send_start {
+	struct psp_data_header hdr;			/* In/Out */
+	u8 nonce[16];					/* Out */
+	u32 policy;					/* Out */
+	u8 policy_meas[32];				/* Out */
+	u8 wrapped_tek[24];				/* Out */
+	u8 reserved1[8];
+	u8 wrapped_tik[24];				/* Out */
+	u8 reserved2[8];
+	u8 ten[16];					/* Out */
+	u8 iv[16];					/* Out */
+	u32 handle;					/* In */
+	u32 flags;					/* In */
+	u8 api_major;					/* In */
+	u8 api_minor;					/* In */
+	u8 reserved3[2];
+	u32 serial;					/* In */
+	u8 dh_pub_qx[32];				/* In */
+	u8 dh_pub_qy[32];				/* In */
+	u8 pek_sig_r[32];				/* In */
+	u8 pek_sig_s[32];				/* In */
+	u8 cek_sig_r[32];				/* In */
+	u8 cek_sig_s[32];				/* In */
+	u8 cek_pub_qx[32];				/* In */
+	u8 cek_pub_qy[32];				/* In */
+	u8 ask_sig_r[32];				/* In */
+	u8 ask_sig_s[32];				/* In */
+	u32 ncerts;					/* In */
+	u32 cert_length;				/* In */
+	u8 certs[];					/* In */
+};
+
+/**
+ * struct psp_data_send_update - PSP SEND_UPDATE command parameters
+ *
+ * @hdr: command header
+ * @handle: handle of the VM whose memory is being sent
+ * @length: length of memory region to encrypt
+ * @src_addr: physical address of memory region to encrypt from
+ * @dst_addr: physical address of memory region to encrypt to
+ */
+struct __attribute__((__packed__)) psp_data_send_update {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u64 src_addr;				/* In */
+	u64 dst_addr;				/* In */
+	u32 length;				/* In */
+};
+
+/**
+ * struct psp_data_send_finish - PSP SEND_FINISH command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to process
+ * @measurement: the measurement of the transported guest
+ */
+struct __attribute__ ((__packed__)) psp_data_send_finish {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u8  measurement[32];			/* Out */
+};
+
+#if defined(CONFIG_CRYPTO_DEV_PSP_DD) || \
+	defined(CONFIG_CRYPTO_DEV_PSP_DD_MODULE)
+
+/**
+ * psp_platform_init - perform PSP INIT command
+ *
+ * @init: psp_data_init structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_init(struct psp_data_init *init, int *psp_ret);
+
+/**
+ * psp_platform_shutdown - perform PSP SHUTDOWN command
+ *
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_shutdown(int *psp_ret);
+
+/**
+ * psp_platform_status - perform PSP PLATFORM_STATUS command
+ *
+ * @init: psp_data_status structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_status(struct psp_data_status *status, int *psp_ret);
+
+/**
+ * psp_guest_launch_start - perform PSP LAUNCH_START command
+ *
+ * @start: psp_data_launch_start structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_launch_start(struct psp_data_launch_start *start, int *psp_ret);
+
+/**
+ * psp_guest_launch_update - perform PSP LAUNCH_UPDATE command
+ *
+ * @update: psp_data_launch_update structure to be processed
+ * @timeout: command timeout
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_launch_update(struct psp_data_launch_update *update,
+			    unsigned int timeout, int *psp_ret);
+
+/**
+ * psp_guest_launch_finish - perform PSP LAUNCH_FINISH command
+ *
+ * @finish: psp_data_launch_finish structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_launch_finish(struct psp_data_launch_finish *finish, int *psp_ret);
+
+/**
+ * psp_guest_activate - perform PSP ACTIVATE command
+ *
+ * @activate: psp_data_activate structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_activate(struct psp_data_activate *activate, int *psp_ret);
+
+/**
+ * psp_guest_deactivate - perform PSP DEACTIVATE command
+ *
+ * @deactivate: psp_data_deactivate structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_deactivate(struct psp_data_deactivate *deactivate, int *psp_ret);
+
+/**
+ * psp_guest_df_flush - perform PSP DF_FLUSH command
+ *
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_df_flush(int *psp_ret);
+
+/**
+ * psp_guest_decommission - perform PSP DECOMMISSION command
+ *
+ * @decommission: psp_data_decommission structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_decommission(struct psp_data_decommission *decommission,
+			   int *psp_ret);
+
+/**
+ * psp_guest_status - perform PSP GUEST_STATUS command
+ *
+ * @status: psp_data_guest_status structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_status(struct psp_data_guest_status *status, int *psp_ret);
+
+/**
+ * psp_dbg_decrypt - perform PSP DBG_DECRYPT command
+ *
+ * @dbg: psp_data_dbg structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_dbg_decrypt(struct psp_data_dbg *dbg, int *psp_ret);
+
+/**
+ * psp_dbg_encrypt - perform PSP DBG_ENCRYPT command
+ *
+ * @dbg: psp_data_dbg structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_dbg_encrypt(struct psp_data_dbg *dbg, int *psp_ret);
+
+/**
+ * psp_guest_receive_start - perform PSP RECEIVE_START command
+ *
+ * @start: psp_data_receive_start structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_receive_start(struct psp_data_receive_start *start, int *psp_ret);
+
+/**
+ * psp_guest_receive_update - perform PSP RECEIVE_UPDATE command
+ *
+ * @update: psp_data_receive_update structure to be processed
+ * @timeout: command timeout
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_receive_update(struct psp_data_receive_update *update,
+			    unsigned int timeout, int *psp_ret);
+
+/**
+ * psp_guest_receive_finish - perform PSP RECEIVE_FINISH command
+ *
+ * @finish: psp_data_receive_finish structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_receive_finish(struct psp_data_receive_finish *finish,
+			     int *psp_ret);
+
+/**
+ * psp_guest_send_start - perform PSP SEND_START command
+ *
+ * @start: psp_data_send_start structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_send_start(struct psp_data_send_start *start, int *psp_ret);
+
+/**
+ * psp_guest_send_update - perform PSP SEND_UPDATE command
+ *
+ * @update: psp_data_send_update structure to be processed
+ * @timeout: command timeout
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_send_update(struct psp_data_send_update *update,
+			    unsigned int timeout, int *psp_ret);
+
+/**
+ * psp_guest_send_finish - perform PSP SEND_FINISH command
+ *
+ * @finish: psp_data_send_finish structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_send_finish(struct psp_data_send_finish *finish,
+			     int *psp_ret);
+
+/**
+ * psp_platform_pdh_gen - perform PSP PDH_GEN command
+ *
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_pdh_gen(int *psp_ret);
+
+/**
+ * psp_platform_pdh_cert_export - perform PSP PDH_CERT_EXPORT command
+ *
+ * @data: psp_data_platform_pdh_cert_export structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_pdh_cert_export(struct psp_data_pdh_cert_export *data,
+				int *psp_ret);
+
+/**
+ * psp_platform_pek_gen - perform PSP PEK_GEN command
+ *
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_pek_gen(int *psp_ret);
+
+/**
+ * psp_platform_pek_cert_import - perform PSP PEK_CERT_IMPORT command
+ *
+ * @data: psp_data_platform_pek_cert_import structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_pek_cert_import(struct psp_data_pek_cert_import *data,
+				int *psp_ret);
+
+/**
+ * psp_platform_pek_csr - perform PSP PEK_CSR command
+ *
+ * @data: psp_data_platform_pek_csr structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_pek_csr(struct psp_data_pek_csr *data, int *psp_ret);
+
+/**
+ * psp_platform_factory_reset - perform PSP FACTORY_RESET command
+ *
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_factory_reset(int *psp_ret);
+
+#else	/* CONFIG_CRYPTO_DEV_PSP_DD is not enabled */
+
+static inline int psp_platform_status(struct psp_data_status *status,
+				      int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_init(struct psp_data_init *init, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_shutdown(int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_launch_start(struct psp_data_launch_start *start,
+					 int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_launch_update(struct psp_data_launch_update *update,
+					  unsigned int timeout, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_launch_finish(struct psp_data_launch_finish *finish,
+					  int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_activate(struct psp_data_activate *activate,
+				     int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_deactivate(struct psp_data_deactivate *deactivate,
+				       int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_df_flush(int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_decommission(struct psp_data_decommission *decommission,
+					 int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_status(struct psp_data_guest_status *status,
+				   int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_dbg_decrypt(struct psp_data_dbg *dbg, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_dbg_encrypt(struct psp_data_dbg *dbg, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_receive_start(struct psp_data_receive_start *start,
+					 int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_receive_update(struct psp_data_receive_update *update,
+					  unsigned int timeout, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_receive_finish(struct psp_data_receive_finish *finish,
+					  int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_send_start(struct psp_data_send_start *start,
+					 int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_send_update(struct psp_data_send_update *update,
+					  unsigned int timeout, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_send_finish(struct psp_data_send_finish *finish,
+					  int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_pdh_gen(int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_pdh_cert_export(struct psp_data_pdh_cert_export *data,
+				int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_pek_gen(int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_pek_cert_import(struct psp_data_pek_cert_import *data,
+				int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_pek_csr(struct psp_data_pek_csr *data, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_factory_reset(int *psp_ret)
+{
+	return -ENODEV;
+}
+
+#endif	/* CONFIG_CRYPTO_DEV_PSP_DD */
+
+#endif	/* __CPP_PSP_H__ */
diff --git a/include/uapi/linux/Kbuild b/include/uapi/linux/Kbuild
index 185f8ea..af2511a 100644
--- a/include/uapi/linux/Kbuild
+++ b/include/uapi/linux/Kbuild
@@ -470,3 +470,4 @@ header-y += xilinx-v4l2-controls.h
 header-y += zorro.h
 header-y += zorro_ids.h
 header-y += userfaultfd.h
+header-y += ccp-psp.h
diff --git a/include/uapi/linux/ccp-psp.h b/include/uapi/linux/ccp-psp.h
new file mode 100644
index 0000000..e780b46
--- /dev/null
+++ b/include/uapi/linux/ccp-psp.h
@@ -0,0 +1,182 @@
+#ifndef _UAPI_LINUX_CCP_PSP_
+#define _UAPI_LINUX_CCP_PSP_
+
+/*
+ * Userspace interface for communicating with the CCP-PSP driver.
+ */
+
+#include <linux/types.h>
+#include <linux/ioctl.h>
+
+/**
+ * struct psp_data_header - Common PSP communication header
+ * @buffer_len: length of the buffer supplied to the PSP
+ */
+struct __attribute__ ((__packed__)) psp_data_header {
+	__u32 buffer_len;				/* In/Out */
+};
+
+/**
+ * struct psp_data_init - PSP INIT command parameters
+ * @hdr: command header
+ * @flags: processing flags
+ */
+struct __attribute__ ((__packed__)) psp_data_init {
+	struct psp_data_header hdr;
+	__u32 flags;				/* In */
+};
+
+/**
+ * struct psp_data_status - PSP PLATFORM_STATUS command parameters
+ * @hdr: command header
+ * @api_major: major API version
+ * @api_minor: minor API version
+ * @state: platform state
+ * @cert_status: bit fields describing certificate status
+ * @flags: platform flags
+ * @guest_count: number of active guests
+ */
+struct __attribute__ ((__packed__)) psp_data_status {
+	struct psp_data_header hdr;
+	__u8 api_major;				/* Out */
+	__u8 api_minor;				/* Out */
+	__u8 state;				/* Out */
+	__u8 cert_status;			/* Out */
+	__u32 flags;				/* Out */
+	__u32 guest_count;			/* Out */
+};
+
+/**
+ * struct psp_data_pek_csr - PSP PEK_CSR command parameters
+ * @hdr: command header
+ * @csr: PKCS-formatted certificate signing request
+ */
+struct __attribute__((__packed__)) psp_data_pek_csr {
+	struct psp_data_header hdr;			/* In/Out */
+	__u8 csr[];					/* Out */
+};
+
+/**
+ * struct psp_data_pek_cert_import - PSP PEK_CERT_IMPORT command parameters
+ * @hdr: command header
+ * @ncerts: number of certificates in the chain
+ * @cert_len: length of certificates
+ * @certs: certificate chain starting with the PEK and ending with the CA certificate
+ */
+struct __attribute__((__packed__)) psp_data_pek_cert_import {
+	struct psp_data_header hdr;			/* In/Out */
+	__u32 ncerts;					/* In */
+	__u32 cert_len;					/* In */
+	__u8 certs[];					/* In */
+};
+
+/**
+ * struct psp_data_pdh_cert_export - PSP PDH_CERT_EXPORT command parameters
+ * @hdr: command header
+ * @major: API major number
+ * @minor: API minor number
+ * @serial: platform serial number
+ * @pdh_pub_qx: the Qx parameter of the target PDH public key
+ * @pdh_pub_qy: the Qy parameter of the target PDH public key
+ * @pek_sig_r: the r component of the PEK signature
+ * @pek_sig_s: the s component of the PEK signature
+ * @cek_sig_r: the r component of the CEK signature
+ * @cek_sig_s: the s component of the CEK signature
+ * @cek_pub_qx: the Qx parameter of the CEK public key
+ * @cek_pub_qy: the Qy parameter of the CEK public key
+ * @ncerts: number of certificates in certificate chain
+ * @cert_len: length of certificates
+ * @certs: certificate chain starting with the PEK and ending with the CA certificate
+ */
+struct __attribute__((__packed__)) psp_data_pdh_cert_export {
+	struct psp_data_header hdr;			/* In/Out */
+	__u8 api_major;					/* Out */
+	__u8 api_minor;					/* Out */
+	__u8 reserved1[2];
+	__u32 serial;					/* Out */
+	__u8 pdh_pub_qx[32];				/* Out */
+	__u8 pdh_pub_qy[32];				/* Out */
+	__u8 pek_sig_r[32];				/* Out */
+	__u8 pek_sig_s[32];				/* Out */
+	__u8 cek_sig_r[32];				/* Out */
+	__u8 cek_sig_s[32];				/* Out */
+	__u8 cek_pub_qx[32];				/* Out */
+	__u8 cek_pub_qy[32];				/* Out */
+	__u32 ncerts;					/* Out */
+	__u32 cert_len;					/* Out */
+	__u8 certs[];					/* Out */
+};
+
+/**
+ * enum psp_cmd - PSP platform and guest management commands
+ */
+enum psp_cmd {
+	PSP_CMD_INIT = 1,
+	PSP_CMD_LAUNCH_START,
+	PSP_CMD_LAUNCH_UPDATE,
+	PSP_CMD_LAUNCH_FINISH,
+	PSP_CMD_ACTIVATE,
+	PSP_CMD_DF_FLUSH,
+	PSP_CMD_SHUTDOWN,
+	PSP_CMD_FACTORY_RESET,
+	PSP_CMD_PLATFORM_STATUS,
+	PSP_CMD_PEK_GEN,
+	PSP_CMD_PEK_CSR,
+	PSP_CMD_PEK_CERT_IMPORT,
+	PSP_CMD_PDH_GEN,
+	PSP_CMD_PDH_CERT_EXPORT,
+	PSP_CMD_SEND_START,
+	PSP_CMD_SEND_UPDATE,
+	PSP_CMD_SEND_FINISH,
+	PSP_CMD_RECEIVE_START,
+	PSP_CMD_RECEIVE_UPDATE,
+	PSP_CMD_RECEIVE_FINISH,
+	PSP_CMD_GUEST_STATUS,
+	PSP_CMD_DEACTIVATE,
+	PSP_CMD_DECOMMISSION,
+	PSP_CMD_DBG_DECRYPT,
+	PSP_CMD_DBG_ENCRYPT,
+	PSP_CMD_MAX,
+};
+
+/**
+ * enum psp_ret_code - status codes returned by PSP commands
+ */
+enum psp_ret_code {
+	PSP_RET_SUCCESS = 0,
+	PSP_RET_INVALID_PLATFORM_STATE,
+	PSP_RET_INVALID_GUEST_STATE,
+	PSP_RET_INVALID_CONFIG,
+	PSP_RET_CMDBUF_TOO_SMALL,
+	PSP_RET_ALREADY_OWNED,
+	PSP_RET_INVALID_CERTIFICATE,
+	PSP_RET_POLICY_FAILURE,
+	PSP_RET_INACTIVE,
+	PSP_RET_INVALID_ADDRESS,
+	PSP_RET_BAD_SIGNATURE,
+	PSP_RET_BAD_MEASUREMENT,
+	PSP_RET_ASID_OWNED,
+	PSP_RET_INVALID_ASID,
+	PSP_RET_WBINVD_REQUIRED,
+	PSP_RET_DFFLUSH_REQUIRED,
+	PSP_RET_INVALID_GUEST,
+};
+
+/**
+ * struct psp_issue_cmd - PSP ioctl parameters
+ * @cmd: PSP command to execute
+ * @opaque: userspace address of the command data structure
+ * @psp_ret: PSP return code on failure
+ */
+struct psp_issue_cmd {
+	__u32 cmd;					/* In */
+	__u64 opaque;					/* In */
+	__u32 psp_ret;					/* Out */
+};
+
+#define PSP_IOC_TYPE		'P'
+#define PSP_ISSUE_CMD	_IOWR(PSP_IOC_TYPE, 0x0, struct psp_issue_cmd)
+
+#endif /* _UAPI_LINUX_CCP_PSP_ */
+



* [RFC PATCH v1 18/28] crypto: add AMD Platform Security Processor driver
@ 2016-08-22 23:27   ` Brijesh Singh
  0 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:27 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounin

This driver communicates with the Secure Encrypted Virtualization (SEV)
firmware running within the AMD secure processor and provides a secure
key management interface for SEV guests.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 drivers/crypto/Kconfig       |   11 +
 drivers/crypto/Makefile      |    1 
 drivers/crypto/psp/Kconfig   |    8 
 drivers/crypto/psp/Makefile  |    3 
 drivers/crypto/psp/psp-dev.c |  220 +++++++++++
 drivers/crypto/psp/psp-dev.h |   95 +++++
 drivers/crypto/psp/psp-ops.c |  454 +++++++++++++++++++++++
 drivers/crypto/psp/psp-pci.c |  376 +++++++++++++++++++
 include/linux/ccp-psp.h      |  833 ++++++++++++++++++++++++++++++++++++++++++
 include/uapi/linux/Kbuild    |    1 
 include/uapi/linux/ccp-psp.h |  182 +++++++++
 11 files changed, 2184 insertions(+)
 create mode 100644 drivers/crypto/psp/Kconfig
 create mode 100644 drivers/crypto/psp/Makefile
 create mode 100644 drivers/crypto/psp/psp-dev.c
 create mode 100644 drivers/crypto/psp/psp-dev.h
 create mode 100644 drivers/crypto/psp/psp-ops.c
 create mode 100644 drivers/crypto/psp/psp-pci.c
 create mode 100644 include/linux/ccp-psp.h
 create mode 100644 include/uapi/linux/ccp-psp.h

diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index 1af94e2..3bdbc51 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -464,6 +464,17 @@ if CRYPTO_DEV_CCP
 	source "drivers/crypto/ccp/Kconfig"
 endif
 
+config CRYPTO_DEV_PSP
+	bool "Support for AMD Platform Security Processor"
+	depends on X86 && PCI
+	help
+	  The AMD Platform Security Processor provides hardware
+	  key-management services for VMGuard encrypted memory.
+
+if CRYPTO_DEV_PSP
+	source "drivers/crypto/psp/Kconfig"
+endif
+
 config CRYPTO_DEV_MXS_DCP
 	tristate "Support for Freescale MXS DCP"
 	depends on (ARCH_MXS || ARCH_MXC)
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 3c6432d..1ea1e08 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -3,6 +3,7 @@ obj-$(CONFIG_CRYPTO_DEV_ATMEL_SHA) += atmel-sha.o
 obj-$(CONFIG_CRYPTO_DEV_ATMEL_TDES) += atmel-tdes.o
 obj-$(CONFIG_CRYPTO_DEV_BFIN_CRC) += bfin_crc.o
 obj-$(CONFIG_CRYPTO_DEV_CCP) += ccp/
+obj-$(CONFIG_CRYPTO_DEV_PSP) += psp/
 obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM) += caam/
 obj-$(CONFIG_CRYPTO_DEV_GEODE) += geode-aes.o
 obj-$(CONFIG_CRYPTO_DEV_HIFN_795X) += hifn_795x.o
diff --git a/drivers/crypto/psp/Kconfig b/drivers/crypto/psp/Kconfig
new file mode 100644
index 0000000..acd9b87
--- /dev/null
+++ b/drivers/crypto/psp/Kconfig
@@ -0,0 +1,8 @@
+config CRYPTO_DEV_PSP_DD
+	tristate "PSP Key Management device driver"
+	depends on CRYPTO_DEV_PSP
+	default m
+	help
+	  Provides the interface to use the AMD PSP key management APIs
+	  for use with AMD Secure Encrypted Virtualization (SEV). If you
+	  choose 'M' here, this module will be called psp.
diff --git a/drivers/crypto/psp/Makefile b/drivers/crypto/psp/Makefile
new file mode 100644
index 0000000..1b7d00c
--- /dev/null
+++ b/drivers/crypto/psp/Makefile
@@ -0,0 +1,3 @@
+obj-$(CONFIG_CRYPTO_DEV_PSP_DD) += psp.o
+psp-objs := psp-dev.o psp-ops.o
+psp-$(CONFIG_PCI) += psp-pci.o
diff --git a/drivers/crypto/psp/psp-dev.c b/drivers/crypto/psp/psp-dev.c
new file mode 100644
index 0000000..65d5c7e
--- /dev/null
+++ b/drivers/crypto/psp/psp-dev.c
@@ -0,0 +1,220 @@
+/*
+ * AMD Platform Security Processor (PSP) driver
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/interrupt.h>
+#include <linux/dma-mapping.h>
+#include <linux/string.h>
+#include <linux/wait.h>
+
+#include "psp-dev.h"
+
+MODULE_AUTHOR("Advanced Micro Devices, Inc.");
+MODULE_LICENSE("GPL");
+MODULE_VERSION("0.1.0");
+MODULE_DESCRIPTION("AMD Secure Encrypted Virtualization (SEV) key-management driver prototype");
+
+static struct psp_device *psp_master;
+
+static LIST_HEAD(psp_devs);
+static DEFINE_SPINLOCK(psp_devs_lock);
+
+static atomic_t psp_id;
+
+static void psp_add_device(struct psp_device *psp)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&psp_devs_lock, flags);
+
+	list_add_tail(&psp->entry, &psp_devs);
+	psp_master = psp->get_master(&psp_devs);
+
+	spin_unlock_irqrestore(&psp_devs_lock, flags);
+}
+
+static void psp_del_device(struct psp_device *psp)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&psp_devs_lock, flags);
+
+	list_del(&psp->entry);
+	if (psp == psp_master)
+		psp_master = NULL;
+
+	spin_unlock_irqrestore(&psp_devs_lock, flags);
+}
+
+static void psp_check_support(struct psp_device *psp)
+{
+	if (ioread32(psp->io_regs + PSP_CMDRESP))
+		psp->sev_enabled = 1;
+}
+
+/**
+ * psp_get_master_device - returns a pointer to the PSP master device structure
+ *
+ * Returns NULL if a PSP master device is not present, PSP device structure
+ * otherwise.
+ */
+struct psp_device *psp_get_master_device(void)
+{
+	return psp_master;
+}
+EXPORT_SYMBOL_GPL(psp_get_master_device);
+
+/**
+ * psp_get_device - returns a pointer to the PSP device structure
+ *
+ * Returns NULL if a PSP device is not present, PSP device structure otherwise.
+ */
+struct psp_device *psp_get_device(void)
+{
+	struct psp_device *psp = NULL;
+	unsigned long flags;
+
+	spin_lock_irqsave(&psp_devs_lock, flags);
+
+	if (list_empty(&psp_devs))
+		goto unlock;
+
+	psp = list_first_entry(&psp_devs, struct psp_device, entry);
+
+unlock:
+	spin_unlock_irqrestore(&psp_devs_lock, flags);
+
+	return psp;
+}
+EXPORT_SYMBOL_GPL(psp_get_device);
+
+/**
+ * psp_alloc_struct - allocate and initialize the psp_device struct
+ *
+ * @dev: device struct of the PSP
+ */
+struct psp_device *psp_alloc_struct(struct device *dev)
+{
+	struct psp_device *psp;
+
+	psp = devm_kzalloc(dev, sizeof(*psp), GFP_KERNEL);
+	if (psp == NULL) {
+		dev_err(dev, "unable to allocate device struct\n");
+		return NULL;
+	}
+	psp->dev = dev;
+
+	psp->id = atomic_inc_return(&psp_id);
+	snprintf(psp->name, sizeof(psp->name), "psp%u", psp->id);
+
+	init_waitqueue_head(&psp->int_queue);
+
+	return psp;
+}
+
+/**
+ * psp_init - initialize the PSP device
+ *
+ * @psp: psp_device struct
+ */
+int psp_init(struct psp_device *psp)
+{
+	int ret;
+
+	psp_check_support(psp);
+
+	/* Disable and clear interrupts until ready */
+	iowrite32(0, psp->io_regs + PSP_P2CMSG_INTEN);
+	iowrite32(0xffffffff, psp->io_regs + PSP_P2CMSG_INTSTS);
+
+	/* Request an irq */
+	ret = psp->get_irq(psp);
+	if (ret) {
+		dev_err(psp->dev, "unable to allocate IRQ\n");
+		return ret;
+	}
+
+	/* Make the device struct available */
+	psp_add_device(psp);
+
+	/* Enable interrupts */
+	iowrite32(1 << PSP_CMD_COMPLETE_REG, psp->io_regs + PSP_P2CMSG_INTEN);
+
+	ret = psp_ops_init(psp);
+	if (ret) {
+		dev_err(psp->dev, "psp_ops_init failed (%d)\n", ret);
+		psp_del_device(psp);
+		psp->free_irq(psp);
+		return ret;
+	}
+
+	return 0;
+}
+
+/**
+ * psp_destroy - tear down the PSP device
+ *
+ * @psp: psp_device struct
+ */
+void psp_destroy(struct psp_device *psp)
+{
+	psp_ops_exit(psp);
+
+	/* Remove general access to the device struct */
+	psp_del_device(psp);
+
+	psp->free_irq(psp);
+}
+
+/**
+ * psp_irq_handler - handle interrupts generated by the PSP device
+ *
+ * @irq: the irq associated with the interrupt
+ * @data: the data value supplied when the irq was created
+ */
+irqreturn_t psp_irq_handler(int irq, void *data)
+{
+	struct device *dev = data;
+	struct psp_device *psp = dev_get_drvdata(dev);
+	unsigned int status;
+
+	status = ioread32(psp->io_regs + PSP_P2CMSG_INTSTS);
+	if (status & (1 << PSP_CMD_COMPLETE_REG)) {
+		int reg;
+
+		reg = ioread32(psp->io_regs + PSP_CMDRESP);
+		if (reg & PSP_CMDRESP_RESP) {
+			psp->int_rcvd = 1;
+			wake_up_interruptible(&psp->int_queue);
+		}
+	}
+
+	iowrite32(status, psp->io_regs + PSP_P2CMSG_INTSTS);
+
+	return IRQ_HANDLED;
+}
+
+static int __init psp_mod_init(void)
+{
+	return psp_pci_init();
+}
+module_init(psp_mod_init);
+
+static void __exit psp_mod_exit(void)
+{
+	psp_pci_exit();
+}
+module_exit(psp_mod_exit);
diff --git a/drivers/crypto/psp/psp-dev.h b/drivers/crypto/psp/psp-dev.h
new file mode 100644
index 0000000..bb75ca2
--- /dev/null
+++ b/drivers/crypto/psp/psp-dev.h
@@ -0,0 +1,95 @@
+/*
+ * AMD Platform Security Processor (PSP) driver
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#ifndef __PSP_DEV_H__
+#define __PSP_DEV_H__
+
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/spinlock.h>
+#include <linux/mutex.h>
+#include <linux/list.h>
+#include <linux/wait.h>
+#include <linux/dmapool.h>
+#include <linux/hw_random.h>
+#include <linux/interrupt.h>
+#include <linux/miscdevice.h>
+
+#define PSP_P2CMSG_INTEN		0x0110
+#define PSP_P2CMSG_INTSTS		0x0114
+
+#define PSP_C2PMSG_ATTR_0		0x0118
+#define PSP_C2PMSG_ATTR_1		0x011c
+#define PSP_C2PMSG_ATTR_2		0x0120
+#define PSP_C2PMSG_ATTR_3		0x0124
+#define PSP_P2CMSG_ATTR_0		0x0128
+
+#define PSP_C2PMSG(_num)		((_num) << 2)
+#define PSP_CMDRESP			PSP_C2PMSG(32)
+#define PSP_CMDBUFF_ADDR_LO		PSP_C2PMSG(56)
+#define PSP_CMDBUFF_ADDR_HI		PSP_C2PMSG(57)
+
+#define PSP_P2CMSG(_num)		((_num) << 2)
+#define PSP_CMD_COMPLETE_REG		1
+#define PSP_CMD_COMPLETE		PSP_P2CMSG(PSP_CMD_COMPLETE_REG)
+
+#define PSP_CMDRESP_CMD_SHIFT		16
+#define PSP_CMDRESP_IOC			BIT(0)
+#define PSP_CMDRESP_RESP		BIT(31)
+#define PSP_CMDRESP_ERR_MASK		0xffff
+
+#define PSP_DRIVER_NAME			"psp"
+
+struct psp_device {
+	struct list_head entry;
+
+	struct device *dev;
+
+	unsigned int id;
+	char name[32];
+
+	struct dentry *debugfs;
+	struct miscdevice misc;
+
+	unsigned int sev_enabled;
+
+	/*
+	 * Bus-specific device information
+	 */
+	void *dev_specific;
+	int (*get_irq)(struct psp_device *);
+	void (*free_irq)(struct psp_device *);
+	unsigned int irq;
+	struct psp_device *(*get_master)(struct list_head *list);
+
+	/*
+	 * I/O area used for device communication. Writing to the
+	 * mailbox registers generates an interrupt on the PSP.
+	 */
+	void __iomem *io_map;
+	void __iomem *io_regs;
+
+	/* Interrupt wait queue */
+	wait_queue_head_t int_queue;
+	unsigned int int_rcvd;
+};
+
+struct psp_device *psp_get_master_device(void);
+struct psp_device *psp_get_device(void);
+
+#ifdef CONFIG_PCI
+int psp_pci_init(void);
+void psp_pci_exit(void);
+#else
+static inline int psp_pci_init(void) { return 0; }
+static inline void psp_pci_exit(void) { }
+#endif
+
+struct psp_device *psp_alloc_struct(struct device *dev);
+int psp_init(struct psp_device *psp);
+void psp_destroy(struct psp_device *psp);
+
+int psp_ops_init(struct psp_device *psp);
+void psp_ops_exit(struct psp_device *psp);
+
+irqreturn_t psp_irq_handler(int irq, void *data);
+
+#endif /* __PSP_DEV_H__ */
diff --git a/drivers/crypto/psp/psp-ops.c b/drivers/crypto/psp/psp-ops.c
new file mode 100644
index 0000000..81e8dc8
--- /dev/null
+++ b/drivers/crypto/psp/psp-ops.c
@@ -0,0 +1,454 @@
+/*
+ * AMD Platform Security Processor (PSP) driver
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <linux/module.h>
+#include <linux/fs.h>
+#include <linux/miscdevice.h>
+#include <linux/delay.h>
+#include <linux/jiffies.h>
+#include <linux/wait.h>
+#include <linux/mutex.h>
+#include <linux/ccp-psp.h>
+
+#include "psp-dev.h"
+
+static unsigned int psp_poll;
+module_param(psp_poll, uint, 0444);
+MODULE_PARM_DESC(psp_poll, "Poll for command completion - any non-zero value");
+
+#define PSP_DEFAULT_TIMEOUT	2
+
+static DEFINE_MUTEX(psp_cmd_mutex);
+
+static int psp_wait_cmd_poll(struct psp_device *psp, unsigned int timeout,
+			     unsigned int *reg)
+{
+	int wait = timeout * 10;	/* 100ms sleep => timeout * 10 polls */
+
+	while (wait--) {
+		msleep(100);
+
+		*reg = ioread32(psp->io_regs + PSP_CMDRESP);
+		if (*reg & PSP_CMDRESP_RESP)
+			break;
+	}
+
+	if (wait < 0) {
+		dev_err(psp->dev, "psp command timed out\n");
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}
+
+static int psp_wait_cmd_ioc(struct psp_device *psp, unsigned int timeout,
+			    unsigned int *reg)
+{
+	unsigned long jiffie_timeout = timeout;
+	long ret;
+
+	jiffie_timeout *= HZ;
+
+	ret = wait_event_interruptible_timeout(psp->int_queue, psp->int_rcvd,
+					       jiffie_timeout);
+	if (ret <= 0) {
+		dev_err(psp->dev, "psp command timed out\n");
+		return -ETIMEDOUT;
+	}
+
+	psp->int_rcvd = 0;
+
+	*reg = ioread32(psp->io_regs + PSP_CMDRESP);
+
+	return 0;
+}
+
+static int psp_wait_cmd(struct psp_device *psp, unsigned int timeout,
+			unsigned int *reg)
+{
+	return (*reg & PSP_CMDRESP_IOC) ? psp_wait_cmd_ioc(psp, timeout, reg)
+					: psp_wait_cmd_poll(psp, timeout, reg);
+}
+
+static int psp_issue_cmd(enum psp_cmd cmd, void *data, unsigned int timeout,
+			 int *psp_ret)
+{
+	struct psp_device *psp = psp_get_master_device();
+	unsigned int phys_lsb, phys_msb;
+	unsigned int reg;
+	int ret;
+
+	if (psp_ret)
+		*psp_ret = 0;
+
+	if (!psp)
+		return -ENODEV;
+
+	if (!psp->sev_enabled)
+		return -ENOTSUPP;
+
+	/* Set the physical address for the PSP */
+	phys_lsb = data ? lower_32_bits(__psp_pa(data)) : 0;
+	phys_msb = data ? upper_32_bits(__psp_pa(data)) : 0;
+
+	/* Only one command at a time... */
+	mutex_lock(&psp_cmd_mutex);
+
+	iowrite32(phys_lsb, psp->io_regs + PSP_CMDBUFF_ADDR_LO);
+	iowrite32(phys_msb, psp->io_regs + PSP_CMDBUFF_ADDR_HI);
+	wmb();
+
+	reg = cmd;
+	reg <<= PSP_CMDRESP_CMD_SHIFT;
+	reg |= psp_poll ? 0 : PSP_CMDRESP_IOC;
+	iowrite32(reg, psp->io_regs + PSP_CMDRESP);
+
+	ret = psp_wait_cmd(psp, timeout, &reg);
+	if (ret)
+		goto unlock;
+
+	if (psp_ret)
+		*psp_ret = reg & PSP_CMDRESP_ERR_MASK;
+
+	if (reg & PSP_CMDRESP_ERR_MASK) {
+		dev_err(psp->dev, "psp command %u failed (%#010x)\n",
+			cmd, reg & PSP_CMDRESP_ERR_MASK);
+		ret = -EIO;
+	}
+
+unlock:
+	mutex_unlock(&psp_cmd_mutex);
+
+	return ret;
+}
+
+int psp_platform_init(struct psp_data_init *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_INIT, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_init);
+
+int psp_platform_shutdown(int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_SHUTDOWN, NULL, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_shutdown);
+
+int psp_platform_status(struct psp_data_status *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PLATFORM_STATUS, data,
+			     PSP_DEFAULT_TIMEOUT, psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_status);
+
+int psp_guest_launch_start(struct psp_data_launch_start *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_LAUNCH_START, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_launch_start);
+
+int psp_guest_launch_update(struct psp_data_launch_update *data,
+			    unsigned int timeout, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_LAUNCH_UPDATE, data, timeout, psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_launch_update);
+
+int psp_guest_launch_finish(struct psp_data_launch_finish *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_LAUNCH_FINISH, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_launch_finish);
+
+int psp_guest_activate(struct psp_data_activate *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_ACTIVATE, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_activate);
+
+int psp_guest_deactivate(struct psp_data_deactivate *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_DEACTIVATE, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_deactivate);
+
+int psp_guest_df_flush(int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_DF_FLUSH, NULL, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_df_flush);
+
+int psp_guest_decommission(struct psp_data_decommission *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_DECOMMISSION, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_decommission);
+
+int psp_guest_status(struct psp_data_guest_status *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_GUEST_STATUS, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_status);
+
+int psp_dbg_decrypt(struct psp_data_dbg *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_DBG_DECRYPT, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_dbg_decrypt);
+
+int psp_dbg_encrypt(struct psp_data_dbg *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_DBG_ENCRYPT, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_dbg_encrypt);
+
+int psp_guest_receive_start(struct psp_data_receive_start *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_RECEIVE_START, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_receive_start);
+
+int psp_guest_receive_update(struct psp_data_receive_update *data,
+			    unsigned int timeout, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_RECEIVE_UPDATE, data, timeout, psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_receive_update);
+
+int psp_guest_receive_finish(struct psp_data_receive_finish *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_RECEIVE_FINISH, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_receive_finish);
+
+int psp_guest_send_start(struct psp_data_send_start *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_SEND_START, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_send_start);
+
+int psp_guest_send_update(struct psp_data_send_update *data,
+			    unsigned int timeout, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_SEND_UPDATE, data, timeout, psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_send_update);
+
+int psp_guest_send_finish(struct psp_data_send_finish *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_SEND_FINISH, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_send_finish);
+
+int psp_platform_pdh_gen(int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PDH_GEN, NULL, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_pdh_gen);
+
+int psp_platform_pdh_cert_export(struct psp_data_pdh_cert_export *data,
+				int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PDH_CERT_EXPORT, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_pdh_cert_export);
+
+int psp_platform_pek_gen(int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PEK_GEN, NULL, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_pek_gen);
+
+int psp_platform_pek_cert_import(struct psp_data_pek_cert_import *data,
+				 int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PEK_CERT_IMPORT, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_pek_cert_import);
+
+int psp_platform_pek_csr(struct psp_data_pek_csr *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PEK_CSR, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_pek_csr);
+
+int psp_platform_factory_reset(int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_FACTORY_RESET, NULL, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_factory_reset);
+
+static int psp_copy_to_user(void __user *argp, void *data, size_t size)
+{
+	int ret = 0;
+
+	if (copy_to_user(argp, data, size))
+		ret = -EFAULT;
+	free_pages_exact(data, size);
+
+	return ret;
+}
+
+static void *psp_copy_from_user(void __user *argp, size_t *size)
+{
+	u32 buffer_len;
+	void *data;
+
+	if (copy_from_user(&buffer_len, argp, sizeof(buffer_len)))
+		return ERR_PTR(-EFAULT);
+
+	if (!buffer_len)
+		return ERR_PTR(-EINVAL);
+
+	data = alloc_pages_exact(buffer_len, GFP_KERNEL | __GFP_ZERO);
+	if (!data)
+		return ERR_PTR(-ENOMEM);
+	*size = buffer_len;
+
+	if (copy_from_user(data, argp, buffer_len)) {
+		free_pages_exact(data, *size);
+		return ERR_PTR(-EFAULT);
+	}
+
+	return data;
+}
+
+static long psp_ioctl(struct file *file, unsigned int ioctl, unsigned long arg)
+{
+	int ret = -EFAULT;
+	void *data = NULL;
+	size_t buffer_len = 0;
+	void __user *argp = (void __user *)arg;
+	struct psp_issue_cmd input;
+
+	if (ioctl != PSP_ISSUE_CMD)
+		return -EINVAL;
+
+	/* get input parameters */
+	if (copy_from_user(&input, argp, sizeof(struct psp_issue_cmd)))
+		return -EFAULT;
+
+	if (input.cmd > PSP_CMD_MAX)
+		return -EINVAL;
+
+	switch (input.cmd) {
+
+	case PSP_CMD_INIT: {
+		struct psp_data_init *init;
+
+		data = psp_copy_from_user((void __user *)input.opaque,
+					  &buffer_len);
+		if (IS_ERR(data))
+			break;
+
+		init = data;
+		ret = psp_platform_init(init, &input.psp_ret);
+		break;
+	}
+	case PSP_CMD_SHUTDOWN: {
+		ret = psp_platform_shutdown(&input.psp_ret);
+		break;
+	}
+	case PSP_CMD_FACTORY_RESET: {
+		ret = psp_platform_factory_reset(&input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PLATFORM_STATUS: {
+		struct psp_data_status *status;
+
+		data = psp_copy_from_user((void __user *)input.opaque,
+					  &buffer_len);
+		if (IS_ERR(data))
+			break;
+
+		status = data;
+		ret = psp_platform_status(status, &input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PEK_GEN: {
+		ret = psp_platform_pek_gen(&input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PEK_CSR: {
+		struct psp_data_pek_csr *pek_csr;
+
+		data = psp_copy_from_user((void __user *)input.opaque,
+					  &buffer_len);
+		if (IS_ERR(data))
+			break;
+
+		pek_csr = data;
+		ret = psp_platform_pek_csr(pek_csr, &input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PEK_CERT_IMPORT: {
+		struct psp_data_pek_cert_import *import;
+
+		data = psp_copy_from_user((void __user *)input.opaque,
+					  &buffer_len);
+		if (IS_ERR(data))
+			break;
+
+		import = data;
+		ret = psp_platform_pek_cert_import(import, &input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PDH_GEN: {
+		ret = psp_platform_pdh_gen(&input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PDH_CERT_EXPORT: {
+		struct psp_data_pdh_cert_export *export;
+
+		data = psp_copy_from_user((void __user *)input.opaque,
+					  &buffer_len);
+		if (IS_ERR(data))
+			break;
+
+		export = data;
+		ret = psp_platform_pdh_cert_export(export, &input.psp_ret);
+		break;
+	}
+	default:
+		ret = -EINVAL;
+	}
+
+	if (IS_ERR(data)) {
+		ret = PTR_ERR(data);
+		data = NULL;
+	}
+
+	if (data && psp_copy_to_user((void __user *)input.opaque,
+				     data, buffer_len))
+		ret = -EFAULT;
+
+	if (copy_to_user(argp, &input, sizeof(struct psp_issue_cmd)))
+		ret = -EFAULT;
+
+	return ret;
+}
+
+static const struct file_operations fops = {
+	.owner = THIS_MODULE,
+	.unlocked_ioctl = psp_ioctl,
+};
+
+int psp_ops_init(struct psp_device *psp)
+{
+	struct miscdevice *misc = &psp->misc;
+
+	misc->minor = MISC_DYNAMIC_MINOR;
+	misc->name = psp->name;
+	misc->fops = &fops;
+
+	return misc_register(misc);
+}
+
+void psp_ops_exit(struct psp_device *psp)
+{
+	misc_deregister(&psp->misc);
+}
diff --git a/drivers/crypto/psp/psp-pci.c b/drivers/crypto/psp/psp-pci.c
new file mode 100644
index 0000000..2b4c379
--- /dev/null
+++ b/drivers/crypto/psp/psp-pci.c
@@ -0,0 +1,376 @@
+/*
+ * AMD Platform Security Processor (PSP) driver
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/pci_ids.h>
+#include <linux/dma-mapping.h>
+#include <linux/sched.h>
+#include <linux/interrupt.h>
+
+#include "psp-dev.h"
+
+#define IO_BAR				2
+#define IO_OFFSET			0x10500
+
+#define MSIX_VECTORS			2
+
+struct psp_msix {
+	u32 vector;
+	char name[16];
+};
+
+struct psp_pci {
+	struct pci_dev *pdev;
+	int msix_count;
+	struct psp_msix msix[MSIX_VECTORS];
+};
+
+static int psp_get_msix_irqs(struct psp_device *psp)
+{
+	struct psp_pci *psp_pci = psp->dev_specific;
+	struct device *dev = psp->dev;
+	struct pci_dev *pdev = to_pci_dev(dev);
+	struct msix_entry msix_entry[MSIX_VECTORS];
+	unsigned int name_len = sizeof(psp_pci->msix[0].name) - 1;
+	int v, ret;
+
+	for (v = 0; v < ARRAY_SIZE(msix_entry); v++)
+		msix_entry[v].entry = v;
+
+	ret = pci_enable_msix_range(pdev, msix_entry, 1, v);
+	if (ret < 0)
+		return ret;
+
+	psp_pci->msix_count = ret;
+	for (v = 0; v < psp_pci->msix_count; v++) {
+		/* Set the interrupt names and request the irqs */
+		snprintf(psp_pci->msix[v].name, name_len, "%s-%u", psp->name, v);
+		psp_pci->msix[v].vector = msix_entry[v].vector;
+		ret = request_irq(psp_pci->msix[v].vector, psp_irq_handler,
+				  0, psp_pci->msix[v].name, dev);
+		if (ret) {
+			dev_notice(dev, "unable to allocate MSI-X IRQ (%d)\n",
+				   ret);
+			goto e_irq;
+		}
+	}
+
+	return 0;
+
+e_irq:
+	while (v--)
+		free_irq(psp_pci->msix[v].vector, dev);
+	pci_disable_msix(pdev);
+	psp_pci->msix_count = 0;
+
+	return ret;
+}
+
+static int psp_get_msi_irq(struct psp_device *psp)
+{
+	struct device *dev = psp->dev;
+	struct pci_dev *pdev = to_pci_dev(dev);
+	int ret;
+
+	ret = pci_enable_msi(pdev);
+	if (ret)
+		return ret;
+
+	psp->irq = pdev->irq;
+	ret = request_irq(psp->irq, psp_irq_handler, 0, psp->name, dev);
+	if (ret) {
+		dev_notice(dev, "unable to allocate MSI IRQ (%d)\n", ret);
+		goto e_msi;
+	}
+
+	return 0;
+
+e_msi:
+	pci_disable_msi(pdev);
+
+	return ret;
+}
+
+static int psp_get_irqs(struct psp_device *psp)
+{
+	struct device *dev = psp->dev;
+	int ret;
+
+	ret = psp_get_msix_irqs(psp);
+	if (!ret)
+		return 0;
+
+	/* Couldn't get MSI-X vectors, try MSI */
+	dev_notice(dev, "could not enable MSI-X (%d), trying MSI\n", ret);
+	ret = psp_get_msi_irq(psp);
+	if (!ret)
+		return 0;
+
+	/* Couldn't get MSI interrupt */
+	dev_notice(dev, "could not enable MSI (%d)\n", ret);
+
+	return ret;
+}
+
+static void psp_free_irqs(struct psp_device *psp)
+{
+	struct psp_pci *psp_pci = psp->dev_specific;
+	struct device *dev = psp->dev;
+	struct pci_dev *pdev = to_pci_dev(dev);
+
+	if (psp_pci->msix_count) {
+		while (psp_pci->msix_count--)
+			free_irq(psp_pci->msix[psp_pci->msix_count].vector,
+				 dev);
+		pci_disable_msix(pdev);
+	} else {
+		free_irq(psp->irq, dev);
+		pci_disable_msi(pdev);
+	}
+}
+
+static bool psp_is_master(struct psp_device *cur, struct psp_device *new)
+{
+	struct psp_pci *psp_pci_cur, *psp_pci_new;
+	struct pci_dev *pdev_cur, *pdev_new;
+
+	psp_pci_cur = cur->dev_specific;
+	psp_pci_new = new->dev_specific;
+
+	pdev_cur = psp_pci_cur->pdev;
+	pdev_new = psp_pci_new->pdev;
+
+	if (pdev_new->bus->number < pdev_cur->bus->number)
+		return true;
+
+	if (PCI_SLOT(pdev_new->devfn) < PCI_SLOT(pdev_cur->devfn))
+		return true;
+
+	if (PCI_FUNC(pdev_new->devfn) < PCI_FUNC(pdev_cur->devfn))
+		return true;
+
+	return false;
+}
+
+static struct psp_device *psp_get_master(struct list_head *list)
+{
+	struct psp_device *psp, *tmp;
+
+	psp = NULL;
+	list_for_each_entry(tmp, list, entry) {
+		if (!psp || psp_is_master(psp, tmp))
+			psp = tmp;
+	}
+
+	return psp;
+}
+
+static int psp_find_mmio_area(struct psp_device *psp)
+{
+	struct device *dev = psp->dev;
+	struct pci_dev *pdev = to_pci_dev(dev);
+	unsigned long io_flags;
+
+	io_flags = pci_resource_flags(pdev, IO_BAR);
+	if (io_flags & IORESOURCE_MEM)
+		return IO_BAR;
+
+	return -EIO;
+}
+
+static int psp_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+	struct psp_device *psp;
+	struct psp_pci *psp_pci;
+	struct device *dev = &pdev->dev;
+	unsigned int bar;
+	int ret;
+
+	ret = -ENOMEM;
+	psp = psp_alloc_struct(dev);
+	if (!psp)
+		goto e_err;
+
+	psp_pci = devm_kzalloc(dev, sizeof(*psp_pci), GFP_KERNEL);
+	if (!psp_pci) {
+		ret = -ENOMEM;
+		goto e_err;
+	}
+	psp_pci->pdev = pdev;
+	psp->dev_specific = psp_pci;
+	psp->get_irq = psp_get_irqs;
+	psp->free_irq = psp_free_irqs;
+	psp->get_master = psp_get_master;
+
+	ret = pci_request_regions(pdev, PSP_DRIVER_NAME);
+	if (ret) {
+		dev_err(dev, "pci_request_regions failed (%d)\n", ret);
+		goto e_err;
+	}
+
+	ret = pci_enable_device(pdev);
+	if (ret) {
+		dev_err(dev, "pci_enable_device failed (%d)\n", ret);
+		goto e_regions;
+	}
+
+	pci_set_master(pdev);
+
+	ret = psp_find_mmio_area(psp);
+	if (ret < 0)
+		goto e_device;
+	bar = ret;
+
+	ret = -EIO;
+	psp->io_map = pci_iomap(pdev, bar, 0);
+	if (!psp->io_map) {
+		dev_err(dev, "pci_iomap failed\n");
+		goto e_device;
+	}
+	psp->io_regs = psp->io_map + IO_OFFSET;
+
+	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(48));
+	if (ret) {
+		ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
+		if (ret) {
+			dev_err(dev, "dma_set_mask_and_coherent failed (%d)\n",
+				ret);
+			goto e_iomap;
+		}
+	}
+
+	dev_set_drvdata(dev, psp);
+
+	ret = psp_init(psp);
+	if (ret)
+		goto e_iomap;
+
+	dev_notice(dev, "enabled\n");
+
+	return 0;
+
+e_iomap:
+	pci_iounmap(pdev, psp->io_map);
+
+e_device:
+	pci_disable_device(pdev);
+
+e_regions:
+	pci_release_regions(pdev);
+
+e_err:
+	dev_notice(dev, "initialization failed\n");
+	return ret;
+}
+
+static void psp_pci_remove(struct pci_dev *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct psp_device *psp = dev_get_drvdata(dev);
+
+	if (!psp)
+		return;
+
+	psp_destroy(psp);
+
+	pci_iounmap(pdev, psp->io_map);
+
+	pci_disable_device(pdev);
+
+	pci_release_regions(pdev);
+
+	dev_notice(dev, "disabled\n");
+}
+
+static const struct pci_device_id psp_pci_table[] = {
+	{ PCI_VDEVICE(AMD, 0x1456), },
+	/* Last entry must be zero */
+	{ 0, }
+};
+MODULE_DEVICE_TABLE(pci, psp_pci_table);
+
+static struct pci_driver psp_pci_driver = {
+	.name = PSP_DRIVER_NAME,
+	.id_table = psp_pci_table,
+	.probe = psp_pci_probe,
+	.remove = psp_pci_remove,
+};
+
+int psp_pci_init(void)
+{
+	return pci_register_driver(&psp_pci_driver);
+}
+
+void psp_pci_exit(void)
+{
+	pci_unregister_driver(&psp_pci_driver);
+}
diff --git a/include/linux/ccp-psp.h b/include/linux/ccp-psp.h
new file mode 100644
index 0000000..b5e791c
--- /dev/null
+++ b/include/linux/ccp-psp.h
@@ -0,0 +1,833 @@
+/*
+ * AMD Platform Security Processor (PSP) driver
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __CCP_PSP_H__
+#define __CCP_PSP_H__
+
+#include <uapi/linux/ccp-psp.h>
+
+#ifdef CONFIG_X86
+#include <asm/mem_encrypt.h>
+
+#define __psp_pa(x)	__sme_pa(x)
+#else
+#define __psp_pa(x)	__pa(x)
+#endif
+
+/**
+ * struct psp_data_activate - PSP ACTIVATE command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to activate
+ * @asid: asid assigned to the VM
+ */
+struct __attribute__ ((__packed__)) psp_data_activate {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u32 asid;				/* In */
+};
+
+/**
+ * struct psp_data_deactivate - PSP DEACTIVATE command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to deactivate
+ */
+struct __attribute__ ((__packed__)) psp_data_deactivate {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+};
+
+/**
+ * struct psp_data_launch_start - PSP LAUNCH_START command parameters
+ * @hdr: command header
+ * @handle: handle assigned to the VM
+ * @flags: configuration flags for the VM
+ * @policy: policy information for the VM
+ * @dh_pub_qx: the Qx parameter of the VM owner's ECDH public key
+ * @dh_pub_qy: the Qy parameter of the VM owner's ECDH public key
+ * @nonce: nonce generated by the VM owner
+ */
+struct __attribute__ ((__packed__)) psp_data_launch_start {
+	struct psp_data_header hdr;
+	u32 handle;				/* In/Out */
+	u32 flags;				/* In */
+	u32 policy;				/* In */
+	u8  dh_pub_qx[32];			/* In */
+	u8  dh_pub_qy[32];			/* In */
+	u8  nonce[16];				/* In */
+};
+
+/**
+ * struct psp_data_launch_update - PSP LAUNCH_UPDATE command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to update
+ * @length: length of memory to be encrypted
+ * @address: physical address of memory region to encrypt
+ */
+struct __attribute__ ((__packed__)) psp_data_launch_update {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u64 address;				/* In */
+	u32 length;				/* In */
+};
+
+/**
+ * struct psp_data_launch_vcpus - PSP LAUNCH_FINISH VCPU state information
+ * @state_length: length of the VCPU state information to measure
+ * @state_mask_addr: mask of the bytes within the VCPU state information
+ *                   to use in the measurement
+ * @state_count: number of VCPUs to measure
+ * @state_addr: physical address of the VCPU state (VMCB)
+ */
+struct __attribute__ ((__packed__)) psp_data_launch_vcpus {
+	u32 state_length;			/* In */
+	u64 state_mask_addr;			/* In */
+	u32 state_count;			/* In */
+	u64 state_addr[];			/* In */
+};
+
+/**
+ * struct psp_data_launch_finish - PSP LAUNCH_FINISH command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to process
+ * @measurement: the measurement of the encrypted VM memory areas
+ * @vcpus: the VCPU state information to include in the measurement
+ */
+struct __attribute__ ((__packed__)) psp_data_launch_finish {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u8  measurement[32];			/* In/Out */
+	struct psp_data_launch_vcpus vcpus;	/* In */
+};
+
+/**
+ * struct psp_data_decommission - PSP DECOMMISSION command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to decommission
+ */
+struct __attribute__ ((__packed__)) psp_data_decommission {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+};
+
+/**
+ * struct psp_data_guest_status - PSP GUEST_STATUS command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to retrieve status
+ * @policy: policy information for the VM
+ * @asid: current ASID of the VM
+ * @state: current state of the VM
+ */
+struct __attribute__ ((__packed__)) psp_data_guest_status {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u32 policy;				/* Out */
+	u32 asid;				/* Out */
+	u8 state;				/* Out */
+};
+
+/**
+ * struct psp_data_dbg - PSP DBG_ENCRYPT/DBG_DECRYPT command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to perform debug operation
+ * @src_addr: source address of data to operate on
+ * @dst_addr: destination address of data to operate on
+ * @length: length of data to operate on
+ */
+struct __attribute__ ((__packed__)) psp_data_dbg {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u64 src_addr;				/* In */
+	u64 dst_addr;				/* In */
+	u32 length;				/* In */
+};
+
+/**
+ * struct psp_data_receive_start - PSP RECEIVE_START command parameters
+ *
+ * @hdr: command header
+ * @handle: handle of the VM receiving the guest
+ * @flags: flags for the receive process
+ * @policy: guest policy flags
+ * @policy_meas: HMAC of policy keyed with TIK
+ * @wrapped_tek: wrapped transport encryption key
+ * @wrapped_tik: wrapped transport integrity key
+ * @ten: transport encryption nonce
+ * @dh_pub_qx: the Qx parameter of the origin's ECDH public key
+ * @dh_pub_qy: the Qy parameter of the origin's ECDH public key
+ * @nonce: nonce generated by the origin
+ */
+struct __attribute__((__packed__)) psp_data_receive_start {
+	struct psp_data_header hdr;	/* In/Out */
+	u32 handle;			/* In/Out */
+	u32 flags;			/* In */
+	u32 policy;			/* In */
+	u8 policy_meas[32];		/* In */
+	u8 wrapped_tek[24];		/* In */
+	u8 reserved1[8];
+	u8 wrapped_tik[24];		/* In */
+	u8 reserved2[8];
+	u8 ten[16];			/* In */
+	u8 dh_pub_qx[32];		/* In */
+	u8 dh_pub_qy[32];		/* In */
+	u8 nonce[16];			/* In */
+};
+
+/**
+ * struct psp_data_receive_update - PSP RECEIVE_UPDATE command parameters
+ *
+ * @hdr: command header
+ * @handle: handle of the VM receiving the guest
+ * @iv: initialization vector for this blob of memory
+ * @address: physical address of memory region to encrypt
+ * @length: length of memory to be encrypted
+ */
+struct __attribute__((__packed__)) psp_data_receive_update {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u8 iv[16];				/* In */
+	u64 address;				/* In */
+	u32 length;				/* In */
+};
+
+/**
+ * struct psp_data_receive_finish - PSP RECEIVE_FINISH command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to process
+ * @measurement: the measurement of the transported guest
+ */
+struct __attribute__ ((__packed__)) psp_data_receive_finish {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u8  measurement[32];			/* In */
+};
+
+/**
+ * struct psp_data_send_start - PSP SEND_START command parameters
+ * @hdr: command header
+ * @nonce: nonce generated by firmware
+ * @policy: guest policy flags
+ * @policy_meas: HMAC of policy keyed with TIK
+ * @wrapped_tek: wrapped transport encryption key
+ * @wrapped_tik: wrapped transport integrity key
+ * @ten: transport encryption nonce
+ * @iv: the IV of transport encryption block
+ * @handle: handle of the VM to process
+ * @flags: flags for send command
+ * @api_major: API major number
+ * @api_minor: API minor number
+ * @serial: platform serial number
+ * @dh_pub_qx: the Qx parameter of the target DH public key
+ * @dh_pub_qy: the Qy parameter of the target DH public key
+ * @pek_sig_r: the r component of the PEK signature
+ * @pek_sig_s: the s component of the PEK signature
+ * @cek_sig_r: the r component of the CEK signature
+ * @cek_sig_s: the s component of the CEK signature
+ * @cek_pub_qx: the Qx parameter of the CEK public key
+ * @cek_pub_qy: the Qy parameter of the CEK public key
+ * @ask_sig_r: the r component of the ASK signature
+ * @ask_sig_s: the s component of the ASK signature
+ * @ncerts: number of certificates in certificate chain
+ * @cert_length: length of certificates
+ * @certs: certificate in chain
+ */
+struct __attribute__((__packed__)) psp_data_send_start {
+	struct psp_data_header hdr;			/* In/Out */
+	u8 nonce[16];					/* Out */
+	u32 policy;					/* Out */
+	u8 policy_meas[32];				/* Out */
+	u8 wrapped_tek[24];				/* Out */
+	u8 reserved1[8];
+	u8 wrapped_tik[24];				/* Out */
+	u8 reserved2[8];
+	u8 ten[16];					/* Out */
+	u8 iv[16];					/* Out */
+	u32 handle;					/* In */
+	u32 flags;					/* In */
+	u8 api_major;					/* In */
+	u8 api_minor;					/* In */
+	u8 reserved3[2];
+	u32 serial;					/* In */
+	u8 dh_pub_qx[32];				/* In */
+	u8 dh_pub_qy[32];				/* In */
+	u8 pek_sig_r[32];				/* In */
+	u8 pek_sig_s[32];				/* In */
+	u8 cek_sig_r[32];				/* In */
+	u8 cek_sig_s[32];				/* In */
+	u8 cek_pub_qx[32];				/* In */
+	u8 cek_pub_qy[32];				/* In */
+	u8 ask_sig_r[32];				/* In */
+	u8 ask_sig_s[32];				/* In */
+	u32 ncerts;					/* In */
+	u32 cert_length;				/* In */
+	u8 certs[];					/* In */
+};
+
+/**
+ * struct psp_data_send_update - PSP SEND_UPDATE command parameters
+ *
+ * @hdr: command header
+ * @handle: handle of the VM to process
+ * @length: length of memory region to encrypt
+ * @src_addr: physical address of memory region to encrypt from
+ * @dst_addr: physical address of memory region to encrypt to
+ */
+struct __attribute__((__packed__)) psp_data_send_update {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u64 src_addr;				/* In */
+	u64 dst_addr;				/* In */
+	u32 length;				/* In */
+};
+
+/**
+ * struct psp_data_send_finish - PSP SEND_FINISH command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to process
+ * @measurement: the measurement of the transported guest
+ */
+struct __attribute__ ((__packed__)) psp_data_send_finish {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u8  measurement[32];			/* Out */
+};
+
+#if defined(CONFIG_CRYPTO_DEV_PSP_DD) || \
+	defined(CONFIG_CRYPTO_DEV_PSP_DD_MODULE)
+
+/**
+ * psp_platform_init - perform PSP INIT command
+ *
+ * @init: psp_data_init structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_init(struct psp_data_init *init, int *psp_ret);
+
+/**
+ * psp_platform_shutdown - perform PSP SHUTDOWN command
+ *
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_shutdown(int *psp_ret);
+
+/**
+ * psp_platform_status - perform PSP PLATFORM_STATUS command
+ *
+ * @status: psp_data_status structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_status(struct psp_data_status *status, int *psp_ret);
+
+/**
+ * psp_guest_launch_start - perform PSP LAUNCH_START command
+ *
+ * @start: psp_data_launch_start structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_launch_start(struct psp_data_launch_start *start, int *psp_ret);
+
+/**
+ * psp_guest_launch_update - perform PSP LAUNCH_UPDATE command
+ *
+ * @update: psp_data_launch_update structure to be processed
+ * @timeout: command timeout
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_launch_update(struct psp_data_launch_update *update,
+			    unsigned int timeout, int *psp_ret);
+
+/**
+ * psp_guest_launch_finish - perform PSP LAUNCH_FINISH command
+ *
+ * @finish: psp_data_launch_finish structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_launch_finish(struct psp_data_launch_finish *finish, int *psp_ret);
+
+/**
+ * psp_guest_activate - perform PSP ACTIVATE command
+ *
+ * @activate: psp_data_activate structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_activate(struct psp_data_activate *activate, int *psp_ret);
+
+/**
+ * psp_guest_deactivate - perform PSP DEACTIVATE command
+ *
+ * @deactivate: psp_data_deactivate structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_deactivate(struct psp_data_deactivate *deactivate, int *psp_ret);
+
+/**
+ * psp_guest_df_flush - perform PSP DF_FLUSH command
+ *
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_df_flush(int *psp_ret);
+
+/**
+ * psp_guest_decommission - perform PSP DECOMMISSION command
+ *
+ * @decommission: psp_data_decommission structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_decommission(struct psp_data_decommission *decommission,
+			   int *psp_ret);
+
+/**
+ * psp_guest_status - perform PSP GUEST_STATUS command
+ *
+ * @status: psp_data_guest_status structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_status(struct psp_data_guest_status *status, int *psp_ret);
+
+/**
+ * psp_dbg_decrypt - perform PSP DBG_DECRYPT command
+ *
+ * @dbg: psp_data_dbg structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_dbg_decrypt(struct psp_data_dbg *dbg, int *psp_ret);
+
+/**
+ * psp_dbg_encrypt - perform PSP DBG_ENCRYPT command
+ *
+ * @dbg: psp_data_dbg structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_dbg_encrypt(struct psp_data_dbg *dbg, int *psp_ret);
+
+/**
+ * psp_guest_receive_start - perform PSP RECEIVE_START command
+ *
+ * @start: psp_data_receive_start structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_receive_start(struct psp_data_receive_start *start, int *psp_ret);
+
+/**
+ * psp_guest_receive_update - perform PSP RECEIVE_UPDATE command
+ *
+ * @update: psp_data_receive_update structure to be processed
+ * @timeout: command timeout
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_receive_update(struct psp_data_receive_update *update,
+			    unsigned int timeout, int *psp_ret);
+
+/**
+ * psp_guest_receive_finish - perform PSP RECEIVE_FINISH command
+ *
+ * @finish: psp_data_receive_finish structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_receive_finish(struct psp_data_receive_finish *finish,
+			     int *psp_ret);
+
+/**
+ * psp_guest_send_start - perform PSP SEND_START command
+ *
+ * @start: psp_data_send_start structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_send_start(struct psp_data_send_start *start, int *psp_ret);
+
+/**
+ * psp_guest_send_update - perform PSP SEND_UPDATE command
+ *
+ * @update: psp_data_send_update structure to be processed
+ * @timeout: command timeout
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_send_update(struct psp_data_send_update *update,
+			    unsigned int timeout, int *psp_ret);
+
+/**
+ * psp_guest_send_finish - perform PSP SEND_FINISH command
+ *
+ * @finish: psp_data_send_finish structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_send_finish(struct psp_data_send_finish *finish,
+			     int *psp_ret);
+
+/**
+ * psp_platform_pdh_gen - perform PSP PDH_GEN command
+ *
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_pdh_gen(int *psp_ret);
+
+/**
+ * psp_platform_pdh_cert_export - perform PSP PDH_CERT_EXPORT command
+ *
+ * @data: psp_data_platform_pdh_cert_export structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_pdh_cert_export(struct psp_data_pdh_cert_export *data,
+				int *psp_ret);
+
+/**
+ * psp_platform_pek_gen - perform PSP PEK_GEN command
+ *
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_pek_gen(int *psp_ret);
+
+/**
+ * psp_platform_pek_cert_import - perform PSP PEK_CERT_IMPORT command
+ *
+ * @data: psp_data_platform_pek_cert_import structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_pek_cert_import(struct psp_data_pek_cert_import *data,
+				int *psp_ret);
+
+/**
+ * psp_platform_pek_csr - perform PSP PEK_CSR command
+ *
+ * @data: psp_data_platform_pek_csr structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_pek_csr(struct psp_data_pek_csr *data, int *psp_ret);
+
+/**
+ * psp_platform_factory_reset - perform PSP FACTORY_RESET command
+ *
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_factory_reset(int *psp_ret);
+
+#else	/* CONFIG_CRYPTO_DEV_PSP_DD is not enabled */
+
+static inline int psp_platform_status(struct psp_data_status *status,
+				      int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_init(struct psp_data_init *init, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_shutdown(int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_launch_start(struct psp_data_launch_start *start,
+					 int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_launch_update(struct psp_data_launch_update *update,
+					  unsigned int timeout, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_launch_finish(struct psp_data_launch_finish *finish,
+					  int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_activate(struct psp_data_activate *activate,
+				     int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_deactivate(struct psp_data_deactivate *deactivate,
+				       int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_df_flush(int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_decommission(struct psp_data_decommission *decommission,
+					 int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_status(struct psp_data_guest_status *status,
+				   int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_dbg_decrypt(struct psp_data_dbg *dbg, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_dbg_encrypt(struct psp_data_dbg *dbg, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_receive_start(struct psp_data_receive_start *start,
+					 int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_receive_update(struct psp_data_receive_update *update,
+					  unsigned int timeout, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_receive_finish(struct psp_data_receive_finish *finish,
+					  int *psp_ret)
+{
+	return -ENODEV;
+}
+static inline int psp_guest_send_start(struct psp_data_send_start *start,
+					 int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_send_update(struct psp_data_send_update *update,
+					  unsigned int timeout, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_send_finish(struct psp_data_send_finish *finish,
+					  int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_pdh_gen(int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_pdh_cert_export(struct psp_data_pdh_cert_export *data,
+					       int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_pek_gen(int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_pek_cert_import(struct psp_data_pek_cert_import *data,
+					       int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_pek_csr(struct psp_data_pek_csr *data, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_factory_reset(int *psp_ret)
+{
+	return -ENODEV;
+}
+
+#endif	/* CONFIG_CRYPTO_DEV_PSP_DD */
+
+#endif	/* __CPP_PSP_H__ */
diff --git a/include/uapi/linux/Kbuild b/include/uapi/linux/Kbuild
index 185f8ea..af2511a 100644
--- a/include/uapi/linux/Kbuild
+++ b/include/uapi/linux/Kbuild
@@ -470,3 +470,4 @@ header-y += xilinx-v4l2-controls.h
 header-y += zorro.h
 header-y += zorro_ids.h
 header-y += userfaultfd.h
+header-y += ccp-psp.h
diff --git a/include/uapi/linux/ccp-psp.h b/include/uapi/linux/ccp-psp.h
new file mode 100644
index 0000000..e780b46
--- /dev/null
+++ b/include/uapi/linux/ccp-psp.h
@@ -0,0 +1,182 @@
+#ifndef _UAPI_LINUX_CCP_PSP_H
+#define _UAPI_LINUX_CCP_PSP_H
+
+/*
+ * Userspace interface to communicate with the CCP-PSP driver.
+ */
+
+#include <linux/types.h>
+#include <linux/ioctl.h>
+
+/**
+ * struct psp_data_header - Common PSP communication header
+ * @buffer_len: length of the buffer supplied to the PSP
+ */
+struct __attribute__ ((__packed__)) psp_data_header {
+	__u32 buffer_len;				/* In/Out */
+};
+
+/**
+ * struct psp_data_init - PSP INIT command parameters
+ * @hdr: command header
+ * @flags: processing flags
+ */
+struct __attribute__ ((__packed__)) psp_data_init {
+	struct psp_data_header hdr;
+	__u32 flags;				/* In */
+};
+
+/**
+ * struct psp_data_status - PSP PLATFORM_STATUS command parameters
+ * @hdr: command header
+ * @api_major: major API version
+ * @api_minor: minor API version
+ * @state: platform state
+ * @cert_status: bit fields describing certificate status
+ * @flags: platform flags
+ * @guest_count: number of active guests
+ */
+struct __attribute__ ((__packed__)) psp_data_status {
+	struct psp_data_header hdr;
+	__u8 api_major;				/* Out */
+	__u8 api_minor;				/* Out */
+	__u8 state;				/* Out */
+	__u8 cert_status;			/* Out */
+	__u32 flags;				/* Out */
+	__u32 guest_count;			/* Out */
+};
+
+/**
+ * struct psp_data_pek_csr - PSP PEK_CSR command parameters
+ * @hdr: command header
+ * @csr: certificate signing request formatted with PKCS
+ */
+struct __attribute__((__packed__)) psp_data_pek_csr {
+	struct psp_data_header hdr;			/* In/Out */
+	__u8 csr[];					/* Out */
+};
+
+/**
+ * struct psp_data_pek_cert_import - PSP PEK_CERT_IMPORT command parameters
+ * @hdr: command header
+ * @ncerts: number of certificates in the chain
+ * @cert_len: length of certificates
+ * @certs: certificate chain starting with the PEK and ending with the CA certificate
+ */
+struct __attribute__((__packed__)) psp_data_pek_cert_import {
+	struct psp_data_header hdr;			/* In/Out */
+	__u32 ncerts;					/* In */
+	__u32 cert_len;					/* In */
+	__u8 certs[];					/* In */
+};
+
+/**
+ * struct psp_data_pdh_cert_export - PSP PDH_CERT_EXPORT command parameters
+ * @hdr: command header
+ * @major: API major number
+ * @minor: API minor number
+ * @serial: platform serial number
+ * @pdh_pub_qx: the Qx parameter of the target PDH public key
+ * @pdh_pub_qy: the Qy parameter of the target PDH public key
+ * @pek_sig_r: the r component of the PEK signature
+ * @pek_sig_s: the s component of the PEK signature
+ * @cek_sig_r: the r component of the CEK signature
+ * @cek_sig_s: the s component of the CEK signature
+ * @cek_pub_qx: the Qx parameter of the CEK public key
+ * @cek_pub_qy: the Qy parameter of the CEK public key
+ * @ncerts: number of certificates in certificate chain
+ * @cert_len: length of certificates
+ * @certs: certificate chain starting with the PEK and ending with the CA certificate
+ */
+struct __attribute__((__packed__)) psp_data_pdh_cert_export {
+	struct psp_data_header hdr;			/* In/Out */
+	__u8 api_major;					/* Out */
+	__u8 api_minor;					/* Out */
+	__u8 reserved1[2];
+	__u32 serial;					/* Out */
+	__u8 pdh_pub_qx[32];				/* Out */
+	__u8 pdh_pub_qy[32];				/* Out */
+	__u8 pek_sig_r[32];				/* Out */
+	__u8 pek_sig_s[32];				/* Out */
+	__u8 cek_sig_r[32];				/* Out */
+	__u8 cek_sig_s[32];				/* Out */
+	__u8 cek_pub_qx[32];				/* Out */
+	__u8 cek_pub_qy[32];				/* Out */
+	__u32 ncerts;					/* Out */
+	__u32 cert_len;					/* Out */
+	__u8 certs[];					/* Out */
+};
+
+/*
+ * Platform and guest management commands
+ */
+enum psp_cmd {
+	PSP_CMD_INIT = 1,
+	PSP_CMD_LAUNCH_START,
+	PSP_CMD_LAUNCH_UPDATE,
+	PSP_CMD_LAUNCH_FINISH,
+	PSP_CMD_ACTIVATE,
+	PSP_CMD_DF_FLUSH,
+	PSP_CMD_SHUTDOWN,
+	PSP_CMD_FACTORY_RESET,
+	PSP_CMD_PLATFORM_STATUS,
+	PSP_CMD_PEK_GEN,
+	PSP_CMD_PEK_CSR,
+	PSP_CMD_PEK_CERT_IMPORT,
+	PSP_CMD_PDH_GEN,
+	PSP_CMD_PDH_CERT_EXPORT,
+	PSP_CMD_SEND_START,
+	PSP_CMD_SEND_UPDATE,
+	PSP_CMD_SEND_FINISH,
+	PSP_CMD_RECEIVE_START,
+	PSP_CMD_RECEIVE_UPDATE,
+	PSP_CMD_RECEIVE_FINISH,
+	PSP_CMD_GUEST_STATUS,
+	PSP_CMD_DEACTIVATE,
+	PSP_CMD_DECOMMISSION,
+	PSP_CMD_DBG_DECRYPT,
+	PSP_CMD_DBG_ENCRYPT,
+	PSP_CMD_MAX,
+};
+
+/*
+ * Status codes returned by the commands
+ */
+enum psp_ret_code {
+	PSP_RET_SUCCESS = 0,
+	PSP_RET_INVALID_PLATFORM_STATE,
+	PSP_RET_INVALID_GUEST_STATE,
+	PSP_RET_INVALID_CONFIG,
+	PSP_RET_CMDBUF_TOO_SMALL,
+	PSP_RET_ALREADY_OWNED,
+	PSP_RET_INVALID_CERTIFICATE,
+	PSP_RET_POLICY_FAILURE,
+	PSP_RET_INACTIVE,
+	PSP_RET_INVALID_ADDRESS,
+	PSP_RET_BAD_SIGNATURE,
+	PSP_RET_BAD_MEASUREMENT,
+	PSP_RET_ASID_OWNED,
+	PSP_RET_INVALID_ASID,
+	PSP_RET_WBINVD_REQUIRED,
+	PSP_RET_DFFLUSH_REQUIRED,
+	PSP_RET_INVALID_GUEST,
+};
+
+/**
+ * struct psp_issue_cmd - PSP ioctl parameters
+ * @cmd: PSP command to execute
+ * @opaque: pointer to the command structure
+ * @psp_ret: PSP return code on failure
+ */
+struct psp_issue_cmd {
+	__u32 cmd;					/* In */
+	__u64 opaque;					/* In */
+	__u32 psp_ret;					/* Out */
+};
+
+#define PSP_IOC_TYPE		'P'
+#define PSP_ISSUE_CMD	_IOWR(PSP_IOC_TYPE, 0x0, struct psp_issue_cmd)
+
+#endif /* _UAPI_LINUX_CCP_PSP_H */
+

--


* [RFC PATCH v1 18/28] crypto: add AMD Platform Security Processor driver
@ 2016-08-22 23:27   ` Brijesh Singh
  0 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:27 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail, linux-kernel, mcgrof, linux-crypto, pbonzini, akpm,
	davem

This driver communicates with the Secure Encrypted Virtualization (SEV)
firmware running within the AMD secure processor, providing a secure key
management interface for SEV guests.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 drivers/crypto/Kconfig       |   11 +
 drivers/crypto/Makefile      |    1 
 drivers/crypto/psp/Kconfig   |    8 
 drivers/crypto/psp/Makefile  |    3 
 drivers/crypto/psp/psp-dev.c |  220 +++++++++++
 drivers/crypto/psp/psp-dev.h |   95 +++++
 drivers/crypto/psp/psp-ops.c |  454 +++++++++++++++++++++++
 drivers/crypto/psp/psp-pci.c |  376 +++++++++++++++++++
 include/linux/ccp-psp.h      |  833 ++++++++++++++++++++++++++++++++++++++++++
 include/uapi/linux/Kbuild    |    1 
 include/uapi/linux/ccp-psp.h |  182 +++++++++
 11 files changed, 2184 insertions(+)
 create mode 100644 drivers/crypto/psp/Kconfig
 create mode 100644 drivers/crypto/psp/Makefile
 create mode 100644 drivers/crypto/psp/psp-dev.c
 create mode 100644 drivers/crypto/psp/psp-dev.h
 create mode 100644 drivers/crypto/psp/psp-ops.c
 create mode 100644 drivers/crypto/psp/psp-pci.c
 create mode 100644 include/linux/ccp-psp.h
 create mode 100644 include/uapi/linux/ccp-psp.h

diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index 1af94e2..3bdbc51 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -464,6 +464,17 @@ if CRYPTO_DEV_CCP
 	source "drivers/crypto/ccp/Kconfig"
 endif
 
+config CRYPTO_DEV_PSP
+	bool "Support for AMD Platform Security Processor"
+	depends on X86 && PCI
+	help
+	  The AMD Platform Security Processor provides hardware key-
+	  management services for VMGuard encrypted memory.
+
+if CRYPTO_DEV_PSP
+	source "drivers/crypto/psp/Kconfig"
+endif
+
 config CRYPTO_DEV_MXS_DCP
 	tristate "Support for Freescale MXS DCP"
 	depends on (ARCH_MXS || ARCH_MXC)
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 3c6432d..1ea1e08 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -3,6 +3,7 @@ obj-$(CONFIG_CRYPTO_DEV_ATMEL_SHA) += atmel-sha.o
 obj-$(CONFIG_CRYPTO_DEV_ATMEL_TDES) += atmel-tdes.o
 obj-$(CONFIG_CRYPTO_DEV_BFIN_CRC) += bfin_crc.o
 obj-$(CONFIG_CRYPTO_DEV_CCP) += ccp/
+obj-$(CONFIG_CRYPTO_DEV_PSP) += psp/
 obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM) += caam/
 obj-$(CONFIG_CRYPTO_DEV_GEODE) += geode-aes.o
 obj-$(CONFIG_CRYPTO_DEV_HIFN_795X) += hifn_795x.o
diff --git a/drivers/crypto/psp/Kconfig b/drivers/crypto/psp/Kconfig
new file mode 100644
index 0000000..acd9b87
--- /dev/null
+++ b/drivers/crypto/psp/Kconfig
@@ -0,0 +1,8 @@
+config CRYPTO_DEV_PSP_DD
+	tristate "PSP Key Management device driver"
+	depends on CRYPTO_DEV_PSP
+	default m
+	help
+	  Provides the interface to use the AMD PSP key management APIs
+	  for use with AMD Secure Encrypted Virtualization (SEV). If you
+	  choose 'M' here, this module will be called psp.
diff --git a/drivers/crypto/psp/Makefile b/drivers/crypto/psp/Makefile
new file mode 100644
index 0000000..1b7d00c
--- /dev/null
+++ b/drivers/crypto/psp/Makefile
@@ -0,0 +1,3 @@
+obj-$(CONFIG_CRYPTO_DEV_PSP_DD) += psp.o
+psp-objs := psp-dev.o psp-ops.o
+psp-$(CONFIG_PCI) += psp-pci.o
diff --git a/drivers/crypto/psp/psp-dev.c b/drivers/crypto/psp/psp-dev.c
new file mode 100644
index 0000000..65d5c7e
--- /dev/null
+++ b/drivers/crypto/psp/psp-dev.c
@@ -0,0 +1,220 @@
+/*
+ * AMD Platform Security Processor (PSP) driver
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/interrupt.h>
+#include <linux/dma-mapping.h>
+#include <linux/string.h>
+#include <linux/wait.h>
+
+#include "psp-dev.h"
+
+MODULE_AUTHOR("Advanced Micro Devices, Inc.");
+MODULE_LICENSE("GPL");
+MODULE_VERSION("0.1.0");
+MODULE_DESCRIPTION("AMD VMGuard key-management driver prototype");
+
+static struct psp_device *psp_master;
+
+static LIST_HEAD(psp_devs);
+static DEFINE_SPINLOCK(psp_devs_lock);
+
+static atomic_t psp_id;
+
+static void psp_add_device(struct psp_device *psp)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&psp_devs_lock, flags);
+
+	list_add_tail(&psp->entry, &psp_devs);
+	psp_master = psp->get_master(&psp_devs);
+
+	spin_unlock_irqrestore(&psp_devs_lock, flags);
+}
+
+static void psp_del_device(struct psp_device *psp)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&psp_devs_lock, flags);
+
+	list_del(&psp->entry);
+	if (psp == psp_master)
+		psp_master = NULL;
+
+	spin_unlock_irqrestore(&psp_devs_lock, flags);
+}
+
+static void psp_check_support(struct psp_device *psp)
+{
+	if (ioread32(psp->io_regs + PSP_CMDRESP))
+		psp->sev_enabled = 1;
+}
+
+/**
+ * psp_get_master_device - returns a pointer to the PSP master device structure
+ *
+ * Returns NULL if a PSP master device is not present, PSP device structure
+ * otherwise.
+ */
+struct psp_device *psp_get_master_device(void)
+{
+	return psp_master;
+}
+EXPORT_SYMBOL_GPL(psp_get_master_device);
+
+/**
+ * psp_get_device - returns a pointer to the PSP device structure
+ *
+ * Returns NULL if a PSP device is not present, PSP device structure otherwise.
+ */
+struct psp_device *psp_get_device(void)
+{
+	struct psp_device *psp = NULL;
+	unsigned long flags;
+
+	spin_lock_irqsave(&psp_devs_lock, flags);
+
+	if (list_empty(&psp_devs))
+		goto unlock;
+
+	psp = list_first_entry(&psp_devs, struct psp_device, entry);
+
+unlock:
+	spin_unlock_irqrestore(&psp_devs_lock, flags);
+
+	return psp;
+}
+EXPORT_SYMBOL_GPL(psp_get_device);
+
+/**
+ * psp_alloc_struct - allocate and initialize the psp_device struct
+ *
+ * @dev: device struct of the PSP
+ */
+struct psp_device *psp_alloc_struct(struct device *dev)
+{
+	struct psp_device *psp;
+
+	psp = devm_kzalloc(dev, sizeof(*psp), GFP_KERNEL);
+	if (!psp)
+		return NULL;
+	psp->dev = dev;
+
+	psp->id = atomic_inc_return(&psp_id);
+	snprintf(psp->name, sizeof(psp->name), "psp%u", psp->id);
+
+	init_waitqueue_head(&psp->int_queue);
+
+	return psp;
+}
+
+/**
+ * psp_init - initialize the PSP device
+ *
+ * @psp: psp_device struct
+ */
+int psp_init(struct psp_device *psp)
+{
+	int ret;
+
+	psp_check_support(psp);
+
+	/* Disable and clear interrupts until ready */
+	iowrite32(0, psp->io_regs + PSP_P2CMSG_INTEN);
+	iowrite32(0xffffffff, psp->io_regs + PSP_P2CMSG_INTSTS);
+
+	/* Request an irq */
+	ret = psp->get_irq(psp);
+	if (ret) {
+		dev_err(psp->dev, "unable to allocate IRQ\n");
+		return ret;
+	}
+
+	/* Make the device struct available */
+	psp_add_device(psp);
+
+	/* Enable interrupts */
+	iowrite32(1 << PSP_CMD_COMPLETE_REG, psp->io_regs + PSP_P2CMSG_INTEN);
+
+	ret = psp_ops_init(psp);
+	if (ret) {
+		dev_err(psp->dev, "psp_ops_init failed (%d)\n", ret);
+		iowrite32(0, psp->io_regs + PSP_P2CMSG_INTEN);
+		psp_del_device(psp);
+		psp->free_irq(psp);
+		return ret;
+	}
+
+	return 0;
+}
+
+/**
+ * psp_destroy - tear down the PSP device
+ *
+ * @psp: psp_device struct
+ */
+void psp_destroy(struct psp_device *psp)
+{
+	psp_ops_exit(psp);
+
+	/* Remove general access to the device struct */
+	psp_del_device(psp);
+
+	psp->free_irq(psp);
+}
+
+/**
+ * psp_irq_handler - handle interrupts generated by the PSP device
+ *
+ * @irq: the irq associated with the interrupt
+ * @data: the data value supplied when the irq was created
+ */
+irqreturn_t psp_irq_handler(int irq, void *data)
+{
+	struct device *dev = data;
+	struct psp_device *psp = dev_get_drvdata(dev);
+	unsigned int status;
+
+	status = ioread32(psp->io_regs + PSP_P2CMSG_INTSTS);
+	if (status & (1 << PSP_CMD_COMPLETE_REG)) {
+		unsigned int reg;
+
+		reg = ioread32(psp->io_regs + PSP_CMDRESP);
+		if (reg & PSP_CMDRESP_RESP) {
+			psp->int_rcvd = 1;
+			wake_up_interruptible(&psp->int_queue);
+		}
+	}
+
+	iowrite32(status, psp->io_regs + PSP_P2CMSG_INTSTS);
+
+	return IRQ_HANDLED;
+}
+
+static int __init psp_mod_init(void)
+{
+	return psp_pci_init();
+}
+module_init(psp_mod_init);
+
+static void __exit psp_mod_exit(void)
+{
+	psp_pci_exit();
+}
+module_exit(psp_mod_exit);
diff --git a/drivers/crypto/psp/psp-dev.h b/drivers/crypto/psp/psp-dev.h
new file mode 100644
index 0000000..bb75ca2
--- /dev/null
+++ b/drivers/crypto/psp/psp-dev.h
@@ -0,0 +1,95 @@
+#ifndef __PSP_DEV_H__
+#define __PSP_DEV_H__
+
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/spinlock.h>
+#include <linux/mutex.h>
+#include <linux/list.h>
+#include <linux/wait.h>
+#include <linux/dmapool.h>
+#include <linux/hw_random.h>
+#include <linux/interrupt.h>
+#include <linux/miscdevice.h>
+
+#define PSP_P2CMSG_INTEN		0x0110
+#define PSP_P2CMSG_INTSTS		0x0114
+
+#define PSP_C2PMSG_ATTR_0		0x0118
+#define PSP_C2PMSG_ATTR_1		0x011c
+#define PSP_C2PMSG_ATTR_2		0x0120
+#define PSP_C2PMSG_ATTR_3		0x0124
+#define PSP_P2CMSG_ATTR_0		0x0128
+
+#define PSP_C2PMSG(_num)		((_num) << 2)
+#define PSP_CMDRESP			PSP_C2PMSG(32)
+#define PSP_CMDBUFF_ADDR_LO		PSP_C2PMSG(56)
+#define PSP_CMDBUFF_ADDR_HI		PSP_C2PMSG(57)
+
+#define PSP_P2CMSG(_num)		((_num) << 2)
+#define PSP_CMD_COMPLETE_REG		1
+#define PSP_CMD_COMPLETE		PSP_P2CMSG(PSP_CMD_COMPLETE_REG)
+
+#define PSP_CMDRESP_CMD_SHIFT		16
+#define PSP_CMDRESP_IOC			BIT(0)
+#define PSP_CMDRESP_RESP		BIT(31)
+#define PSP_CMDRESP_ERR_MASK		0xffff
+
+#define PSP_DRIVER_NAME			"psp"
+
+struct psp_device {
+	struct list_head entry;
+
+	struct device *dev;
+
+	unsigned int id;
+	char name[32];
+
+	struct dentry *debugfs;
+	struct miscdevice misc;
+
+	unsigned int sev_enabled;
+
+	/*
+	 * Bus-specific device information
+	 */
+	void *dev_specific;
+	int (*get_irq)(struct psp_device *);
+	void (*free_irq)(struct psp_device *);
+	unsigned int irq;
+	struct psp_device *(*get_master)(struct list_head *list);
+
+	/*
+	 * I/O area used for device communication. Writing to the
+	 * mailbox registers generates an interrupt on the PSP.
+	 */
+	void __iomem *io_map;
+	void __iomem *io_regs;
+
+	/* Interrupt wait queue */
+	wait_queue_head_t int_queue;
+	unsigned int int_rcvd;
+};
+
+struct psp_device *psp_get_master_device(void);
+struct psp_device *psp_get_device(void);
+
+#ifdef CONFIG_PCI
+int psp_pci_init(void);
+void psp_pci_exit(void);
+#else
+static inline int psp_pci_init(void) { return 0; }
+static inline void psp_pci_exit(void) { }
+#endif
+
+struct psp_device *psp_alloc_struct(struct device *dev);
+int psp_init(struct psp_device *psp);
+void psp_destroy(struct psp_device *psp);
+
+int psp_ops_init(struct psp_device *psp);
+void psp_ops_exit(struct psp_device *psp);
+
+irqreturn_t psp_irq_handler(int irq, void *data);
+
+#endif /* __PSP_DEV_H__ */
diff --git a/drivers/crypto/psp/psp-ops.c b/drivers/crypto/psp/psp-ops.c
new file mode 100644
index 0000000..81e8dc8
--- /dev/null
+++ b/drivers/crypto/psp/psp-ops.c
@@ -0,0 +1,454 @@
+/*
+ * AMD Platform Security Processor (PSP) driver
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/module.h>
+#include <linux/fs.h>
+#include <linux/miscdevice.h>
+#include <linux/delay.h>
+#include <linux/jiffies.h>
+#include <linux/wait.h>
+#include <linux/mutex.h>
+#include <linux/ccp-psp.h>
+
+#include "psp-dev.h"
+
+static unsigned int psp_poll;
+module_param(psp_poll, uint, 0444);
+MODULE_PARM_DESC(psp_poll, "Poll for command completion - any non-zero value");
+
+#define PSP_DEFAULT_TIMEOUT	2
+
+static DEFINE_MUTEX(psp_cmd_mutex);
+
+static int psp_wait_cmd_poll(struct psp_device *psp, unsigned int timeout,
+			     unsigned int *reg)
+{
+	int wait = timeout * 10;	/* 100ms sleep => timeout * 10 */
+
+	while (--wait) {
+		msleep(100);
+
+		*reg = ioread32(psp->io_regs + PSP_CMDRESP);
+		if (*reg & PSP_CMDRESP_RESP)
+			break;
+	}
+
+	if (!wait) {
+		dev_err(psp->dev, "psp command timed out\n");
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}
+
+static int psp_wait_cmd_ioc(struct psp_device *psp, unsigned int timeout,
+			    unsigned int *reg)
+{
+	unsigned long jiffie_timeout = timeout;
+	long ret;
+
+	jiffie_timeout *= HZ;
+
+	ret = wait_event_interruptible_timeout(psp->int_queue, psp->int_rcvd,
+					       jiffie_timeout);
+	if (!ret) {
+		dev_err(psp->dev, "psp command timed out\n");
+		return -ETIMEDOUT;
+	}
+	if (ret < 0)
+		return ret;
+
+	psp->int_rcvd = 0;
+
+	*reg = ioread32(psp->io_regs + PSP_CMDRESP);
+
+	return 0;
+}
+
+static int psp_wait_cmd(struct psp_device *psp, unsigned int timeout,
+			unsigned int *reg)
+{
+	return (*reg & PSP_CMDRESP_IOC) ? psp_wait_cmd_ioc(psp, timeout, reg)
+					: psp_wait_cmd_poll(psp, timeout, reg);
+}
+
+static int psp_issue_cmd(enum psp_cmd cmd, void *data, unsigned int timeout,
+			 int *psp_ret)
+{
+	struct psp_device *psp = psp_get_master_device();
+	unsigned int phys_lsb, phys_msb;
+	unsigned int reg;
+	int ret;
+
+	if (psp_ret)
+		*psp_ret = 0;
+
+	if (!psp)
+		return -ENODEV;
+
+	if (!psp->sev_enabled)
+		return -ENOTSUPP;
+
+	/* Set the physical address for the PSP */
+	phys_lsb = data ? lower_32_bits(__psp_pa(data)) : 0;
+	phys_msb = data ? upper_32_bits(__psp_pa(data)) : 0;
+
+	/* Only one command at a time... */
+	mutex_lock(&psp_cmd_mutex);
+
+	iowrite32(phys_lsb, psp->io_regs + PSP_CMDBUFF_ADDR_LO);
+	iowrite32(phys_msb, psp->io_regs + PSP_CMDBUFF_ADDR_HI);
+	wmb();
+
+	reg = cmd;
+	reg <<= PSP_CMDRESP_CMD_SHIFT;
+	reg |= psp_poll ? 0 : PSP_CMDRESP_IOC;
+	iowrite32(reg, psp->io_regs + PSP_CMDRESP);
+
+	ret = psp_wait_cmd(psp, timeout, &reg);
+	if (ret)
+		goto unlock;
+
+	if (psp_ret)
+		*psp_ret = reg & PSP_CMDRESP_ERR_MASK;
+
+	if (reg & PSP_CMDRESP_ERR_MASK) {
+		dev_err(psp->dev, "psp command %u failed (%#010x)\n",
+			cmd, reg & PSP_CMDRESP_ERR_MASK);
+		ret = -EIO;
+	}
+
+unlock:
+	mutex_unlock(&psp_cmd_mutex);
+
+	return ret;
+}
+
+int psp_platform_init(struct psp_data_init *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_INIT, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_init);
+
+int psp_platform_shutdown(int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_SHUTDOWN, NULL, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_shutdown);
+
+int psp_platform_status(struct psp_data_status *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PLATFORM_STATUS, data,
+			     PSP_DEFAULT_TIMEOUT, psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_status);
+
+int psp_guest_launch_start(struct psp_data_launch_start *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_LAUNCH_START, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_launch_start);
+
+int psp_guest_launch_update(struct psp_data_launch_update *data,
+			    unsigned int timeout, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_LAUNCH_UPDATE, data, timeout, psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_launch_update);
+
+int psp_guest_launch_finish(struct psp_data_launch_finish *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_LAUNCH_FINISH, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_launch_finish);
+
+int psp_guest_activate(struct psp_data_activate *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_ACTIVATE, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_activate);
+
+int psp_guest_deactivate(struct psp_data_deactivate *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_DEACTIVATE, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_deactivate);
+
+int psp_guest_df_flush(int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_DF_FLUSH, NULL, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_df_flush);
+
+int psp_guest_decommission(struct psp_data_decommission *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_DECOMMISSION, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_decommission);
+
+int psp_guest_status(struct psp_data_guest_status *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_GUEST_STATUS, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_status);
+
+int psp_dbg_decrypt(struct psp_data_dbg *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_DBG_DECRYPT, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_dbg_decrypt);
+
+int psp_dbg_encrypt(struct psp_data_dbg *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_DBG_ENCRYPT, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_dbg_encrypt);
+
+int psp_guest_receive_start(struct psp_data_receive_start *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_RECEIVE_START, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_receive_start);
+
+int psp_guest_receive_update(struct psp_data_receive_update *data,
+			    unsigned int timeout, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_RECEIVE_UPDATE, data, timeout, psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_receive_update);
+
+int psp_guest_receive_finish(struct psp_data_receive_finish *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_RECEIVE_FINISH, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_receive_finish);
+
+int psp_guest_send_start(struct psp_data_send_start *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_SEND_START, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_send_start);
+
+int psp_guest_send_update(struct psp_data_send_update *data,
+			    unsigned int timeout, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_SEND_UPDATE, data, timeout, psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_send_update);
+
+int psp_guest_send_finish(struct psp_data_send_finish *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_SEND_FINISH, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_guest_send_finish);
+
+int psp_platform_pdh_gen(int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PDH_GEN, NULL, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_pdh_gen);
+
+int psp_platform_pdh_cert_export(struct psp_data_pdh_cert_export *data,
+				int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PDH_CERT_EXPORT, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_pdh_cert_export);
+
+int psp_platform_pek_gen(int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PEK_GEN, NULL, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_pek_gen);
+
+int psp_platform_pek_cert_import(struct psp_data_pek_cert_import *data,
+				 int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PEK_CERT_IMPORT, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_pek_cert_import);
+
+int psp_platform_pek_csr(struct psp_data_pek_csr *data, int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_PEK_CSR, data, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_pek_csr);
+
+int psp_platform_factory_reset(int *psp_ret)
+{
+	return psp_issue_cmd(PSP_CMD_FACTORY_RESET, NULL, PSP_DEFAULT_TIMEOUT,
+			     psp_ret);
+}
+EXPORT_SYMBOL_GPL(psp_platform_factory_reset);
+
+static int psp_copy_to_user(void __user *argp, void *data, size_t size)
+{
+	int ret = 0;
+
+	if (copy_to_user(argp, data, size))
+		ret = -EFAULT;
+	free_pages_exact(data, size);
+
+	return ret;
+}
+
+static void *psp_copy_from_user(void __user *argp, size_t *size)
+{
+	u32 buffer_len;
+	void *data;
+
+	if (copy_from_user(&buffer_len, argp, sizeof(buffer_len)))
+		return ERR_PTR(-EFAULT);
+
+	data = alloc_pages_exact(buffer_len, GFP_KERNEL | __GFP_ZERO);
+	if (!data)
+		return ERR_PTR(-ENOMEM);
+	*size = buffer_len;
+
+	if (copy_from_user(data, argp, buffer_len)) {
+		free_pages_exact(data, *size);
+		return ERR_PTR(-EFAULT);
+	}
+
+	return data;
+}
+
+static long psp_ioctl(struct file *file, unsigned int ioctl, unsigned long arg)
+{
+	int ret = -EFAULT;
+	void *data = NULL;
+	size_t buffer_len = 0;
+	void __user *argp = (void __user *)arg;
+	struct psp_issue_cmd input;
+
+	if (ioctl != PSP_ISSUE_CMD)
+		return -ENOTTY;
+
+	/* get input parameters */
+	if (copy_from_user(&input, argp, sizeof(struct psp_issue_cmd)))
+		return -EFAULT;
+
+	if (input.cmd > PSP_CMD_MAX)
+		return -EINVAL;
+
+	switch (input.cmd) {
+
+	case PSP_CMD_INIT: {
+		struct psp_data_init *init;
+
+		data = psp_copy_from_user((void __user *)input.opaque,
+					  &buffer_len);
+		if (IS_ERR(data)) {
+			ret = PTR_ERR(data);
+			data = NULL;
+			break;
+		}
+
+		init = data;
+		ret = psp_platform_init(init, &input.psp_ret);
+		break;
+	}
+	case PSP_CMD_SHUTDOWN: {
+		ret = psp_platform_shutdown(&input.psp_ret);
+		break;
+	}
+	case PSP_CMD_FACTORY_RESET: {
+		ret = psp_platform_factory_reset(&input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PLATFORM_STATUS: {
+		struct psp_data_status *status;
+
+		data = psp_copy_from_user((void __user *)input.opaque,
+					  &buffer_len);
+		if (IS_ERR(data)) {
+			ret = PTR_ERR(data);
+			data = NULL;
+			break;
+		}
+
+		status = data;
+		ret = psp_platform_status(status, &input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PEK_GEN: {
+		ret = psp_platform_pek_gen(&input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PEK_CSR: {
+		struct psp_data_pek_csr *pek_csr;
+
+		data = psp_copy_from_user((void __user *)input.opaque,
+					  &buffer_len);
+		if (IS_ERR(data)) {
+			ret = PTR_ERR(data);
+			data = NULL;
+			break;
+		}
+
+		pek_csr = data;
+		ret = psp_platform_pek_csr(pek_csr, &input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PEK_CERT_IMPORT: {
+		struct psp_data_pek_cert_import *import;
+
+		data = psp_copy_from_user((void __user *)input.opaque,
+					  &buffer_len);
+		if (IS_ERR(data)) {
+			ret = PTR_ERR(data);
+			data = NULL;
+			break;
+		}
+
+		import = data;
+		ret = psp_platform_pek_cert_import(import, &input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PDH_GEN: {
+		ret = psp_platform_pdh_gen(&input.psp_ret);
+		break;
+	}
+	case PSP_CMD_PDH_CERT_EXPORT: {
+		struct psp_data_pdh_cert_export *export;
+
+		data = psp_copy_from_user((void __user *)input.opaque,
+					  &buffer_len);
+		if (IS_ERR(data)) {
+			ret = PTR_ERR(data);
+			data = NULL;
+			break;
+		}
+
+		export = data;
+		ret = psp_platform_pdh_cert_export(export, &input.psp_ret);
+		break;
+	}
+	default:
+		ret = -EINVAL;
+	}
+
+	if (data && psp_copy_to_user((void __user *)input.opaque,
+				     data, buffer_len))
+		ret = -EFAULT;
+
+	if (copy_to_user(argp, &input, sizeof(struct psp_issue_cmd)))
+		ret = -EFAULT;
+
+	return ret;
+}
+
+static const struct file_operations fops = {
+	.owner = THIS_MODULE,
+	.unlocked_ioctl = psp_ioctl,
+};
+
+int psp_ops_init(struct psp_device *psp)
+{
+	struct miscdevice *misc = &psp->misc;
+
+	misc->minor = MISC_DYNAMIC_MINOR;
+	misc->name = psp->name;
+	misc->fops = &fops;
+
+	return misc_register(misc);
+}
+
+void psp_ops_exit(struct psp_device *psp)
+{
+	misc_deregister(&psp->misc);
+}
diff --git a/drivers/crypto/psp/psp-pci.c b/drivers/crypto/psp/psp-pci.c
new file mode 100644
index 0000000..2b4c379
--- /dev/null
+++ b/drivers/crypto/psp/psp-pci.c
@@ -0,0 +1,376 @@
+/*
+ * AMD Platform Security Processor (PSP) driver
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/pci_ids.h>
+#include <linux/dma-mapping.h>
+#include <linux/sched.h>
+#include <linux/interrupt.h>
+
+#include "psp-dev.h"
+
+#define IO_BAR				2
+#define IO_OFFSET			0x10500
+
+#define MSIX_VECTORS			2
+
+struct psp_msix {
+	u32 vector;
+	char name[16];
+};
+
+struct psp_pci {
+	struct pci_dev *pdev;
+	int msix_count;
+	struct psp_msix msix[MSIX_VECTORS];
+};
+
+static int psp_get_msix_irqs(struct psp_device *psp)
+{
+	struct psp_pci *psp_pci = psp->dev_specific;
+	struct device *dev = psp->dev;
+	struct pci_dev *pdev = to_pci_dev(dev);
+	struct msix_entry msix_entry[MSIX_VECTORS];
+	unsigned int name_len = sizeof(psp_pci->msix[0].name) - 1;
+	int v, ret;
+
+	for (v = 0; v < ARRAY_SIZE(msix_entry); v++)
+		msix_entry[v].entry = v;
+
+	ret = pci_enable_msix_range(pdev, msix_entry, 1, v);
+	if (ret < 0)
+		return ret;
+
+	psp_pci->msix_count = ret;
+	for (v = 0; v < psp_pci->msix_count; v++) {
+		/* Set the interrupt names and request the irqs */
+		snprintf(psp_pci->msix[v].name, name_len, "%s-%u",
+			 psp->name, v);
+		psp_pci->msix[v].vector = msix_entry[v].vector;
+		ret = request_irq(psp_pci->msix[v].vector, psp_irq_handler,
+				  0, psp_pci->msix[v].name, dev);
+		if (ret) {
+			dev_notice(dev, "unable to allocate MSI-X IRQ (%d)\n",
+				   ret);
+			goto e_irq;
+		}
+	}
+
+	return 0;
+
+e_irq:
+	while (v--)
+		free_irq(psp_pci->msix[v].vector, dev);
+	pci_disable_msix(pdev);
+	psp_pci->msix_count = 0;
+
+	return ret;
+}
+
+static int psp_get_msi_irq(struct psp_device *psp)
+{
+	struct device *dev = psp->dev;
+	struct pci_dev *pdev = to_pci_dev(dev);
+	int ret;
+
+	ret = pci_enable_msi(pdev);
+	if (ret)
+		return ret;
+
+	psp->irq = pdev->irq;
+	ret = request_irq(psp->irq, psp_irq_handler, 0, psp->name, dev);
+	if (ret) {
+		dev_notice(dev, "unable to allocate MSI IRQ (%d)\n", ret);
+		goto e_msi;
+	}
+
+	return 0;
+
+e_msi:
+	pci_disable_msi(pdev);
+
+	return ret;
+}
+
+static int psp_get_irqs(struct psp_device *psp)
+{
+	struct device *dev = psp->dev;
+	int ret;
+
+	ret = psp_get_msix_irqs(psp);
+	if (!ret)
+		return 0;
+
+	/* Couldn't get MSI-X vectors, try MSI */
+	dev_notice(dev, "could not enable MSI-X (%d), trying MSI\n", ret);
+	ret = psp_get_msi_irq(psp);
+	if (!ret)
+		return 0;
+
+	/* Couldn't get an MSI interrupt either */
+	dev_notice(dev, "could not enable MSI (%d)\n", ret);
+
+	return ret;
+}
+
+static void psp_free_irqs(struct psp_device *psp)
+{
+	struct psp_pci *psp_pci = psp->dev_specific;
+	struct device *dev = psp->dev;
+	struct pci_dev *pdev = to_pci_dev(dev);
+
+	if (psp_pci->msix_count) {
+		while (psp_pci->msix_count--)
+			free_irq(psp_pci->msix[psp_pci->msix_count].vector,
+				 dev);
+		pci_disable_msix(pdev);
+	} else {
+		free_irq(psp->irq, dev);
+		pci_disable_msi(pdev);
+	}
+}
+
+static bool psp_is_master(struct psp_device *cur, struct psp_device *new)
+{
+	struct psp_pci *psp_pci_cur, *psp_pci_new;
+	struct pci_dev *pdev_cur, *pdev_new;
+
+	psp_pci_cur = cur->dev_specific;
+	psp_pci_new = new->dev_specific;
+
+	pdev_cur = psp_pci_cur->pdev;
+	pdev_new = psp_pci_new->pdev;
+
+	if (pdev_new->bus->number != pdev_cur->bus->number)
+		return pdev_new->bus->number < pdev_cur->bus->number;
+
+	if (PCI_SLOT(pdev_new->devfn) != PCI_SLOT(pdev_cur->devfn))
+		return PCI_SLOT(pdev_new->devfn) < PCI_SLOT(pdev_cur->devfn);
+
+	return PCI_FUNC(pdev_new->devfn) < PCI_FUNC(pdev_cur->devfn);
+}
+
+static struct psp_device *psp_get_master(struct list_head *list)
+{
+	struct psp_device *psp, *tmp;
+
+	psp = NULL;
+	list_for_each_entry(tmp, list, entry) {
+		if (!psp || psp_is_master(psp, tmp))
+			psp = tmp;
+	}
+
+	return psp;
+}
+
+static int psp_find_mmio_area(struct psp_device *psp)
+{
+	struct device *dev = psp->dev;
+	struct pci_dev *pdev = to_pci_dev(dev);
+	unsigned long io_flags;
+
+	io_flags = pci_resource_flags(pdev, IO_BAR);
+	if (io_flags & IORESOURCE_MEM)
+		return IO_BAR;
+
+	return -EIO;
+}
+
+static int psp_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+	struct psp_device *psp;
+	struct psp_pci *psp_pci;
+	struct device *dev = &pdev->dev;
+	unsigned int bar;
+	int ret;
+
+	ret = -ENOMEM;
+	psp = psp_alloc_struct(dev);
+	if (!psp)
+		goto e_err;
+
+	psp_pci = devm_kzalloc(dev, sizeof(*psp_pci), GFP_KERNEL);
+	if (!psp_pci) {
+		ret = -ENOMEM;
+		goto e_err;
+	}
+	psp_pci->pdev = pdev;
+	psp->dev_specific = psp_pci;
+	psp->get_irq = psp_get_irqs;
+	psp->free_irq = psp_free_irqs;
+	psp->get_master = psp_get_master;
+
+	ret = pci_request_regions(pdev, PSP_DRIVER_NAME);
+	if (ret) {
+		dev_err(dev, "pci_request_regions failed (%d)\n", ret);
+		goto e_err;
+	}
+
+	ret = pci_enable_device(pdev);
+	if (ret) {
+		dev_err(dev, "pci_enable_device failed (%d)\n", ret);
+		goto e_regions;
+	}
+
+	pci_set_master(pdev);
+
+	ret = psp_find_mmio_area(psp);
+	if (ret < 0)
+		goto e_device;
+	bar = ret;
+
+	ret = -EIO;
+	psp->io_map = pci_iomap(pdev, bar, 0);
+	if (!psp->io_map) {
+		dev_err(dev, "pci_iomap failed\n");
+		goto e_device;
+	}
+	psp->io_regs = psp->io_map + IO_OFFSET;
+
+	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(48));
+	if (ret) {
+		ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
+		if (ret) {
+			dev_err(dev, "dma_set_mask_and_coherent failed (%d)\n",
+				ret);
+			goto e_iomap;
+		}
+	}
+
+	dev_set_drvdata(dev, psp);
+
+	ret = psp_init(psp);
+	if (ret)
+		goto e_iomap;
+
+	dev_notice(dev, "enabled\n");
+
+	return 0;
+
+e_iomap:
+	pci_iounmap(pdev, psp->io_map);
+
+e_device:
+	pci_disable_device(pdev);
+
+e_regions:
+	pci_release_regions(pdev);
+
+e_err:
+	dev_notice(dev, "initialization failed\n");
+	return ret;
+}
+
+static void psp_pci_remove(struct pci_dev *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct psp_device *psp = dev_get_drvdata(dev);
+
+	if (!psp)
+		return;
+
+	psp_destroy(psp);
+
+	pci_iounmap(pdev, psp->io_map);
+
+	pci_disable_device(pdev);
+
+	pci_release_regions(pdev);
+
+	dev_notice(dev, "disabled\n");
+}
+
+
+static const struct pci_device_id psp_pci_table[] = {
+	{ PCI_VDEVICE(AMD, 0x1456), },
+	/* Last entry must be zero */
+	{ 0, }
+};
+MODULE_DEVICE_TABLE(pci, psp_pci_table);
+
+static struct pci_driver psp_pci_driver = {
+	.name = PSP_DRIVER_NAME,
+	.id_table = psp_pci_table,
+	.probe = psp_pci_probe,
+	.remove = psp_pci_remove,
+};
+
+int psp_pci_init(void)
+{
+	return pci_register_driver(&psp_pci_driver);
+}
+
+void psp_pci_exit(void)
+{
+	pci_unregister_driver(&psp_pci_driver);
+}
diff --git a/include/linux/ccp-psp.h b/include/linux/ccp-psp.h
new file mode 100644
index 0000000..b5e791c
--- /dev/null
+++ b/include/linux/ccp-psp.h
@@ -0,0 +1,833 @@
+/*
+ * AMD Secure Processor (PSP) driver
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __CCP_PSP_H__
+#define __CCP_PSP_H__
+
+#include <uapi/linux/ccp-psp.h>
+
+#ifdef CONFIG_X86
+#include <asm/mem_encrypt.h>
+
+#define __psp_pa(x)	__sme_pa(x)
+#else
+#define __psp_pa(x)	__pa(x)
+#endif
+
+/**
+ * struct psp_data_activate - PSP ACTIVATE command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to activate
+ * @asid: asid assigned to the VM
+ */
+struct __attribute__ ((__packed__)) psp_data_activate {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u32 asid;				/* In */
+};
+
+/**
+ * struct psp_data_deactivate - PSP DEACTIVATE command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to deactivate
+ */
+struct __attribute__ ((__packed__)) psp_data_deactivate {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+};
+
+/**
+ * struct psp_data_launch_start - PSP LAUNCH_START command parameters
+ * @hdr: command header
+ * @handle: handle assigned to the VM
+ * @flags: configuration flags for the VM
+ * @policy: policy information for the VM
+ * @dh_pub_qx: the Qx parameter of the VM owner's ECDH public key
+ * @dh_pub_qy: the Qy parameter of the VM owner's ECDH public key
+ * @nonce: nonce generated by the VM owner
+ */
+struct __attribute__ ((__packed__)) psp_data_launch_start {
+	struct psp_data_header hdr;
+	u32 handle;				/* In/Out */
+	u32 flags;				/* In */
+	u32 policy;				/* In */
+	u8  dh_pub_qx[32];			/* In */
+	u8  dh_pub_qy[32];			/* In */
+	u8  nonce[16];				/* In */
+};
+
+/**
+ * struct psp_data_launch_update - PSP LAUNCH_UPDATE command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to update
+ * @length: length of memory to be encrypted
+ * @address: physical address of memory region to encrypt
+ */
+struct __attribute__ ((__packed__)) psp_data_launch_update {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u64 address;				/* In */
+	u32 length;				/* In */
+};
+
+/**
+ * struct psp_data_launch_vcpus - PSP LAUNCH_FINISH VCPU state information
+ * @state_length: length of the VCPU state information to measure
+ * @state_mask_addr: mask of the bytes within the VCPU state information
+ *                   to use in the measurement
+ * @state_count: number of VCPUs to measure
+ * @state_addr: physical address of the VCPU state (VMCB)
+ */
+struct __attribute__ ((__packed__)) psp_data_launch_vcpus {
+	u32 state_length;			/* In */
+	u64 state_mask_addr;			/* In */
+	u32 state_count;			/* In */
+	u64 state_addr[];			/* In */
+};
+
+/**
+ * struct psp_data_launch_finish - PSP LAUNCH_FINISH command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to process
+ * @measurement: the measurement of the encrypted VM memory areas
+ * @vcpus: the VCPU state information to include in the measurement
+ */
+struct __attribute__ ((__packed__)) psp_data_launch_finish {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u8  measurement[32];			/* In/Out */
+	struct psp_data_launch_vcpus vcpus;	/* In */
+};
+
+/**
+ * struct psp_data_decommission - PSP DECOMMISSION command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to decommission
+ */
+struct __attribute__ ((__packed__)) psp_data_decommission {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+};
+
+/**
+ * struct psp_data_guest_status - PSP GUEST_STATUS command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to retrieve status
+ * @policy: policy information for the VM
+ * @asid: current ASID of the VM
+ * @state: current state of the VM
+ */
+struct __attribute__ ((__packed__)) psp_data_guest_status {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u32 policy;				/* Out */
+	u32 asid;				/* Out */
+	u8 state;				/* Out */
+};
+
+/**
+ * struct psp_data_dbg - PSP DBG_ENCRYPT/DBG_DECRYPT command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to perform debug operation
+ * @src_addr: source address of data to operate on
+ * @dst_addr: destination address of data to operate on
+ * @length: length of data to operate on
+ */
+struct __attribute__ ((__packed__)) psp_data_dbg {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u64 src_addr;				/* In */
+	u64 dst_addr;				/* In */
+	u32 length;				/* In */
+};
+
+/**
+ * struct psp_data_receive_start - PSP RECEIVE_START command parameters
+ *
+ * @hdr: command header
+ * @handle: handle of the VM receiving the guest
+ * @flags: flags for the receive process
+ * @policy: guest policy flags
+ * @policy_meas: HMAC of policy keyed with TIK
+ * @wrapped_tek: wrapped transport encryption key
+ * @wrapped_tik: wrapped transport integrity key
+ * @ten: transport encryption nonce
+ * @dh_pub_qx: qx parameter of the origin's ECDH public key
+ * @dh_pub_qy: qy parameter of the origin's ECDH public key
+ * @nonce: nonce generated by the origin
+ */
+struct __attribute__((__packed__)) psp_data_receive_start {
+	struct psp_data_header hdr;	/* In/Out */
+	u32 handle;			/* In/Out */
+	u32 flags;			/* In */
+	u32 policy;			/* In */
+	u8 policy_meas[32];		/* In */
+	u8 wrapped_tek[24];		/* In */
+	u8 reserved1[8];
+	u8 wrapped_tik[24];		/* In */
+	u8 reserved2[8];
+	u8 ten[16];			/* In */
+	u8 dh_pub_qx[32];		/* In */
+	u8 dh_pub_qy[32];		/* In */
+	u8 nonce[16];			/* In */
+};
+
+/**
+ * struct psp_data_receive_update - PSP RECEIVE_UPDATE command parameters
+ *
+ * @hdr: command header
+ * @handle: handle of the VM receiving the guest
+ * @iv: initialization vector for this blob of memory
+ * @address: physical address of the memory region to operate on
+ * @length: length of the memory region
+ */
+struct __attribute__((__packed__)) psp_data_receive_update {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u8 iv[16];				/* In */
+	u64 address;				/* In */
+	u32 length;				/* In */
+};
+
+/**
+ * struct psp_data_receive_finish - PSP RECEIVE_FINISH command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to process
+ * @measurement: the measurement of the transported guest
+ */
+struct __attribute__ ((__packed__)) psp_data_receive_finish {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u8  measurement[32];			/* In */
+};
+
+/**
+ * struct psp_data_send_start - PSP SEND_START command parameters
+ * @hdr: command header
+ * @nonce: nonce generated by firmware
+ * @policy: guest policy flags
+ * @policy_meas: HMAC of policy keyed with TIK
+ * @wrapped_tek: wrapped transport encryption key
+ * @wrapped_tik: wrapped transport integrity key
+ * @ten: transport encryption nonce
+ * @iv: the IV of transport encryption block
+ * @handle: handle of the VM to process
+ * @flags: flags for send command
+ * @api_major: API major number
+ * @api_minor: API minor number
+ * @serial: platform serial number
+ * @dh_pub_qx: the Qx parameter of the target DH public key
+ * @dh_pub_qy: the Qy parameter of the target DH public key
+ * @pek_sig_r: the r component of the PEK signature
+ * @pek_sig_s: the s component of the PEK signature
+ * @cek_sig_r: the r component of the CEK signature
+ * @cek_sig_s: the s component of the CEK signature
+ * @cek_pub_qx: the Qx parameter of the CEK public key
+ * @cek_pub_qy: the Qy parameter of the CEK public key
+ * @ask_sig_r: the r component of the ASK signature
+ * @ask_sig_s: the s component of the ASK signature
+ * @ncerts: number of certificates in certificate chain
+ * @cert_length: length of certificates
+ * @certs: certificate chain
+ */
+struct __attribute__((__packed__)) psp_data_send_start {
+	struct psp_data_header hdr;			/* In/Out */
+	u8 nonce[16];					/* Out */
+	u32 policy;					/* Out */
+	u8 policy_meas[32];				/* Out */
+	u8 wrapped_tek[24];				/* Out */
+	u8 reserved1[8];
+	u8 wrapped_tik[24];				/* Out */
+	u8 reserved2[8];
+	u8 ten[16];					/* Out */
+	u8 iv[16];					/* Out */
+	u32 handle;					/* In */
+	u32 flags;					/* In */
+	u8 api_major;					/* In */
+	u8 api_minor;					/* In */
+	u8 reserved3[2];
+	u32 serial;					/* In */
+	u8 dh_pub_qx[32];				/* In */
+	u8 dh_pub_qy[32];				/* In */
+	u8 pek_sig_r[32];				/* In */
+	u8 pek_sig_s[32];				/* In */
+	u8 cek_sig_r[32];				/* In */
+	u8 cek_sig_s[32];				/* In */
+	u8 cek_pub_qx[32];				/* In */
+	u8 cek_pub_qy[32];				/* In */
+	u8 ask_sig_r[32];				/* In */
+	u8 ask_sig_s[32];				/* In */
+	u32 ncerts;					/* In */
+	u32 cert_length;				/* In */
+	u8 certs[];					/* In */
+};
+
+/**
+ * struct psp_data_send_update - PSP SEND_UPDATE command parameters
+ *
+ * @hdr: command header
+ * @handle: handle of the guest being transported
+ * @length: length of the memory region to encrypt
+ * @src_addr: physical address of memory region to encrypt from
+ * @dst_addr: physical address of memory region to encrypt to
+ */
+struct __attribute__((__packed__)) psp_data_send_update {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u64 src_addr;				/* In */
+	u64 dst_addr;				/* In */
+	u32 length;				/* In */
+};
+
+/**
+ * struct psp_data_send_finish - PSP SEND_FINISH command parameters
+ * @hdr: command header
+ * @handle: handle of the VM to process
+ * @measurement: the measurement of the transported guest
+ */
+struct __attribute__ ((__packed__)) psp_data_send_finish {
+	struct psp_data_header hdr;		/* In/Out */
+	u32 handle;				/* In */
+	u8  measurement[32];			/* Out */
+};
+
+#if defined(CONFIG_CRYPTO_DEV_PSP_DD) || \
+	defined(CONFIG_CRYPTO_DEV_PSP_DD_MODULE)
+
+/**
+ * psp_platform_init - perform PSP INIT command
+ *
+ * @init: psp_data_init structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_init(struct psp_data_init *init, int *psp_ret);
+
+/**
+ * psp_platform_shutdown - perform PSP SHUTDOWN command
+ *
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_shutdown(int *psp_ret);
+
+/**
+ * psp_platform_status - perform PSP PLATFORM_STATUS command
+ *
+ * @init: psp_data_status structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_status(struct psp_data_status *status, int *psp_ret);
+
+/**
+ * psp_guest_launch_start - perform PSP LAUNCH_START command
+ *
+ * @start: psp_data_launch_start structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_launch_start(struct psp_data_launch_start *start, int *psp_ret);
+
+/**
+ * psp_guest_launch_update - perform PSP LAUNCH_UPDATE command
+ *
+ * @update: psp_data_launch_update structure to be processed
+ * @timeout: command timeout
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_launch_update(struct psp_data_launch_update *update,
+			    unsigned int timeout, int *psp_ret);
+
+/**
+ * psp_guest_launch_finish - perform PSP LAUNCH_FINISH command
+ *
+ * @finish: psp_data_launch_finish structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_launch_finish(struct psp_data_launch_finish *finish, int *psp_ret);
+
+/**
+ * psp_guest_activate - perform PSP ACTIVATE command
+ *
+ * @activate: psp_data_activate structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_activate(struct psp_data_activate *activate, int *psp_ret);
+
+/**
+ * psp_guest_deactivate - perform PSP DEACTIVATE command
+ *
+ * @deactivate: psp_data_deactivate structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_deactivate(struct psp_data_deactivate *deactivate, int *psp_ret);
+
+/**
+ * psp_guest_df_flush - perform PSP DF_FLUSH command
+ *
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_df_flush(int *psp_ret);
+
+/**
+ * psp_guest_decommission - perform PSP DECOMMISSION command
+ *
+ * @decommission: psp_data_decommission structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_decommission(struct psp_data_decommission *decommission,
+			   int *psp_ret);
+
+/**
+ * psp_guest_status - perform PSP GUEST_STATUS command
+ *
+ * @status: psp_data_guest_status structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_status(struct psp_data_guest_status *status, int *psp_ret);
+
+/**
+ * psp_dbg_decrypt - perform PSP DBG_DECRYPT command
+ *
+ * @dbg: psp_data_dbg structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_dbg_decrypt(struct psp_data_dbg *dbg, int *psp_ret);
+
+/**
+ * psp_dbg_encrypt - perform PSP DBG_ENCRYPT command
+ *
+ * @dbg: psp_data_dbg structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_dbg_encrypt(struct psp_data_dbg *dbg, int *psp_ret);
+
+/**
+ * psp_guest_receive_start - perform PSP RECEIVE_START command
+ *
+ * @start: psp_data_receive_start structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_receive_start(struct psp_data_receive_start *start, int *psp_ret);
+
+/**
+ * psp_guest_receive_update - perform PSP RECEIVE_UPDATE command
+ *
+ * @update: psp_data_receive_update structure to be processed
+ * @timeout: command timeout
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_receive_update(struct psp_data_receive_update *update,
+			    unsigned int timeout, int *psp_ret);
+
+/**
+ * psp_guest_receive_finish - perform PSP RECEIVE_FINISH command
+ *
+ * @finish: psp_data_receive_finish structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_receive_finish(struct psp_data_receive_finish *finish,
+			     int *psp_ret);
+
+/**
+ * psp_guest_send_start - perform PSP SEND_START command
+ *
+ * @start: psp_data_send_start structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_send_start(struct psp_data_send_start *start, int *psp_ret);
+
+/**
+ * psp_guest_send_update - perform PSP SEND_UPDATE command
+ *
+ * @update: psp_data_send_update structure to be processed
+ * @timeout: command timeout
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_send_update(struct psp_data_send_update *update,
+			    unsigned int timeout, int *psp_ret);
+
+/**
+ * psp_guest_send_finish - perform PSP SEND_FINISH command
+ *
+ * @finish: psp_data_send_finish structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_guest_send_finish(struct psp_data_send_finish *finish,
+			     int *psp_ret);
+
+/**
+ * psp_platform_pdh_gen - perform PSP PDH_GEN command
+ *
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_pdh_gen(int *psp_ret);
+
+/**
+ * psp_platform_pdh_cert_export - perform PSP PDH_CERT_EXPORT command
+ *
+ * @data: psp_data_platform_pdh_cert_export structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_pdh_cert_export(struct psp_data_pdh_cert_export *data,
+				int *psp_ret);
+
+/**
+ * psp_platform_pek_gen - perform PSP PEK_GEN command
+ *
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_pek_gen(int *psp_ret);
+
+/**
+ * psp_platform_pek_cert_import - perform PSP PEK_CERT_IMPORT command
+ *
+ * @data: psp_data_platform_pek_cert_import structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_pek_cert_import(struct psp_data_pek_cert_import *data,
+				int *psp_ret);
+
+/**
+ * psp_platform_pek_csr - perform PSP PEK_CSR command
+ *
+ * @data: psp_data_platform_pek_csr structure to be processed
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_pek_csr(struct psp_data_pek_csr *data, int *psp_ret);
+
+/**
+ * psp_platform_factory_reset - perform PSP FACTORY_RESET command
+ *
+ * @psp_ret: PSP command return code
+ *
+ * Returns:
+ * 0 if the PSP successfully processed the command
+ * -%ENODEV    if the PSP device is not available
+ * -%ENOTSUPP  if the PSP does not support SEV
+ * -%ETIMEDOUT if the PSP command timed out
+ * -%EIO       if the PSP returned a non-zero return code
+ */
+int psp_platform_factory_reset(int *psp_ret);
+
+#else	/* CONFIG_CRYPTO_DEV_PSP_DD is not enabled */
+
+static inline int psp_platform_status(struct psp_data_status *status,
+				      int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_init(struct psp_data_init *init, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_shutdown(int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_launch_start(struct psp_data_launch_start *start,
+					 int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_launch_update(struct psp_data_launch_update *update,
+					  unsigned int timeout, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_launch_finish(struct psp_data_launch_finish *finish,
+					  int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_activate(struct psp_data_activate *activate,
+				     int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_deactivate(struct psp_data_deactivate *deactivate,
+				       int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_df_flush(int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_decommission(struct psp_data_decommission *decommission,
+					 int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_status(struct psp_data_guest_status *status,
+				   int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_dbg_decrypt(struct psp_data_dbg *dbg, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_dbg_encrypt(struct psp_data_dbg *dbg, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_receive_start(struct psp_data_receive_start *start,
+					 int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_receive_update(struct psp_data_receive_update *update,
+					  unsigned int timeout, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_receive_finish(struct psp_data_receive_finish *finish,
+					  int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_send_start(struct psp_data_send_start *start,
+					 int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_send_update(struct psp_data_send_update *update,
+					  unsigned int timeout, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_guest_send_finish(struct psp_data_send_finish *finish,
+					  int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_pdh_gen(int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_pdh_cert_export(struct psp_data_pdh_cert_export *data,
+				int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_pek_gen(int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_pek_cert_import(struct psp_data_pek_cert_import *data,
+				int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_pek_csr(struct psp_data_pek_csr *data, int *psp_ret)
+{
+	return -ENODEV;
+}
+
+static inline int psp_platform_factory_reset(int *psp_ret)
+{
+	return -ENODEV;
+}
+
+#endif	/* CONFIG_CRYPTO_DEV_PSP_DD */
+
+#endif	/* __CPP_PSP_H__ */
diff --git a/include/uapi/linux/Kbuild b/include/uapi/linux/Kbuild
index 185f8ea..af2511a 100644
--- a/include/uapi/linux/Kbuild
+++ b/include/uapi/linux/Kbuild
@@ -470,3 +470,4 @@ header-y += xilinx-v4l2-controls.h
 header-y += zorro.h
 header-y += zorro_ids.h
 header-y += userfaultfd.h
+header-y += ccp-psp.h
diff --git a/include/uapi/linux/ccp-psp.h b/include/uapi/linux/ccp-psp.h
new file mode 100644
index 0000000..e780b46
--- /dev/null
+++ b/include/uapi/linux/ccp-psp.h
@@ -0,0 +1,182 @@
+#ifndef _UAPI_LINUX_CCP_PSP_
+#define _UAPI_LINUX_CCP_PSP_
+
+/*
+ * Userspace interface to communicate with the CCP-PSP driver.
+ */
+
+#include <linux/types.h>
+#include <linux/ioctl.h>
+
+/**
+ * struct psp_data_header - Common PSP communication header
+ * @buffer_len: length of the buffer supplied to the PSP
+ */
+struct __attribute__ ((__packed__)) psp_data_header {
+	__u32 buffer_len;				/* In/Out */
+};
+
+/**
+ * struct psp_data_init - PSP INIT command parameters
+ * @hdr: command header
+ * @flags: processing flags
+ */
+struct __attribute__ ((__packed__)) psp_data_init {
+	struct psp_data_header hdr;
+	__u32 flags;				/* In */
+};
+
+/**
+ * struct psp_data_status - PSP PLATFORM_STATUS command parameters
+ * @hdr: command header
+ * @api_major: major API version
+ * @api_minor: minor API version
+ * @state: platform state
+ * @cert_status: bit fields describing certificate status
+ * @flags: platform flags
+ * @guest_count: number of active guests
+ */
+struct __attribute__ ((__packed__)) psp_data_status {
+	struct psp_data_header hdr;
+	__u8 api_major;				/* Out */
+	__u8 api_minor;				/* Out */
+	__u8 state;				/* Out */
+	__u8 cert_status;			/* Out */
+	__u32 flags;				/* Out */
+	__u32 guest_count;			/* Out */
+};
+
+/**
+ * struct psp_data_pek_csr - PSP PEK_CSR command parameters
+ * @hdr: command header
+ * @csr: certificate signing request in PKCS format
+ */
+struct __attribute__((__packed__)) psp_data_pek_csr {
+	struct psp_data_header hdr;			/* In/Out */
+	__u8 csr[];					/* Out */
+};
+
+/**
+ * struct psp_data_pek_cert_import - PSP PEK_CERT_IMPORT command parameters
+ * @hdr: command header
+ * @ncerts: number of certificates in the chain
+ * @cert_len: length of certificates
+ * @certs: certificate chain starting with the PEK and ending with the CA certificate
+ */
+struct __attribute__((__packed__)) psp_data_pek_cert_import {
+	struct psp_data_header hdr;			/* In/Out */
+	__u32 ncerts;					/* In */
+	__u32 cert_len;					/* In */
+	__u8 certs[];					/* In */
+};
+
+/**
+ * struct psp_data_pdh_cert_export - PSP PDH_CERT_EXPORT command parameters
+ * @hdr: command header
+ * @api_major: API major number
+ * @api_minor: API minor number
+ * @serial: platform serial number
+ * @pdh_pub_qx: the Qx parameter of the target PDH public key
+ * @pdh_pub_qy: the Qy parameter of the target PDH public key
+ * @pek_sig_r: the r component of the PEK signature
+ * @pek_sig_s: the s component of the PEK signature
+ * @cek_sig_r: the r component of the CEK signature
+ * @cek_sig_s: the s component of the CEK signature
+ * @cek_pub_qx: the Qx parameter of the CEK public key
+ * @cek_pub_qy: the Qy parameter of the CEK public key
+ * @ncerts: number of certificates in certificate chain
+ * @cert_len: length of certificates
+ * @certs: certificate chain starting with the PEK and ending with the CA certificate
+ */
+struct __attribute__((__packed__)) psp_data_pdh_cert_export {
+	struct psp_data_header hdr;			/* In/Out */
+	__u8 api_major;					/* Out */
+	__u8 api_minor;					/* Out */
+	__u8 reserved1[2];
+	__u32 serial;					/* Out */
+	__u8 pdh_pub_qx[32];				/* Out */
+	__u8 pdh_pub_qy[32];				/* Out */
+	__u8 pek_sig_r[32];				/* Out */
+	__u8 pek_sig_s[32];				/* Out */
+	__u8 cek_sig_r[32];				/* Out */
+	__u8 cek_sig_s[32];				/* Out */
+	__u8 cek_pub_qx[32];				/* Out */
+	__u8 cek_pub_qy[32];				/* Out */
+	__u32 ncerts;					/* Out */
+	__u32 cert_len;					/* Out */
+	__u8 certs[];					/* Out */
+};
+
+/*
+ * platform and management commands
+ */
+enum psp_cmd {
+	PSP_CMD_INIT = 1,
+	PSP_CMD_LAUNCH_START,
+	PSP_CMD_LAUNCH_UPDATE,
+	PSP_CMD_LAUNCH_FINISH,
+	PSP_CMD_ACTIVATE,
+	PSP_CMD_DF_FLUSH,
+	PSP_CMD_SHUTDOWN,
+	PSP_CMD_FACTORY_RESET,
+	PSP_CMD_PLATFORM_STATUS,
+	PSP_CMD_PEK_GEN,
+	PSP_CMD_PEK_CSR,
+	PSP_CMD_PEK_CERT_IMPORT,
+	PSP_CMD_PDH_GEN,
+	PSP_CMD_PDH_CERT_EXPORT,
+	PSP_CMD_SEND_START,
+	PSP_CMD_SEND_UPDATE,
+	PSP_CMD_SEND_FINISH,
+	PSP_CMD_RECEIVE_START,
+	PSP_CMD_RECEIVE_UPDATE,
+	PSP_CMD_RECEIVE_FINISH,
+	PSP_CMD_GUEST_STATUS,
+	PSP_CMD_DEACTIVATE,
+	PSP_CMD_DECOMMISSION,
+	PSP_CMD_DBG_DECRYPT,
+	PSP_CMD_DBG_ENCRYPT,
+	PSP_CMD_MAX,
+};
+
+/*
+ * status codes returned by the commands
+ */
+enum psp_ret_code {
+	PSP_RET_SUCCESS = 0,
+	PSP_RET_INVALID_PLATFORM_STATE,
+	PSP_RET_INVALID_GUEST_STATE,
+	PSP_RET_INVALID_CONFIG,
+	PSP_RET_CMDBUF_TOO_SMALL,
+	PSP_RET_ALREADY_OWNED,
+	PSP_RET_INVALID_CERTIFICATE,
+	PSP_RET_POLICY_FAILURE,
+	PSP_RET_INACTIVE,
+	PSP_RET_INVALID_ADDRESS,
+	PSP_RET_BAD_SIGNATURE,
+	PSP_RET_BAD_MEASUREMENT,
+	PSP_RET_ASID_OWNED,
+	PSP_RET_INVALID_ASID,
+	PSP_RET_WBINVD_REQUIRED,
+	PSP_RET_DFFLUSH_REQUIRED,
+	PSP_RET_INVALID_GUEST,
+};
+
+/**
+ * struct psp_issue_cmd - PSP ioctl parameters
+ * @cmd: PSP commands to execute
+ * @opaque: pointer to the command structure
+ * @psp_ret: PSP return code on failure
+ */
+struct psp_issue_cmd {
+	__u32 cmd;					/* In */
+	__u64 opaque;					/* In */
+	__u32 psp_ret;					/* Out */
+};
+
+#define PSP_IOC_TYPE		'P'
+#define PSP_ISSUE_CMD	_IOWR(PSP_IOC_TYPE, 0x0, struct psp_issue_cmd)
+
+#endif /* _UAPI_LINUX_CCP_PSP_ */
+

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 19/28] KVM: SVM: prepare to reserve asid for SEV guest
  2016-08-22 23:23 ` Brijesh Singh
                   ` (38 preceding siblings ...)
  (?)
@ 2016-08-22 23:27 ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:27 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab

In the current implementation, ASID allocation starts from 1. This patch
adds a min_asid field to struct svm_cpu_data so that allocation can start
from something other than 1.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/kvm/svm.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 211be94..f010b23 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -470,6 +470,7 @@ struct svm_cpu_data {
 	u64 asid_generation;
 	u32 max_asid;
 	u32 next_asid;
+	u32 min_asid;
 	struct kvm_ldttss_desc *tss_desc;
 
 	struct page *save_area;
@@ -726,6 +727,7 @@ static int svm_hardware_enable(void)
 	sd->asid_generation = 1;
 	sd->max_asid = cpuid_ebx(SVM_CPUID_FUNC) - 1;
 	sd->next_asid = sd->max_asid + 1;
+	sd->min_asid = 1;
 
 	native_store_gdt(&gdt_descr);
 	gdt = (struct desc_struct *)gdt_descr.address;
@@ -1887,7 +1889,7 @@ static void new_asid(struct vcpu_svm *svm, struct svm_cpu_data *sd)
 {
 	if (sd->next_asid > sd->max_asid) {
 		++sd->asid_generation;
-		sd->next_asid = 1;
+		sd->next_asid = sd->min_asid;
 		svm->vmcb->control.tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
 	}
 

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 19/28] KVM: SVM: prepare to reserve asid for SEV guest
  2016-08-22 23:23 ` Brijesh Singh
  (?)
  (?)
@ 2016-08-22 23:27   ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:27 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail, linux-kernel, mcgrof, linux-crypto, pbonzini, akpm,
	davem

In current implementation, asid allocation starts from 1, this patch
adds a min_asid variable in svm_vcpu structure to allow starting asid
from something other than 1.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/kvm/svm.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 211be94..f010b23 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -470,6 +470,7 @@ struct svm_cpu_data {
 	u64 asid_generation;
 	u32 max_asid;
 	u32 next_asid;
+	u32 min_asid;
 	struct kvm_ldttss_desc *tss_desc;
 
 	struct page *save_area;
@@ -726,6 +727,7 @@ static int svm_hardware_enable(void)
 	sd->asid_generation = 1;
 	sd->max_asid = cpuid_ebx(SVM_CPUID_FUNC) - 1;
 	sd->next_asid = sd->max_asid + 1;
+	sd->min_asid = 1;
 
 	native_store_gdt(&gdt_descr);
 	gdt = (struct desc_struct *)gdt_descr.address;
@@ -1887,7 +1889,7 @@ static void new_asid(struct vcpu_svm *svm, struct svm_cpu_data *sd)
 {
 	if (sd->next_asid > sd->max_asid) {
 		++sd->asid_generation;
-		sd->next_asid = 1;
+		sd->next_asid = sd->min_asid;
 		svm->vmcb->control.tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
 	}
 

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 19/28] KVM: SVM: prepare to reserve asid for SEV guest
@ 2016-08-22 23:27   ` Brijesh Singh
  0 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:27 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounin

In current implementation, asid allocation starts from 1, this patch
adds a min_asid variable in svm_vcpu structure to allow starting asid
from something other than 1.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/kvm/svm.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 211be94..f010b23 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -470,6 +470,7 @@ struct svm_cpu_data {
 	u64 asid_generation;
 	u32 max_asid;
 	u32 next_asid;
+	u32 min_asid;
 	struct kvm_ldttss_desc *tss_desc;
 
 	struct page *save_area;
@@ -726,6 +727,7 @@ static int svm_hardware_enable(void)
 	sd->asid_generation = 1;
 	sd->max_asid = cpuid_ebx(SVM_CPUID_FUNC) - 1;
 	sd->next_asid = sd->max_asid + 1;
+	sd->min_asid = 1;
 
 	native_store_gdt(&gdt_descr);
 	gdt = (struct desc_struct *)gdt_descr.address;
@@ -1887,7 +1889,7 @@ static void new_asid(struct vcpu_svm *svm, struct svm_cpu_data *sd)
 {
 	if (sd->next_asid > sd->max_asid) {
 		++sd->asid_generation;
-		sd->next_asid = 1;
+		sd->next_asid = sd->min_asid;
 		svm->vmcb->control.tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
 	}
 

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href="mailto:dont@kvack.org">email@kvack.org</a>

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 20/28] KVM: SVM: prepare for SEV guest management API support
  2016-08-22 23:23 ` Brijesh Singh
                   ` (40 preceding siblings ...)
  (?)
@ 2016-08-22 23:28 ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:28 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail, linux-kernel, mcgrof, linux-crypto, pbonzini, akpm,
	davem

This patch adds the initial support required for the Secure Encrypted
Virtualization (SEV) guest management APIs.

ASID management:
 - Reserve an ASID range for SEV guests. The SEV ASID range is
   obtained through CPUID Fn8000_001F[ECX]. A non-SEV guest can use
   any ASID outside the SEV ASID range.
 - An SEV guest must use an ASID value within the range obtained
   through CPUID.
 - All vcpus of an SEV guest must share the same ASID. A TLB flush
   is required when a different vcpu with the same ASID is to be run
   on the same host CPU.

- Save the SEV private structure in kvm_arch.

- If SEV is available, initialize the PSP firmware during hardware probe.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/include/asm/kvm_host.h |    9 ++
 arch/x86/kvm/svm.c              |  213 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 221 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index b1dd673..9b885fc 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -715,6 +715,12 @@ struct kvm_hv {
 	u64 hv_crash_ctl;
 };
 
+struct kvm_sev_info {
+	unsigned int asid;	/* asid for this guest */
+	unsigned int handle;	/* firmware handle */
+	unsigned int ref_count; /* number of active vcpus */
+};
+
 struct kvm_arch {
 	unsigned int n_used_mmu_pages;
 	unsigned int n_requested_mmu_pages;
@@ -799,6 +805,9 @@ struct kvm_arch {
 
 	bool x2apic_format;
 	bool x2apic_broadcast_quirk_disabled;
+
+	/* struct for SEV guest */
+	struct kvm_sev_info sev_info;
 };
 
 struct kvm_vm_stat {
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index f010b23..dcee635 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -34,6 +34,7 @@
 #include <linux/sched.h>
 #include <linux/trace_events.h>
 #include <linux/slab.h>
+#include <linux/ccp-psp.h>
 
 #include <asm/apic.h>
 #include <asm/perf_event.h>
@@ -186,6 +187,9 @@ struct vcpu_svm {
 	struct page *avic_backing_page;
 	u64 *avic_physical_id_cache;
 	bool avic_is_running;
+
+	/* host CPU on which this vCPU last ran */
+	int last_cpuid;
 };
 
 #define AVIC_LOGICAL_ID_ENTRY_GUEST_PHYSICAL_ID_MASK	(0xFF)
@@ -243,6 +247,25 @@ static int avic;
 module_param(avic, int, S_IRUGO);
 #endif
 
+/* Secure Encrypted Virtualization */
+static bool sev_enabled;
+static unsigned long max_sev_asid;
+static unsigned long *sev_asid_bitmap;
+
+#define kvm_sev_guest()		(kvm->arch.sev_info.handle)
+#define kvm_sev_handle()	(kvm->arch.sev_info.handle)
+#define kvm_sev_ref()		(kvm->arch.sev_info.ref_count++)
+#define kvm_sev_unref()		(kvm->arch.sev_info.ref_count--)
+#define svm_sev_handle()	(svm->vcpu.kvm->arch.sev_info.handle)
+#define svm_sev_asid()		(svm->vcpu.kvm->arch.sev_info.asid)
+#define svm_sev_ref()		(svm->vcpu.kvm->arch.sev_info.ref_count++)
+#define svm_sev_unref()		(svm->vcpu.kvm->arch.sev_info.ref_count--)
+#define svm_sev_guest()		(svm->vcpu.kvm->arch.sev_info.handle)
+#define svm_sev_ref_count()	(svm->vcpu.kvm->arch.sev_info.ref_count)
+
+static int sev_asid_new(void);
+static void sev_asid_free(int asid);
+
 static void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
 static void svm_flush_tlb(struct kvm_vcpu *vcpu);
 static void svm_complete_interrupts(struct vcpu_svm *svm);
@@ -474,6 +497,8 @@ struct svm_cpu_data {
 	struct kvm_ldttss_desc *tss_desc;
 
 	struct page *save_area;
+
+	void **sev_vmcb;  /* index = sev_asid, value = vmcb pointer */
 };
 
 static DEFINE_PER_CPU(struct svm_cpu_data *, svm_data);
@@ -727,7 +752,10 @@ static int svm_hardware_enable(void)
 	sd->asid_generation = 1;
 	sd->max_asid = cpuid_ebx(SVM_CPUID_FUNC) - 1;
 	sd->next_asid = sd->max_asid + 1;
-	sd->min_asid = 1;
+	sd->min_asid = max_sev_asid + 1;
+
+	if (sev_enabled)
+		memset(sd->sev_vmcb, 0, (max_sev_asid + 1) * sizeof(void *));
 
 	native_store_gdt(&gdt_descr);
 	gdt = (struct desc_struct *)gdt_descr.address;
@@ -788,6 +816,7 @@ static void svm_cpu_uninit(int cpu)
 
 	per_cpu(svm_data, raw_smp_processor_id()) = NULL;
 	__free_page(sd->save_area);
+	kfree(sd->sev_vmcb);
 	kfree(sd);
 }
 
@@ -805,6 +834,14 @@ static int svm_cpu_init(int cpu)
 	if (!sd->save_area)
 		goto err_1;
 
+	if (sev_enabled) {
+		sd->sev_vmcb = kmalloc((max_sev_asid + 1) * sizeof(void *),
+					GFP_KERNEL);
+		r = -ENOMEM;
+		if (!sd->sev_vmcb)
+			goto err_1;
+	}
+
 	per_cpu(svm_data, cpu) = sd;
 
 	return 0;
@@ -931,6 +968,74 @@ static void svm_disable_lbrv(struct vcpu_svm *svm)
 	set_msr_interception(msrpm, MSR_IA32_LASTINTTOIP, 0, 0);
 }
 
+static __init void sev_hardware_setup(void)
+{
+	int ret, psp_ret;
+	struct psp_data_init *init;
+	struct psp_data_status *status;
+
+	/*
+	 * Check SEV feature support: Fn8000_001F[EAX]
+	 * 	Bit 1: Secure Encrypted Virtualization (SEV) supported
+	 */
+	if (!(cpuid_eax(0x8000001F) & 0x2))
+		return;
+
+	/*
+	 * Get the maximum number of encrypted guests supported: Fn8000_001F[ECX]
+	 * 	Bits 31:0: Number of supported guests
+	 */
+	max_sev_asid = cpuid_ecx(0x8000001F);
+	if (!max_sev_asid)
+		return;
+
+	init = kzalloc(sizeof(*init), GFP_KERNEL);
+	if (!init)
+		return;
+
+	status = kzalloc(sizeof(*status), GFP_KERNEL);
+	if (!status)
+		goto err_1;
+
+	/* Initialize PSP firmware */
+	init->hdr.buffer_len = sizeof(*init);
+	init->flags = 0;
+	ret = psp_platform_init(init, &psp_ret);
+	if (ret) {
+		printk(KERN_ERR "SEV: PSP_INIT ret=%d (%#x)\n", ret, psp_ret);
+		goto err_2;
+	}
+
+	/* Initialize SEV ASID bitmap */
+	sev_asid_bitmap = kmalloc(max(sizeof(unsigned long),
+				      max_sev_asid/8 + 1), GFP_KERNEL);
+	if (!sev_asid_bitmap) {
+		psp_platform_shutdown(&psp_ret);
+		goto err_2;
+	}
+	bitmap_zero(sev_asid_bitmap, max_sev_asid);
+	set_bit(0, sev_asid_bitmap);  /* mark ASID 0 as used */
+
+	sev_enabled = 1;
+	printk(KERN_INFO "kvm: SEV enabled\n");
+
+	/* Query the platform status and print API version */
+	status->hdr.buffer_len = sizeof(*status);
+	ret = psp_platform_status(status, &psp_ret);
+	if (ret) {
+		printk(KERN_ERR "SEV: PLATFORM_STATUS psp_ret=%#x\n", psp_ret);
+		goto err_2;
+	}
+
+	printk(KERN_INFO "SEV API: %d.%d\n",
+			status->api_major, status->api_minor);
+err_2:
+	kfree(status);
+err_1:
+	kfree(init);
+	return;
+}
+
 static __init int svm_hardware_setup(void)
 {
 	int cpu;
@@ -966,6 +1071,8 @@ static __init int svm_hardware_setup(void)
 		kvm_enable_efer_bits(EFER_SVME | EFER_LMSLE);
 	}
 
+	sev_hardware_setup();
+
 	for_each_possible_cpu(cpu) {
 		r = svm_cpu_init(cpu);
 		if (r)
@@ -1003,10 +1110,25 @@ err:
 	return r;
 }
 
+static __exit void sev_hardware_unsetup(void)
+{
+	int ret, psp_ret;
+
+	ret = psp_platform_shutdown(&psp_ret);
+	if (ret)
+		printk(KERN_ERR "SEV: failed to shutdown PSP rc=%d (%#010x)\n",
+		       ret, psp_ret);
+
+	kfree(sev_asid_bitmap);
+}
+
 static __exit void svm_hardware_unsetup(void)
 {
 	int cpu;
 
+	if (sev_enabled)
+		sev_hardware_unsetup();
+
 	for_each_possible_cpu(cpu)
 		svm_cpu_uninit(cpu);
 
@@ -1088,6 +1210,11 @@ static void avic_init_vmcb(struct vcpu_svm *svm)
 	svm->vcpu.arch.apicv_active = true;
 }
 
+static void sev_init_vmcb(struct vcpu_svm *svm)
+{
+	svm->vmcb->control.nested_ctl |= SVM_NESTED_CTL_SEV_ENABLE;
+}
+
 static void init_vmcb(struct vcpu_svm *svm)
 {
 	struct vmcb_control_area *control = &svm->vmcb->control;
@@ -1202,6 +1329,10 @@ static void init_vmcb(struct vcpu_svm *svm)
 	if (avic)
 		avic_init_vmcb(svm);
 
+	if (svm_sev_guest())
+		sev_init_vmcb(svm);
+
+
 	mark_all_dirty(svm->vmcb);
 
 	enable_gif(svm);
@@ -1413,6 +1544,14 @@ static void svm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 		avic_update_vapic_bar(svm, APIC_DEFAULT_PHYS_BASE);
 }
 
+static void sev_init_vcpu(struct vcpu_svm *svm)
+{
+	if (!svm_sev_guest())
+		return;
+
+	svm_sev_ref();
+}
+
 static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
 {
 	struct vcpu_svm *svm;
@@ -1475,6 +1614,7 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
 	init_vmcb(svm);
 
 	svm_init_osvw(&svm->vcpu);
+	sev_init_vcpu(svm);
 
 	return &svm->vcpu;
 
@@ -1494,6 +1634,23 @@ out:
 	return ERR_PTR(err);
 }
 
+static void sev_uninit_vcpu(struct vcpu_svm *svm)
+{
+	int cpu;
+	int asid = svm_sev_asid();
+	struct svm_cpu_data *sd;
+
+	if (!svm_sev_guest())
+		return;
+
+	svm_sev_unref();
+
+	for_each_possible_cpu(cpu) {
+		sd = per_cpu(svm_data, cpu);
+		sd->sev_vmcb[asid] = NULL;
+	}
+}
+
 static void svm_free_vcpu(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -1502,6 +1659,7 @@ static void svm_free_vcpu(struct kvm_vcpu *vcpu)
 	__free_pages(virt_to_page(svm->msrpm), MSRPM_ALLOC_ORDER);
 	__free_page(virt_to_page(svm->nested.hsave));
 	__free_pages(virt_to_page(svm->nested.msrpm), MSRPM_ALLOC_ORDER);
+	sev_uninit_vcpu(svm);
 	kvm_vcpu_uninit(vcpu);
 	kmem_cache_free(kvm_vcpu_cache, svm);
 }
@@ -1945,6 +2103,11 @@ static int pf_interception(struct vcpu_svm *svm)
 	default:
 		error_code = svm->vmcb->control.exit_info_1;
 
+		/* In SEV mode, the guest physical address will have C-bit
+		 * set. C-bit must be cleared before handling the fault.
+		 */
+		if (svm_sev_guest())
+			fault_address &= ~sme_me_mask;
 		trace_kvm_page_fault(fault_address, error_code);
 		if (!npt_enabled && kvm_event_needs_reinjection(&svm->vcpu))
 			kvm_mmu_unprotect_page_virt(&svm->vcpu, fault_address);
@@ -4131,12 +4294,40 @@ static void reload_tss(struct kvm_vcpu *vcpu)
 	load_TR_desc();
 }
 
+static void pre_sev_run(struct vcpu_svm *svm)
+{
+	int asid = svm_sev_asid();
+	int cpu = raw_smp_processor_id();
+	struct svm_cpu_data *sd = per_cpu(svm_data, cpu);
+
+	/* Assign the asid allocated for this SEV guest */
+	svm->vmcb->control.asid = svm_sev_asid();
+
+	/* Flush the guest TLB:
+	 * - when a different VMCB for the same ASID is to be run on
+	 *   the same host CPU, or
+	 * - when this VMCB was executed on a different host CPU in
+	 *   previous VMRUNs.
+	 */
+	if (sd->sev_vmcb[asid] != (void *)svm->vmcb ||
+	    svm->last_cpuid != cpu)
+		svm->vmcb->control.tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
+
+	svm->last_cpuid = cpu;
+	sd->sev_vmcb[asid] = (void *)svm->vmcb;
+
+	mark_dirty(svm->vmcb, VMCB_ASID);
+}
+
 static void pre_svm_run(struct vcpu_svm *svm)
 {
 	int cpu = raw_smp_processor_id();
 
 	struct svm_cpu_data *sd = per_cpu(svm_data, cpu);
 
+	if (svm_sev_guest())
+		return pre_sev_run(svm);
+
 	/* FIXME: handle wraparound of asid_generation */
 	if (svm->asid_generation != sd->asid_generation)
 		new_asid(svm, sd);
@@ -4985,6 +5176,26 @@ static inline void avic_post_state_restore(struct kvm_vcpu *vcpu)
 	avic_handle_ldr_update(vcpu);
 }
 
+static int sev_asid_new(void)
+{
+	int pos;
+
+	if (!sev_enabled)
+		return -ENOTTY;
+
+	pos = find_first_zero_bit(sev_asid_bitmap, max_sev_asid);
+	if (pos >= max_sev_asid)
+		return -EBUSY;
+
+	set_bit(pos, sev_asid_bitmap);
+	return pos;
+}
+
+static void sev_asid_free(int asid)
+{
+	clear_bit(asid, sev_asid_bitmap);
+}
+
 static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 	.cpu_has_kvm_support = has_svm,
 	.disabled_by_bios = is_disabled,

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 20/28] KVM: SVM: prepare for SEV guest management API support
  2016-08-22 23:23 ` Brijesh Singh
  (?)
  (?)
@ 2016-08-22 23:28   ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:28 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail, linux-kernel, mcgrof, linux-crypto, pbonzini, akpm,
	davem

The patch adds initial support required for Secure Encrypted
Virtualization (SEV) guest management API's.

ASID management:
 - Reserve asid range for SEV guest, SEV asid range is obtained
   through CPUID Fn8000_001f[ECX]. A non-SEV guest can use any
   asid outside the SEV asid range.
 - SEV guest must have asid value within asid range obtained
   through CPUID.
 - SEV guest must have the same asid for all vcpu's. A TLB flush
   is required if different vcpu for the same ASID is to be run
   on the same host CPU.

- save SEV private structure in kvm_arch.

- If SEV is available then initialize PSP firmware during hardware probe

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/include/asm/kvm_host.h |    9 ++
 arch/x86/kvm/svm.c              |  213 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 221 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index b1dd673..9b885fc 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -715,6 +715,12 @@ struct kvm_hv {
 	u64 hv_crash_ctl;
 };
 
+struct kvm_sev_info {
+	unsigned int asid;	/* asid for this guest */
+	unsigned int handle;	/* firmware handle */
+	unsigned int ref_count; /* number of active vcpus */
+};
+
 struct kvm_arch {
 	unsigned int n_used_mmu_pages;
 	unsigned int n_requested_mmu_pages;
@@ -799,6 +805,9 @@ struct kvm_arch {
 
 	bool x2apic_format;
 	bool x2apic_broadcast_quirk_disabled;
+
+	/* struct for SEV guest */
+	struct kvm_sev_info sev_info;
 };
 
 struct kvm_vm_stat {
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index f010b23..dcee635 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -34,6 +34,7 @@
 #include <linux/sched.h>
 #include <linux/trace_events.h>
 #include <linux/slab.h>
+#include <linux/ccp-psp.h>
 
 #include <asm/apic.h>
 #include <asm/perf_event.h>
@@ -186,6 +187,9 @@ struct vcpu_svm {
 	struct page *avic_backing_page;
 	u64 *avic_physical_id_cache;
 	bool avic_is_running;
+
+	/* which host cpu was used for running this vcpu */
+	bool last_cpuid;
 };
 
 #define AVIC_LOGICAL_ID_ENTRY_GUEST_PHYSICAL_ID_MASK	(0xFF)
@@ -243,6 +247,25 @@ static int avic;
 module_param(avic, int, S_IRUGO);
 #endif
 
+/* Secure Encrypted Virtualization */
+static bool sev_enabled;
+static unsigned long max_sev_asid;
+static unsigned long *sev_asid_bitmap;
+
+#define kvm_sev_guest()		(kvm->arch.sev_info.handle)
+#define kvm_sev_handle()	(kvm->arch.sev_info.handle)
+#define kvm_sev_ref()		(kvm->arch.sev_info.ref_count++)
+#define kvm_sev_unref()		(kvm->arch.sev_info.ref_count--)
+#define svm_sev_handle()	(svm->vcpu.kvm->arch.sev_info.handle)
+#define svm_sev_asid()		(svm->vcpu.kvm->arch.sev_info.asid)
+#define svm_sev_ref()		(svm->vcpu.kvm->arch.sev_info.ref_count++)
+#define svm_sev_unref()		(svm->vcpu.kvm->arch.sev_info.ref_count--)
+#define svm_sev_guest()		(svm->vcpu.kvm->arch.sev_info.handle)
+#define svm_sev_ref_count()	(svm->vcpu.kvm->arch.sev_info.ref_count)
+
+static int sev_asid_new(void);
+static void sev_asid_free(int asid);
+
 static void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
 static void svm_flush_tlb(struct kvm_vcpu *vcpu);
 static void svm_complete_interrupts(struct vcpu_svm *svm);
@@ -474,6 +497,8 @@ struct svm_cpu_data {
 	struct kvm_ldttss_desc *tss_desc;
 
 	struct page *save_area;
+
+	void **sev_vmcb;  /* index = sev_asid, value = vmcb pointer */
 };
 
 static DEFINE_PER_CPU(struct svm_cpu_data *, svm_data);
@@ -727,7 +752,10 @@ static int svm_hardware_enable(void)
 	sd->asid_generation = 1;
 	sd->max_asid = cpuid_ebx(SVM_CPUID_FUNC) - 1;
 	sd->next_asid = sd->max_asid + 1;
-	sd->min_asid = 1;
+	sd->min_asid = max_sev_asid + 1;
+
+	if (sev_enabled)
+		memset(sd->sev_vmcb, 0, (max_sev_asid + 1) * sizeof(void *));
 
 	native_store_gdt(&gdt_descr);
 	gdt = (struct desc_struct *)gdt_descr.address;
@@ -788,6 +816,7 @@ static void svm_cpu_uninit(int cpu)
 
 	per_cpu(svm_data, raw_smp_processor_id()) = NULL;
 	__free_page(sd->save_area);
+	kfree(sd->sev_vmcb);
 	kfree(sd);
 }
 
@@ -805,6 +834,14 @@ static int svm_cpu_init(int cpu)
 	if (!sd->save_area)
 		goto err_1;
 
+	if (sev_enabled) {
+		sd->sev_vmcb = kmalloc((max_sev_asid + 1) * sizeof(void *),
+					GFP_KERNEL);
+		r = -ENOMEM;
+		if (!sd->sev_vmcb)
+			goto err_1;
+	}
+
 	per_cpu(svm_data, cpu) = sd;
 
 	return 0;
@@ -931,6 +968,74 @@ static void svm_disable_lbrv(struct vcpu_svm *svm)
 	set_msr_interception(msrpm, MSR_IA32_LASTINTTOIP, 0, 0);
 }
 
+static __init void sev_hardware_setup(void)
+{
+	int ret, psp_ret;
+	struct psp_data_init *init;
+	struct psp_data_status *status;
+
+	/*
+	 * Check SEV Feature Support: Fn8001_001F[EAX]
+	 * 	Bit 1: Secure Memory Virtualization supported
+	 */
+	if (!(cpuid_eax(0x8000001F) & 0x2))
+		return;
+
+	/*
+	 * Get maximum number of encrypted guest supported: Fn8001_001F[ECX]
+	 * 	Bit 31:0: Number of supported guest
+	 */
+	max_sev_asid = cpuid_ecx(0x8000001F);
+	if (!max_sev_asid)
+		return;
+
+	init = kzalloc(sizeof(*init), GFP_KERNEL);
+	if (!init)
+		return;
+
+	status = kzalloc(sizeof(*status), GFP_KERNEL);
+	if (!status)
+		goto err_1;
+
+	/* Initialize PSP firmware */
+	init->hdr.buffer_len = sizeof(*init);
+	init->flags = 0;
+	ret = psp_platform_init(init, &psp_ret);
+	if (ret) {
+		printk(KERN_ERR "SEV: PSP_INIT ret=%d (%#x)\n", ret, psp_ret);
+		goto err_2;
+	}
+
+	/* Initialize SEV ASID bitmap */
+	sev_asid_bitmap = kmalloc(max(sizeof(unsigned long),
+				      max_sev_asid/8 + 1), GFP_KERNEL);
+	if (IS_ERR(sev_asid_bitmap)) {
+		psp_platform_shutdown(&psp_ret);
+		goto err_2;
+	}
+	bitmap_zero(sev_asid_bitmap, max_sev_asid);
+	set_bit(0, sev_asid_bitmap);  /* mark ASID 0 as used */
+
+	sev_enabled = 1;
+	printk(KERN_INFO "kvm: SEV enabled\n");
+
+	/* Query the platform status and print API version */
+	status->hdr.buffer_len = sizeof(*status);
+	ret = psp_platform_status(status, &psp_ret);
+	if (ret) {
+		printk(KERN_ERR "SEV: PLATFORM_STATUS ret=%#x\n", psp_ret);
+		goto err_2;
+	}
+
+	printk(KERN_INFO "SEV API: %d.%d\n",
+			status->api_major, status->api_minor);
+err_2:
+	kfree(status);
+err_1:
+	kfree(init);
+	return;
+}
+
 static __init int svm_hardware_setup(void)
 {
 	int cpu;
@@ -966,6 +1071,8 @@ static __init int svm_hardware_setup(void)
 		kvm_enable_efer_bits(EFER_SVME | EFER_LMSLE);
 	}
 
+	sev_hardware_setup();
+
 	for_each_possible_cpu(cpu) {
 		r = svm_cpu_init(cpu);
 		if (r)
@@ -1003,10 +1110,25 @@ err:
 	return r;
 }
 
+static __exit void sev_hardware_unsetup(void)
+{
+	int ret, psp_ret;
+
+	ret = psp_platform_shutdown(&psp_ret);
+	if (ret)
+		printk(KERN_ERR "failed to shutdown PSP rc=%d (%#0x10x)\n",
+		ret, psp_ret);
+
+	kfree(sev_asid_bitmap);
+}
+
 static __exit void svm_hardware_unsetup(void)
 {
 	int cpu;
 
+	if (sev_enabled)
+		sev_hardware_unsetup();
+
 	for_each_possible_cpu(cpu)
 		svm_cpu_uninit(cpu);
 
@@ -1088,6 +1210,11 @@ static void avic_init_vmcb(struct vcpu_svm *svm)
 	svm->vcpu.arch.apicv_active = true;
 }
 
+static void sev_init_vmcb(struct vcpu_svm *svm)
+{
+	svm->vmcb->control.nested_ctl |= SVM_NESTED_CTL_SEV_ENABLE;
+}
+
 static void init_vmcb(struct vcpu_svm *svm)
 {
 	struct vmcb_control_area *control = &svm->vmcb->control;
@@ -1202,6 +1329,10 @@ static void init_vmcb(struct vcpu_svm *svm)
 	if (avic)
 		avic_init_vmcb(svm);
 
+	if (svm_sev_guest())
+		sev_init_vmcb(svm);
+
+
 	mark_all_dirty(svm->vmcb);
 
 	enable_gif(svm);
@@ -1413,6 +1544,14 @@ static void svm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 		avic_update_vapic_bar(svm, APIC_DEFAULT_PHYS_BASE);
 }
 
+static void sev_init_vcpu(struct vcpu_svm *svm)
+{
+	if (!svm_sev_guest())
+		return;
+
+	svm_sev_ref();
+}
+
 static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
 {
 	struct vcpu_svm *svm;
@@ -1475,6 +1614,7 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
 	init_vmcb(svm);
 
 	svm_init_osvw(&svm->vcpu);
+	sev_init_vcpu(svm);
 
 	return &svm->vcpu;
 
@@ -1494,6 +1634,23 @@ out:
 	return ERR_PTR(err);
 }
 
+static void sev_uninit_vcpu(struct vcpu_svm *svm)
+{
+	int cpu;
+	int asid = svm_sev_asid();
+	struct svm_cpu_data *sd;
+
+	if (!svm_sev_guest())
+		return;
+
+	svm_sev_unref();
+
+	for_each_possible_cpu(cpu) {
+		sd = per_cpu(svm_data, cpu);
+		sd->sev_vmcb[asid] = NULL;
+	}
+}
+
 static void svm_free_vcpu(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -1502,6 +1659,7 @@ static void svm_free_vcpu(struct kvm_vcpu *vcpu)
 	__free_pages(virt_to_page(svm->msrpm), MSRPM_ALLOC_ORDER);
 	__free_page(virt_to_page(svm->nested.hsave));
 	__free_pages(virt_to_page(svm->nested.msrpm), MSRPM_ALLOC_ORDER);
+	sev_uninit_vcpu(svm);
 	kvm_vcpu_uninit(vcpu);
 	kmem_cache_free(kvm_vcpu_cache, svm);
 }
@@ -1945,6 +2103,11 @@ static int pf_interception(struct vcpu_svm *svm)
 	default:
 		error_code = svm->vmcb->control.exit_info_1;
 
+		/* In SEV mode, the guest physical address will have C-bit
+		 * set. C-bit must be cleared before handling the fault.
+		 */
+		if (svm_sev_guest())
+			fault_address &= ~sme_me_mask;
 		trace_kvm_page_fault(fault_address, error_code);
 		if (!npt_enabled && kvm_event_needs_reinjection(&svm->vcpu))
 			kvm_mmu_unprotect_page_virt(&svm->vcpu, fault_address);
@@ -4131,12 +4294,40 @@ static void reload_tss(struct kvm_vcpu *vcpu)
 	load_TR_desc();
 }
 
+static void pre_sev_run(struct vcpu_svm *svm)
+{
+	int asid = svm_sev_asid();
+	int cpu = raw_smp_processor_id();
+	struct svm_cpu_data *sd = per_cpu(svm_data, cpu);
+
+	/* Assign the asid allocated for this SEV guest */
+	svm->vmcb->control.asid = svm_sev_asid();
+
+	/* Flush guest TLB:
+	 * - when different VMCB for the same ASID is to be run on the
+	 *   same host CPU
+	 *   or 
+	 * - this VMCB was executed on different host cpu in previous VMRUNs.
+	 */
+	if (sd->sev_vmcb[asid] != (void *)svm->vmcb ||
+		svm->last_cpuid != cpu)
+		svm->vmcb->control.tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
+
+	svm->last_cpuid = cpu;
+	sd->sev_vmcb[asid] = (void *)svm->vmcb;
+
+	mark_dirty(svm->vmcb, VMCB_ASID);
+}
+
 static void pre_svm_run(struct vcpu_svm *svm)
 {
 	int cpu = raw_smp_processor_id();
 
 	struct svm_cpu_data *sd = per_cpu(svm_data, cpu);
 
+	if (svm_sev_guest())
+		return pre_sev_run(svm);
+
 	/* FIXME: handle wraparound of asid_generation */
 	if (svm->asid_generation != sd->asid_generation)
 		new_asid(svm, sd);
@@ -4985,6 +5176,26 @@ static inline void avic_post_state_restore(struct kvm_vcpu *vcpu)
 	avic_handle_ldr_update(vcpu);
 }
 
+static int sev_asid_new(void)
+{
+	int pos;
+
+	if (!sev_enabled)
+		return -ENOTTY;
+
+	pos = find_first_zero_bit(sev_asid_bitmap, max_sev_asid);
+	if (pos >= max_sev_asid)
+		return -EBUSY;
+
+	set_bit(pos, sev_asid_bitmap);
+	return pos;
+}
+
+static void sev_asid_free(int asid)
+{
+	clear_bit(asid, sev_asid_bitmap);
+}
+
 static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 	.cpu_has_kvm_support = has_svm,
 	.disabled_by_bios = is_disabled,

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 20/28] KVM: SVM: prepare for SEV guest management API support
@ 2016-08-22 23:28   ` Brijesh Singh
  0 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:28 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounin

The patch adds initial support required for Secure Encrypted
Virtualization (SEV) guest management API's.

ASID management:
 - Reserve asid range for SEV guest, SEV asid range is obtained
   through CPUID Fn8000_001f[ECX]. A non-SEV guest can use any
   asid outside the SEV asid range.
 - SEV guest must have asid value within asid range obtained
   through CPUID.
 - SEV guest must have the same asid for all vcpu's. A TLB flush
   is required if different vcpu for the same ASID is to be run
   on the same host CPU.

- save SEV private structure in kvm_arch.

- If SEV is available then initialize PSP firmware during hardware probe

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/include/asm/kvm_host.h |    9 ++
 arch/x86/kvm/svm.c              |  213 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 221 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index b1dd673..9b885fc 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -715,6 +715,12 @@ struct kvm_hv {
 	u64 hv_crash_ctl;
 };
 
+struct kvm_sev_info {
+	unsigned int asid;	/* asid for this guest */
+	unsigned int handle;	/* firmware handle */
+	unsigned int ref_count; /* number of active vcpus */
+};
+
 struct kvm_arch {
 	unsigned int n_used_mmu_pages;
 	unsigned int n_requested_mmu_pages;
@@ -799,6 +805,9 @@ struct kvm_arch {
 
 	bool x2apic_format;
 	bool x2apic_broadcast_quirk_disabled;
+
+	/* struct for SEV guest */
+	struct kvm_sev_info sev_info;
 };
 
 struct kvm_vm_stat {
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index f010b23..dcee635 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -34,6 +34,7 @@
 #include <linux/sched.h>
 #include <linux/trace_events.h>
 #include <linux/slab.h>
+#include <linux/ccp-psp.h>
 
 #include <asm/apic.h>
 #include <asm/perf_event.h>
@@ -186,6 +187,9 @@ struct vcpu_svm {
 	struct page *avic_backing_page;
 	u64 *avic_physical_id_cache;
 	bool avic_is_running;
+
+	/* which host cpu was used for running this vcpu */
+	bool last_cpuid;
 };
 
 #define AVIC_LOGICAL_ID_ENTRY_GUEST_PHYSICAL_ID_MASK	(0xFF)
@@ -243,6 +247,25 @@ static int avic;
 module_param(avic, int, S_IRUGO);
 #endif
 
+/* Secure Encrypted Virtualization */
+static bool sev_enabled;
+static unsigned long max_sev_asid;
+static unsigned long *sev_asid_bitmap;
+
+#define kvm_sev_guest()		(kvm->arch.sev_info.handle)
+#define kvm_sev_handle()	(kvm->arch.sev_info.handle)
+#define kvm_sev_ref()		(kvm->arch.sev_info.ref_count++)
+#define kvm_sev_unref()		(kvm->arch.sev_info.ref_count--)
+#define svm_sev_handle()	(svm->vcpu.kvm->arch.sev_info.handle)
+#define svm_sev_asid()		(svm->vcpu.kvm->arch.sev_info.asid)
+#define svm_sev_ref()		(svm->vcpu.kvm->arch.sev_info.ref_count++)
+#define svm_sev_unref()		(svm->vcpu.kvm->arch.sev_info.ref_count--)
+#define svm_sev_guest()		(svm->vcpu.kvm->arch.sev_info.handle)
+#define svm_sev_ref_count()	(svm->vcpu.kvm->arch.sev_info.ref_count)
+
+static int sev_asid_new(void);
+static void sev_asid_free(int asid);
+
 static void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
 static void svm_flush_tlb(struct kvm_vcpu *vcpu);
 static void svm_complete_interrupts(struct vcpu_svm *svm);
@@ -474,6 +497,8 @@ struct svm_cpu_data {
 	struct kvm_ldttss_desc *tss_desc;
 
 	struct page *save_area;
+
+	void **sev_vmcb;  /* index = sev_asid, value = vmcb pointer */
 };
 
 static DEFINE_PER_CPU(struct svm_cpu_data *, svm_data);
@@ -727,7 +752,10 @@ static int svm_hardware_enable(void)
 	sd->asid_generation = 1;
 	sd->max_asid = cpuid_ebx(SVM_CPUID_FUNC) - 1;
 	sd->next_asid = sd->max_asid + 1;
-	sd->min_asid = 1;
+	sd->min_asid = max_sev_asid + 1;
+
+	if (sev_enabled)
+		memset(sd->sev_vmcb, 0, (max_sev_asid + 1) * sizeof(void *));
 
 	native_store_gdt(&gdt_descr);
 	gdt = (struct desc_struct *)gdt_descr.address;
@@ -788,6 +816,7 @@ static void svm_cpu_uninit(int cpu)
 
 	per_cpu(svm_data, raw_smp_processor_id()) = NULL;
 	__free_page(sd->save_area);
+	kfree(sd->sev_vmcb);
 	kfree(sd);
 }
 
@@ -805,6 +834,14 @@ static int svm_cpu_init(int cpu)
 	if (!sd->save_area)
 		goto err_1;
 
+	if (sev_enabled) {
+		sd->sev_vmcb = kmalloc((max_sev_asid + 1) * sizeof(void *),
+					GFP_KERNEL);
+		r = -ENOMEM;
+		if (!sd->sev_vmcb)
+			goto err_1;
+	}
+
 	per_cpu(svm_data, cpu) = sd;
 
 	return 0;
@@ -931,6 +968,74 @@ static void svm_disable_lbrv(struct vcpu_svm *svm)
 	set_msr_interception(msrpm, MSR_IA32_LASTINTTOIP, 0, 0);
 }
 
+static __init void sev_hardware_setup(void)
+{
+	int ret, psp_ret;
+	struct psp_data_init *init;
+	struct psp_data_status *status;
+
+	/*
+	 * Check SEV Feature Support: Fn8001_001F[EAX]
+	 * 	Bit 1: Secure Encrypted Virtualization (SEV) supported
+	 */
+	if (!(cpuid_eax(0x8000001F) & 0x2))
+		return;
+
+	/*
+	 * Get the maximum number of encrypted guests supported: Fn8001_001F[ECX]
+	 * 	Bits 31:0: Number of supported guests
+	 */
+	max_sev_asid = cpuid_ecx(0x8000001F);
+	if (!max_sev_asid)
+		return;
+
+	init = kzalloc(sizeof(*init), GFP_KERNEL);
+	if (!init)
+		return;
+
+	status = kzalloc(sizeof(*status), GFP_KERNEL);
+	if (!status)
+		goto err_1;
+
+	/* Initialize PSP firmware */
+	init->hdr.buffer_len = sizeof(*init);
+	init->flags = 0;
+	ret = psp_platform_init(init, &psp_ret);
+	if (ret) {
+		printk(KERN_ERR "SEV: PSP_INIT ret=%d (%#x)\n", ret, psp_ret);
+		goto err_2;
+	}
+
+	/* Initialize SEV ASID bitmap */
+	sev_asid_bitmap = kmalloc(max(sizeof(unsigned long),
+				      max_sev_asid/8 + 1), GFP_KERNEL);
+	if (!sev_asid_bitmap) {
+		psp_platform_shutdown(&psp_ret);
+		goto err_2;
+	}
+	bitmap_zero(sev_asid_bitmap, max_sev_asid);
+	set_bit(0, sev_asid_bitmap);  /* mark ASID 0 as used */
+
+	sev_enabled = 1;
+	printk(KERN_INFO "kvm: SEV enabled\n");
+
+	/* Query the platform status and print API version */
+	status->hdr.buffer_len = sizeof(*status);
+	ret = psp_platform_status(status, &psp_ret);
+	if (ret) {
+		printk(KERN_ERR "SEV: PLATFORM_STATUS ret=%d (%#x)\n", ret, psp_ret);
+		goto err_2;
+	}
+
+	printk(KERN_INFO "SEV API: %d.%d\n",
+			status->api_major, status->api_minor);
+err_2:
+	kfree(status);
+err_1:
+	kfree(init);
+	return;
+}
+
 static __init int svm_hardware_setup(void)
 {
 	int cpu;
@@ -966,6 +1071,8 @@ static __init int svm_hardware_setup(void)
 		kvm_enable_efer_bits(EFER_SVME | EFER_LMSLE);
 	}
 
+	sev_hardware_setup();
+
 	for_each_possible_cpu(cpu) {
 		r = svm_cpu_init(cpu);
 		if (r)
@@ -1003,10 +1110,25 @@ err:
 	return r;
 }
 
+static __exit void sev_hardware_unsetup(void)
+{
+	int ret, psp_ret;
+
+	ret = psp_platform_shutdown(&psp_ret);
+	if (ret)
+		printk(KERN_ERR "failed to shutdown PSP rc=%d (%#010x)\n",
+		       ret, psp_ret);
+
+	kfree(sev_asid_bitmap);
+}
+
 static __exit void svm_hardware_unsetup(void)
 {
 	int cpu;
 
+	if (sev_enabled)
+		sev_hardware_unsetup();
+
 	for_each_possible_cpu(cpu)
 		svm_cpu_uninit(cpu);
 
@@ -1088,6 +1210,11 @@ static void avic_init_vmcb(struct vcpu_svm *svm)
 	svm->vcpu.arch.apicv_active = true;
 }
 
+static void sev_init_vmcb(struct vcpu_svm *svm)
+{
+	svm->vmcb->control.nested_ctl |= SVM_NESTED_CTL_SEV_ENABLE;
+}
+
 static void init_vmcb(struct vcpu_svm *svm)
 {
 	struct vmcb_control_area *control = &svm->vmcb->control;
@@ -1202,6 +1329,10 @@ static void init_vmcb(struct vcpu_svm *svm)
 	if (avic)
 		avic_init_vmcb(svm);
 
+	if (svm_sev_guest())
+		sev_init_vmcb(svm);
+
+
 	mark_all_dirty(svm->vmcb);
 
 	enable_gif(svm);
@@ -1413,6 +1544,14 @@ static void svm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 		avic_update_vapic_bar(svm, APIC_DEFAULT_PHYS_BASE);
 }
 
+static void sev_init_vcpu(struct vcpu_svm *svm)
+{
+	if (!svm_sev_guest())
+		return;
+
+	svm_sev_ref();
+}
+
 static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
 {
 	struct vcpu_svm *svm;
@@ -1475,6 +1614,7 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
 	init_vmcb(svm);
 
 	svm_init_osvw(&svm->vcpu);
+	sev_init_vcpu(svm);
 
 	return &svm->vcpu;
 
@@ -1494,6 +1634,23 @@ out:
 	return ERR_PTR(err);
 }
 
+static void sev_uninit_vcpu(struct vcpu_svm *svm)
+{
+	int cpu;
+	int asid = svm_sev_asid();
+	struct svm_cpu_data *sd;
+
+	if (!svm_sev_guest())
+		return;
+
+	svm_sev_unref();
+
+	for_each_possible_cpu(cpu) {
+		sd = per_cpu(svm_data, cpu);
+		sd->sev_vmcb[asid] = NULL;
+	}
+}
+
 static void svm_free_vcpu(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -1502,6 +1659,7 @@ static void svm_free_vcpu(struct kvm_vcpu *vcpu)
 	__free_pages(virt_to_page(svm->msrpm), MSRPM_ALLOC_ORDER);
 	__free_page(virt_to_page(svm->nested.hsave));
 	__free_pages(virt_to_page(svm->nested.msrpm), MSRPM_ALLOC_ORDER);
+	sev_uninit_vcpu(svm);
 	kvm_vcpu_uninit(vcpu);
 	kmem_cache_free(kvm_vcpu_cache, svm);
 }
@@ -1945,6 +2103,11 @@ static int pf_interception(struct vcpu_svm *svm)
 	default:
 		error_code = svm->vmcb->control.exit_info_1;
 
+		/* In SEV mode, the guest physical address will have C-bit
+		 * set. C-bit must be cleared before handling the fault.
+		 */
+		if (svm_sev_guest())
+			fault_address &= ~sme_me_mask;
 		trace_kvm_page_fault(fault_address, error_code);
 		if (!npt_enabled && kvm_event_needs_reinjection(&svm->vcpu))
 			kvm_mmu_unprotect_page_virt(&svm->vcpu, fault_address);
@@ -4131,12 +4294,40 @@ static void reload_tss(struct kvm_vcpu *vcpu)
 	load_TR_desc();
 }
 
+static void pre_sev_run(struct vcpu_svm *svm)
+{
+	int asid = svm_sev_asid();
+	int cpu = raw_smp_processor_id();
+	struct svm_cpu_data *sd = per_cpu(svm_data, cpu);
+
+	/* Assign the asid allocated for this SEV guest */
+	svm->vmcb->control.asid = asid;
+
+	/*
+	 * Flush the guest TLB when:
+	 * - a different VMCB for the same ASID is about to run on this
+	 *   host CPU, or
+	 * - this VMCB was last run on a different host CPU.
+	 */
+	if (sd->sev_vmcb[asid] != (void *)svm->vmcb ||
+		svm->last_cpuid != cpu)
+		svm->vmcb->control.tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
+
+	svm->last_cpuid = cpu;
+	sd->sev_vmcb[asid] = (void *)svm->vmcb;
+
+	mark_dirty(svm->vmcb, VMCB_ASID);
+}
+
 static void pre_svm_run(struct vcpu_svm *svm)
 {
 	int cpu = raw_smp_processor_id();
 
 	struct svm_cpu_data *sd = per_cpu(svm_data, cpu);
 
+	if (svm_sev_guest())
+		return pre_sev_run(svm);
+
 	/* FIXME: handle wraparound of asid_generation */
 	if (svm->asid_generation != sd->asid_generation)
 		new_asid(svm, sd);
@@ -4985,6 +5176,26 @@ static inline void avic_post_state_restore(struct kvm_vcpu *vcpu)
 	avic_handle_ldr_update(vcpu);
 }
 
+static int sev_asid_new(void)
+{
+	int pos;
+
+	if (!sev_enabled)
+		return -ENOTTY;
+
+	pos = find_first_zero_bit(sev_asid_bitmap, max_sev_asid);
+	if (pos >= max_sev_asid)
+		return -EBUSY;
+
+	set_bit(pos, sev_asid_bitmap);
+	return pos;
+}
+
+static void sev_asid_free(int asid)
+{
+	clear_bit(asid, sev_asid_bitmap);
+}
+
 static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 	.cpu_has_kvm_support = has_svm,
 	.disabled_by_bios = is_disabled,

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 21/28] KVM: introduce KVM_SEV_ISSUE_CMD ioctl
  2016-08-22 23:23 ` Brijesh Singh
@ 2016-08-22 23:28 ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:28 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab

The ioctl will be used by QEMU to issue Secure Encrypted
Virtualization (SEV) guest commands that transition a guest into
SEV-enabled mode.

A typical usage:

struct kvm_sev_launch_start start;
struct kvm_sev_issue_cmd data;

data.cmd = KVM_SEV_LAUNCH_START;
data.opaque = &start;

ret = ioctl(fd, KVM_SEV_ISSUE_CMD, &data);

On SEV command failure, data.ret_code will contain the firmware error code.
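
Expanding the snippet above into a more complete (hypothetical) userspace
caller: the structures and ioctl number below mirror the UAPI additions in
this patch, redeclared locally since they are not in mainline headers, and
the VM file descriptor is assumed to come from KVM_CREATE_VM.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

/* Local mirrors of the UAPI additions in this patch (not in mainline). */
struct kvm_sev_issue_cmd {
	uint32_t cmd;
	uint64_t opaque;
	uint32_t ret_code;
};

struct kvm_sev_launch_start {
	uint32_t handle;
	uint32_t flags;
	uint32_t policy;
	uint8_t  nonce[16];
	uint8_t  dh_pub_qx[32];
	uint8_t  dh_pub_qy[32];
};

#define KVMIO			0xAE
#define KVM_SEV_LAUNCH_START	0
#define KVM_SEV_ISSUE_CMD	_IOWR(KVMIO, 0xb8, struct kvm_sev_issue_cmd)

/* Issue LAUNCH_START on an existing VM fd; returns 0 on success. */
static int issue_launch_start(int vm_fd)
{
	struct kvm_sev_launch_start start;
	struct kvm_sev_issue_cmd data;

	memset(&start, 0, sizeof(start));
	start.policy = 0;	/* hypothetical: default guest policy */

	memset(&data, 0, sizeof(data));
	data.cmd = KVM_SEV_LAUNCH_START;
	data.opaque = (uint64_t)(unsigned long)&start;

	if (ioctl(vm_fd, KVM_SEV_ISSUE_CMD, &data) < 0) {
		/* On SEV command failure, ret_code holds the firmware error. */
		fprintf(stderr, "LAUNCH_START failed, firmware error %#x\n",
			data.ret_code);
		return -1;
	}
	return 0;
}
```

Note that the kernel only copies `ret_code` back via copy_to_user(), so the
caller must re-read the command structure after the ioctl returns.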

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/include/asm/kvm_host.h |    3 +
 arch/x86/kvm/x86.c              |   13 ++++
 include/uapi/linux/kvm.h        |  125 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 141 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 9b885fc..a94e37d 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1040,6 +1040,9 @@ struct kvm_x86_ops {
 	void (*cancel_hv_timer)(struct kvm_vcpu *vcpu);
 
 	void (*setup_mce)(struct kvm_vcpu *vcpu);
+
+	int (*sev_issue_cmd)(struct kvm *kvm,
+			     struct kvm_sev_issue_cmd __user *argp);
 };
 
 struct kvm_arch_async_pf {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d6f2f4b..0c0adad 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3820,6 +3820,15 @@ split_irqchip_unlock:
 	return r;
 }
 
+static int kvm_vm_ioctl_sev_issue_cmd(struct kvm *kvm,
+				      struct kvm_sev_issue_cmd __user *argp)
+{
+	if (kvm_x86_ops->sev_issue_cmd)
+		return kvm_x86_ops->sev_issue_cmd(kvm, argp);
+
+	return -ENOTTY;
+}
+
 long kvm_arch_vm_ioctl(struct file *filp,
 		       unsigned int ioctl, unsigned long arg)
 {
@@ -4085,6 +4094,10 @@ long kvm_arch_vm_ioctl(struct file *filp,
 		r = kvm_vm_ioctl_enable_cap(kvm, &cap);
 		break;
 	}
+	case KVM_SEV_ISSUE_CMD: {
+		r = kvm_vm_ioctl_sev_issue_cmd(kvm, argp);
+		break;
+	}
 	default:
 		r = kvm_vm_ioctl_assigned_device(kvm, ioctl, arg);
 	}
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 300ef25..72c18c3 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1274,6 +1274,131 @@ struct kvm_s390_ucas_mapping {
 /* Available with KVM_CAP_X86_SMM */
 #define KVM_SMI                   _IO(KVMIO,   0xb7)
 
+/* Secure Encrypted Virtualization mode */
+enum sev_cmd {
+	KVM_SEV_LAUNCH_START = 0,
+	KVM_SEV_LAUNCH_UPDATE,
+	KVM_SEV_LAUNCH_FINISH,
+	KVM_SEV_GUEST_STATUS,
+	KVM_SEV_DBG_DECRYPT,
+	KVM_SEV_DBG_ENCRYPT,
+	KVM_SEV_RECEIVE_START,
+	KVM_SEV_RECEIVE_UPDATE,
+	KVM_SEV_RECEIVE_FINISH,
+	KVM_SEV_SEND_START,
+	KVM_SEV_SEND_UPDATE,
+	KVM_SEV_SEND_FINISH,
+	KVM_SEV_API_VERSION,
+	KVM_SEV_NR_MAX,
+};
+
+struct kvm_sev_issue_cmd {
+	__u32 cmd;
+	__u64 opaque;
+	__u32 ret_code;
+};
+
+struct kvm_sev_launch_start {
+	__u32 handle;
+	__u32 flags;
+	__u32 policy;
+	__u8 nonce[16];
+	__u8 dh_pub_qx[32];
+	__u8 dh_pub_qy[32];
+};
+
+struct kvm_sev_launch_update {
+	__u64	address;
+	__u32	length;
+};
+
+struct kvm_sev_launch_finish {
+	__u32 vcpu_count;
+	__u32 vcpu_length;
+	__u64 vcpu_mask_addr;
+	__u32 vcpu_mask_length;
+	__u8  measurement[32];
+};
+
+struct kvm_sev_guest_status {
+	__u32 policy;
+	__u32 state;
+};
+
+struct kvm_sev_dbg_decrypt {
+	__u64 src_addr;
+	__u64 dst_addr;
+	__u32 length;
+};
+
+struct kvm_sev_dbg_encrypt {
+	__u64 src_addr;
+	__u64 dst_addr;
+	__u32 length;
+};
+
+struct kvm_sev_receive_start {
+	__u32 handle;
+	__u32 flags;
+	__u32 policy;
+	__u8 policy_meas[32];
+	__u8 wrapped_tek[24];
+	__u8 wrapped_tik[24];
+	__u8 ten[16];
+	__u8 dh_pub_qx[32];
+	__u8 dh_pub_qy[32];
+	__u8 nonce[16];
+};
+
+struct kvm_sev_receive_update {
+	__u8 iv[16];
+	__u64 address;
+	__u32 length;
+};
+
+struct kvm_sev_receive_finish {
+	__u8 measurement[32];
+};
+
+struct kvm_sev_send_start {
+	__u8 nonce[16];
+	__u32 policy;
+	__u8 policy_meas[32];
+	__u8 wrapped_tek[24];
+	__u8 wrapped_tik[24];
+	__u8 ten[16];
+	__u8 iv[16];
+	__u32 flags;
+	__u8 api_major;
+	__u8 api_minor;
+	__u32 serial;
+	__u8 dh_pub_qx[32];
+	__u8 dh_pub_qy[32];
+	__u8 pek_sig_r[32];
+	__u8 pek_sig_s[32];
+	__u8 cek_sig_r[32];
+	__u8 cek_sig_s[32];
+	__u8 cek_pub_qx[32];
+	__u8 cek_pub_qy[32];
+	__u8 ask_sig_r[32];
+	__u8 ask_sig_s[32];
+	__u32 ncerts;
+	__u32 cert_length;
+	__u64 certs_addr;
+};
+
+struct kvm_sev_send_update {
+	__u32 length;
+	__u64 src_addr;
+	__u64 dst_addr;
+};
+
+struct kvm_sev_send_finish {
+	__u8 measurement[32];
+};
+
+#define KVM_SEV_ISSUE_CMD	_IOWR(KVMIO, 0xb8, struct kvm_sev_issue_cmd)
+
 #define KVM_DEV_ASSIGN_ENABLE_IOMMU	(1 << 0)
 #define KVM_DEV_ASSIGN_PCI_2_3		(1 << 1)
 #define KVM_DEV_ASSIGN_MASK_INTX	(1 << 2)

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 22/28] KVM: SVM: add SEV launch start command
  2016-08-22 23:23 ` Brijesh Singh
                   ` (45 preceding siblings ...)
  (?)
@ 2016-08-22 23:28 ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:28 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab

The command initiates the process of launching this guest into
SEV-enabled mode.

For more information on the command structure, see [1], section 6.1.

[1] http://support.amd.com/TechDocs/55766_SEV-KM%20API_Spec.pdf
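
The launch path below allocates a guest ASID from a global bitmap
(sev_asid_new(), introduced in the previous patch) before issuing
LAUNCH_START to the PSP. A minimal userspace model of that allocator,
with a hypothetical MAX_SEV_ASID (the kernel derives max_sev_asid from
CPUID and uses find_first_zero_bit()/set_bit() on sev_asid_bitmap):

```c
#include <limits.h>

#define MAX_SEV_ASID	16	/* hypothetical; CPUID-derived in the kernel */
#define BITS_PER_LONG	(sizeof(unsigned long) * CHAR_BIT)
#define BITMAP_LONGS	((MAX_SEV_ASID + BITS_PER_LONG - 1) / BITS_PER_LONG)

static unsigned long sev_asid_bitmap[BITMAP_LONGS];

/* Model of sev_asid_new(): find and claim the first free ASID. */
static int sev_asid_new(void)
{
	int pos;

	for (pos = 0; pos < MAX_SEV_ASID; pos++) {
		unsigned long *word = &sev_asid_bitmap[pos / BITS_PER_LONG];
		unsigned long mask = 1UL << (pos % BITS_PER_LONG);

		if (!(*word & mask)) {
			*word |= mask;	/* set_bit() */
			return pos;
		}
	}
	return -1;			/* all ASIDs busy (-EBUSY in the kernel) */
}

/* Model of sev_asid_free(): release an ASID for reuse. */
static void sev_asid_free(int asid)
{
	sev_asid_bitmap[asid / BITS_PER_LONG] &= ~(1UL << (asid % BITS_PER_LONG));
}
```

A freed ASID is handed out again by the next allocation, which is why
sev_pre_start() below reuses the existing ASID when relaunching a guest
that already has an active PSP handle instead of freeing and reallocating.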

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/kvm/svm.c |  212 +++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 209 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index dcee635..0b6da4a 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -265,6 +265,9 @@ static unsigned long *sev_asid_bitmap;
 
 static int sev_asid_new(void);
 static void sev_asid_free(int asid);
+static void sev_deactivate_handle(unsigned int handle);
+static void sev_decommission_handle(unsigned int handle);
+static int sev_activate_asid(unsigned int handle, int asid, int *psp_ret);
 
 static void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
 static void svm_flush_tlb(struct kvm_vcpu *vcpu);
@@ -1645,9 +1648,18 @@ static void sev_uninit_vcpu(struct vcpu_svm *svm)
 
 	svm_sev_unref();
 
-	for_each_possible_cpu(cpu) {
-		sd = per_cpu(svm_data, cpu);
-		sd->sev_vmcb[asid] = NULL;
+	/* When the reference count reaches zero, free the SEV ASID and
+	 * deactivate the PSP handle.
+	 */
+	if (!svm_sev_ref_count()) {
+		sev_deactivate_handle(svm_sev_handle());
+		sev_decommission_handle(svm_sev_handle());
+		sev_asid_free(svm_sev_asid());
+
+		for_each_possible_cpu(cpu) {
+			sd = per_cpu(svm_data, cpu);
+			sd->sev_vmcb[asid] = NULL;
+		}
 	}
 }
 
@@ -5196,6 +5208,198 @@ static void sev_asid_free(int asid)
 	clear_bit(asid, sev_asid_bitmap);
 }
 
+static void sev_decommission_handle(unsigned int handle)
+{
+	int ret, psp_ret;
+	struct psp_data_decommission *decommission;
+
+	decommission = kzalloc(sizeof(*decommission), GFP_KERNEL);
+	if (!decommission)
+		return;
+
+	decommission->hdr.buffer_len = sizeof(*decommission);
+	decommission->handle = handle;
+	ret = psp_guest_decommission(decommission, &psp_ret);
+	if (ret)
+		printk(KERN_ERR "SEV: DECOMISSION ret=%d (%#010x)\n",
+				ret, psp_ret);
+
+	kfree(decommission);
+}
+
+static void sev_deactivate_handle(unsigned int handle)
+{
+	int ret, psp_ret;
+	struct psp_data_deactivate *deactivate;
+
+	deactivate = kzalloc(sizeof(*deactivate), GFP_KERNEL);
+	if (!deactivate)
+		return;
+
+	deactivate->hdr.buffer_len = sizeof(*deactivate);
+	deactivate->handle = handle;
+	ret = psp_guest_deactivate(deactivate, &psp_ret);
+	if (ret) {
+		printk(KERN_ERR "SEV: DEACTIVATE ret=%d (%#010x)\n",
+				ret, psp_ret);
+		goto buffer_free;
+	}
+
+	wbinvd_on_all_cpus();
+
+	ret = psp_guest_df_flush(&psp_ret);
+	if (ret)
+		printk(KERN_ERR "SEV: DF_FLUSH ret=%d (%#010x)\n",
+				ret, psp_ret);
+
+buffer_free:
+	kfree(deactivate);
+}
+
+static int sev_activate_asid(unsigned int handle, int asid, int *psp_ret)
+{
+	int ret;
+	struct psp_data_activate *activate;
+
+	wbinvd_on_all_cpus();
+
+	ret = psp_guest_df_flush(psp_ret);
+	if (ret) {
+		printk(KERN_ERR "SEV: DF_FLUSH ret=%d (%#010x)\n",
+				ret, *psp_ret);
+		return ret;
+	}
+
+	activate = kzalloc(sizeof(*activate), GFP_KERNEL);
+	if (!activate)
+		return -ENOMEM;
+
+	activate->hdr.buffer_len = sizeof(*activate);
+	activate->handle = handle;
+	activate->asid   = asid;
+	ret = psp_guest_activate(activate, psp_ret);
+	if (ret)
+		printk(KERN_ERR "SEV: ACTIVATE ret=%d (%#010x)\n",
+				ret, *psp_ret);
+	kfree(activate);
+	return ret;
+}
+
+static int sev_pre_start(struct kvm *kvm, int *asid)
+{
+	int ret;
+
+	/* If guest has active psp handle then deactivate before calling
+	 * launch start.
+	 */
+	if (kvm_sev_guest()) {
+		sev_deactivate_handle(kvm_sev_handle());
+		sev_decommission_handle(kvm_sev_handle());
+		*asid = kvm->arch.sev_info.asid;  /* reuse the asid */
+		ret = 0;
+	} else {
+		/* Allocate new asid for this launch */
+		ret = sev_asid_new();
+		if (ret < 0) {
+			printk(KERN_ERR "SEV: failed to allocate asid\n");
+			return ret;
+		}
+		*asid = ret;
+		ret = 0;
+	}
+
+	return ret;
+}
+
+static int sev_post_start(struct kvm *kvm, int asid, int handle, int *psp_ret)
+{
+	int ret;
+
+	/* activate asid */
+	ret = sev_activate_asid(handle, asid, psp_ret);
+	if (ret)
+		return ret;
+
+	kvm->arch.sev_info.handle = handle;
+	kvm->arch.sev_info.asid = asid;
+
+	return 0;
+}
+
+static int sev_launch_start(struct kvm *kvm,
+			    struct kvm_sev_launch_start __user *arg,
+			    int *psp_ret)
+{
+	int ret, asid;
+	struct kvm_sev_launch_start params;
+	struct psp_data_launch_start *start;
+
+	/* Get parameter from the user */
+	if (copy_from_user(&params, arg, sizeof(*arg)))
+		return -EFAULT;
+
+	start = kzalloc(sizeof(*start), GFP_KERNEL);
+	if (!start)
+		return -ENOMEM;
+
+	ret = sev_pre_start(kvm, &asid);
+	if (ret)
+		goto err_1;
+
+	start->hdr.buffer_len = sizeof(*start);
+	start->flags  = params.flags;
+	start->policy = params.policy;
+	start->handle = params.handle;
+	memcpy(start->nonce, &params.nonce, sizeof(start->nonce));
+	memcpy(start->dh_pub_qx, &params.dh_pub_qx, sizeof(start->dh_pub_qx));
+	memcpy(start->dh_pub_qy, &params.dh_pub_qy, sizeof(start->dh_pub_qy));
+
+	/* launch start */
+	ret = psp_guest_launch_start(start, psp_ret);
+	if (ret) {
+		printk(KERN_ERR "SEV: LAUNCH_START ret=%d (%#010x)\n",
+			ret, *psp_ret);
+		goto err_2;
+	}
+
+	ret = sev_post_start(kvm, asid, start->handle, psp_ret);
+	if (ret)
+		goto err_2;
+
+	kfree(start);
+	return 0;
+
+err_2:
+	sev_asid_free(asid);
+err_1:
+	kfree(start);
+	return ret;
+}
+
+static int amd_sev_issue_cmd(struct kvm *kvm,
+			     struct kvm_sev_issue_cmd __user *user_data)
+{
+	int r = -ENOTTY;
+	struct kvm_sev_issue_cmd arg;
+
+	if (copy_from_user(&arg, user_data, sizeof(struct kvm_sev_issue_cmd)))
+		return -EFAULT;
+
+	switch (arg.cmd) {
+	case KVM_SEV_LAUNCH_START: {
+		r = sev_launch_start(kvm, (void *)arg.opaque,
+					&arg.ret_code);
+		break;
+	}
+	default:
+		break;
+	}
+
+	if (copy_to_user(user_data, &arg, sizeof(struct kvm_sev_issue_cmd)))
+		r = -EFAULT;
+	return r;
+}
+
 static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 	.cpu_has_kvm_support = has_svm,
 	.disabled_by_bios = is_disabled,
@@ -5313,6 +5517,8 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 
 	.pmu_ops = &amd_pmu_ops,
 	.deliver_posted_interrupt = svm_deliver_avic_intr,
+
+	.sev_issue_cmd = amd_sev_issue_cmd,
 };
 
 static int __init svm_init(void)

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 22/28] KVM: SVM: add SEV launch start command
  2016-08-22 23:23 ` Brijesh Singh
  (?)
  (?)
@ 2016-08-22 23:28   ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:28 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail, linux-kernel, mcgrof, linux-crypto, pbonzini, akpm,
	davem

This command initiates the process of launching the guest in
SEV-enabled mode.

For more information on the command structure, see [1], section 6.1.

[1] http://support.amd.com/TechDocs/55766_SEV-KM%20API_Spec.pdf

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/kvm/svm.c |  212 +++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 209 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index dcee635..0b6da4a 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -265,6 +265,9 @@ static unsigned long *sev_asid_bitmap;
 
 static int sev_asid_new(void);
 static void sev_asid_free(int asid);
+static void sev_deactivate_handle(unsigned int handle);
+static void sev_decommission_handle(unsigned int handle);
+static int sev_activate_asid(unsigned int handle, int asid, int *psp_ret);
 
 static void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
 static void svm_flush_tlb(struct kvm_vcpu *vcpu);
@@ -1645,9 +1648,18 @@ static void sev_uninit_vcpu(struct vcpu_svm *svm)
 
 	svm_sev_unref();
 
-	for_each_possible_cpu(cpu) {
-		sd = per_cpu(svm_data, cpu);
-		sd->sev_vmcb[asid] = NULL;
+	/* When the reference count reaches zero, deactivate the PSP handle
+	 * and free the SEV ASID.
+	 */
+	if (!svm_sev_ref_count()) {
+		sev_deactivate_handle(svm_sev_handle());
+		sev_decommission_handle(svm_sev_handle());
+		sev_asid_free(svm_sev_asid());
+
+		for_each_possible_cpu(cpu) {
+			sd = per_cpu(svm_data, cpu);
+			sd->sev_vmcb[asid] = NULL;
+		}
 	}
 }
 
@@ -5196,6 +5208,198 @@ static void sev_asid_free(int asid)
 	clear_bit(asid, sev_asid_bitmap);
 }
 
+static void sev_decommission_handle(unsigned int handle)
+{
+	int ret, psp_ret;
+	struct psp_data_decommission *decommission;
+
+	decommission = kzalloc(sizeof(*decommission), GFP_KERNEL);
+	if (!decommission)
+		return;
+
+	decommission->hdr.buffer_len = sizeof(*decommission);
+	decommission->handle = handle;
+	ret = psp_guest_decommission(decommission, &psp_ret);
+	if (ret)
+		printk(KERN_ERR "SEV: DECOMMISSION ret=%d (%#010x)\n",
+				ret, psp_ret);
+
+	kfree(decommission);
+}
+
+static void sev_deactivate_handle(unsigned int handle)
+{
+	int ret, psp_ret;
+	struct psp_data_deactivate *deactivate;
+
+	deactivate = kzalloc(sizeof(*deactivate), GFP_KERNEL);
+	if (!deactivate)
+		return;
+
+	deactivate->hdr.buffer_len = sizeof(*deactivate);
+	deactivate->handle = handle;
+	ret = psp_guest_deactivate(deactivate, &psp_ret);
+	if (ret) {
+		printk(KERN_ERR "SEV: DEACTIVATE ret=%d (%#010x)\n",
+				ret, psp_ret);
+		goto buffer_free;
+	}
+
+	wbinvd_on_all_cpus();
+
+	ret = psp_guest_df_flush(&psp_ret);
+	if (ret)
+		printk(KERN_ERR "SEV: DF_FLUSH ret=%d (%#010x)\n",
+				ret, psp_ret);
+
+buffer_free:
+	kfree(deactivate);
+}
+
+static int sev_activate_asid(unsigned int handle, int asid, int *psp_ret)
+{
+	int ret;
+	struct psp_data_activate *activate;
+
+	wbinvd_on_all_cpus();
+
+	ret = psp_guest_df_flush(psp_ret);
+	if (ret) {
+		printk(KERN_ERR "SEV: DF_FLUSH ret=%d (%#010x)\n",
+				ret, *psp_ret);
+		return ret;
+	}
+
+	activate = kzalloc(sizeof(*activate), GFP_KERNEL);
+	if (!activate)
+		return -ENOMEM;
+
+	activate->hdr.buffer_len = sizeof(*activate);
+	activate->handle = handle;
+	activate->asid   = asid;
+	ret = psp_guest_activate(activate, psp_ret);
+	if (ret)
+		printk(KERN_ERR "SEV: ACTIVATE ret=%d (%#010x)\n",
+				ret, *psp_ret);
+	kfree(activate);
+	return ret;
+}
+
+static int sev_pre_start(struct kvm *kvm, int *asid)
+{
+	int ret;
+
+	/* If the guest already has an active PSP handle, deactivate and
+	 * decommission it before calling launch start.
+	 */
+	if (kvm_sev_guest()) {
+		sev_deactivate_handle(kvm_sev_handle());
+		sev_decommission_handle(kvm_sev_handle());
+		*asid = kvm->arch.sev_info.asid;  /* reuse the asid */
+		ret = 0;
+	} else {
+		/* Allocate new asid for this launch */
+		ret = sev_asid_new();
+		if (ret < 0) {
+			printk(KERN_ERR "SEV: failed to allocate asid\n");
+			return ret;
+		}
+		*asid = ret;
+		ret = 0;
+	}
+
+	return ret;
+}
+
+static int sev_post_start(struct kvm *kvm, int asid, int handle, int *psp_ret)
+{
+	int ret;
+
+	/* activate asid */
+	ret = sev_activate_asid(handle, asid, psp_ret);
+	if (ret)
+		return ret;
+
+	kvm->arch.sev_info.handle = handle;
+	kvm->arch.sev_info.asid = asid;
+
+	return 0;
+}
+
+static int sev_launch_start(struct kvm *kvm,
+			    struct kvm_sev_launch_start __user *arg,
+			    int *psp_ret)
+{
+	int ret, asid;
+	struct kvm_sev_launch_start params;
+	struct psp_data_launch_start *start;
+
+	/* Get the parameters from the user */
+	if (copy_from_user(&params, arg, sizeof(*arg)))
+		return -EFAULT;
+
+	start = kzalloc(sizeof(*start), GFP_KERNEL);
+	if (!start)
+		return -ENOMEM;
+
+	ret = sev_pre_start(kvm, &asid);
+	if (ret)
+		goto err_1;
+
+	start->hdr.buffer_len = sizeof(*start);
+	start->flags  = params.flags;
+	start->policy = params.policy;
+	start->handle = params.handle;
+	memcpy(start->nonce, &params.nonce, sizeof(start->nonce));
+	memcpy(start->dh_pub_qx, &params.dh_pub_qx, sizeof(start->dh_pub_qx));
+	memcpy(start->dh_pub_qy, &params.dh_pub_qy, sizeof(start->dh_pub_qy));
+
+	/* launch start */
+	ret = psp_guest_launch_start(start, psp_ret);
+	if (ret) {
+		printk(KERN_ERR "SEV: LAUNCH_START ret=%d (%#010x)\n",
+			ret, *psp_ret);
+		goto err_2;
+	}
+
+	ret = sev_post_start(kvm, asid, start->handle, psp_ret);
+	if (ret)
+		goto err_2;
+
+	kfree(start);
+	return 0;
+
+err_2:
+	sev_asid_free(asid);
+err_1:
+	kfree(start);
+	return ret;
+}
+
+static int amd_sev_issue_cmd(struct kvm *kvm,
+			     struct kvm_sev_issue_cmd __user *user_data)
+{
+	int r = -ENOTTY;
+	struct kvm_sev_issue_cmd arg;
+
+	if (copy_from_user(&arg, user_data, sizeof(struct kvm_sev_issue_cmd)))
+		return -EFAULT;
+
+	switch (arg.cmd) {
+	case KVM_SEV_LAUNCH_START: {
+		r = sev_launch_start(kvm, (void *)arg.opaque,
+					&arg.ret_code);
+		break;
+	}
+	default:
+		break;
+	}
+
+	if (copy_to_user(user_data, &arg, sizeof(struct kvm_sev_issue_cmd)))
+		r = -EFAULT;
+	return r;
+}
+
 static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 	.cpu_has_kvm_support = has_svm,
 	.disabled_by_bios = is_disabled,
@@ -5313,6 +5517,8 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 
 	.pmu_ops = &amd_pmu_ops,
 	.deliver_posted_interrupt = svm_deliver_avic_intr,
+
+	.sev_issue_cmd = amd_sev_issue_cmd,
 };
 
 static int __init svm_init(void)


* [RFC PATCH v1 23/28] KVM: SVM: add SEV launch update command
  2016-08-22 23:23 ` Brijesh Singh
                   ` (47 preceding siblings ...)
  (?)
@ 2016-08-22 23:28 ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:28 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab

This command is used to encrypt a guest memory region.

For more information, see [1], section 6.2.

[1] http://support.amd.com/TechDocs/55766_SEV-KM%20API_Spec.pdf

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/kvm/svm.c |  126 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 126 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 0b6da4a..c78bdc6 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -35,6 +35,8 @@
 #include <linux/trace_events.h>
 #include <linux/slab.h>
 #include <linux/ccp-psp.h>
+#include <linux/pagemap.h>
+#include <linux/swap.h>
 
 #include <asm/apic.h>
 #include <asm/perf_event.h>
@@ -263,6 +265,8 @@ static unsigned long *sev_asid_bitmap;
 #define svm_sev_guest()		(svm->vcpu.kvm->arch.sev_info.handle)
 #define svm_sev_ref_count()	(svm->vcpu.kvm->arch.sev_info.ref_count)
 
+#define __sev_page_pa(x) ((page_to_pfn(x) << PAGE_SHIFT) | sme_me_mask)
+
 static int sev_asid_new(void);
 static void sev_asid_free(int asid);
 static void sev_deactivate_handle(unsigned int handle);
@@ -5376,6 +5380,123 @@ err_1:
 	return ret;
 }
 
+static int sev_pre_update(struct page **pages, unsigned long uaddr, int npages)
+{
+	int pinned;
+
+	/* pin the user virtual address */
+	down_read(&current->mm->mmap_sem);
+	pinned = get_user_pages(uaddr, npages, 1, 0, pages, NULL);
+	up_read(&current->mm->mmap_sem);
+	if (pinned != npages) {
+		printk(KERN_ERR "SEV: failed to pin %d pages (got %d)\n",
+				npages, pinned);
+		goto err;
+	}
+
+	return 0;
+err:
+	if (pinned > 0)
+		release_pages(pages, pinned, 0);
+	return 1;
+}
+
+static int sev_launch_update(struct kvm *kvm,
+			     struct kvm_sev_launch_update __user *arg,
+			     int *psp_ret)
+{
+	int first, last;
+	struct page **inpages;
+	int ret, nr_pages;
+	unsigned long uaddr, ulen;
+	int i, buffer_len, len, offset;
+	struct kvm_sev_launch_update params;
+	struct psp_data_launch_update *update;
+
+	/* Get the parameters from the user */
+	if (copy_from_user(&params, arg, sizeof(*arg)))
+		return -EFAULT;
+
+	uaddr = params.address;
+	ulen = params.length;
+
+	/* Get number of pages */
+	first = (uaddr & PAGE_MASK) >> PAGE_SHIFT;
+	last = ((uaddr + ulen - 1) & PAGE_MASK) >> PAGE_SHIFT;
+	nr_pages = (last - first + 1);
+
+	/* allocate the buffers */
+	buffer_len = sizeof(*update);
+	update = kzalloc(buffer_len, GFP_KERNEL);
+	if (!update)
+		return -ENOMEM;
+
+	ret = -ENOMEM;
+	inpages = kzalloc(nr_pages * sizeof(struct page *), GFP_KERNEL);
+	if (!inpages)
+		goto err_1;
+
+	ret = sev_pre_update(inpages, uaddr, nr_pages);
+	if (ret)
+		goto err_2;
+
+	/* The array of pages returned by get_user_pages() is page-aligned
+	 * memory. Since the user buffer is probably not page-aligned, we need
+	 * to calculate the offset within a page for the first update entry.
+	 */
+	offset = uaddr & (PAGE_SIZE - 1);
+	len = min_t(size_t, (PAGE_SIZE - offset), ulen);
+	ulen -= len;
+
+	/* update first page -
+	 * special care needs to be taken for the first page because we might
+	 * be dealing with an offset within the page
+	 */
+	update->hdr.buffer_len = buffer_len;
+	update->handle = kvm_sev_handle();
+	update->length = len;
+	update->address = __sev_page_pa(inpages[0]) + offset;
+	clflush_cache_range(page_address(inpages[0]), PAGE_SIZE);
+	ret = psp_guest_launch_update(update, 5, psp_ret);
+	if (ret) {
+		printk(KERN_ERR "SEV: LAUNCH_UPDATE addr %#llx len %d "
+				"ret=%d (%#010x)\n", update->address,
+				update->length, ret, *psp_ret);
+		goto err_3;
+	}
+
+	/* update remaining pages */
+	for (i = 1; i < nr_pages; i++) {
+
+		len = min_t(size_t, PAGE_SIZE, ulen);
+		ulen -= len;
+		update->length = len;
+		update->address = __sev_page_pa(inpages[i]);
+		clflush_cache_range(page_address(inpages[i]), PAGE_SIZE);
+
+		ret = psp_guest_launch_update(update, 5, psp_ret);
+		if (ret) {
+			printk(KERN_ERR "SEV: LAUNCH_UPDATE addr %#llx len %d "
+				"ret=%d (%#010x)\n", update->address,
+				update->length, ret, *psp_ret);
+			goto err_3;
+		}
+	}
+
+err_3:
+	/* mark pages dirty */
+	for (i = 0; i < nr_pages; i++) {
+		set_page_dirty_lock(inpages[i]);
+		mark_page_accessed(inpages[i]);
+	}
+	release_pages(inpages, nr_pages, 0);
+err_2:
+	kfree(inpages);
+err_1:
+	kfree(update);
+	return ret;
+}
+ 
 static int amd_sev_issue_cmd(struct kvm *kvm,
 			     struct kvm_sev_issue_cmd __user *user_data)
 {
@@ -5391,6 +5512,11 @@ static int amd_sev_issue_cmd(struct kvm *kvm,
 					&arg.ret_code);
 		break;
 	}
+	case KVM_SEV_LAUNCH_UPDATE: {
+		r = sev_launch_update(kvm, (void *)arg.opaque,
+					&arg.ret_code);
+		break;
+	}
 	default:
 		break;
 	}

^ permalink raw reply related	[flat|nested] 255+ messages in thread


* [RFC PATCH v1 24/28] KVM: SVM: add SEV_LAUNCH_FINISH command
  2016-08-22 23:23 ` Brijesh Singh
                   ` (48 preceding siblings ...)
  (?)
@ 2016-08-22 23:28 ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:28 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab

The command is used to finalize the guest launch into SEV mode.

For more information, see section 6.3 of [1].

[1] http://support.amd.com/TechDocs/55766_SEV-KM%20API_Spec.pdf

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/kvm/svm.c |   78 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 78 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index c78bdc6..60cc0f7 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5497,6 +5497,79 @@ err_1:
 	return ret;
 }
  
+static int sev_launch_finish(struct kvm *kvm,
+			     struct kvm_sev_launch_finish __user *argp,
+			     int *psp_ret)
+{
+	int i, ret;
+	void *mask = NULL;
+	int buffer_len, len;
+	struct kvm_vcpu *vcpu;
+	struct psp_data_launch_finish *finish;
+	struct kvm_sev_launch_finish params;
+
+	if (!kvm_sev_guest())
+		return -EINVAL;
+
+	/* Get the parameters from the user */
+	if (copy_from_user(&params, argp, sizeof(*argp)))
+		return -EFAULT;
+
+	buffer_len = sizeof(*finish) + (sizeof(u64) * params.vcpu_count);
+	finish = kzalloc(buffer_len, GFP_KERNEL);
+	if (!finish)
+		return -ENOMEM;
+
+	/* copy the vcpu mask from user */
+	if (params.vcpu_mask_length && params.vcpu_mask_addr) {
+		ret = -ENOMEM;
+		mask = (void *) get_zeroed_page(GFP_KERNEL);
+		if (!mask)
+			goto err_1;
+
+		len = min_t(size_t, PAGE_SIZE, params.vcpu_mask_length);
+		ret = -EFAULT;
+		if (copy_from_user(mask, (uint8_t*)params.vcpu_mask_addr, len))
+			goto err_2;
+		finish->vcpus.state_mask_addr = __psp_pa(mask);
+	}
+
+	finish->handle = kvm_sev_handle();
+	finish->hdr.buffer_len = buffer_len;
+	finish->vcpus.state_count = params.vcpu_count;
+	finish->vcpus.state_length = params.vcpu_length;
+	kvm_for_each_vcpu(i, vcpu, kvm) {
+		finish->vcpus.state_addr[i] =
+					to_svm(vcpu)->vmcb_pa | sme_me_mask;
+		if (i == params.vcpu_count)
+			break;
+	}
+
+	/* launch finish */
+	ret = psp_guest_launch_finish(finish, psp_ret);
+	if (ret) {
+		printk(KERN_ERR "SEV: LAUNCH_FINISH ret=%d (%#010x)\n",
+			ret, *psp_ret);
+		goto err_2;
+	}
+
+	/* Iterate through each vcpu and set the SEV KVM_SEV_FEATURE bit in
+	 * KVM_CPUID_FEATURE to indicate that SEV is enabled on this vcpu
+	 */
+	kvm_for_each_vcpu(i, vcpu, kvm)
+		svm_cpuid_update(vcpu);
+
+	/* copy the measurement for user */
+	if (copy_to_user(argp->measurement, finish->measurement, 32))
+		ret = -EFAULT;
+
+err_2:
+	free_page((unsigned long)mask);
+err_1:
+	kfree(finish);
+	return ret;
+}
+
 static int amd_sev_issue_cmd(struct kvm *kvm,
 			     struct kvm_sev_issue_cmd __user *user_data)
 {
@@ -5517,6 +5590,11 @@ static int amd_sev_issue_cmd(struct kvm *kvm,
 					&arg.ret_code);
 		break;
 	}
+	case KVM_SEV_LAUNCH_FINISH: {
+		r = sev_launch_finish(kvm, (void *)arg.opaque,
+					&arg.ret_code);
+		break;
+	}
 	default:
 		break;
 	}

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 24/28] KVM: SVM: add SEV_LAUNCH_FINISH command
  2016-08-22 23:23 ` Brijesh Singh
  (?)
  (?)
@ 2016-08-22 23:28   ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:28 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail, linux-kernel, mcgrof, linux-crypto, pbonzini, akpm,
	davem

The command is used for finializing the guest launch into SEV mode.

For more information see [1], section 6.3

[1] http://support.amd.com/TechDocs/55766_SEV-KM%20API_Spec.pdf

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/kvm/svm.c |   78 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 78 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index c78bdc6..60cc0f7 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5497,6 +5497,79 @@ err_1:
 	return ret;
 }
  
+static int sev_launch_finish(struct kvm *kvm,
+			     struct kvm_sev_launch_finish __user *argp,
+			     int *psp_ret)
+{
+	int i, ret;
+	void *mask = NULL;
+	int buffer_len, len;
+	struct kvm_vcpu *vcpu;
+	struct psp_data_launch_finish *finish;
+	struct kvm_sev_launch_finish params;
+
+	if (!kvm_sev_guest())
+		return -EINVAL;
+
+	/* Get the parameters from the user */
+	if (copy_from_user(&params, argp, sizeof(*argp)))
+		return -EFAULT;
+
+	buffer_len = sizeof(*finish) + (sizeof(u64) * params.vcpu_count);
+	finish = kzalloc(buffer_len, GFP_KERNEL);
+	if (!finish)
+		return -ENOMEM;
+
+	/* copy the vcpu mask from user */
+	if (params.vcpu_mask_length && params.vcpu_mask_addr) {
+		ret = -ENOMEM;
+		mask = (void *) get_zeroed_page(GFP_KERNEL);
+		if (!mask)
+			goto err_1;
+
+		len = min_t(size_t, PAGE_SIZE, params.vcpu_mask_length);
+		ret = -EFAULT;
+		if (copy_from_user(mask, (uint8_t*)params.vcpu_mask_addr, len))
+			goto err_2;
+		finish->vcpus.state_mask_addr = __psp_pa(mask);
+	}
+
+	finish->handle = kvm_sev_handle();
+	finish->hdr.buffer_len = buffer_len;
+	finish->vcpus.state_count = params.vcpu_count;
+	finish->vcpus.state_length = params.vcpu_length;
+	kvm_for_each_vcpu(i, vcpu, kvm) {
+		if (i == params.vcpu_count)
+			break;
+		finish->vcpus.state_addr[i] =
+					to_svm(vcpu)->vmcb_pa | sme_me_mask;
+	}
+
+	/* launch finish */
+	ret = psp_guest_launch_finish(finish, psp_ret);
+	if (ret) {
+		printk(KERN_ERR "SEV: LAUNCH_FINISH ret=%d (%#010x)\n",
+			ret, *psp_ret);
+		goto err_2;
+	}
+
+	/* Iterate through each vcpu and set the SEV KVM_SEV_FEATURE bit in
+	 * KVM_CPUID_FEATURE to indicate that SEV is enabled on this vcpu
+	 */
+	kvm_for_each_vcpu(i, vcpu, kvm)
+		svm_cpuid_update(vcpu);
+
+	/* copy the measurement for user */
+	if (copy_to_user(argp->measurement, finish->measurement, 32))
+		ret = -EFAULT;
+
+err_2:
+	free_page((unsigned long)mask);
+err_1:
+	kfree(finish);
+	return ret;
+}
+
 static int amd_sev_issue_cmd(struct kvm *kvm,
 			     struct kvm_sev_issue_cmd __user *user_data)
 {
@@ -5517,6 +5590,11 @@ static int amd_sev_issue_cmd(struct kvm *kvm,
 					&arg.ret_code);
 		break;
 	}
+	case KVM_SEV_LAUNCH_FINISH: {
+		r = sev_launch_finish(kvm, (void *)arg.opaque,
+					&arg.ret_code);
+		break;
+	}
 	default:
 		break;
 	}

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 25/28] KVM: SVM: add KVM_SEV_GUEST_STATUS command
  2016-08-22 23:23 ` Brijesh Singh
                   ` (50 preceding siblings ...)
  (?)
@ 2016-08-22 23:29 ` Brijesh Singh
  0 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:29 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab

The command is used to query the SEV guest status.

For more information see [1], section 6.10

[1] http://support.amd.com/TechDocs/55766_SEV-KM%20API_Spec.pdf

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/kvm/svm.c |   41 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 60cc0f7..63e7d15 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5570,6 +5570,42 @@ err_1:
 	return ret;
 }
 
+static int sev_guest_status(struct kvm *kvm,
+			    struct kvm_sev_guest_status __user *argp,
+			    int *psp_ret)
+{
+	int ret;
+	struct kvm_sev_guest_status params;
+	struct psp_data_guest_status *status;
+
+	if (!kvm_sev_guest())
+		return -ENOTTY;
+
+	if (copy_from_user(&params, argp, sizeof(*argp)))
+		return -EFAULT;
+
+	status = kzalloc(sizeof(*status), GFP_KERNEL);
+	if (!status)
+		return -ENOMEM;
+
+	status->hdr.buffer_len = sizeof(*status);
+	status->handle = kvm_sev_handle();
+	ret = psp_guest_status(status, psp_ret);
+	if (ret) {
+		printk(KERN_ERR "SEV: GUEST_STATUS ret=%d (%#010x)\n",
+			ret, *psp_ret);
+		goto err_1;
+	}
+	params.policy = status->policy;
+	params.state = status->state;
+
+	if (copy_to_user(argp, &params, sizeof(*argp)))
+		ret = -EFAULT;
+err_1:
+	kfree(status);
+	return ret;
+}
+
 static int amd_sev_issue_cmd(struct kvm *kvm,
 			     struct kvm_sev_issue_cmd __user *user_data)
 {
@@ -5595,6 +5631,11 @@ static int amd_sev_issue_cmd(struct kvm *kvm,
 					&arg.ret_code);
 		break;
 	}
+	case KVM_SEV_GUEST_STATUS: {
+		r = sev_guest_status(kvm, (void *)arg.opaque,
+					&arg.ret_code);
+		break;
+	}
 	default:
 		break;
 	}

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 26/28] KVM: SVM: add KVM_SEV_DEBUG_DECRYPT command
  2016-08-22 23:23 ` Brijesh Singh
                   ` (52 preceding siblings ...)
  (?)
@ 2016-08-22 23:29 ` Brijesh Singh
  0 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:29 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab

The command decrypts a page of guest memory for debugging purposes.

For more information see [1], section 7.1

[1] http://support.amd.com/TechDocs/55766_SEV-KM%20API_Spec.pdf

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/kvm/svm.c |   83 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 83 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 63e7d15..b383bc7 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5606,6 +5606,84 @@ err_1:
 	return ret;
 }
 
+static int __sev_dbg_decrypt_page(struct kvm *kvm, unsigned long src,
+				  void *dst, int *psp_ret)
+{
+	int ret, pinned;
+	struct page **inpages;
+	struct psp_data_dbg *decrypt;
+
+	decrypt = kzalloc(sizeof(*decrypt), GFP_KERNEL);
+	if (!decrypt)
+		return -ENOMEM;
+
+	ret = -ENOMEM;
+	inpages = kzalloc(sizeof(struct page *), GFP_KERNEL);
+	if (!inpages)
+		goto err_1;
+
+	/* pin the user virtual address */
+	ret = -EFAULT;
+	down_read(&current->mm->mmap_sem);
+	pinned = get_user_pages(src, 1, 1, 0, inpages, NULL);
+	up_read(&current->mm->mmap_sem);
+	if (pinned < 0)
+		goto err_2;
+
+	decrypt->hdr.buffer_len = sizeof(*decrypt);
+	decrypt->handle = kvm_sev_handle();
+	decrypt->dst_addr = __pa(dst) | sme_me_mask;
+	decrypt->src_addr = __sev_page_pa(inpages[0]);
+	decrypt->length = PAGE_SIZE;
+
+	ret = psp_dbg_decrypt(decrypt, psp_ret);
+	if (ret)
+		printk(KERN_ERR "SEV: DEBUG_DECRYPT %d (%#010x)\n",
+				ret, *psp_ret);
+	release_pages(inpages, 1, 0);
+err_2:
+	kfree(inpages);
+err_1:
+	kfree(decrypt);
+	return ret;
+}
+
+static int sev_dbg_decrypt(struct kvm *kvm,
+			   struct kvm_sev_dbg_decrypt __user *argp,
+			   int *psp_ret)
+{
+	void *data;
+	int ret, offset, len;
+	struct kvm_sev_dbg_decrypt debug;
+
+	if (!kvm_sev_guest())
+		return -ENOTTY;
+
+	if (copy_from_user(&debug, argp, sizeof(*argp)))
+		return -EFAULT;
+
+	if (debug.length > PAGE_SIZE)
+		return -EINVAL;
+
+	data = (void *) get_zeroed_page(GFP_KERNEL);
+	if (!data)
+		return -ENOMEM;
+
+	/* decrypt one page */
+	ret = __sev_dbg_decrypt_page(kvm, debug.src_addr, data, psp_ret);
+	if (ret)
+		goto err_1;
+
+	/* we have decrypted the full page, but copy only the requested length */
+	offset = debug.src_addr & (PAGE_SIZE - 1);
+	len = min_t(size_t, (PAGE_SIZE - offset), debug.length);
+	if (copy_to_user((uint8_t *)debug.dst_addr, data + offset, len))
+		ret = -EFAULT;
+err_1:
+	free_page((unsigned long)data);
+	return ret;
+}
+
 static int amd_sev_issue_cmd(struct kvm *kvm,
 			     struct kvm_sev_issue_cmd __user *user_data)
 {
@@ -5636,6 +5714,11 @@ static int amd_sev_issue_cmd(struct kvm *kvm,
 					&arg.ret_code);
 		break;
 	}
+	case KVM_SEV_DBG_DECRYPT: {
+		r = sev_dbg_decrypt(kvm, (void *)arg.opaque,
+					&arg.ret_code);
+		break;
+	}
 	default:
 		break;
 	}

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 26/28] KVM: SVM: add KVM_SEV_DEBUG_DECRYPT command
  2016-08-22 23:23 ` Brijesh Singh
  (?)
  (?)
@ 2016-08-22 23:29   ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:29 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail, linux-kernel, mcgrof, linux-crypto, pbonzini, akpm,
	davem

The command decrypts a page of guest memory for debugging purposes.

For more information see [1], section 7.1

[1] http://support.amd.com/TechDocs/55766_SEV-KM%20API_Spec.pdf

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/kvm/svm.c |   83 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 83 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 63e7d15..b383bc7 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5606,6 +5606,84 @@ err_1:
 	return ret;
 }
 
+static int __sev_dbg_decrypt_page(struct kvm *kvm, unsigned long src,
+				  void *dst, int *psp_ret)
+{
+	int ret, pinned;
+	struct page **inpages;
+	struct psp_data_dbg *decrypt;
+
+	decrypt = kzalloc(sizeof(*decrypt), GFP_KERNEL);
+	if (!decrypt)
+		return -ENOMEM;
+
+	ret = -ENOMEM;
+	inpages = kzalloc(1 * sizeof(struct page *), GFP_KERNEL);
+	if (!inpages)
+		goto err_1;
+
+	/* pin the user virtual address */
+	ret = -EFAULT;
+	down_read(&current->mm->mmap_sem);
+	pinned = get_user_pages(src, 1, 1, 0, inpages, NULL);
+	up_read(&current->mm->mmap_sem);
+	if (pinned < 0)
+		goto err_2;
+
+	decrypt->hdr.buffer_len = sizeof(*decrypt);
+	decrypt->handle = kvm_sev_handle();
+	decrypt->dst_addr = __pa(dst) | sme_me_mask;
+	decrypt->src_addr = __sev_page_pa(inpages[0]);
+	decrypt->length = PAGE_SIZE;
+
+	ret = psp_dbg_decrypt(decrypt, psp_ret);
+	if (ret)
+		printk(KERN_ERR "SEV: DEBUG_DECRYPT %d (%#010x)\n",
+				ret, *psp_ret);
+	release_pages(inpages, 1, 0);
+err_2:
+	kfree(inpages);
+err_1:
+	kfree(decrypt);
+	return ret;
+}
+
+static int sev_dbg_decrypt(struct kvm *kvm,
+			   struct kvm_sev_dbg_decrypt __user *argp,
+			   int *psp_ret)
+{
+	void *data;
+	int ret, offset, len;
+	struct kvm_sev_dbg_decrypt debug;
+
+	if (!kvm_sev_guest())
+		return -ENOTTY;
+
+	if (copy_from_user(&debug, argp, sizeof(*argp)))
+		return -EFAULT;
+
+	if (debug.length > PAGE_SIZE)
+		return -EINVAL;
+
+	data = (void *) get_zeroed_page(GFP_KERNEL);
+	if (!data)
+		return -ENOMEM;
+
+	/* decrypt one page */
+	ret = __sev_dbg_decrypt_page(kvm, debug.src_addr, data, psp_ret);
+	if (ret)
+		goto err_1;
+
+	/* we have decrypted full page but copy request length */
+	offset = debug.src_addr & (PAGE_SIZE - 1);
+	len = min_t(size_t, (PAGE_SIZE - offset), debug.length);
+	if (copy_to_user((uint8_t *)debug.dst_addr, data + offset, len))
+		ret = -EFAULT;
+err_1:
+	free_page((unsigned long)data);
+	return ret;
+}
+
 static int amd_sev_issue_cmd(struct kvm *kvm,
 			     struct kvm_sev_issue_cmd __user *user_data)
 {
@@ -5636,6 +5714,11 @@ static int amd_sev_issue_cmd(struct kvm *kvm,
 					&arg.ret_code);
 		break;
 	}
+	case KVM_SEV_DBG_DECRYPT: {
+		r = sev_dbg_decrypt(kvm, (void *)arg.opaque,
+					&arg.ret_code);
+		break;
+	}
 	default:
 		break;
 	}

^ permalink raw reply related	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 26/28] KVM: SVM: add KVM_SEV_DEBUG_DECRYPT command
@ 2016-08-22 23:29   ` Brijesh Singh
  0 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:29 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounin

The command decrypts a page of guest memory for debugging purposes.

For more information see [1], section 7.1

[1] http://support.amd.com/TechDocs/55766_SEV-KM%20API_Spec.pdf

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/kvm/svm.c |   83 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 83 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 63e7d15..b383bc7 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5606,6 +5606,84 @@ err_1:
 	return ret;
 }
 
+static int __sev_dbg_decrypt_page(struct kvm *kvm, unsigned long src,
+				  void *dst, int *psp_ret)
+{
+	int ret, pinned;
+	struct page **inpages;
+	struct psp_data_dbg *decrypt;
+
+	decrypt = kzalloc(sizeof(*decrypt), GFP_KERNEL);
+	if (!decrypt)
+		return -ENOMEM;
+
+	ret = -ENOMEM;
+	inpages = kzalloc(sizeof(struct page *), GFP_KERNEL);
+	if (!inpages)
+		goto err_1;
+
+	/* pin the user virtual address */
+	ret = -EFAULT;
+	down_read(&current->mm->mmap_sem);
+	pinned = get_user_pages(src, 1, 1, 0, inpages, NULL);
+	up_read(&current->mm->mmap_sem);
+	if (pinned < 0)
+		goto err_2;
+
+	decrypt->hdr.buffer_len = sizeof(*decrypt);
+	decrypt->handle = kvm_sev_handle();
+	decrypt->dst_addr = __pa(dst) | sme_me_mask;
+	decrypt->src_addr = __sev_page_pa(inpages[0]);
+	decrypt->length = PAGE_SIZE;
+
+	ret = psp_dbg_decrypt(decrypt, psp_ret);
+	if (ret)
+		printk(KERN_ERR "SEV: DEBUG_DECRYPT %d (%#010x)\n",
+				ret, *psp_ret);
+	release_pages(inpages, 1, 0);
+err_2:
+	kfree(inpages);
+err_1:
+	kfree(decrypt);
+	return ret;
+}
+
+static int sev_dbg_decrypt(struct kvm *kvm,
+			   struct kvm_sev_dbg_decrypt __user *argp,
+			   int *psp_ret)
+{
+	void *data;
+	int ret, offset, len;
+	struct kvm_sev_dbg_decrypt debug;
+
+	if (!kvm_sev_guest())
+		return -ENOTTY;
+
+	if (copy_from_user(&debug, argp, sizeof(*argp)))
+		return -EFAULT;
+
+	if (debug.length > PAGE_SIZE)
+		return -EINVAL;
+
+	data = (void *) get_zeroed_page(GFP_KERNEL);
+	if (!data)
+		return -ENOMEM;
+
+	/* decrypt one page */
+	ret = __sev_dbg_decrypt_page(kvm, debug.src_addr, data, psp_ret);
+	if (ret)
+		goto err_1;
+
+	/* we decrypted a full page, but copy only the requested length */
+	offset = debug.src_addr & (PAGE_SIZE - 1);
+	len = min_t(size_t, (PAGE_SIZE - offset), debug.length);
+	if (copy_to_user((void __user *)debug.dst_addr, data + offset, len))
+		ret = -EFAULT;
+err_1:
+	free_page((unsigned long)data);
+	return ret;
+}
+
 static int amd_sev_issue_cmd(struct kvm *kvm,
 			     struct kvm_sev_issue_cmd __user *user_data)
 {
@@ -5636,6 +5714,11 @@ static int amd_sev_issue_cmd(struct kvm *kvm,
 					&arg.ret_code);
 		break;
 	}
+	case KVM_SEV_DBG_DECRYPT: {
+		r = sev_dbg_decrypt(kvm, (void *)arg.opaque,
+					&arg.ret_code);
+		break;
+	}
 	default:
 		break;
 	}

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>

^ permalink raw reply related	[flat|nested] 255+ messages in thread
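
The handler above always decrypts a whole page and then copies back only the slice the caller asked for. A minimal standalone sketch of that offset/length clamping (the helper name `dbg_copy_len` is illustrative, not part of the patch):

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL

/* Mirror of the clamping in sev_dbg_decrypt(): the PSP decrypts a whole
 * page, so the handler computes the offset of src_addr within its page
 * and copies at most the remainder of that page, capped at the
 * requested length. */
static size_t dbg_copy_len(unsigned long src_addr, size_t length)
{
	size_t offset = src_addr & (PAGE_SIZE - 1);
	size_t avail = PAGE_SIZE - offset;

	return length < avail ? length : avail;
}
```

This is also why requests never span two pages: a request starting near the end of a page is silently truncated at the page boundary.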


* [RFC PATCH v1 27/28] KVM: SVM: add KVM_SEV_DEBUG_ENCRYPT command
  2016-08-22 23:23 ` Brijesh Singh
  (?)
  (?)
@ 2016-08-22 23:29   ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:29 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab

The command encrypts a region of guest memory for debugging purposes.

For more information see [1], section 7.2

[1] http://support.amd.com/TechDocs/55766_SEV-KM%20API_Spec.pdf

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/kvm/svm.c |  100 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 100 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index b383bc7..4af195d 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5684,6 +5684,101 @@ err_1:
 	return ret;
 }
 
+static int sev_dbg_encrypt(struct kvm *kvm,
+			   struct kvm_sev_dbg_encrypt __user *argp,
+			   int *psp_ret)
+{
+	void *data;
+	int len, ret, d_off;
+	struct page **inpages;
+	struct psp_data_dbg *encrypt;
+	struct kvm_sev_dbg_encrypt debug;
+	unsigned long src_addr, dst_addr;
+
+	if (!kvm_sev_guest())
+		return -ENOTTY;
+
+	if (copy_from_user(&debug, argp, sizeof(*argp)))
+		return -EFAULT;
+
+	if (debug.length > PAGE_SIZE)
+		return -EINVAL;
+
+	len = debug.length;
+	src_addr = debug.src_addr;
+	dst_addr = debug.dst_addr;
+
+	inpages = kzalloc(sizeof(struct page *), GFP_KERNEL);
+	if (!inpages)
+		return -ENOMEM;
+
+	/* pin the guest destination virtual address */
+	down_read(&current->mm->mmap_sem);
+	ret = get_user_pages(dst_addr, 1, 1, 0, inpages, NULL);
+	up_read(&current->mm->mmap_sem);
+	if (ret < 0)
+		goto err_1;
+
+	ret = -ENOMEM;
+	encrypt = kzalloc(sizeof(*encrypt), GFP_KERNEL);
+	if (!encrypt)
+		goto err_2;
+
+	data = (void *) get_zeroed_page(GFP_KERNEL);
+	if (!data)
+		goto err_3;
+
+	encrypt->hdr.buffer_len = sizeof(*encrypt);
+	encrypt->handle = kvm_sev_handle();
+
+	if ((len & 15) || (dst_addr & 15)) {
+		/* If the destination address or the length is not 16-byte
+		 * aligned then:
+		 * a) decrypt the destination page into a temporary buffer
+		 * b) copy the source data into the buffer at the correct offset
+		 * c) encrypt the whole buffer
+		 */
+		ret = __sev_dbg_decrypt_page(kvm, dst_addr, data, psp_ret);
+		if (ret)
+			goto err_4;
+
+		d_off = dst_addr & (PAGE_SIZE - 1);
+		ret = -EFAULT;
+		if (copy_from_user(data + d_off,
+				   (void __user *)debug.src_addr, len))
+			goto err_4;
+
+		encrypt->length = PAGE_SIZE;
+		encrypt->src_addr = __pa(data) | sme_me_mask;
+		encrypt->dst_addr = __sev_page_pa(inpages[0]);
+	} else {
+		ret = -EFAULT;
+		if (copy_from_user(data, (void __user *)debug.src_addr, len))
+			goto err_4;
+
+		d_off = dst_addr & (PAGE_SIZE - 1);
+		encrypt->length = len;
+		encrypt->src_addr = __pa(data) | sme_me_mask;
+		encrypt->dst_addr = __sev_page_pa(inpages[0]);
+		encrypt->dst_addr += d_off;
+	}
+
+	ret = psp_dbg_encrypt(encrypt, psp_ret);
+	if (ret)
+		printk(KERN_ERR "SEV: DEBUG_ENCRYPT: [%#lx=>%#lx+%#x] "
+		       "%d (%#010x)\n", src_addr, dst_addr, len,
+		       ret, *psp_ret);
+
+err_4:
+	free_page((unsigned long)data);
+err_3:
+	kfree(encrypt);
+err_2:
+	release_pages(inpages, 1, 0);
+err_1:
+	kfree(inpages);
+
+	return ret;
+}
+
 static int amd_sev_issue_cmd(struct kvm *kvm,
 			     struct kvm_sev_issue_cmd __user *user_data)
 {
@@ -5719,6 +5814,11 @@ static int amd_sev_issue_cmd(struct kvm *kvm,
 					&arg.ret_code);
 		break;
 	}
+	case KVM_SEV_DBG_ENCRYPT: {
+		r = sev_dbg_encrypt(kvm, (void *)arg.opaque,
+					&arg.ret_code);
+		break;
+	}
 	default:
 		break;
 	}

^ permalink raw reply related	[flat|nested] 255+ messages in thread
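
The DEBUG_ENCRYPT firmware command works on 16-byte blocks, which is why the handler above takes the decrypt/modify/re-encrypt path whenever the destination address or length is misaligned. The decision predicate can be sketched in isolation as follows (the helper name `needs_rmw_path` is illustrative, not part of the patch):

```c
#include <assert.h>
#include <stdbool.h>

/* True when sev_dbg_encrypt() must decrypt the destination page into a
 * scratch buffer, splice the new bytes in at the right offset, and
 * re-encrypt the whole page: a misaligned destination address or length
 * cannot be encrypted in place in 16-byte blocks. */
static bool needs_rmw_path(unsigned long dst_addr, unsigned int len)
{
	return (len & 15) || (dst_addr & 15);
}
```

When the predicate is false, only `len` bytes are encrypted directly at the (in-page) destination offset, avoiding the extra decrypt round-trip.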


* [RFC PATCH v1 28/28] KVM: SVM: add command to query SEV API version
  2016-08-22 23:23 ` Brijesh Singh
                   ` (56 preceding siblings ...)
  (?)
@ 2016-08-22 23:29 ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:29 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab

The command queries the SEV API version, composed from the api_major
and api_minor fields returned by the PLATFORM_STATUS command.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/kvm/svm.c |   23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 4af195d..88b8f89 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5779,6 +5779,25 @@ err_1:
 	return ret;
 }
 
+static int sev_api_version(int *psp_ret)
+{
+	int ret;
+	struct psp_data_status *status;
+
+	status = kzalloc(sizeof(*status), GFP_KERNEL);
+	if (!status)
+		return -ENOMEM;
+
+	ret = psp_platform_status(status, psp_ret);
+	if (ret)
+		goto err;
+
+	ret = (status->api_major << 8) | status->api_minor;
+err:
+	kfree(status);
+	return ret;
+}
+
 static int amd_sev_issue_cmd(struct kvm *kvm,
 			     struct kvm_sev_issue_cmd __user *user_data)
 {
@@ -5819,6 +5838,10 @@ static int amd_sev_issue_cmd(struct kvm *kvm,
 					&arg.ret_code);
 		break;
 	}
+	case KVM_SEV_API_VERSION: {
+		r = sev_api_version(&arg.ret_code);
+		break;
+	}
 	default:
 		break;
 	}

^ permalink raw reply related	[flat|nested] 255+ messages in thread
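
sev_api_version() folds the major and minor numbers from PLATFORM_STATUS into a single non-negative return value, and callers can unpack it the same way. A hedged sketch of the packing scheme (helper names are illustrative; the encoding assumes both fields fit in a byte, as `(major << 8) | minor` implies):

```c
#include <assert.h>

/* Pack/unpack helpers matching the (api_major << 8) | api_minor value
 * that sev_api_version() returns on success. */
static int sev_pack_version(unsigned int major, unsigned int minor)
{
	return (int)((major << 8) | minor);
}

static unsigned int sev_version_major(int v) { return (v >> 8) & 0xff; }
static unsigned int sev_version_minor(int v) { return v & 0xff; }
```

Because the packed value is always non-negative, it cannot collide with the negative errno values the function returns on failure.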


* [RFC PATCH v1 28/28] KVM: SVM: add command to query SEV API version
@ 2016-08-22 23:29   ` Brijesh Singh
  0 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-08-22 23:29 UTC (permalink / raw)
  To: simon.guinot, linux-efi, brijesh.singh, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail, linux-kernel, mcgrof, linux-crypto, pbonzini, akpm,
	davem

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/kvm/svm.c |   23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 4af195d..88b8f89 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5779,6 +5779,25 @@ err_1:
 	return ret;
 }
 
+static int sev_api_version(int *psp_ret)
+{
+	int ret;
+	struct psp_data_status *status;
+
+	status = kzalloc(sizeof(*status), GFP_KERNEL);
+	if (!status)
+		return -ENOMEM;
+
+	ret = psp_platform_status(status, psp_ret);
+	if (ret)
+		goto err;
+
+	ret = (status->api_major << 8) | status->api_minor;
+err:
+	kfree(status);
+	return ret;
+}
+
 static int amd_sev_issue_cmd(struct kvm *kvm,
 			     struct kvm_sev_issue_cmd __user *user_data)
 {
@@ -5819,6 +5838,10 @@ static int amd_sev_issue_cmd(struct kvm *kvm,
 					&arg.ret_code);
 		break;
 	}
+	case KVM_SEV_API_VERSION: {
+		r = sev_api_version(&arg.ret_code);
+		break;
+	}
 	default:
 		break;
 	}

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org

^ permalink raw reply related	[flat|nested] 255+ messages in thread
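[Editorial note: the KVM_SEV_API_VERSION command in the patch above returns the packed value (status->api_major << 8) | status->api_minor on success, or a negative errno on failure. As an illustrative sketch (the helper name below is an assumption, not part of the patch), a caller could split the packed value back out like this:

```c
#include <assert.h>

/* Hypothetical helper: split the value produced by sev_api_version()
 * in the patch above back into its major/minor parts.  Callers should
 * first check for a negative (errno) return before decoding. */
static void sev_decode_api_version(int packed, int *major, int *minor)
{
	*major = (packed >> 8) & 0xff;	/* status->api_major in the patch */
	*minor = packed & 0xff;		/* status->api_minor in the patch */
}
```
]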

* Re: [RFC PATCH v1 18/28] crypto: add AMD Platform Security Processor driver
  2016-08-22 23:27   ` Brijesh Singh
@ 2016-08-23  7:14     ` Herbert Xu
  -1 siblings, 0 replies; 255+ messages in thread
From: Herbert Xu @ 2016-08-23  7:14 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: simon.guinot, linux-efi, kvm, rkrcmar, matt, linus.walleij,
	linux-mm, paul.gortmaker, hpa, dan.j.williams, aarcange, sfr,
	andriy.shevchenko, bhe, xemul, joro, x86, mingo, msalter,
	ross.zwisler, bp, dyoung, thomas.lendacky, jroedel, keescook,
	toshi.kani, mathieu.desnoyers, devel, tglx, mchehab,
	iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail, linux-kernel, mcgrof, linux-crypto, pbonzin

On Mon, Aug 22, 2016 at 07:27:22PM -0400, Brijesh Singh wrote:
> This driver communicates with the Secure Encrypted Virtualization (SEV)
> firmware running within the AMD secure processor, providing a secure key
> management interface for SEV guests.
> 
> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>

This driver doesn't seem to hook into the Crypto API at all, is
there any reason why it should be in drivers/crypto?

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 18/28] crypto: add AMD Platform Security Processor driver
  2016-08-23  7:14     ` Herbert Xu
@ 2016-08-24 12:02       ` Tom Lendacky
  -1 siblings, 0 replies; 255+ messages in thread
From: Tom Lendacky @ 2016-08-24 12:02 UTC (permalink / raw)
  To: Herbert Xu, Brijesh Singh
  Cc: simon.guinot, linux-efi, kvm, rkrcmar, matt, linus.walleij,
	linux-mm, paul.gortmaker, hpa, dan.j.williams, aarcange, sfr,
	andriy.shevchenko, bhe, xemul, joro, x86, mingo, msalter,
	ross.zwisler, bp, dyoung, jroedel, keescook, toshi.kani,
	mathieu.desnoyers, devel, tglx, mchehab, iamjoonsoo.kim, labbott,
	tony.luck, alexandre.bounine, kuleshovmail, linux-kernel, mcgrof



On 08/23/2016 02:14 AM, Herbert Xu wrote:
> On Mon, Aug 22, 2016 at 07:27:22PM -0400, Brijesh Singh wrote:
>> This driver communicates with the Secure Encrypted Virtualization (SEV)
>> firmware running within the AMD secure processor, providing a secure key
>> management interface for SEV guests.
>>
>> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
>> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> 
> This driver doesn't seem to hook into the Crypto API at all, is
> there any reason why it should be in drivers/crypto?

Yes, this needs to be cleaned up.  The PSP and the CCP share the same
PCI ID, so this has to be integrated with the CCP. It could either
be moved into the drivers/crypto/ccp directory, or both the PSP and
CCP device-specific support could be moved somewhere else, leaving just
the CCP crypto API related files in drivers/crypto/ccp.

Thanks,
Tom

> 
> Thanks,
> 


^ permalink raw reply	[flat|nested] 255+ messages in thread


* Re: [RFC PATCH v1 01/28] kvm: svm: Add support for additional SVM NPF error codes
  2016-08-22 23:23   ` Brijesh Singh
@ 2016-09-13  9:56     ` Borislav Petkov
  -1 siblings, 0 replies; 255+ messages in thread
From: Borislav Petkov @ 2016-09-13  9:56 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: simon.guinot, linux-efi, kvm, rkrcmar, matt, linus.walleij,
	linux-mm, paul.gortmaker, hpa, dan.j.williams, aarcange, sfr,
	andriy.shevchenko, herbert, bhe, xemul, joro, x86, mingo,
	msalter, ross.zwisler, dyoung, thomas.lendacky, jroedel,
	keescook, toshi.kani, mathieu.desnoyers, devel, tglx, mchehab,
	iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail, linux-kernel, mcgrof, linux-crypto

On Mon, Aug 22, 2016 at 07:23:44PM -0400, Brijesh Singh wrote:
> From: Tom Lendacky <thomas.lendacky@amd.com>
> 
> AMD hardware adds two additional bits to aid in nested page fault handling.
> 
> Bit 32 - NPF occurred while translating the guest's final physical address
> Bit 33 - NPF occurred while translating the guest page tables
> 
> The guest page tables fault indicator can be used as an aid for nested
> virtualization. Using V0 for the host, V1 for the first level guest and
> V2 for the second level guest, when both V1 and V2 are using nested paging
> there are currently a number of unnecessary instruction emulations. When
> V2 is launched shadow paging is used in V1 for the nested tables of V2. As
> a result, KVM marks these pages as RO in the host nested page tables. When
> V2 exits and we resume V1, these pages are still marked RO.
> 
> Every nested walk for a guest page table is treated as a user-level write
> access and this causes a lot of NPFs because the V1 page tables are marked
> RO in the V0 nested tables. While executing V1, when these NPFs occur KVM
> sees a write to a read-only page, emulates the V1 instruction and unprotects
> the page (marking it RW). This patch looks for cases where we get a NPF due
> to a guest page table walk where the page was marked RO. It immediately
> unprotects the page and resumes the guest, leading to far fewer instruction
> emulations when nested virtualization is used.
> 
> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
> ---
>  arch/x86/include/asm/kvm_host.h |   11 ++++++++++-
>  arch/x86/kvm/mmu.c              |   20 ++++++++++++++++++--
>  arch/x86/kvm/svm.c              |    2 +-
>  3 files changed, 29 insertions(+), 4 deletions(-)

FWIW: Reviewed-by: Borislav Petkov <bp@suse.de>

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 255+ messages in thread
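[Editorial note: the commit message above defines two new bits in the nested-page-fault error code. The sketch below expresses them as mask constants together with the check the patch's optimization relies on; the constant and function names here are assumptions for illustration, not identifiers taken from the patch.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Bits described in the commit message above (names assumed here):
 * bit 32 - NPF occurred while translating the guest's final physical address
 * bit 33 - NPF occurred while translating the guest page tables */
#define NPF_GUEST_FINAL_ADDR	(1ULL << 32)
#define NPF_GUEST_PAGE_TABLES	(1ULL << 33)

/* The optimization in the patch: when the fault happened during a guest
 * page-table walk of a page KVM marked read-only, the page can be
 * unprotected and the guest resumed without emulating the instruction. */
static bool npf_fault_in_guest_page_tables(uint64_t error_code)
{
	return (error_code & NPF_GUEST_PAGE_TABLES) != 0;
}
```
]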


* Re: [RFC PATCH v1 02/28] kvm: svm: Add kvm_fast_pio_in support
@ 2016-09-21 10:58     ` Borislav Petkov
  0 siblings, 0 replies; 255+ messages in thread
From: Borislav Petkov @ 2016-09-21 10:58 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: simon.guinot, linux-efi, kvm, rkrcmar, matt, linus.walleij,
	linux-mm, paul.gortmaker, hpa, dan.j.williams, aarcange, sfr,
	andriy.shevchenko, herbert, bhe, xemul, joro, x86, mingo,
	msalter, ross.zwisler, dyoung, thomas.lendacky, jroedel,
	keescook, toshi.kani, mathieu.desnoyers, devel, tglx, mchehab,
	iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail, linux-kernel, mcgrof, linux-crypto, pbonzini, akpm,
	davem

On Mon, Aug 22, 2016 at 07:23:54PM -0400, Brijesh Singh wrote:
> From: Tom Lendacky <thomas.lendacky@amd.com>
> 
> Update the I/O interception support to add the kvm_fast_pio_in function
> to speed up the in instruction similar to the out instruction.
> 
> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
> ---
>  arch/x86/include/asm/kvm_host.h |    1 +
>  arch/x86/kvm/svm.c              |    5 +++--
>  arch/x86/kvm/x86.c              |   43 +++++++++++++++++++++++++++++++++++++++
>  3 files changed, 47 insertions(+), 2 deletions(-)

FWIW: Reviewed-by: Borislav Petkov <bp@suse.de>

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 03/28] kvm: svm: Use the hardware provided GPA instead of page walk
  2016-08-22 23:24   ` Brijesh Singh
@ 2016-09-21 17:16     ` Borislav Petkov
  -1 siblings, 0 replies; 255+ messages in thread
From: Borislav Petkov @ 2016-09-21 17:16 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: simon.guinot, linux-efi, kvm, rkrcmar, matt, linus.walleij,
	linux-mm, paul.gortmaker, hpa, dan.j.williams, aarcange, sfr,
	andriy.shevchenko, herbert, bhe, xemul, joro, x86, mingo,
	msalter, ross.zwisler, dyoung, thomas.lendacky, jroedel,
	keescook, toshi.kani, mathieu.desnoyers, devel, tglx, mchehab,
	iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail, linux-kernel, mcgrof, linux-crypto

On Mon, Aug 22, 2016 at 07:24:07PM -0400, Brijesh Singh wrote:
> From: Tom Lendacky <thomas.lendacky@amd.com>
> 
> When a guest causes a NPF which requires emulation, KVM sometimes walks
> the guest page tables to translate the GVA to a GPA. This is unnecessary
> most of the time on AMD hardware since the hardware provides the GPA in
> EXITINFO2.
> 
> The only exception cases involve string operations involving rep or
> operations that use two memory locations. With rep, the GPA will only be
> the value of the initial NPF and with dual memory locations we won't know
> which memory address was translated into EXITINFO2.
> 
> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
> ---
>  arch/x86/include/asm/kvm_emulate.h |    3 +++
>  arch/x86/include/asm/kvm_host.h    |    3 +++
>  arch/x86/kvm/svm.c                 |    2 ++
>  arch/x86/kvm/x86.c                 |   17 ++++++++++++++++-
>  4 files changed, 24 insertions(+), 1 deletion(-)

FWIW, LGTM. (Gotta love replying in acronyms :-))

Reviewed-by: Borislav Petkov <bp@suse.de>

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 


^ permalink raw reply	[flat|nested] 255+ messages in thread
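[Editorial note: the commit message above says the hardware-reported GPA in EXITINFO2 is usable except for rep string operations and instructions that access two memory locations. The decision can be sketched as below; the function and parameter names are assumptions for illustration, not identifiers from the patch.

```c
#include <assert.h>
#include <stdbool.h>

/* Per the commit message: with rep, EXITINFO2 only holds the GPA of the
 * initial fault, and with two memory operands we cannot tell which one
 * was translated, so the reported GPA must not be trusted in those cases
 * and KVM falls back to walking the guest page tables. */
static bool can_use_hw_gpa(bool rep_string_op, bool two_mem_operands)
{
	return !rep_string_op && !two_mem_operands;
}
```
]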

* Re: [RFC PATCH v1 03/28] kvm: svm: Use the hardware provided GPA instead of page walk
@ 2016-09-21 17:16     ` Borislav Petkov
  0 siblings, 0 replies; 255+ messages in thread
From: Borislav Petkov @ 2016-09-21 17:16 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: simon.guinot, linux-efi, kvm, rkrcmar, matt, linus.walleij,
	linux-mm, paul.gortmaker, hpa, dan.j.williams, aarcange, sfr,
	andriy.shevchenko, herbert, bhe, xemul, joro, x86, mingo,
	msalter, ross.zwisler, dyoung, thomas.lendacky, jroedel,
	keescook, toshi.kani, mathieu.desnoyers, devel, tglx, mchehab,
	iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail, linux-kernel, mcgrof, linux-crypto, pbonzini, akpm,
	davem

On Mon, Aug 22, 2016 at 07:24:07PM -0400, Brijesh Singh wrote:
> From: Tom Lendacky <thomas.lendacky@amd.com>
> 
> When a guest causes a NPF which requires emulation, KVM sometimes walks
> the guest page tables to translate the GVA to a GPA. This is unnecessary
> most of the time on AMD hardware since the hardware provides the GPA in
> EXITINFO2.
> 
> The only exception cases involve string operations involving rep or
> operations that use two memory locations. With rep, the GPA will only be
> the value of the initial NPF and with dual memory locations we won't know
> which memory address was translated into EXITINFO2.
> 
> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
> ---
>  arch/x86/include/asm/kvm_emulate.h |    3 +++
>  arch/x86/include/asm/kvm_host.h    |    3 +++
>  arch/x86/kvm/svm.c                 |    2 ++
>  arch/x86/kvm/x86.c                 |   17 ++++++++++++++++-
>  4 files changed, 24 insertions(+), 1 deletion(-)

FWIW, LGTM. (Gotta love replying in acronyms :-))

Reviewed-by: Borislav Petkov <bp@suse.de>

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix ImendA?rffer, Jane Smithard, Graham Norton, HRB 21284 (AG NA 1/4 rnberg)
-- 

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 05/28] KVM: SVM: prepare for new bit definition in nested_ctl
  2016-08-22 23:24   ` Brijesh Singh
@ 2016-09-22 14:17     ` Borislav Petkov
  -1 siblings, 0 replies; 255+ messages in thread
From: Borislav Petkov @ 2016-09-22 14:17 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: simon.guinot, linux-efi, kvm, rkrcmar, matt, linus.walleij,
	linux-mm, paul.gortmaker, hpa, dan.j.williams, aarcange, sfr,
	andriy.shevchenko, herbert, bhe, xemul, joro, x86, mingo,
	msalter, ross.zwisler, dyoung, thomas.lendacky, jroedel,
	keescook, toshi.kani, mathieu.desnoyers, devel, tglx, mchehab,
	iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine

On Mon, Aug 22, 2016 at 07:24:32PM -0400, Brijesh Singh wrote:
> From: Tom Lendacky <thomas.lendacky@amd.com>
> 
> Currently the nested_ctl variable in the vmcb_control_area structure is
> used to indicate nested paging support. The nested paging support field
> is actually defined as bit 0 of this field. In order to support a new
> feature flag, the use of nested_ctl for nested paging support must be
> converted to operate on a single bit.
> 
> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
> ---
>  arch/x86/include/asm/svm.h |    2 ++
>  arch/x86/kvm/svm.c         |    7 ++++---
>  2 files changed, 6 insertions(+), 3 deletions(-)
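
Concretely, the conversion means treating nested_ctl as a bit field rather than a boolean. A minimal sketch follows; the SEV bit name is an assumption about the feature flag this series makes room for, and the struct here is a cut-down stand-in for the real vmcb_control_area:

```c
#include <assert.h>
#include <stdint.h>

/* nested_ctl becomes a bit field: bit 0 is the nested paging enable. */
#define SVM_NESTED_CTL_NP_ENABLE	(1ULL << 0)
/* Hypothetical new feature bit that the conversion makes room for. */
#define SVM_NESTED_CTL_SEV_ENABLE	(1ULL << 1)

struct vmcb_control_area_sketch {
	uint64_t nested_ctl;
};

static void enable_npt(struct vmcb_control_area_sketch *c)
{
	/* OR in the bit instead of assigning 1 to the whole field. */
	c->nested_ctl |= SVM_NESTED_CTL_NP_ENABLE;
}
```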

Reviewed-by: Borislav Petkov <bp@suse.de>

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 09/28] x86/efi: Access EFI data as encrypted when SEV is active
  2016-08-22 23:25   ` Brijesh Singh
@ 2016-09-22 14:35     ` Borislav Petkov
  -1 siblings, 0 replies; 255+ messages in thread
From: Borislav Petkov @ 2016-09-22 14:35 UTC (permalink / raw)
  To: Brijesh Singh, thomas.lendacky
  Cc: simon.guinot, linux-efi, kvm, rkrcmar, matt, linus.walleij,
	linux-mm, paul.gortmaker, hpa, dan.j.williams, aarcange, sfr,
	andriy.shevchenko, herbert, bhe, xemul, joro, x86, mingo,
	msalter, ross.zwisler, dyoung, thomas.lendacky, jroedel,
	keescook, toshi.kani, mathieu.desnoyers, devel, tglx, mchehab,
	iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail, linux-kernel, mcgrof, linux-crypto

On Mon, Aug 22, 2016 at 07:25:25PM -0400, Brijesh Singh wrote:
> From: Tom Lendacky <thomas.lendacky@amd.com>
> 
> EFI data is encrypted when the kernel is run under SEV. Update the
> page table references to be sure the EFI memory areas are accessed
> encrypted.
> 
> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
> ---
>  arch/x86/platform/efi/efi_64.c |   14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c
> index 0871ea4..98363f3 100644
> --- a/arch/x86/platform/efi/efi_64.c
> +++ b/arch/x86/platform/efi/efi_64.c
> @@ -213,7 +213,7 @@ void efi_sync_low_kernel_mappings(void)
>  
>  int __init efi_setup_page_tables(unsigned long pa_memmap, unsigned num_pages)
>  {
> -	unsigned long pfn, text;
> +	unsigned long pfn, text, flags;
>  	efi_memory_desc_t *md;
>  	struct page *page;
>  	unsigned npages;
> @@ -230,6 +230,10 @@ int __init efi_setup_page_tables(unsigned long pa_memmap, unsigned num_pages)
>  	efi_scratch.efi_pgt = (pgd_t *)__sme_pa(efi_pgd);
>  	pgd = efi_pgd;
>  
> +	flags = _PAGE_NX | _PAGE_RW;
> +	if (sev_active)
> +		flags |= _PAGE_ENC;
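
As a self-contained illustration of the quoted hunk, the flag computation amounts to conditionally ORing in the encryption bit. Bit positions below are illustrative only; the real C-bit position is CPU-specific and discovered at boot:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define _PAGE_RW	(1ULL << 1)
#define _PAGE_NX	(1ULL << 63)
#define _PAGE_ENC	(1ULL << 47)	/* illustrative C-bit position */

bool sev_active;	/* would be set during early boot under SEV */

static uint64_t efi_map_flags(void)
{
	uint64_t flags = _PAGE_NX | _PAGE_RW;

	/* Under SEV the guest maps EFI areas encrypted, so set the C bit. */
	if (sev_active)
		flags |= _PAGE_ENC;
	return flags;
}
```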

So this is confusing me. There's this patch which says EFI data is
accessed in the clear:

https://lkml.kernel.org/r/20160822223738.29880.6909.stgit@tlendack-t1.amdoffice.net

but now here it is encrypted when SEV is enabled.

Do you mean, it is encrypted here because we're in the guest kernel?

Thanks.

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 09/28] x86/efi: Access EFI data as encrypted when SEV is active
  2016-09-22 14:35     ` Borislav Petkov
@ 2016-09-22 14:45       ` Paolo Bonzini
  -1 siblings, 0 replies; 255+ messages in thread
From: Paolo Bonzini @ 2016-09-22 14:45 UTC (permalink / raw)
  To: Borislav Petkov, Brijesh Singh, thomas.lendacky
  Cc: simon.guinot, linux-efi, kvm, rkrcmar, matt, linus.walleij,
	linux-mm, paul.gortmaker, hpa, dan.j.williams, aarcange, sfr,
	andriy.shevchenko, herbert, bhe, xemul, joro, x86, mingo,
	msalter, ross.zwisler, dyoung, jroedel, keescook, toshi.kani,
	mathieu.desnoyers, devel, tglx, mchehab, iamjoonsoo.kim, labbott,
	tony.luck, alexandre.bounine, kuleshovmail, linux-kernel



On 22/09/2016 16:35, Borislav Petkov wrote:
>> > @@ -230,6 +230,10 @@ int __init efi_setup_page_tables(unsigned long pa_memmap, unsigned num_pages)
>> >  	efi_scratch.efi_pgt = (pgd_t *)__sme_pa(efi_pgd);
>> >  	pgd = efi_pgd;
>> >  
>> > +	flags = _PAGE_NX | _PAGE_RW;
>> > +	if (sev_active)
>> > +		flags |= _PAGE_ENC;
> So this is confusing me. There's this patch which says EFI data is
> accessed in the clear:
> 
> https://lkml.kernel.org/r/20160822223738.29880.6909.stgit@tlendack-t1.amdoffice.net
> 
> but now here it is encrypted when SEV is enabled.
> 
> Do you mean, it is encrypted here because we're in the guest kernel?

I suspect this patch is untested, and also wrong. :)

The main difference between the SME and SEV encryption, from the point
of view of the kernel, is that real-mode always writes unencrypted in
SME and always writes encrypted in SEV.  But UEFI can run in 64-bit mode
and learn about the C bit, so EFI boot data should be unprotected in SEV
guests.

Because the firmware volume is written to high memory in encrypted form,
and because the PEI phase runs in 32-bit mode, the firmware code will be
encrypted; on the other hand, data that is placed in low memory for the
kernel can be unencrypted, thus limiting differences between SME and SEV.

	Important: I don't know what you guys are doing for SEV and
	Windows guests, but if you are doing something I would really
	appreciate doing things in the open.  If Linux and Windows end
	up doing different things with EFI boot data, ACPI tables, etc.
	it will be a huge pain.  On the other hand, if we can enjoy
	being first, that's great.

In fact, I have suggested in the QEMU list that SEV guests should always
use UEFI; because BIOS runs in real-mode or 32-bit non-paging protected
mode, BIOS must always write encrypted data, which becomes painful in
the kernel.

And regarding the above "important" point, all I know is that Microsoft
for sure will be happy to restrict SEV to UEFI guests. :)

There are still some differences, mostly around the real mode trampoline
executed by the kernel, but they should be much smaller.

Paolo

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 09/28] x86/efi: Access EFI data as encrypted when SEV is active
  2016-09-22 14:45       ` Paolo Bonzini
@ 2016-09-22 14:59         ` Borislav Petkov
  -1 siblings, 0 replies; 255+ messages in thread
From: Borislav Petkov @ 2016-09-22 14:59 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Brijesh Singh, thomas.lendacky, simon.guinot, linux-efi, kvm,
	rkrcmar, matt, linus.walleij, linux-mm, paul.gortmaker, hpa,
	dan.j.williams, aarcange, sfr, andriy.shevchenko, herbert, bhe,
	xemul, joro, x86, mingo, msalter, ross.zwisler, dyoung, jroedel,
	keescook, toshi.kani, mathieu.desnoyers, devel, tglx, mchehab,
	iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail, linux-kernel

On Thu, Sep 22, 2016 at 04:45:51PM +0200, Paolo Bonzini wrote:
> The main difference between the SME and SEV encryption, from the point
> of view of the kernel, is that real-mode always writes unencrypted in
> SME and always writes encrypted in SEV.  But UEFI can run in 64-bit mode
> and learn about the C bit, so EFI boot data should be unprotected in SEV
> guests.

Actually, it is different: you can start fully encrypted in SME, see:

https://lkml.kernel.org/r/20160822223539.29880.96739.stgit@tlendack-t1.amdoffice.net

The last paragraph alludes to a certain transparent mode where you're
already encrypted and only certain pieces like EFI are not encrypted. I
think the aim is to have the transparent mode be the default one, which
makes the most sense anyway.

The EFI regions are unencrypted for obvious reasons and you need to
access them as such.

> Because the firmware volume is written to high memory in encrypted
> form, and because the PEI phase runs in 32-bit mode, the firmware
> code will be encrypted; on the other hand, data that is placed in low
> memory for the kernel can be unencrypted, thus limiting differences
> between SME and SEV.

When you run fully encrypted, you still need to access EFI tables in the
clear. That's why I'm confused about this patch here.

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 09/28] x86/efi: Access EFI data as encrypted when SEV is active
@ 2016-09-22 14:59         ` Borislav Petkov
  0 siblings, 0 replies; 255+ messages in thread
From: Borislav Petkov @ 2016-09-22 14:59 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Brijesh Singh, thomas.lendacky, simon.guinot, linux-efi, kvm,
	rkrcmar, matt, linus.walleij, linux-mm, paul.gortmaker, hpa,
	dan.j.williams, aarcange, sfr, andriy.shevchenko, herbert, bhe,
	xemul, joro, x86, mingo, msalter, ross.zwisler, dyoung, jroedel,
	keescook, toshi.kani, mathieu.desnoyers, devel, tglx, mchehab,
	iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail, linux-kernel, mcgrof, linux-crypto, akpm, davem

On Thu, Sep 22, 2016 at 04:45:51PM +0200, Paolo Bonzini wrote:
> The main difference between the SME and SEV encryption, from the point
> of view of the kernel, is that real-mode always writes unencrypted in
> SME and always writes encrypted in SEV.  But UEFI can run in 64-bit mode
> and learn about the C bit, so EFI boot data should be unprotected in SEV
> guests.

Actually, it is different: you can start fully encrypted in SME, see:

https://lkml.kernel.org/r/20160822223539.29880.96739.stgit@tlendack-t1.amdoffice.net

The last paragraph alludes to a certain transparent mode where you're
already encrypted and only certain pieces like EFI are not encrypted. I
think the aim is to have the transparent mode be the default one, which
makes most sense anyway.

The EFI regions are unencrypted for obvious reasons and you need to
access them as such.

> Because the firmware volume is written to high memory in encrypted
> form, and because the PEI phase runs in 32-bit mode, the firmware
> code will be encrypted; on the other hand, data that is placed in low
> memory for the kernel can be unencrypted, thus limiting differences
> between SME and SEV.

When you run fully encrypted, you still need to access EFI tables in the
clear. That's why I'm confused about this patch here.

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 04/28] x86: Secure Encrypted Virtualization (SEV) support
  2016-08-22 23:24   ` Brijesh Singh
  (?)
@ 2016-09-22 15:00     ` Borislav Petkov
  -1 siblings, 0 replies; 255+ messages in thread
From: Borislav Petkov @ 2016-09-22 15:00 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: linux-efi, kvm, rkrcmar, matt, linus.walleij, paul.gortmaker,
	hpa, tglx, aarcange, sfr, mchehab, simon.guinot, bhe, xemul,
	joro, x86, mingo, msalter, ross.zwisler, labbott, dyoung,
	thomas.lendacky, jroedel, keescook, toshi.kani,
	mathieu.desnoyers, pbonzini, dan.j.williams, andriy.shevchenko,
	akpm, herbert, tony.luck, linux-mm, kuleshovmail, linux-kernel,
	mcgrof, linux-crypto, devel

On Mon, Aug 22, 2016 at 07:24:19PM -0400, Brijesh Singh wrote:
> From: Tom Lendacky <thomas.lendacky@amd.com>

Subject: [RFC PATCH v1 04/28] x86: Secure Encrypted Virtualization (SEV) support

Please start patch commit heading with a verb, i.e.:

"x86: Add AMD Secure Encrypted Virtualization (SEV) support"

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 
_______________________________________________
devel mailing list
devel@linuxdriverproject.org
http://driverdev.linuxdriverproject.org/mailman/listinfo/driverdev-devel

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 09/28] x86/efi: Access EFI data as encrypted when SEV is active
  2016-09-22 14:59         ` Borislav Petkov
  (?)
@ 2016-09-22 15:05           ` Paolo Bonzini
  -1 siblings, 0 replies; 255+ messages in thread
From: Paolo Bonzini @ 2016-09-22 15:05 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: linux-efi, Brijesh Singh, kvm, rkrcmar, matt, linus.walleij,
	paul.gortmaker, hpa, tglx, aarcange, sfr, mchehab, simon.guinot,
	bhe, xemul, joro, x86, mingo, msalter, ross.zwisler, labbott,
	dyoung, thomas.lendacky, jroedel, keescook, toshi.kani,
	mathieu.desnoyers, dan.j.williams, andriy.shevchenko, akpm,
	herbert, tony.luck, linux-mm, kuleshovmail, linux-kernel, mcgrof,
	linux-crypto, devel



On 22/09/2016 16:59, Borislav Petkov wrote:
> On Thu, Sep 22, 2016 at 04:45:51PM +0200, Paolo Bonzini wrote:
>> The main difference between the SME and SEV encryption, from the point
>> of view of the kernel, is that real-mode always writes unencrypted in
>> SME and always writes encrypted in SEV.  But UEFI can run in 64-bit mode
>> and learn about the C bit, so EFI boot data should be unprotected in SEV
>> guests.
> 
> Actually, it is different: you can start fully encrypted in SME, see:
> 
> https://lkml.kernel.org/r/20160822223539.29880.96739.stgit@tlendack-t1.amdoffice.net
> 
> The last paragraph alludes to a certain transparent mode where you're
> already encrypted and only certain pieces like EFI is not encrypted.

Which paragraph?

>> Because the firmware volume is written to high memory in encrypted
>> form, and because the PEI phase runs in 32-bit mode, the firmware
>> code will be encrypted; on the other hand, data that is placed in low
>> memory for the kernel can be unencrypted, thus limiting differences
>> between SME and SEV.
> 
> When you run fully encrypted, you still need to access EFI tables in the
> clear. That's why I'm confused about this patch here.

I might be wrong, but I don't think this patch was tested with OVMF or Duet.

Paolo

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 09/28] x86/efi: Access EFI data as encrypted when SEV is active
  2016-09-22 15:05           ` Paolo Bonzini
  (?)
  (?)
@ 2016-09-22 17:07             ` Borislav Petkov
  -1 siblings, 0 replies; 255+ messages in thread
From: Borislav Petkov @ 2016-09-22 17:07 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Brijesh Singh, thomas.lendacky, simon.guinot, linux-efi, kvm,
	rkrcmar, matt, linus.walleij, linux-mm, paul.gortmaker, hpa,
	dan.j.williams, aarcange, sfr, andriy.shevchenko, herbert, bhe,
	xemul, joro, x86, mingo, msalter, ross.zwisler, dyoung, jroedel,
	keescook, toshi.kani, mathieu.desnoyers, devel, tglx, mchehab,
	iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail, linux-kernel

On Thu, Sep 22, 2016 at 05:05:54PM +0200, Paolo Bonzini wrote:
> Which paragraph?

"Linux relies on BIOS to set this bit if BIOS has determined that the
reduction in the physical address space as a result of enabling memory
encryption..."

Basically, you can enable SME in the BIOS and you're all set.

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 09/28] x86/efi: Access EFI data as encrypted when SEV is active
  2016-09-22 17:07             ` Borislav Petkov
  (?)
  (?)
@ 2016-09-22 17:08               ` Paolo Bonzini
  -1 siblings, 0 replies; 255+ messages in thread
From: Paolo Bonzini @ 2016-09-22 17:08 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Brijesh Singh, thomas.lendacky, simon.guinot, linux-efi, kvm,
	rkrcmar, matt, linus.walleij, linux-mm, paul.gortmaker, hpa,
	dan.j.williams, aarcange, sfr, andriy.shevchenko, herbert, bhe,
	xemul, joro, x86, mingo, msalter, ross.zwisler, dyoung, jroedel,
	keescook, toshi.kani, mathieu.desnoyers, devel, tglx, mchehab,
	iamjoonsoo.kim, labbott, tony.luck



On 22/09/2016 19:07, Borislav Petkov wrote:
>> Which paragraph?
> "Linux relies on BIOS to set this bit if BIOS has determined that the
> reduction in the physical address space as a result of enabling memory
> encryption..."
> 
> Basically, you can enable SME in the BIOS and you're all set.

That's not how I read it.  I just figured that the BIOS has some magic
things high in the physical address space and if you reduce the physical
address space the BIOS (which is called from e.g. EFI runtime services)
would have problems with that.

Paolo


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 09/28] x86/efi: Access EFI data as encrypted when SEV is active
  2016-09-22 17:08               ` Paolo Bonzini
  (?)
@ 2016-09-22 17:27                 ` Borislav Petkov
  -1 siblings, 0 replies; 255+ messages in thread
From: Borislav Petkov @ 2016-09-22 17:27 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: linux-efi, Brijesh Singh, kvm, rkrcmar, matt, linus.walleij,
	paul.gortmaker, hpa, tglx, aarcange, sfr, mchehab, simon.guinot,
	bhe, xemul, joro, x86, mingo, msalter, ross.zwisler, labbott,
	dyoung, thomas.lendacky, jroedel, keescook, toshi.kani,
	mathieu.desnoyers, dan.j.williams, andriy.shevchenko, akpm,
	herbert, tony.luck, linux-mm, kuleshovmail, linux-kernel, mcgrof,
	linux-crypto, devel

On Thu, Sep 22, 2016 at 07:08:50PM +0200, Paolo Bonzini wrote:
> That's not how I read it.  I just figured that the BIOS has some magic
> things high in the physical address space and if you reduce the physical
> address space the BIOS (which is called from e.g. EFI runtime services)
> would have problems with that.

Yeah, I had to ask about that myself and Tom will have it explained
better in the next version.

The reduction in physical address space happens when SME is enabled because
you need a couple of bits in the PTE with which to say which key has
encrypted that page. So it is an indelible part of the SME machinery.

Btw, section "7.10 Secure Memory Encryption" has an initial writeup:

http://support.amd.com/TechDocs/24593.pdf

Now that I skim over it, it doesn't mention the BIOS thing but that'll
be updated.

HTH.

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 09/28] x86/efi: Access EFI data as encrypted when SEV is active
  2016-09-22 14:35     ` Borislav Petkov
  (?)
  (?)
@ 2016-09-22 17:46       ` Tom Lendacky
  -1 siblings, 0 replies; 255+ messages in thread
From: Tom Lendacky @ 2016-09-22 17:46 UTC (permalink / raw)
  To: Borislav Petkov, Brijesh Singh
  Cc: simon.guinot, linux-efi, kvm, rkrcmar, matt, linus.walleij,
	linux-mm, paul.gortmaker, hpa, dan.j.williams, aarcange, sfr,
	andriy.shevchenko, herbert, bhe, xemul, joro, x86, mingo,
	msalter, ross.zwisler, dyoung, jroedel, keescook, toshi.kani,
	mathieu.desnoyers, devel, tglx, mchehab, iamjoonsoo.kim, labbott,
	tony.luck, alexandre.bounine, kuleshovmail, linux-kernel

On 09/22/2016 09:35 AM, Borislav Petkov wrote:
> On Mon, Aug 22, 2016 at 07:25:25PM -0400, Brijesh Singh wrote:
>> From: Tom Lendacky <thomas.lendacky@amd.com>
>>
>> EFI data is encrypted when the kernel is run under SEV. Update the
>> page table references to be sure the EFI memory areas are accessed
>> encrypted.
>>
>> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
>> ---
>>  arch/x86/platform/efi/efi_64.c |   14 ++++++++++++--
>>  1 file changed, 12 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c
>> index 0871ea4..98363f3 100644
>> --- a/arch/x86/platform/efi/efi_64.c
>> +++ b/arch/x86/platform/efi/efi_64.c
>> @@ -213,7 +213,7 @@ void efi_sync_low_kernel_mappings(void)
>>  
>>  int __init efi_setup_page_tables(unsigned long pa_memmap, unsigned num_pages)
>>  {
>> -	unsigned long pfn, text;
>> +	unsigned long pfn, text, flags;
>>  	efi_memory_desc_t *md;
>>  	struct page *page;
>>  	unsigned npages;
>> @@ -230,6 +230,10 @@ int __init efi_setup_page_tables(unsigned long pa_memmap, unsigned num_pages)
>>  	efi_scratch.efi_pgt = (pgd_t *)__sme_pa(efi_pgd);
>>  	pgd = efi_pgd;
>>  
>> +	flags = _PAGE_NX | _PAGE_RW;
>> +	if (sev_active)
>> +		flags |= _PAGE_ENC;
> 
> So this is confusing me. There's this patch which says EFI data is
> accessed in the clear:
> 
> https://lkml.kernel.org/r/20160822223738.29880.6909.stgit@tlendack-t1.amdoffice.net
> 
> but now here it is encrypted when SEV is enabled.
> 
> Do you mean, it is encrypted here because we're in the guest kernel?

Yes, the idea is that the SEV guest will be running encrypted from the
start, including the BIOS/UEFI, and so all of the EFI related data will
be encrypted.

Thanks,
Tom

> 
> Thanks.
> 


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 09/28] x86/efi: Access EFI data as encrypted when SEV is active
  2016-09-22 17:46       ` Tom Lendacky
  (?)
@ 2016-09-22 18:23         ` Paolo Bonzini
  -1 siblings, 0 replies; 255+ messages in thread
From: Paolo Bonzini @ 2016-09-22 18:23 UTC (permalink / raw)
  To: Tom Lendacky, Borislav Petkov, Brijesh Singh
  Cc: linux-efi, kvm, rkrcmar, matt, linus.walleij, paul.gortmaker,
	hpa, tglx, aarcange, sfr, mchehab, simon.guinot, bhe, xemul,
	joro, x86, mingo, msalter, ross.zwisler, labbott, dyoung,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, dan.j.williams,
	andriy.shevchenko, akpm, herbert, tony.luck, linux-mm,
	kuleshovmail, linux-kernel, mcgrof, linux-crypto, devel,
	iamjoonsoo.kim, alexandre.bounine



On 22/09/2016 19:46, Tom Lendacky wrote:
>> > Do you mean, it is encrypted here because we're in the guest kernel?
> Yes, the idea is that the SEV guest will be running encrypted from the
> start, including the BIOS/UEFI, and so all of the EFI related data will
> be encrypted.

Unless this is part of some spec, it's easier if things are the same in
SME and SEV.

Paolo

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 09/28] x86/efi: Access EFI data as encrypted when SEV is active
  2016-09-22 18:23         ` Paolo Bonzini
  (?)
@ 2016-09-22 18:37           ` Borislav Petkov
  -1 siblings, 0 replies; 255+ messages in thread
From: Borislav Petkov @ 2016-09-22 18:37 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Tom Lendacky, Brijesh Singh, simon.guinot, linux-efi, kvm,
	rkrcmar, matt, linus.walleij, linux-mm, paul.gortmaker, hpa,
	dan.j.williams, aarcange, sfr, andriy.shevchenko, herbert, bhe,
	xemul, joro, x86, mingo, msalter, ross.zwisler, dyoung, jroedel,
	keescook, toshi.kani, mathieu.desnoyers, devel, tglx, mchehab,
	iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail, linux-kernel

On Thu, Sep 22, 2016 at 08:23:36PM +0200, Paolo Bonzini wrote:
> Unless this is part of some spec, it's easier if things are the same in
> SME and SEV.

Yeah, I was pondering over how sprinkling sev_active checks might not be
so clean.

I'm wondering if we could make the EFI regions presented to the guest
unencrypted too, as part of some SEV-specific init routine so that the
guest kernel doesn't need to do anything different.

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 09/28] x86/efi: Access EFI data as encrypted when SEV is active
  2016-09-22 18:37           ` Borislav Petkov
@ 2016-09-22 18:44               ` Paolo Bonzini
  -1 siblings, 0 replies; 255+ messages in thread
From: Paolo Bonzini @ 2016-09-22 18:44 UTC (permalink / raw)
  To: Borislav Petkov
Cc: Tom Lendacky, Brijesh Singh, simon.guinot, linux-efi, kvm,
	rkrcmar, matt, linus.walleij, linux-mm, paul.gortmaker, hpa,
	dan.j.williams, aarcange, sfr, andriy.shevchenko, herbert, bhe,
	xemul, joro, x86, mingo, msalter, ross.zwisler, dyoung, jroedel,
	keescook, toshi.kani, mathieu.desnoyers, devel, tglx, mchehab,
	iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail, linux-kernel



On 22/09/2016 20:37, Borislav Petkov wrote:
>> > Unless this is part of some spec, it's easier if things are the same in
>> > SME and SEV.
> Yeah, I was pondering over how sprinkling sev_active checks might not be
> so clean.
> 
> I'm wondering if we could make the EFI regions presented to the guest
> unencrypted too, as part of some SEV-specific init routine so that the
> guest kernel doesn't need to do anything different.

That too, but why not fix it in the firmware?...  (Again, if there's any
MSFT guy looking at this offlist, let's involve him in the discussion).

Paolo

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 09/28] x86/efi: Access EFI data as encrypted when SEV is active
  2016-09-22 14:45       ` Paolo Bonzini
  (?)
  (?)
@ 2016-09-22 18:47         ` Tom Lendacky
  -1 siblings, 0 replies; 255+ messages in thread
From: Tom Lendacky @ 2016-09-22 18:47 UTC (permalink / raw)
  To: Paolo Bonzini, Borislav Petkov, Brijesh Singh
  Cc: simon.guinot, linux-efi, kvm, rkrcmar, matt, linus.walleij,
	linux-mm, paul.gortmaker, hpa, dan.j.williams, aarcange, sfr,
	andriy.shevchenko, herbert, bhe, xemul, joro, x86, mingo,
	msalter, ross.zwisler, dyoung, jroedel, keescook, toshi.kani,
	mathieu.desnoyers, devel, tglx, mchehab, iamjoonsoo.kim, labbott,
	tony.luck, alexandre.bounine, kuleshovmail, linux-kernel, mcgrof,
	linux-crypto, akpm, davem

On 09/22/2016 09:45 AM, Paolo Bonzini wrote:
> 
> 
> On 22/09/2016 16:35, Borislav Petkov wrote:
>>>> @@ -230,6 +230,10 @@ int __init efi_setup_page_tables(unsigned long pa_memmap, unsigned num_pages)
>>>>  	efi_scratch.efi_pgt = (pgd_t *)__sme_pa(efi_pgd);
>>>>  	pgd = efi_pgd;
>>>>  
>>>> +	flags = _PAGE_NX | _PAGE_RW;
>>>> +	if (sev_active)
>>>> +		flags |= _PAGE_ENC;
>> So this is confusing me. There's this patch which says EFI data is
>> accessed in the clear:
>>
>> https://lkml.kernel.org/r/20160822223738.29880.6909.stgit@tlendack-t1.amdoffice.net
>>
>> but now here it is encrypted when SEV is enabled.
>>
>> Do you mean, it is encrypted here because we're in the guest kernel?
> 
> I suspect this patch is untested, and also wrong. :)

Yes, it is untested but not sure that it is wrong...  It all depends on
how we add SEV support to the guest UEFI BIOS.  My take would be to have
the EFI data and ACPI tables encrypted.

> 
> The main difference between the SME and SEV encryption, from the point
> of view of the kernel, is that real-mode always writes unencrypted in
> SME and always writes encrypted in SEV.  But UEFI can run in 64-bit mode
> and learn about the C bit, so EFI boot data should be unprotected in SEV
> guests.
> 
> Because the firmware volume is written to high memory in encrypted form,
> and because the PEI phase runs in 32-bit mode, the firmware code will be
> encrypted; on the other hand, data that is placed in low memory for the
> kernel can be unencrypted, thus limiting differences between SME and SEV.

I like the idea of limiting the differences but it would leave the EFI
data and ACPI tables exposed and able to be manipulated.

> 
> 	Important: I don't know what you guys are doing for SEV and
> 	Windows guests, but if you are doing something I would really
> 	appreciate doing things in the open.  If Linux and Windows end
> 	up doing different things with EFI boot data, ACPI tables, etc.
> 	it will be a huge pain.  On the other hand, if we can enjoy
> 	being first, that's great.

We haven't discussed Windows guests under SEV yet, but as you say, we
need to do things the same.

Thanks,
Tom

> 
> In fact, I have suggested in the QEMU list that SEV guests should always
> use UEFI; because BIOS runs in real-mode or 32-bit non-paging protected
> mode, BIOS must always write encrypted data, which becomes painful in
> the kernel.
> 
> And regarding the above "important" point, all I know is that Microsoft
> for sure will be happy to restrict SEV to UEFI guests. :)
> 
> There are still some differences, mostly around the real mode trampoline
> executed by the kernel, but they should be much smaller.
> 
> Paolo
> 

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 09/28] x86/efi: Access EFI data as encrypted when SEV is active
  2016-09-22 18:47         ` Tom Lendacky
  (?)
@ 2016-09-22 18:50           ` Paolo Bonzini
  -1 siblings, 0 replies; 255+ messages in thread
From: Paolo Bonzini @ 2016-09-22 18:50 UTC (permalink / raw)
  To: Tom Lendacky, Borislav Petkov, Brijesh Singh
  Cc: simon.guinot, linux-efi, kvm, rkrcmar, matt, linus.walleij,
	linux-mm, paul.gortmaker, hpa, dan.j.williams, aarcange, sfr,
	andriy.shevchenko, herbert, bhe, xemul, joro, x86, mingo,
	msalter, ross.zwisler, dyoung, jroedel, keescook, toshi.kani,
	mathieu.desnoyers, devel, tglx, mchehab, iamjoonsoo.kim, labbott,
	tony.luck, alexandre.bounine, kuleshovmail, linux-kernel



On 22/09/2016 20:47, Tom Lendacky wrote:
> > Because the firmware volume is written to high memory in encrypted form,
> > and because the PEI phase runs in 32-bit mode, the firmware code will be
> > encrypted; on the other hand, data that is placed in low memory for the
> > kernel can be unencrypted, thus limiting differences between SME and SEV.
> 
> I like the idea of limiting the differences but it would leave the EFI
> data and ACPI tables exposed and able to be manipulated.

Hmm, that makes sense.  So I guess this has to stay, and Borislav's
proposal doesn't fly either.

Paolo

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 09/28] x86/efi: Access EFI data as encrypted when SEV is active
  2016-09-22 14:59         ` Borislav Petkov
  (?)
  (?)
@ 2016-09-22 18:59           ` Tom Lendacky
  -1 siblings, 0 replies; 255+ messages in thread
From: Tom Lendacky @ 2016-09-22 18:59 UTC (permalink / raw)
  To: Borislav Petkov, Paolo Bonzini
  Cc: Brijesh Singh, simon.guinot, linux-efi, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, dyoung, jroedel, keescook,
	toshi.kani, mathieu.desnoyers, devel, tglx, mchehab,
	iamjoonsoo.kim

On 09/22/2016 09:59 AM, Borislav Petkov wrote:
> On Thu, Sep 22, 2016 at 04:45:51PM +0200, Paolo Bonzini wrote:
>> The main difference between the SME and SEV encryption, from the point
>> of view of the kernel, is that real-mode always writes unencrypted in
>> SME and always writes encrypted in SEV.  But UEFI can run in 64-bit mode
>> and learn about the C bit, so EFI boot data should be unprotected in SEV
>> guests.
> 
> Actually, it is different: you can start fully encrypted in SME, see:
> 
> https://lkml.kernel.org/r/20160822223539.29880.96739.stgit@tlendack-t1.amdoffice.net
> 
> The last paragraph alludes to a certain transparent mode where you're
> already encrypted and only certain pieces like EFI is not encrypted. I
> think the aim is to have the transparent mode be the default one, which
> makes most sense anyway.

There is a new Transparent SME mode that is now part of the overall
SME support, but I'm not alluding to that in the documentation at all.
In TSME mode, everything that goes through the memory controller would
be encrypted and that would include EFI data, etc.  TSME would be
enabled through a BIOS option, thus allowing legacy OSes to benefit.

> 
> The EFI regions are unencrypted for obvious reasons and you need to
> access them as such.
> 
>> Because the firmware volume is written to high memory in encrypted
>> form, and because the PEI phase runs in 32-bit mode, the firmware
>> code will be encrypted; on the other hand, data that is placed in low
>> memory for the kernel can be unencrypted, thus limiting differences
>> between SME and SEV.
> 
> When you run fully encrypted, you still need to access EFI tables in the
> clear. That's why I'm confused about this patch here.

This patch assumes that the EFI regions of a guest would be encrypted.

Thanks,
Tom

> 

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 09/28] x86/efi: Access EFI data as encrypted when SEV is active
  2016-09-22 17:07             ` Borislav Petkov
  (?)
  (?)
@ 2016-09-22 19:04               ` Tom Lendacky
  -1 siblings, 0 replies; 255+ messages in thread
From: Tom Lendacky @ 2016-09-22 19:04 UTC (permalink / raw)
  To: Borislav Petkov, Paolo Bonzini
  Cc: Brijesh Singh, simon.guinot, linux-efi, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, dyoung, jroedel, keescook,
	toshi.kani, mathieu.desnoyers, devel, tglx, mchehab,
	iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail

On 09/22/2016 12:07 PM, Borislav Petkov wrote:
> On Thu, Sep 22, 2016 at 05:05:54PM +0200, Paolo Bonzini wrote:
>> Which paragraph?
> 
> "Linux relies on BIOS to set this bit if BIOS has determined that the
> reduction in the physical address space as a result of enabling memory
> encryption..."
> 
> Basically, you can enable SME in the BIOS and you're all set.

That's not what I mean here.  If the BIOS sets the SMEE bit in the
SYS_CFG msr then, even if the encryption bit is never used, there is
still a reduction in physical address space.

Transparent SME (TSME) will be a BIOS option that will result in the
memory controller performing encryption no matter what. In this case
all data will be encrypted without a reduction in physical address
space.

Thanks,
Tom

> 

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 09/28] x86/efi: Access EFI data as encrypted when SEV is active
  2016-09-22 19:04               ` Tom Lendacky
  (?)
@ 2016-09-22 19:11                 ` Borislav Petkov
  -1 siblings, 0 replies; 255+ messages in thread
From: Borislav Petkov @ 2016-09-22 19:11 UTC (permalink / raw)
  To: Tom Lendacky
  Cc: Paolo Bonzini, Brijesh Singh, simon.guinot, linux-efi, kvm,
	rkrcmar, matt, linus.walleij, linux-mm, paul.gortmaker, hpa,
	dan.j.williams, aarcange, sfr, andriy.shevchenko, herbert, bhe,
	xemul, joro, x86, mingo, msalter, ross.zwisler, dyoung, jroedel,
	keescook, toshi.kani, mathieu.desnoyers, devel, tglx, mchehab,
	iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail, linux-kernel

On Thu, Sep 22, 2016 at 02:04:27PM -0500, Tom Lendacky wrote:
> That's not what I mean here.  If the BIOS sets the SMEE bit in the
> SYS_CFG msr then, even if the encryption bit is never used, there is
> still a reduction in physical address space.

I thought that reduction is the reservation of bits for the SME mask.

What other reduction is there?

> Transparent SME (TSME) will be a BIOS option that will result in the
> memory controller performing encryption no matter what. In this case
> all data will be encrypted without a reduction in physical address
> space.

Now I'm confused: aren't we reducing the address space with the SME
mask?

Or what reduction do you mean?

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 09/28] x86/efi: Access EFI data as encrypted when SEV is active
  2016-09-22 19:11                 ` Borislav Petkov
  (?)
  (?)
@ 2016-09-22 19:49                   ` Tom Lendacky
  -1 siblings, 0 replies; 255+ messages in thread
From: Tom Lendacky @ 2016-09-22 19:49 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Paolo Bonzini, Brijesh Singh, simon.guinot, linux-efi, kvm,
	rkrcmar, matt, linus.walleij, linux-mm, paul.gortmaker, hpa,
	dan.j.williams, aarcange, sfr, andriy.shevchenko, herbert, bhe,
	xemul, joro, x86, mingo, msalter, ross.zwisler, dyoung, jroedel,
	keescook, toshi.kani, mathieu.desnoyers, devel, tglx, mchehab,
	iamjoonsoo.kim, labbott, tony.luck, alexan

On 09/22/2016 02:11 PM, Borislav Petkov wrote:
> On Thu, Sep 22, 2016 at 02:04:27PM -0500, Tom Lendacky wrote:
>> That's not what I mean here.  If the BIOS sets the SMEE bit in the
>> SYS_CFG msr then, even if the encryption bit is never used, there is
>> still a reduction in physical address space.
> 
> I thought that reduction is the reservation of bits for the SME mask.
> 
> What other reduction is there?

There is a reduction in physical address space for the SME mask and the
bits used to aid in identifying the ASID associated with the memory
request. This allows for the memory controller to determine the key to
be used for the encryption operation (host/hypervisor key vs. an SEV
guest key).

Thanks,
Tom

> 
>> Transparent SME (TSME) will be a BIOS option that will result in the
>> memory controller performing encryption no matter what. In this case
>> all data will be encrypted without a reduction in physical address
>> space.
> 
> Now I'm confused: aren't we reducing the address space with the SME
> mask?
> 
> Or what reduction do you mean?
> 

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 09/28] x86/efi: Access EFI data as encrypted when SEV is active
  2016-09-22 19:49                   ` Tom Lendacky
  (?)
@ 2016-09-22 20:10                     ` Borislav Petkov
  -1 siblings, 0 replies; 255+ messages in thread
From: Borislav Petkov @ 2016-09-22 20:10 UTC (permalink / raw)
  To: Tom Lendacky
  Cc: Paolo Bonzini, Brijesh Singh, simon.guinot, linux-efi, kvm,
	rkrcmar, matt, linus.walleij, linux-mm, paul.gortmaker, hpa,
	dan.j.williams, aarcange, sfr, andriy.shevchenko, herbert, bhe,
	xemul, joro, x86, mingo, msalter, ross.zwisler, dyoung, jroedel,
	keescook, toshi.kani, mathieu.desnoyers, devel, tglx, mchehab,
	iamjoonsoo.kim, labbott, tony.luck, alexandre.bounine,
	kuleshovmail, linux-kernel

On Thu, Sep 22, 2016 at 02:49:22PM -0500, Tom Lendacky wrote:
> > I thought that reduction is the reservation of bits for the SME mask.
> > 
> > What other reduction is there?
> 
> There is a reduction in physical address space for the SME mask and the
> bits used to aid in identifying the ASID associated with the memory
> request. This allows for the memory controller to determine the key to
> be used for the encryption operation (host/hypervisor key vs. an SEV
> guest key).

Ok, I think I see what you mean: you call SME mask the bit in CPUID
Fn8000_001F[EBX][5:0], i.e., the C-bit, i.e. sme_me_mask. And the other
reduction is the key ASID, i.e., CPUID Fn8000_001F[EBX][11:6], i.e.
sme_me_loss.

I think we're on the same page - I was simply calling everything SME
mask because both are together in the PTE:

"Additionally, in some implementations, the physical address size of the
processor may be reduced when memory encryption features are enabled,
for example from 48 to 43 bits."
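For reference, the two EBX fields being discussed can be decoded as below. This is a minimal standalone sketch, assuming the layout from the public CPUID definition (EBX[5:0] is the C-bit position, EBX[11:6] is the physical-address-size reduction); the example value 0x16F is made up for illustration, corresponding to a C-bit at position 47 and the 48-to-43-bit reduction quoted above.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Decode the memory-encryption fields of CPUID Fn8000_001F[EBX]:
 *   EBX[5:0]  - bit position of the C-bit (encryption bit) in a PTE
 *   EBX[11:6] - reduction in physical address size when memory
 *               encryption is enabled
 */
static unsigned int cbit_position(uint32_t ebx)
{
	return ebx & 0x3f;
}

static unsigned int phys_addr_reduction(uint32_t ebx)
{
	return (ebx >> 6) & 0x3f;
}
```

With ebx = 0x16F, cbit_position() yields 47 and phys_addr_reduction() yields 5, i.e. a 48-bit physical address space reduced to 43 bits.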

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 09/28] x86/efi: Access EFI data as encrypted when SEV is active
  2016-09-22 18:37           ` Borislav Petkov
  (?)
@ 2016-09-23  9:33             ` Kai Huang
  -1 siblings, 0 replies; 255+ messages in thread
From: Kai Huang @ 2016-09-23  9:33 UTC (permalink / raw)
  To: Borislav Petkov, Paolo Bonzini
  Cc: Tom Lendacky, Brijesh Singh, simon.guinot, linux-efi, kvm,
	rkrcmar, matt, linus.walleij, linux-mm, paul.gortmaker, hpa,
	dan.j.williams, aarcange, sfr, andriy.shevchenko, herbert, bhe,
	xemul, joro, x86, mingo, msalter, ross.zwisler, dyoung, jroedel,
	keescook, toshi.kani, mathieu.desnoyers, devel, tglx, mchehab,
	iamjoonsoo.kim, labbott



On 23/09/16 06:37, Borislav Petkov wrote:
> On Thu, Sep 22, 2016 at 08:23:36PM +0200, Paolo Bonzini wrote:
>> Unless this is part of some spec, it's easier if things are the same in
>> SME and SEV.
> Yeah, I was pondering over how sprinkling sev_active checks might not be
> so clean.
>
> I'm wondering if we could make the EFI regions presented to the guest
> unencrypted too, as part of some SEV-specific init routine so that the
> guest kernel doesn't need to do anything different.
How is this even possible? The spec clearly says that under SEV only a 
guest in long mode or PAE mode can control whether memory is encrypted 
via the C-bit, and in other modes the guest is always in encrypted mode. 
The guest EFI is also virtual, so are you suggesting the EFI code (or the 
code which loads EFI) should also be modified to load EFI as unencrypted? 
That doesn't look possible.

Thanks,
-Kai
>


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 09/28] x86/efi: Access EFI data as encrypted when SEV is active
  2016-09-23  9:33             ` Kai Huang
  (?)
@ 2016-09-23  9:50               ` Borislav Petkov
  -1 siblings, 0 replies; 255+ messages in thread
From: Borislav Petkov @ 2016-09-23  9:50 UTC (permalink / raw)
  To: Kai Huang
  Cc: Paolo Bonzini, Tom Lendacky, Brijesh Singh, simon.guinot,
	linux-efi, kvm, rkrcmar, matt, linus.walleij, linux-mm,
	paul.gortmaker, hpa, dan.j.williams, aarcange, sfr,
	andriy.shevchenko, herbert, bhe, xemul, joro, x86, mingo,
	msalter, ross.zwisler, dyoung, jroedel, keescook, toshi.kani,
	mathieu.desnoyers, devel, tglx, mchehab, iamjoonsoo.kim, labbott,
	tony.luck, alexandre.bounine, ku

On Fri, Sep 23, 2016 at 09:33:00PM +1200, Kai Huang wrote:
> How is this even possible? The spec clearly says under SEV only in long mode
> or PAE mode guest can control whether memory is encrypted via c-bit, and in
> other modes guest will be always in encrypted mode.

I was suggesting the hypervisor supplies the EFI ranges unencrypted. But
that is not a good idea because firmware data is exposed then, see same
thread from yesterday.

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 19/28] KVM: SVM: prepare to reserve asid for SEV guest
  2016-08-22 23:27   ` Brijesh Singh
@ 2016-10-13 10:17     ` Paolo Bonzini
  -1 siblings, 0 replies; 255+ messages in thread
From: Paolo Bonzini @ 2016-10-13 10:17 UTC (permalink / raw)
  To: Brijesh Singh, simon.guinot, linux-efi, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck



On 23/08/2016 01:27, Brijesh Singh wrote:
> In the current implementation, ASID allocation starts from 1. This patch
> adds a min_asid variable to the svm_cpu_data structure so that ASID
> allocation can start from something other than 1.
> 
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> ---
>  arch/x86/kvm/svm.c |    4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 211be94..f010b23 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -470,6 +470,7 @@ struct svm_cpu_data {
>  	u64 asid_generation;
>  	u32 max_asid;
>  	u32 next_asid;
> +	u32 min_asid;
>  	struct kvm_ldttss_desc *tss_desc;
>  
>  	struct page *save_area;
> @@ -726,6 +727,7 @@ static int svm_hardware_enable(void)
>  	sd->asid_generation = 1;
>  	sd->max_asid = cpuid_ebx(SVM_CPUID_FUNC) - 1;
>  	sd->next_asid = sd->max_asid + 1;
> +	sd->min_asid = 1;
>  
>  	native_store_gdt(&gdt_descr);
>  	gdt = (struct desc_struct *)gdt_descr.address;
> @@ -1887,7 +1889,7 @@ static void new_asid(struct vcpu_svm *svm, struct svm_cpu_data *sd)
>  {
>  	if (sd->next_asid > sd->max_asid) {
>  		++sd->asid_generation;
> -		sd->next_asid = 1;
> +		sd->next_asid = sd->min_asid;
>  		svm->vmcb->control.tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
>  	}
>  
> 

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
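The wrap-around change in the new_asid() hunk above can be modeled outside the kernel. The struct and function below are simplified standalone stand-ins (not the kernel's svm_cpu_data/new_asid), with a flushed flag standing in for setting TLB_CONTROL_FLUSH_ALL_ASID in the VMCB:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified model of the ASID fields touched by this patch. */
struct asid_state {
	uint64_t generation;
	uint32_t min_asid;	/* first ASID handed out after a wrap */
	uint32_t max_asid;
	uint32_t next_asid;
	int flushed;		/* stands in for TLB_CONTROL_FLUSH_ALL_ASID */
};

/*
 * Mirrors the patched new_asid(): when the ASID space is exhausted,
 * bump the generation, wrap to min_asid (not a hard-coded 1), and
 * request a full TLB flush.
 */
static uint32_t alloc_asid(struct asid_state *s)
{
	s->flushed = 0;
	if (s->next_asid > s->max_asid) {
		++s->generation;
		s->next_asid = s->min_asid;
		s->flushed = 1;
	}
	return s->next_asid++;
}
```

With min_asid raised above 1 (as the later SEV patch does), non-SEV guests cycle only through [min_asid, max_asid], leaving the low ASIDs reserved.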

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 20/28] KVM: SVM: prepare for SEV guest management API support
  2016-08-22 23:28   ` Brijesh Singh
                     ` (2 preceding siblings ...)
  (?)
@ 2016-10-13 10:41   ` Paolo Bonzini
  -1 siblings, 0 replies; 255+ messages in thread
From: Paolo Bonzini @ 2016-10-13 10:41 UTC (permalink / raw)
  To: Brijesh Singh, simon.guinot, linux-efi, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck



On 23/08/2016 01:28, Brijesh Singh wrote:
> The patch adds initial support required for Secure Encrypted
> Virtualization (SEV) guest management API's.
> 
> ASID management:
>  - Reserve asid range for SEV guest, SEV asid range is obtained
>    through CPUID Fn8000_001f[ECX]. A non-SEV guest can use any
>    asid outside the SEV asid range.
>  - SEV guest must have asid value within asid range obtained
>    through CPUID.
>  - SEV guest must have the same asid for all vcpu's. A TLB flush
>    is required if different vcpu for the same ASID is to be run
>    on the same host CPU.
> 
> - save SEV private structure in kvm_arch.
> 
> - If SEV is available then initialize PSP firmware during hardware probe
> 
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> ---
>  arch/x86/include/asm/kvm_host.h |    9 ++
>  arch/x86/kvm/svm.c              |  213 +++++++++++++++++++++++++++++++++++++++
>  2 files changed, 221 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index b1dd673..9b885fc 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -715,6 +715,12 @@ struct kvm_hv {
>  	u64 hv_crash_ctl;
>  };
>  
> +struct kvm_sev_info {
> +	unsigned int asid;	/* asid for this guest */
> +	unsigned int handle;	/* firmware handle */
> +	unsigned int ref_count; /* number of active vcpus */
> +};
> +
>  struct kvm_arch {
>  	unsigned int n_used_mmu_pages;
>  	unsigned int n_requested_mmu_pages;
> @@ -799,6 +805,9 @@ struct kvm_arch {
>  
>  	bool x2apic_format;
>  	bool x2apic_broadcast_quirk_disabled;
> +
> +	/* struct for SEV guest */
> +	struct kvm_sev_info sev_info;
>  };
>  
>  struct kvm_vm_stat {
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index f010b23..dcee635 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -34,6 +34,7 @@
>  #include <linux/sched.h>
>  #include <linux/trace_events.h>
>  #include <linux/slab.h>
> +#include <linux/ccp-psp.h>
>  
>  #include <asm/apic.h>
>  #include <asm/perf_event.h>
> @@ -186,6 +187,9 @@ struct vcpu_svm {
>  	struct page *avic_backing_page;
>  	u64 *avic_physical_id_cache;
>  	bool avic_is_running;
> +
> +	/* which host cpu was used for running this vcpu */
> +	bool last_cpuid;
>  };
>  
>  #define AVIC_LOGICAL_ID_ENTRY_GUEST_PHYSICAL_ID_MASK	(0xFF)
> @@ -243,6 +247,25 @@ static int avic;
>  module_param(avic, int, S_IRUGO);
>  #endif
>  
> +/* Secure Encrypted Virtualization */
> +static bool sev_enabled;

You can check max_sev_asid != 0 instead (wrapped in a sev_enabled()
function).

> +static unsigned long max_sev_asid;

Need not be 64-bit.

> +static unsigned long *sev_asid_bitmap;

Please note what lock protects this, and modify it with __set_bit and
__clear_bit.

> +#define kvm_sev_guest()		(kvm->arch.sev_info.handle)
> +#define kvm_sev_handle()	(kvm->arch.sev_info.handle)
> +#define kvm_sev_ref()		(kvm->arch.sev_info.ref_count++)
> +#define kvm_sev_unref()		(kvm->arch.sev_info.ref_count--)
> +#define svm_sev_handle()	(svm->vcpu.kvm->arch.sev_info.handle)
> +#define svm_sev_asid()		(svm->vcpu.kvm->arch.sev_info.asid)
> +#define svm_sev_ref()		(svm->vcpu.kvm->arch.sev_info.ref_count++)
> +#define svm_sev_unref()		(svm->vcpu.kvm->arch.sev_info.ref_count--)
> +#define svm_sev_guest()		(svm->vcpu.kvm->arch.sev_info.handle)
> +#define svm_sev_ref_count()	(svm->vcpu.kvm->arch.sev_info.ref_count)

Why is the reference count necessary?  Could you use the kvm refcount
instead and free the ASID in kvm_x86_ops->vm_destroy?  Also, what lock
protects the reference count?

Also please remove the macros in general.  If there is only a struct
vcpu_svm*, use

    struct kvm_arch *vm_data = &svm->vcpu.kvm->arch;

as done for example in avic_init_vmcb.
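In the same spirit, something like `svm_sev_asid()` could become a small helper that takes the pointer once. The struct layout below is a mock for illustration only, not the real kvm_host.h definitions:

```c
#include <assert.h>

/* Mock layout standing in for the kernel types, illustration only */
struct kvm_sev_info { unsigned int asid, handle, ref_count; };
struct kvm_arch { struct kvm_sev_info sev_info; };
struct kvm { struct kvm_arch arch; };
struct kvm_vcpu { struct kvm *kvm; };
struct vcpu_svm { struct kvm_vcpu vcpu; };

/* Instead of the svm_sev_asid() macro, take a local pointer once,
 * as avic_init_vmcb() does with its vm_data pointer. */
static unsigned int sev_get_asid(struct vcpu_svm *svm)
{
	struct kvm_sev_info *sev = &svm->vcpu.kvm->arch.sev_info;

	return sev->asid;
}
```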

> +
> +static int sev_asid_new(void);
> +static void sev_asid_free(int asid);
> +
>  static void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
>  static void svm_flush_tlb(struct kvm_vcpu *vcpu);
>  static void svm_complete_interrupts(struct vcpu_svm *svm);
> @@ -474,6 +497,8 @@ struct svm_cpu_data {
>  	struct kvm_ldttss_desc *tss_desc;
>  
>  	struct page *save_area;
> +
> +	void **sev_vmcb;  /* index = sev_asid, value = vmcb pointer */

It's not a void**, it's a struct vmcb**.  Please rename it to sev_vmcbs,
too, so that it's clear that it's an array.

>  };
>  
>  static DEFINE_PER_CPU(struct svm_cpu_data *, svm_data);
> @@ -727,7 +752,10 @@ static int svm_hardware_enable(void)
>  	sd->asid_generation = 1;
>  	sd->max_asid = cpuid_ebx(SVM_CPUID_FUNC) - 1;
>  	sd->next_asid = sd->max_asid + 1;
> -	sd->min_asid = 1;
> +	sd->min_asid = max_sev_asid + 1;
> +
> +	if (sev_enabled)
> +		memset(sd->sev_vmcb, 0, (max_sev_asid + 1) * sizeof(void *));

This seems strange.  You should clear the field, for each possible CPU,
in sev_asid_free, not in sev_uninit_vcpu.  Then when you reuse the ASID,
sev_vmcbs[asid] will be NULL everywhere.
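The suggested flow can be sketched with the per-CPU data modeled as a plain 2-D array; dimensions and names here are illustrative, not the kernel's:

```c
#include <assert.h>
#include <stddef.h>

#define NR_CPUS_SKETCH 4
#define MAX_ASID_SKETCH 8

/* sev_vmcbs[cpu][asid] caches the last VMCB run with that ASID on
 * that CPU.  Clearing it on ASID free means a reused ASID starts
 * out NULL everywhere, forcing a TLB flush on its first VMRUN. */
static void *sev_vmcbs[NR_CPUS_SKETCH][MAX_ASID_SKETCH];

static void sev_asid_free_sketch(int asid)
{
	int cpu;

	for (cpu = 0; cpu < NR_CPUS_SKETCH; cpu++)
		sev_vmcbs[cpu][asid] = NULL;
	/* ...then clear the asid bit in the ASID bitmap */
}
```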

> @@ -931,6 +968,74 @@ static void svm_disable_lbrv(struct vcpu_svm *svm)
>  	set_msr_interception(msrpm, MSR_IA32_LASTINTTOIP, 0, 0);
>  }
>  
> +static __init void sev_hardware_setup(void)
> +{
> +	int ret, psp_ret;
> +	struct psp_data_init *init;
> +	struct psp_data_status *status;
> +
> +	/*
> +	 * Check SEV Feature Support: Fn8001_001F[EAX]
> +	 * 	Bit 1: Secure Memory Virtualization supported
> +	 */
> +	if (!(cpuid_eax(0x8000001F) & 0x2))
> +		return;
> +
> +	/*
> +	 * Get maximum number of encrypted guest supported: Fn8001_001F[ECX]
> +	 * 	Bit 31:0: Number of supported guest
> +	 */
> +	max_sev_asid = cpuid_ecx(0x8000001F);
> +	if (!max_sev_asid)
> +		return;
> +
> +	init = kzalloc(sizeof(*init), GFP_KERNEL);
> +	if (!init)
> +		return;
> +
> +	status = kzalloc(sizeof(*status), GFP_KERNEL);
> +	if (!status)
> +		goto err_1;
> +
> +	/* Initialize PSP firmware */
> +	init->hdr.buffer_len = sizeof(*init);
> +	init->flags = 0;
> +	ret = psp_platform_init(init, &psp_ret);
> +	if (ret) {
> +		printk(KERN_ERR "SEV: PSP_INIT ret=%d (%#x)\n", ret, psp_ret);
> +		goto err_2;
> +	}
> +
> +	/* Initialize SEV ASID bitmap */
> +	sev_asid_bitmap = kmalloc(max(sizeof(unsigned long),
> +				      max_sev_asid/8 + 1), GFP_KERNEL);

What you want here is

	kcalloc(BITS_TO_LONGS(max_sev_asid), sizeof(unsigned long),
		GFP_KERNEL);

> +	if (IS_ERR(sev_asid_bitmap)) {
> +		psp_platform_shutdown(&psp_ret);
> +		goto err_2;
> +	}
> +	bitmap_zero(sev_asid_bitmap, max_sev_asid);

... and then no need for the bitmap_zero.
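In userspace terms the suggested allocation looks like this, with `calloc` playing the role of `kcalloc` so the memory arrives zeroed; `BITS_PER_LONG`/`BITS_TO_LONGS` are re-derived here for a self-contained sketch:

```c
#include <assert.h>
#include <limits.h>
#include <stdlib.h>

#define BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))
#define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

/* One zeroed bitmap word array sized for max_asid bits -- the
 * calloc (kcalloc in the kernel) zeroes it, so no bitmap_zero() */
static unsigned long *alloc_asid_bitmap(unsigned int max_asid)
{
	return calloc(BITS_TO_LONGS(max_asid), sizeof(unsigned long));
}
```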

> +	set_bit(0, sev_asid_bitmap);  /* mark ASID 0 as used */
> +
> +	sev_enabled = 1;
> +	printk(KERN_INFO "kvm: SEV enabled\n");
> +
> +	/* Query the platform status and print API version */
> +	status->hdr.buffer_len = sizeof(*status);
> +	ret = psp_platform_status(status, &psp_ret);
> +	if (ret) {
> +		printk(KERN_ERR "SEV: PLATFORM_STATUS ret=%#x\n", psp_ret);
> +		goto err_2;
> +	}
> +
> +	printk(KERN_INFO "SEV API: %d.%d\n",
> +			status->api_major, status->api_minor);
> +err_2:
> +	kfree(status);
> +err_1:
> +	kfree(init);
> +	return;
> +}
> +
>  static __init int svm_hardware_setup(void)
>  {
>  	int cpu;
> @@ -966,6 +1071,8 @@ static __init int svm_hardware_setup(void)
>  		kvm_enable_efer_bits(EFER_SVME | EFER_LMSLE);
>  	}
>  
> +	sev_hardware_setup();
> +
>  	for_each_possible_cpu(cpu) {
>  		r = svm_cpu_init(cpu);
>  		if (r)
> @@ -1003,10 +1110,25 @@ err:
>  	return r;
>  }
>  
> +static __exit void sev_hardware_unsetup(void)
> +{
> +	int ret, psp_ret;
> +
> +	ret = psp_platform_shutdown(&psp_ret);
> +	if (ret)
> +		printk(KERN_ERR "failed to shutdown PSP rc=%d (%#010x)\n",
> +		       ret, psp_ret);
> +
> +	kfree(sev_asid_bitmap);
> +}
> +
>  static __exit void svm_hardware_unsetup(void)
>  {
>  	int cpu;
>  
> +	if (sev_enabled)
> +		sev_hardware_unsetup();
> +
>  	for_each_possible_cpu(cpu)
>  		svm_cpu_uninit(cpu);
>  
> @@ -1088,6 +1210,11 @@ static void avic_init_vmcb(struct vcpu_svm *svm)
>  	svm->vcpu.arch.apicv_active = true;
>  }
>  
> +static void sev_init_vmcb(struct vcpu_svm *svm)
> +{
> +	svm->vmcb->control.nested_ctl |= SVM_NESTED_CTL_SEV_ENABLE;
> +}
> +
>  static void init_vmcb(struct vcpu_svm *svm)
>  {
>  	struct vmcb_control_area *control = &svm->vmcb->control;
> @@ -1202,6 +1329,10 @@ static void init_vmcb(struct vcpu_svm *svm)
>  	if (avic)
>  		avic_init_vmcb(svm);
>  
> +	if (svm_sev_guest())
> +		sev_init_vmcb(svm);
> +
> +
>  	mark_all_dirty(svm->vmcb);
>  
>  	enable_gif(svm);
> @@ -1413,6 +1544,14 @@ static void svm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
>  		avic_update_vapic_bar(svm, APIC_DEFAULT_PHYS_BASE);
>  }
>  
> +static void sev_init_vcpu(struct vcpu_svm *svm)
> +{
> +	if (!svm_sev_guest())
> +		return;
> +
> +	svm_sev_ref();
> +}
> +
>  static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
>  {
>  	struct vcpu_svm *svm;
> @@ -1475,6 +1614,7 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
>  	init_vmcb(svm);
>  
>  	svm_init_osvw(&svm->vcpu);
> +	sev_init_vcpu(svm);
>  
>  	return &svm->vcpu;
>  
> @@ -1494,6 +1634,23 @@ out:
>  	return ERR_PTR(err);
>  }
>  
> +static void sev_uninit_vcpu(struct vcpu_svm *svm)
> +{
> +	int cpu;
> +	int asid = svm_sev_asid();
> +	struct svm_cpu_data *sd;
> +
> +	if (!svm_sev_guest())
> +		return;
> +
> +	svm_sev_unref();
> +
> +	for_each_possible_cpu(cpu) {
> +		sd = per_cpu(svm_data, cpu);
> +		sd->sev_vmcb[asid] = NULL;
> +	}
> +}
> +
>  static void svm_free_vcpu(struct kvm_vcpu *vcpu)
>  {
>  	struct vcpu_svm *svm = to_svm(vcpu);
> @@ -1502,6 +1659,7 @@ static void svm_free_vcpu(struct kvm_vcpu *vcpu)
>  	__free_pages(virt_to_page(svm->msrpm), MSRPM_ALLOC_ORDER);
>  	__free_page(virt_to_page(svm->nested.hsave));
>  	__free_pages(virt_to_page(svm->nested.msrpm), MSRPM_ALLOC_ORDER);
> +	sev_uninit_vcpu(svm);
>  	kvm_vcpu_uninit(vcpu);
>  	kmem_cache_free(kvm_vcpu_cache, svm);
>  }
> @@ -1945,6 +2103,11 @@ static int pf_interception(struct vcpu_svm *svm)
>  	default:
>  		error_code = svm->vmcb->control.exit_info_1;
>  
> +		/* In SEV mode, the guest physical address will have C-bit
> +		 * set. C-bit must be cleared before handling the fault.
> +		 */
> +		if (svm_sev_guest())
> +			fault_address &= ~sme_me_mask;
>  		trace_kvm_page_fault(fault_address, error_code);
>  		if (!npt_enabled && kvm_event_needs_reinjection(&svm->vcpu))
>  			kvm_mmu_unprotect_page_virt(&svm->vcpu, fault_address);
> @@ -4131,12 +4294,40 @@ static void reload_tss(struct kvm_vcpu *vcpu)
>  	load_TR_desc();
>  }
>  
> +static void pre_sev_run(struct vcpu_svm *svm)
> +{
> +	int asid = svm_sev_asid();
> +	int cpu = raw_smp_processor_id();
> +	struct svm_cpu_data *sd = per_cpu(svm_data, cpu);
> +
> +	/* Assign the asid allocated for this SEV guest */
> +	svm->vmcb->control.asid = svm_sev_asid();
> +
> +	/* Flush guest TLB:
> +	 * - when different VMCB for the same ASID is to be run on the
> +	 *   same host CPU
> +	 *   or 
> +	 * - this VMCB was executed on different host cpu in previous VMRUNs.
> +	 */
> +	if (sd->sev_vmcb[asid] != (void *)svm->vmcb ||
> +		svm->last_cpuid != cpu)
> +		svm->vmcb->control.tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
> +
> +	svm->last_cpuid = cpu;
> +	sd->sev_vmcb[asid] = (void *)svm->vmcb;
> +
> +	mark_dirty(svm->vmcb, VMCB_ASID);
> +}
> +
>  static void pre_svm_run(struct vcpu_svm *svm)
>  {
>  	int cpu = raw_smp_processor_id();
>  
>  	struct svm_cpu_data *sd = per_cpu(svm_data, cpu);
>  
> +	if (svm_sev_guest())
> +		return pre_sev_run(svm);
> +
>  	/* FIXME: handle wraparound of asid_generation */
>  	if (svm->asid_generation != sd->asid_generation)
>  		new_asid(svm, sd);
> @@ -4985,6 +5176,26 @@ static inline void avic_post_state_restore(struct kvm_vcpu *vcpu)
>  	avic_handle_ldr_update(vcpu);
>  }
>  
> +static int sev_asid_new(void)
> +{
> +	int pos;
> +
> +	if (!sev_enabled)
> +		return -ENOTTY;
> +
> +	pos = find_first_zero_bit(sev_asid_bitmap, max_sev_asid);
> +	if (pos >= max_sev_asid)
> +		return -EBUSY;
> +
> +	set_bit(pos, sev_asid_bitmap);
> +	return pos;
> +}
> +
> +static void sev_asid_free(int asid)
> +{
> +	clear_bit(asid, sev_asid_bitmap);
> +}

Please move these (and sev_asid_bitmap) to patch 22 where they're first
used.

Paolo

>  static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
>  	.cpu_has_kvm_support = has_svm,
>  	.disabled_by_bios = is_disabled,
> 
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
> 

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 21/28] KVM: introduce KVM_SEV_ISSUE_CMD ioctl
  2016-08-22 23:28   ` Brijesh Singh
@ 2016-10-13 10:45     ` Paolo Bonzini
  -1 siblings, 0 replies; 255+ messages in thread
From: Paolo Bonzini @ 2016-10-13 10:45 UTC (permalink / raw)
  To: Brijesh Singh, simon.guinot, linux-efi, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab



On 23/08/2016 01:28, Brijesh Singh wrote:
> The ioctl will be used by qemu to issue the Secure Encrypted
> Virtualization (SEV) guest commands to transition a guest into
> SEV-enabled mode.
> 
> a typical usage:
> 
> struct kvm_sev_launch_start start;
> struct kvm_sev_issue_cmd data;
> 
> data.cmd = KVM_SEV_LAUNCH_START;
> data.opaque = &start;
> 
> ret = ioctl(fd, KVM_SEV_ISSUE_CMD, &data);
> 
> On SEV command failure, data.ret_code will contain the firmware error code.

Please modify the ioctl to require the file descriptor for the PSP.  A
program without access to /dev/psp should not be able to use SEV.
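A hypothetical shape of that change is sketched below: the command structure grows a `psp_fd` field holding an fd obtained by opening /dev/psp, which the kernel would validate (e.g. via fdget() and a file-ops check) before issuing any SEV command. The field name and the validation scheme are assumptions, not part of the posted patch:

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t __u32;
typedef uint64_t __u64;

/* Hypothetical extension of kvm_sev_issue_cmd: psp_fd proves the
 * caller could open /dev/psp.  Illustrative only. */
struct kvm_sev_issue_cmd_sketch {
	__u32 cmd;
	__u32 psp_fd;	/* fd from open("/dev/psp") */
	__u64 opaque;
	__u32 ret_code;
};
```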

> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> ---
>  arch/x86/include/asm/kvm_host.h |    3 +
>  arch/x86/kvm/x86.c              |   13 ++++
>  include/uapi/linux/kvm.h        |  125 +++++++++++++++++++++++++++++++++++++++
>  3 files changed, 141 insertions(+)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 9b885fc..a94e37d 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1040,6 +1040,9 @@ struct kvm_x86_ops {
>  	void (*cancel_hv_timer)(struct kvm_vcpu *vcpu);
>  
>  	void (*setup_mce)(struct kvm_vcpu *vcpu);
> +
> +	int (*sev_issue_cmd)(struct kvm *kvm,
> +			     struct kvm_sev_issue_cmd __user *argp);
>  };
>  
>  struct kvm_arch_async_pf {
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index d6f2f4b..0c0adad 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -3820,6 +3820,15 @@ split_irqchip_unlock:
>  	return r;
>  }
>  
> +static int kvm_vm_ioctl_sev_issue_cmd(struct kvm *kvm,
> +				      struct kvm_sev_issue_cmd __user *argp)
> +{
> +	if (kvm_x86_ops->sev_issue_cmd)
> +		return kvm_x86_ops->sev_issue_cmd(kvm, argp);
> +
> +	return -ENOTTY;
> +}

Please make a more generic vm_ioctl callback.
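One possible reading of that request, sketched in userspace C: a single generic `vm_ioctl` callback in `kvm_x86_ops` that the arch ioctl handler falls back to, instead of an SEV-specific `sev_issue_cmd` pointer. All names and the `KVM_SEV_ISSUE_CMD` value are illustrative:

```c
#include <assert.h>
#include <stddef.h>

#define ENOTTY_SKETCH 25
#define KVM_SEV_ISSUE_CMD_SKETCH 0xb8

struct kvm;	/* opaque here */

/* Generic per-vendor VM ioctl hook, instead of sev_issue_cmd */
struct kvm_x86_ops_sketch {
	long (*vm_ioctl)(struct kvm *kvm, unsigned int ioctl,
			 unsigned long arg);
};

/* SVM's implementation would claim the SEV command and reject
 * everything else */
static long svm_vm_ioctl(struct kvm *kvm, unsigned int ioctl,
			 unsigned long arg)
{
	(void)kvm; (void)arg;
	if (ioctl == KVM_SEV_ISSUE_CMD_SKETCH)
		return 0;	/* would dispatch to the SEV handler */
	return -ENOTTY_SKETCH;
}

/* The arch-level fallback in kvm_arch_vm_ioctl's default case */
static long kvm_arch_vm_ioctl_sketch(struct kvm_x86_ops_sketch *ops,
				     struct kvm *kvm, unsigned int ioctl,
				     unsigned long arg)
{
	if (ops->vm_ioctl)
		return ops->vm_ioctl(kvm, ioctl, arg);
	return -ENOTTY_SKETCH;
}
```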

>  long kvm_arch_vm_ioctl(struct file *filp,
>  		       unsigned int ioctl, unsigned long arg)
>  {
> @@ -4085,6 +4094,10 @@ long kvm_arch_vm_ioctl(struct file *filp,
>  		r = kvm_vm_ioctl_enable_cap(kvm, &cap);
>  		break;
>  	}
> +	case KVM_SEV_ISSUE_CMD: {
> +		r = kvm_vm_ioctl_sev_issue_cmd(kvm, argp);
> +		break;
> +	}
>  	default:
>  		r = kvm_vm_ioctl_assigned_device(kvm, ioctl, arg);
>  	}
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 300ef25..72c18c3 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1274,6 +1274,131 @@ struct kvm_s390_ucas_mapping {
>  /* Available with KVM_CAP_X86_SMM */
>  #define KVM_SMI                   _IO(KVMIO,   0xb7)
>  
> +/* Secure Encrypted Virtualization mode */
> +enum sev_cmd {
> +	KVM_SEV_LAUNCH_START = 0,
> +	KVM_SEV_LAUNCH_UPDATE,
> +	KVM_SEV_LAUNCH_FINISH,
> +	KVM_SEV_GUEST_STATUS,
> +	KVM_SEV_DBG_DECRYPT,
> +	KVM_SEV_DBG_ENCRYPT,
> +	KVM_SEV_RECEIVE_START,
> +	KVM_SEV_RECEIVE_UPDATE,
> +	KVM_SEV_RECEIVE_FINISH,
> +	KVM_SEV_SEND_START,
> +	KVM_SEV_SEND_UPDATE,
> +	KVM_SEV_SEND_FINISH,
> +	KVM_SEV_API_VERSION,
> +	KVM_SEV_NR_MAX,
> +};
> +
> +struct kvm_sev_issue_cmd {
> +	__u32 cmd;
> +	__u64 opaque;
> +	__u32 ret_code;
> +};
> +
> +struct kvm_sev_launch_start {
> +	__u32 handle;
> +	__u32 flags;
> +	__u32 policy;
> +	__u8 nonce[16];
> +	__u8 dh_pub_qx[32];
> +	__u8 dh_pub_qy[32];
> +};
> +
> +struct kvm_sev_launch_update {
> +	__u64	address;
> +	__u32	length;
> +};
> +
> +struct kvm_sev_launch_finish {
> +	__u32 vcpu_count;
> +	__u32 vcpu_length;
> +	__u64 vcpu_mask_addr;
> +	__u32 vcpu_mask_length;
> +	__u8  measurement[32];
> +};
> +
> +struct kvm_sev_guest_status {
> +	__u32 policy;
> +	__u32 state;
> +};
> +
> +struct kvm_sev_dbg_decrypt {
> +	__u64 src_addr;
> +	__u64 dst_addr;
> +	__u32 length;
> +};
> +
> +struct kvm_sev_dbg_encrypt {
> +	__u64 src_addr;
> +	__u64 dst_addr;
> +	__u32 length;
> +};
> +
> +struct kvm_sev_receive_start {
> +	__u32 handle;
> +	__u32 flags;
> +	__u32 policy;
> +	__u8 policy_meas[32];
> +	__u8 wrapped_tek[24];
> +	__u8 wrapped_tik[24];
> +	__u8 ten[16];
> +	__u8 dh_pub_qx[32];
> +	__u8 dh_pub_qy[32];
> +	__u8 nonce[16];
> +};
> +
> +struct kvm_sev_receive_update {
> +	__u8 iv[16];
> +	__u64 address;
> +	__u32 length;
> +};
> +
> +struct kvm_sev_receive_finish {
> +	__u8 measurement[32];
> +};
> +
> +struct kvm_sev_send_start {
> +	__u8 nonce[16];
> +	__u32 policy;
> +	__u8 policy_meas[32];
> +	__u8 wrapped_tek[24];
> +	__u8 wrapped_tik[24];
> +	__u8 ten[16];
> +	__u8 iv[16];
> +	__u32 flags;
> +	__u8 api_major;
> +	__u8 api_minor;
> +	__u32 serial;
> +	__u8 dh_pub_qx[32];
> +	__u8 dh_pub_qy[32];
> +	__u8 pek_sig_r[32];
> +	__u8 pek_sig_s[32];
> +	__u8 cek_sig_r[32];
> +	__u8 cek_sig_s[32];
> +	__u8 cek_pub_qx[32];
> +	__u8 cek_pub_qy[32];
> +	__u8 ask_sig_r[32];
> +	__u8 ask_sig_s[32];
> +	__u32 ncerts;
> +	__u32 cert_length;
> +	__u64 certs_addr;
> +};
> +
> +struct kvm_sev_send_update {
> +	__u32 length;
> +	__u64 src_addr;
> +	__u64 dst_addr;
> +};
> +
> +struct kvm_sev_send_finish {
> +	__u8 measurement[32];
> +};
> +
> +#define KVM_SEV_ISSUE_CMD	_IOWR(KVMIO, 0xb8, struct kvm_sev_issue_cmd)
> +
>  #define KVM_DEV_ASSIGN_ENABLE_IOMMU	(1 << 0)
>  #define KVM_DEV_ASSIGN_PCI_2_3		(1 << 1)
>  #define KVM_DEV_ASSIGN_MASK_INTX	(1 << 2)
> 
> 

^ permalink raw reply	[flat|nested] 255+ messages in thread


* Re: [RFC PATCH v1 22/28] KVM: SVM: add SEV launch start command
  2016-08-22 23:28   ` Brijesh Singh
                     ` (2 preceding siblings ...)
  (?)
@ 2016-10-13 11:12   ` Paolo Bonzini
  -1 siblings, 0 replies; 255+ messages in thread
From: Paolo Bonzini @ 2016-10-13 11:12 UTC (permalink / raw)
  To: Brijesh Singh, simon.guinot, linux-efi, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck



On 23/08/2016 01:28, Brijesh Singh wrote:
> +static int sev_launch_start(struct kvm *kvm,
> +			    struct kvm_sev_launch_start __user *arg,
> +			    int *psp_ret)
> +{
> +	int ret, asid;
> +	struct kvm_sev_launch_start params;
> +	struct psp_data_launch_start *start;
> +
> +	/* Get parameter from the user */
> +	if (copy_from_user(&params, arg, sizeof(*arg)))
> +		return -EFAULT;
> +
> +	start = kzalloc(sizeof(*start), GFP_KERNEL);
> +	if (!start)
> +		return -ENOMEM;
> +
> +	ret = sev_pre_start(kvm, &asid);

You need some locking in sev_asid_{new,free}.  Probably &kvm_lock.  The
SEV_ISSUE_CMD ioctl instead should take &kvm->lock.
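As a userspace sketch of the allocator with the locking being asked for — `sev_lock` stands in for kvm_lock, and a small flag array stands in for the real bitmap; all sizes are illustrative:

```c
#include <assert.h>
#include <pthread.h>

#define MAX_ASID_SKETCH 64

/* sev_lock stands in for kvm_lock; it serializes allocation and
 * free of ASIDs.  Element 0 is pre-marked used (ASID 0 reserved). */
static pthread_mutex_t sev_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned char sev_asid_used[MAX_ASID_SKETCH] = { 1 };

static int sev_asid_new(void)
{
	int asid;

	pthread_mutex_lock(&sev_lock);
	for (asid = 0; asid < MAX_ASID_SKETCH; asid++) {
		if (!sev_asid_used[asid]) {
			sev_asid_used[asid] = 1;
			pthread_mutex_unlock(&sev_lock);
			return asid;
		}
	}
	pthread_mutex_unlock(&sev_lock);
	return -1;	/* -EBUSY in the kernel */
}

static void sev_asid_free(int asid)
{
	pthread_mutex_lock(&sev_lock);
	sev_asid_used[asid] = 0;
	pthread_mutex_unlock(&sev_lock);
}
```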

Paolo

> +	if (ret)
> +		goto err_1;
> +
> +	start->hdr.buffer_len = sizeof(*start);
> +	start->flags  = params.flags;
> +	start->policy = params.policy;
> +	start->handle = params.handle;
> +	memcpy(start->nonce, &params.nonce, sizeof(start->nonce));
> +	memcpy(start->dh_pub_qx, &params.dh_pub_qx, sizeof(start->dh_pub_qx));
> +	memcpy(start->dh_pub_qy, &params.dh_pub_qy, sizeof(start->dh_pub_qy));
> +
> +	/* launch start */
> +	ret = psp_guest_launch_start(start, psp_ret);
> +	if (ret) {
> +		printk(KERN_ERR "SEV: LAUNCH_START ret=%d (%#010x)\n",
> +			ret, *psp_ret);
> +		goto err_2;
> +	}
> +
> +	ret = sev_post_start(kvm, asid, start->handle, psp_ret);
> +	if (ret)
> +		goto err_2;

Paolo

> +	kfree(start);
> +	return 0;
> +
> +err_2:
> +	sev_asid_free(asid);
> +err_1:
> +	kfree(start);
> +	return ret;
> +}
> +
> +static int amd_sev_issue_cmd(struct kvm *kvm,
> +			     struct kvm_sev_issue_cmd __user *user_data)
> +{
> +	int r = -ENOTTY;
> +	struct kvm_sev_issue_cmd arg;
> +
> +	if (copy_from_user(&arg, user_data, sizeof(struct kvm_sev_issue_cmd)))
> +		return -EFAULT;
> +
> +	switch (arg.cmd) {
> +	case KVM_SEV_LAUNCH_START: {
> +		r = sev_launch_start(kvm, (void *)arg.opaque,
> +					&arg.ret_code);
> +		break;
> +	}
> +	default:
> +		break;
> +	}
> +
> +	if (copy_to_user(user_data, &arg, sizeof(struct kvm_sev_issue_cmd)))
> +		r = -EFAULT;
> +	return r;
> +}
> +
>  static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
>  	.cpu_has_kvm_support = has_svm,
>  	.disabled_by_bios = is_disabled,
> @@ -5313,6 +5517,8 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
>  
>  	.pmu_ops = &amd_pmu_ops,
>  	.deliver_posted_interrupt = svm_deliver_avic_intr,
> +
> +	.sev_issue_cmd = amd_sev_issue_cmd,
>  };
>  
>  static int __init svm_init(void)
> 

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 24/28] KVM: SVM: add SEV_LAUNCH_FINISH command
  2016-08-22 23:28   ` Brijesh Singh
                     ` (2 preceding siblings ...)
  (?)
@ 2016-10-13 11:16   ` Paolo Bonzini
  -1 siblings, 0 replies; 255+ messages in thread
From: Paolo Bonzini @ 2016-10-13 11:16 UTC (permalink / raw)
  To: Brijesh Singh, simon.guinot, linux-efi, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck



On 23/08/2016 01:28, Brijesh Singh wrote:
> +
> +	/* Iterate through each vcpus and set SEV KVM_SEV_FEATURE bit in
> +	 * KVM_CPUID_FEATURE to indicate that SEV is enabled on this vcpu
> +	 */
> +	kvm_for_each_vcpu(i, vcpu, kvm)
> +		svm_cpuid_update(vcpu);
> +

Do you need another call to sev_init_vmcb here?

Paolo

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 00/28] x86: Secure Encrypted Virtualization (AMD)
  2016-08-22 23:23 ` Brijesh Singh
                   ` (57 preceding siblings ...)
  (?)
@ 2016-10-13 11:19 ` Paolo Bonzini
  2016-10-17 13:51     ` Brijesh Singh
  -1 siblings, 1 reply; 255+ messages in thread
From: Paolo Bonzini @ 2016-10-13 11:19 UTC (permalink / raw)
  To: Brijesh Singh, simon.guinot, linux-efi, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck



On 23/08/2016 01:23, Brijesh Singh wrote:
> TODO:
> - send qemu/seabios RFC's on respective mailing list
> - integrate the psp driver with CCP driver (they share the PCI id's)
> - add SEV guest migration command support
> - add SEV snapshotting command support
> - determine how to do ioremap of physical memory with mem encryption enabled
>   (e.g acpi tables)

They would be encrypted, right?  Similar to the EFI data in patch 9.

> - determine how to share the guest memory with hypervisor to support
>   pvclock driver

Is it enough if the guest makes that page unencrypted?

I reviewed the KVM host-side patches and they are pretty
straightforward, so the comments on each patch suffice.

Thanks,

Paolo

> Brijesh Singh (11):
>       crypto: add AMD Platform Security Processor driver
>       KVM: SVM: prepare to reserve asid for SEV guest
>       KVM: SVM: prepare for SEV guest management API support
>       KVM: introduce KVM_SEV_ISSUE_CMD ioctl
>       KVM: SVM: add SEV launch start command
>       KVM: SVM: add SEV launch update command
>       KVM: SVM: add SEV_LAUNCH_FINISH command
>       KVM: SVM: add KVM_SEV_GUEST_STATUS command
>       KVM: SVM: add KVM_SEV_DEBUG_DECRYPT command
>       KVM: SVM: add KVM_SEV_DEBUG_ENCRYPT command
>       KVM: SVM: add command to query SEV API version
> 
> Tom Lendacky (17):
>       kvm: svm: Add support for additional SVM NPF error codes
>       kvm: svm: Add kvm_fast_pio_in support
>       kvm: svm: Use the hardware provided GPA instead of page walk
>       x86: Secure Encrypted Virtualization (SEV) support
>       KVM: SVM: prepare for new bit definition in nested_ctl
>       KVM: SVM: Add SEV feature definitions to KVM
>       x86: Do not encrypt memory areas if SEV is enabled
>       Access BOOT related data encrypted with SEV active
>       x86/efi: Access EFI data as encrypted when SEV is active
>       x86: Change early_ioremap to early_memremap for BOOT data
>       x86: Don't decrypt trampoline area if SEV is active
>       x86: DMA support for SEV memory encryption
>       iommu/amd: AMD IOMMU support for SEV
>       x86: Don't set the SME MSR bit when SEV is active
>       x86: Unroll string I/O when SEV is active
>       x86: Add support to determine if running with SEV enabled
>       KVM: SVM: Enable SEV by setting the SEV_ENABLE cpu feature
> 
> 
>  arch/x86/boot/compressed/Makefile      |    2 
>  arch/x86/boot/compressed/head_64.S     |   19 +
>  arch/x86/boot/compressed/mem_encrypt.S |  123 ++++
>  arch/x86/include/asm/io.h              |   26 +
>  arch/x86/include/asm/kvm_emulate.h     |    3 
>  arch/x86/include/asm/kvm_host.h        |   27 +
>  arch/x86/include/asm/mem_encrypt.h     |    3 
>  arch/x86/include/asm/svm.h             |    3 
>  arch/x86/include/uapi/asm/hyperv.h     |    4 
>  arch/x86/include/uapi/asm/kvm_para.h   |    4 
>  arch/x86/kernel/acpi/boot.c            |    4 
>  arch/x86/kernel/head64.c               |    4 
>  arch/x86/kernel/mem_encrypt.S          |   44 ++
>  arch/x86/kernel/mpparse.c              |   10 
>  arch/x86/kernel/setup.c                |    7 
>  arch/x86/kernel/x8664_ksyms_64.c       |    1 
>  arch/x86/kvm/cpuid.c                   |    4 
>  arch/x86/kvm/mmu.c                     |   20 +
>  arch/x86/kvm/svm.c                     |  906 ++++++++++++++++++++++++++++++++
>  arch/x86/kvm/x86.c                     |   73 +++
>  arch/x86/mm/ioremap.c                  |    7 
>  arch/x86/mm/mem_encrypt.c              |   50 ++
>  arch/x86/platform/efi/efi_64.c         |   14 
>  arch/x86/realmode/init.c               |   11 
>  drivers/crypto/Kconfig                 |   11 
>  drivers/crypto/Makefile                |    1 
>  drivers/crypto/psp/Kconfig             |    8 
>  drivers/crypto/psp/Makefile            |    3 
>  drivers/crypto/psp/psp-dev.c           |  220 ++++++++
>  drivers/crypto/psp/psp-dev.h           |   95 +++
>  drivers/crypto/psp/psp-ops.c           |  454 ++++++++++++++++
>  drivers/crypto/psp/psp-pci.c           |  376 +++++++++++++
>  drivers/sfi/sfi_core.c                 |    6 
>  include/linux/ccp-psp.h                |  833 +++++++++++++++++++++++++++++
>  include/uapi/linux/Kbuild              |    1 
>  include/uapi/linux/ccp-psp.h           |  182 ++++++
>  include/uapi/linux/kvm.h               |  125 ++++
>  37 files changed, 3643 insertions(+), 41 deletions(-)
>  create mode 100644 arch/x86/boot/compressed/mem_encrypt.S
>  create mode 100644 drivers/crypto/psp/Kconfig
>  create mode 100644 drivers/crypto/psp/Makefile
>  create mode 100644 drivers/crypto/psp/psp-dev.c
>  create mode 100644 drivers/crypto/psp/psp-dev.h
>  create mode 100644 drivers/crypto/psp/psp-ops.c
>  create mode 100644 drivers/crypto/psp/psp-pci.c
>  create mode 100644 include/linux/ccp-psp.h
>  create mode 100644 include/uapi/linux/ccp-psp.h
> 

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 00/28] x86: Secure Encrypted Virtualization (AMD)
  2016-10-13 11:19 ` [RFC PATCH v1 00/28] x86: Secure Encrypted Virtualization (AMD) Paolo Bonzini
@ 2016-10-17 13:51     ` Brijesh Singh
  0 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-10-17 13:51 UTC (permalink / raw)
  To: Paolo Bonzini, simon.guinot, linux-efi, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab, iamjoonsoo.kim, labbott, tony.luck
  Cc: brijesh.singh

Hi Paolo,

Thanks for the reviews. I will incorporate your feedback in v2.

On 10/13/2016 06:19 AM, Paolo Bonzini wrote:
>
>
> On 23/08/2016 01:23, Brijesh Singh wrote:
>> TODO:
>> - send qemu/seabios RFC's on respective mailing list
>> - integrate the psp driver with CCP driver (they share the PCI id's)
>> - add SEV guest migration command support
>> - add SEV snapshotting command support
>> - determine how to do ioremap of physical memory with mem encryption enabled
>>   (e.g acpi tables)
>
> They would be encrypted, right?  Similar to the EFI data in patch 9.

Yes.

>
>> - determine how to share the guest memory with the hypervisor to support
>>   pvclock driver
>
> Is it enough if the guest makes that page unencrypted?
>

Yes, that should be enough. If the guest can mark a page as unencrypted, 
then the hypervisor should be able to read and write that particular page.

Tom's patches introduced an API (set_memory_dec()) to mark memory as 
unencrypted, but the pvclock driver runs very early during boot (while 
IRQs are disabled). Because of this we are not able to use 
set_memory_dec() to mark the page as unencrypted. We will need to come 
up with a method for handling these cases.

> I reviewed the KVM host-side patches and they are pretty
> straightforward, so the comments on each patch suffice.
>
> Thanks,
>
> Paolo
>
>> Brijesh Singh (11):
>>       crypto: add AMD Platform Security Processor driver
>>       KVM: SVM: prepare to reserve asid for SEV guest
>>       KVM: SVM: prepare for SEV guest management API support
>>       KVM: introduce KVM_SEV_ISSUE_CMD ioctl
>>       KVM: SVM: add SEV launch start command
>>       KVM: SVM: add SEV launch update command
>>       KVM: SVM: add SEV_LAUNCH_FINISH command
>>       KVM: SVM: add KVM_SEV_GUEST_STATUS command
>>       KVM: SVM: add KVM_SEV_DEBUG_DECRYPT command
>>       KVM: SVM: add KVM_SEV_DEBUG_ENCRYPT command
>>       KVM: SVM: add command to query SEV API version
>>
>> Tom Lendacky (17):
>>       kvm: svm: Add support for additional SVM NPF error codes
>>       kvm: svm: Add kvm_fast_pio_in support
>>       kvm: svm: Use the hardware provided GPA instead of page walk
>>       x86: Secure Encrypted Virtualization (SEV) support
>>       KVM: SVM: prepare for new bit definition in nested_ctl
>>       KVM: SVM: Add SEV feature definitions to KVM
>>       x86: Do not encrypt memory areas if SEV is enabled
>>       Access BOOT related data encrypted with SEV active
>>       x86/efi: Access EFI data as encrypted when SEV is active
>>       x86: Change early_ioremap to early_memremap for BOOT data
>>       x86: Don't decrypt trampoline area if SEV is active
>>       x86: DMA support for SEV memory encryption
>>       iommu/amd: AMD IOMMU support for SEV
>>       x86: Don't set the SME MSR bit when SEV is active
>>       x86: Unroll string I/O when SEV is active
>>       x86: Add support to determine if running with SEV enabled
>>       KVM: SVM: Enable SEV by setting the SEV_ENABLE cpu feature
>>
>>
>>  arch/x86/boot/compressed/Makefile      |    2
>>  arch/x86/boot/compressed/head_64.S     |   19 +
>>  arch/x86/boot/compressed/mem_encrypt.S |  123 ++++
>>  arch/x86/include/asm/io.h              |   26 +
>>  arch/x86/include/asm/kvm_emulate.h     |    3
>>  arch/x86/include/asm/kvm_host.h        |   27 +
>>  arch/x86/include/asm/mem_encrypt.h     |    3
>>  arch/x86/include/asm/svm.h             |    3
>>  arch/x86/include/uapi/asm/hyperv.h     |    4
>>  arch/x86/include/uapi/asm/kvm_para.h   |    4
>>  arch/x86/kernel/acpi/boot.c            |    4
>>  arch/x86/kernel/head64.c               |    4
>>  arch/x86/kernel/mem_encrypt.S          |   44 ++
>>  arch/x86/kernel/mpparse.c              |   10
>>  arch/x86/kernel/setup.c                |    7
>>  arch/x86/kernel/x8664_ksyms_64.c       |    1
>>  arch/x86/kvm/cpuid.c                   |    4
>>  arch/x86/kvm/mmu.c                     |   20 +
>>  arch/x86/kvm/svm.c                     |  906 ++++++++++++++++++++++++++++++++
>>  arch/x86/kvm/x86.c                     |   73 +++
>>  arch/x86/mm/ioremap.c                  |    7
>>  arch/x86/mm/mem_encrypt.c              |   50 ++
>>  arch/x86/platform/efi/efi_64.c         |   14
>>  arch/x86/realmode/init.c               |   11
>>  drivers/crypto/Kconfig                 |   11
>>  drivers/crypto/Makefile                |    1
>>  drivers/crypto/psp/Kconfig             |    8
>>  drivers/crypto/psp/Makefile            |    3
>>  drivers/crypto/psp/psp-dev.c           |  220 ++++++++
>>  drivers/crypto/psp/psp-dev.h           |   95 +++
>>  drivers/crypto/psp/psp-ops.c           |  454 ++++++++++++++++
>>  drivers/crypto/psp/psp-pci.c           |  376 +++++++++++++
>>  drivers/sfi/sfi_core.c                 |    6
>>  include/linux/ccp-psp.h                |  833 +++++++++++++++++++++++++++++
>>  include/uapi/linux/Kbuild              |    1
>>  include/uapi/linux/ccp-psp.h           |  182 ++++++
>>  include/uapi/linux/kvm.h               |  125 ++++
>>  37 files changed, 3643 insertions(+), 41 deletions(-)
>>  create mode 100644 arch/x86/boot/compressed/mem_encrypt.S
>>  create mode 100644 drivers/crypto/psp/Kconfig
>>  create mode 100644 drivers/crypto/psp/Makefile
>>  create mode 100644 drivers/crypto/psp/psp-dev.c
>>  create mode 100644 drivers/crypto/psp/psp-dev.h
>>  create mode 100644 drivers/crypto/psp/psp-ops.c
>>  create mode 100644 drivers/crypto/psp/psp-pci.c
>>  create mode 100644 include/linux/ccp-psp.h
>>  create mode 100644 include/uapi/linux/ccp-psp.h
>>

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 21/28] KVM: introduce KVM_SEV_ISSUE_CMD ioctl
  2016-10-13 10:45     ` Paolo Bonzini
  (?)
@ 2016-10-17 17:57       ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-10-17 17:57 UTC (permalink / raw)
  To: Paolo Bonzini, simon.guinot, linux-efi, kvm, rkrcmar, matt,
	linus.walleij, linux-mm, paul.gortmaker, hpa, dan.j.williams,
	aarcange, sfr, andriy.shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross.zwisler, bp, dyoung, thomas.lendacky,
	jroedel, keescook, toshi.kani, mathieu.desnoyers, devel, tglx,
	mchehab
  Cc: brijesh.singh

Hi Paolo,


On 10/13/2016 05:45 AM, Paolo Bonzini wrote:
>
>
> On 23/08/2016 01:28, Brijesh Singh wrote:
>> The ioctl will be used by qemu to issue the Secure Encrypted
>> Virtualization (SEV) guest commands to transition a guest into
>> SEV-enabled mode.
>>
>> a typical usage:
>>
>> struct kvm_sev_launch_start start;
>> struct kvm_sev_issue_cmd data;
>>
>> data.cmd = KVM_SEV_LAUNCH_START;
>> data.opaque = &start;
>>
>> ret = ioctl(fd, KVM_SEV_ISSUE_CMD, &data);
>>
>> On SEV command failure, data.ret_code will contain the firmware error code.
>
> Please modify the ioctl to require the file descriptor for the PSP.  A
> program without access to /dev/psp should not be able to use SEV.
>

I am not sure I fully understand this feedback. Let me summarize what 
we have right now.

At the highest level, SEV key management commands are divided into two sections:

- platform management: commands used during platform provisioning. The 
PSP driver provides ioctls for these commands. Qemu will not use these 
ioctls; I believe they will be used by other tools.

- guest management: commands used during the guest life cycle. The PSP 
driver exports various functions, and the KVM driver calls these 
functions when it receives the SEV_ISSUE_CMD ioctl from qemu.

If I understand correctly, you are recommending that instead of 
exporting various functions from the PSP driver we should expose one 
function for all the guest command handling, something like this:

int psp_issue_cmd_external_user(struct file *filep,
			    	int cmd, unsigned long addr,
			    	int *psp_ret)
{
	/* Ensure that file->f_op is a valid psp instance. */
	if (filep->f_op != &psp_fops)
		return -EINVAL;

	/* handle the command */
	return psp_issue_cmd(cmd, addr, timeout, psp_ret);
}

In KVM driver use something like this to invoke the PSP command handler.

int kvm_sev_psp_cmd (struct kvm_sev_issue_cmd *input,
		     unsigned long data)
{
	int ret;
	struct fd f;

	f = fdget(input->psp_fd);
	if (!f.file)
		return -EBADF;
	....

	psp_issue_cmd_external_user(f.file, input->cmd,
				    data, &input->psp_ret);
	....
}

Please let me know if I understood this correctly.

>> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
>> ---
>>  arch/x86/include/asm/kvm_host.h |    3 +
>>  arch/x86/kvm/x86.c              |   13 ++++
>>  include/uapi/linux/kvm.h        |  125 +++++++++++++++++++++++++++++++++++++++
>>  3 files changed, 141 insertions(+)
>>

^ permalink raw reply	[flat|nested] 255+ messages in thread


* Re: [RFC PATCH v1 21/28] KVM: introduce KVM_SEV_ISSUE_CMD ioctl
  2016-10-17 17:57       ` Brijesh Singh
  (?)
@ 2016-10-17 20:14         ` Paolo Bonzini
  -1 siblings, 0 replies; 255+ messages in thread
From: Paolo Bonzini @ 2016-10-17 20:14 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: simon guinot, linux-efi, kvm, rkrcmar, matt, linus walleij,
	linux-mm, paul gortmaker, hpa, dan j williams, aarcange, sfr,
	andriy shevchenko, herbert, bhe, xemul, joro, x86, mingo,
	msalter, ross zwisler, bp, dyoung, thomas lendacky, jroedel,
	keescook, toshi kani, mathieu desnoyers

> I am not sure I fully understand this feedback. Let me summarize what
> we have right now.
> 
> At highest level SEV key management commands are divided into two sections:
> 
> - platform  management : commands used during platform provisioning. PSP
> drv provides ioctl's for these commands. Qemu will not use these
> ioctl's, i believe these ioctl will be used by other tools.
> 
> - guest management: command used during guest life cycle. PSP drv
> exports various function and KVM drv calls these function when it
> receives the SEV_ISSUE_CMD ioctl from qemu.
> 
> If I understanding correctly then you are recommending that instead of
> exporting various functions from PSP drv we should expose one function
> for the all the guest command handling (something like this).

My understanding is that a user could exhaust the ASIDs for encrypted
VMs if it was allowed to start an arbitrary number of KVM guests.  So
we would need some kind of control.  Is this correct?

If so, does /dev/psp provide any functionality that you believe is
dangerous for the KVM userspace (which runs in a very confined
environment)?  Is this functionality blocked through capability
checks?

Thanks,

Paolo


> int psp_issue_cmd_external_user(struct file *filep,
> 			    	int cmd, unsigned long addr,
> 			    	int *psp_ret)
> {
> 	/* Ensure that file->f_op is a valid psp instance. */
> 	if (filep->f_op != &psp_fops)
> 		return -EINVAL;
> 
> 	/* handle the command */
> 	return psp_issue_cmd (cmd, addr, timeout, psp_ret);
> }
> 
> In KVM driver use something like this to invoke the PSP command handler.
> 
> int kvm_sev_psp_cmd (struct kvm_sev_issue_cmd *input,
> 		     unsigned long data)
> {
> 	int ret;
> 	struct fd f;
> 
> 	f = fdget(input->psp_fd);
> 	if (!f.file)
> 		return -EBADF;
> 	....
> 
> 	psp_issue_cmd_external_user(f.file, input->cmd,
> 				    data, &input->psp_ret);
> 	....
> }
> 
> Please let me know if I understood this correctly.
> 
> >> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> >> ---
> >>  arch/x86/include/asm/kvm_host.h |    3 +
> >>  arch/x86/kvm/x86.c              |   13 ++++
> >>  include/uapi/linux/kvm.h        |  125
> >>  +++++++++++++++++++++++++++++++++++++++
> >>  3 files changed, 141 insertions(+)
> >>
> 

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 21/28] KVM: introduce KVM_SEV_ISSUE_CMD ioctl
@ 2016-10-17 20:14         ` Paolo Bonzini
  0 siblings, 0 replies; 255+ messages in thread
From: Paolo Bonzini @ 2016-10-17 20:14 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: simon guinot, linux-efi, kvm, rkrcmar, matt, linus walleij,
	linux-mm, paul gortmaker, hpa, dan j williams, aarcange, sfr,
	andriy shevchenko, herbert, bhe, xemul, joro, x86, mingo,
	msalter, ross zwisler, bp, dyoung, thomas lendacky, jroedel,
	keescook, toshi kani, mathieu desnoyers, de

> I am not sure if I fully understand this feedback. Let me summaries what
> we have right now.
> 
> At highest level SEV key management commands are divided into two sections:
> 
> - platform  management : commands used during platform provisioning. PSP
> drv provides ioctl's for these commands. Qemu will not use these
> ioctl's, i believe these ioctl will be used by other tools.
> 
> - guest management: command used during guest life cycle. PSP drv
> exports various function and KVM drv calls these function when it
> receives the SEV_ISSUE_CMD ioctl from qemu.
> 
> If I understanding correctly then you are recommending that instead of
> exporting various functions from PSP drv we should expose one function
> for the all the guest command handling (something like this).

My understanding is that a user could exhaust the ASIDs for encrypted
VMs if it was allowed to start an arbitrary number of KVM guests.  So
we would need some kind of control.  Is this correct?

If so, does /dev/psp provide any functionality that you believe is
dangerous for the KVM userspace (which runs in a very confined
environment)?  Is this functionality blocked through capability
checks?

Thanks,

Paolo


> int psp_issue_cmd_external_user(struct file *filep,
> 			    	int cmd, unsigned long addr,
> 			    	int *psp_ret)
> {
> 	/* here we check to ensure that file->f_ops is a valid
> 	 * psp instance.
>           */
> 	if (filep->f_ops != &psp_fops)
> 		return -EINVAL;
> 
> 	/* handle the command */
> 	return psp_issue_cmd (cmd, addr, timeout, psp_ret);
> }
> 
> In KVM driver use something like this to invoke the PSP command handler.
> 
> int kvm_sev_psp_cmd (struct kvm_sev_issue_cmd *input,
> 		     unsigned long data)
> {
> 	int ret;
> 	struct fd f;
> 
> 	f = fdget(input->psp_fd);
> 	if (!f.file)
> 		return -EBADF;
> 	....
> 
> 	psp_issue_cmd_external_user(f.file, input->cmd,
> 				    data, &input->psp_ret);
> 	....
> }
> 
> Please let me know if I understood this correctly.
> 
> >> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> >> ---
> >>  arch/x86/include/asm/kvm_host.h |    3 +
> >>  arch/x86/kvm/x86.c              |   13 ++++
> >>  include/uapi/linux/kvm.h        |  125
> >>  +++++++++++++++++++++++++++++++++++++++
> >>  3 files changed, 141 insertions(+)
> >>
> 

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 21/28] KVM: introduce KVM_SEV_ISSUE_CMD ioctl
@ 2016-10-17 20:14         ` Paolo Bonzini
  0 siblings, 0 replies; 255+ messages in thread
From: Paolo Bonzini @ 2016-10-17 20:14 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: simon guinot, linux-efi, kvm, rkrcmar, matt, linus walleij,
	linux-mm, paul gortmaker, hpa, dan j williams, aarcange, sfr,
	andriy shevchenko, herbert, bhe, xemul, joro, x86, mingo,
	msalter, ross zwisler, bp, dyoung, thomas lendacky, jroedel,
	keescook, toshi kani, mathieu desnoyers, devel, tglx, mchehab

> I am not sure if I fully understand this feedback. Let me summaries what
> we have right now.
> 
> At highest level SEV key management commands are divided into two sections:
> 
> - platform  management : commands used during platform provisioning. PSP
> drv provides ioctl's for these commands. Qemu will not use these
> ioctl's, i believe these ioctl will be used by other tools.
> 
> - guest management: command used during guest life cycle. PSP drv
> exports various function and KVM drv calls these function when it
> receives the SEV_ISSUE_CMD ioctl from qemu.
> 
> If I understanding correctly then you are recommending that instead of
> exporting various functions from PSP drv we should expose one function
> for the all the guest command handling (something like this).

My understanding is that a user could exhaust the ASIDs for encrypted
VMs if it was allowed to start an arbitrary number of KVM guests.  So
we would need some kind of control.  Is this correct?

If so, does /dev/psp provide any functionality that you believe is
dangerous for the KVM userspace (which runs in a very confined
environment)?  Is this functionality blocked through capability
checks?

Thanks,

Paolo


> int psp_issue_cmd_external_user(struct file *filep,
> 			    	int cmd, unsigned long addr,
> 			    	int *psp_ret)
> {
> 	/* here we check to ensure that file->f_ops is a valid
> 	 * psp instance.
>           */
> 	if (filep->f_ops != &psp_fops)
> 		return -EINVAL;
> 
> 	/* handle the command */
> 	return psp_issue_cmd (cmd, addr, timeout, psp_ret);
> }
> 
> In KVM driver use something like this to invoke the PSP command handler.
> 
> int kvm_sev_psp_cmd (struct kvm_sev_issue_cmd *input,
> 		     unsigned long data)
> {
> 	int ret;
> 	struct fd f;
> 
> 	f = fdget(input->psp_fd);
> 	if (!f.file)
> 		return -EBADF;
> 	....
> 
> 	psp_issue_cmd_external_user(f.file, input->cmd,
> 				    data, &input->psp_ret);
> 	....
> }
> 
> Please let me know if I understood this correctly.
> 
> >> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> >> ---
> >>  arch/x86/include/asm/kvm_host.h |    3 +
> >>  arch/x86/kvm/x86.c              |   13 ++++
> >>  include/uapi/linux/kvm.h        |  125
> >>  +++++++++++++++++++++++++++++++++++++++
> >>  3 files changed, 141 insertions(+)
> >>
> 

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: dont@kvack.org


* Re: [RFC PATCH v1 21/28] KVM: introduce KVM_SEV_ISSUE_CMD ioctl
  2016-10-17 20:14         ` Paolo Bonzini
@ 2016-10-18 19:32           ` Brijesh Singh
  -1 siblings, 0 replies; 255+ messages in thread
From: Brijesh Singh @ 2016-10-18 19:32 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: brijesh.singh, simon guinot, linux-efi, kvm, rkrcmar, matt,
	linus walleij, linux-mm, paul gortmaker, hpa, dan j williams,
	aarcange, sfr, andriy shevchenko, herbert, bhe, xemul, joro, x86,
	mingo, msalter, ross zwisler, bp, dyoung, thomas lendacky,
	jroedel, keescook, toshi kani, mathieu desnoyers, devel, tgl

Hi Paolo,

On 10/17/2016 03:14 PM, Paolo Bonzini wrote:
>> I am not sure if I fully understand this feedback. Let me summarize what
>> we have right now.
>>
>> At the highest level, SEV key management commands are divided into two sections:
>>
>> - platform management: commands used during platform provisioning. The
>> PSP driver provides ioctls for these commands. Qemu will not use these
>> ioctls; I believe they will be used by other tools.
>>
>> - guest management: commands used during the guest life cycle. The PSP
>> driver exports various functions, and the KVM driver calls these
>> functions when it receives the SEV_ISSUE_CMD ioctl from qemu.
>>
>> If I understand correctly, you are recommending that instead of
>> exporting various functions from the PSP driver we should expose one
>> function for all the guest command handling (something like this).
>
> My understanding is that a user could exhaust the ASIDs for encrypted
> VMs if it was allowed to start an arbitrary number of KVM guests.  So
> we would need some kind of control.  Is this correct?
>

Yes, there is a limited number of ASIDs for encrypted VMs. Do we need to 
pass the psp_fd into the SEV_ISSUE_CMD ioctl, or can we handle it from Qemu 
itself? E.g., when the user asks to transition a guest into SEV-enabled mode, 
then before calling LAUNCH_START Qemu tries to open the /dev/psp device. If 
open() returns success, then we know the user has permission to communicate 
with PSP firmware. Please let me know if I am missing something.

> If so, does /dev/psp provide any functionality that you believe is
> dangerous for the KVM userspace (which runs in a very confined
> environment)?  Is this functionality blocked through capability
> checks?
>

I do not see /dev/psp providing anything which would be dangerous to KVM 
userspace. It should be safe to expose /dev/psp to KVM userspace.

> Thanks,
>
> Paolo
>
>
>> int psp_issue_cmd_external_user(struct file *filep,
>> 				int cmd, unsigned long addr,
>> 				int *psp_ret)
>> {
>> 	/* check that the file is really a PSP instance */
>> 	if (filep->f_op != &psp_fops)
>> 		return -EINVAL;
>>
>> 	/* handle the command */
>> 	return psp_issue_cmd(cmd, addr, timeout, psp_ret);
>> }
>>
>> In the KVM driver, use something like this to invoke the PSP command handler.
>>
>> int kvm_sev_psp_cmd (struct kvm_sev_issue_cmd *input,
>> 		     unsigned long data)
>> {
>> 	int ret;
>> 	struct fd f;
>>
>> 	f = fdget(input->psp_fd);
>> 	if (!f.file)
>> 		return -EBADF;
>> 	....
>>
>> 	psp_issue_cmd_external_user(f.file, input->cmd,
>> 				    data, &input->psp_ret);
>> 	....
>> }
>>
>> Please let me know if I understood this correctly.
>>
>>>> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
>>>> ---
>>>>  arch/x86/include/asm/kvm_host.h |    3 +
>>>>  arch/x86/kvm/x86.c              |   13 ++++
>>>>  include/uapi/linux/kvm.h        |  125
>>>>  +++++++++++++++++++++++++++++++++++++++
>>>>  3 files changed, 141 insertions(+)
>>>>
>>

* Re: [RFC PATCH v1 21/28] KVM: introduce KVM_SEV_ISSUE_CMD ioctl
  2016-10-18 19:32           ` Brijesh Singh
@ 2016-10-18 21:44             ` Paolo Bonzini
  -1 siblings, 0 replies; 255+ messages in thread
From: Paolo Bonzini @ 2016-10-18 21:44 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: simon guinot, linux-efi, kvm, rkrcmar, matt, linus walleij,
	linux-mm, paul gortmaker, hpa, dan j williams, aarcange, sfr,
	andriy shevchenko, herbert, bhe, xemul, joro, x86, mingo,
	msalter, ross zwisler, bp, dyoung, thomas lendacky, jroedel,
	keescook, toshi kani, mathieu desnoyers


> > If I understand correctly, you are recommending that instead of
> > exporting various functions from the PSP driver we should expose one
> > function for all the guest command handling (something like this).
> >
> > My understanding is that a user could exhaust the ASIDs for encrypted
> > VMs if it was allowed to start an arbitrary number of KVM guests.  So
> > we would need some kind of control.  Is this correct?
> 
> Yes, there is a limited number of ASIDs for encrypted VMs. Do we need to
> pass the psp_fd into the SEV_ISSUE_CMD ioctl, or can we handle it from Qemu
> itself? E.g., when the user asks to transition a guest into SEV-enabled mode,
> then before calling LAUNCH_START Qemu tries to open the /dev/psp device. If
> open() returns success, then we know the user has permission to communicate
> with PSP firmware.

No, this is a stateful mechanism and it's hard to implement.  Passing a
/dev/psp file descriptor is the simplest way to "prove" that you have
access to the device.

Thanks,

Paolo

> > If so, does /dev/psp provide any functionality that you believe is
> > dangerous for the KVM userspace (which runs in a very confined
> > environment)?  Is this functionality blocked through capability
> > checks?
> 
> I do not see /dev/psp providing anything which would be dangerous to KVM
> userspace. It should be safe to expose /dev/psp to KVM userspace.

end of thread, other threads:[~2016-10-18 21:45 UTC | newest]

Thread overview: 255+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-08-22 23:23 [RFC PATCH v1 00/28] x86: Secure Encrypted Virtualization (AMD) Brijesh Singh
2016-08-22 23:23 ` [RFC PATCH v1 01/28] kvm: svm: Add support for additional SVM NPF error codes Brijesh Singh
2016-09-13  9:56   ` Borislav Petkov
2016-08-22 23:23 ` [RFC PATCH v1 02/28] kvm: svm: Add kvm_fast_pio_in support Brijesh Singh
2016-09-21 10:58   ` Borislav Petkov
2016-08-22 23:24 ` [RFC PATCH v1 03/28] kvm: svm: Use the hardware provided GPA instead of page walk Brijesh Singh
2016-09-21 17:16   ` Borislav Petkov
2016-08-22 23:24 ` [RFC PATCH v1 04/28] x86: Secure Encrypted Virtualization (SEV) support Brijesh Singh
2016-09-22 15:00   ` Borislav Petkov
2016-08-22 23:24 ` [RFC PATCH v1 05/28] KVM: SVM: prepare for new bit definition in nested_ctl Brijesh Singh
2016-09-22 14:17   ` Borislav Petkov
2016-08-22 23:24 ` [RFC PATCH v1 06/28] KVM: SVM: Add SEV feature definitions to KVM Brijesh Singh
2016-08-22 23:24 ` [RFC PATCH v1 07/28] x86: Do not encrypt memory areas if SEV is enabled Brijesh Singh
2016-08-22 23:25 ` [RFC PATCH v1 08/28] Access BOOT related data encrypted with SEV active Brijesh Singh
2016-08-22 23:25 ` [RFC PATCH v1 09/28] x86/efi: Access EFI data as encrypted when SEV is active Brijesh Singh
2016-09-22 14:35   ` Borislav Petkov
2016-09-22 14:45     ` Paolo Bonzini
2016-09-22 14:59       ` Borislav Petkov
2016-09-22 15:05         ` Paolo Bonzini
2016-09-22 17:07           ` Borislav Petkov
2016-09-22 17:08             ` Paolo Bonzini
2016-09-22 17:27               ` Borislav Petkov
2016-09-22 19:04             ` Tom Lendacky
2016-09-22 19:11               ` Borislav Petkov
2016-09-22 19:49                 ` Tom Lendacky
2016-09-22 20:10                   ` Borislav Petkov
2016-09-22 18:59         ` Tom Lendacky
2016-09-22 18:47       ` Tom Lendacky
2016-09-22 18:50         ` Paolo Bonzini
2016-09-22 17:46     ` Tom Lendacky
2016-09-22 18:23       ` Paolo Bonzini
2016-09-22 18:37         ` Borislav Petkov
     [not found]           ` <20160922183759.7ahw2kbxit3epnzk-fF5Pk5pvG8Y@public.gmane.org>
2016-09-22 18:44             ` Paolo Bonzini
2016-09-23  9:33           ` Kai Huang
2016-09-23  9:50             ` Borislav Petkov
2016-08-22 23:25 ` [RFC PATCH v1 10/28] x86: Change early_ioremap to early_memremap for BOOT data Brijesh Singh
2016-08-22 23:25 ` [RFC PATCH v1 11/28] x86: Don't decrypt trampoline area if SEV is active Brijesh Singh
2016-08-22 23:26 ` [RFC PATCH v1 12/28] x86: DMA support for SEV memory encryption Brijesh Singh
2016-08-22 23:26 ` [RFC PATCH v1 13/28] iommu/amd: AMD IOMMU support for SEV Brijesh Singh
2016-08-22 23:26 ` [RFC PATCH v1 14/28] x86: Don't set the SME MSR bit when SEV is active Brijesh Singh
2016-08-22 23:26 ` [RFC PATCH v1 15/28] x86: Unroll string I/O " Brijesh Singh
2016-08-22 23:26 ` [RFC PATCH v1 16/28] x86: Add support to determine if running with SEV enabled Brijesh Singh
2016-08-22 23:27 ` [RFC PATCH v1 17/28] KVM: SVM: Enable SEV by setting the SEV_ENABLE cpu feature Brijesh Singh
2016-08-22 23:27 ` [RFC PATCH v1 18/28] crypto: add AMD Platform Security Processor driver Brijesh Singh
2016-08-23  7:14   ` Herbert Xu
2016-08-24 12:02     ` Tom Lendacky
2016-08-22 23:27 ` [RFC PATCH v1 19/28] KVM: SVM: prepare to reserve asid for SEV guest Brijesh Singh
2016-10-13 10:17   ` Paolo Bonzini
2016-08-22 23:28 ` [RFC PATCH v1 20/28] KVM: SVM: prepare for SEV guest management API support Brijesh Singh
2016-10-13 10:41   ` Paolo Bonzini
2016-08-22 23:28 ` [RFC PATCH v1 21/28] KVM: introduce KVM_SEV_ISSUE_CMD ioctl Brijesh Singh
2016-10-13 10:45   ` Paolo Bonzini
2016-10-17 17:57     ` Brijesh Singh
2016-10-17 20:14       ` Paolo Bonzini
2016-10-18 19:32         ` Brijesh Singh
2016-10-18 21:44           ` Paolo Bonzini
2016-08-22 23:28 ` [RFC PATCH v1 22/28] KVM: SVM: add SEV launch start command Brijesh Singh
2016-10-13 11:12   ` Paolo Bonzini
2016-08-22 23:28 ` [RFC PATCH v1 23/28] KVM: SVM: add SEV launch update command Brijesh Singh
2016-08-22 23:28 ` [RFC PATCH v1 24/28] KVM: SVM: add SEV_LAUNCH_FINISH command Brijesh Singh
2016-10-13 11:16   ` Paolo Bonzini
2016-08-22 23:29 ` [RFC PATCH v1 25/28] KVM: SVM: add KVM_SEV_GUEST_STATUS command Brijesh Singh
2016-08-22 23:29 ` [RFC PATCH v1 26/28] KVM: SVM: add KVM_SEV_DEBUG_DECRYPT command Brijesh Singh
2016-08-22 23:29 ` [RFC PATCH v1 27/28] KVM: SVM: add KVM_SEV_DEBUG_ENCRYPT command Brijesh Singh
2016-08-22 23:29 ` [RFC PATCH v1 28/28] KVM: SVM: add command to query SEV API version Brijesh Singh
2016-10-13 11:19 ` [RFC PATCH v1 00/28] x86: Secure Encrypted Virtualization (AMD) Paolo Bonzini
2016-10-17 13:51   ` Brijesh Singh
