From: "Cédric Le Goater" <clg@kaod.org>
To: Ram Pai <linuxram@us.ibm.com>
Cc: aik@ozlabs.ru, andmike@linux.ibm.com, groug@kaod.org,
	kvm-ppc@vger.kernel.org, sukadev@linux.vnet.ibm.com,
	linuxppc-dev@lists.ozlabs.org, bauerman@linux.ibm.com,
	David Gibson <david@gibson.dropbear.id.au>
Subject: Re: [EXTERNAL] Re: [RFC PATCH v1] powerpc/prom_init: disable XIVE in Secure VM.
Date: Tue, 3 Mar 2020 20:08:51 +0100
Message-ID: <6f7ea308-3505-6070-dde1-20fee8fdddc3@kaod.org>
In-Reply-To: <20200303170205.GA5416@oc0525413822.ibm.com>

>>>   4) I'm guessing the problem with XIVE in SVM mode is that XIVE needs
>>>      to write to event queues in guest memory, which would have to be
>>>      explicitly shared for secure mode.  That's true whether it's KVM
>>>      or qemu accessing the guest memory, so kernel_irqchip=on/off is
>>>      entirely irrelevant.
>>
>> This problem should already be fixed.
>> The XIVE event queues are shared
>  	
> Yes, I have a patch for the guest kernel that shares the event
> queue page with the hypervisor. This is done using the
> UV_SHARE_PAGE ultracall. This patch is not sent out to any mailing
> lists yet. However, the patch by itself does not solve the XIVE problem
> for secure VMs.

Yes, because you also need to share the XIVE TIMA and ESB pages mapped
in xive_native_esb_fault() and xive_native_tima_fault().
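
To make the event queue half of this concrete: sharing a guest page with
the hypervisor boils down to a single ultracall through the in-kernel
uv_share_page() helper. A minimal sketch (the function name
xive_share_eq_page() and its call site are made up for illustration, this
is not Ram's actual patch):

#include <linux/pfn.h>          /* PHYS_PFN() */
#include <asm/svm.h>            /* is_secure_guest() */
#include <asm/ultravisor.h>     /* uv_share_page() / UV_SHARE_PAGE */

/* Hedged sketch: expose the guest-allocated XIVE event queue page(s) to
 * the hypervisor when running as a secure VM. */
static int xive_share_eq_page(unsigned long qpage_phys, unsigned int order)
{
        if (!is_secure_guest())
                return 0;

        /* UV_SHARE_PAGE takes a start PFN and a page count. */
        return uv_share_page(PHYS_PFN(qpage_phys), 1UL << order);
}

The TIMA and ESB pages are a different story, because they are not guest
memory: they are device pages that KVM maps on demand from its fault
handlers.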

>> and the remaining problem with XIVE is the KVM page fault handler
>> populating the TIMA and ESB pages. The Ultravisor doesn't seem to support
>> this, and that breaks interrupt management in the guest.
> 
> Yes. This is the bigger issue that needs to be fixed. When the secure guest
> accesses the page associated with the XIVE memslot, a page fault is
> generated, which the ultravisor reflects to the hypervisor. The hypervisor
> seems to map a hardware page to that GPA. Unfortunately, it does not
> inform the ultravisor of that mapping. I am trying to understand the
> root cause. But since I am not sure what other issues I might run into
> after chasing down that one, I figured it is better to disable XIVE
> support in SVMs in the interim.
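
For reference, the fault path you describe looks roughly like the sketch
below. It is heavily simplified and lookup_hw_esb_pfn() is a made-up
placeholder, so treat it as an illustration of where the missing Ultravisor
notification would sit, not as the real xive_native_esb_fault():

#include <linux/mm.h>   /* vmf_insert_pfn(), vm_fault_t */

static vm_fault_t esb_fault_sketch(struct vm_fault *vmf)
{
        unsigned long hw_pfn = lookup_hw_esb_pfn(vmf); /* hypothetical helper */

        /*
         * Nothing on this path tells the Ultravisor that this hardware
         * page now backs the guest's XIVE memslot GPA, which is the gap
         * described above.
         */
        return vmf_insert_pfn(vmf->vma, vmf->address, hw_pfn);
}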

Is it possible to call uv_share_page() from the hypervisor?

> **** BTW: I figured I don't need this interim patch to disable XIVE for
> secure VMs. Just doing "svm=on xive=off" on the kernel command line is
> sufficient for now. *****

Yes. 
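
For anyone reproducing the workaround: it is just those two guest kernel
parameters. A hedged example of passing them straight to QEMU (the -kernel
and -initrd values are placeholders for your own images, and the rest of
the command line is elided):

qemu-system-ppc64 -M pseries -accel kvm \
        -kernel vmlinux -initrd initrd.img \
        -append "svm=on xive=off" ...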

>> But kernel_irqchip=off should work out of the box. It seems it doesn't.
>> Something to investigate.
> 
> Don't know why.

We need to understand why. 

You still need the patch to share the event queue page allocated by the 
guest OS because QEMU will enqueue events. But you should not need anything
else.

> Does this option prevent the chip from interrupting the
> guest directly, and instead mediate the interrupts through the hypervisor?

Yes. The KVM backend is unused, the XIVE interrupt controller is deactivated
for the guest, and QEMU notifies the vCPUs directly.
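
Concretely, that is this kind of invocation (a hedged example; the option
spelling can vary a little between QEMU versions, and the remaining
arguments are elided):

qemu-system-ppc64 -M pseries,ic-mode=xive,kernel_irqchip=off \
        -accel kvm ...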

The TIMA and ESB pages belong to the QEMU process, and the guest OS will do
some load and store operations on them for interrupt management. Is that
OK from a UV perspective?

Thanks,

C.
