* Nested paging in nested SVM setup
@ 2014-06-18 11:36 Valentine Sinitsyn
  2014-06-18 12:47 ` Jan Kiszka
  0 siblings, 1 reply; 38+ messages in thread
From: Valentine Sinitsyn @ 2014-06-18 11:36 UTC (permalink / raw)
  To: kvm

Hi all,

I'm using a KVM/Qemu nested SVM setup to debug another hypervisor 
(Jailhouse) I contribute to. IOW, the scheme is: AMD64 Linux host 
running [paravirtualized] AMD64 Linux guest (the same kernel as the 
host) running Jailhouse.

Jailhouse, in turn, uses Nested Paging to virtualize the xAPIC: the APIC 
page (0xfee00000, no APIC remapping) is mapped read-only into Jailhouse's 
guests. This of course implies that the APIC page must appear to Jailhouse 
guests as uncacheable (UC).

Is this achievable in the setup I described, or do I need to run my code 
on real hardware to make APIC page accesses in Jailhouse guests 
uncacheable?

Thanks in advance.

--
Best regards,
Valentine Sinitsyn

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-06-18 11:36 Nested paging in nested SVM setup Valentine Sinitsyn
@ 2014-06-18 12:47 ` Jan Kiszka
  2014-06-18 16:59   ` Valentine Sinitsyn
                     ` (2 more replies)
  0 siblings, 3 replies; 38+ messages in thread
From: Jan Kiszka @ 2014-06-18 12:47 UTC (permalink / raw)
  To: Valentine Sinitsyn, kvm

On 2014-06-18 13:36, Valentine Sinitsyn wrote:
> Hi all,
> 
> I'm using a KVM/Qemu nested SVM setup to debug another hypervisor
> (Jailhouse) I contribute to. IOW, the scheme is: AMD64 Linux host
> running [paravirtualized] AMD64 Linux guest (the same kernel as the
> host) running Jailhouse.
> 
> Jailhouse, in turn, uses Nested Paging to virtualize the xAPIC: the APIC
> page (0xfee00000, no APIC remapping) is mapped read-only into Jailhouse's
> guests. This of course implies that the APIC page must appear to Jailhouse
> guests as uncacheable (UC).
> 
> Is this achievable in the setup I described, or do I need to run my code
> on real hardware to make APIC page accesses in Jailhouse guests
> uncacheable?

If we want to provide useful nested SVM support, this must be feasible.
If there is a bug, it has to be fixed.

Maybe you can describe how you configured the involved units (NPT
structures, guest/host PAT, MTRRs, etc.).

Even better would be a test case based on kvm-unit-tests (see [1],
x86/svm.c) that replicates the observed behavior. If it reveals a bug,
this test would be very valuable for making sure it remains fixed (once
that is done).

Jan

[1] https://git.kernel.org/cgit/virt/kvm/kvm-unit-tests.git

-- 
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-06-18 12:47 ` Jan Kiszka
@ 2014-06-18 16:59   ` Valentine Sinitsyn
  2014-06-19  9:32     ` Paolo Bonzini
  2014-06-19  5:03   ` Valentine Sinitsyn
  2014-08-20  6:46   ` Valentine Sinitsyn
  2 siblings, 1 reply; 38+ messages in thread
From: Valentine Sinitsyn @ 2014-06-18 16:59 UTC (permalink / raw)
  To: Jan Kiszka, kvm

Hi Jan,

> If we want to provide useful nested SVM support, this must be feasible.
> If there is a bug, it has to be fixed.
I was mainly trying to find out whether this is supported (which would 
mean I am doing something wrong) or simply not supported (at least for now).

> Maybe you can describe how you configured the involved units (NPT
> structures, guest/host PAT, MTRRs, etc.).
I've tried different combinations, but to be specific:
- NPT: four-level long-mode page tables; all non-terminal PTEs have the 
U/S, R/W and P bits set (0x07), as per APM v2, section 15.25.5.
- APIC page PTE: physical address 0xfee00000; flags PAT, PWT, PCD, U/S, P 
(0x9D) - decomposed in the sketch just below.
- Guest PAT and host PAT are the same, 0x7010600070106 (as set by the 
Linux kernel). Guest PAT is stored in the VMCB; host PAT is restored on 
each #VMEXIT.
- MTRRs: no changes to what Linux uses prior to VM entry; #0 (base 
0x80000000, mask 0xFF80000000) is uncacheable, the others are disabled.
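
For reference, here is how that 0x9D flag combination decomposes (bit 
positions per the AMD64 APM for a 4 KB PTE; this is only an illustrative 
sketch, not an excerpt from the Jailhouse sources):

#include <stdint.h>

#define PTE_P   (1ULL << 0)   /* present */
#define PTE_US  (1ULL << 2)   /* user/supervisor */
#define PTE_PWT (1ULL << 3)   /* page-level write-through */
#define PTE_PCD (1ULL << 4)   /* page-level cache disable */
#define PTE_PAT (1ULL << 7)   /* high bit of the PAT index for 4 KB pages */

static const uint64_t apic_pte =
        0xfee00000ULL | PTE_PAT | PTE_PCD | PTE_PWT | PTE_US | PTE_P; /* flags == 0x9D */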

I also noticed that setting the PAT MSR from the nested hypervisor leaves 
the high word unassigned, i.e. code like this:

   mov $0x70106, %rax    # low dword of the new PAT value
   mov %rax, %rdx        # high dword: the same value
   mov $0x0277, %rcx     # MSR_IA32_CR_PAT
   wrmsr
   rdmsr

yields %rax = 0, %rdx = 0x70106.
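
For reference, wrmsr/rdmsr move the 64-bit MSR value through edx:eax 
(high:low), with the MSR index in ecx; minimal C accessors, shown only as 
an illustration (not code from Jailhouse or KVM):

#include <stdint.h>

#define MSR_IA32_CR_PAT 0x277

static inline uint64_t rdmsr(uint32_t msr)
{
        uint32_t lo, hi;

        asm volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
        return ((uint64_t)hi << 32) | lo;
}

static inline void wrmsr(uint32_t msr, uint64_t val)
{
        asm volatile("wrmsr" : :
                     "c"(msr), "a"((uint32_t)val), "d"((uint32_t)(val >> 32)));
}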

> Even better would be a test case based on kvm-unit-tests (see [1],
Will have a look at it, thanks.

--
Best regards,
Valentine Sinitsyn

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-06-18 12:47 ` Jan Kiszka
  2014-06-18 16:59   ` Valentine Sinitsyn
@ 2014-06-19  5:03   ` Valentine Sinitsyn
  2014-08-20  6:46   ` Valentine Sinitsyn
  2 siblings, 0 replies; 38+ messages in thread
From: Valentine Sinitsyn @ 2014-06-19  5:03 UTC (permalink / raw)
  To: Jan Kiszka, kvm

Hi all,

> If we want to provide useful nested SVM support, this must be feasible.
> If there is a bug, it has to be fixed.
I took a quick look at the KVM sources this morning, and although I may be 
wrong, this really looks like a bug.

The reason is that nested_svm_vmrun() does nothing with the host or guest 
PAT (so there is no easy way to set the memory type if NPT is used and the 
guest PAT has a non-default value). And since svm_set_cr0() explicitly 
clears CD (for a good reason, I suppose), there seems to be no way to 
control caching from the nested setup at all.
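
If my reading is right, the missing piece would be something along these 
lines in nested_svm_vmrun() (field names as in arch/x86/kvm/svm.c; just a 
sketch of the idea, not a tested fix):

        /* Sketch only: propagate the L1 guest's PAT into the VMCB used to
         * run it, so the memory types it programs take effect under NPT. */
        if (npt_enabled)
                svm->vmcb->save.g_pat = nested_vmcb->save.g_pat;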

--
Regards,
Valentine Sinitsyn

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-06-18 16:59   ` Valentine Sinitsyn
@ 2014-06-19  9:32     ` Paolo Bonzini
  0 siblings, 0 replies; 38+ messages in thread
From: Paolo Bonzini @ 2014-06-19  9:32 UTC (permalink / raw)
  To: Valentine Sinitsyn, Jan Kiszka, kvm

On 18/06/2014 18:59, Valentine Sinitsyn wrote:
>
> I also noticed that setting the PAT MSR from the nested hypervisor leaves
> the high word unassigned, i.e. code like this:
>
>   mov $0x70106, %rax
>   mov %rax, %rdx
>   mov $0x0277, %rcx
>   wrmsr
>   rdmsr
>
> yields %rax = 0, %rdx = 0x70106.

This should be the trivial fix:

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 0b140dc65bee..8a1cdc0f8fe7 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -461,7 +461,7 @@ struct kvm_vcpu_arch {
  	bool nmi_injected;    /* Trying to inject an NMI this entry */

  	struct mtrr_state_type mtrr_state;
-	u32 pat;
+	u64 pat;

  	unsigned switch_db_regs;
  	unsigned long db[KVM_NR_DB_REGS];
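
For illustration only (this is not kernel code): Linux's default PAT value 
simply does not fit in 32 bits, so a u32 field silently drops half of it.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t pat = 0x0007010600070106ULL;   /* full 64-bit PAT value */
        uint32_t stored = (uint32_t)pat;        /* what a u32 field keeps */

        printf("written %016llx, kept by a u32 field: %08x\n",
               (unsigned long long)pat, stored);
        return 0;
}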

Paolo

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-06-18 12:47 ` Jan Kiszka
  2014-06-18 16:59   ` Valentine Sinitsyn
  2014-06-19  5:03   ` Valentine Sinitsyn
@ 2014-08-20  6:46   ` Valentine Sinitsyn
  2014-08-20  6:55     ` Paolo Bonzini
  2014-09-01 17:04     ` Paolo Bonzini
  2 siblings, 2 replies; 38+ messages in thread
From: Valentine Sinitsyn @ 2014-08-20  6:46 UTC (permalink / raw)
  To: Jan Kiszka, kvm

Hi all,

Please excuse me for reviving a two-month-old thread, but I only recently 
had time to investigate the issue a bit.

On 18.06.2014 18:47, Jan Kiszka wrote:
> On 2014-06-18 13:36, Valentine Sinitsyn wrote:
> If we want to provide useful nested SVM support, this must be feasible.
> If there is a bug, it has to be fixed.
Looks like it is a bug in KVM. I had a chance to run the same code 
bare-metal ([1]; line 310 is uncommented for the bare-metal case but 
present for nested SVM), and it seems to work as expected. However, when I 
trace it in the nested SVM setup, after some successful APIC reads and 
writes, I get the following:

>  qemu-system-x86-1968  [001] 220417.681261: kvm_nested_vmexit:    rip: 0xffffffff8104f5b8 reason: npf ext_inf1: 0x000000010000000f ext_inf2: 0x00000000fee00300 ext_int: 0x00000000 ext_int_err: 0x00000000
>  qemu-system-x86-1968  [001] 220417.681261: kvm_page_fault:       address fee00300 error_code f
>  qemu-system-x86-1968  [001] 220417.681263: kvm_emulate_insn:     0:ffffffff8104f5b8:89 04 25 00 93 5f ff (prot64)
>  qemu-system-x86-1968  [001] 220417.681268: kvm_inj_exception:     (0x23c)
>  qemu-system-x86-1968  [001] 220417.681269: kvm_entry:            vcpu 0
>  qemu-system-x86-1968  [001] 220417.681271: kvm_exit:             reason  rip 0xffffffff8104f5b8 info 0 0

You can see the problem here: the code tries to access an APIC MMIO 
register, which is trapped by KVM's MMU code (at nested page table walk). 
During MMIO access emulation, KVM decides to inject exception 0x23c (which 
looks wrong, as no exception with this number is defined). After that, 
things go astray (note the empty exit reason in the last line; the VMCB is 
certainly not in a state KVM expects/supports).

I'm no KVM expert, and would be grateful for debugging suggestions (or 
maybe even assistance).

Many thanks for the help.

1. 
https://github.com/vsinitsyn/jailhouse/blob/amd-v/hypervisor/arch/x86/svm.c#L301

--
Regards,
Valentine Sinitsyn

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-08-20  6:46   ` Valentine Sinitsyn
@ 2014-08-20  6:55     ` Paolo Bonzini
  2014-08-20  7:37       ` Valentine Sinitsyn
  2014-09-01 17:04     ` Paolo Bonzini
  1 sibling, 1 reply; 38+ messages in thread
From: Paolo Bonzini @ 2014-08-20  6:55 UTC (permalink / raw)
  To: Valentine Sinitsyn, Jan Kiszka, kvm

On 20/08/2014 08:46, Valentine Sinitsyn wrote:
> 
> You can see the problem here: the code tries to access an APIC MMIO
> register, which is trapped by KVM's MMU code (at nested page table walk).
> During MMIO access emulation, KVM decides to inject exception 0x23c (which
> looks wrong, as no exception with this number is defined). After that,
> things go astray (note the empty exit reason in the last line; the VMCB is
> certainly not in a state KVM expects/supports).
> 
> I'm no KVM expert, and would be grateful for debugging suggestions (or
> maybe even assistance).

Is the 0x23c always the same?  Can you try this patch?

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 204422de3fed..194e9300a31b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -346,6 +346,7 @@ static void kvm_multiple_exception(struct kvm_vcpu *vcpu,
 
 	kvm_make_request(KVM_REQ_EVENT, vcpu);
 
+	WARN_ON(nr > 0x1f);
 	if (!vcpu->arch.exception.pending) {
 	queue:
 		vcpu->arch.exception.pending = true;

Paolo

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-08-20  6:55     ` Paolo Bonzini
@ 2014-08-20  7:37       ` Valentine Sinitsyn
  2014-08-20  8:11         ` Paolo Bonzini
  0 siblings, 1 reply; 38+ messages in thread
From: Valentine Sinitsyn @ 2014-08-20  7:37 UTC (permalink / raw)
  To: Paolo Bonzini, Jan Kiszka, kvm

Hi Paolo,

On 20.08.2014 12:55, Paolo Bonzini wrote:
> Is the 0x23c always the same?
No, it's just garbage - I've seen other values as well (0x80 last time).

>  Can you try this patch?
Sure. It does print a warning:

[ 2176.722098] ------------[ cut here ]------------
[ 2176.722118] WARNING: CPU: 0 PID: 1488 at 
/home/val/kvm-kmod/x86/x86.c:368 kvm_multiple_exception+0x121/0x130 [kvm]()
[ 2176.722121] Modules linked in: kvm_amd(O) kvm(O) amd_freq_sensitivity 
snd_hda_codec_realtek snd_hda_codec_hdmi snd_hda_codec_generic 
crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel 
snd_hda_intel aesni_intel snd_hda_controller radeon snd_hda_codec 
ipmi_si aes_x86_64 ipmi_msghandler snd_hwdep ttm r8169 ppdev mii lrw 
gf128mul snd_pcm glue_helper drm_kms_helper snd_timer fam15h_power evdev 
drm shpchp snd ablk_helper cryptd microcode mac_hid soundcore serio_raw 
pcspkr i2c_algo_bit k10temp i2c_piix4 i2c_core parport_pc parport hwmon 
edac_core tpm_tis edac_mce_amd tpm video button acpi_cpufreq processor 
ext4 crc16 mbcache jbd2 sd_mod crc_t10dif crct10dif_common atkbd libps2 
ahci libahci ohci_pci ohci_hcd ehci_pci xhci_hcd libata ehci_hcd usbcore 
scsi_mod usb_common i8042 serio [last unloaded: kvm]

[ 2176.722217] CPU: 0 PID: 1488 Comm: qemu-system-x86 Tainted: G 
W  O  3.16.1-1-ARCH #1
[ 2176.722221] Hardware name: To Be Filled By O.E.M. To Be Filled By 
O.E.M./IMB-A180, BIOS L0.17 05/24/2013
[ 2176.722224]  0000000000000000 0000000025350f51 ffff8800919fbbc0 
ffffffff8152ae6c
[ 2176.722229]  0000000000000000 ffff8800919fbbf8 ffffffff8106e45d 
ffff880037f68000
[ 2176.722234]  0000000000000080 0000000000000001 00000000000081a4 
0000000000000000
[ 2176.722239] Call Trace:
[ 2176.722250]  [<ffffffff8152ae6c>] dump_stack+0x4d/0x6f
[ 2176.722257]  [<ffffffff8106e45d>] warn_slowpath_common+0x7d/0xa0
[ 2176.722262]  [<ffffffff8106e58a>] warn_slowpath_null+0x1a/0x20
[ 2176.722275]  [<ffffffffa0651e41>] kvm_multiple_exception+0x121/0x130 
[kvm]
[ 2176.722288]  [<ffffffffa06594f8>] x86_emulate_instruction+0x548/0x640 
[kvm]
[ 2176.722303]  [<ffffffffa06653e1>] kvm_mmu_page_fault+0x91/0xf0 [kvm]
[ 2176.722310]  [<ffffffffa04eb6a7>] pf_interception+0xd7/0x180 [kvm_amd]
[ 2176.722317]  [<ffffffff8104e876>] ? native_apic_mem_write+0x6/0x10
[ 2176.722323]  [<ffffffffa04ef261>] handle_exit+0x141/0x9d0 [kvm_amd]
[ 2176.722335]  [<ffffffffa065512c>] ? kvm_set_cr8+0x1c/0x20 [kvm]
[ 2176.722341]  [<ffffffffa04ea3e0>] ? nested_svm_get_tdp_cr3+0x20/0x20 
[kvm_amd]
[ 2176.722355]  [<ffffffffa065adc7>] 
kvm_arch_vcpu_ioctl_run+0x597/0x1210 [kvm]
[ 2176.722368]  [<ffffffffa065705b>] ? kvm_arch_vcpu_load+0xbb/0x200 [kvm]
[ 2176.722378]  [<ffffffffa064a152>] kvm_vcpu_ioctl+0x2b2/0x5c0 [kvm]
[ 2176.722384]  [<ffffffff810b66b4>] ? __wake_up+0x44/0x50
[ 2176.722390]  [<ffffffff81200dcc>] ? fsnotify+0x28c/0x370
[ 2176.722397]  [<ffffffff811d4a70>] do_vfs_ioctl+0x2d0/0x4b0
[ 2176.722403]  [<ffffffff811df18e>] ? __fget+0x6e/0xb0
[ 2176.722408]  [<ffffffff811d4cd1>] SyS_ioctl+0x81/0xa0
[ 2176.722414]  [<ffffffff81530be9>] system_call_fastpath+0x16/0x1b
[ 2176.722418] ---[ end trace b0f81744c5a5ea4a ]---

Thanks,
Valentine

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-08-20  7:37       ` Valentine Sinitsyn
@ 2014-08-20  8:11         ` Paolo Bonzini
  2014-08-20  9:49           ` Valentine Sinitsyn
  2014-08-21  6:28           ` Valentine Sinitsyn
  0 siblings, 2 replies; 38+ messages in thread
From: Paolo Bonzini @ 2014-08-20  8:11 UTC (permalink / raw)
  To: Valentine Sinitsyn, Jan Kiszka, kvm

On 20/08/2014 09:37, Valentine Sinitsyn wrote:
> Hi Paolo,
> 
> On 20.08.2014 12:55, Paolo Bonzini wrote:
>> Is the 0x23c always the same?
> No, it's just garbage - I've seen other values as well (0x80 last time).
> 
>>  Can you try this patch?
> Sure. It does print a warning:
> 
> [ 2176.722098] ------------[ cut here ]------------
> [ 2176.722118] WARNING: CPU: 0 PID: 1488 at
> /home/val/kvm-kmod/x86/x86.c:368 kvm_multiple_exception+0x121/0x130 [kvm]()
> [ 2176.722121] Modules linked in: kvm_amd(O) kvm(O) amd_freq_sensitivity
> snd_hda_codec_realtek snd_hda_codec_hdmi snd_hda_codec_generic
> crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel
> snd_hda_intel aesni_intel snd_hda_controller radeon snd_hda_codec
> ipmi_si aes_x86_64 ipmi_msghandler snd_hwdep ttm r8169 ppdev mii lrw
> gf128mul snd_pcm glue_helper drm_kms_helper snd_timer fam15h_power evdev
> drm shpchp snd ablk_helper cryptd microcode mac_hid soundcore serio_raw
> pcspkr i2c_algo_bit k10temp i2c_piix4 i2c_core parport_pc parport hwmon
> edac_core tpm_tis edac_mce_amd tpm video button acpi_cpufreq processor
> ext4 crc16 mbcache jbd2 sd_mod crc_t10dif crct10dif_common atkbd libps2
> ahci libahci ohci_pci ohci_hcd ehci_pci xhci_hcd libata ehci_hcd usbcore
> scsi_mod usb_common i8042 serio [last unloaded: kvm]
> 
> [ 2176.722217] CPU: 0 PID: 1488 Comm: qemu-system-x86 Tainted: G W  O 
> 3.16.1-1-ARCH #1
> [ 2176.722221] Hardware name: To Be Filled By O.E.M. To Be Filled By
> O.E.M./IMB-A180, BIOS L0.17 05/24/2013
> [ 2176.722224]  0000000000000000 0000000025350f51 ffff8800919fbbc0
> ffffffff8152ae6c
> [ 2176.722229]  0000000000000000 ffff8800919fbbf8 ffffffff8106e45d
> ffff880037f68000
> [ 2176.722234]  0000000000000080 0000000000000001 00000000000081a4
> 0000000000000000
> [ 2176.722239] Call Trace:
> [ 2176.722250]  [<ffffffff8152ae6c>] dump_stack+0x4d/0x6f
> [ 2176.722257]  [<ffffffff8106e45d>] warn_slowpath_common+0x7d/0xa0
> [ 2176.722262]  [<ffffffff8106e58a>] warn_slowpath_null+0x1a/0x20
> [ 2176.722275]  [<ffffffffa0651e41>] kvm_multiple_exception+0x121/0x130
> [kvm]
> [ 2176.722288]  [<ffffffffa06594f8>] x86_emulate_instruction+0x548/0x640
> [kvm]
> [ 2176.722303]  [<ffffffffa06653e1>] kvm_mmu_page_fault+0x91/0xf0 [kvm]
> [ 2176.722310]  [<ffffffffa04eb6a7>] pf_interception+0xd7/0x180 [kvm_amd]
> [ 2176.722317]  [<ffffffff8104e876>] ? native_apic_mem_write+0x6/0x10
> [ 2176.722323]  [<ffffffffa04ef261>] handle_exit+0x141/0x9d0 [kvm_amd]
> [ 2176.722335]  [<ffffffffa065512c>] ? kvm_set_cr8+0x1c/0x20 [kvm]
> [ 2176.722341]  [<ffffffffa04ea3e0>] ? nested_svm_get_tdp_cr3+0x20/0x20
> [kvm_amd]
> [ 2176.722355]  [<ffffffffa065adc7>]
> kvm_arch_vcpu_ioctl_run+0x597/0x1210 [kvm]
> [ 2176.722368]  [<ffffffffa065705b>] ? kvm_arch_vcpu_load+0xbb/0x200 [kvm]
> [ 2176.722378]  [<ffffffffa064a152>] kvm_vcpu_ioctl+0x2b2/0x5c0 [kvm]
> [ 2176.722384]  [<ffffffff810b66b4>] ? __wake_up+0x44/0x50
> [ 2176.722390]  [<ffffffff81200dcc>] ? fsnotify+0x28c/0x370
> [ 2176.722397]  [<ffffffff811d4a70>] do_vfs_ioctl+0x2d0/0x4b0
> [ 2176.722403]  [<ffffffff811df18e>] ? __fget+0x6e/0xb0
> [ 2176.722408]  [<ffffffff811d4cd1>] SyS_ioctl+0x81/0xa0
> [ 2176.722414]  [<ffffffff81530be9>] system_call_fastpath+0x16/0x1b
> [ 2176.722418] ---[ end trace b0f81744c5a5ea4a ]---
> 
> Thanks,
> Valentine
> -- 
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 

I audited the various places that return X86EMUL_PROPAGATE_FAULT and
I think the culprit is this code in paging_tmpl.h.

 	real_gpa = mmu->translate_gpa(vcpu, gfn_to_gpa(gfn), access);
	if (real_gpa == UNMAPPED_GVA)
 		return 0;

It returns zero without setting fault.vector.

Another patch...  I will post parts of it separately.  If I am right,
you should get 0xfe as the vector and a WARN from the gva_to_gpa function.

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index ef297919a691..e5bf13003cd2 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -527,6 +527,7 @@ static unsigned long seg_base(struct x86_emulate_ctxt *ctxt, int seg)
 static int emulate_exception(struct x86_emulate_ctxt *ctxt, int vec,
 			     u32 error, bool valid)
 {
+	WARN_ON(vec > 0x1f);
 	ctxt->exception.vector = vec;
 	ctxt->exception.error_code = error;
 	ctxt->exception.error_code_valid = valid;
@@ -3016,7 +3015,7 @@ static int em_movbe(struct x86_emulate_ctxt *ctxt)
 		ctxt->dst.val = swab64(ctxt->src.val);
 		break;
 	default:
-		return X86EMUL_PROPAGATE_FAULT;
+		BUG();
 	}
 	return X86EMUL_CONTINUE;
 }
@@ -4829,8 +4828,10 @@ writeback:
 	ctxt->eip = ctxt->_eip;
 
 done:
-	if (rc == X86EMUL_PROPAGATE_FAULT)
+	if (rc == X86EMUL_PROPAGATE_FAULT) {
+		WARN_ON(ctxt->exception.vector > 0x1f);
 		ctxt->have_exception = true;
+	}
 	if (rc == X86EMUL_INTERCEPTED)
 		return EMULATION_INTERCEPTED;
 
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 410776528265..cd91d03c9320 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -365,8 +365,10 @@ retry_walk:
 		gfn += pse36_gfn_delta(pte);
 
 	real_gpa = mmu->translate_gpa(vcpu, gfn_to_gpa(gfn), access);
-	if (real_gpa == UNMAPPED_GVA)
+	if (real_gpa == UNMAPPED_GVA) {
+		walker->fault.vector = 0xfe;
 		return 0;
+	}
 
 	walker->gfn = real_gpa >> PAGE_SHIFT;
 
@@ -875,8 +877,10 @@ static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, gva_t vaddr, u32 access,
 	if (r) {
 		gpa = gfn_to_gpa(walker.gfn);
 		gpa |= vaddr & ~PAGE_MASK;
-	} else if (exception)
+	} else if (exception) {
+		WARN_ON(walker.fault.vector > 0x1f);
 		*exception = walker.fault;
+	}
 
 	return gpa;
 }
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 737b4bdac41c..71f05585894e 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5248,6 +5249,7 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu,
 
 		ctxt->interruptibility = 0;
 		ctxt->have_exception = false;
+		ctxt->exception.vector = 0xff;
 		ctxt->perm_ok = false;
 
 		ctxt->ud = emulation_type & EMULTYPE_TRAP_UD;


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-08-20  8:11         ` Paolo Bonzini
@ 2014-08-20  9:49           ` Valentine Sinitsyn
  2014-08-21  6:28           ` Valentine Sinitsyn
  1 sibling, 0 replies; 38+ messages in thread
From: Valentine Sinitsyn @ 2014-08-20  9:49 UTC (permalink / raw)
  To: Paolo Bonzini, Jan Kiszka, kvm

On 20.08.2014 14:11, Paolo Bonzini wrote:
> Another patch...  I will post parts of it separately, if I am right
> you should get 0xfe as the vector and a WARN from the gva_to_gpa function.
I can confirm the vector is 0xfe; however, I see no warnings from 
gva_to_gpa(), only from emulate_exception():

> [ 3417.251967] ------------[ cut here ]------------
> [ 3417.251983] WARNING: CPU: 1 PID: 1584 at /home/val/kvm-kmod/x86/emulate.c:4839 x86_emulate_insn+0xb33/0xb70 [kvm]()

I can see both warnings if I move the 'WARN_ON(walker.fault.vector > 0x1f)' 
from gva_to_gpa() to gva_to_gpa_nested(), however:

> [ 3841.420019] WARNING: CPU: 0 PID: 1945 at /home/val/kvm-kmod/x86/paging_tmpl.h:903 paging64_gva_to_gpa_nested+0xd1/0xe0 [kvm]()
> [ 3841.420457] WARNING: CPU: 0 PID: 1945 at /home/val/kvm-kmod/x86/emulate.c:4839 x86_emulate_insn+0xb33/0xb70 [kvm]()

Thanks,
Valentine

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-08-20  8:11         ` Paolo Bonzini
  2014-08-20  9:49           ` Valentine Sinitsyn
@ 2014-08-21  6:28           ` Valentine Sinitsyn
  2014-08-21  8:48             ` Valentine Sinitsyn
  1 sibling, 1 reply; 38+ messages in thread
From: Valentine Sinitsyn @ 2014-08-21  6:28 UTC (permalink / raw)
  To: Paolo Bonzini, Jan Kiszka, kvm

Hi all,

On 20.08.2014 14:11, Paolo Bonzini wrote:
> Another patch...  I will post parts of it separately, if I am right
> you should get 0xfe as the vector and a WARN from the gva_to_gpa function.

With a patch like this:

> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
> index 410776528265..cd91d03c9320 100644
> --- a/arch/x86/kvm/paging_tmpl.h
> +++ b/arch/x86/kvm/paging_tmpl.h
> @@ -365,8 +365,10 @@ retry_walk:
>  		gfn += pse36_gfn_delta(pte);
>
>  	real_gpa = mmu->translate_gpa(vcpu, gfn_to_gpa(gfn), access);
> 	if (real_gpa == UNMAPPED_GVA)
 > - 		return 0;
 > + 		goto error;
>
>  	walker->gfn = real_gpa >> PAGE_SHIFT;

KVM seems to work properly (no weird exceptions injected), although my 
code now freezes (a quick look at the trace suggests it's looping on APIC 
reads). Not sure whose bug it is; I will look into it further.

Thanks for the help.

Valentine

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-08-21  6:28           ` Valentine Sinitsyn
@ 2014-08-21  8:48             ` Valentine Sinitsyn
  2014-08-21 11:04               ` Paolo Bonzini
  2014-08-21 11:24               ` Paolo Bonzini
  0 siblings, 2 replies; 38+ messages in thread
From: Valentine Sinitsyn @ 2014-08-21  8:48 UTC (permalink / raw)
  To: Paolo Bonzini, Jan Kiszka, kvm

On 21.08.2014 12:28, Valentine Sinitsyn wrote:
> KVM seems to work properly (no weird exceptions injected), although my
> code now freezes (a quick look at the trace suggests it's looping on APIC
> reads). Not sure whose bug it is; I will look into it further.
Looks like the problem is that if the nested page tables map some GPA to 
the 0xfee00000 HPA, it is really mapped to this HPA and not intercepted by 
KVM's virtual LAPIC implementation. Consider the following trace:

>  qemu-system-x86-344   [000]   644.974072: kvm_entry:            vcpu 0
>  qemu-system-x86-344   [000]   644.974075: kvm_exit:             reason npf rip 0xffffffff8104e883 info 10000000d fee000f0
>  qemu-system-x86-344   [000]   644.974075: kvm_page_fault:       address fee000f0 error_code d
>  qemu-system-x86-344   [000]   644.974077: kvm_emulate_insn:     0:ffffffff8104e883:8b 87 00 b0 5f ff (prot64)
>  qemu-system-x86-344   [000]   644.974078: kvm_apic:             apic_read APIC_SPIV = 0xf
>  qemu-system-x86-344   [000]   644.974079: kvm_mmio:             mmio read len 4 gpa 0xfee000f0 val 0x72007200000000f
>  qemu-system-x86-344   [000]   644.974081: kvm_entry:            vcpu 0
Here, I set up the NPT so that any access to the 0xfee00000 nested guest 
physical address causes a VM exit. Then my code reads or writes a register 
that is mapped to the 0xfee00000 GPA in KVM. kvm_apic is called, and 
everything works as expected.

However, if I set up the NPT so that reads of the 0xfee00000 nested guest 
physical address do not cause a nested VM exit (by simply clearing the U/S 
flag in the NPTE), I get:

>  qemu-system-x86-1066  [003]  1105.864286: kvm_exit:             reason npf rip 0xffffffff8104eaa4 info 10000000f fee00310
>  qemu-system-x86-1066  [003]  1105.864287: kvm_nested_vmexit:    rip: 0xffffffff8104eaa4 reason: npf ext_inf1: 0x000000010000000f ext_inf2: 0x00000000fee00310 ext_int: 0x00000000 ext_int_err: 0x00000000
>  qemu-system-x86-1066  [003]  1105.864287: kvm_page_fault:       address fee00310 error_code f
>  qemu-system-x86-1064  [001]  1105.864288: kvm_exit:             reason npf rip 0xffffffff8104e876 info 10000000f fee000b0
>  qemu-system-x86-1066  [003]  1105.864289: kvm_emulate_insn:     0:ffffffff8104eaa4:89 14 25 10 b3 5f ff (prot64)
>  qemu-system-x86-1064  [001]  1105.864289: kvm_nested_vmexit:    rip: 0xffffffff8104e876 reason: npf ext_inf1: 0x000000010000000f ext_inf2: 0x00000000fee000b0 ext_int: 0x00000000 ext_int_err: 0x00000000
>  qemu-system-x86-1064  [001]  1105.864289: kvm_page_fault:       address fee000b0 error_code f
>  qemu-system-x86-1064  [001]  1105.864291: kvm_emulate_insn:     0:ffffffff8104e876:89 b7 00 b0 5f ff (prot64)
>  qemu-system-x86-1066  [003]  1105.864292: kvm_inj_exception:    e (0x2)
>  qemu-system-x86-1066  [003]  1105.864293: kvm_entry:            vcpu 3
>  qemu-system-x86-1064  [001]  1105.864294: kvm_inj_exception:    e (0x2)
>  qemu-system-x86-1064  [001]  1105.864295: kvm_entry:            vcpu 1

No kvm_apic here: after the NPTs are set up, there are no page faults 
caused by the register read (error_code d) that would trap and emulate the 
APIC access.

So I'm returning to my original question: is it intended behavior of KVM 
that APIC accesses at the nested page table level are not trapped, or is 
this a bug?

Valentine

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-08-21  8:48             ` Valentine Sinitsyn
@ 2014-08-21 11:04               ` Paolo Bonzini
  2014-08-21 11:06                 ` Jan Kiszka
  2014-08-21 11:16                 ` Valentine Sinitsyn
  2014-08-21 11:24               ` Paolo Bonzini
  1 sibling, 2 replies; 38+ messages in thread
From: Paolo Bonzini @ 2014-08-21 11:04 UTC (permalink / raw)
  To: Valentine Sinitsyn, Jan Kiszka, kvm

On 21/08/2014 10:48, Valentine Sinitsyn wrote:
> So I'm returning to my original question: is it intended behavior of KVM
> that APIC accesses at the nested page table level are not trapped, or is
> this a bug?

I think it's just a bug.  Nobody thought that you'd let L2 access L1's
APIC via NPT.  Let me think of how to fix it.

Paolo

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-08-21 11:04               ` Paolo Bonzini
@ 2014-08-21 11:06                 ` Jan Kiszka
  2014-08-21 11:12                   ` Valentine Sinitsyn
  2014-08-21 11:16                 ` Valentine Sinitsyn
  1 sibling, 1 reply; 38+ messages in thread
From: Jan Kiszka @ 2014-08-21 11:06 UTC (permalink / raw)
  To: Paolo Bonzini, Valentine Sinitsyn, kvm

On 2014-08-21 13:04, Paolo Bonzini wrote:
> On 21/08/2014 10:48, Valentine Sinitsyn wrote:
>> So I'm returning to my original question: is it intended behavior of KVM
>> that APIC accesses at the nested page table level are not trapped, or is
>> this a bug?
> 
> I think it's just a bug.  Nobody thought that you'd let L2 access L1's
> APIC via NPT.  Let me think of how to fix it.

Do you think it would only affect the APIC, or could it cause trouble
with other pass-through devices as well (e.g. some PCI BAR)?

Jan

-- 
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-08-21 11:06                 ` Jan Kiszka
@ 2014-08-21 11:12                   ` Valentine Sinitsyn
  0 siblings, 0 replies; 38+ messages in thread
From: Valentine Sinitsyn @ 2014-08-21 11:12 UTC (permalink / raw)
  To: Jan Kiszka, Paolo Bonzini, kvm

On 21.08.2014 17:06, Jan Kiszka wrote:
> Do you think it would only affect the APIC, or could it cause trouble
> with other pass-through devices as well (e.g. some PCI BAR)?
I've only skimmed the KVM sources quickly, but I feel there is nothing 
APIC-specific in the nested paging code. I.e., access to any MMIO range 
mapped by nested page tables the way I did it will not be trapped by KVM.

Valentine

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-08-21 11:04               ` Paolo Bonzini
  2014-08-21 11:06                 ` Jan Kiszka
@ 2014-08-21 11:16                 ` Valentine Sinitsyn
  1 sibling, 0 replies; 38+ messages in thread
From: Valentine Sinitsyn @ 2014-08-21 11:16 UTC (permalink / raw)
  To: Paolo Bonzini, Jan Kiszka, kvm

On 21.08.2014 17:04, Paolo Bonzini wrote:
> I think it's just a bug.  Nobody thought that you'd let L2 access L1's
Sure, this is by no means a common use case. However, it can be seen as a 
flaw that lets a malicious guest affect others by mapping and 
reprogramming APICs or other devices.

Valentine

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-08-21  8:48             ` Valentine Sinitsyn
  2014-08-21 11:04               ` Paolo Bonzini
@ 2014-08-21 11:24               ` Paolo Bonzini
  2014-08-21 12:28                 ` Valentine Sinitsyn
  2014-08-21 17:35                 ` Valentine Sinitsyn
  1 sibling, 2 replies; 38+ messages in thread
From: Paolo Bonzini @ 2014-08-21 11:24 UTC (permalink / raw)
  To: Valentine Sinitsyn, Jan Kiszka, kvm

On 21/08/2014 10:48, Valentine Sinitsyn wrote:
> 
> No kvm_apic here: after the NPTs are set up, there are no page faults
> caused by the register read (error_code d) that would trap and emulate the
> APIC access.

It seems to work for VMX (see the testcase I just sent).  For SVM, can you
check if this test works for you, so that we can work on a simple testcase?

The patch applies to git://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git
and you can run the test like this (64-bit host):

   ./configure
   make
   ./x86-run x86/svm.flat -cpu host

Paolo

diff --git a/x86/svm.c b/x86/svm.c
index a9b29b1..aff00da 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -797,6 +797,27 @@ static bool npt_pfwalk_check(struct test *test)
 	   && (test->vmcb->control.exit_info_2 == read_cr3());
 }
 
+static void npt_l1mmio_prepare(struct test *test)
+{
+    vmcb_ident(test->vmcb);
+}
+
+u32 nested_apic_version;
+
+static void npt_l1mmio_test(struct test *test)
+{
+    u64 *data = (void*)(0xfee00030UL);
+
+    nested_apic_version = *data;
+}
+
+static bool npt_l1mmio_check(struct test *test)
+{
+    u64 *data = (void*)(0xfee00030);
+
+    return (nested_apic_version == *data);
+}
+
 static void latency_prepare(struct test *test)
 {
     default_prepare(test);
@@ -962,6 +983,8 @@ static struct test tests[] = {
 	    default_finished, npt_rw_check },
     { "npt_pfwalk", npt_supported, npt_pfwalk_prepare, null_test,
 	    default_finished, npt_pfwalk_check },
+    { "npt_l1mmio", npt_supported, npt_l1mmio_prepare, npt_l1mmio_test,
+	    default_finished, npt_l1mmio_check },
     { "latency_run_exit", default_supported, latency_prepare, latency_test,
       latency_finished, latency_check },
     { "latency_svm_insn", default_supported, lat_svm_insn_prepare, null_test,


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-08-21 11:24               ` Paolo Bonzini
@ 2014-08-21 12:28                 ` Valentine Sinitsyn
  2014-08-21 12:38                   ` Valentine Sinitsyn
                                     ` (2 more replies)
  2014-08-21 17:35                 ` Valentine Sinitsyn
  1 sibling, 3 replies; 38+ messages in thread
From: Valentine Sinitsyn @ 2014-08-21 12:28 UTC (permalink / raw)
  To: Paolo Bonzini, Jan Kiszka, kvm

On 21.08.2014 17:24, Paolo Bonzini wrote:
> It seems to work for VMX (see the testcase I just sent).  For SVM, can you
> check if this test works for you, so that we can work on a simple testcase?
It passes for SVM, too.

However, npt_rsvd seems to be broken - maybe that is the reason?

Also, I tried to use different register values for npt_l1mmio_test() and 
npt_l1mmio_check() (like 0xfee00030 and 0xfee00400), but the test passed 
as well. Could it be a false positive then?

> qemu-system-x86_64 -enable-kvm -device pc-testdev -device isa-debug-exit,iobase=0xf4,iosize=0x4 -display none -serial stdio -device pci-testdev -kernel x86/svm.flat -cpu host
> enabling apic
> paging enabled
> cr0 = 80010011
> cr3 = 7fff000
> cr4 = 20
> NPT detected - running all tests with NPT enabled
> null: PASS
> vmrun: PASS
> ioio: PASS
> vmrun intercept check: PASS
> cr3 read intercept: PASS
> cr3 read nointercept: PASS
> next_rip: PASS
> mode_switch: PASS
> asid_zero: PASS
> sel_cr0_bug: PASS
> npt_nx: PASS
> npt_us: PASS
> npt_rsvd: FAIL
> npt_rw: PASS
> npt_pfwalk: PASS
> npt_l1mmio: PASS
>     Latency VMRUN : max: 93973 min: 22447 avg: 22766
>     Latency VMEXIT: max: 428760 min: 23039 avg: 23832
> latency_run_exit: PASS
>     Latency VMLOAD: max: 35697 min: 3828 avg: 3937
>     Latency VMSAVE: max: 42953 min: 3889 avg: 4012
>     Latency STGI:   max: 42961 min: 3517 avg: 3595
>     Latency CLGI:   max: 41177 min: 2859 avg: 2924
> latency_svm_insn: PASS
>
> SUMMARY: 18 TESTS, 1 FAILURES
> Return value from qemu: 3

Valentine

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-08-21 12:28                 ` Valentine Sinitsyn
@ 2014-08-21 12:38                   ` Valentine Sinitsyn
  2014-08-21 13:40                   ` Valentine Sinitsyn
  2014-09-01 17:41                   ` Paolo Bonzini
  2 siblings, 0 replies; 38+ messages in thread
From: Valentine Sinitsyn @ 2014-08-21 12:38 UTC (permalink / raw)
  To: Paolo Bonzini, Jan Kiszka, kvm

On 21.08.2014 18:28, Valentine Sinitsyn wrote:
> Also, I tried to use different register values for npt_l1mmio_test() and
> npt_l1mmio_check() (like 0xfee00030 and 0xfee00400), but the test passed
Just a small clarification: I made npt_l1mmio_test() read 0xfee00030 and 
npt_l1mmio_check() compare against 0xfee00020 or 0xfee00400. No particular 
reason, just arbitrary values to check whether anything compares non-equal 
in the check.

Valentine

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-08-21 12:28                 ` Valentine Sinitsyn
  2014-08-21 12:38                   ` Valentine Sinitsyn
@ 2014-08-21 13:40                   ` Valentine Sinitsyn
  2014-09-01 17:41                   ` Paolo Bonzini
  2 siblings, 0 replies; 38+ messages in thread
From: Valentine Sinitsyn @ 2014-08-21 13:40 UTC (permalink / raw)
  To: Paolo Bonzini, Jan Kiszka, kvm

Sorry for the chain letters.

On 21.08.2014 18:28, Valentine Sinitsyn wrote:
> It passes for SVM, too.
I also looked at the SVM tests more closely, and found out that the NPT 
maps the whole memory range as cached memory. This could also be a reason 
for a false positive in the test (if there is one). I will look into it 
later today.
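
For reference (just my reading of x86/svm.c, illustration only): setup_svm() 
builds the NPT entries with flags 0x027, which decompose as

    static const u64 npt_flags = 0x1 | 0x2 | 0x4 | 0x20; /* P | RW | US | A == 0x027 */

so PWT/PCD/PAT are all clear and every mapping gets the default write-back 
memory type.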

Valentine

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-08-21 11:24               ` Paolo Bonzini
  2014-08-21 12:28                 ` Valentine Sinitsyn
@ 2014-08-21 17:35                 ` Valentine Sinitsyn
  2014-08-21 20:31                   ` Paolo Bonzini
  1 sibling, 1 reply; 38+ messages in thread
From: Valentine Sinitsyn @ 2014-08-21 17:35 UTC (permalink / raw)
  To: Paolo Bonzini, Jan Kiszka, kvm

On 21.08.2014 17:24, Paolo Bonzini wrote:
> It seems to work for VMX (see the testcase I just sent).  For SVM, can you
> check if this test works for you, so that we can work on a simple testcase?
I was able to reproduce the bug with your testcase when I changed the APIC 
register access size (see below). Please check whether it fails on VMX as 
well now.

On a side note, npt_rsvd also seems to be broken, as I mentioned previously.

HTH,
Valentine

diff --git a/x86/svm.c b/x86/svm.c
index a9b29b1..d0ddff7 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -797,6 +797,27 @@ static bool npt_pfwalk_check(struct test *test)
  	   && (test->vmcb->control.exit_info_2 == read_cr3());
  }

+static void npt_l1mmio_prepare(struct test *test)
+{
+    vmcb_ident(test->vmcb);
+}
+
+u32 nested_apic_version;
+
+static void npt_l1mmio_test(struct test *test)
+{
+    u32 *data = (u32*)(0xfee00030UL);
+
+    nested_apic_version = *data;
+}
+
+static bool npt_l1mmio_check(struct test *test)
+{
+    u32 *data = (u32*)(0xfee00030);
+
+    return (nested_apic_version == *data);
+}
+
  static void latency_prepare(struct test *test)
  {
      default_prepare(test);
@@ -962,6 +983,8 @@ static struct test tests[] = {
  	    default_finished, npt_rw_check },
      { "npt_pfwalk", npt_supported, npt_pfwalk_prepare, null_test,
  	    default_finished, npt_pfwalk_check },
+    { "npt_l1mmio", npt_supported, npt_l1mmio_prepare, npt_l1mmio_test,
+	    default_finished, npt_l1mmio_check },
      { "latency_run_exit", default_supported, latency_prepare, 
latency_test,
        latency_finished, latency_check },
      { "latency_svm_insn", default_supported, lat_svm_insn_prepare, 
null_test,




^ permalink raw reply related	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-08-21 17:35                 ` Valentine Sinitsyn
@ 2014-08-21 20:31                   ` Paolo Bonzini
  2014-08-22  4:33                     ` Valentine Sinitsyn
  0 siblings, 1 reply; 38+ messages in thread
From: Paolo Bonzini @ 2014-08-21 20:31 UTC (permalink / raw)
  To: Valentine Sinitsyn, Jan Kiszka, kvm

On 21/08/2014 19:35, Valentine Sinitsyn wrote:
>>
> I was able to reproduce the bug with your testcase when I changed the APIC
> register access size (see below). Please check whether it fails on VMX as
> well now.

VMX used the right access size already, the tests are separate for VMX
and SVM.

> On a side note, npt_rsvd also seem to be broken as I mentioned previously.

Yup, thanks.  I think I noticed a month ago or so.

Paolo

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-08-21 20:31                   ` Paolo Bonzini
@ 2014-08-22  4:33                     ` Valentine Sinitsyn
  2014-08-22  8:53                       ` Paolo Bonzini
  2014-09-01 16:11                       ` Paolo Bonzini
  0 siblings, 2 replies; 38+ messages in thread
From: Valentine Sinitsyn @ 2014-08-22  4:33 UTC (permalink / raw)
  To: Paolo Bonzini, Jan Kiszka, kvm

On 22.08.2014 02:31, Paolo Bonzini wrote:
> VMX used the right access size already, the tests are separate for VMX
> and SVM.
Sure. So the bug is NPT-specific?

BTW, I was likely wrong when I stated:

> if the nested page tables map some GPA to the 0xfee00000 HPA, it is really mapped to this HPA,

It looks more like it gets mapped to wherever the 0xfee00000 GPA is 
translated. No flaw then, just a minor glitch (for the general public).

Valentine

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-08-22  4:33                     ` Valentine Sinitsyn
@ 2014-08-22  8:53                       ` Paolo Bonzini
  2014-09-01 16:11                       ` Paolo Bonzini
  1 sibling, 0 replies; 38+ messages in thread
From: Paolo Bonzini @ 2014-08-22  8:53 UTC (permalink / raw)
  To: Valentine Sinitsyn, Jan Kiszka, kvm

On 22/08/2014 06:33, Valentine Sinitsyn wrote:
> On 22.08.2014 02:31, Paolo Bonzini wrote:
>> VMX used the right access size already, the tests are separate for VMX
>> and SVM.
> Sure. So the bug is NPT-specific?

Looks like that, yes.

Paolo

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-08-22  4:33                     ` Valentine Sinitsyn
  2014-08-22  8:53                       ` Paolo Bonzini
@ 2014-09-01 16:11                       ` Paolo Bonzini
  1 sibling, 0 replies; 38+ messages in thread
From: Paolo Bonzini @ 2014-09-01 16:11 UTC (permalink / raw)
  To: Valentine Sinitsyn, Jan Kiszka, kvm

On 22/08/2014 06:33, Valentine Sinitsyn wrote:
> On 22.08.2014 02:31, Paolo Bonzini wrote:
>> VMX used the right access size already, the tests are separate for VMX
>> and SVM.
> Sure. So the bug is NPT-specific?

Hmm, unfortunately the test cannot reproduce the bug, at least with 3.16.
It only failed due to a (somewhat unbelievable...) typo:

diff --git a/x86/svm.c b/x86/svm.c
index 54d804b..ca1e64e 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -87,7 +87,7 @@ static void setup_svm(void)
         page = alloc_page();
 
         for (j = 0; j < 512; ++j)
-            page[j] = (u64)pte[(i * 514) + j] | 0x027ULL;
+            page[j] = (u64)pte[(i * 512) + j] | 0x027ULL;
 
         pde[i] = page;
     }
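
A quick illustration (not test code) of the off-by-two-per-iteration effect: 
each page directory built in this loop consumes 512 consecutive pte[] 
entries, so directory i must start at pte[i * 512]; with the 514 stride the 
start index drifts by 2*i entries and the later directories read the wrong 
(and eventually out-of-range) entries:

#include <stdio.h>

int main(void)
{
        for (int i = 0; i < 4; ++i)     /* assuming four page directories here */
                printf("pd[%d]: correct start %d, buggy start %d (off by %d)\n",
                       i, i * 512, i * 514, 2 * i);
        return 0;
}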

The trace correctly points at APIC_LVR for both the guest read:

 qemu-system-x86-23749 [019]  6718.397998: kvm_exit:             reason npf rip 0x4003ba info 100000004 fee00030
 qemu-system-x86-23749 [019]  6718.397998: kvm_nested_vmexit:    rip: 0x00000000004003ba reason: npf ext_inf1: 0x0000000100000004 ext_inf2: 0x00000000fee00030 ext_int: 0x00000000 ext_int_err: 0x00000000
 qemu-system-x86-23749 [019]  6718.397999: kvm_page_fault:       address fee00030 error_code 4
 qemu-system-x86-23749 [019]  6718.398009: kvm_emulate_insn:     0:4003ba:a1 30 00 e0 fe 00 00 00 00 (prot64)
 qemu-system-x86-23749 [019]  6718.398013: kvm_apic:             apic_read APIC_LVR = 0x1050014
 qemu-system-x86-23749 [019]  6718.398014: kvm_mmio:             mmio read len 4 gpa 0xfee00030 val 0x1050014
 qemu-system-x86-23749 [019]  6718.398015: kvm_entry:            vcpu 0

and the host read:

 qemu-system-x86-23749 [019]  6718.398035: kvm_entry:            vcpu 0
 qemu-system-x86-23749 [019]  6718.398036: kvm_exit:             reason npf rip 0x4003ca info 10000000d fee00030
 qemu-system-x86-23749 [019]  6718.398037: kvm_page_fault:       address fee00030 error_code d
 qemu-system-x86-23749 [019]  6718.398039: kvm_emulate_insn:     0:4003ca:a1 30 00 e0 fe 00 00 00 00 (prot64)
 qemu-system-x86-23749 [019]  6718.398040: kvm_apic:             apic_read APIC_LVR = 0x1050014
 qemu-system-x86-23749 [019]  6718.398040: kvm_mmio:             mmio read len 4 gpa 0xfee00030 val 0x1050014

The different error codes are because the first read will install the shadow
page.  If I change the test to do two reads, the error codes match.  I will
look at this more closely tomorrow.

Paolo

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-08-20  6:46   ` Valentine Sinitsyn
  2014-08-20  6:55     ` Paolo Bonzini
@ 2014-09-01 17:04     ` Paolo Bonzini
  2014-09-02  6:09       ` Valentine Sinitsyn
  1 sibling, 1 reply; 38+ messages in thread
From: Paolo Bonzini @ 2014-09-01 17:04 UTC (permalink / raw)
  To: Valentine Sinitsyn, Jan Kiszka, kvm

On 20/08/2014 08:46, Valentine Sinitsyn wrote:
> Looks like it is a bug in KVM. I had a chance to run the same code
> bare-metal ([1]; line 310 is uncommented for the bare-metal case but
> present for nested SVM), and it seems to work as expected. However, when I
> trace it in the nested SVM setup, after some successful APIC reads and
> writes, I get the following:

Valentine, can you produce another trace, this time with both kvm and
kvmmmu events enabled?

Thanks,

Paolo

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-08-21 12:28                 ` Valentine Sinitsyn
  2014-08-21 12:38                   ` Valentine Sinitsyn
  2014-08-21 13:40                   ` Valentine Sinitsyn
@ 2014-09-01 17:41                   ` Paolo Bonzini
  2014-09-01 19:21                     ` Valentine Sinitsyn
  2 siblings, 1 reply; 38+ messages in thread
From: Paolo Bonzini @ 2014-09-01 17:41 UTC (permalink / raw)
  To: Valentine Sinitsyn, Jan Kiszka, kvm

On 21/08/2014 14:28, Valentine Sinitsyn wrote:
>> It seems to work for VMX (see the testcase I just sent).  For SVM, can
>> you check if this test works for you, so that we can work on a simple
>> testcase?
> 
> However, npt_rsvd seems to be broken - maybe that is the reason?

BTW npt_rsvd does *not* fail on the machine I've been testing on today.

Can you retry running the tests with the latest kvm-unit-tests (branch
"master"), gather a trace of kvm and kvmmmu events, and send the
compressed trace.dat my way?

Thanks,

Paolo

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-09-01 17:41                   ` Paolo Bonzini
@ 2014-09-01 19:21                     ` Valentine Sinitsyn
  2014-09-02  8:25                       ` Paolo Bonzini
  0 siblings, 1 reply; 38+ messages in thread
From: Valentine Sinitsyn @ 2014-09-01 19:21 UTC (permalink / raw)
  To: Paolo Bonzini, Jan Kiszka, kvm

Hi Paolo,

On 01.09.2014 23:41, Paolo Bonzini wrote:
> Il 21/08/2014 14:28, Valentine Sinitsyn ha scritto:
> BTW npt_rsvd does *not* fail on the machine I've been testing on today.
I can confirm the l1mmio test doesn't fail with kvm-unit-tests' master 
anymore; npt_rsvd still does. I also needed to disable the ioio test, or 
it would hang for a long time (this doesn't happen if I use Jan's patched 
KVM that has the IOPM bugs fixed). However, the l1mmio test passes 
regardless of whether I use stock KVM 3.16.1 or a patched version.

> Can you retry running the tests with the latest kvm-unit-tests (branch
> "master"), gather a trace of kvm and kvmmmu events, and send the
> compressed trace.dat my way?
You mean the trace from when the problem reveals itself (not from running 
the tests), I assume? It's around 2 GB uncompressed (probably I'm enabling 
tracing too early or doing something else wrong). I will look into it 
tomorrow; hopefully I can reduce the size (e.g. by switching to 
uniprocessor mode). Below is a trace snippet similar to the one I sent 
earlier.

----------------------------------------------------------------------
qemu-system-x86-2728  [002]  1726.426225: kvm_exit:             reason 
npf rip 0xffffffff8104e876 info 10000000f fee000b0
  qemu-system-x86-2728  [002]  1726.426226: kvm_nested_vmexit:    rip: 
0xffffffff8104e876 reason: npf ext_inf1: 0x000000010000000f ext_inf2: 
0x00000000fee000b0 ext_int: 0x00000000 ext_int_err: 0x00000000
  qemu-system-x86-2728  [002]  1726.426227: kvm_page_fault: 
address fee000b0 error_code f
  qemu-system-x86-2725  [000]  1726.426227: kvm_exit:             reason 
npf rip 0xffffffff8104e876 info 10000000f fee000b0
  qemu-system-x86-2725  [000]  1726.426228: kvm_nested_vmexit:    rip: 
0xffffffff8104e876 reason: npf ext_inf1: 0x000000010000000f ext_inf2: 
0x00000000fee000b0 ext_int: 0x00000000 ext_int_err: 0x00000000
  qemu-system-x86-2725  [000]  1726.426229: kvm_page_fault: 
address fee000b0 error_code f
  qemu-system-x86-2728  [002]  1726.426229: kvm_emulate_insn: 
0:ffffffff8104e876:89 b7 00 b0 5f ff (prot64)
  qemu-system-x86-2725  [000]  1726.426230: kvm_emulate_insn: 
0:ffffffff8104e876:89 b7 00 b0 5f ff (prot64)
  qemu-system-x86-2728  [002]  1726.426231: kvm_mmu_pagetable_walk: addr 
ffffffffff5fb0b0 pferr 2 W
  qemu-system-x86-2725  [000]  1726.426231: kvm_mmu_pagetable_walk: addr 
ffffffffff5fb0b0 pferr 2 W
  qemu-system-x86-2728  [002]  1726.426231: kvm_mmu_pagetable_walk: addr 
1811000 pferr 6 W|U
  qemu-system-x86-2725  [000]  1726.426232: kvm_mmu_pagetable_walk: addr 
36c49000 pferr 6 W|U
  qemu-system-x86-2728  [002]  1726.426232: kvm_mmu_paging_element: pte 
3c03a027 level 4
  qemu-system-x86-2725  [000]  1726.426232: kvm_mmu_paging_element: pte 
3c03a027 level 4
  qemu-system-x86-2728  [002]  1726.426232: kvm_mmu_paging_element: pte 
3c03d027 level 3
  qemu-system-x86-2725  [000]  1726.426233: kvm_mmu_paging_element: pte 
3c03d027 level 3
  qemu-system-x86-2728  [002]  1726.426233: kvm_mmu_paging_element: pte 
18000e7 level 2
  qemu-system-x86-2725  [000]  1726.426233: kvm_mmu_paging_element: pte 
36c000e7 level 2
  qemu-system-x86-2728  [002]  1726.426233: kvm_mmu_paging_element: pte 
1814067 level 4
  qemu-system-x86-2725  [000]  1726.426233: kvm_mmu_paging_element: pte 
1814067 level 4
  qemu-system-x86-2728  [002]  1726.426233: kvm_mmu_pagetable_walk: addr 
1814000 pferr 6 W|U
  qemu-system-x86-2725  [000]  1726.426234: kvm_mmu_pagetable_walk: addr 
1814000 pferr 6 W|U
  qemu-system-x86-2728  [002]  1726.426234: kvm_mmu_paging_element: pte 
3c03a027 level 4
  qemu-system-x86-2725  [000]  1726.426234: kvm_mmu_paging_element: pte 
3c03a027 level 4
  qemu-system-x86-2728  [002]  1726.426234: kvm_mmu_paging_element: pte 
3c03d027 level 3
  qemu-system-x86-2725  [000]  1726.426235: kvm_mmu_paging_element: pte 
3c03d027 level 3
  qemu-system-x86-2728  [002]  1726.426235: kvm_mmu_paging_element: pte 
18000e7 level 2
  qemu-system-x86-2725  [000]  1726.426235: kvm_mmu_paging_element: pte 
18000e7 level 2
  qemu-system-x86-2728  [002]  1726.426235: kvm_mmu_paging_element: pte 
1816067 level 3
  qemu-system-x86-2725  [000]  1726.426235: kvm_mmu_paging_element: pte 
1816067 level 3
  qemu-system-x86-2728  [002]  1726.426235: kvm_mmu_pagetable_walk: addr 
1816000 pferr 6 W|U
  qemu-system-x86-2725  [000]  1726.426236: kvm_mmu_pagetable_walk: addr 
1816000 pferr 6 W|U
  qemu-system-x86-2728  [002]  1726.426236: kvm_mmu_paging_element: pte 
3c03a027 level 4
  qemu-system-x86-2725  [000]  1726.426236: kvm_mmu_paging_element: pte 
3c03a027 level 4
  qemu-system-x86-2728  [002]  1726.426236: kvm_mmu_paging_element: pte 
3c03d027 level 3
  qemu-system-x86-2725  [000]  1726.426236: kvm_mmu_paging_element: pte 
3c03d027 level 3
  qemu-system-x86-2728  [002]  1726.426236: kvm_mmu_paging_element: pte 
18000e7 level 2
  qemu-system-x86-2725  [000]  1726.426237: kvm_mmu_paging_element: pte 
18000e7 level 2
  qemu-system-x86-2728  [002]  1726.426237: kvm_mmu_paging_element: pte 
1a06067 level 2
  qemu-system-x86-2725  [000]  1726.426237: kvm_mmu_paging_element: pte 
1a06067 level 2
  qemu-system-x86-2725  [000]  1726.426238: kvm_mmu_pagetable_walk: addr 
1a06000 pferr 6 W|U
  qemu-system-x86-2728  [002]  1726.426238: kvm_mmu_pagetable_walk: addr 
1a06000 pferr 6 W|U
  qemu-system-x86-2725  [000]  1726.426238: kvm_mmu_paging_element: pte 
3c03a027 level 4
  qemu-system-x86-2728  [002]  1726.426238: kvm_mmu_paging_element: pte 
3c03a027 level 4
  qemu-system-x86-2725  [000]  1726.426238: kvm_mmu_paging_element: pte 
3c03d027 level 3
  qemu-system-x86-2725  [000]  1726.426239: kvm_mmu_paging_element: pte 
1a000e7 level 2
  qemu-system-x86-2728  [002]  1726.426239: kvm_mmu_paging_element: pte 
3c03d027 level 3
  qemu-system-x86-2725  [000]  1726.426239: kvm_mmu_paging_element: pte 
80000000fee0017b level 1
  qemu-system-x86-2728  [002]  1726.426239: kvm_mmu_paging_element: pte 
1a000e7 level 2
  qemu-system-x86-2725  [000]  1726.426239: kvm_mmu_pagetable_walk: addr 
fee00000 pferr 6 W|U
  qemu-system-x86-2728  [002]  1726.426239: kvm_mmu_paging_element: pte 
80000000fee0017b level 1
  qemu-system-x86-2725  [000]  1726.426240: kvm_mmu_paging_element: pte 
3c03a027 level 4
  qemu-system-x86-2728  [002]  1726.426240: kvm_mmu_pagetable_walk: addr 
fee00000 pferr 6 W|U
  qemu-system-x86-2725  [000]  1726.426240: kvm_mmu_paging_element: pte 
3c03b027 level 3
  qemu-system-x86-2728  [002]  1726.426240: kvm_mmu_paging_element: pte 
3c03a027 level 4
  qemu-system-x86-2725  [000]  1726.426240: kvm_mmu_paging_element: pte 
3c03c027 level 2
  qemu-system-x86-2728  [002]  1726.426241: kvm_mmu_paging_element: pte 
3c03b027 level 3
  qemu-system-x86-2725  [000]  1726.426241: kvm_mmu_paging_element: pte 
fee0003d level 1
  qemu-system-x86-2728  [002]  1726.426241: kvm_mmu_paging_element: pte 
3c03c027 level 2
  qemu-system-x86-2725  [000]  1726.426241: kvm_mmu_walker_error: pferr 
7 P|W|U
  qemu-system-x86-2728  [002]  1726.426241: kvm_mmu_paging_element: pte 
fee0003d level 1
  qemu-system-x86-2725  [000]  1726.426241: kvm_mmu_walker_error: pferr 2 W
  qemu-system-x86-2728  [002]  1726.426242: kvm_mmu_walker_error: pferr 
7 P|W|U
  qemu-system-x86-2728  [002]  1726.426242: kvm_mmu_walker_error: pferr 2 W
  qemu-system-x86-2725  [000]  1726.426243: kvm_inj_exception:    e (0x2)
  qemu-system-x86-2728  [002]  1726.426243: kvm_inj_exception:    e (0x2)
  qemu-system-x86-2725  [000]  1726.426244: kvm_entry:            vcpu 0

Thanks,
Valentine


^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-09-01 17:04     ` Paolo Bonzini
@ 2014-09-02  6:09       ` Valentine Sinitsyn
  2014-09-02  6:21         ` Valentine Sinitsyn
  2014-09-02  9:45         ` Paolo Bonzini
  0 siblings, 2 replies; 38+ messages in thread
From: Valentine Sinitsyn @ 2014-09-02  6:09 UTC (permalink / raw)
  To: Paolo Bonzini, Jan Kiszka, kvm

Hi Paolo,

On 01.09.2014 23:04, Paolo Bonzini wrote:
> Valentine, can you produce another trace, this time with both kvm and
> kvmmmu events enabled?
I was able to make the trace shorter by grepping only what's happening 
on a single CPU core (#0):

https://www.dropbox.com/s/slbxmxyg74wh9hv/l1mmio-cpu0.txt.gz?dl=0

It was taken with kernel 3.16.1 modules with your paging_tmpl.h patch 
applied.

This time, the trace looks somewhat different, however my code still 
hangs in nested KVM (and doesn't on real HW).

Thanks,
Valentine

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-09-02  6:09       ` Valentine Sinitsyn
@ 2014-09-02  6:21         ` Valentine Sinitsyn
  2014-09-02  9:45         ` Paolo Bonzini
  1 sibling, 0 replies; 38+ messages in thread
From: Valentine Sinitsyn @ 2014-09-02  6:21 UTC (permalink / raw)
  To: Paolo Bonzini, Jan Kiszka, kvm

On 02.09.2014 12:09, Valentine Sinitsyn wrote:
> https://www.dropbox.com/s/slbxmxyg74wh9hv/l1mmio-cpu0.txt.gz?dl=0
Forgot to say: the user space is vanilla QEMU 2.1.0 here.

Valentine

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-09-01 19:21                     ` Valentine Sinitsyn
@ 2014-09-02  8:25                       ` Paolo Bonzini
  2014-09-02  9:16                         ` Valentine Sinitsyn
  0 siblings, 1 reply; 38+ messages in thread
From: Paolo Bonzini @ 2014-09-02  8:25 UTC (permalink / raw)
  To: Valentine Sinitsyn, Jan Kiszka, kvm

On 01/09/2014 21:21, Valentine Sinitsyn wrote:
> 
>> Can you retry running the tests with the latest kvm-unit-tests (branch
>> "master"), gather a trace of kvm and kvmmmu events, and send the
>> compressed trace.dat my way?
> You mean the trace from when the problem reveals itself (not from running
> the tests), I assume? It's around 2 GB uncompressed (probably I'm enabling
> tracing too early or doing something else wrong). I will look into it
> tomorrow; hopefully I can reduce the size (e.g. by switching to
> uniprocessor mode). Below is a trace snippet similar to the one I sent
> earlier.

I actually meant kvm-unit-tests in order to understand the npt_rsvd
failure.  (I had sent a separate message for Jailhouse).

For kvm-unit-tests, you can comment out tests that do not fail to reduce
the trace size.

Paolo

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: Nested paging in nested SVM setup
  2014-09-02  8:25                       ` Paolo Bonzini
@ 2014-09-02  9:16                         ` Valentine Sinitsyn
  2014-09-02 11:21                           ` Paolo Bonzini
  0 siblings, 1 reply; 38+ messages in thread
From: Valentine Sinitsyn @ 2014-09-02  9:16 UTC (permalink / raw)
  To: Paolo Bonzini, Jan Kiszka, kvm

On 02.09.2014 14:25, Paolo Bonzini wrote:
> I actually meant kvm-unit-tests in order to understand the npt_rsvd
> failure.  (I had sent a separate message for Jailhouse).
Oops, sorry for misunderstanding. Uploaded it here:
https://www.dropbox.com/s/jp6ohb0ul3d6v4u/npt_rsvd.txt.bz2?dl=0

The environment is QEMU 2.1.0 + Linux 3.16.1 with the paging_tmpl.h
patch, and the only test enabled was npt_rsvd (the others do pass now).

> For kvm-unit-tests, you can comment out tests that do not fail to reduce
> the trace size.
Yes, I've sent that trace earlier today.

Valentine

* Re: Nested paging in nested SVM setup
  2014-09-02  6:09       ` Valentine Sinitsyn
  2014-09-02  6:21         ` Valentine Sinitsyn
@ 2014-09-02  9:45         ` Paolo Bonzini
  2014-09-02  9:53           ` Valentine Sinitsyn
  2014-09-02 10:31           ` Valentine Sinitsyn
  1 sibling, 2 replies; 38+ messages in thread
From: Paolo Bonzini @ 2014-09-02  9:45 UTC (permalink / raw)
  To: Valentine Sinitsyn, Jan Kiszka, kvm

On 02/09/2014 08:09, Valentine Sinitsyn wrote:
> 
> https://www.dropbox.com/s/slbxmxyg74wh9hv/l1mmio-cpu0.txt.gz?dl=0
> 
> It was taken with kernel 3.16.1 modules with your paging-tmpl.h patch
> applied.
> 
> This time, the trace looks somewhat different, however my code still
> hangs in nested KVM (and doesn't on real HW).

This *is* different though.  I don't see any kvm_inj_exception at all
(with my patch it should be for vector 0xfe).

In any case, the problem seems specific to _writes_ to the APIC page.
I'm going to write a testcase for that and see if I can reproduce it now.
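
(For context, a minimal sketch of the guest-side access such a
testcase has to exercise. The APIC base 0xfee00000 and the EOI
register offset 0xb0 are architectural xAPIC values; the function
names are made up here and are not taken from kvm-unit-tests.)

#include <stdint.h>

#define APIC_BASE 0xfee00000UL
#define APIC_EOI  0xb0			/* xAPIC EOI register offset */

static inline void apic_write32(unsigned long reg, uint32_t val)
{
	/* volatile so the compiler really emits the MMIO store */
	*(volatile uint32_t *)(APIC_BASE + reg) = val;
}

static void l2_write_to_apic(void)
{
	/*
	 * With the APIC page mapped read-only in the L1 hypervisor's
	 * nested page tables, this store must reach L1 as a nested page
	 * fault instead of being turned into an ordinary #PF inside
	 * this guest.
	 */
	apic_write32(APIC_EOI, 0);
}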

Paolo

* Re: Nested paging in nested SVM setup
  2014-09-02  9:45         ` Paolo Bonzini
@ 2014-09-02  9:53           ` Valentine Sinitsyn
  2014-09-02 11:48             ` Paolo Bonzini
  2014-09-02 10:31           ` Valentine Sinitsyn
  1 sibling, 1 reply; 38+ messages in thread
From: Valentine Sinitsyn @ 2014-09-02  9:53 UTC (permalink / raw)
  To: Paolo Bonzini, Jan Kiszka, kvm

On 02.09.2014 15:45, Paolo Bonzini wrote:
> This *is* different though.  I don't see any kvm_inj_exception at all
> (with my patch it should be for vector 0xfe).
I've applied the part of your patch that fixes the uninitialized
exception vector problem; otherwise the lockup would trigger before my
code had a chance to hang on the APIC. Namely, I made the following change:

--- a/arch/x86/kvm/paging_tmpl.h	2014-09-02 21:53:26.035112557 +0600
+++ b/arch/x86/kvm/paging_tmpl.h	2014-09-02 21:53:46.145110721 +0600
@@ -366,7 +366,7 @@

  	real_gpa = mmu->translate_gpa(vcpu, gfn_to_gpa(gfn), access);
  	if (real_gpa == UNMAPPED_GVA)
-		return 0;
+		goto error;

  	walker->gfn = real_gpa >> PAGE_SHIFT;

So they should look like regular page faults (as they ought to, I guess) 
now.
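
(To make the effect of that one-line change concrete, here is a
stand-alone model of the behaviour being discussed, not kernel code;
all names are illustrative. Returning 0 without taking the error path
leaves the fault descriptor with whatever it held before, so the
caller can inject a bogus vector; "goto error" reports a well-formed
#PF instead.)

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PF_VECTOR 14

struct fault_info {
	uint8_t  vector;
	bool     error_code_valid;
	uint32_t error_code;
};

/* the walker model: 'take_error_path' selects old vs. new behaviour
 * when the nested translation fails (UNMAPPED_GVA in the real code) */
static int walk_model(struct fault_info *fault, bool take_error_path)
{
	if (!take_error_path)
		return 0;		/* old: fail, but leave *fault untouched */

	/* new ("goto error"): fail and describe a proper page fault */
	fault->vector = PF_VECTOR;
	fault->error_code_valid = true;
	fault->error_code = 0x2;	/* e.g. a write to a not-present page */
	return 0;
}

int main(void)
{
	struct fault_info fault = { .vector = 0x2a };	/* stale leftover */

	walk_model(&fault, false);
	printf("old path: caller would inject vector 0x%x\n",
	       (unsigned)fault.vector);

	walk_model(&fault, true);
	printf("new path: caller injects vector 0x%x, error code 0x%x\n",
	       (unsigned)fault.vector, (unsigned)fault.error_code);
	return 0;
}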

Thanks,
Valentine

* Re: Nested paging in nested SVM setup
  2014-09-02  9:45         ` Paolo Bonzini
  2014-09-02  9:53           ` Valentine Sinitsyn
@ 2014-09-02 10:31           ` Valentine Sinitsyn
  1 sibling, 0 replies; 38+ messages in thread
From: Valentine Sinitsyn @ 2014-09-02 10:31 UTC (permalink / raw)
  To: Paolo Bonzini, Jan Kiszka, kvm

On 02.09.2014 15:45, Paolo Bonzini wrote:
> In any case, the problem seems specific to _writes_ to the APIC page.
> I'm going to write a testcase for that and see if I can reproduce it now.
If you need a complete trace, not only for CPU 0, please let me know -
I'll upload it as well. It's about 17M compressed.

Valentine

* Re: Nested paging in nested SVM setup
  2014-09-02  9:16                         ` Valentine Sinitsyn
@ 2014-09-02 11:21                           ` Paolo Bonzini
  2014-09-02 11:26                             ` Valentine Sinitsyn
  0 siblings, 1 reply; 38+ messages in thread
From: Paolo Bonzini @ 2014-09-02 11:21 UTC (permalink / raw)
  To: Valentine Sinitsyn, Jan Kiszka, kvm

On 02/09/2014 11:16, Valentine Sinitsyn wrote:
> On 02.09.2014 14:25, Paolo Bonzini wrote:
>> I actually meant kvm-unit-tests in order to understand the npt_rsvd
>> failure.  (I had sent a separate message for Jailhouse).
> Oops, sorry for misunderstanding. Uploaded it here:
> https://www.dropbox.com/s/jp6ohb0ul3d6v4u/npt_rsvd.txt.bz2?dl=0

Ugh, there are many bugs and the test is even wrong because the actual
error code should be 0x200000006 (error while visiting page tables).
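
(For readers following along, that value decodes as follows: the low
bits are the usual #PF error code, and bits 32/33 are the extra
nested-page-fault bits from the AMD manual, distinguishing a fault on
the final guest-physical access from one taken while visiting the
guest page tables. The PFERR_* names below mirror the ones KVM uses
but are reproduced from memory, so treat them as an assumption.)

#include <stdint.h>
#include <stdio.h>

#define PFERR_PRESENT     (1ULL << 0)
#define PFERR_WRITE       (1ULL << 1)
#define PFERR_USER        (1ULL << 2)
#define PFERR_RSVD        (1ULL << 3)
#define PFERR_FETCH       (1ULL << 4)
#define PFERR_GUEST_FINAL (1ULL << 32)	/* fault on the final guest-physical access */
#define PFERR_GUEST_PAGE  (1ULL << 33)	/* fault while visiting the guest page tables */

int main(void)
{
	uint64_t err = 0x200000006ULL;

	printf("P=%d W=%d U=%d RSVD=%d FETCH=%d FINAL=%d PAGE=%d\n",
	       !!(err & PFERR_PRESENT), !!(err & PFERR_WRITE),
	       !!(err & PFERR_USER), !!(err & PFERR_RSVD),
	       !!(err & PFERR_FETCH),
	       !!(err & PFERR_GUEST_FINAL), !!(err & PFERR_GUEST_PAGE));
	/* -> a user-mode write to a not-present page, hit while the
	 *    hardware was visiting the guest page tables (bit 33) */
	return 0;
}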

Paolo

> The environment is QEMU 2.1.0 + Linux 3.16.1 with paging_tmpl.h patch,
> and the only test enabled was npt_rsvd (others do pass now).
> 
>> For kvm-unit-tests, you can comment out tests that do not fail to reduce
>> the trace size.
> Yes, I've sent that trace earlier today.
> 
> Valentine


* Re: Nested paging in nested SVM setup
  2014-09-02 11:21                           ` Paolo Bonzini
@ 2014-09-02 11:26                             ` Valentine Sinitsyn
  0 siblings, 0 replies; 38+ messages in thread
From: Valentine Sinitsyn @ 2014-09-02 11:26 UTC (permalink / raw)
  To: Paolo Bonzini, Jan Kiszka, kvm

On 02.09.2014 17:21, Paolo Bonzini wrote:
> Ugh, there are many bugs and the test is even wrong because the actual
> error code should be 0x200000006 (error while visiting page tables).
Well, it's good they were spotted. :-) I haven't actually looked at the
test code; I just saw that it fails for some reason.

Valentine

* Re: Nested paging in nested SVM setup
  2014-09-02  9:53           ` Valentine Sinitsyn
@ 2014-09-02 11:48             ` Paolo Bonzini
  0 siblings, 0 replies; 38+ messages in thread
From: Paolo Bonzini @ 2014-09-02 11:48 UTC (permalink / raw)
  To: Valentine Sinitsyn, Jan Kiszka, kvm

On 02/09/2014 11:53, Valentine Sinitsyn wrote:
> 
>      real_gpa = mmu->translate_gpa(vcpu, gfn_to_gpa(gfn), access);
>      if (real_gpa == UNMAPPED_GVA)
> -        return 0;
> +        goto error;
> 
>      walker->gfn = real_gpa >> PAGE_SHIFT;
> 
> So they should look like regular page faults (as they ought to, I guess)
> now.

Yes, they do look like regular page faults with this patch.  However,
they actually should look like nested page faults...  I'm starting to
clean up this stuff.
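
(A stand-alone sketch of the distinction, not KVM code: a fault caused
by L1's own nested page tables belongs to L1 and should surface there
as an #NPF #VMEXIT, while only faults in L2's own page tables should be
injected into L2 as #PF. The exit code 0x400 and the EXITINFO1/EXITINFO2
usage are architectural SVM; everything else is illustrative.)

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SVM_EXIT_NPF 0x400	/* architectural SVM exit code for nested page faults */
#define PF_VECTOR    14

struct fault {
	uint64_t address;	/* faulting guest-physical (or virtual) address */
	uint64_t error_code;
	bool     in_nested_pt;	/* did the walk fail in L1's nested page tables? */
};

static void deliver(const struct fault *f)
{
	if (f->in_nested_pt) {
		/* reflect to L1: #VMEXIT(NPF), EXITINFO1 = error code,
		 * EXITINFO2 = faulting guest-physical address */
		printf("#VMEXIT 0x%x to L1, exitinfo1=0x%llx, exitinfo2=0x%llx\n",
		       SVM_EXIT_NPF,
		       (unsigned long long)f->error_code,
		       (unsigned long long)f->address);
	} else {
		/* ordinary fault: inject #PF into the L2 guest */
		printf("inject #PF (vector %d) into L2, error code 0x%llx\n",
		       PF_VECTOR, (unsigned long long)f->error_code);
	}
}

int main(void)
{
	/* e.g. an L2 write to the read-only APIC page mapped by L1's NPT */
	struct fault apic_write = {
		.address = 0xfee000b0,
		.error_code = 0x2,
		.in_nested_pt = true,
	};

	deliver(&apic_write);
	return 0;
}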

Paolo

Thread overview: 38+ messages
2014-06-18 11:36 Nested paging in nested SVM setup Valentine Sinitsyn
2014-06-18 12:47 ` Jan Kiszka
2014-06-18 16:59   ` Valentine Sinitsyn
2014-06-19  9:32     ` Paolo Bonzini
2014-06-19  5:03   ` Valentine Sinitsyn
2014-08-20  6:46   ` Valentine Sinitsyn
2014-08-20  6:55     ` Paolo Bonzini
2014-08-20  7:37       ` Valentine Sinitsyn
2014-08-20  8:11         ` Paolo Bonzini
2014-08-20  9:49           ` Valentine Sinitsyn
2014-08-21  6:28           ` Valentine Sinitsyn
2014-08-21  8:48             ` Valentine Sinitsyn
2014-08-21 11:04               ` Paolo Bonzini
2014-08-21 11:06                 ` Jan Kiszka
2014-08-21 11:12                   ` Valentine Sinitsyn
2014-08-21 11:16                 ` Valentine Sinitsyn
2014-08-21 11:24               ` Paolo Bonzini
2014-08-21 12:28                 ` Valentine Sinitsyn
2014-08-21 12:38                   ` Valentine Sinitsyn
2014-08-21 13:40                   ` Valentine Sinitsyn
2014-09-01 17:41                   ` Paolo Bonzini
2014-09-01 19:21                     ` Valentine Sinitsyn
2014-09-02  8:25                       ` Paolo Bonzini
2014-09-02  9:16                         ` Valentine Sinitsyn
2014-09-02 11:21                           ` Paolo Bonzini
2014-09-02 11:26                             ` Valentine Sinitsyn
2014-08-21 17:35                 ` Valentine Sinitsyn
2014-08-21 20:31                   ` Paolo Bonzini
2014-08-22  4:33                     ` Valentine Sinitsyn
2014-08-22  8:53                       ` Paolo Bonzini
2014-09-01 16:11                       ` Paolo Bonzini
2014-09-01 17:04     ` Paolo Bonzini
2014-09-02  6:09       ` Valentine Sinitsyn
2014-09-02  6:21         ` Valentine Sinitsyn
2014-09-02  9:45         ` Paolo Bonzini
2014-09-02  9:53           ` Valentine Sinitsyn
2014-09-02 11:48             ` Paolo Bonzini
2014-09-02 10:31           ` Valentine Sinitsyn
