* possible I/O emulation state machine issue
@ 2018-03-22 15:12 Jan Beulich
  2018-03-22 15:29 ` Andrew Cooper
  0 siblings, 1 reply; 21+ messages in thread
From: Jan Beulich @ 2018-03-22 15:12 UTC (permalink / raw)
  To: Paul Durrant; +Cc: xen-devel

Paul,

our PV driver person has found a reproducible crash with ws2k8,
triggered by one of the WHQL tests. The guest gets crashed because
the re-issue check of an ioreq close to the top of hvmemul_do_io()
fails. I've handed him a first debugging patch, output of which
suggests that we're dealing with a completely new request, which
in turn would mean that we've run into stale STATE_IORESP_READY
state:

(XEN) d2v3: t=0/1 a=3c4/fed000f0 s=2/4 c=1/1 d=0/1 f=0/0 p=0/0 v=100/ffff831873f27a30
(XEN) ----[ Xen-4.10.0_15-0  x86_64  debug=n   Tainted:  C   ]----
(XEN) CPU:    39
(XEN) RIP:    e008:[<ffff82d0802d4b91>] emulate.c#hvmemul_do_io+0x1b1/0x640
(XEN) RFLAGS: 0000000000010292   CONTEXT: hypervisor (d2v3)
(XEN) rax: ffff8308797d802c   rbx: 0000000000000004   rcx: 0000000000000000
(XEN) rdx: ffff831873f27fff   rsi: 000000000000000a   rdi: ffff82d0804433b8
(XEN) rbp: ffff830007d28000   rsp: ffff831873f27728   r8:  0000000000000027
(XEN) r9:  0000000000100000   r10: 0000000000000400   r11: ffff82d08035bd40
(XEN) r12: 0000000000000001   r13: 0000000000000000   r14: 0000000000000001
(XEN) r15: ffff831873f278e0   cr0: 0000000080050033   cr4: 00000000000026e0
(XEN) cr3: 0000003794f02000   cr2: fffffa6000fae10e
(XEN) fsb: 0000000000000000   gsb: 0000000000000000   gss: 000007fffffdd000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen code around <ffff82d0802d4b91> (emulate.c#hvmemul_do_io+0x1b1/0x640):
(XEN)  54 24 70 e8 cf 87 f7 ff <0f> 0b 48 8d 3d 16 b6 0b 00 48 8d 35 88 f8 0c 00
(XEN) Xen stack trace from rsp=ffff831873f27728:
(XEN)    0000000000000002 0000000000000004 0000000000000001 0000000000000001
(XEN)    0000000000000000 0000000000000001 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000100 ffff831873f27a30
(XEN)    ffff83283fe74010 ffff83284ad22000 0000000000000000 0000000100000000
(XEN)    ffff831873f277d8 ffff831873f277e0 00000000000003c4 0000000000000100
(XEN)    0000000200000001 0000000000000000 ffff8317f8e5b000 0000000000000004
(XEN)    0000000000000001 ffff831873f27a30 ffff831873f27a30 00000000fed000f0
(XEN)    ffff830007d289c8 ffff82d0802d578e 0000000000000000 ffff831873f27a30
(XEN)    0000000000000000 0000000000000004 0000000000000004 0000000000000000
(XEN)    ffff831873f27a30 ffff82d0802d64dd ffff831873f27a30 ffff831873f27d10
(XEN)    00000000fed000f0 ffff831873f27a30 0100000000000003 0000000000000000
(XEN)    ffff831873f278e0 ffffffffffd070f0 0000000400000004 0000000000000004
(XEN)    0000000100000000 ffff831873f27c78 ffff831873f278d8 ffff831873f278d0
(XEN)    ffff831873f27938 00000000fed000f0 0000000000000001 0000000000000001
(XEN)    ffff82d080350ecb 0000000000000004 0000000000000001 ffff831873f27c78
(XEN)    ffff831873f27a30 0000000000000002 ffff830007d28000 ffff82d0802d69f1
(XEN)    0000000000000001 ffff82d0802a313d ffffffffffd070f0 0000000000000001
(XEN)    0000000000000000 00000000000000f0 ffff82d080350ecb ffff831873f27aa0
(XEN)    0000000000000000 ffff831873f27c78 ffff831873f27a28 ffff830007d28a60
(XEN)    ffff82d0803a7620 ffff82d0802a4aad ffff831873f279c8 ffff831873f27ac0
(XEN) Xen call trace:
(XEN)    [<ffff82d0802d4b91>] emulate.c#hvmemul_do_io+0x1b1/0x640
(XEN)    [<ffff82d0802d578e>] emulate.c#hvmemul_do_io_buffer+0x2e/0x70
(XEN)    [<ffff82d0802d64dd>] emulate.c#hvmemul_linear_mmio_access+0x24d/0x540
(XEN)    [<ffff82d080350ecb>] common_interrupt+0x9b/0x120
(XEN)    [<ffff82d0802d69f1>] emulate.c#__hvmemul_read+0x221/0x230
(XEN)    [<ffff82d0802a313d>] x86_emulate.c#x86_decode+0xe2d/0x1e50
(XEN)    [<ffff82d080350ecb>] common_interrupt+0x9b/0x120
(XEN)    [<ffff82d0802a4aad>] x86_emulate+0x94d/0x19150
(XEN)    [<ffff82d08030ebd1>] __get_gfn_type_access+0x101/0x290
(XEN)    [<ffff82d0802d7c0a>] emulate.c#_hvm_emulate_one+0x4a/0x1e0
(XEN)    [<ffff82d0803006e0>] vmx.c#vmx_get_interrupt_shadow+0/0x10
(XEN)    [<ffff82d0802d7a2e>] hvm_emulate_init_once+0x7e/0xb0
(XEN)    [<ffff82d0802e394b>] hvm_emulate_one_insn+0x3b/0x120
(XEN)    [<ffff82d0802bd3a0>] x86_insn_is_mem_access+0/0xc0
(XEN)    [<ffff82d0802dc5b8>] hvm_hap_nested_page_fault+0x138/0x710
(XEN)    [<ffff82d08023bdc0>] timer.c#add_entry+0x50/0xc0
(XEN)    [<ffff82d08030b5ab>] vmx_asm_vmexit_handler+0xab/0x240
(XEN)    [<ffff82d08030b59f>] vmx_asm_vmexit_handler+0x9f/0x240
(XEN)    [<ffff82d08030b5ab>] vmx_asm_vmexit_handler+0xab/0x240
(XEN)    [<ffff82d08030b5ab>] vmx_asm_vmexit_handler+0xab/0x240
(XEN)    [<ffff82d08030b59f>] vmx_asm_vmexit_handler+0x9f/0x240
(XEN)    [<ffff82d08030b5ab>] vmx_asm_vmexit_handler+0xab/0x240
(XEN)    [<ffff82d08030517e>] vmx_vmexit_handler+0x8ae/0x1960
(XEN)    [<ffff82d08030b5ab>] vmx_asm_vmexit_handler+0xab/0x240
(XEN)    [<ffff82d08030b5ab>] vmx_asm_vmexit_handler+0xab/0x240
(XEN)    [<ffff82d08030b59f>] vmx_asm_vmexit_handler+0x9f/0x240
(XEN)    [<ffff82d08030b5ab>] vmx_asm_vmexit_handler+0xab/0x240
(XEN)    [<ffff82d08030b59f>] vmx_asm_vmexit_handler+0x9f/0x240
(XEN)    [<ffff82d08030b5ab>] vmx_asm_vmexit_handler+0xab/0x240
(XEN)    [<ffff82d08030b59f>] vmx_asm_vmexit_handler+0x9f/0x240
(XEN)    [<ffff82d08030b5ab>] vmx_asm_vmexit_handler+0xab/0x240
(XEN)    [<ffff82d08030b59f>] vmx_asm_vmexit_handler+0x9f/0x240
(XEN)    [<ffff82d08030b5ab>] vmx_asm_vmexit_handler+0xab/0x240
(XEN)    [<ffff82d08030b5e2>] vmx_asm_vmexit_handler+0xe2/0x240
(XEN) 
(XEN) domain_crash called from emulate.c:171
(XEN) Domain 2 (vcpu#3) crashed on cpu#39:
(XEN) ----[ Xen-4.10.0_15-0  x86_64  debug=n   Tainted:  C   ]----
(XEN) CPU:    39
(XEN) RIP:    0010:[<fffff8000162411e>]
(XEN) RFLAGS: 0000000000010286   CONTEXT: hvm guest (d2v3)
(XEN) rax: ffffffffffd07000   rbx: 0000000000000003   rcx: 0000000a00005036
(XEN) rdx: 0000000002549700   rsi: fffffa80044b8990   rdi: 00000001adfbbe88
(XEN) rbp: fffffa6001145128   rsp: fffffa60019ffb58   r8:  00000000b57e152b
(XEN) r9:  0000000001d3c1ec   r10: fffff6fb7e980038   r11: 0000000000000003
(XEN) r12: fffffa80044b8990   r13: 0000000000000004   r14: 0000000001d3c1ec
(XEN) r15: fffffa60019dbc00   cr0: 0000000080050031   cr4: 00000000000006f8
(XEN) cr3: 0000000000124000   cr2: fffffa6000fae10e
(XEN) fsb: 00000000fffdf000   gsb: fffffa60019d8000   gss: 000007fffffae000
(XEN) ds: 002b   es: 002b   fs: 0053   gs: 002b   ss: 0018   cs: 0010

The elements in the first line are the recorded / actual values for
each of the fields the if() checks, in that same order (patch
below for reference). The stack trace also suggests to me that
we're not in the context of a re-issue (which IIRC would always
originate from hvm_do_resume()).

I'd appreciate any thoughts on the matter,
Jan

--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -164,7 +164,12 @@ static int hvmemul_do_io(
              (p.dir != dir) ||
              (p.df != df) ||
              (p.data_is_ptr != data_is_addr) )
+{//temp
+ printk("%pv: t=%d/%d a=%lx/%lx s=%x/%x c=%x/%lx d=%d/%d f=%d/%d p=%d/%d v=%lx/%lx\n", curr,
+        p.type, is_mmio, p.addr, addr, p.size, size, p.count, *reps, p.dir, dir, p.df, df, p.data_is_ptr, data_is_addr, p.data, data);
+ dump_execution_state();
             domain_crash(currd);
+}
 
         if ( data_is_addr )
             return X86EMUL_UNHANDLEABLE;


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

* Re: possible I/O emulation state machine issue
  2018-03-22 15:12 possible I/O emulation state machine issue Jan Beulich
@ 2018-03-22 15:29 ` Andrew Cooper
  2018-03-23  7:30   ` Jan Beulich
  0 siblings, 1 reply; 21+ messages in thread
From: Andrew Cooper @ 2018-03-22 15:29 UTC (permalink / raw)
  To: Jan Beulich, Paul Durrant; +Cc: xen-devel

On 22/03/18 15:12, Jan Beulich wrote:
> Paul,
>
> our PV driver person has found a reproducible crash with ws2k8,
> triggered by one of the WHQL tests. The guest gets crashed because
> the re-issue check of an ioreq close to the top of hvmemul_do_io()
> fails. I've handed him a first debugging patch, output of which
> suggests that we're dealing with a completely new request, which
> in turn would mean that we've run into stale STATE_IORESP_READY
> state:
>
> (XEN) d2v3: t=0/1 a=3c4/fed000f0 s=2/4 c=1/1 d=0/1 f=0/0 p=0/0 v=100/ffff831873f27a30
> (XEN) ----[ Xen-4.10.0_15-0  x86_64  debug=n   Tainted:  C   ]----

Irrespective of the issue at hand, can testing be tried with a debug
build to see if any of the assertions are hit?

~Andrew

* Re: possible I/O emulation state machine issue
  2018-03-22 15:29 ` Andrew Cooper
@ 2018-03-23  7:30   ` Jan Beulich
  2018-03-23 10:43     ` Paul Durrant
  0 siblings, 1 reply; 21+ messages in thread
From: Jan Beulich @ 2018-03-23  7:30 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: xen-devel, Paul Durrant

>>> On 22.03.18 at 16:29, <andrew.cooper3@citrix.com> wrote:
> On 22/03/18 15:12, Jan Beulich wrote:
>> Paul,
>>
>> our PV driver person has found a reproducible crash with ws2k8,
>> triggered by one of the WHQL tests. The guest gets crashed because
>> the re-issue check of an ioreq close to the top of hvmemul_do_io()
>> fails. I've handed him a first debugging patch, output of which
>> suggests that we're dealing with a completely new request, which
>> in turn would mean that we've run into stale STATE_IORESP_READY
>> state:
>>
>> (XEN) d2v3: t=0/1 a=3c4/fed000f0 s=2/4 c=1/1 d=0/1 f=0/0 p=0/0 
> v=100/ffff831873f27a30
>> (XEN) ----[ Xen-4.10.0_15-0  x86_64  debug=n   Tainted:  C   ]----
> 
> Irrespective of the issue at hand, can testing be tried with a debug
> build to see if any of the assertions are hit?

Nothing, unfortunately. But at least the stack trace can be relied
upon this way.

Jan

(XEN) d2v3: t=0/1 a=3ce/fed000f0 s=2/4 c=1/1 d=0/1 f=0/0 p=0/0 v=406/ffff83387d21fa30
(XEN) ----[ Xen-4.10.0_15-0  x86_64  debug=y   Tainted:  C   ]----
(XEN) CPU:    62
(XEN) RIP:    e008:[<ffff82d0802e58fc>] emulate.c#hvmemul_do_io+0x169/0x445
(XEN) RFLAGS: 0000000000010292   CONTEXT: hypervisor (d2v3)
(XEN) rax: ffff8308796f602c   rbx: ffff830007d26000   rcx: 0000000000000000
(XEN) rdx: ffff83387d21ffff   rsi: 000000000000000a   rdi: ffff82d0804823b8
(XEN) rbp: ffff83387d21f788   rsp: ffff83387d21f6a8   r8:  ffff830879d00000
(XEN) r9:  0000000000000030   r10: 000000000000000f   r11: 00000000ffffffee
(XEN) r12: 0000000000000004   r13: 0000000000000000   r14: ffff83387d21f850
(XEN) r15: 0000000000000001   cr0: 0000000080050033   cr4: 00000000000026e0
(XEN) cr3: 00000037d2026000   cr2: fffff880051d17ac
(XEN) fsb: 0000000000000000   gsb: 0000000000000000   gss: 000007fffff98000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen code around <ffff82d0802e58fc> (emulate.c#hvmemul_do_io+0x169/0x445):
(XEN)  00 00 00 e8 f5 e2 f6 ff <0f> 0b 48 83 c4 60 ba ab 00 00 00 48 8d 35 89 cd
(XEN) Xen stack trace from rsp=ffff83387d21f6a8:
(XEN)    0000000000000002 0000000000000004 0000000000000001 0000000000000001
(XEN)    0000000000000000 0000000000000001 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000406 ffff83387d21fa30
(XEN)    ffff83387d21f798 00000000fed000f0 ffff831187beb000 000000017d21f914
(XEN)    0000000000000000 000000000000014c 00000000000003ce 0000000000000406
(XEN)    0000000200000001 0000000000000000 000000000000014c 0000000000000004
(XEN)    0000000000000001 ffff83387d21fa30 00000000fed000f0 ffff830007d269c8
(XEN)    ffff83387d21f7c8 ffff82d0802e5c06 0000000000000000 ffff83387d21fa30
(XEN)    0000000000000004 0000000000000004 0000000000000000 00000000fed000f0
(XEN)    ffff83387d21f898 ffff82d0802e6264 ffff83387d21fa30 0000000000000003
(XEN)    ffff83387d21f860 ffff83387d21f858 0000000000000004 ffffffffffd070f0
(XEN)    0000000000000003 ffff83387d21fc58 0000000400000001 ffff83387d21f850
(XEN)    0000000000000000 ffff83387d21fa30 01ff833800000004 ffff83387d21fa30
(XEN)    0000000000001b41 0000000000000001 0000000000000001 00000000fed000f0
(XEN)    ffff8318ad778380 0000000000000004 ffff83387d21fc58 0000000000000001
(XEN)    0000000000000002 ffff830007d26000 ffff83387d21f918 ffff82d0802e725f
(XEN)    0000000000000001 0000000000000000 ffff82d0803e6da0 ffff83387d21fa30
(XEN)    0000000000000001 ffffffffffd070f0 0000000000861efd 0000000000000004
(XEN)    000000000000008b ffff83387d21f9c0 ffff83387d21fc58 ffff82d0803e6da0
(XEN)    ffff83387d21fef8 000000000000008b ffff83387d21f928 ffff82d0802e73c3
(XEN) Xen call trace:
(XEN)    [<ffff82d0802e58fc>] emulate.c#hvmemul_do_io+0x169/0x445
(XEN)    [<ffff82d0802e5c06>] emulate.c#hvmemul_do_io_buffer+0x2e/0x68
(XEN)    [<ffff82d0802e6264>] emulate.c#hvmemul_linear_mmio_access+0x2b9/0x3fc
(XEN)    [<ffff82d0802e725f>] emulate.c#__hvmemul_read+0x163/0x1fa
(XEN)    [<ffff82d0802e73c3>] emulate.c#hvmemul_read+0x1c/0x2a
(XEN)    [<ffff82d0802ab5e2>] x86_emulate.c#read_ulong+0x13/0x15
(XEN)    [<ffff82d0802aeeb1>] x86_emulate+0x47d/0x1efa3
(XEN)    [<ffff82d0802cd9fd>] x86_emulate_wrapper+0x26/0x5f
(XEN)    [<ffff82d0802e6cf0>] emulate.c#_hvm_emulate_one+0x54/0x173
(XEN)    [<ffff82d0802e6e1f>] hvm_emulate_one+0x10/0x12
(XEN)    [<ffff82d0802f4cc3>] hvm_emulate_one_insn+0x42/0x130
(XEN)    [<ffff82d0802f4e00>] handle_mmio_with_translation+0x4f/0x51
(XEN)    [<ffff82d0802ec25e>] hvm_hap_nested_page_fault+0x1e4/0x6b6
(XEN)    [<ffff82d0803191ea>] vmx_vmexit_handler+0x1796/0x1d3d
(XEN)    [<ffff82d08031e6e8>] vmx_asm_vmexit_handler+0xe8/0x250
(XEN) 
(XEN) domain_crash called from emulate.c:171
(XEN) Domain 2 (vcpu#3) crashed on cpu#62:
(XEN) ----[ Xen-4.10.0_15-0  x86_64  debug=y   Tainted:  C   ]----
(XEN) CPU:    62
(XEN) RIP:    0010:[<fffff80001b4111e>]
(XEN) RFLAGS: 0000000000010046   CONTEXT: hvm guest (d2v3)
(XEN) rax: ffffffffffd07000   rbx: 0000000000000000   rcx: 0000000c800022a5
(XEN) rdx: fffffffffffffd5b   rsi: fffffa60019dba00   rdi: fffffa80049bf3e0
(XEN) rbp: fffffa80049da440   rsp: fffffa60019ffcd8   r8:  0000000000000000
(XEN) r9:  0000000000000001   r10: 0000000000000000   r11: 0000000000000000
(XEN) r12: 0000000000000000   r13: 0000000000000912   r14: fffffa8003c20950
(XEN) r15: 0000000000019274   cr0: 0000000080050031   cr4: 00000000000006f8
(XEN) cr3: 0000000000124000   cr2: fffff880051d17ac
(XEN) fsb: 00000000fff9a000   gsb: fffffa60019d8000   gss: 000007fffffa2000
(XEN) ds: 002b   es: 002b   fs: 0053   gs: 002b   ss: 0018   cs: 0010


* Re: possible I/O emulation state machine issue
  2018-03-23  7:30   ` Jan Beulich
@ 2018-03-23 10:43     ` Paul Durrant
  2018-03-23 11:19       ` Jan Beulich
  2018-03-23 11:35       ` Jan Beulich
  0 siblings, 2 replies; 21+ messages in thread
From: Paul Durrant @ 2018-03-23 10:43 UTC (permalink / raw)
  To: 'Jan Beulich', Andrew Cooper; +Cc: xen-devel

> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf
> Of Jan Beulich
> Sent: 23 March 2018 07:30
> To: Andrew Cooper <Andrew.Cooper3@citrix.com>
> Cc: xen-devel <xen-devel@lists.xenproject.org>; Paul Durrant
> <Paul.Durrant@citrix.com>
> Subject: Re: [Xen-devel] possible I/O emulation state machine issue
> 
> >>> On 22.03.18 at 16:29, <andrew.cooper3@citrix.com> wrote:
> > On 22/03/18 15:12, Jan Beulich wrote:
> >> Paul,
> >>
> >> our PV driver person has found a reproducible crash with ws2k8,
> >> triggered by one of the WHQL tests. The guest gets crashed because
> >> the re-issue check of an ioreq close to the top of hvmemul_do_io()
> >> fails. I've handed him a first debugging patch, output of which
> >> suggests that we're dealing with a completely new request, which
> >> in turn would mean that we've run into stale STATE_IORESP_READY
> >> state:
> >>
> >> (XEN) d2v3: t=0/1 a=3c4/fed000f0 s=2/4 c=1/1 d=0/1 f=0/0 p=0/0
> > v=100/ffff831873f27a30
> >> (XEN) ----[ Xen-4.10.0_15-0  x86_64  debug=n   Tainted:  C   ]----
> >
> > Irrespective of the issue at hand, can testing be tried with a debug
> > build to see if any of the assertions are hit?
> 
> Nothing, unfortunately. But at least the stack trace can be relied
> upon this way.
> 

Jan,

  I'm assuming the debug line above shows the recorded value before the
'/' and the actual (current) value after? In which case it looks like an
MMIO to the HPET (I think that's what's at 0xfed000f0) clashing with a
port I/O to the graphics device. So why is the HPET emulation making it
to QEMU? Are you trying to run Windows with Xen's HPET emulation turned
on?

  Paul

> Jan
> 
> (XEN) d2v3: t=0/1 a=3ce/fed000f0 s=2/4 c=1/1 d=0/1 f=0/0 p=0/0
> v=406/ffff83387d21fa30
> (XEN) ----[ Xen-4.10.0_15-0  x86_64  debug=y   Tainted:  C   ]----
> (XEN) CPU:    62
> (XEN) RIP:    e008:[<ffff82d0802e58fc>]
> emulate.c#hvmemul_do_io+0x169/0x445
> (XEN) RFLAGS: 0000000000010292   CONTEXT: hypervisor (d2v3)
> (XEN) rax: ffff8308796f602c   rbx: ffff830007d26000   rcx: 0000000000000000
> (XEN) rdx: ffff83387d21ffff   rsi: 000000000000000a   rdi: ffff82d0804823b8
> (XEN) rbp: ffff83387d21f788   rsp: ffff83387d21f6a8   r8:  ffff830879d00000
> (XEN) r9:  0000000000000030   r10: 000000000000000f   r11: 00000000ffffffee
> (XEN) r12: 0000000000000004   r13: 0000000000000000   r14: ffff83387d21f850
> (XEN) r15: 0000000000000001   cr0: 0000000080050033   cr4: 00000000000026e0
> (XEN) cr3: 00000037d2026000   cr2: fffff880051d17ac
> (XEN) fsb: 0000000000000000   gsb: 0000000000000000   gss: 000007fffff98000
> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
> (XEN) Xen code around <ffff82d0802e58fc>
> (emulate.c#hvmemul_do_io+0x169/0x445):
> (XEN)  00 00 00 e8 f5 e2 f6 ff <0f> 0b 48 83 c4 60 ba ab 00 00 00 48 8d 35 89 cd
> (XEN) Xen stack trace from rsp=ffff83387d21f6a8:
> (XEN)    0000000000000002 0000000000000004 0000000000000001
> 0000000000000001
> (XEN)    0000000000000000 0000000000000001 0000000000000000
> 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000406
> ffff83387d21fa30
> (XEN)    ffff83387d21f798 00000000fed000f0 ffff831187beb000
> 000000017d21f914
> (XEN)    0000000000000000 000000000000014c 00000000000003ce
> 0000000000000406
> (XEN)    0000000200000001 0000000000000000 000000000000014c
> 0000000000000004
> (XEN)    0000000000000001 ffff83387d21fa30 00000000fed000f0
> ffff830007d269c8
> (XEN)    ffff83387d21f7c8 ffff82d0802e5c06 0000000000000000
> ffff83387d21fa30
> (XEN)    0000000000000004 0000000000000004 0000000000000000
> 00000000fed000f0
> (XEN)    ffff83387d21f898 ffff82d0802e6264 ffff83387d21fa30
> 0000000000000003
> (XEN)    ffff83387d21f860 ffff83387d21f858 0000000000000004 ffffffffffd070f0
> (XEN)    0000000000000003 ffff83387d21fc58 0000000400000001
> ffff83387d21f850
> (XEN)    0000000000000000 ffff83387d21fa30 01ff833800000004
> ffff83387d21fa30
> (XEN)    0000000000001b41 0000000000000001 0000000000000001
> 00000000fed000f0
> (XEN)    ffff8318ad778380 0000000000000004 ffff83387d21fc58
> 0000000000000001
> (XEN)    0000000000000002 ffff830007d26000 ffff83387d21f918
> ffff82d0802e725f
> (XEN)    0000000000000001 0000000000000000 ffff82d0803e6da0
> ffff83387d21fa30
> (XEN)    0000000000000001 ffffffffffd070f0 0000000000861efd
> 0000000000000004
> (XEN)    000000000000008b ffff83387d21f9c0 ffff83387d21fc58
> ffff82d0803e6da0
> (XEN)    ffff83387d21fef8 000000000000008b ffff83387d21f928
> ffff82d0802e73c3
> (XEN) Xen call trace:
> (XEN)    [<ffff82d0802e58fc>] emulate.c#hvmemul_do_io+0x169/0x445
> (XEN)    [<ffff82d0802e5c06>] emulate.c#hvmemul_do_io_buffer+0x2e/0x68
> (XEN)    [<ffff82d0802e6264>]
> emulate.c#hvmemul_linear_mmio_access+0x2b9/0x3fc
> (XEN)    [<ffff82d0802e725f>] emulate.c#__hvmemul_read+0x163/0x1fa
> (XEN)    [<ffff82d0802e73c3>] emulate.c#hvmemul_read+0x1c/0x2a
> (XEN)    [<ffff82d0802ab5e2>] x86_emulate.c#read_ulong+0x13/0x15
> (XEN)    [<ffff82d0802aeeb1>] x86_emulate+0x47d/0x1efa3
> (XEN)    [<ffff82d0802cd9fd>] x86_emulate_wrapper+0x26/0x5f
> (XEN)    [<ffff82d0802e6cf0>] emulate.c#_hvm_emulate_one+0x54/0x173
> (XEN)    [<ffff82d0802e6e1f>] hvm_emulate_one+0x10/0x12
> (XEN)    [<ffff82d0802f4cc3>] hvm_emulate_one_insn+0x42/0x130
> (XEN)    [<ffff82d0802f4e00>] handle_mmio_with_translation+0x4f/0x51
> (XEN)    [<ffff82d0802ec25e>] hvm_hap_nested_page_fault+0x1e4/0x6b6
> (XEN)    [<ffff82d0803191ea>] vmx_vmexit_handler+0x1796/0x1d3d
> (XEN)    [<ffff82d08031e6e8>] vmx_asm_vmexit_handler+0xe8/0x250
> (XEN)
> (XEN) domain_crash called from emulate.c:171
> (XEN) Domain 2 (vcpu#3) crashed on cpu#62:
> (XEN) ----[ Xen-4.10.0_15-0  x86_64  debug=y   Tainted:  C   ]----
> (XEN) CPU:    62
> (XEN) RIP:    0010:[<fffff80001b4111e>]
> (XEN) RFLAGS: 0000000000010046   CONTEXT: hvm guest (d2v3)
> (XEN) rax: ffffffffffd07000   rbx: 0000000000000000   rcx: 0000000c800022a5
> (XEN) rdx: fffffffffffffd5b   rsi: fffffa60019dba00   rdi: fffffa80049bf3e0
> (XEN) rbp: fffffa80049da440   rsp: fffffa60019ffcd8   r8:  0000000000000000
> (XEN) r9:  0000000000000001   r10: 0000000000000000   r11: 0000000000000000
> (XEN) r12: 0000000000000000   r13: 0000000000000912   r14: fffffa8003c20950
> (XEN) r15: 0000000000019274   cr0: 0000000080050031   cr4: 00000000000006f8
> (XEN) cr3: 0000000000124000   cr2: fffff880051d17ac
> (XEN) fsb: 00000000fff9a000   gsb: fffffa60019d8000   gss: 000007fffffa2000
> (XEN) ds: 002b   es: 002b   fs: 0053   gs: 002b   ss: 0018   cs: 0010
> 
> 

* Re: possible I/O emulation state machine issue
  2018-03-23 10:43     ` Paul Durrant
@ 2018-03-23 11:19       ` Jan Beulich
  2018-03-23 11:35       ` Jan Beulich
  1 sibling, 0 replies; 21+ messages in thread
From: Jan Beulich @ 2018-03-23 11:19 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, xen-devel

>>> On 23.03.18 at 11:43, <Paul.Durrant@citrix.com> wrote:
>> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf
>> Of Jan Beulich
>> Sent: 23 March 2018 07:30
>> 
>> >>> On 22.03.18 at 16:29, <andrew.cooper3@citrix.com> wrote:
>> > On 22/03/18 15:12, Jan Beulich wrote:
>> >> our PV driver person has found a reproducible crash with ws2k8,
>> >> triggered by one of the WHQL tests. The guest gets crashed because
>> >> the re-issue check of an ioreq close to the top of hvmemul_do_io()
>> >> fails. I've handed him a first debugging patch, output of which
>> >> suggests that we're dealing with a completely new request, which
>> >> in turn would mean that we've run into stale STATE_IORESP_READY
>> >> state:
>> >>
>> >> (XEN) d2v3: t=0/1 a=3c4/fed000f0 s=2/4 c=1/1 d=0/1 f=0/0 p=0/0
>> > v=100/ffff831873f27a30
>> >> (XEN) ----[ Xen-4.10.0_15-0  x86_64  debug=n   Tainted:  C   ]----
>> >
>> > Irrespective of the issue at hand, can testing be tried with a debug
>> > build to see if any of the assertions are hit?
>> 
>> Nothing, unfortunately. But at least the stack trace can be relied
>> upon this way.
> 
>   I'm assuming the debug line above is indicating the former emulation 
> before the '/' and the latter after?

Yes (to clarify: this is why I had included the patch as well).

> In which case it looks like an MMIO to 
> the HPET (I think that's what's at 0xfed000f0) clashing with a port IO to the 
> graphics device.

That's what I had concluded too.

> So, why is the HPET emulation making it to QEMU? Are you 
> trying to run Windows with Xen's HPET emulation turned on?

DYM "off"? In any event I don't think he's having any special settings
in place, but I'll double check. Yet if there really was "hpet=0" in the
guest config file, things should still work, shouldn't they? I'd rather
take this as a hint that hpet_range() suddenly isn't reached anymore,
perhaps because of some other address range getting inserted
which supersedes the HPET one.

In that context it may become relevant to mention that this happens
when, in the course of the test, the LAN driver gets unloaded and
then reloaded (i.e. it's the reload which triggers the issue). He's
calling the test "AddressChange test", and now I start wondering
whether this isn't a change of the NIC address, but a change of
addresses within the MMIO window. I've asked for clarification of
that as well.

Supposedly all was fine with 4.9, but I'll also ask to make sure it
really is.

Jan


* Re: possible I/O emulation state machine issue
  2018-03-23 10:43     ` Paul Durrant
  2018-03-23 11:19       ` Jan Beulich
@ 2018-03-23 11:35       ` Jan Beulich
  2018-03-23 13:41         ` Paul Durrant
  1 sibling, 1 reply; 21+ messages in thread
From: Jan Beulich @ 2018-03-23 11:35 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, xen-devel

>>> On 23.03.18 at 11:43, <Paul.Durrant@citrix.com> wrote:
>> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf
>> Of Jan Beulich
>> Sent: 23 March 2018 07:30
>> 
>> >>> On 22.03.18 at 16:29, <andrew.cooper3@citrix.com> wrote:
>> > On 22/03/18 15:12, Jan Beulich wrote:
>> >> Paul,
>> >>
>> >> our PV driver person has found a reproducible crash with ws2k8,
>> >> triggered by one of the WHQL tests. The guest gets crashed because
>> >> the re-issue check of an ioreq close to the top of hvmemul_do_io()
>> >> fails. I've handed him a first debugging patch, output of which
>> >> suggests that we're dealing with a completely new request, which
>> >> in turn would mean that we've run into stale STATE_IORESP_READY
>> >> state:
>> >>
>> >> (XEN) d2v3: t=0/1 a=3c4/fed000f0 s=2/4 c=1/1 d=0/1 f=0/0 p=0/0
>> > v=100/ffff831873f27a30
>> >> (XEN) ----[ Xen-4.10.0_15-0  x86_64  debug=n   Tainted:  C   ]----
>> >
>> > Irrespective of the issue at hand, can testing be tried with a debug
>> > build to see if any of the assertions are hit?
>> 
>> Nothing, unfortunately. But at least the stack trace can be relied
>> upon this way.
> 
>   I'm assuming the debug line above is indicating the former emulation 
> before the '/' and the latter after? In which case it looks like an MMIO to 
> the HPET (I think that's what's at 0xfed000f0) clashing with a port IO to the 
> graphics device. So, why is the HPET emulation making it to QEMU? Are you 
> trying to run Windows with Xen's HPET emulation turned on?

Actually I think I'm confused by your reply. Why are you talking about
qemu? Said check sits above hvm_io_intercept(), so the code in question
runs for both internally handled and forwarded requests. The question
for me rather is why we see a HPET access when the prior VGA one
apparently wasn't fully finished yet.

The exact port number of the earlier access isn't stable (above you
see 3c4, but the other (debug) output had 3ce). These are the two ports
stdvga.c intercepts without actually handling the accesses. The
consistent part is that it's a VGA port write followed by a HPET read.

Yet in no event can I make any connection (yet) to our internal state
getting screwed during a driver reload in a guest.

Jan


* Re: possible I/O emulation state machine issue
  2018-03-23 11:35       ` Jan Beulich
@ 2018-03-23 13:41         ` Paul Durrant
  2018-03-23 15:09           ` Jan Beulich
  2018-03-26  8:42           ` Jan Beulich
  0 siblings, 2 replies; 21+ messages in thread
From: Paul Durrant @ 2018-03-23 13:41 UTC (permalink / raw)
  To: 'Jan Beulich'; +Cc: Andrew Cooper, xen-devel

> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf
> Of Jan Beulich
> Sent: 23 March 2018 11:36
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; xen-devel <xen-
> devel@lists.xenproject.org>
> Subject: Re: [Xen-devel] possible I/O emulation state machine issue
> 
> >>> On 23.03.18 at 11:43, <Paul.Durrant@citrix.com> wrote:
> >> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On
> Behalf
> >> Of Jan Beulich
> >> Sent: 23 March 2018 07:30
> >>
> >> >>> On 22.03.18 at 16:29, <andrew.cooper3@citrix.com> wrote:
> >> > On 22/03/18 15:12, Jan Beulich wrote:
> >> >> Paul,
> >> >>
> >> >> our PV driver person has found a reproducible crash with ws2k8,
> >> >> triggered by one of the WHQL tests. The guest gets crashed because
> >> >> the re-issue check of an ioreq close to the top of hvmemul_do_io()
> >> >> fails. I've handed him a first debugging patch, output of which
> >> >> suggests that we're dealing with a completely new request, which
> >> >> in turn would mean that we've run into stale STATE_IORESP_READY
> >> >> state:
> >> >>
> >> >> (XEN) d2v3: t=0/1 a=3c4/fed000f0 s=2/4 c=1/1 d=0/1 f=0/0 p=0/0
> >> > v=100/ffff831873f27a30
> >> >> (XEN) ----[ Xen-4.10.0_15-0  x86_64  debug=n   Tainted:  C   ]----
> >> >
> >> > Irrespective of the issue at hand, can testing be tried with a debug
> >> > build to see if any of the assertions are hit?
> >>
> >> Nothing, unfortunately. But at least the stack trace can be relied
> >> upon this way.
> >
> >   I'm assuming the debug line above is indicating the former emulation
> > before the '/' and the latter after? In which case it looks like an MMIO to
> > the HPET (I think that's what's at 0xfed000f0) clashing with a port IO to the
> > graphics device. So, why is the HPET emulation making it to QEMU? Are you
> > trying to run Windows with Xen's HPET emulation turned on?
> 
> Actually I think I'm confused by your reply. Why are you talking about
> qemu? Said check sits above hvm_io_intercept(), so the code in question
> runs for both internally handled and forwarded requests. The question
> for me rather is why we see a HPET access when the prior VGA one
> apparently wasn't fully finished yet.

Ah that's true. We will do the check based on the response state even if the next IO is going to be dealt with internally. So, yes, the real question is why the previous I/O was completed without apparently waiting for QEMU to finish.
We should have sent the VGA PIO out to QEMU, resulting in hvm_vcpu_io_need_completion() returning true in handle_pio() meaning that vio->io_completion gets set to HVMIO_pio_completion. We should then return true from handle_pio() resulting in RIP being advanced when we return to guest, but we should not get back into the guest because hvm_do_resume() should see the pending IO flag on one of the ioreq server vcpus and block on the relevant event channel.
So somehow it appears the vcpu got back into guest and executed the next instruction whilst there was pending I/O.

> 
> The exact port number of the earlier access isn't stable (above you see
> 3c4, but the other (debug) output had 3ce. These are the two ports
> stdvga.c intercepts without actually handling the accesses. The
> consistent part is that it's a VGA port write followed by a HPET read.
> 
> Yet in no event can I make any connection (yet) to our internal state
> getting screwed during a driver reload in a guest.
> 

No, I can't see any connection there at all.

  Paul

> Jan
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xenproject.org
> https://lists.xenproject.org/mailman/listinfo/xen-devel

* Re: possible I/O emulation state machine issue
  2018-03-23 13:41         ` Paul Durrant
@ 2018-03-23 15:09           ` Jan Beulich
  2018-03-26  8:42           ` Jan Beulich
  1 sibling, 0 replies; 21+ messages in thread
From: Jan Beulich @ 2018-03-23 15:09 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, xen-devel

>>> On 23.03.18 at 14:41, <Paul.Durrant@citrix.com> wrote:
> Ah that's true. We will do the check based on the response state even if the 
> next IO is going to be dealt with internally. So, yes, the real question is 
> why the previous I/O was completed without apparently waiting for QEMU to 
> finish.
> We should have sent the VGA PIO out to QEMU, resulting in 
> hvm_vcpu_io_need_completion() returning true in handle_pio() meaning that 
> vio->io_completion gets set to HVMIO_pio_completion. We should then return 
> true from handle_pio() resulting in RIP being advanced when we return to 
> guest, but we should not get back into the guest because hvm_do_resume() 
> should see the pending IO flag on one of the ioreq server vcpus and block on 
> the relevant event channel.
> So somehow it appears the vcpu got back into guest and executed the next 
> instruction whilst there was pending I/O.

I've extended my debugging patch to check vio->io_req.state for
being other than STATE_IOREQ_NONE first thing in the VMEXIT handler
as well as first and last thing in vmx_vmenter_helper(). If you have
any other ideas where to place sanity checks, I'm all ears.
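A minimal sketch of the kind of sanity check meant here — the state names mirror Xen's, but check_io_state(), its enum values, and the log format are the sketch's own inventions, not the actual debugging patch:

```c
#include <stdio.h>

enum ioreq_state { STATE_IOREQ_NONE, STATE_IOREQ_READY,
                   STATE_IOREQ_INPROCESS, STATE_IORESP_READY };

/* Returns 1 (and logs) when emulation state has leaked across a guest
 * entry/exit boundary; 0 when the state machine is idle as expected. */
static int check_io_state(const char *where, enum ioreq_state s)
{
    if (s == STATE_IOREQ_NONE)
        return 0;
    printf("%s: unexpected io_req.state %d\n", where, (int)s);
    return 1;
}
```

Placing such a check both at VMEXIT and just before VMENTER narrows down on which side of the guest-entry boundary the stale state first appears.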

If that doesn't help, I guess I'll have to pull out a bigger hammer
and log recent ioreq-s handled (and perhaps individual steps
thereof) to see if their sequence rings any bell.

Jan



* Re: possible I/O emulation state machine issue
  2018-03-23 13:41         ` Paul Durrant
  2018-03-23 15:09           ` Jan Beulich
@ 2018-03-26  8:42           ` Jan Beulich
  2018-03-28 13:48             ` Paul Durrant
  1 sibling, 1 reply; 21+ messages in thread
From: Jan Beulich @ 2018-03-26  8:42 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, xen-devel

>>> On 23.03.18 at 14:41, <Paul.Durrant@citrix.com> wrote:
> So somehow it appears the vcpu got back into guest and executed the next 
> instruction whilst there was pending I/O.

Two new pieces of information, in case either rings a bell:

The issue appears to never occur in hap=0 mode.

After having added I/O emulation state checks at the beginning of
vmx_vmexit_handler() as well as very early and very late in
vmx_vmenter_helper(), it was the one early in
vmx_vmenter_helper() which triggered (still seeing the VGA port
access in STATE_IORESP_READY while vio->io_completion was
HVMIO_no_completion).

Jan



* Re: possible I/O emulation state machine issue
  2018-03-26  8:42           ` Jan Beulich
@ 2018-03-28 13:48             ` Paul Durrant
  2018-03-28 14:08               ` Jan Beulich
  2018-03-28 15:59               ` Jan Beulich
  0 siblings, 2 replies; 21+ messages in thread
From: Paul Durrant @ 2018-03-28 13:48 UTC (permalink / raw)
  To: 'Jan Beulich'; +Cc: Andrew Cooper, xen-devel

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 26 March 2018 09:43
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; xen-devel <xen-
> devel@lists.xenproject.org>
> Subject: Re: possible I/O emulation state machine issue
> 
> >>> On 23.03.18 at 14:41, <Paul.Durrant@citrix.com> wrote:
> > So somehow it appears the vcpu got back into guest and executed the next
> > instruction whilst there was pending I/O.
> 
> Two new pieces of information, in case either rings a bell:
> 

Alas neither rings a bell.

> The issue appears to never occur in hap=0 mode.
> 

That's quite an odd correlation.

> After having added I/O emulation state checks at the beginning of
> vmx_vmexit_handler() as well as very early and very late in
> vmx_vmenter_helper(), it was the one early in
> vmx_vmenter_helper() which triggered (still seeing the VGA port
> access in STATE_IORESP_READY while vio->io_completion was
> HVMIO_no_completion).
> 

The same test (hvm_vcpu_io_need_completion()) is used in handle_pio() to set the completion handler and in hvm_io_assist() to set the state to IORESP_READY. The only place the internal state gets set to IORESP_READY is in hvm_io_assist(), so the fact that you see a disparity between the state and the completion handler is very odd. Perhaps it would be worthwhile to add an ASSERT in hvm_io_assist() to ensure there really is a completion handler in place before setting the internal state to IORESP_READY.
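The suggested ASSERT could be modelled like this — model_io_assist() is a hypothetical stand-in for the tail of hvm_io_assist(), reusing Xen's enum names only:

```c
#include <assert.h>

enum ioreq_state { STATE_IOREQ_NONE, STATE_IORESP_READY };
enum hvm_io_completion { HVMIO_no_completion, HVMIO_pio_completion };

struct vcpu_io {
    enum ioreq_state state;
    enum hvm_io_completion io_completion;
};

/* Refuse to mark a response ready unless a completion handler was
 * latched beforehand -- the proposed sanity check. */
static void model_io_assist(struct vcpu_io *vio)
{
    assert(vio->io_completion != HVMIO_no_completion);
    vio->state = STATE_IORESP_READY;
}
```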

  Paul

> Jan



* Re: possible I/O emulation state machine issue
  2018-03-28 13:48             ` Paul Durrant
@ 2018-03-28 14:08               ` Jan Beulich
  2018-03-28 14:20                 ` Paul Durrant
  2018-03-28 15:59               ` Jan Beulich
  1 sibling, 1 reply; 21+ messages in thread
From: Jan Beulich @ 2018-03-28 14:08 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, xen-devel

>>> On 28.03.18 at 15:48, <Paul.Durrant@citrix.com> wrote:
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: 26 March 2018 09:43
>> 
>> After having added I/O emulation state checks at the beginning of
>> vmx_vmexit_handler() as well as very early and very late in
>> vmx_vmenter_helper(), it was the one early in
>> vmx_vmenter_helper() which triggered (still seeing the VGA port
>> access in STATE_IORESP_READY while vio->io_completion was
>> HVMIO_no_completion).
>> 
> 
> The same test is used (hvm_vcpu_io_need_completion()) in handle_pio() to set 
> the completion handler and in hvm_io_assist() to set the state to 
> IORESP_READY. The only place the internal state gets set to IORESP_READY is 
> in hvm_io_assist() so the fact that you see a disparity between the state and 
> the completion handler is very odd. Perhaps it might be worth adding an 
> ASSERT into hvm_io_assist() to ensure there really is a completion handler in 
> place before setting the internal state to IORESP_READY would be worthwhile.

Further extended logging appears to confirm there's no issue in that
direction. While I haven't been able to draw useful conclusions from
that further logging (towards a fix), the exact conditions when this
triggers have become more clear: It's the last iteration of a REP OUTSW
to either of the two VGA port ranges stdvga.c intercepts, and I've
begun to think it might be connected to the way the insn emulator
deals with such single-iteration operations (breaking them up into a
memory read and an I/O write in the case here).
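A toy illustration of that split — one OUTSW iteration becoming a guest-memory read followed by a port write. All scaffolding here is invented; the port handler is a stub and count is assumed to fit the fixed buffer:

```c
#include <stdint.h>
#include <string.h>

static uint16_t port_log[16];
static unsigned int port_writes;

/* Stub for the port-I/O half of the operation. */
static void model_port_write(uint16_t port, uint16_t val)
{
    (void)port;
    port_log[port_writes++] = val;
}

/* One (short) REP OUTSW: step 1 reads the source words from memory,
 * step 2 emits them as port writes.  count is assumed to be <= 16. */
static void model_rep_outsw(uint16_t port, const uint16_t *src,
                            unsigned int count)
{
    uint16_t buf[16];
    memcpy(buf, src, count * sizeof(*src));   /* memory read */
    for (unsigned int i = 0; i < count; i++)
        model_port_write(port, buf[i]);       /* I/O write(s) */
}
```

The hazard being chased arises between the two steps: if the port write has to wait on an external emulator, the memory read may be repeated on retry under changed guest page tables.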

I've simulated this by way of an XTF test, though, and all is fine
there. Together with this not being reliable to reproduce (guest
crashes in one of 5-10 attempts) there clearly must be some other
factor here.

One thing I started to wonder about is why we run these insns
through the full emulator in the first place. But perhaps that's just
because we hope this code won't be used much, and hence the
simplest possible solution code-wise ought to do.

Jan



* Re: possible I/O emulation state machine issue
  2018-03-28 14:08               ` Jan Beulich
@ 2018-03-28 14:20                 ` Paul Durrant
  2018-03-28 15:32                   ` Jan Beulich
  0 siblings, 1 reply; 21+ messages in thread
From: Paul Durrant @ 2018-03-28 14:20 UTC (permalink / raw)
  To: 'Jan Beulich'; +Cc: Andrew Cooper, xen-devel

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 28 March 2018 15:08
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; xen-devel <xen-
> devel@lists.xenproject.org>
> Subject: RE: possible I/O emulation state machine issue
> 
> >>> On 28.03.18 at 15:48, <Paul.Durrant@citrix.com> wrote:
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: 26 March 2018 09:43
> >>
> >> After having added I/O emulation state checks at the beginning of
> >> vmx_vmexit_handler() as well as very early and very late in
> >> vmx_vmenter_helper(), it was the one early in
> >> vmx_vmenter_helper() which triggered (still seeing the VGA port
> >> access in STATE_IORESP_READY while vio->io_completion was
> >> HVMIO_no_completion).
> >>
> >
> > The same test is used (hvm_vcpu_io_need_completion()) in handle_pio()
> to set
> > the completion handler and in hvm_io_assist() to set the state to
> > IORESP_READY. The only place the internal state gets set to IORESP_READY
> is
> > in hvm_io_assist() so the fact that you see a disparity between the state
> and
> > the completion handler is very odd. Perhaps it might be worth adding an
> > ASSERT into hvm_io_assist() to ensure there really is a completion handler
> in
> > place before setting the internal state to IORESP_READY would be
> worthwhile.
> 
> Further extended logging appears to confirm there's no issue in that
> direction. While I haven't been able to draw useful conclusions from
> that further logging (towards a fix), the exact conditions when this
> triggers have become more clear: It's the last iteration of a REP OUTSW
> to either of the two VGA port ranges stdvga.c intercepts, and I've
> begun to think it might be connected to the way the insn emulator
> deals with such single-iteration operations (breaking them up into a
> memory read and an I/O write in the case here).
> 

It looks to me like (unless there's a page boundary issue) the rep outsw is probably only being broken up because of the stdvga caching (which will return 'unhandleable' in the middle of the intercept loop and thus force a truncation). If you disable caching and let the full rep ioreq make it out to QEMU, does the issue go away?

  Paul

> I've simulated this by way of an XTF test, though, and all is fine
> there. Together with this not being reliable to reproduce (guest
> crashes in one of 5-10 attempts) there clearly must be some other
> factor here.
> 
> One thing I started to wonder about is why we run these insns
> through the full emulator in the first place. But perhaps that's just
> because we hope this code won't be used much, and hence the
> simplest possible solution code-wise ought to do.
> 
> Jan



* Re: possible I/O emulation state machine issue
  2018-03-28 14:20                 ` Paul Durrant
@ 2018-03-28 15:32                   ` Jan Beulich
  0 siblings, 0 replies; 21+ messages in thread
From: Jan Beulich @ 2018-03-28 15:32 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, xen-devel

>>> On 28.03.18 at 16:20, <Paul.Durrant@citrix.com> wrote:
>>  -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: 28 March 2018 15:08
>> To: Paul Durrant <Paul.Durrant@citrix.com>
>> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; xen-devel <xen-
>> devel@lists.xenproject.org>
>> Subject: RE: possible I/O emulation state machine issue
>> 
>> >>> On 28.03.18 at 15:48, <Paul.Durrant@citrix.com> wrote:
>> >> From: Jan Beulich [mailto:JBeulich@suse.com]
>> >> Sent: 26 March 2018 09:43
>> >>
>> >> After having added I/O emulation state checks at the beginning of
>> >> vmx_vmexit_handler() as well as very early and very late in
>> >> vmx_vmenter_helper(), it was the one early in
>> >> vmx_vmenter_helper() which triggered (still seeing the VGA port
>> >> access in STATE_IORESP_READY while vio->io_completion was
>> >> HVMIO_no_completion).
>> >>
>> >
>> > The same test is used (hvm_vcpu_io_need_completion()) in handle_pio()
>> to set
>> > the completion handler and in hvm_io_assist() to set the state to
>> > IORESP_READY. The only place the internal state gets set to IORESP_READY
>> is
>> > in hvm_io_assist() so the fact that you see a disparity between the state
>> and
>> > the completion handler is very odd. Perhaps it might be worth adding an
>> > ASSERT into hvm_io_assist() to ensure there really is a completion handler
>> in
>> > place before setting the internal state to IORESP_READY would be
>> worthwhile.
>> 
>> Further extended logging appears to confirm there's no issue in that
>> direction. While I haven't been able to draw useful conclusions from
>> that further logging (towards a fix), the exact conditions when this
>> triggers have become more clear: It's the last iteration of a REP OUTSW
>> to either of the two VGA port ranges stdvga.c intercepts, and I've
>> begun to think it might be connected to the way the insn emulator
>> deals with such single-iteration operations (breaking them up into a
>> memory read and an I/O write in the case here).
>> 
> 
> It looks to me like (unless there's a page boundary issue) the rep outsw is 
> probably only being broken up because of the stdvga caching (which will 
> return 'unhandleable' in the middle of the intercept loop and thus force a 
> truncation). If you disable caching and let the full rep ioreq make it out to 
> QEMU, does the issue go away?

I've sent him a patch simply suppressing the registration of the PIO
intercept function, but my XTF code doesn't behave any different
with that. I should say though that (without knowing yet whether
that's also the case on that Windows version) my code does the
REP OUTSW from video memory, which causes the string operation
to be split independently of what stdvga.c does (see the bottom of
hvmemul_rep_outs()). Without doing that, I hadn't been able to
observe anything unusual at all, i.e. none of the dozen or so
printk()s I had added ever triggered.

Jan



* Re: possible I/O emulation state machine issue
  2018-03-28 13:48             ` Paul Durrant
  2018-03-28 14:08               ` Jan Beulich
@ 2018-03-28 15:59               ` Jan Beulich
  2018-03-28 16:22                 ` Paul Durrant
  1 sibling, 1 reply; 21+ messages in thread
From: Jan Beulich @ 2018-03-28 15:59 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, xen-devel

>>> On 28.03.18 at 15:48, <Paul.Durrant@citrix.com> wrote:
>>  -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: 26 March 2018 09:43
>> To: Paul Durrant <Paul.Durrant@citrix.com>
>> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; xen-devel <xen-
>> devel@lists.xenproject.org>
>> Subject: Re: possible I/O emulation state machine issue
>> 
>> >>> On 23.03.18 at 14:41, <Paul.Durrant@citrix.com> wrote:
>> > So somehow it appears the vcpu got back into guest and executed the next
>> > instruction whilst there was pending I/O.
>> 
>> Two new pieces of information, in case either rings a bell:
>> 
> 
> Alas neither rings a bell.
> 
>> The issue appears to never occur in hap=0 mode.
>> 
> 
> That's quite an odd correlation.

Simply timing, perhaps. In any event, newest logs suggest we have
an issue with Windows paging out the page the data for the
REP OUTSW is coming from while the port I/O part of the operation
is pending qemu's completion. Upon retry the linear->physical
translation fails, and we leave incorrect state in place.

I thought we cache the translation result, thus avoiding the need
for a translation during the retry cycle, so either I'm misremembering
or this doesn't work as intended. And in fact doing the translation a
second time (with the potential of it failing) is wrong here - when the
port access has occurred, we must not fail the emulation anymore
(repeating the port write would probably be fine for the VGA, but
would hardly be fine for e.g. an IDE interface).
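A sketch of the direction this points at, under the assumption that reusing the first successful walk is acceptable — translate_cached(), the toy page-table walk, and the demo are all hypothetical, not Xen code:

```c
#include <stdbool.h>
#include <stdint.h>

static bool page_present = true;   /* flipped to simulate the page-out */

/* Toy page-table walk: fails once the guest has paged the source out. */
static bool walk_page_tables(uint64_t linear, uint64_t *phys)
{
    if (!page_present)
        return false;
    *phys = linear ^ 0x1000;       /* arbitrary toy mapping */
    return true;
}

struct xlat_cache {
    bool valid;
    uint64_t linear, phys;
};

/* Translate, reusing a result cached on the first pass so that the
 * retry after the external I/O completes cannot fail. */
static bool translate_cached(struct xlat_cache *c, uint64_t linear,
                             uint64_t *phys)
{
    if (c->valid && c->linear == linear) {
        *phys = c->phys;           /* retry path: no second walk */
        return true;
    }
    if (!walk_page_tables(linear, phys))
        return false;
    c->valid = true;
    c->linear = linear;
    c->phys = *phys;
    return true;
}

/* First access walks and caches; the guest then pages the source out;
 * the retry must still succeed, with the same result. */
static bool demo_retry_survives_pageout(void)
{
    struct xlat_cache c = { false, 0, 0 };
    uint64_t p1 = 0, p2 = 0;
    if (!translate_cached(&c, 0x7000, &p1))
        return false;
    page_present = false;
    if (!translate_cached(&c, 0x7000, &p2))
        return false;
    return p1 == p2;
}
```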

Jan



* Re: possible I/O emulation state machine issue
  2018-03-28 15:59               ` Jan Beulich
@ 2018-03-28 16:22                 ` Paul Durrant
  2018-03-28 16:35                   ` Andrew Cooper
  2018-03-29  6:27                   ` Jan Beulich
  0 siblings, 2 replies; 21+ messages in thread
From: Paul Durrant @ 2018-03-28 16:22 UTC (permalink / raw)
  To: 'Jan Beulich'; +Cc: Andrew Cooper, xen-devel

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 28 March 2018 16:59
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; xen-devel <xen-
> devel@lists.xenproject.org>
> Subject: RE: possible I/O emulation state machine issue
> 
> >>> On 28.03.18 at 15:48, <Paul.Durrant@citrix.com> wrote:
> >>  -----Original Message-----
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: 26 March 2018 09:43
> >> To: Paul Durrant <Paul.Durrant@citrix.com>
> >> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; xen-devel <xen-
> >> devel@lists.xenproject.org>
> >> Subject: Re: possible I/O emulation state machine issue
> >>
> >> >>> On 23.03.18 at 14:41, <Paul.Durrant@citrix.com> wrote:
> >> > So somehow it appears the vcpu got back into guest and executed the
> next
> >> > instruction whilst there was pending I/O.
> >>
> >> Two new pieces of information, in case either rings a bell:
> >>
> >
> > Alas neither rings a bell.
> >
> >> The issue appears to never occur in hap=0 mode.
> >>
> >
> > That's quite an odd correlation.
> 
> Simply timing, perhaps. In any event, newest logs suggest we have
> an issue with Windows paging out the page the data for the
> REP OUTSW is coming from while the port I/O part of the operation
> is pending qemu's completion. Upon retry the linear->physical
> translation fails, and we leave incorrect state in place.
> 
> I thought we cache the translation result, thus avoiding the need
> for a translation during the retry cycle, so either I'm misremembering
> or this doesn't work as intended. And in fact doing the translation a
> second time (with the potential of it failing) is wrong here - when the
> port access has occurred, we must not fail the emulation anymore
> (repeating the port write would probably be fine for the VGA, but
> would hardly be fine for e.g. an IDE interface).
> 

Yes, I thought we made sure all reps were completed using cached translations before returning to guest.

  Paul

> Jan



* Re: possible I/O emulation state machine issue
  2018-03-28 16:22                 ` Paul Durrant
@ 2018-03-28 16:35                   ` Andrew Cooper
  2018-03-29  6:31                     ` Jan Beulich
  2018-04-12 14:13                     ` Jan Beulich
  2018-03-29  6:27                   ` Jan Beulich
  1 sibling, 2 replies; 21+ messages in thread
From: Andrew Cooper @ 2018-03-28 16:35 UTC (permalink / raw)
  To: Paul Durrant, 'Jan Beulich'; +Cc: xen-devel

On 28/03/18 17:22, Paul Durrant wrote:
>> -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: 28 March 2018 16:59
>> To: Paul Durrant <Paul.Durrant@citrix.com>
>> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; xen-devel <xen-
>> devel@lists.xenproject.org>
>> Subject: RE: possible I/O emulation state machine issue
>>
>>>>> On 28.03.18 at 15:48, <Paul.Durrant@citrix.com> wrote:
>>>>  -----Original Message-----
>>>> From: Jan Beulich [mailto:JBeulich@suse.com]
>>>> Sent: 26 March 2018 09:43
>>>> To: Paul Durrant <Paul.Durrant@citrix.com>
>>>> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; xen-devel <xen-
>>>> devel@lists.xenproject.org>
>>>> Subject: Re: possible I/O emulation state machine issue
>>>>
>>>>>>> On 23.03.18 at 14:41, <Paul.Durrant@citrix.com> wrote:
>>>>> So somehow it appears the vcpu got back into guest and executed the
>> next
>>>>> instruction whilst there was pending I/O.
>>>> Two new pieces of information, in case either rings a bell:
>>>>
>>> Alas neither rings a bell.
>>>
>>>> The issue appears to never occur in hap=0 mode.
>>>>
>>> That's quite an odd correlation.
>> Simply timing, perhaps. In any event, newest logs suggest we have
>> an issue with Windows paging out the page the data for the
>> REP OUTSW is coming from while the port I/O part of the operation
>> is pending qemu's completion. Upon retry the linear->physical
>> translation fails, and we leave incorrect state in place.
>>
>> I thought we cache the translation result, thus avoiding the need
>> for a translation during the retry cycle, so either I'm misremembering
>> or this doesn't work as intended. And in fact doing the translation a
>> second time (with the potential of it failing) is wrong here - when the
>> port access has occurred, we must not fail the emulation anymore
>> (repeating the port write would probably be fine for the VGA, but
>> would hardly be fine for e.g. an IDE interface).
>>
> Yes, I thought we made sure all reps were completed using cached translations before returning to guest.

It's one of the many items on the TODO list, along with maintaining a
proper virtual TLB to avoid rewalks during a single emulation.

~Andrew


* Re: possible I/O emulation state machine issue
  2018-03-28 16:22                 ` Paul Durrant
  2018-03-28 16:35                   ` Andrew Cooper
@ 2018-03-29  6:27                   ` Jan Beulich
  2018-03-29  8:42                     ` Paul Durrant
  1 sibling, 1 reply; 21+ messages in thread
From: Jan Beulich @ 2018-03-29  6:27 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, xen-devel

>>> On 28.03.18 at 18:22, <Paul.Durrant@citrix.com> wrote:
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: 28 March 2018 16:59
>> 
>> Simply timing, perhaps. In any event, newest logs suggest we have
>> an issue with Windows paging out the page the data for the
>> REP OUTSW is coming from while the port I/O part of the operation
>> is pending qemu's completion. Upon retry the linear->physical
>> translation fails, and we leave incorrect state in place.
>> 
>> I thought we cache the translation result, thus avoiding the need
>> for a translation during the retry cycle, so either I'm misremembering
>> or this doesn't work as intended. And in fact doing the translation a
>> second time (with the potential of it failing) is wrong here - when the
>> port access has occurred, we must not fail the emulation anymore
>> (repeating the port write would probably be fine for the VGA, but
>> would hardly be fine for e.g. an IDE interface).
> 
> Yes, I thought we made sure all reps were completed using cached 
> translations before returning to guest.

We do this only for actual MMIO accesses, not for RAM ones,
afaics.

I think I see a way to deal with the specific case here, but we'll
certainly need to make things work properly in the general case.
That's not something reasonable to be done for 4.11 though.

Suppressing the stdvga port intercepts has, btw, not helped the
situation.

Jan



* Re: possible I/O emulation state machine issue
  2018-03-28 16:35                   ` Andrew Cooper
@ 2018-03-29  6:31                     ` Jan Beulich
  2018-04-12 14:13                     ` Jan Beulich
  1 sibling, 0 replies; 21+ messages in thread
From: Jan Beulich @ 2018-03-29  6:31 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: xen-devel, Paul Durrant

>>> On 28.03.18 at 18:35, <andrew.cooper3@citrix.com> wrote:
> On 28/03/18 17:22, Paul Durrant wrote:
>>> From: Jan Beulich [mailto:JBeulich@suse.com]
>>> Sent: 28 March 2018 16:59
>>>
>>> I thought we cache the translation result, thus avoiding the need
>>> for a translation during the retry cycle, so either I'm misremembering
>>> or this doesn't work as intended. And in fact doing the translation a
>>> second time (with the potential of it failing) is wrong here - when the
>>> port access has occurred, we must not fail the emulation anymore
>>> (repeating the port write would probably be fine for the VGA, but
>>> would hardly be fine for e.g. an IDE interface).
>>>
>> Yes, I thought we made sure all reps were completed using cached 
> translations before returning to guest.
> 
> Its one of the many items on the TODO list, along with maintaining a
> proper virtual TLB to avoid rewalks during a single emulation.

I don't think a virtual TLB will be the right answer. We need to
record the results of the "uops" we break the request up into,
and simply return previously recorded values for replayed ones.
I.e. just like we don't (anymore) re-fetch the insn during replay.
That's closer to how I assume hardware behaves - in particular
I don't think it would repeat TLB walks for a single uop.
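A minimal sketch of that record-and-replay idea — the replay_log structure, step() helper, and the fake walk are invented for illustration, with MAX_STEPS assumed sufficient for one instruction:

```c
#include <stdint.h>

#define MAX_STEPS 8

struct replay_log {
    unsigned int recorded;           /* completed steps so far */
    unsigned int cursor;             /* position in the current pass */
    uint64_t result[MAX_STEPS];
};

/* Perform one micro-step: on the first pass, compute and record the
 * result; on a replayed pass, return the recording without re-doing
 * (and possibly re-failing) the work. */
static uint64_t step(struct replay_log *log, uint64_t (*compute)(void))
{
    if (log->cursor >= log->recorded)
        log->result[log->recorded++] = compute();
    return log->result[log->cursor++];
}

static unsigned int walks;           /* counts actual "TLB walks" */
static uint64_t fake_walk(void)
{
    walks++;
    return 0x6000;
}
```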

Jan



* Re: possible I/O emulation state machine issue
  2018-03-29  6:27                   ` Jan Beulich
@ 2018-03-29  8:42                     ` Paul Durrant
  2018-03-29  8:51                       ` Jan Beulich
  0 siblings, 1 reply; 21+ messages in thread
From: Paul Durrant @ 2018-03-29  8:42 UTC (permalink / raw)
  To: 'Jan Beulich'; +Cc: Andrew Cooper, xen-devel

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 29 March 2018 07:27
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; xen-devel <xen-
> devel@lists.xenproject.org>
> Subject: RE: possible I/O emulation state machine issue
> 
> >>> On 28.03.18 at 18:22, <Paul.Durrant@citrix.com> wrote:
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: 28 March 2018 16:59
> >>
> >> Simply timing, perhaps. In any event, newest logs suggest we have
> >> an issue with Windows paging out the page the data for the
> >> REP OUTSW is coming from while the port I/O part of the operation
> >> is pending qemu's completion. Upon retry the linear->physical
> >> translation fails, and we leave incorrect state in place.
> >>
> >> I thought we cache the translation result, thus avoiding the need
> >> for a translation during the retry cycle, so either I'm misremembering
> >> or this doesn't work as intended. And in fact doing the translation a
> >> second time (with the potential of it failing) is wrong here - when the
> >> port access has occurred, we must not fail the emulation anymore
> >> (repeating the port write would probably be fine for the VGA, but
> >> would hardly be fine for e.g. an IDE interface).
> >
> > Yes, I thought we made sure all reps were completed using cached
> > translations before returning to guest.
> 
> We do this only for actual MMIO accesses, not for RAM ones,
> afaics.
> 
> I think I see a way to deal with the specific case here, but we'll
> certainly need to make things work properly in the general case.
> That's not something reasonable to be done for 4.11 though.
> 

Page table modification racing with an emulation sounds pretty bad. I guess that if the damage is limited to the guest, though, it's not something that requires an immediate fix.

> Suppressing the stdvga port intercepts has, btw, not helped the
> situation.
> 

That surprises me. The whole string emulation should go out to QEMU without being broken up in that case, and since it's an outsw I don't see why there would be any retry of the linear->physical translation during completion.

  Paul

> Jan



* Re: possible I/O emulation state machine issue
  2018-03-29  8:42                     ` Paul Durrant
@ 2018-03-29  8:51                       ` Jan Beulich
  0 siblings, 0 replies; 21+ messages in thread
From: Jan Beulich @ 2018-03-29  8:51 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, xen-devel

>>> On 29.03.18 at 10:42, <Paul.Durrant@citrix.com> wrote:
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: 29 March 2018 07:27
>> 
>> Suppressing the stdvga port intercepts has, btw, not helped the
>> situation.
>> 
> 
> That surprises me. The whole string emulation should go out to QEMU without 
> being broken up in that case, and since it's an outsw I don't see why there 
> would be any retry of the linear->physical translation during completion.

See the patch sent earlier: HVMIO_mmio_completion means a full
second (or further) run through the emulator (which that patch
now avoids). Same would occur for an insn reading and writing
multiple memory locations, if at least the second one is in MMIO.
In that case we can't avoid the completion though, as the access
may additionally have been split (and we still need to execute
its later part(s)). To fully address this, I don't see a way around
recording completed steps (which is going to be a pretty intrusive
change as it looks).

Jan



* Re: possible I/O emulation state machine issue
  2018-03-28 16:35                   ` Andrew Cooper
  2018-03-29  6:31                     ` Jan Beulich
@ 2018-04-12 14:13                     ` Jan Beulich
  1 sibling, 0 replies; 21+ messages in thread
From: Jan Beulich @ 2018-04-12 14:13 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: xen-devel, Paul Durrant

>>> On 28.03.18 at 18:35, <andrew.cooper3@citrix.com> wrote:
> Its one of the many items on the TODO list, along with maintaining a
> proper virtual TLB to avoid rewalks during a single emulation.

Having thought about this some more I agree that for correctness
a virtual TLB would be sufficient. Also caching values read might
help performance a little, but at the expense of quite a bit more
logic/space to maintain that extra information. Hence I guess the
TLB-only solution is going to be preferable.

Jan




Thread overview: 21+ messages
2018-03-22 15:12 possible I/O emulation state machine issue Jan Beulich
2018-03-22 15:29 ` Andrew Cooper
2018-03-23  7:30   ` Jan Beulich
2018-03-23 10:43     ` Paul Durrant
2018-03-23 11:19       ` Jan Beulich
2018-03-23 11:35       ` Jan Beulich
2018-03-23 13:41         ` Paul Durrant
2018-03-23 15:09           ` Jan Beulich
2018-03-26  8:42           ` Jan Beulich
2018-03-28 13:48             ` Paul Durrant
2018-03-28 14:08               ` Jan Beulich
2018-03-28 14:20                 ` Paul Durrant
2018-03-28 15:32                   ` Jan Beulich
2018-03-28 15:59               ` Jan Beulich
2018-03-28 16:22                 ` Paul Durrant
2018-03-28 16:35                   ` Andrew Cooper
2018-03-29  6:31                     ` Jan Beulich
2018-04-12 14:13                     ` Jan Beulich
2018-03-29  6:27                   ` Jan Beulich
2018-03-29  8:42                     ` Paul Durrant
2018-03-29  8:51                       ` Jan Beulich
