xen-devel.lists.xenproject.org archive mirror
* Re: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
       [not found] <161405394665.5977.17427402181939884734@c667a6b167f6>
@ 2021-02-23 20:29 ` Stefano Stabellini
  2021-02-24  0:19   ` Volodymyr Babchuk
  0 siblings, 1 reply; 15+ messages in thread
From: Stefano Stabellini @ 2021-02-23 20:29 UTC (permalink / raw)
  To: xen-devel, Volodymyr_Babchuk
  Cc: famzheng, sstabellini, cardoe, wl, Bertrand.Marquis, julien,
	andrew.cooper3

Hi Volodymyr,

This looks like a genuine failure:

https://gitlab.com/xen-project/patchew/xen/-/jobs/1048475444


(XEN) Data Abort Trap. Syndrome=0x1930046
(XEN) Walking Hypervisor VA 0xf0008 on CPU0 via TTBR 0x0000000040545000
(XEN) 0TH[0x0] = 0x0000000040544f7f
(XEN) 1ST[0x0] = 0x0000000040541f7f
(XEN) 2ND[0x0] = 0x0000000000000000
(XEN) CPU0: Unexpected Trap: Data Abort
(XEN) ----[ Xen-4.15-unstable  arm64  debug=y  Tainted: U     ]----
(XEN) CPU:    0
(XEN) PC:     00000000002273b8 timer.c#remove_from_heap+0x2c/0x114
(XEN) LR:     0000000000227530
(XEN) SP:     000080003ff7f9a0
(XEN) CPSR:   800002c9 MODE:64-bit EL2h (Hypervisor, handler)
(XEN)      X0: 000080000234e6a0  X1: 0000000000000001  X2: 0000000000000000
(XEN)      X3: 00000000000f0000  X4: 0000000000000000  X5: 00000000014d014d
(XEN)      X6: 0000000000000080  X7: fefefefefefeff09  X8: 7f7f7f7f7f7f7f7f
(XEN)      X9: 717164616f726051 X10: 7f7f7f7f7f7f7f7f X11: 0101010101010101
(XEN)     X12: 0000000000000008 X13: 0000000000000001 X14: 000080003ff7fa78
(XEN)     X15: 0000000000000020 X16: 000000000028e558 X17: 0000000000000000
(XEN)     X18: 00000000fffffffe X19: 0000000000000001 X20: 0000000000310180
(XEN)     X21: 00000000000002c0 X22: 0000000000000000 X23: 0000000000346008
(XEN)     X24: 0000000000310180 X25: 0000000000000000 X26: 00008000044e91b8
(XEN)     X27: 000000000000ffff X28: 0000000041570018  FP: 000080003ff7f9a0
(XEN) 
(XEN)   VTCR_EL2: 80043594
(XEN)  VTTBR_EL2: 000200007ffe3000
(XEN) 
(XEN)  SCTLR_EL2: 30cd183d
(XEN)    HCR_EL2: 00000000807c663f
(XEN)  TTBR0_EL2: 0000000040545000
(XEN) 
(XEN)    ESR_EL2: 97930046
(XEN)  HPFAR_EL2: 0000000000030010
(XEN)    FAR_EL2: 00000000000f0008
(XEN) 
(XEN) Xen stack trace from sp=000080003ff7f9a0:
(XEN)    000080003ff7f9c0 0000000000227530 00008000044e9190 00000000002280dc
(XEN)    000080003ff7f9e0 0000000000228234 00008000044e9190 000000000024dd04
(XEN)    000080003ff7fa40 000000000024a414 0000000000311390 000080000234e430
(XEN)    0000800002345000 0000000000000000 0000000000346008 00008000044e9150
(XEN)    0000000000000001 0000000000000000 0000000000000240 0000000000270474
(XEN)    000080003ff7faa0 000000000024b91c 0000000000000001 0000000000310238
(XEN)    000080003ff7fbf8 0000000080000249 0000000093860047 00000000002a1de0
(XEN)    000080003ff7fc88 00000000002a1de0 00000000000002c0 00008000044e9470
(XEN)    000080003ff7fab0 00000000002217b4 000080003ff7fad0 000000000027a8c0
(XEN)    0000000000311324 00000000002a1de0 000080003ff7fc00 0000000000265310
(XEN)    0000000000000000 00000000002263d8 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000020
(XEN)    0000000000000080 fefefefefefeff09 7f7f7f7f7f7f7f7f 717164616f726051
(XEN)    7f7f7f7f7f7f7f7f 0101010101010101 0000000000000008 0000000000000001
(XEN)    000080003ff7fa78 0000000000000020 000000000028e558 0000000000000000
(XEN)    00000000fffffffe 0000000000000000 0000000000310238 000000000000000a
(XEN)    0000000000310238 00000000002a64b0 00000000002a1de0 000080003ff7fc88
(XEN)    0000000000000000 0000000000000240 0000000041570018 000080003ff7fc00
(XEN)    000000000024c8c0 000080003ff7fc00 000000000024c8c4 9386004780000249
(XEN)    000080003ff7fc90 000000000024c974 0000000000000384 0000000000000002
(XEN)    0000800002345000 00000000ffffffff 0000000000000006 000080003ff7fe20
(XEN)    0000000000000001 000080003ff7fe00 000080003ffe4a60 000080000234e430
(XEN)    000080003ff7fd20 000080003ff7fd20 000080003ff7fce0 00000000ffffffc8
(XEN)    000080003ff7fce0 000000000031a147 000080003ff7fd20 000000000027f7b8
(XEN)    000080003ff7fd20 000080003ff7fd20 000080003ff7fce0 00000000ffffffc8
(XEN)    000080003ff7fd20 000080003ff7fd20 000080003ff7fce0 00000000ffffffc8
(XEN)    0000000000000240 0000800002345000 00000000ffffffff 0000000000000004
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000022
(XEN)    000080003ff7fda0 000000000026ff2c 000000000027f608 0000000000000000
(XEN)    0000000000000093 0000800002345000 0000000000000000 000080003ffe4a60
(XEN)    0000000000000001 000080003ff7fe00 000080003ffe4a60 0000000041570018
(XEN)    000080003ff7fda0 000000000026fee0 000080003ff7fda0 000000000026ff18
(XEN)    000080003ff7fe30 0000000000279b2c 0000000093860047 0000000000000090
(XEN)    0000000003001384 000080003ff7feb0 ffff800011dc1384 ffff8000104b06a0
(XEN)    ffff8000104b0240 ffff00000df806e8 0000000000000000 ffff800011b0ca88
(XEN)    0000000003001384 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000093860047 0000000003001384 000080003ff7fe70 000000000027a180
(XEN)    000080003ff7feb0 0000000093860047 0000000093860047 0000000060000085
(XEN)    0000000093860047 ffff800011b0ca88 ffff800011b03d90 0000000000265458
(XEN)    0000000000000000 ffff800011b0ca88 000080003ff7ffb8 000000000026545c
(XEN) Xen call trace:
(XEN)    [<00000000002273b8>] timer.c#remove_from_heap+0x2c/0x114 (PC)
(XEN)    [<0000000000227530>] timer.c#remove_entry+0x90/0xa0 (LR)
(XEN)    [<0000000000227530>] timer.c#remove_entry+0x90/0xa0
(XEN)    [<0000000000228234>] stop_timer+0x1fc/0x254
(XEN)    [<000000000024a414>] core.c#schedule+0xf4/0x380
(XEN)    [<000000000024b91c>] wait+0xc/0x14
(XEN)    [<00000000002217b4>] try_preempt+0x88/0xbc
(XEN)    [<000000000027a8c0>] do_trap_irq+0x5c/0x60
(XEN)    [<0000000000265310>] entry.o#hyp_irq+0x7c/0x80
(XEN)    [<000000000024c974>] printk+0x68/0x70
(XEN)    [<000000000027f7b8>] vgic-v2.c#vgic_v2_distr_mmio_write+0x1b0/0x7ac
(XEN)    [<000000000026ff2c>] try_handle_mmio+0x1ac/0x27c
(XEN)    [<0000000000279b2c>] traps.c#do_trap_stage2_abort_guest+0x18c/0x2d8
(XEN)    [<000000000027a180>] do_trap_guest_sync+0x10c/0x63c
(XEN)    [<0000000000265458>] entry.o#guest_sync_slowpath+0xa4/0xd4
(XEN) 
(XEN) 
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) CPU0: Unexpected Trap: Data Abort
(XEN) ****************************************


On Mon, 22 Feb 2021, no-reply@patchew.org wrote:
> Hi,
> 
> Patchew automatically ran gitlab-ci pipeline with this patch (series) applied, but the job failed. Maybe there's a bug in the patches?
> 
> You can find the link to the pipeline near the end of the report below:
> 
> Type: series
> Message-id: 20210223023428.757694-1-volodymyr_babchuk@epam.com
> Subject: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
> 
> === TEST SCRIPT BEGIN ===
> #!/bin/bash
> sleep 10
> patchew gitlab-pipeline-check -p xen-project/patchew/xen
> === TEST SCRIPT END ===
> 
> warning: redirecting to https://gitlab.com/xen-project/patchew/xen.git/
> warning: redirecting to https://gitlab.com/xen-project/patchew/xen.git/
> From https://gitlab.com/xen-project/patchew/xen
>  * [new tag]               patchew/20210223023428.757694-1-volodymyr_babchuk@epam.com -> patchew/20210223023428.757694-1-volodymyr_babchuk@epam.com
> Switched to a new branch 'test'
> a569959cc0 alloc pages: enable preemption early
> c943c35519 arm: traps: try to preempt before leaving IRQ handler
> 4b634d1924 arm: context_switch: allow to run with IRQs already disabled
> 7d78d6e861 sched: core: remove ASSERT_NOT_IN_ATOMIC and disable preemption[!]
> d56302eb03 arm: setup: disable preemption during startup
> 18a52ab80a preempt: add try_preempt() function
> 9c4a07d0fa preempt: use atomic_t to for preempt_count
> 904e59f28e sched: credit2: save IRQ state during locking
> 3e3726692c sched: rt: save IRQ state during locking
> c552842efc sched: core: save IRQ state during locking
> 
> === OUTPUT BEGIN ===
> [2021-02-23 02:38:00] Looking up pipeline...
> [2021-02-23 02:38:01] Found pipeline 260183774:
> 
> https://gitlab.com/xen-project/patchew/xen/-/pipelines/260183774
> 
> [2021-02-23 02:38:01] Waiting for pipeline to finish...
> [2021-02-23 02:53:10] Still waiting...
> [2021-02-23 03:08:19] Still waiting...
> [2021-02-23 03:23:29] Still waiting...
> [2021-02-23 03:38:38] Still waiting...
> [2021-02-23 03:53:48] Still waiting...
> [2021-02-23 04:08:57] Still waiting...
> [2021-02-23 04:19:05] Pipeline failed
> [2021-02-23 04:19:06] Job 'qemu-smoke-x86-64-clang-pvh' in stage 'test' is failed
> [2021-02-23 04:19:06] Job 'qemu-smoke-x86-64-gcc-pvh' in stage 'test' is failed
> [2021-02-23 04:19:06] Job 'qemu-smoke-x86-64-clang' in stage 'test' is failed
> [2021-02-23 04:19:06] Job 'qemu-smoke-x86-64-gcc' in stage 'test' is failed
> [2021-02-23 04:19:06] Job 'qemu-smoke-arm64-gcc' in stage 'test' is failed
> [2021-02-23 04:19:06] Job 'qemu-alpine-arm64-gcc' in stage 'test' is failed
> [2021-02-23 04:19:06] Job 'alpine-3.12-clang-debug' in stage 'build' is failed
> [2021-02-23 04:19:06] Job 'alpine-3.12-clang' in stage 'build' is failed
> [2021-02-23 04:19:06] Job 'alpine-3.12-gcc-debug' in stage 'build' is failed
> [2021-02-23 04:19:06] Job 'alpine-3.12-gcc' in stage 'build' is failed
> === OUTPUT END ===
> 
> Test command exited with code: 1



* Re: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
  2021-02-23 20:29 ` [RFC PATCH 00/10] Preemption in hypervisor (ARM only) Stefano Stabellini
@ 2021-02-24  0:19   ` Volodymyr Babchuk
  0 siblings, 0 replies; 15+ messages in thread
From: Volodymyr Babchuk @ 2021-02-24  0:19 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: xen-devel, famzheng, cardoe, wl, Bertrand.Marquis, julien,
	andrew.cooper3


Hi Stefano,

Stefano Stabellini writes:

> Hi Volodymyr,
>
> This looks like a genuine failure:

Thank you for the report. I have just been debugging similar issues, which
seem to happen randomly, and I found a flaw in the this_cpu()
implementation: it is currently not compatible with preemption in
hypervisor mode.

It can happen that the CPU id is read while running on one pCPU, but then
the code gets preempted and continues to run on another pCPU, while still
accessing the per-CPU data of the previous pCPU.

In my case this mostly happens with the __preempt_count variable, but other
per_cpu variables are affected too. Linux uses the
get_cpu_var()/put_cpu_var() pair of functions, which temporarily
disable/enable preemption around the access. Something like that should be
implemented in my patches as well. But for __preempt_count I need a
completely different approach, of course. I'm looking for a solution right
now.
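
Roughly the helpers I have in mind look like this. This is a minimal,
untested sketch: get_cpu_var()/put_cpu_var() do not exist in Xen today and
are modeled on their Linux namesakes, and demo_counter/bump_demo_counter()
are made up for illustration.

#include <xen/percpu.h>
#include <xen/preempt.h>

/*
 * Sketch: keep preemption disabled for as long as the per-CPU variable is
 * in use, so the code cannot be migrated to another pCPU in the middle of
 * the access.
 */
#define get_cpu_var(var) ({                 \
    preempt_disable();                      \
    &this_cpu(var);                         \
})

#define put_cpu_var(var) preempt_enable()

static DEFINE_PER_CPU(unsigned int, demo_counter);

static void bump_demo_counter(void)
{
    unsigned int *cnt = get_cpu_var(demo_counter);

    (*cnt)++;                   /* safe: we cannot be migrated here */
    put_cpu_var(demo_counter);
}

For __preempt_count itself this pattern obviously cannot work as-is, since
preempt_disable() has to touch that very variable.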

> https://gitlab.com/xen-project/patchew/xen/-/jobs/1048475444
>
>
> (XEN) Data Abort Trap. Syndrome=0x1930046
> (XEN) Walking Hypervisor VA 0xf0008 on CPU0 via TTBR 0x0000000040545000
> (XEN) 0TH[0x0] = 0x0000000040544f7f
> (XEN) 1ST[0x0] = 0x0000000040541f7f
> (XEN) 2ND[0x0] = 0x0000000000000000
> (XEN) CPU0: Unexpected Trap: Data Abort
> (XEN) ----[ Xen-4.15-unstable  arm64  debug=y  Tainted: U     ]----
> (XEN) CPU:    0
> (XEN) PC:     00000000002273b8 timer.c#remove_from_heap+0x2c/0x114
> (XEN) LR:     0000000000227530
> (XEN) SP:     000080003ff7f9a0
> (XEN) CPSR:   800002c9 MODE:64-bit EL2h (Hypervisor, handler)
> (XEN)      X0: 000080000234e6a0  X1: 0000000000000001  X2: 0000000000000000
> (XEN)      X3: 00000000000f0000  X4: 0000000000000000  X5: 00000000014d014d
> (XEN)      X6: 0000000000000080  X7: fefefefefefeff09  X8: 7f7f7f7f7f7f7f7f
> (XEN)      X9: 717164616f726051 X10: 7f7f7f7f7f7f7f7f X11: 0101010101010101
> (XEN)     X12: 0000000000000008 X13: 0000000000000001 X14: 000080003ff7fa78
> (XEN)     X15: 0000000000000020 X16: 000000000028e558 X17: 0000000000000000
> (XEN)     X18: 00000000fffffffe X19: 0000000000000001 X20: 0000000000310180
> (XEN)     X21: 00000000000002c0 X22: 0000000000000000 X23: 0000000000346008
> (XEN)     X24: 0000000000310180 X25: 0000000000000000 X26: 00008000044e91b8
> (XEN)     X27: 000000000000ffff X28: 0000000041570018  FP: 000080003ff7f9a0
> (XEN) 
> (XEN)   VTCR_EL2: 80043594
> (XEN)  VTTBR_EL2: 000200007ffe3000
> (XEN) 
> (XEN)  SCTLR_EL2: 30cd183d
> (XEN)    HCR_EL2: 00000000807c663f
> (XEN)  TTBR0_EL2: 0000000040545000
> (XEN) 
> (XEN)    ESR_EL2: 97930046
> (XEN)  HPFAR_EL2: 0000000000030010
> (XEN)    FAR_EL2: 00000000000f0008
> (XEN) 
> (XEN) Xen stack trace from sp=000080003ff7f9a0:
> (XEN)    000080003ff7f9c0 0000000000227530 00008000044e9190 00000000002280dc
> (XEN)    000080003ff7f9e0 0000000000228234 00008000044e9190 000000000024dd04
> (XEN)    000080003ff7fa40 000000000024a414 0000000000311390 000080000234e430
> (XEN)    0000800002345000 0000000000000000 0000000000346008 00008000044e9150
> (XEN)    0000000000000001 0000000000000000 0000000000000240 0000000000270474
> (XEN)    000080003ff7faa0 000000000024b91c 0000000000000001 0000000000310238
> (XEN)    000080003ff7fbf8 0000000080000249 0000000093860047 00000000002a1de0
> (XEN)    000080003ff7fc88 00000000002a1de0 00000000000002c0 00008000044e9470
> (XEN)    000080003ff7fab0 00000000002217b4 000080003ff7fad0 000000000027a8c0
> (XEN)    0000000000311324 00000000002a1de0 000080003ff7fc00 0000000000265310
> (XEN)    0000000000000000 00000000002263d8 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000020
> (XEN)    0000000000000080 fefefefefefeff09 7f7f7f7f7f7f7f7f 717164616f726051
> (XEN)    7f7f7f7f7f7f7f7f 0101010101010101 0000000000000008 0000000000000001
> (XEN)    000080003ff7fa78 0000000000000020 000000000028e558 0000000000000000
> (XEN)    00000000fffffffe 0000000000000000 0000000000310238 000000000000000a
> (XEN)    0000000000310238 00000000002a64b0 00000000002a1de0 000080003ff7fc88
> (XEN)    0000000000000000 0000000000000240 0000000041570018 000080003ff7fc00
> (XEN)    000000000024c8c0 000080003ff7fc00 000000000024c8c4 9386004780000249
> (XEN)    000080003ff7fc90 000000000024c974 0000000000000384 0000000000000002
> (XEN)    0000800002345000 00000000ffffffff 0000000000000006 000080003ff7fe20
> (XEN)    0000000000000001 000080003ff7fe00 000080003ffe4a60 000080000234e430
> (XEN)    000080003ff7fd20 000080003ff7fd20 000080003ff7fce0 00000000ffffffc8
> (XEN)    000080003ff7fce0 000000000031a147 000080003ff7fd20 000000000027f7b8
> (XEN)    000080003ff7fd20 000080003ff7fd20 000080003ff7fce0 00000000ffffffc8
> (XEN)    000080003ff7fd20 000080003ff7fd20 000080003ff7fce0 00000000ffffffc8
> (XEN)    0000000000000240 0000800002345000 00000000ffffffff 0000000000000004
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000022
> (XEN)    000080003ff7fda0 000000000026ff2c 000000000027f608 0000000000000000
> (XEN)    0000000000000093 0000800002345000 0000000000000000 000080003ffe4a60
> (XEN)    0000000000000001 000080003ff7fe00 000080003ffe4a60 0000000041570018
> (XEN)    000080003ff7fda0 000000000026fee0 000080003ff7fda0 000000000026ff18
> (XEN)    000080003ff7fe30 0000000000279b2c 0000000093860047 0000000000000090
> (XEN)    0000000003001384 000080003ff7feb0 ffff800011dc1384 ffff8000104b06a0
> (XEN)    ffff8000104b0240 ffff00000df806e8 0000000000000000 ffff800011b0ca88
> (XEN)    0000000003001384 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000093860047 0000000003001384 000080003ff7fe70 000000000027a180
> (XEN)    000080003ff7feb0 0000000093860047 0000000093860047 0000000060000085
> (XEN)    0000000093860047 ffff800011b0ca88 ffff800011b03d90 0000000000265458
> (XEN)    0000000000000000 ffff800011b0ca88 000080003ff7ffb8 000000000026545c
> (XEN) Xen call trace:
> (XEN)    [<00000000002273b8>] timer.c#remove_from_heap+0x2c/0x114 (PC)
> (XEN)    [<0000000000227530>] timer.c#remove_entry+0x90/0xa0 (LR)
> (XEN)    [<0000000000227530>] timer.c#remove_entry+0x90/0xa0
> (XEN)    [<0000000000228234>] stop_timer+0x1fc/0x254
> (XEN)    [<000000000024a414>] core.c#schedule+0xf4/0x380
> (XEN)    [<000000000024b91c>] wait+0xc/0x14
> (XEN)    [<00000000002217b4>] try_preempt+0x88/0xbc
> (XEN)    [<000000000027a8c0>] do_trap_irq+0x5c/0x60
> (XEN)    [<0000000000265310>] entry.o#hyp_irq+0x7c/0x80
> (XEN)    [<000000000024c974>] printk+0x68/0x70
> (XEN)    [<000000000027f7b8>] vgic-v2.c#vgic_v2_distr_mmio_write+0x1b0/0x7ac
> (XEN)    [<000000000026ff2c>] try_handle_mmio+0x1ac/0x27c
> (XEN)    [<0000000000279b2c>] traps.c#do_trap_stage2_abort_guest+0x18c/0x2d8
> (XEN)    [<000000000027a180>] do_trap_guest_sync+0x10c/0x63c
> (XEN)    [<0000000000265458>] entry.o#guest_sync_slowpath+0xa4/0xd4
> (XEN) 
> (XEN) 
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) CPU0: Unexpected Trap: Data Abort
> (XEN) ****************************************
>
>
> On Mon, 22 Feb 2021, no-reply@patchew.org wrote:
>> Hi,
>> 
>> Patchew automatically ran gitlab-ci pipeline with this patch (series) applied, but the job failed. Maybe there's a bug in the patches?
>> 
>> You can find the link to the pipeline near the end of the report below:
>> 
>> Type: series
>> Message-id: 20210223023428.757694-1-volodymyr_babchuk@epam.com
>> Subject: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
>> 
>> === TEST SCRIPT BEGIN ===
>> #!/bin/bash
>> sleep 10
>> patchew gitlab-pipeline-check -p xen-project/patchew/xen
>> === TEST SCRIPT END ===
>> 
>> warning: redirecting to https://gitlab.com/xen-project/patchew/xen.git/
>> warning: redirecting to https://gitlab.com/xen-project/patchew/xen.git/
>> From https://gitlab.com/xen-project/patchew/xen
>>  * [new tag]               patchew/20210223023428.757694-1-volodymyr_babchuk@epam.com -> patchew/20210223023428.757694-1-volodymyr_babchuk@epam.com
>> Switched to a new branch 'test'
>> a569959cc0 alloc pages: enable preemption early
>> c943c35519 arm: traps: try to preempt before leaving IRQ handler
>> 4b634d1924 arm: context_switch: allow to run with IRQs already disabled
>> 7d78d6e861 sched: core: remove ASSERT_NOT_IN_ATOMIC and disable preemption[!]
>> d56302eb03 arm: setup: disable preemption during startup
>> 18a52ab80a preempt: add try_preempt() function
>> 9c4a07d0fa preempt: use atomic_t to for preempt_count
>> 904e59f28e sched: credit2: save IRQ state during locking
>> 3e3726692c sched: rt: save IRQ state during locking
>> c552842efc sched: core: save IRQ state during locking
>> 
>> === OUTPUT BEGIN ===
>> [2021-02-23 02:38:00] Looking up pipeline...
>> [2021-02-23 02:38:01] Found pipeline 260183774:
>> 
>> https://gitlab.com/xen-project/patchew/xen/-/pipelines/260183774
>> 
>> [2021-02-23 02:38:01] Waiting for pipeline to finish...
>> [2021-02-23 02:53:10] Still waiting...
>> [2021-02-23 03:08:19] Still waiting...
>> [2021-02-23 03:23:29] Still waiting...
>> [2021-02-23 03:38:38] Still waiting...
>> [2021-02-23 03:53:48] Still waiting...
>> [2021-02-23 04:08:57] Still waiting...
>> [2021-02-23 04:19:05] Pipeline failed
>> [2021-02-23 04:19:06] Job 'qemu-smoke-x86-64-clang-pvh' in stage 'test' is failed
>> [2021-02-23 04:19:06] Job 'qemu-smoke-x86-64-gcc-pvh' in stage 'test' is failed
>> [2021-02-23 04:19:06] Job 'qemu-smoke-x86-64-clang' in stage 'test' is failed
>> [2021-02-23 04:19:06] Job 'qemu-smoke-x86-64-gcc' in stage 'test' is failed
>> [2021-02-23 04:19:06] Job 'qemu-smoke-arm64-gcc' in stage 'test' is failed
>> [2021-02-23 04:19:06] Job 'qemu-alpine-arm64-gcc' in stage 'test' is failed
>> [2021-02-23 04:19:06] Job 'alpine-3.12-clang-debug' in stage 'build' is failed
>> [2021-02-23 04:19:06] Job 'alpine-3.12-clang' in stage 'build' is failed
>> [2021-02-23 04:19:06] Job 'alpine-3.12-gcc-debug' in stage 'build' is failed
>> [2021-02-23 04:19:06] Job 'alpine-3.12-gcc' in stage 'build' is failed
>> === OUTPUT END ===
>> 
>> Test command exited with code: 1


-- 
Volodymyr Babchuk at EPAM


* Re: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
  2021-02-25 12:51               ` Volodymyr Babchuk
@ 2021-03-05  9:31                 ` Volodymyr Babchuk
  0 siblings, 0 replies; 15+ messages in thread
From: Volodymyr Babchuk @ 2021-03-05  9:31 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Julien Grall, xen-devel, George Dunlap, Dario Faggioli, Meng Xu,
	Ian Jackson, Jan Beulich, Stefano Stabellini, Wei Liu


Hi,

Volodymyr Babchuk writes:

> Hi Andrew,
>
> Andrew Cooper writes:
>
>> On 24/02/2021 23:58, Volodymyr Babchuk wrote:
>>> And I am not mentioning x86 support there...
>>
>> x86 uses per-pCPU stacks, not per-vCPU stacks.
>>
>> Transcribing from an old thread which happened in private as part of an
>> XSA discussion, concerning the implications of trying to change this.
>>
>> ~Andrew
>>
>> -----8<-----
>>
>> Here is a partial list off the top of my head of the practical problems
>> you're going to have to solve.
>>
>> Introduction of new SpectreRSB vulnerable gadgets.  I'm really close to
>> being able to drop RSB stuffing and recover some performance in Xen.
>>
>> CPL0 entrypoints need updating across schedule.  SYSCALL entry would
>> need to become a stub per vcpu, rather than the current stub per pcpu.
>> This requires reintroducing a writeable mapping to the TSS (doable) and
>> a shadow stack switch of active stacks (This corner case is so broken it
>> looks to be a blocker for CET-SS support in Linux, and is resulting in
>> some conversation about tweaking Shstk's in future processors).
>>
>> All per-cpu variables stop working.  You'd need to rewrite Xen to use
>> %gs for TLS which will have churn in the PV logic, and introduce the x86
>> architectural corner cases of running with an invalid %gs.  Xen has been
>> saved from a large number of privilege escalation vulnerabilities in
>> common with Linux and Windows by the fact that we don't use %gs, so
>> anyone trying to do this is going to have to come up with some concrete
>> way of proving that the corner cases are covered.
>
> Thank you. This is exactly what I needed. I am not a big specialist in
> x86, but from what you said, I can see that there is no easy way to switch
> contexts while in hypervisor mode.
>
> Then I want to return to a task domain idea, which you mentioned in the
> other thread. If I got it right, it would allow to
>
> 1. Implement asynchronous hypercalls for cases when there is no reason
> to hold calling vCPU in hypervisor for the whole call duration
>

Okay, I was too overexcited there. I mean - surely it is possible to
implement async hypercalls, but there is no immediate profit in this: such
a hypercall can't be preempted anyway. On an SMP system you can offload a
hypercall to another core, but that's basically all.

> I skimmed through the ML archives, but didn't find any discussion about it.

Maybe you can give me a hint on how to find it?

> As I see it, its implementation would be close to idle domain
> implementation, but a little different.


-- 
Volodymyr Babchuk at EPAM


* Re: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
  2021-02-24 23:37   ` Volodymyr Babchuk
@ 2021-03-01 14:39     ` George Dunlap
  0 siblings, 0 replies; 15+ messages in thread
From: George Dunlap @ 2021-03-01 14:39 UTC (permalink / raw)
  To: Volodymyr Babchuk
  Cc: Andrew Cooper, xen-devel, Dario Faggioli, Meng Xu, Ian Jackson,
	Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu



> On Feb 24, 2021, at 11:37 PM, Volodymyr Babchuk <volodymyr_babchuk@epam.com> wrote:
> 
> 
>> Hypervisor/virt properties are different to both a kernel-only-RTOS, and
>> regular usespace.  This was why I gave you some specific extra scenarios
>> to do latency testing with, so you could make a fair comparison of
>> "extra overhead caused by Xen" separate from "overhead due to
>> fundamental design constraints of using virt".
> 
> I can't see any fundamental constraints there. I see how virtualization
> architecture can influence context switch time: how many actions you
> need to switch one vCPU to another. I have in mind low level things
> there: reprogram MMU to use another set of tables, reprogram your
> interrupt controller, timer, etc. Of course, you can't get latency lower
> that context switch time. This is the only fundamental constraint I can
> see.

Well, suppose you have two domains, A and B, both of which control hardware with hard real-time requirements.

And suppose that A has just started handling a latency-sensitive interrupt when a latency-sensitive interrupt comes in for B.  You might well preempt A and let B run for a full timeslice, causing A’s interrupt handler to be delayed by a significant amount.

Preventing that sort of thing would be a much more tricky issue to get right.

>> If you want timely interrupt handling, you either need to partition your
>> workloads by the long-running-ness of their hypercalls, or not have
>> long-running hypercalls.
> 
> ... or do long-running tasks asynchronously. I believe, for most
> domctls and sysctls there is no need to hold calling vCPU in hypervisor
> mode at all.
> 
>> I remain unconvinced that preemption is an sensible fix to the problem
>> you're trying to solve.
> 
> Well, this is the purpose of this little experiment. I want to discuss
> different approaches and to estimate amount of required efforts. By the
> way, from x86 point of view, how hard to switch vCPU context while it is
> running in hypervisor mode?

I’m not necessarily opposed to introducing preemption, but the more we ask about things, the more complex things begin to look.  The idea of introducing an async framework to deal with long-running hypercalls is a huge engineering and design effort, not just for Xen, but for all future callers of the interface.

The claim in the cover letter was that “[s]ome hypercalls can not be preempted at all”.  I looked at the reference, and it looks like you’re referring to this:

"I brooded over ways to make [alloc_domheap_pages()] preemptible. But it is a) located deep in call chain and b) used not only by hypercalls. So I can't see an easy way to make it preemptible."

Let’s assume for the sake of argument that preventing delays due to alloc_domheap_pages() would require significant rearchitecting of the code.  And let’s even assume that there are 2-3 other such knotty issues making for unacceptably long hypercalls.  Will identifying and tracking down those issues really be more effort than introducing preemption, introducing async operations, and all the other things we’ve been talking about?

One thing that might be interesting is to add some sort of metrics (disabled in Kconfig by default); e.g.:

1. On entry to a hypercall, take a timestamp

2. On every hypercall_preempt() call, take another timestamp and see how much time has passed without a preempt, and reset the timestamp count; also do a check on exit of the hypercall
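
As a very rough sketch (every name below is invented, the hooks would sit at hypercall entry/exit and at each existing preemption check, and the Kconfig gating is omitted):

#include <xen/percpu.h>
#include <xen/time.h>

/*
 * Track, per pCPU, the longest stretch of a hypercall that ran without
 * reaching a preemption check.
 */
static DEFINE_PER_CPU(s_time_t, hc_last_check);
static DEFINE_PER_CPU(s_time_t, hc_worst_gap);

static inline void hc_stats_enter(void)        /* on hypercall entry */
{
    this_cpu(hc_last_check) = NOW();
}

static inline void hc_stats_checkpoint(void)   /* at each preempt check and on exit */
{
    s_time_t now = NOW();
    s_time_t gap = now - this_cpu(hc_last_check);

    if ( gap > this_cpu(hc_worst_gap) )
        this_cpu(hc_worst_gap) = gap;
    this_cpu(hc_last_check) = now;
}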

We could start by trying to do stats and figuring out which hypercalls go the longest without preemption, as a way to guide the optimization efforts.  Then as we get that number down, we could add ASSERT()s that the time is never longer than a certain amount, and add runs like that to osstest to make sure no regressions are introduced.

I agree that hypercall continuations are complex; and you’re right that the fact that the hypercall continuation may never be called limits where preemption can happen.  But making the entire hypervisor preemption-friendly is also quite complex in its own way; it’s not immediately obvious to me from this thread that hypervisor preemption is less complex.

 -George


* Re: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
  2021-02-25  0:39             ` Andrew Cooper
@ 2021-02-25 12:51               ` Volodymyr Babchuk
  2021-03-05  9:31                 ` Volodymyr Babchuk
  0 siblings, 1 reply; 15+ messages in thread
From: Volodymyr Babchuk @ 2021-02-25 12:51 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Julien Grall, xen-devel, George Dunlap, Dario Faggioli, Meng Xu,
	Ian Jackson, Jan Beulich, Stefano Stabellini, Wei Liu


Hi Andrew,

Andrew Cooper writes:

> On 24/02/2021 23:58, Volodymyr Babchuk wrote:
>> And I am not mentioning x86 support there...
>
> x86 uses per-pCPU stacks, not per-vCPU stacks.
>
> Transcribing from an old thread which happened in private as part of an
> XSA discussion, concerning the implications of trying to change this.
>
> ~Andrew
>
> -----8<-----
>
> Here is a partial list off the top of my head of the practical problems
> you're going to have to solve.
>
> Introduction of new SpectreRSB vulnerable gadgets.  I'm really close to
> being able to drop RSB stuffing and recover some performance in Xen.
>
> CPL0 entrypoints need updating across schedule.  SYSCALL entry would
> need to become a stub per vcpu, rather than the current stub per pcpu.
> This requires reintroducing a writeable mapping to the TSS (doable) and
> a shadow stack switch of active stacks (This corner case is so broken it
> looks to be a blocker for CET-SS support in Linux, and is resulting in
> some conversation about tweaking Shstk's in future processors).
>
> All per-cpu variables stop working.  You'd need to rewrite Xen to use
> %gs for TLS which will have churn in the PV logic, and introduce the x86
> architectural corner cases of running with an invalid %gs.  Xen has been
> saved from a large number of privilege escalation vulnerabilities in
> common with Linux and Windows by the fact that we don't use %gs, so
> anyone trying to do this is going to have to come up with some concrete
> way of proving that the corner cases are covered.

Thank you. This is exactly what I needed. I am not a big specialist in
x86, but from what you said, I can see that there is no easy way to switch
contexts while in hypervisor mode.

Then I want to return to the task domain idea, which you mentioned in the
other thread. If I got it right, it would allow us to

1. Implement asynchronous hypercalls for cases when there is no reason
to hold calling vCPU in hypervisor for the whole call duration

2. Improve time accounting, as tasklets can be scheduled to run in this
task domain.

I skimmed through the ML archives, but didn't find any discussion about it.

As I see it, its implementation would be close to the idle domain
implementation, but a little different.
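
Very roughly, I picture something like this (pure sketch, every name below
is invented, and initialization of the queues/locks as well as the code
that queues work is omitted):

#include <xen/list.h>
#include <xen/percpu.h>
#include <xen/spinlock.h>

/* Deferred hypervisor work handed over to the task domain. */
struct hyp_task {
    struct list_head list;
    void (*fn)(void *data);
    void *data;
};

static DEFINE_PER_CPU(struct list_head, hyp_task_queue);
static DEFINE_PER_CPU(spinlock_t, hyp_task_lock);

void task_domain_block(void);   /* invented: block until work is queued */

/* Entry point of the task domain's vCPU on each pCPU, similar in spirit
 * to the idle loop, but scheduled and accounted like a normal domain. */
static void task_domain_main(void)
{
    for ( ; ; )
    {
        struct hyp_task *t = NULL;

        spin_lock(&this_cpu(hyp_task_lock));
        if ( !list_empty(&this_cpu(hyp_task_queue)) )
        {
            t = list_entry(this_cpu(hyp_task_queue).next,
                           struct hyp_task, list);
            list_del(&t->list);
        }
        spin_unlock(&this_cpu(hyp_task_lock));

        if ( t != NULL )
            t->fn(t->data);          /* runs on the task domain's time */
        else
            task_domain_block();
    }
}

Long-running hypercall work and tasklets could then be queued here instead
of running on the calling guest's vCPU.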

-- 
Volodymyr Babchuk at EPAM


* Re: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
  2021-02-24 23:58           ` Volodymyr Babchuk
@ 2021-02-25  0:39             ` Andrew Cooper
  2021-02-25 12:51               ` Volodymyr Babchuk
  0 siblings, 1 reply; 15+ messages in thread
From: Andrew Cooper @ 2021-02-25  0:39 UTC (permalink / raw)
  To: Volodymyr Babchuk, Julien Grall
  Cc: xen-devel, George Dunlap, Dario Faggioli, Meng Xu, Ian Jackson,
	Jan Beulich, Stefano Stabellini, Wei Liu

On 24/02/2021 23:58, Volodymyr Babchuk wrote:
> And I am not mentioning x86 support there...

x86 uses per-pCPU stacks, not per-vCPU stacks.

Transcribing from an old thread which happened in private as part of an
XSA discussion, concerning the implications of trying to change this.

~Andrew

-----8<-----

Here is a partial list off the top of my head of the practical problems
you're going to have to solve.

Introduction of new SpectreRSB vulnerable gadgets.  I'm really close to
being able to drop RSB stuffing and recover some performance in Xen.

CPL0 entrypoints need updating across schedule.  SYSCALL entry would
need to become a stub per vcpu, rather than the current stub per pcpu.
This requires reintroducing a writeable mapping to the TSS (doable) and
a shadow stack switch of active stacks (This corner case is so broken it
looks to be a blocker for CET-SS support in Linux, and is resulting in
some conversation about tweaking Shstk's in future processors).

All per-cpu variables stop working.  You'd need to rewrite Xen to use
%gs for TLS which will have churn in the PV logic, and introduce the x86
architectural corner cases of running with an invalid %gs.  Xen has been
saved from a large number of privilege escalation vulnerabilities in
common with Linux and Windows by the fact that we don't use %gs, so
anyone trying to do this is going to have to come up with some concrete
way of proving that the corner cases are covered.




* Re: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
  2021-02-24 22:31         ` Julien Grall
@ 2021-02-24 23:58           ` Volodymyr Babchuk
  2021-02-25  0:39             ` Andrew Cooper
  0 siblings, 1 reply; 15+ messages in thread
From: Volodymyr Babchuk @ 2021-02-24 23:58 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, George Dunlap, Dario Faggioli, Meng Xu, Andrew Cooper,
	Ian Jackson, Jan Beulich, Stefano Stabellini, Wei Liu



Julien Grall writes:

> On Wed, 24 Feb 2021 at 20:58, Volodymyr Babchuk
> <Volodymyr_Babchuk@epam.com> wrote:
>>
>>
>> Hi Julien,
>>
>> Julien Grall writes:
>>
>>> On 23/02/2021 12:06, Volodymyr Babchuk wrote:
>>>> Hi Julien,
>>>
>>> Hi Volodymyr,
>>>
>>>> Julien Grall writes:
>>>>> On 23/02/2021 02:34, Volodymyr Babchuk wrote:
>>>>> ... just rescheduling the vCPU. It will also give the opportunity for
>>>>> the guest to handle interrupts.
>>>>>
>>>>> If you don't return to the guest, then risk to get an RCU sched stall
>>>>> on that the vCPU (some hypercalls can take really really long).
>>>> Ah yes, you are right. I'd only wish that hypervisor saved context
>>>> of
>>>> hypercall on it's side...
>>>> I have example of OP-TEE before my eyes. They have special return
>>>> code
>>>> "task was interrupted" and they use separate call "continue execution of
>>>> interrupted task", which takes opaque context handle as a
>>>> parameter. With this approach state of interrupted call never leaks to > rest of the system.
>>>
>>> Feel free to suggest a new approach for the hypercals.
>>>
>>
>> I believe, I suggested it right above. There are some corner cases, that
>> should be addressed, of course.
>
> If we wanted a clean break, then possibly yes.  But I meant one that doesn't
> break all the existing users and doesn't put Xen at risk.
>
> I don't believe your approach fulfill it.

Of course, we can't touch any hypercalls that are part of the stable
ABI. But if I got this right, domctls and sysctls are not stable, so one
can change their behavior quite drastically between major releases.

>>
>>>>>
>>>>>> This approach itself have obvious
>>>>>> problems: code that executes hypercall is responsible for preemption,
>>>>>> preemption checks are infrequent (because they are costly by
>>>>>> themselves), hypercall execution state is stored in guest-controlled
>>>>>> area, we rely on guest's good will to continue the hypercall.
>>>>>
>>>>> Why is it a problem to rely on guest's good will? The hypercalls
>>>>> should be preempted at a boundary that is safe to continue.
>>>> Yes, and it imposes restrictions on how to write hypercall
>>>> handler.
>>>> In other words, there are much more places in hypercall handler code
>>>> where it can be preempted than where hypercall continuation can be
>>>> used. For example, you can preempt hypercall that holds a mutex, but of
>>>> course you can't create an continuation point in such place.
>>>
>>> I disagree, you can create continuation point in such place. Although
>>> it will be more complex because you have to make sure you break the
>>> work in a restartable place.
>>
>> Maybe there is some misunderstanding. You can't create hypercall
>> continuation point in a place where you are holding mutex. Because,
>> there is absolutely not guarantee that guest will restart the
>> hypercall.
>
> I don't think we are disagreeing here. My point is you should rarely
> need to hold a mutex for a long period, so you could break your work
> in smaller chunk. In which cases, you can use hypercall continuation.

Let me put it this way: generally you can hold a mutex for much longer than
you can hold a spinlock, and nothing catastrophic will happen if you are
preempted while holding a mutex. Better to avoid it, of course.

>
>>
>> But you can preempt vCPU while holding mutex, because xen owns scheduler
>> and it can guarantee that vCPU will be scheduled eventually to continue
>> the work and release mutex.
>
> The problem is the "eventually". If you are accounting the time spent
> in the hypervisor to the vCPU A, then there is a possibility that it
> has exhausted its time slice. In which case, your vCPU A may be
> sleeping for a while with a mutex held.
>
> If another vCPU B needs the mutex, it will either have to wait
> potentially for a long time or we need to force vCPU A to run on
> borrowed time.

Yes, of course.

>>
>>> I would also like to point out that preemption also have some drawbacks.
>>> With RT in mind, you have to deal with priority inversion (e.g. a
>>> lower priority vCPU held a mutex that is required by an higher
>>> priority).
>>
>> Of course. This is not as simple as "just call scheduler when we want
>> to".
>
> Your e-mail made it sounds like it was easy to add preemption in
> Xen. ;)

I'm sorry for that :)
Actually, there is a lot of work to do. It appears to me that "current"
needs to be reworked, preempt_enable()/preempt_disable() need to be
reworked, and per-cpu variables should be reworked as well. And this is
just to ensure consistency of the already existing code.

And I am not mentioning x86 support there...

>>> Outside of RT, you have to be careful where mutex are held. In your
>>> earlier answer, you suggested to held mutex for the memory
>>> allocation. If you do that, then it means a domain A can block
>>> allocation for domain B as it helds the mutex.
>>
>> As long as we do not exit to a EL1 with mutex being held, domain A can't
>> block anything. Of course, we have to deal with priority inversion, but
>> malicious domain can't cause DoS.
>
> It is not really a priority inversion problem outside of RT because
> all the tasks will have the same priority. It is more a time
> accounting problem because each vCPU may have a different number of
> credits.

Speaking of that, RTDS does not use the concept of priority. And neither
does ARINC, of course.


>>>>> I am really surprised that this is the only changes necessary in
>>>>> Xen. For a first approach, we may want to be conservative when the
>>>>> preemption is happening as I am not convinced that all the places are
>>>>> safe to preempt.
>>>>>
>>>> Well, I can't say that I ran extensive tests, but I played with this
>>>> for
>>>> some time and it seemed quite stable. Of course, I had some problems
>>>> with RTDS...
>>>> As I see it, Xen is already supports SMP, so all places where races
>>>> are
>>>> possible should already be covered by spinlocks or taken into account by
>>>> some other means.
>>> That's correct for shared resources. I am more worried for any
>>> hypercalls that expected to run more or less continuously (e.g not
>>> taking into account interrupt) on the same pCPU.
>>
>> Are there many such hypercalls? They can disable preemption if they
>> really need to run on the same pCPU. As I understand, they should be
>> relatively fast, because they can't create continuations anyway.
>
> Well, I never tried to make Xen preemptible... My comment is based on
> the fact that the use preempt_{enable, disable}() was mostly done on a
> best effort basis.
>
> The usual suspects are anything using this_cpu() or interacting with
> the per-CPU HW registers.
>
> From a quick look here a few things (only looked at Arm):
>   * map_domain_page() in particular on arm32 because this is using
> per-CPU page-tables
>   * guest_atomics_* as this uses this_cpu()
>   * virt_to_mfn() in particular the failure path
>   * Incorrect use (or missing) rcu locking. (Hopefully Juergen's
> recent work in the RCU mitigate the risk)
>
> I can provide guidance, but you will have to go through the code and
> check what's happening.

Thank you for the list. Of course, I need to go through all the code. I
already had a bunch of problems with per_cpu variables...

-- 
Volodymyr Babchuk at EPAM


* Re: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
  2021-02-24 18:07 ` Andrew Cooper
@ 2021-02-24 23:37   ` Volodymyr Babchuk
  2021-03-01 14:39     ` George Dunlap
  0 siblings, 1 reply; 15+ messages in thread
From: Volodymyr Babchuk @ 2021-02-24 23:37 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: xen-devel, George Dunlap, Dario Faggioli, Meng Xu, Ian Jackson,
	Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu


Hi Andrew,

Andrew Cooper writes:

> On 23/02/2021 02:34, Volodymyr Babchuk wrote:
>> Hello community,
>>
>> The subject of this cover letter is quite self-explanatory. This patch
>> series implements a PoC for preemption in hypervisor mode.
>>
>> This is a sort of follow-up to the recent discussion about latency
>> ([1]).
>>
>> Motivation
>> ==========
>>
>> It is well known that Xen is not preemptible. In other words, it is
>> impossible to switch vCPU contexts while running in hypervisor
>> mode. The only place where a scheduling decision can be made and one
>> vCPU can be replaced with another is the exit path from the hypervisor
>> mode. The one exception is idle vCPUs, which never leave the
>> hypervisor mode for obvious reasons.
>>
>> This leads to a number of problems. This list is not comprehensive. It
>> lists only things that I or my colleagues encountered personally.
>>
>> Long-running hypercalls. Due to the nature of some hypercalls, they can
>> execute for an arbitrarily long time. Mostly those are calls that deal
>> with long lists of similar actions, like processing memory pages. To
>> deal with this issue Xen employs a most horrific technique called
>> "hypercall continuation". When the code that handles a hypercall decides
>> that it should be preempted, it basically updates the hypercall
>> parameters and moves the guest PC one instruction back. This causes the
>> guest to re-execute the hypercall with altered parameters, which will
>> allow the hypervisor to continue the hypercall execution later. This
>> approach has obvious problems: the code that executes the hypercall is
>> responsible for preemption, preemption checks are infrequent (because
>> they are costly by themselves), the hypercall execution state is stored
>> in a guest-controlled area, and we rely on the guest's good will to
>> continue the hypercall. All this imposes restrictions on which
>> hypercalls can be preempted, when they can be preempted, and how to
>> write hypercall handlers. It also requires very careful coding and has
>> already led to at least one vulnerability - XSA-318. Some hypercalls
>> cannot be preempted at all, like the one mentioned in [1].
>>
>> Absence of hypervisor threads/vCPUs. The hypervisor owns only idle
>> vCPUs, which are supposed to run when the system is idle. If the
>> hypervisor needs to execute its own tasks that are required to run right
>> now, it has no other way than to execute them on the current vCPU. But
>> the scheduler does not know that the hypervisor is executing a
>> hypervisor task, and it accounts the time spent to a domain. This can
>> lead to domain starvation.
>>
>> Also, the absence of hypervisor threads leads to the absence of
>> high-level synchronization primitives like mutexes, condition variables,
>> completions, etc. This leads to two problems: we need to use spinlocks
>> everywhere, and we have problems when porting device drivers from the
>> Linux kernel.
>
> You cannot reenter a guest, even to deliver interrupts, if pre-empted at
> an arbitrary point in a hypercall.  State needs unwinding suitably.
>

Yes, Julien already pointed this out to me. So it looks like hypercall
continuations are still needed.

> Xen's non-preemptible-ness is designed to specifically force you to not
> implement long-running hypercalls which would interfere with timely
> interrupt handling in the general case.

What if long-running hypercalls are still required? There are other
options, like async calls, for example.

> Hypervisor/virt properties are different to both a kernel-only-RTOS, and
> regular usespace.  This was why I gave you some specific extra scenarios
> to do latency testing with, so you could make a fair comparison of
> "extra overhead caused by Xen" separate from "overhead due to
> fundamental design constraints of using virt".

I can't see any fundamental constraints there. I see how the virtualization
architecture can influence context switch time: how many actions you need
to perform to switch from one vCPU to another. I have low-level things in
mind here: reprogramming the MMU to use another set of tables, reprogramming
your interrupt controller, timer, etc. Of course, you can't get latency lower
than the context switch time. This is the only fundamental constraint I can
see.

But all other things are debatable.

As for latency testing, I'm not interested in absolute times per se. I
already determined that the time needed to switch vCPU context on my
machine is about 9us, which is fine for me. I am interested in a
(semi-)guaranteed reaction time. And Xen is doing quite well in most
cases. But there are other cases where long-running hypercalls cause
spikes in the reaction time.

> Preemption like this will make some benchmarks look better, but it also
> introduces the ability to create fundamental problems, like preventing
> any interrupt delivery into a VM for seconds of wallclock time while
> each vcpu happens to be in a long-running hypercall.
>
> If you want timely interrupt handling, you either need to partition your
> workloads by the long-running-ness of their hypercalls, or not have
> long-running hypercalls.

... or do long-running tasks asynchronously. I believe that for most
domctls and sysctls there is no need to hold the calling vCPU in hypervisor
mode at all.

> I remain unconvinced that preemption is an sensible fix to the problem
> you're trying to solve.

Well, this is the purpose of this little experiment. I want to discuss
different approaches and to estimate the amount of effort required. By the
way, from the x86 point of view, how hard is it to switch vCPU context
while it is running in hypervisor mode?


-- 
Volodymyr Babchuk at EPAM


* Re: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
  2021-02-24 20:57       ` Volodymyr Babchuk
@ 2021-02-24 22:31         ` Julien Grall
  2021-02-24 23:58           ` Volodymyr Babchuk
  0 siblings, 1 reply; 15+ messages in thread
From: Julien Grall @ 2021-02-24 22:31 UTC (permalink / raw)
  To: Volodymyr Babchuk
  Cc: xen-devel, George Dunlap, Dario Faggioli, Meng Xu, Andrew Cooper,
	Ian Jackson, Jan Beulich, Stefano Stabellini, Wei Liu

On Wed, 24 Feb 2021 at 20:58, Volodymyr Babchuk
<Volodymyr_Babchuk@epam.com> wrote:
>
>
> Hi Julien,
>
> Julien Grall writes:
>
> > On 23/02/2021 12:06, Volodymyr Babchuk wrote:
> >> Hi Julien,
> >
> > Hi Volodymyr,
> >
> >> Julien Grall writes:
> >>> On 23/02/2021 02:34, Volodymyr Babchuk wrote:
> >>> ... just rescheduling the vCPU. It will also give the opportunity for
> >>> the guest to handle interrupts.
> >>>
> >>> If you don't return to the guest, then risk to get an RCU sched stall
> >>> on that the vCPU (some hypercalls can take really really long).
> >> Ah yes, you are right. I'd only wish that hypervisor saved context
> >> of
> >> hypercall on it's side...
> >> I have example of OP-TEE before my eyes. They have special return
> >> code
> >> "task was interrupted" and they use separate call "continue execution of
> >> interrupted task", which takes opaque context handle as a
> >> parameter. With this approach state of interrupted call never leaks to > rest of the system.
> >
> > Feel free to suggest a new approach for the hypercals.
> >
>
> I believe, I suggested it right above. There are some corner cases, that
> should be addressed, of course.

If we wanted a clean break, then possibly yes.  But I meant one that doesn't
break all the existing users and doesn't put Xen at risk.

I don't believe your approach fulfills it.

>
> >>>
> >>>> This approach itself have obvious
> >>>> problems: code that executes hypercall is responsible for preemption,
> >>>> preemption checks are infrequent (because they are costly by
> >>>> themselves), hypercall execution state is stored in guest-controlled
> >>>> area, we rely on guest's good will to continue the hypercall.
> >>>
> >>> Why is it a problem to rely on guest's good will? The hypercalls
> >>> should be preempted at a boundary that is safe to continue.
> >> Yes, and it imposes restrictions on how to write hypercall
> >> handler.
> >> In other words, there are much more places in hypercall handler code
> >> where it can be preempted than where hypercall continuation can be
> >> used. For example, you can preempt hypercall that holds a mutex, but of
> >> course you can't create an continuation point in such place.
> >
> > I disagree, you can create continuation point in such place. Although
> > it will be more complex because you have to make sure you break the
> > work in a restartable place.
>
> Maybe there is some misunderstanding. You can't create hypercall
> continuation point in a place where you are holding mutex. Because,
> there is absolutely not guarantee that guest will restart the
> hypercall.

I don't think we are disagreeing here. My point is you should rarely
need to hold a mutex for a long period, so you could break your work
into smaller chunks. In that case, you can use hypercall continuations.
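
The usual shape is roughly this (sketch only: __HYPERVISOR_foo_op, nr_items
and process_one() are made up, while hypercall_preempt_check() and
hypercall_create_continuation() are the existing primitives):

extern unsigned long nr_items;          /* made up */
void process_one(unsigned long i);      /* made up */

static long do_foo_op(unsigned long start, XEN_GUEST_HANDLE_PARAM(void) arg)
{
    unsigned long i;

    for ( i = start; i < nr_items; i++ )
    {
        process_one(i);                 /* one restartable chunk of work */

        if ( (i + 1) < nr_items && hypercall_preempt_check() )
            /*
             * Fold the progress back into the guest's arguments and
             * arrange for the hypercall to be re-issued from guest
             * context, then bail out.
             */
            return hypercall_create_continuation(__HYPERVISOR_foo_op, "lh",
                                                 i + 1, arg);
    }

    return 0;
}

The point about the mutex stands: such a return can only happen at a place
where nothing (no lock, no half-updated state) has to survive across the
restart.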

>
> But you can preempt vCPU while holding mutex, because xen owns scheduler
> and it can guarantee that vCPU will be scheduled eventually to continue
> the work and release mutex.

The problem is the "eventually". If you are accounting the time spent
in the hypervisor to the vCPU A, then there is a possibility that it
has exhausted its time slice. In which case, your vCPU A may be
sleeping for a while with a mutex held.

If another vCPU B needs the mutex, it will either have to wait
potentially for a long time or we need to force vCPU A to run on
borrowed time.

>
> > I would also like to point out that preemption also have some drawbacks.
> > With RT in mind, you have to deal with priority inversion (e.g. a
> > lower priority vCPU held a mutex that is required by an higher
> > priority).
>
> Of course. This is not as simple as "just call scheduler when we want
> to".

Your e-mail made it sound like it was easy to add preemption in Xen. ;)

>
> > Outside of RT, you have to be careful where mutex are held. In your
> > earlier answer, you suggested to held mutex for the memory
> > allocation. If you do that, then it means a domain A can block
> > allocation for domain B as it helds the mutex.
>
> As long as we do not exit to a EL1 with mutex being held, domain A can't
> block anything. Of course, we have to deal with priority inversion, but
> malicious domain can't cause DoS.

It is not really a priority inversion problem outside of RT because
all the tasks will have the same priority. It is more a time
accounting problem because each vCPU may have a different number of
credits.

> >>> I am really surprised that this is the only changes necessary in
> >>> Xen. For a first approach, we may want to be conservative when the
> >>> preemption is happening as I am not convinced that all the places are
> >>> safe to preempt.
> >>>
> >> Well, I can't say that I ran extensive tests, but I played with this
> >> for
> >> some time and it seemed quite stable. Of course, I had some problems
> >> with RTDS...
> >> As I see it, Xen is already supports SMP, so all places where races
> >> are
> >> possible should already be covered by spinlocks or taken into account by
> >> some other means.
> > That's correct for shared resources. I am more worried for any
> > hypercalls that expected to run more or less continuously (e.g not
> > taking into account interrupt) on the same pCPU.
>
> Are there many such hypercalls? They can disable preemption if they
> really need to run on the same pCPU. As I understand, they should be
> relatively fast, because they can't create continuations anyway.

Well, I never tried to make Xen preemptible... My comment is based on
the fact that the use of preempt_{enable, disable}() was mostly done on a
best-effort basis.

The usual suspects are anything using this_cpu() or interacting with
the per-CPU HW registers.

From a quick look, here are a few things (I only looked at Arm):
  * map_domain_page() in particular on arm32 because this is using
per-CPU page-tables
  * guest_atomics_* as this uses this_cpu()
  * virt_to_mfn() in particular the failure path
  * Incorrect use of (or missing) RCU locking. (Hopefully Juergen's
recent work on RCU mitigates the risk.)

I can provide guidance, but you will have to go through the code and
check what's happening.

Cheers,



* Re: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
  2021-02-24 10:08     ` Julien Grall
@ 2021-02-24 20:57       ` Volodymyr Babchuk
  2021-02-24 22:31         ` Julien Grall
  0 siblings, 1 reply; 15+ messages in thread
From: Volodymyr Babchuk @ 2021-02-24 20:57 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, George Dunlap, Dario Faggioli, Meng Xu, Andrew Cooper,
	Ian Jackson, Jan Beulich, Stefano Stabellini, Wei Liu


Hi Julien,

Julien Grall writes:

> On 23/02/2021 12:06, Volodymyr Babchuk wrote:
>> Hi Julien,
>
> Hi Volodymyr,
>
>> Julien Grall writes:
>>> On 23/02/2021 02:34, Volodymyr Babchuk wrote:
>>> ... just rescheduling the vCPU. It will also give the opportunity for
>>> the guest to handle interrupts.
>>>
>>> If you don't return to the guest, then risk to get an RCU sched stall
>>> on that the vCPU (some hypercalls can take really really long).
>> Ah yes, you are right. I'd only wish that hypervisor saved context
>> of
>> hypercall on it's side...
>> I have example of OP-TEE before my eyes. They have special return
>> code
>> "task was interrupted" and they use separate call "continue execution of
>> interrupted task", which takes opaque context handle as a
>> parameter. With this approach state of interrupted call never leaks to > rest of the system.
>
> Feel free to suggest a new approach for the hypercals.
>

I believe I suggested it right above. There are some corner cases that
should be addressed, of course.
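
To make the suggestion more concrete, the interface shape I have in mind is
roughly this (pure sketch, every name below is invented):

#include <stdint.h>

#define XEN_CALL_DONE         0
#define XEN_CALL_INTERRUPTED  1

/* A long-running call returns "interrupted" plus an opaque handle that
 * refers to state kept entirely on the hypervisor side; the guest then
 * keeps issuing a separate "continue" call until it gets a final result. */
struct xen_long_call_result {
    int      status;    /* XEN_CALL_DONE or XEN_CALL_INTERRUPTED     */
    uint64_t ctx;       /* opaque, meaningful only to the hypervisor */
    int64_t  value;     /* final return value once status is DONE    */
};

/* Invented guest-side hypercall wrappers. */
void hyp_start_long_call(unsigned int op, void *args,
                         struct xen_long_call_result *res);
void hyp_continue_long_call(uint64_t ctx, struct xen_long_call_result *res);

/* Guest-side helper driving such a call to completion. */
static int64_t xen_long_call(unsigned int op, void *args,
                             struct xen_long_call_result *res)
{
    hyp_start_long_call(op, args, res);

    while ( res->status == XEN_CALL_INTERRUPTED )
        hyp_continue_long_call(res->ctx, res);

    return res->value;
}

The important property is that the state of the interrupted call lives
behind 'ctx' inside the hypervisor, so nothing guest-controlled has to be
trusted for the call to resume correctly.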

>>>
>>>> This approach itself have obvious
>>>> problems: code that executes hypercall is responsible for preemption,
>>>> preemption checks are infrequent (because they are costly by
>>>> themselves), hypercall execution state is stored in guest-controlled
>>>> area, we rely on guest's good will to continue the hypercall.
>>>
>>> Why is it a problem to rely on guest's good will? The hypercalls
>>> should be preempted at a boundary that is safe to continue.
>> Yes, and it imposes restrictions on how to write hypercall
>> handler.
>> In other words, there are much more places in hypercall handler code
>> where it can be preempted than where hypercall continuation can be
>> used. For example, you can preempt hypercall that holds a mutex, but of
>> course you can't create an continuation point in such place.
>
> I disagree, you can create continuation point in such place. Although
> it will be more complex because you have to make sure you break the
> work in a restartable place.

Maybe there is some misunderstanding. You can't create a hypercall
continuation point in a place where you are holding a mutex, because
there is absolutely no guarantee that the guest will restart the
hypercall.

But you can preempt a vCPU while it holds a mutex, because Xen owns the
scheduler and can guarantee that the vCPU will eventually be scheduled to
continue the work and release the mutex.

> I would also like to point out that preemption also have some drawbacks.
> With RT in mind, you have to deal with priority inversion (e.g. a
> lower priority vCPU held a mutex that is required by an higher
> priority).

Of course. This is not as simple as "just call the scheduler when we want
to".

> Outside of RT, you have to be careful where mutex are held. In your
> earlier answer, you suggested to held mutex for the memory
> allocation. If you do that, then it means a domain A can block
> allocation for domain B as it helds the mutex.

As long as we do not exit to EL1 with the mutex held, domain A can't
block anything. Of course, we have to deal with priority inversion, but
a malicious domain can't cause a DoS.

> This can lead to quite serious problem if domain A cannot run (because
> it exhausted its credit) for a long time.
>

I believe this problem is related to the priority inversion problem, and
they should be addressed together.

>> 
>>>> All this
>>>> imposes restrictions on which hypercalls can be preempted, when they
>>>> can be preempted and how to write hypercall handlers. Also, it
>>>> requires very accurate coding and already led to at least one
>>>> vulnerability - XSA-318. Some hypercalls can not be preempted at all,
>>>> like the one mentioned in [1].
>>>> Absence of hypervisor threads/vCPUs. Hypervisor owns only idle
>>>> vCPUs,
>>>> which are supposed to run when the system is idle. If hypervisor needs
>>>> to execute own tasks that are required to run right now, it have no
>>>> other way than to execute them on current vCPU. But scheduler does not
>>>> know that hypervisor executes hypervisor task and accounts spent time
>>>> to a domain. This can lead to domain starvation.
>>>> Also, absence of hypervisor threads leads to absence of high-level
>>>> synchronization primitives like mutexes, conditional variables,
>>>> completions, etc. This leads to two problems: we need to use spinlocks
>>>> everywhere and we have problems when porting device drivers from linux
>>>> kernel.
>>>> Proposed solution
>>>> =================
>>>> It is quite obvious that to fix problems above we need to allow
>>>> preemption in hypervisor mode. I am not familiar with x86 side, but
>>>> for the ARM it was surprisingly easy to implement. Basically, vCPU
>>>> context in hypervisor mode is determined by its stack at general
>>>> purpose registers. And __context_switch() function perfectly switches
>>>> them when running in hypervisor mode. So there are no hard
>>>> restrictions, why it should be called only in leave_hypervisor() path.
>>>> The obvious question is: when we should to try to preempt running
>>>> vCPU?  And answer is: when there was an external event. This means
>>>> that we should try to preempt only when there was an interrupt request
>>>> where we are running in hypervisor mode. On ARM, in this case function
>>>> do_trap_irq() is called. Problem is that IRQ handler can be called
>>>> when vCPU is already in atomic state (holding spinlock, for
>>>> example). In this case we should try to preempt right after leaving
>>>> atomic state. This is basically all the idea behind this PoC.
>>>> Now, about the series composition.
>>>> Patches
>>>>     sched: core: save IRQ state during locking
>>>>     sched: rt: save IRQ state during locking
>>>>     sched: credit2: save IRQ state during locking
>>>>     preempt: use atomic_t to for preempt_count
>>>>     arm: setup: disable preemption during startup
>>>>     arm: context_switch: allow to run with IRQs already disabled
>>>> prepare the groundwork for the rest of PoC. It appears that not all
>>>> code is ready to be executed in IRQ state, and schedule() now can be
>>>> called at end of do_trap_irq(), which technically is considered IRQ
>>>> handler state. Also, it is unwise to try preempt things when we are
>>>> still booting, so ween to enable atomic context during the boot
>>>> process.
>>>
>>> I am really surprised that this is the only changes necessary in
>>> Xen. For a first approach, we may want to be conservative when the
>>> preemption is happening as I am not convinced that all the places are
>>> safe to preempt.
>>>
>> Well, I can't say that I ran extensive tests, but I played with this
>> for
>> some time and it seemed quite stable. Of course, I had some problems
>> with RTDS...
>> As I see it, Xen is already supports SMP, so all places where races
>> are
>> possible should already be covered by spinlocks or taken into account by
>> some other means.
> That's correct for shared resources. I am more worried for any
> hypercalls that expected to run more or less continuously (e.g not 
> taking into account interrupt) on the same pCPU.

Are there many such hypercalls? They can disable preemption if they
really need to run on the same pCPU. As I understand it, they should be
relatively fast, because they can't create continuations anyway.
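
Something like this, bracketing the pCPU-bound section with the existing
preempt_disable()/preempt_enable() (sketch only; do_percpu_work() is a
made-up placeholder):

/* Sketch: a handler that must stay on one pCPU. */
static void pcpu_bound_section(void)
{
    preempt_disable();

    do_percpu_work();    /* must not be preempted or migrated here */

    /*
     * In the PoC, leaving the atomic section is one of the points
     * where try_preempt() gets a chance to run.
     */
    preempt_enable();
}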

>> Places which may not be safe to preempt are clustered around task
>> management code itself: schedulers, xen entry/exit points, vcpu
>> creation/destruction and such.
>> For example, for sure we do not want to destroy vCPU which was
>> preempted
>> in hypervisor mode. I didn't covered this case, by the way.
>> 
>>>> Patches
>>>>     preempt: add try_preempt() function
>>>>     sched: core: remove ASSERT_NOT_IN_ATOMIC and disable preemption[!]
>>>>     arm: traps: try to preempt before leaving IRQ handler
>>>> are basically the core of this PoC. try_preempt() function tries to
>>>> preempt vCPU when either called by IRQ handler and when leaving atomic
>>>> state. Scheduler now enters atomic state to ensure that it will not
>>>> preempt self. do_trap_irq() calls try_preempt() to initiate preemption.
>>>
>>> AFAICT, try_preempt() will deal with the rescheduling. But how about
>>> softirqs? Don't we want to handle them in try_preempt() as well?
>> Well, yes and no. We have the following softirqs:
>>   TIMER_SOFTIRQ - should be called, I believe
>>   RCU_SOFTIRQ - I'm not sure about this, but probably no
>
> When would you call RCU callback then?
>

I'm not sure there. But I think they should be called in the same place
as always: while leaving the hypervisor. But I'm not very familiar with
RCU, so I may be talking nonsense.

>>   SCHED_SLAVE_SOFTIRQ - no
>>   SCHEDULE_SOFTIRQ - no
>>   NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ - should be moved to a separate
>>   thread, maybe?
>>   TASKLET_SOFTIRQ - should be moved to a separate thread
>>
>> So, looks like only timers should be handled for sure.
>> 
>>>
>>> [...]
>>>
>>>> Conclusion
>>>> ==========
>>>> My main intention is to begin discussion of hypervisor
>>>> preemption. As
>>>> I showed, it is doable right away and provides some immediate
>>>> benefits. I do understand that proper implementation requires much
>>>> more efforts. But we are ready to do this work if community is
>>>> interested in it.
>>>> Just to reiterate main benefits:
>>>> 1. More controllable latency. On embedded systems customers care
>>>> about
>>>> such things.
>>>
>>> Is the plan to only offer preemptible Xen?
>>>
>> Sorry, didn't get the question.
>
> What's your plan for the preemption support? Will an admin be able to
> configure Xen to be either preemptible or not?

Honestly, it would be much easier to enable it unconditionally. But I
understand that this is not feasible. So I'm looking at a build-time
option.
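
In code it could be as simple as compiling the hook away (CONFIG_PREEMPT_HYP
below is a placeholder name for a Kconfig symbol that does not exist yet):

/* CONFIG_PREEMPT_HYP is a placeholder symbol, nothing is decided yet. */
#ifdef CONFIG_PREEMPT_HYP
void try_preempt(void);
#else
static inline void try_preempt(void) { }   /* compiles away entirely */
#endif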

-- 
Volodymyr Babchuk at EPAM

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
  2021-02-23  2:34 Volodymyr Babchuk
  2021-02-23  9:02 ` Julien Grall
@ 2021-02-24 18:07 ` Andrew Cooper
  2021-02-24 23:37   ` Volodymyr Babchuk
  1 sibling, 1 reply; 15+ messages in thread
From: Andrew Cooper @ 2021-02-24 18:07 UTC (permalink / raw)
  To: Volodymyr Babchuk, xen-devel
  Cc: George Dunlap, Dario Faggioli, Meng Xu, Ian Jackson, Jan Beulich,
	Julien Grall, Stefano Stabellini, Wei Liu

On 23/02/2021 02:34, Volodymyr Babchuk wrote:
> Hello community,
>
> Subject of this cover letter is quite self-explanatory. This patch
> series implements PoC for preemption in hypervisor mode.
>
> This is the sort of follow-up to recent discussion about latency
> ([1]).
>
> Motivation
> ==========
>
> It is well known that Xen is not preemptable. On other words, it is
> impossible to switch vCPU contexts while running in hypervisor
> mode. Only one place where scheduling decision can be made and one
> vCPU can be replaced with another is the exit path from the hypervisor
> mode. The one exception are Idle vCPUs, which never leaves the
> hypervisor mode for obvious reasons.
>
> This leads to a number of problems. This list is not comprehensive. It
> lists only things that I or my colleagues encountered personally.
>
> Long-running hypercalls. Due to nature of some hypercalls they can
> execute for arbitrary long time. Mostly those are calls that deal with
> long list of similar actions, like memory pages processing. To deal
> with this issue Xen employs most horrific technique called "hypercall
> continuation". When code that handles hypercall decides that it should
> be preempted, it basically updates the hypercall parameters, and moves
> guest PC one instruction back. This causes guest to re-execute the
> hypercall with altered parameters, which will allow hypervisor to
> continue hypercall execution later. This approach itself have obvious
> problems: code that executes hypercall is responsible for preemption,
> preemption checks are infrequent (because they are costly by
> themselves), hypercall execution state is stored in guest-controlled
> area, we rely on guest's good will to continue the hypercall. All this
> imposes restrictions on which hypercalls can be preempted, when they
> can be preempted and how to write hypercall handlers. Also, it
> requires very accurate coding and already led to at least one
> vulnerability - XSA-318. Some hypercalls can not be preempted at all,
> like the one mentioned in [1].
>
> Absence of hypervisor threads/vCPUs. Hypervisor owns only idle vCPUs,
> which are supposed to run when the system is idle. If hypervisor needs
> to execute own tasks that are required to run right now, it have no
> other way than to execute them on current vCPU. But scheduler does not
> know that hypervisor executes hypervisor task and accounts spent time
> to a domain. This can lead to domain starvation.
>
> Also, absence of hypervisor threads leads to absence of high-level
> synchronization primitives like mutexes, conditional variables,
> completions, etc. This leads to two problems: we need to use spinlocks
> everywhere and we have problems when porting device drivers from linux
> kernel.

You cannot reenter a guest, even to deliver interrupts, if pre-empted at
an arbitrary point in a hypercall.  State needs unwinding suitably.

Xen's non-preemptible-ness is designed to specifically force you to not
implement long-running hypercalls which would interfere with timely
interrupt handling in the general case.

Hypervisor/virt properties are different to both a kernel-only RTOS and
regular userspace.  This was why I gave you some specific extra scenarios
to do latency testing with, so you could make a fair comparison of
"extra overhead caused by Xen" separate from "overhead due to
fundamental design constraints of using virt".


Preemption like this will make some benchmarks look better, but it also
introduces the ability to create fundamental problems, like preventing
any interrupt delivery into a VM for seconds of wallclock time while
each vcpu happens to be in a long-running hypercall.

If you want timely interrupt handling, you either need to partition your
workloads by the long-running-ness of their hypercalls, or not have
long-running hypercalls.

I remain unconvinced that preemption is a sensible fix to the problem
you're trying to solve.

~Andrew


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
  2021-02-23 12:06   ` Volodymyr Babchuk
@ 2021-02-24 10:08     ` Julien Grall
  2021-02-24 20:57       ` Volodymyr Babchuk
  0 siblings, 1 reply; 15+ messages in thread
From: Julien Grall @ 2021-02-24 10:08 UTC (permalink / raw)
  To: Volodymyr Babchuk
  Cc: xen-devel, George Dunlap, Dario Faggioli, Meng Xu, Andrew Cooper,
	Ian Jackson, Jan Beulich, Stefano Stabellini, Wei Liu



On 23/02/2021 12:06, Volodymyr Babchuk wrote:
> 
> Hi Julien,

Hi Volodymyr,

> Julien Grall writes:
>> On 23/02/2021 02:34, Volodymyr Babchuk wrote:
>> ... just rescheduling the vCPU. It will also give the opportunity for
>> the guest to handle interrupts.
>>
>> If you don't return to the guest, then risk to get an RCU sched stall
>> on that the vCPU (some hypercalls can take really really long).
> 
> Ah yes, you are right. I'd only wish that hypervisor saved context of
> hypercall on it's side...
> 
> I have example of OP-TEE before my eyes. They have special return code
> "task was interrupted" and they use separate call "continue execution of
> interrupted task", which takes opaque context handle as a
> parameter. With this approach state of interrupted call never leaks to
> rest of the system.

Feel free to suggest a new approach for the hypercalls.

>>
>>> This approach itself have obvious
>>> problems: code that executes hypercall is responsible for preemption,
>>> preemption checks are infrequent (because they are costly by
>>> themselves), hypercall execution state is stored in guest-controlled
>>> area, we rely on guest's good will to continue the hypercall.
>>
>> Why is it a problem to rely on guest's good will? The hypercalls
>> should be preempted at a boundary that is safe to continue.
> 
> Yes, and it imposes restrictions on how to write hypercall
> handler.
> In other words, there are much more places in hypercall handler code
> where it can be preempted than where hypercall continuation can be
> used. For example, you can preempt hypercall that holds a mutex, but of
> course you can't create an continuation point in such place.

I disagree, you can create a continuation point in such a place. Although
it will be more complex because you have to make sure you break the work
at a restartable point.

I would also like to point out that preemption also has some drawbacks.
With RT in mind, you have to deal with priority inversion (e.g. a lower
priority vCPU holding a mutex that is required by a higher priority one).

Outside of RT, you have to be careful where mutexes are held. In your
earlier answer, you suggested holding a mutex for the memory allocation.
If you do that, then it means a domain A can block allocation for domain
B while it holds the mutex.

This can lead to quite a serious problem if domain A cannot run (because
it exhausted its credit) for a long time.

> 
>>> All this
>>> imposes restrictions on which hypercalls can be preempted, when they
>>> can be preempted and how to write hypercall handlers. Also, it
>>> requires very accurate coding and already led to at least one
>>> vulnerability - XSA-318. Some hypercalls can not be preempted at all,
>>> like the one mentioned in [1].
>>> Absence of hypervisor threads/vCPUs. Hypervisor owns only idle
>>> vCPUs,
>>> which are supposed to run when the system is idle. If hypervisor needs
>>> to execute own tasks that are required to run right now, it have no
>>> other way than to execute them on current vCPU. But scheduler does not
>>> know that hypervisor executes hypervisor task and accounts spent time
>>> to a domain. This can lead to domain starvation.
>>> Also, absence of hypervisor threads leads to absence of high-level
>>> synchronization primitives like mutexes, conditional variables,
>>> completions, etc. This leads to two problems: we need to use spinlocks
>>> everywhere and we have problems when porting device drivers from linux
>>> kernel.
>>> Proposed solution
>>> =================
>>> It is quite obvious that to fix problems above we need to allow
>>> preemption in hypervisor mode. I am not familiar with x86 side, but
>>> for the ARM it was surprisingly easy to implement. Basically, vCPU
>>> context in hypervisor mode is determined by its stack at general
>>> purpose registers. And __context_switch() function perfectly switches
>>> them when running in hypervisor mode. So there are no hard
>>> restrictions, why it should be called only in leave_hypervisor() path.
>>> The obvious question is: when we should to try to preempt running
>>> vCPU?  And answer is: when there was an external event. This means
>>> that we should try to preempt only when there was an interrupt request
>>> where we are running in hypervisor mode. On ARM, in this case function
>>> do_trap_irq() is called. Problem is that IRQ handler can be called
>>> when vCPU is already in atomic state (holding spinlock, for
>>> example). In this case we should try to preempt right after leaving
>>> atomic state. This is basically all the idea behind this PoC.
>>> Now, about the series composition.
>>> Patches
>>>     sched: core: save IRQ state during locking
>>>     sched: rt: save IRQ state during locking
>>>     sched: credit2: save IRQ state during locking
>>>     preempt: use atomic_t to for preempt_count
>>>     arm: setup: disable preemption during startup
>>>     arm: context_switch: allow to run with IRQs already disabled
>>> prepare the groundwork for the rest of PoC. It appears that not all
>>> code is ready to be executed in IRQ state, and schedule() now can be
>>> called at end of do_trap_irq(), which technically is considered IRQ
>>> handler state. Also, it is unwise to try preempt things when we are
>>> still booting, so ween to enable atomic context during the boot
>>> process.
>>
>> I am really surprised that this is the only changes necessary in
>> Xen. For a first approach, we may want to be conservative when the
>> preemption is happening as I am not convinced that all the places are
>> safe to preempt.
>>
> 
> Well, I can't say that I ran extensive tests, but I played with this for
> some time and it seemed quite stable. Of course, I had some problems
> with RTDS...
> 
> As I see it, Xen is already supports SMP, so all places where races are
> possible should already be covered by spinlocks or taken into account by
> some other means.
That's correct for shared resources. I am more worried about any
hypercalls that are expected to run more or less continuously (i.e. not
taking interrupts into account) on the same pCPU.

> 
> Places which may not be safe to preempt are clustered around task
> management code itself: schedulers, xen entry/exit points, vcpu
> creation/destruction and such.
> 
> For example, for sure we do not want to destroy vCPU which was preempted
> in hypervisor mode. I didn't covered this case, by the way.
> 
>>> Patches
>>>     preempt: add try_preempt() function
>>>     sched: core: remove ASSERT_NOT_IN_ATOMIC and disable preemption[!]
>>>     arm: traps: try to preempt before leaving IRQ handler
>>> are basically the core of this PoC. try_preempt() function tries to
>>> preempt vCPU when either called by IRQ handler and when leaving atomic
>>> state. Scheduler now enters atomic state to ensure that it will not
>>> preempt self. do_trap_irq() calls try_preempt() to initiate preemption.
>>
>> AFAICT, try_preempt() will deal with the rescheduling. But how about
>> softirqs? Don't we want to handle them in try_preempt() as well?
> 
> Well, yes and no. We have the following softirqs:
> 
>   TIMER_SOFTIRQ - should be called, I believe
>   RCU_SOFTIRQ - I'm not sure about this, but probably no

When would you call the RCU callbacks then?

>   SCHED_SLAVE_SOFTIRQ - no
>   SCHEDULE_SOFTIRQ - no
>   NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ - should be moved to a separate
>   thread, maybe?
>   TASKLET_SOFTIRQ - should be moved to a separate thread
>
> So, looks like only timers should be handled for sure.
> 
>>
>> [...]
>>
>>> Conclusion
>>> ==========
>>> My main intention is to begin discussion of hypervisor
>>> preemption. As
>>> I showed, it is doable right away and provides some immediate
>>> benefits. I do understand that proper implementation requires much
>>> more efforts. But we are ready to do this work if community is
>>> interested in it.
>>> Just to reiterate main benefits:
>>> 1. More controllable latency. On embedded systems customers care
>>> about
>>> such things.
>>
>> Is the plan to only offer preemptible Xen?
>>
> 
> Sorry, didn't get the question.

What's your plan for the preemption support? Will an admin be able to 
configure Xen to be either preemptible or not?

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
  2021-02-23  9:02 ` Julien Grall
@ 2021-02-23 12:06   ` Volodymyr Babchuk
  2021-02-24 10:08     ` Julien Grall
  0 siblings, 1 reply; 15+ messages in thread
From: Volodymyr Babchuk @ 2021-02-23 12:06 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, George Dunlap, Dario Faggioli, Meng Xu, Andrew Cooper,
	Ian Jackson, Jan Beulich, Stefano Stabellini, Wei Liu


Hi Julien,

Julien Grall writes:

> On 23/02/2021 02:34, Volodymyr Babchuk wrote:
>> Hello community,
>
> Hi Volodymyr,
>
> Thank you for the proposal, I like the like of been able to preempt
> the vCPU thread. This would make easier to implement some of the
> device emulation in Xen (e.g. vGIC, SMMU).

Yes, emulation is the other topic that I didn't mention. Also, it could
lift some restrictions in the OP-TEE mediator code as well.

>> Subject of this cover letter is quite self-explanatory. This patch
>> series implements PoC for preemption in hypervisor mode.
>> This is the sort of follow-up to recent discussion about latency
>> ([1]).
>> Motivation
>> ==========
>> It is well known that Xen is not preemptable. On other words, it is
>> impossible to switch vCPU contexts while running in hypervisor
>> mode. Only one place where scheduling decision can be made and one
>> vCPU can be replaced with another is the exit path from the hypervisor
>> mode. The one exception are Idle vCPUs, which never leaves the
>> hypervisor mode for obvious reasons.
>> This leads to a number of problems. This list is not
>> comprehensive. It
>> lists only things that I or my colleagues encountered personally.
>> Long-running hypercalls. Due to nature of some hypercalls they can
>> execute for arbitrary long time. Mostly those are calls that deal with
>> long list of similar actions, like memory pages processing. To deal
>> with this issue Xen employs most horrific technique called "hypercall
>> continuation". 
>
> I agree the code is not nice. However, it does serve another purpose
> than ...
>
>> When code that handles hypercall decides that it should
>> be preempted, it basically updates the hypercall parameters, and moves
>> guest PC one instruction back. This causes guest to re-execute the
>> hypercall with altered parameters, which will allow hypervisor to
>> continue hypercall execution later.
>
> ... just rescheduling the vCPU. It will also give the opportunity for
> the guest to handle interrupts.
>
> If you don't return to the guest, then risk to get an RCU sched stall
> on that the vCPU (some hypercalls can take really really long).

Ah yes, you are right. I only wish that the hypervisor saved the context
of the hypercall on its side...

I have the example of OP-TEE before my eyes. It has a special return code,
"task was interrupted", and a separate call, "continue execution of
interrupted task", which takes an opaque context handle as a
parameter. With this approach the state of an interrupted call never
leaks to the rest of the system.
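
Roughly, the shape of that protocol, if Xen had something similar (the
names and return values below are made up, this is not the OP-TEE ABI):

/* Guest-side sketch with made-up names. */
struct op_result {
    int status;        /* OP_DONE or OP_INTERRUPTED */
    uint64_t handle;   /* opaque continuation handle, owned by the hypervisor */
};

static int issue_op(const struct op_args *args)
{
    struct op_result res = hypercall_do_op(args);

    /*
     * The guest only carries the opaque handle around; the interrupted
     * state itself never leaves the hypervisor.
     */
    while ( res.status == OP_INTERRUPTED )
        res = hypercall_continue_op(res.handle);

    return res.status;
}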

>
>> This approach itself have obvious
>> problems: code that executes hypercall is responsible for preemption,
>> preemption checks are infrequent (because they are costly by
>> themselves), hypercall execution state is stored in guest-controlled
>> area, we rely on guest's good will to continue the hypercall. 
>
> Why is it a problem to rely on guest's good will? The hypercalls
> should be preempted at a boundary that is safe to continue.

Yes, and it imposes restrictions on how to write a hypercall
handler.
In other words, there are many more places in hypercall handler code
where it can be preempted than where a hypercall continuation can be
used. For example, you can preempt a hypercall that holds a mutex, but of
course you can't create a continuation point in such a place.

>> All this
>> imposes restrictions on which hypercalls can be preempted, when they
>> can be preempted and how to write hypercall handlers. Also, it
>> requires very accurate coding and already led to at least one
>> vulnerability - XSA-318. Some hypercalls can not be preempted at all,
>> like the one mentioned in [1].
>> Absence of hypervisor threads/vCPUs. Hypervisor owns only idle
>> vCPUs,
>> which are supposed to run when the system is idle. If hypervisor needs
>> to execute own tasks that are required to run right now, it have no
>> other way than to execute them on current vCPU. But scheduler does not
>> know that hypervisor executes hypervisor task and accounts spent time
>> to a domain. This can lead to domain starvation.
>> Also, absence of hypervisor threads leads to absence of high-level
>> synchronization primitives like mutexes, conditional variables,
>> completions, etc. This leads to two problems: we need to use spinlocks
>> everywhere and we have problems when porting device drivers from linux
>> kernel.
>> Proposed solution
>> =================
>> It is quite obvious that to fix problems above we need to allow
>> preemption in hypervisor mode. I am not familiar with x86 side, but
>> for the ARM it was surprisingly easy to implement. Basically, vCPU
>> context in hypervisor mode is determined by its stack at general
>> purpose registers. And __context_switch() function perfectly switches
>> them when running in hypervisor mode. So there are no hard
>> restrictions, why it should be called only in leave_hypervisor() path.
>> The obvious question is: when we should to try to preempt running
>> vCPU?  And answer is: when there was an external event. This means
>> that we should try to preempt only when there was an interrupt request
>> where we are running in hypervisor mode. On ARM, in this case function
>> do_trap_irq() is called. Problem is that IRQ handler can be called
>> when vCPU is already in atomic state (holding spinlock, for
>> example). In this case we should try to preempt right after leaving
>> atomic state. This is basically all the idea behind this PoC.
>> Now, about the series composition.
>> Patches
>>    sched: core: save IRQ state during locking
>>    sched: rt: save IRQ state during locking
>>    sched: credit2: save IRQ state during locking
>>    preempt: use atomic_t to for preempt_count
>>    arm: setup: disable preemption during startup
>>    arm: context_switch: allow to run with IRQs already disabled
>> prepare the groundwork for the rest of PoC. It appears that not all
>> code is ready to be executed in IRQ state, and schedule() now can be
>> called at end of do_trap_irq(), which technically is considered IRQ
>> handler state. Also, it is unwise to try preempt things when we are
>> still booting, so ween to enable atomic context during the boot
>> process.
>
> I am really surprised that this is the only changes necessary in
> Xen. For a first approach, we may want to be conservative when the
> preemption is happening as I am not convinced that all the places are
> safe to preempt.
>

Well, I can't say that I ran extensive tests, but I played with this for
some time and it seemed quite stable. Of course, I had some problems
with RTDS...

As I see it, Xen already supports SMP, so all places where races are
possible should already be covered by spinlocks or taken into account by
some other means.

Places which may not be safe to preempt are clustered around the task
management code itself: schedulers, xen entry/exit points, vcpu
creation/destruction and such.

For example, we certainly do not want to destroy a vCPU which was preempted
in hypervisor mode. I didn't cover this case, by the way.

>> Patches
>>    preempt: add try_preempt() function
>>    sched: core: remove ASSERT_NOT_IN_ATOMIC and disable preemption[!]
>>    arm: traps: try to preempt before leaving IRQ handler
>> are basically the core of this PoC. try_preempt() function tries to
>> preempt vCPU when either called by IRQ handler and when leaving atomic
>> state. Scheduler now enters atomic state to ensure that it will not
>> preempt self. do_trap_irq() calls try_preempt() to initiate preemption.
>
> AFAICT, try_preempt() will deal with the rescheduling. But how about
> softirqs? Don't we want to handle them in try_preempt() as well?

Well, yes and no. We have the following softirqs:

 TIMER_SOFTIRQ - should be called, I believe
 RCU_SOFTIRQ - I'm not sure about this, but probably no
 SCHED_SLAVE_SOFTIRQ - no
 SCHEDULE_SOFTIRQ - no
 NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ - should be moved to a separate
 thread, maybe?
 TASKLET_SOFTIRQ - should be moved to a separate thread

So, looks like only timers should be handled for sure.

>
> [...]
>
>> Conclusion
>> ==========
>> My main intention is to begin discussion of hypervisor
>> preemption. As
>> I showed, it is doable right away and provides some immediate
>> benefits. I do understand that proper implementation requires much
>> more efforts. But we are ready to do this work if community is
>> interested in it.
>> Just to reiterate main benefits:
>> 1. More controllable latency. On embedded systems customers care
>> about
>> such things.
>
> Is the plan to only offer preemptible Xen?
>

Sorry, didn't get the question.

>> 2. We can get rid of hypercall continuations, which will results in
>> simpler and more secure code.
>
> I don't think you can get rid of it completely without risking the OS
> to receive RCU sched stall. So you would need to handle them
> hypercalls differently.

Agreed. I believe that the continuation context should reside in the
hypervisor. Those changes are not connected to preemption per se and can
be implemented separately. But we can discuss them there, of course.

[...]

-- 
Volodymyr Babchuk at EPAM

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
  2021-02-23  2:34 Volodymyr Babchuk
@ 2021-02-23  9:02 ` Julien Grall
  2021-02-23 12:06   ` Volodymyr Babchuk
  2021-02-24 18:07 ` Andrew Cooper
  1 sibling, 1 reply; 15+ messages in thread
From: Julien Grall @ 2021-02-23  9:02 UTC (permalink / raw)
  To: Volodymyr Babchuk, xen-devel
  Cc: George Dunlap, Dario Faggioli, Meng Xu, Andrew Cooper,
	Ian Jackson, Jan Beulich, Stefano Stabellini, Wei Liu



On 23/02/2021 02:34, Volodymyr Babchuk wrote:
> Hello community,

Hi Volodymyr,

Thank you for the proposal, I like the idea of being able to preempt the
vCPU thread. This would make it easier to implement some of the device
emulation in Xen (e.g. vGIC, SMMU).

> 
> Subject of this cover letter is quite self-explanatory. This patch
> series implements PoC for preemption in hypervisor mode.
> 
> This is the sort of follow-up to recent discussion about latency
> ([1]).
> 
> Motivation
> ==========
> 
> It is well known that Xen is not preemptable. On other words, it is
> impossible to switch vCPU contexts while running in hypervisor
> mode. Only one place where scheduling decision can be made and one
> vCPU can be replaced with another is the exit path from the hypervisor
> mode. The one exception are Idle vCPUs, which never leaves the
> hypervisor mode for obvious reasons.
> 
> This leads to a number of problems. This list is not comprehensive. It
> lists only things that I or my colleagues encountered personally.
> 
> Long-running hypercalls. Due to nature of some hypercalls they can
> execute for arbitrary long time. Mostly those are calls that deal with
> long list of similar actions, like memory pages processing. To deal
> with this issue Xen employs most horrific technique called "hypercall
> continuation". 

I agree the code is not nice. However, it does serve another purpose 
than ...

> When code that handles hypercall decides that it should
> be preempted, it basically updates the hypercall parameters, and moves
> guest PC one instruction back. This causes guest to re-execute the
> hypercall with altered parameters, which will allow hypervisor to
> continue hypercall execution later.

... just rescheduling the vCPU. It will also give the opportunity for 
the guest to handle interrupts.

If you don't return to the guest, then you risk getting an RCU sched stall
on that vCPU (some hypercalls can take really, really long).


> This approach itself have obvious
> problems: code that executes hypercall is responsible for preemption,
> preemption checks are infrequent (because they are costly by
> themselves), hypercall execution state is stored in guest-controlled
> area, we rely on guest's good will to continue the hypercall. 

Why is it a problem to rely on the guest's good will? The hypercalls
should be preempted at a boundary that is safe to continue from.

> All this
> imposes restrictions on which hypercalls can be preempted, when they
> can be preempted and how to write hypercall handlers. Also, it
> requires very accurate coding and already led to at least one
> vulnerability - XSA-318. Some hypercalls can not be preempted at all,
> like the one mentioned in [1].
> 
> Absence of hypervisor threads/vCPUs. Hypervisor owns only idle vCPUs,
> which are supposed to run when the system is idle. If hypervisor needs
> to execute own tasks that are required to run right now, it have no
> other way than to execute them on current vCPU. But scheduler does not
> know that hypervisor executes hypervisor task and accounts spent time
> to a domain. This can lead to domain starvation.
> 
> Also, absence of hypervisor threads leads to absence of high-level
> synchronization primitives like mutexes, conditional variables,
> completions, etc. This leads to two problems: we need to use spinlocks
> everywhere and we have problems when porting device drivers from linux
> kernel.
> 
> Proposed solution
> =================
> 
> It is quite obvious that to fix problems above we need to allow
> preemption in hypervisor mode. I am not familiar with x86 side, but
> for the ARM it was surprisingly easy to implement. Basically, vCPU
> context in hypervisor mode is determined by its stack at general
> purpose registers. And __context_switch() function perfectly switches
> them when running in hypervisor mode. So there are no hard
> restrictions, why it should be called only in leave_hypervisor() path.
> 
> The obvious question is: when we should to try to preempt running
> vCPU?  And answer is: when there was an external event. This means
> that we should try to preempt only when there was an interrupt request
> where we are running in hypervisor mode. On ARM, in this case function
> do_trap_irq() is called. Problem is that IRQ handler can be called
> when vCPU is already in atomic state (holding spinlock, for
> example). In this case we should try to preempt right after leaving
> atomic state. This is basically all the idea behind this PoC.
> 
> Now, about the series composition.
> Patches
> 
>    sched: core: save IRQ state during locking
>    sched: rt: save IRQ state during locking
>    sched: credit2: save IRQ state during locking
>    preempt: use atomic_t to for preempt_count
>    arm: setup: disable preemption during startup
>    arm: context_switch: allow to run with IRQs already disabled
> 
> prepare the groundwork for the rest of PoC. It appears that not all
> code is ready to be executed in IRQ state, and schedule() now can be
> called at end of do_trap_irq(), which technically is considered IRQ
> handler state. Also, it is unwise to try preempt things when we are
> still booting, so ween to enable atomic context during the boot
> process.

I am really surprised that these are the only changes necessary in Xen.
For a first approach, we may want to be conservative about when the
preemption happens, as I am not convinced that all the places are safe to
preempt.

> 
> Patches
>    preempt: add try_preempt() function
>    sched: core: remove ASSERT_NOT_IN_ATOMIC and disable preemption[!]
>    arm: traps: try to preempt before leaving IRQ handler
> 
> are basically the core of this PoC. try_preempt() function tries to
> preempt vCPU when either called by IRQ handler and when leaving atomic
> state. Scheduler now enters atomic state to ensure that it will not
> preempt self. do_trap_irq() calls try_preempt() to initiate preemption.

AFAICT, try_preempt() will deal with the rescheduling. But how about 
softirqs? Don't we want to handle them in try_preempt() as well?

[...]

> Conclusion
> ==========
> 
> My main intention is to begin discussion of hypervisor preemption. As
> I showed, it is doable right away and provides some immediate
> benefits. I do understand that proper implementation requires much
> more efforts. But we are ready to do this work if community is
> interested in it.
> 
> Just to reiterate main benefits:
> 
> 1. More controllable latency. On embedded systems customers care about
> such things.

Is the plan to only offer preemptible Xen?

> 
> 2. We can get rid of hypercall continuations, which will results in
> simpler and more secure code.

I don't think you can get rid of it completely without risking the OS
receiving an RCU sched stall. So you would need to handle those hypercalls
differently.

> 
> 3. We can implement proper hypervisor threads, mutexes, completions
> and so on. This will make scheduling more accurate, ease up linux
> drivers porting and implementation of more complex features in the
> hypervisor.
> 
> 
> 
> [1] https://marc.info/?l=xen-devel&m=161049529916656&w=2
> 
> Volodymyr Babchuk (10):
>    sched: core: save IRQ state during locking
>    sched: rt: save IRQ state during locking
>    sched: credit2: save IRQ state during locking
>    preempt: use atomic_t to for preempt_count
>    preempt: add try_preempt() function
>    arm: setup: disable preemption during startup
>    sched: core: remove ASSERT_NOT_IN_ATOMIC and disable preemption[!]
>    arm: context_switch: allow to run with IRQs already disabled
>    arm: traps: try to preempt before leaving IRQ handler
>    [HACK] alloc pages: enable preemption early
> 
>   xen/arch/arm/domain.c      | 18 ++++++++++-----
>   xen/arch/arm/setup.c       |  4 ++++
>   xen/arch/arm/traps.c       |  7 ++++++
>   xen/common/memory.c        |  4 ++--
>   xen/common/page_alloc.c    | 21 ++---------------
>   xen/common/preempt.c       | 36 ++++++++++++++++++++++++++---
>   xen/common/sched/core.c    | 46 +++++++++++++++++++++++---------------
>   xen/common/sched/credit2.c |  5 +++--
>   xen/common/sched/rt.c      | 10 +++++----
>   xen/include/xen/preempt.h  | 17 +++++++++-----
>   10 files changed, 109 insertions(+), 59 deletions(-)
> 

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 15+ messages in thread

* [RFC PATCH 00/10] Preemption in hypervisor (ARM only)
@ 2021-02-23  2:34 Volodymyr Babchuk
  2021-02-23  9:02 ` Julien Grall
  2021-02-24 18:07 ` Andrew Cooper
  0 siblings, 2 replies; 15+ messages in thread
From: Volodymyr Babchuk @ 2021-02-23  2:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Volodymyr Babchuk, George Dunlap, Dario Faggioli, Meng Xu,
	Andrew Cooper, Ian Jackson, Jan Beulich, Julien Grall,
	Stefano Stabellini, Wei Liu, Volodymyr Babchuk

Hello community,

The subject of this cover letter is quite self-explanatory. This patch
series implements a PoC for preemption in hypervisor mode.

This is a sort of follow-up to the recent discussion about latency
([1]).

Motivation
==========

It is well known that Xen is not preemptible. In other words, it is
impossible to switch vCPU contexts while running in hypervisor
mode. The only place where a scheduling decision can be made and one
vCPU can be replaced with another is the exit path from hypervisor
mode. The one exception is idle vCPUs, which never leave hypervisor
mode for obvious reasons.

This leads to a number of problems. This list is not comprehensive. It
lists only things that I or my colleagues encountered personally.

Long-running hypercalls. Due to the nature of some hypercalls, they can
execute for an arbitrarily long time. Mostly those are calls that deal
with long lists of similar actions, like memory page processing. To deal
with this issue Xen employs a most horrific technique called "hypercall
continuation". When the code that handles a hypercall decides that it
should be preempted, it basically updates the hypercall parameters and
moves the guest PC one instruction back. This causes the guest to
re-execute the hypercall with altered parameters, which allows the
hypervisor to continue hypercall execution later. This approach has
obvious problems: the code that executes the hypercall is responsible
for preemption, preemption checks are infrequent (because they are
costly by themselves), hypercall execution state is stored in a
guest-controlled area, and we rely on the guest's good will to continue
the hypercall. All this imposes restrictions on which hypercalls can be
preempted, when they can be preempted and how to write hypercall
handlers. Also, it requires very careful coding and has already led to
at least one vulnerability - XSA-318. Some hypercalls can not be
preempted at all, like the one mentioned in [1].
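
For reference, the pattern looks roughly like this (simplified from what
e.g. xen/common/memory.c does; error handling omitted):

    long rc = 0;
    unsigned long i;

    for ( i = start_extent; i < nr_extents; i++ )
    {
        if ( i != start_extent && hypercall_preempt_check() )
        {
            /*
             * Encode the progress into the hypercall arguments and make
             * the guest re-execute the hypercall from where we stopped.
             */
            rc = hypercall_create_continuation(
                __HYPERVISOR_memory_op, "lh",
                op | (i << MEMOP_EXTENT_SHIFT), arg);
            break;
        }

        /* ... process one extent ... */
    }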

Absence of hypervisor threads/vCPUs. The hypervisor owns only idle vCPUs,
which are supposed to run when the system is idle. If the hypervisor needs
to execute its own tasks that are required to run right now, it has no
other way than to execute them on the current vCPU. But the scheduler does
not know that the hypervisor is executing a hypervisor task, and accounts
the time spent to a domain. This can lead to domain starvation.

Also, the absence of hypervisor threads leads to the absence of high-level
synchronization primitives like mutexes, condition variables,
completions, etc. This leads to two problems: we need to use spinlocks
everywhere, and we have problems when porting device drivers from the
Linux kernel.

Proposed solution
=================

It is quite obvious that to fix the problems above we need to allow
preemption in hypervisor mode. I am not familiar with the x86 side, but
for ARM it was surprisingly easy to implement. Basically, vCPU
context in hypervisor mode is determined by its stack and general
purpose registers. And the __context_switch() function perfectly switches
them when running in hypervisor mode. So there are no hard
restrictions on why it should be called only in the leave_hypervisor() path.

The obvious question is: when should we try to preempt a running
vCPU?  And the answer is: when there was an external event. This means
that we should try to preempt only when there was an interrupt request
while we are running in hypervisor mode. On ARM, in this case the function
do_trap_irq() is called. The problem is that the IRQ handler can be called
when the vCPU is already in an atomic state (holding a spinlock, for
example). In this case we should try to preempt right after leaving the
atomic state. This is basically the whole idea behind this PoC.

Now, about the series composition.
Patches

  sched: core: save IRQ state during locking
  sched: rt: save IRQ state during locking
  sched: credit2: save IRQ state during locking
  preempt: use atomic_t to for preempt_count
  arm: setup: disable preemption during startup
  arm: context_switch: allow to run with IRQs already disabled

prepare the groundwork for the rest of the PoC. It appears that not all
code is ready to be executed in IRQ state, and schedule() can now be
called at the end of do_trap_irq(), which technically is considered IRQ
handler state. Also, it is unwise to try to preempt things when we are
still booting, so we need to enable atomic context during the boot
process.

Patches
  preempt: add try_preempt() function
  sched: core: remove ASSERT_NOT_IN_ATOMIC and disable preemption[!]
  arm: traps: try to preempt before leaving IRQ handler

are basically the core of this PoC. The try_preempt() function tries to
preempt the vCPU either when called by the IRQ handler or when leaving
atomic state. The scheduler now enters atomic state to ensure that it will
not preempt itself. do_trap_irq() calls try_preempt() to initiate preemption.
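
Conceptually, try_preempt() boils down to something like this (simplified,
not the exact code from the patches):

void try_preempt(void)
{
    /* Never preempt while an atomic section is still active... */
    if ( preempt_count() )
        return;

    /* ...or while the system is still booting. */
    if ( system_state < SYS_STATE_active )
        return;

    /* __context_switch() works from hypervisor mode, so just reschedule. */
    schedule();
}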

Patch
  [HACK] alloc pages: enable preemption early

is exactly what it says. I wanted to see if this PoC is capable of
fixing the previously mentioned issue with long-running
alloc_heap_pages(). So this is just a hack that disables atomic context
early. As mentioned in the patch description, the right solution would
be to use mutexes.

Results
=======

I used the same testing setup that I described in [1]. The results are
quite promising:

1. Stefano noted that the very first batch of measurements resulted in
higher than usual latency:

 *** Booting Zephyr OS build zephyr-v2.4.0-2750-g0f2c858a39fc  ***
RT Eval app

Counter freq is 33280000 Hz. Period is 30 ns
Set alarm in 0 sec (332800 ticks)
Mean: 600 (18000 ns) stddev: 3737 (112110 ns) above thr: 0% [265 (7950 ns) - 66955 (2008650 ns)]
Mean: 388 (11640 ns) stddev: 2059 (61770 ns) above thr: 0% [266 (7980 ns) - 58830 (1764900 ns)]

Note that maximum latency is about 2ms.

With these patches applied, things are much better:

 *** Booting Zephyr OS build zephyr-v2.4.0-3614-g0e2689f8edc3  ***
RT Eval app

Counter freq is 33280000 Hz. Period is 30 ns
Set alarm in 0 sec (332800 ticks)
Mean: 335 (10050 ns) stddev: 52 (1560 ns) above thr: 0% [296 (8880 ns) - 1256 (37680 ns)]
Mean: 332 (9960 ns) stddev: 11 (330 ns) above thr: 0% [293 (8790 ns) - 501 (15030 ns)]

As you can see, maximum latency is ~38us, which is way lower than 2ms.

The second test is to observe the influence of a call to
alloc_heap_pages() with order 18. Without the last patch:

Mean: 756 (22680 ns) stddev: 7328 (219840 ns) above thr: 4% [326 (9780 ns) - 234405 (7032150 ns)]

A huge spike of 7ms can be observed.

Now, with the HACK patch:

Mean: 488 (14640 ns) stddev: 1656 (49680 ns) above thr: 6% [324 (9720 ns) - 52756 (1582680 ns)]
Mean: 458 (13740 ns) stddev: 227 (6810 ns) above thr: 3% [324 (9720 ns) - 3936 (118080 ns)]
Mean: 333 (9990 ns) stddev: 12 (360 ns) above thr: 0% [320 (9600 ns) - 512 (15360 ns)]

Two things can be observed: both the mean latency and the maximum
latencies are lower, but the overall runtime is higher.

The downside of these patches is that the mean latency is a bit
higher. Here are the results for the current xen master branch:

Mean: 288 (8640 ns) stddev: 20 (600 ns) above thr: 0% [269 (8070 ns) - 766 (22980 ns)]
Mean: 287 (8610 ns) stddev: 20 (600 ns) above thr: 0% [266 (7980 ns) - 793 (23790 ns)]

8.6us versus ~10us with the patches.

Of course, this is a crude approach and certain things can be done
more optimally.

Known issues
============

0. Right now it is ARM only. x86 changes vCPU contexts in a different
way, and I don't know what amount of changes is needed to make this work
on x86.

1. The RTDS scheduler goes crazy when running on an SMP system (i.e. with
more than 1 pCPU) and tries to schedule an already running vCPU on
multiple pCPUs at a time. This leads to some hard-to-debug crashes.

2. As I mentioned, the mean latency becomes a bit higher.

Conclusion
==========

My main intention is to begin a discussion of hypervisor preemption. As
I showed, it is doable right away and provides some immediate
benefits. I do understand that a proper implementation requires much
more effort. But we are ready to do this work if the community is
interested in it.

Just to reiterate main benefits:

1. More controllable latency. On embedded systems customers care about
such things.

2. We can get rid of hypercall continuations, which will result in
simpler and more secure code.

3. We can implement proper hypervisor threads, mutexes, completions
and so on. This will make scheduling more accurate, ease porting Linux
drivers, and allow implementing more complex features in the
hypervisor.



[1] https://marc.info/?l=xen-devel&m=161049529916656&w=2

Volodymyr Babchuk (10):
  sched: core: save IRQ state during locking
  sched: rt: save IRQ state during locking
  sched: credit2: save IRQ state during locking
  preempt: use atomic_t to for preempt_count
  preempt: add try_preempt() function
  arm: setup: disable preemption during startup
  sched: core: remove ASSERT_NOT_IN_ATOMIC and disable preemption[!]
  arm: context_switch: allow to run with IRQs already disabled
  arm: traps: try to preempt before leaving IRQ handler
  [HACK] alloc pages: enable preemption early

 xen/arch/arm/domain.c      | 18 ++++++++++-----
 xen/arch/arm/setup.c       |  4 ++++
 xen/arch/arm/traps.c       |  7 ++++++
 xen/common/memory.c        |  4 ++--
 xen/common/page_alloc.c    | 21 ++---------------
 xen/common/preempt.c       | 36 ++++++++++++++++++++++++++---
 xen/common/sched/core.c    | 46 +++++++++++++++++++++++---------------
 xen/common/sched/credit2.c |  5 +++--
 xen/common/sched/rt.c      | 10 +++++----
 xen/include/xen/preempt.h  | 17 +++++++++-----
 10 files changed, 109 insertions(+), 59 deletions(-)

-- 
2.29.2


^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2021-03-05  9:35 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <161405394665.5977.17427402181939884734@c667a6b167f6>
2021-02-23 20:29 ` [RFC PATCH 00/10] Preemption in hypervisor (ARM only) Stefano Stabellini
2021-02-24  0:19   ` Volodymyr Babchuk
2021-02-23  2:34 Volodymyr Babchuk
2021-02-23  9:02 ` Julien Grall
2021-02-23 12:06   ` Volodymyr Babchuk
2021-02-24 10:08     ` Julien Grall
2021-02-24 20:57       ` Volodymyr Babchuk
2021-02-24 22:31         ` Julien Grall
2021-02-24 23:58           ` Volodymyr Babchuk
2021-02-25  0:39             ` Andrew Cooper
2021-02-25 12:51               ` Volodymyr Babchuk
2021-03-05  9:31                 ` Volodymyr Babchuk
2021-02-24 18:07 ` Andrew Cooper
2021-02-24 23:37   ` Volodymyr Babchuk
2021-03-01 14:39     ` George Dunlap
