From: "Jan Beulich" <JBeulich@suse.com>
To: Juergen Gross <jgross@suse.com>
Cc: Tim Deegan <tim@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wei.liu2@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	Julien Grall <julien.grall@arm.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [PATCH RFC V2 42/45] xen/sched: add fall back to idle vcpu when scheduling item
Date: Fri, 17 May 2019 00:57:18 -0600	[thread overview]
Message-ID: <5CDE5B4E020000780022FEFC@prv1-mh.provo.novell.com> (raw)
In-Reply-To: <7bfa4c94-ccf1-0b3c-6f92-d4f87b591961@suse.com>

>>> On 17.05.19 at 07:13, <jgross@suse.com> wrote:
> On 16/05/2019 16:41, Jan Beulich wrote:
>>>>> On 16.05.19 at 15:51, <jgross@suse.com> wrote:
>>> On 16/05/2019 15:05, Jan Beulich wrote:
>>>>>>> On 06.05.19 at 08:56, <jgross@suse.com> wrote:
>>>>> --- a/xen/arch/x86/domain.c
>>>>> +++ b/xen/arch/x86/domain.c
>>>>> @@ -154,6 +154,24 @@ static void idle_loop(void)
>>>>>      }
>>>>>  }
>>>>>  
>>>>> +/*
>>>>> + * Idle loop for siblings of active schedule items.
>>>>> + * We don't do any standard idle work like tasklets, page scrubbing or
>>>>> + * livepatching.
>>>>> + * Use default_idle() in order to simulate v->is_urgent.
>>>>
>>>> I guess I'm missing a part of the description which explains all this:
>>>> What's wrong with doing scrubbing work, for example? Why is
>>>> doing tasklet work not okay, but softirqs are? What is the deal with
>>>> v->is_urgent, i.e. what justifies not entering a decent power
>>>> saving mode here on Intel, but doing so on AMD?
>>>
>>> One of the reasons for using core scheduling is to avoid running vcpus
>>> of different domains on the same core in order to minimize the chances
>>> for side-channel attacks on data of other domains. Not allowing
>>> scrubbing or tasklets here is meant to avoid accessing data of other
>>> domains.
>> 
>> So how is running softirqs okay then? And how is scrubbing accessing
>> other domains' data?
> 
> Right now I'm not sure whether it is a good idea to block any softirqs.
> We definitely need to process scheduling requests and I believe RCU and
> tasklets, too. The tlbflush one should be uncritical, so timers is the
> remaining one which might be questionable. This can be fine-tuned later
> IMO e.g. by defining a softirq mask of critical softirqs to block and
> eventually splitting up e.g. timer and tasklet softirqs into critical
> and uncritical ones.

Well, okay, but please add an abridged version of this to the patch
description then.
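
For illustration, the "mask of critical softirqs" you mention could end
up looking something like this (all names below are made up for the
sake of the example; only the softirq numbers themselves exist):

    /* Softirqs considered safe to service on a sibling of an active item. */
    #define SIBLING_IDLE_SOFTIRQ_MASK  ((1U << SCHEDULE_SOFTIRQ) | \
                                        (1U << RCU_SOFTIRQ)      | \
                                        (1U << TASKLET_SOFTIRQ))

    static void sibling_idle_softirqs(void)
    {
        /* process_pending_softirqs_mask() is a hypothetical helper. */
        process_pending_softirqs_mask(SIBLING_IDLE_SOFTIRQ_MASK);
    }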

> Scrubbing will probably pull the cache lines of the dirty pages into
> the L1 cache of the cpu. To me this sounds problematic. If we consider
> scrubbing fine because there is no associated risk, I'm happy to add
> it back in.

Well, of course there's going to be a brief period of time where
a cache line will be present in CPU internal buffers (it's not just the
cache after all, as we've learned with XSA-297). So I can certainly
buy that when using core granularity you don't want to scrub on
the other thread. But what about the socket granularity case?
Scrubbing on fully idle cores should still be fine, I would think.
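
Purely as a sketch of what I mean (sched_idle_mask is a made-up name
for "CPUs currently running the idle item"):

    /* Scrub only when every sibling of this CPU is idle as well. */
    static void sibling_idle_scrub(unsigned int cpu)
    {
        const cpumask_t *siblings = per_cpu(cpu_sibling_mask, cpu);

        if ( cpumask_subset(siblings, &sched_idle_mask) )
            scrub_free_pages();  /* whole core (or socket) is idle */
    }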

>>> Since with core scheduling we can be sure the other thread is active
>>> (otherwise we would schedule the idle item), hoping to save power
>>> by using mwait is moot.
>> 
>> Saving power may be indirect, by the CPU re-arranging
>> resource assignment between threads when one goes idle.
>> I have no idea whether they do this when entering C1, or
>> only when entering deeper C states.
> 
> SDM Vol. 3 chapter 8.10.1 "HLT instruction":
> 
> "Here shared resources that were being used by the halted logical
> processor become available to active logical processors, allowing them
> to execute at greater efficiency."

To be honest, this is too broad/generic a statement to fully trust,
judging from other areas of the SDM. And then, as per above, what
about the socket granularity case? Putting entirely idle cores to
sleep is surely worthwhile?

>> And anyway - I'm still none the wiser as to the v->is_urgent
>> relationship.
> 
> With v->is_urgent set, today's idle loop will drop into default_idle().
> I can remove this sentence if it is just confusing.

I'd prefer the connection to be made more obvious. One needs to go
from ->is_urgent via ->urgent_count to sched_has_urgent_vcpu() to
find where the described behavior really lives.
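
For reference, as I read it the end of that chain is roughly this
(paraphrasing, not verbatim from the tree; the per-CPU urgent_count is
maintained as urgent vcpus start/stop running):

    int sched_has_urgent_vcpu(void)
    {
        return atomic_read(&this_cpu(schedule_data).urgent_count);
    }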

What's worse though: This won't work as intended on AMD
at all. I don't think it's correct to fall back to default_idle() in
this case. Instead sched_has_urgent_vcpu() returning true
should amount to the same effect as max_cstate being set
to 1. There's
(a) no reason not to use MWAIT on Intel CPUs in this case,
if MWAIT can enter C1, and
(b) a strong need to use MWAIT on (at least) AMD Fam17,
or else it won't be C1 that gets entered.
I'll see about making a patch in due course.
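
Something along these lines is what I have in mind (untested sketch
only, effective_max_cstate() being a made-up helper):

    /* Treat an urgent vcpu like "max_cstate=1" instead of forcing
     * default_idle()/HLT. */
    static unsigned int effective_max_cstate(void)
    {
        return sched_has_urgent_vcpu() ? ACPI_STATE_C1 : max_cstate;
    }

That way MWAIT-based C1 stays available on Intel, and on AMD Fam17 it
really is C1 that gets entered rather than something deeper via HLT.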

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Thread overview: 189+ messages
2019-05-06  6:55 [PATCH RFC V2 00/45] xen: add core scheduling support Juergen Gross
2019-05-06  6:55 ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 01/45] xen/sched: add inline wrappers for calling per-scheduler functions Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  8:27   ` Jan Beulich
2019-05-06  8:27     ` [Xen-devel] " Jan Beulich
     [not found]   ` <5CCFF004020000780022C0D4@suse.com>
2019-05-06  8:34     ` Juergen Gross
2019-05-06  8:34       ` [Xen-devel] " Juergen Gross
2019-05-06  8:58       ` Jan Beulich
2019-05-06  8:58         ` [Xen-devel] " Jan Beulich
2019-05-06 15:26   ` Dario Faggioli
2019-05-06 15:26     ` [Xen-devel] " Dario Faggioli
2019-05-08 16:24   ` George Dunlap
2019-05-08 16:24     ` [Xen-devel] " George Dunlap
2019-05-09  5:32     ` Juergen Gross
2019-05-09  5:32       ` [Xen-devel] " Juergen Gross
2019-05-09 10:04       ` George Dunlap
2019-05-09 10:04         ` [Xen-devel] " George Dunlap
2019-05-09 10:56         ` Juergen Gross
2019-05-09 10:56           ` [Xen-devel] " Juergen Gross
2019-05-09 11:50           ` Jan Beulich
2019-05-09 11:50             ` [Xen-devel] " Jan Beulich
2019-05-09 12:03             ` Juergen Gross
2019-05-09 12:03               ` [Xen-devel] " Juergen Gross
2019-05-09 12:31               ` Jan Beulich
2019-05-09 12:31                 ` [Xen-devel] " Jan Beulich
     [not found]               ` <5CD41D9C020000780022D259@suse.com>
2019-05-09 12:44                 ` Juergen Gross
2019-05-09 12:44                   ` [Xen-devel] " Juergen Gross
2019-05-09 13:22                   ` Jan Beulich
2019-05-09 13:22                     ` [Xen-devel] " Jan Beulich
2019-05-09 12:27           ` Dario Faggioli
2019-05-09 12:27             ` [Xen-devel] " Dario Faggioli
2019-05-06  6:56 ` [PATCH RFC V2 02/45] xen/sched: use new sched_item instead of vcpu in scheduler interfaces Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-08 16:35   ` George Dunlap
2019-05-08 16:35     ` [Xen-devel] " George Dunlap
2019-05-08 17:27     ` Dario Faggioli
2019-05-08 17:27       ` [Xen-devel] " Dario Faggioli
2019-05-09  5:36     ` Juergen Gross
2019-05-09  5:36       ` [Xen-devel] " Juergen Gross
2019-05-09  5:56       ` Dario Faggioli
2019-05-09  5:56         ` [Xen-devel] " Dario Faggioli
2019-05-09  5:59         ` Juergen Gross
2019-05-09  5:59           ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 03/45] xen/sched: alloc struct sched_item for each vcpu Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 04/45] xen/sched: move per-vcpu scheduler private data pointer to sched_item Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 05/45] xen/sched: build a linked list of struct sched_item Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 06/45] xen/sched: introduce struct sched_resource Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 07/45] xen/sched: let pick_cpu return a scheduler resource Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 08/45] xen/sched: switch schedule_data.curr to point at sched_item Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 09/45] xen/sched: move per cpu scheduler private data into struct sched_resource Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 10/45] xen/sched: switch vcpu_schedule_lock to item_schedule_lock Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 11/45] xen/sched: move some per-vcpu items to struct sched_item Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 12/45] xen/sched: add scheduler helpers hiding vcpu Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 13/45] xen/sched: add domain pointer to struct sched_item Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 14/45] xen/sched: add id " Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 15/45] xen/sched: rename scheduler related perf counters Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 16/45] xen/sched: switch struct task_slice from vcpu to sched_item Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 17/45] xen/sched: add is_running indicator to struct sched_item Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 18/45] xen/sched: make null scheduler vcpu agnostic Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 19/45] xen/sched: make rt " Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 20/45] xen/sched: make credit " Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 21/45] xen/sched: make credit2 " Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 22/45] xen/sched: make arinc653 " Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 23/45] xen: add sched_item_pause_nosync() and sched_item_unpause() Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 24/45] xen: let vcpu_create() select processor Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-16 12:20   ` Jan Beulich
2019-05-16 12:20     ` [Xen-devel] " Jan Beulich
     [not found]   ` <5CDD557D020000780022FA32@suse.com>
2019-05-16 12:46     ` Juergen Gross
2019-05-16 12:46       ` [Xen-devel] " Juergen Gross
2019-05-16 13:10       ` Jan Beulich
2019-05-16 13:10         ` [Xen-devel] " Jan Beulich
2019-05-06  6:56 ` [PATCH RFC V2 25/45] xen/sched: use sched_resource cpu instead smp_processor_id in schedulers Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 26/45] xen/sched: switch schedule() from vcpus to sched_items Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 27/45] xen/sched: switch sched_move_irqs() to take sched_item as parameter Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 28/45] xen: switch from for_each_vcpu() to for_each_sched_item() Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 29/45] xen/sched: add runstate counters to struct sched_item Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 30/45] xen/sched: rework and rename vcpu_force_reschedule() Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  8:37   ` Jan Beulich
2019-05-06  8:37     ` [Xen-devel] " Jan Beulich
     [not found]   ` <5CCFF238020000780022C0F9@suse.com>
2019-05-06  8:51     ` Juergen Gross
2019-05-06  8:51       ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 31/45] xen/sched: Change vcpu_migrate_*() to operate on schedule item Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 32/45] xen/sched: move struct task_slice into struct sched_item Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 33/45] xen/sched: add code to sync scheduling of all vcpus of a sched item Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 34/45] xen/sched: introduce item_runnable_state() Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 35/45] xen/sched: add support for multiple vcpus per sched item where missing Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 36/45] x86: make loading of GDT at context switch more modular Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-16 12:30   ` Jan Beulich
2019-05-16 12:30     ` [Xen-devel] " Jan Beulich
     [not found]   ` <5CDD57DB020000780022FA5E@suse.com>
2019-05-16 12:52     ` Juergen Gross
2019-05-16 12:52       ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 37/45] x86: optimize loading of GDT at context switch Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-16 12:42   ` Jan Beulich
2019-05-16 12:42     ` [Xen-devel] " Jan Beulich
     [not found]   ` <5CDD5AC2020000780022FA6A@suse.com>
2019-05-16 13:10     ` Juergen Gross
2019-05-16 13:10       ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 38/45] xen/sched: modify cpupool_domain_cpumask() to be an item mask Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 39/45] xen/sched: support allocating multiple vcpus into one sched item Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 40/45] xen/sched: add a scheduler_percpu_init() function Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 41/45] xen/sched: add a percpu resource index Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 42/45] xen/sched: add fall back to idle vcpu when scheduling item Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-16 13:05   ` Jan Beulich
2019-05-16 13:05     ` [Xen-devel] " Jan Beulich
     [not found]   ` <5CDD6005020000780022FA9A@suse.com>
2019-05-16 13:51     ` Juergen Gross
2019-05-16 13:51       ` [Xen-devel] " Juergen Gross
2019-05-16 14:41       ` Jan Beulich
2019-05-16 14:41         ` [Xen-devel] " Jan Beulich
     [not found]       ` <5CDD7693020000780022FC59@suse.com>
2019-05-17  5:13         ` Juergen Gross
2019-05-17  5:13           ` [Xen-devel] " Juergen Gross
2019-05-17  6:57           ` Jan Beulich [this message]
2019-05-17  6:57             ` Jan Beulich
     [not found]           ` <5CDE5B4E020000780022FEFC@suse.com>
2019-05-17  7:48             ` Juergen Gross
2019-05-17  7:48               ` [Xen-devel] " Juergen Gross
2019-05-17  8:22               ` Jan Beulich
2019-05-17  8:22                 ` [Xen-devel] " Jan Beulich
2019-05-06  6:56 ` [PATCH RFC V2 43/45] xen/sched: make vcpu_wake() and vcpu_sleep() core scheduling aware Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 44/45] xen/sched: carve out freeing sched_item memory into dedicated function Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 45/45] xen/sched: add scheduling granularity enum Juergen Gross
2019-05-06  6:56   ` [Xen-devel] " Juergen Gross
2019-05-06  8:57   ` Jan Beulich
2019-05-06  8:57     ` [Xen-devel] " Jan Beulich
     [not found]   ` <5CCFF6F1020000780022C12B@suse.com>
     [not found]     ` <ac57c420*a72e*7570*db8f*27e4693c2755@suse.com>
2019-05-06  9:23     ` Juergen Gross
2019-05-06  9:23       ` [Xen-devel] " Juergen Gross
2019-05-06 10:01       ` Jan Beulich
2019-05-06 10:01         ` [Xen-devel] " Jan Beulich
2019-05-08 14:36         ` Juergen Gross
2019-05-08 14:36           ` [Xen-devel] " Juergen Gross
2019-05-10  8:53           ` Jan Beulich
2019-05-10  8:53             ` [Xen-devel] " Jan Beulich
     [not found]           ` <5CD53C1C020000780022D706@suse.com>
2019-05-10  9:00             ` Juergen Gross
2019-05-10  9:00               ` [Xen-devel] " Juergen Gross
2019-05-10 10:29               ` Dario Faggioli
2019-05-10 10:29                 ` [Xen-devel] " Dario Faggioli
2019-05-10 11:17               ` Jan Beulich
2019-05-10 11:17                 ` [Xen-devel] " Jan Beulich
     [not found]       ` <5CD005E7020000780022C1B5@suse.com>
2019-05-06 10:20         ` Juergen Gross
2019-05-06 10:20           ` [Xen-devel] " Juergen Gross
2019-05-06 11:58           ` Jan Beulich
2019-05-06 11:58             ` [Xen-devel] " Jan Beulich
     [not found]           ` <5CD02161020000780022C257@suse.com>
2019-05-06 12:23             ` Juergen Gross
2019-05-06 12:23               ` [Xen-devel] " Juergen Gross
2019-05-06 13:14               ` Jan Beulich
2019-05-06 13:14                 ` [Xen-devel] " Jan Beulich
2019-05-06  7:10 ` [PATCH RFC V2 00/45] xen: add core scheduling support Juergen Gross
2019-05-06  7:10   ` [Xen-devel] " Juergen Gross
  -- strict thread matches above, loose matches on Subject: below --
2019-05-17  8:57 [PATCH RFC V2 42/45] xen/sched: add fall back to idle vcpu when scheduling item Juergen Gross
