* Hypercall continuation and wait_event
@ 2012-04-09 17:51 Ruslan Nikolaev
  2012-04-09 18:54 ` Keir Fraser
  0 siblings, 1 reply; 10+ messages in thread
From: Ruslan Nikolaev @ 2012-04-09 17:51 UTC (permalink / raw)
  To: xen-devel

Hi

I am curious how to properly support hypercall continuation together with wait_event(). I have a dedicated VCPU in a domain that makes a special hypercall, and the hypercall waits for a certain event to arrive. I am using the wait queues available in Xen, so wait_event() will be invoked in the hypercall once it is ready to accept events. However, my understanding is that even though I have a dedicated VCPU for this hypercall, I may still need to support hypercall continuation properly. (Is this the case?) So my question is: how exactly does the need for hypercall preemption affect the wait_event() and wait() operations, and where would I need to call hypercall_preempt_check()?
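
Roughly, the hypervisor side I have in mind looks like this (just a
simplified sketch built on the waitqueue API in xen/common/wait.c; the
my_* names are made up):

#include <xen/wait.h>

static struct waitqueue_head my_wq;   /* init_waitqueue_head() at setup time */
static bool_t my_event_pending;

long do_my_wait_op(XEN_GUEST_HANDLE(void) arg)
{
    /* Block this vcpu until some other path sets my_event_pending
     * and calls wake_up_all(&my_wq). */
    wait_event(my_wq, my_event_pending);

    my_event_pending = 0;
    /* ... copy the delivered event back to the guest via arg ... */
    return 0;
}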

Thank you!
Ruslan


* Re: Hypercall continuation and wait_event
  2012-04-09 17:51 Hypercall continuation and wait_event Ruslan Nikolaev
@ 2012-04-09 18:54 ` Keir Fraser
  2012-04-09 19:18   ` Ruslan Nikolaev
  0 siblings, 1 reply; 10+ messages in thread
From: Keir Fraser @ 2012-04-09 18:54 UTC (permalink / raw)
  To: Ruslan Nikolaev, xen-devel

On 09/04/2012 18:51, "Ruslan Nikolaev" <nruslan_devel@yahoo.com> wrote:

> Hi
> 
> I am curious how I can properly support hypercall continuation and wait_event.
> I have a dedicated VCPU in a domain which makes a special hypercall, and the
> hypercall waits for certain event to arrive. I am using queues available in
> Xen, so wait_event will be invoked in the hypercall once its ready to accept
> events. However, my understanding that even though I have a dedicated VCPU for
> this hypercall, I still may need to support hypercall continuation properly.
> (Is this the case?) So, my question is how exactly the need for hypercall

No, it's not the case; the old hypercall_create_continuation() mechanism
does not need to be used with wait_event().
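
For reference, the old preemption pattern (which you can skip entirely
when you simply block in wait_event()) looks roughly like this; do_my_op,
__HYPERVISOR_my_op and process_one() are placeholders:

static long do_my_op(XEN_GUEST_HANDLE(void) arg, unsigned long start,
                     unsigned long nr_ops)
{
    unsigned long i;
    long rc = 0;

    for ( i = start; i < nr_ops; i++ )
    {
        if ( hypercall_preempt_check() )
        {
            /* Arrange to re-enter this hypercall later, starting at i. */
            rc = hypercall_create_continuation(__HYPERVISOR_my_op,
                                               "hll", arg, i, nr_ops);
            break;
        }
        rc = process_one(arg, i);    /* placeholder per-item work */
        if ( rc )
            break;
    }

    return rc;
}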

 -- Keir

> preemption may affect wait_event() and wait() operations, and where would I
> need to do hypercall_preempt_check()?
> 
> Thank you!
> Ruslan
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


* Re: Hypercall continuation and wait_event
  2012-04-09 18:54 ` Keir Fraser
@ 2012-04-09 19:18   ` Ruslan Nikolaev
  2012-04-09 20:09     ` Keir Fraser
  0 siblings, 1 reply; 10+ messages in thread
From: Ruslan Nikolaev @ 2012-04-09 19:18 UTC (permalink / raw)
  To: xen-devel

Thanks for the reply. 

Since it can take arbitrarily long for an event to arrive (e.g., it is coming from a different guest on a user request), how do I need to handle this case? Does it mean that I only need to make sure that nothing else gets scheduled on this VCPU in the guest?
Also, it is not exactly clear to me how wait_event() avoids the need for hypercall continuation. What about local_events_need_delivery() and softirq_pending()? Are they handled by wait_event() internally?

Ruslan






----- Original Message -----
From: Keir Fraser <keir.xen@gmail.com>
To: Ruslan Nikolaev <nruslan_devel@yahoo.com>; "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Cc: 
Sent: Monday, April 9, 2012 6:54 PM
Subject: Re: [Xen-devel] Hypercall continuation and wait_event

On 09/04/2012 18:51, "Ruslan Nikolaev" <nruslan_devel@yahoo.com> wrote:

> Hi
> 
> I am curious how I can properly support hypercall continuation and wait_event.
> I have a dedicated VCPU in a domain which makes a special hypercall, and the
> hypercall waits for certain event to arrive. I am using queues available in
> Xen, so wait_event will be invoked in the hypercall once its ready to accept
> events. However, my understanding that even though I have a dedicated VCPU for
> this hypercall, I still may need to support hypercall continuation properly.
> (Is this the case?) So, my question is how exactly the need for hypercall

No it's not the case, the old hypercall_create_continuation() mechanism does
not need to be used with wait_event().

-- Keir

> preemption may affect wait_event() and wait() operations, and where would I
> need to do hypercall_preempt_check()?
> 
> Thank you!
> Ruslan
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


* Re: Hypercall continuation and wait_event
  2012-04-09 19:18   ` Ruslan Nikolaev
@ 2012-04-09 20:09     ` Keir Fraser
  2012-04-09 20:16       ` Ruslan Nikolaev
  0 siblings, 1 reply; 10+ messages in thread
From: Keir Fraser @ 2012-04-09 20:09 UTC (permalink / raw)
  To: Ruslan Nikolaev, xen-devel

On 09/04/2012 20:18, "Ruslan Nikolaev" <nruslan_devel@yahoo.com> wrote:

> Thanks for the reply.
> 
> Since it can take arbitrarily long for an event to arrive (e.g., it is coming
> from a different guest on a user request), how do I need to handle this
> case?Does it mean that I only need to make sure that nothings get scheduled on
> this VCPU in the guest?

Nothing else *can* get scheduled on this VCPU in the guest. The VCPU will
sleep within wait_event within the hypercall context. Hence you must not
hold any hypervisor spinlocks either, for example.
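
E.g., roughly (sketch only; my_lock, my_wq and event_ready() are made-up
names):

/* WRONG: the lock stays held while the vcpu sleeps inside wait_event(). */
spin_lock(&d->my_lock);
wait_event(my_wq, event_ready(d));
spin_unlock(&d->my_lock);

/* Instead, wait first with no locks held, then take the lock only to
 * consume the event. */
wait_event(my_wq, event_ready(d));
spin_lock(&d->my_lock);
/* ... consume the event ... */
spin_unlock(&d->my_lock);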

> Also, it is not exactly clear to me how wait_event avoids the need for
> hypercall continuation. What about local_events_need_delivery() or
> softirq_pending()? Are they going to be handled by wait_event internally?

Your VCPU gets descheduled. Hence softirq_pending() is not your concern for
the duration that you're descheduled. And if local_events_need_delivery(),
that's too bad; those events have to wait for the vcpu to wake up on the event.

 -- Keir

> Ruslan
> 
> 
> 
> 
> 
> 
> ----- Original Message -----
> From: Keir Fraser <keir.xen@gmail.com>
> To: Ruslan Nikolaev <nruslan_devel@yahoo.com>; "xen-devel@lists.xen.org"
> <xen-devel@lists.xen.org>
> Cc: 
> Sent: Monday, April 9, 2012 6:54 PM
> Subject: Re: [Xen-devel] Hypercall continuation and wait_event
> 
> On 09/04/2012 18:51, "Ruslan Nikolaev" <nruslan_devel@yahoo.com> wrote:
> 
>> Hi
>> 
>> I am curious how I can properly support hypercall continuation and
>> wait_event.
>> I have a dedicated VCPU in a domain which makes a special hypercall, and the
>> hypercall waits for certain event to arrive. I am using queues available in
>> Xen, so wait_event will be invoked in the hypercall once its ready to accept
>> events. However, my understanding that even though I have a dedicated VCPU
>> for
>> this hypercall, I still may need to support hypercall continuation properly.
>> (Is this the case?) So, my question is how exactly the need for hypercall
> 
> No it's not the case, the old hypercall_create_continuation() mechanism does
> not need to be used with wait_event().
> 
> -- Keir
> 
>> preemption may affect wait_event() and wait() operations, and where would I
>> need to do hypercall_preempt_check()?
>> 
>> Thank you!
>> Ruslan
>> 
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


* Re: Hypercall continuation and wait_event
  2012-04-09 20:09     ` Keir Fraser
@ 2012-04-09 20:16       ` Ruslan Nikolaev
  2012-04-09 20:58         ` Keir Fraser
  0 siblings, 1 reply; 10+ messages in thread
From: Ruslan Nikolaev @ 2012-04-09 20:16 UTC (permalink / raw)
  To: xen-devel

Keir,

Thanks for your replies! Just one more question about local_events_need_delivery(): under what (common) conditions would I expect to have local events that need delivery?

Ruslan



----- Original Message -----
From: Keir Fraser <keir.xen@gmail.com>
To: Ruslan Nikolaev <nruslan_devel@yahoo.com>; "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Cc: 
Sent: Monday, April 9, 2012 8:09 PM
Subject: Re: [Xen-devel] Hypercall continuation and wait_event

On 09/04/2012 20:18, "Ruslan Nikolaev" <nruslan_devel@yahoo.com> wrote:

> Thanks for the reply.
> 
> Since it can take arbitrarily long for an event to arrive (e.g., it is coming
> from a different guest on a user request), how do I need to handle this
> case?Does it mean that I only need to make sure that nothings get scheduled on
> this VCPU in the guest?

Nothing else *can* get scheduled on this VCPU in the guest. The VCPU will
sleep within wait_event within the hypercall context. Hence you must not
hold any hypervisor spinlocks either, for example.

> Also, it is not exactly clear to me how wait_event avoids the need for
> hypercall continuation. What about local_events_need_delivery() or
> softirq_pending()? Are they going to be handled by wait_event internally?

Your VCPU gets descheduled. Hence softirq_pending() is not your concern for
the duration that you're descheduled. And if local_event_need_delivery(),
that's too bad, they have to wait for the vcpu to wake up on the event.

-- Keir

> Ruslan
> 
> 
> 
> 
> 
> 
> ----- Original Message -----
> From: Keir Fraser <keir.xen@gmail.com>
> To: Ruslan Nikolaev <nruslan_devel@yahoo.com>; "xen-devel@lists.xen.org"
> <xen-devel@lists.xen.org>
> Cc: 
> Sent: Monday, April 9, 2012 6:54 PM
> Subject: Re: [Xen-devel] Hypercall continuation and wait_event
> 
> On 09/04/2012 18:51, "Ruslan Nikolaev" <nruslan_devel@yahoo.com> wrote:
> 
>> Hi
>> 
>> I am curious how I can properly support hypercall continuation and
>> wait_event.
>> I have a dedicated VCPU in a domain which makes a special hypercall, and the
>> hypercall waits for certain event to arrive. I am using queues available in
>> Xen, so wait_event will be invoked in the hypercall once its ready to accept
>> events. However, my understanding that even though I have a dedicated VCPU
>> for
>> this hypercall, I still may need to support hypercall continuation properly.
>> (Is this the case?) So, my question is how exactly the need for hypercall
> 
> No it's not the case, the old hypercall_create_continuation() mechanism does
> not need to be used with wait_event().
> 
> -- Keir
> 
>> preemption may affect wait_event() and wait() operations, and where would I
>> need to do hypercall_preempt_check()?
>> 
>> Thank you!
>> Ruslan
>> 
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* Re: Hypercall continuation and wait_event
  2012-04-09 20:16       ` Ruslan Nikolaev
@ 2012-04-09 20:58         ` Keir Fraser
  2012-04-09 21:19           ` Ruslan Nikolaev
  0 siblings, 1 reply; 10+ messages in thread
From: Keir Fraser @ 2012-04-09 20:58 UTC (permalink / raw)
  To: Ruslan Nikolaev, xen-devel

It means the vcpu has an interrupt pending (in the pv case, that means an
event channel has a pending event).
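
For the PV case that check boils down to roughly this (from memory; see
the per-arch event.h in the hypervisor for the real thing):

static inline int local_events_need_delivery(void)
{
    struct vcpu *v = current;

    /* An event channel upcall is pending and the guest has not masked
     * upcalls on this vcpu. */
    return vcpu_info(v, evtchn_upcall_pending) &&
           !vcpu_info(v, evtchn_upcall_mask);
}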


On 09/04/2012 21:16, "Ruslan Nikolaev" <nruslan_devel@yahoo.com> wrote:

> Keir,
> 
> Thanks for your replies! Just one more question about
> local_event_need_delivery(). Under what (common) conditions I would expect to
> have local events that need delivery?
> 
> Ruslan
> 
> 
> 
> ----- Original Message -----
> From: Keir Fraser <keir.xen@gmail.com>
> To: Ruslan Nikolaev <nruslan_devel@yahoo.com>; "xen-devel@lists.xen.org"
> <xen-devel@lists.xen.org>
> Cc: 
> Sent: Monday, April 9, 2012 8:09 PM
> Subject: Re: [Xen-devel] Hypercall continuation and wait_event
> 
> On 09/04/2012 20:18, "Ruslan Nikolaev" <nruslan_devel@yahoo.com> wrote:
> 
>> Thanks for the reply.
>> 
>> Since it can take arbitrarily long for an event to arrive (e.g., it is coming
>> from a different guest on a user request), how do I need to handle this
>> case?Does it mean that I only need to make sure that nothings get scheduled
>> on
>> this VCPU in the guest?
> 
> Nothing else *can* get scheduled on this VCPU in the guest. The VCPU will
> sleep within wait_event within the hypercall context. Hence you must not
> hold any hypervisor spinlocks either, for example.
> 
>> Also, it is not exactly clear to me how wait_event avoids the need for
>> hypercall continuation. What about local_events_need_delivery() or
>> softirq_pending()? Are they going to be handled by wait_event internally?
> 
> Your VCPU gets descheduled. Hence softirq_pending() is not your concern for
> the duration that you're descheduled. And if local_event_need_delivery(),
> that's too bad, they have to wait for the vcpu to wake up on the event.
> 
> -- Keir
> 
>> Ruslan
>> 
>> 
>> 
>> 
>> 
>> 
>> ----- Original Message -----
>> From: Keir Fraser <keir.xen@gmail.com>
>> To: Ruslan Nikolaev <nruslan_devel@yahoo.com>; "xen-devel@lists.xen.org"
>> <xen-devel@lists.xen.org>
>> Cc: 
>> Sent: Monday, April 9, 2012 6:54 PM
>> Subject: Re: [Xen-devel] Hypercall continuation and wait_event
>> 
>> On 09/04/2012 18:51, "Ruslan Nikolaev" <nruslan_devel@yahoo.com> wrote:
>> 
>>> Hi
>>> 
>>> I am curious how I can properly support hypercall continuation and
>>> wait_event.
>>> I have a dedicated VCPU in a domain which makes a special hypercall, and the
>>> hypercall waits for certain event to arrive. I am using queues available in
>>> Xen, so wait_event will be invoked in the hypercall once its ready to accept
>>> events. However, my understanding that even though I have a dedicated VCPU
>>> for
>>> this hypercall, I still may need to support hypercall continuation properly.
>>> (Is this the case?) So, my question is how exactly the need for hypercall
>> 
>> No it's not the case, the old hypercall_create_continuation() mechanism does
>> not need to be used with wait_event().
>> 
>> -- Keir
>> 
>>> preemption may affect wait_event() and wait() operations, and where would I
>>> need to do hypercall_preempt_check()?
>>> 
>>> Thank you!
>>> Ruslan
>>> 
>>> 
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@lists.xen.org
>>> http://lists.xen.org/xen-devel
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


* Re: Hypercall continuation and wait_event
  2012-04-09 20:58         ` Keir Fraser
@ 2012-04-09 21:19           ` Ruslan Nikolaev
  2012-04-10  7:37             ` Keir Fraser
  0 siblings, 1 reply; 10+ messages in thread
From: Ruslan Nikolaev @ 2012-04-09 21:19 UTC (permalink / raw)
  To: xen-devel

Keir,

Thanks again! When I use the scheme I described, I periodically receive kernel errors as shown below. Note that I use an HVM domain together with the 'isolcpus' Linux kernel option to keep the dedicated VCPU out of normal scheduling. The hypercall is made from a special kernel thread (which is bound to the dedicated VCPU before the call).

What could be the reason for these messages? It looks like something related to a timer.
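
For reference, the guest side looks roughly like this (simplified
sketch; DEDICATED_CPU and HYPERVISOR_my_wait_op() are made-up names for
my vcpu number and my hypercall wrapper):

#include <linux/init.h>
#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/sched.h>

#define DEDICATED_CPU 2   /* the vcpu excluded via isolcpus= */

static struct task_struct *my_task;

static int my_wait_thread(void *unused)
{
    while (!kthread_should_stop()) {
        /* Hypothetical hypercall that blocks in Xen via wait_event(). */
        long rc = HYPERVISOR_my_wait_op(NULL);

        if (rc < 0)
            break;
        /* ... hand the delivered event off for processing ... */
    }
    return 0;
}

static int __init my_init(void)
{
    my_task = kthread_create(my_wait_thread, NULL, "my_wait_thread");
    if (IS_ERR(my_task))
        return PTR_ERR(my_task);
    kthread_bind(my_task, DEDICATED_CPU);   /* bind before it first runs */
    wake_up_process(my_task);
    return 0;
}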


[ 1039.319957] RIP: 0010:[<ffffffff8101ba09>]  [<ffffffff8101ba09>] default_send_IPI_mask_sequence_phys+0x95/0xce
[ 1039.319957] RSP: 0018:ffff88007f043c28  EFLAGS: 00000046
[ 1039.319957] RAX: 0000000000000400 RBX: 0000000000000096 RCX: 0000000000000020
[ 1039.319957] RDX: 0000000000000002 RSI: 0000000000000020 RDI: 0000000000000300
[ 1039.319957] RBP: ffff88007f043c68 R08: 0000000000000000 R09: ffffffff8163eb20
[ 1039.319957] R10: ffff8800ff043bad R11: 0000000000000000 R12: 000000000000d602
[ 1039.319957] R13: 0000000000000002 R14: 0000000000000400 R15: ffffffff8163eb20
[ 1039.319957] FS:  0000000000000000(0000) GS:ffff88007f040000(0000) knlGS:0000000000000000
[ 1039.319957] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 1039.319957] CR2: 00007f74195d29be CR3: 000000007af4d000 CR4: 00000000000006a0
[ 1039.319957] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 1039.319957] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 1039.319957] Process swapper/2 (pid: 0, threadinfo ffff88007c4ec000, task ffff88007c4f1650)
[ 1039.319957] Stack:
[ 1039.319957]  0000000000000002 0000000400000008 ffff88007f043c88 0000000000002710
[ 1039.319957]  ffffffff8161a280 ffffffff8161a340 0000000000000001 ffffffff8161a4c0
[ 1039.319957]  ffff88007f043c78 ffffffff8101ecc6 ffff88007f043c98 ffffffff8101bb81
[ 1039.319957] Call Trace:
[ 1039.319957]  <IRQ>
[ 1039.319957]  [<ffffffff8101ecc6>] physflat_send_IPI_all+0x12/0x14
[ 1039.319957]  [<ffffffff8101bb81>] arch_trigger_all_cpu_backtrace+0x4b/0x6e
[ 1039.319957]  [<ffffffff8107a25a>] __rcu_pending+0x224/0x347
[ 1039.319957]  [<ffffffff8107aa13>] rcu_check_callbacks+0xa2/0xb4
[ 1039.319957]  [<ffffffff810469fd>] update_process_times+0x3a/0x70
[ 1039.319957]  [<ffffffff8105f815>] tick_sched_timer+0x70/0x9a
[ 1039.319957]  [<ffffffff810557c0>] __run_hrtimer.isra.26+0x75/0xce
[ 1039.319957]  [<ffffffff81055ded>] hrtimer_interrupt+0xd7/0x193
[ 1039.319957]  [<ffffffff81005f0a>] xen_timer_interrupt+0x2f/0x155
[ 1039.319957]  [<ffffffff81021945>] ? pvclock_clocksource_read+0x48/0xb4
[ 1039.319957]  [<ffffffff81021945>] ? pvclock_clocksource_read+0x48/0xb4
[ 1039.319957]  [<ffffffff81021945>] ? pvclock_clocksource_read+0x48/0xb4
[ 1039.319957]  [<ffffffff8107542d>] handle_irq_event_percpu+0x29/0x126
[ 1039.319957]  [<ffffffff8119064a>] ? info_for_irq+0x9/0x19
[ 1039.319957]  [<ffffffff81077b70>] handle_percpu_irq+0x39/0x4d
[ 1039.319957]  [<ffffffff81190510>] __xen_evtchn_do_upcall+0x147/0x1df
[ 1039.319957]  [<ffffffff81191eae>] xen_evtchn_do_upcall+0x27/0x39
[ 1039.319957]  [<ffffffff812987ee>] xen_hvm_callback_vector+0x6e/0x80
[ 1039.319957]  <EOI>
[ 1039.319957]  [<ffffffff8107ab83>] ? rcu_needs_cpu+0x110/0x1c1
[ 1039.319957]  [<ffffffff81020ff0>] ? native_safe_halt+0x6/0x8
[ 1039.319957]  [<ffffffff8100e8bf>] default_idle+0x27/0x44
[ 1039.319957]  [<ffffffff81007704>] cpu_idle+0x66/0xa4
[ 1039.319957]  [<ffffffff81286605>] start_secondary+0x1ac/0x1b1



Thanks,
Ruslan


----- Original Message -----
From: Keir Fraser <keir.xen@gmail.com>
To: Ruslan Nikolaev <nruslan_devel@yahoo.com>; "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Cc: 
Sent: Monday, April 9, 2012 8:58 PM
Subject: Re: [Xen-devel] Hypercall continuation and wait_event

It means the vcpu has an interrupt pending (in the pv case, that means an
event channel has a pending event).


On 09/04/2012 21:16, "Ruslan Nikolaev" <nruslan_devel@yahoo.com> wrote:

> Keir,
> 
> Thanks for your replies! Just one more question about
> local_event_need_delivery(). Under what (common) conditions I would expect to
> have local events that need delivery?
> 
> Ruslan
> 
> 
> 
> ----- Original Message -----
> From: Keir Fraser <keir.xen@gmail.com>
> To: Ruslan Nikolaev <nruslan_devel@yahoo.com>; "xen-devel@lists.xen.org"
> <xen-devel@lists.xen.org>
> Cc: 
> Sent: Monday, April 9, 2012 8:09 PM
> Subject: Re: [Xen-devel] Hypercall continuation and wait_event
> 
> On 09/04/2012 20:18, "Ruslan Nikolaev" <nruslan_devel@yahoo.com> wrote:
> 
>> Thanks for the reply.
>> 
>> Since it can take arbitrarily long for an event to arrive (e.g., it is coming
>> from a different guest on a user request), how do I need to handle this
>> case?Does it mean that I only need to make sure that nothings get scheduled
>> on
>> this VCPU in the guest?
> 
> Nothing else *can* get scheduled on this VCPU in the guest. The VCPU will
> sleep within wait_event within the hypercall context. Hence you must not
> hold any hypervisor spinlocks either, for example.
> 
>> Also, it is not exactly clear to me how wait_event avoids the need for
>> hypercall continuation. What about local_events_need_delivery() or
>> softirq_pending()? Are they going to be handled by wait_event internally?
> 
> Your VCPU gets descheduled. Hence softirq_pending() is not your concern for
> the duration that you're descheduled. And if local_event_need_delivery(),
> that's too bad, they have to wait for the vcpu to wake up on the event.
> 
> -- Keir
> 
>> Ruslan
>> 
>> 
>> 
>> 
>> 
>> 
>> ----- Original Message -----
>> From: Keir Fraser <keir.xen@gmail.com>
>> To: Ruslan Nikolaev <nruslan_devel@yahoo.com>; "xen-devel@lists.xen.org"
>> <xen-devel@lists.xen.org>
>> Cc: 
>> Sent: Monday, April 9, 2012 6:54 PM
>> Subject: Re: [Xen-devel] Hypercall continuation and wait_event
>> 
>> On 09/04/2012 18:51, "Ruslan Nikolaev" <nruslan_devel@yahoo.com> wrote:
>> 
>>> Hi
>>> 
>>> I am curious how I can properly support hypercall continuation and
>>> wait_event.
>>> I have a dedicated VCPU in a domain which makes a special hypercall, and the
>>> hypercall waits for certain event to arrive. I am using queues available in
>>> Xen, so wait_event will be invoked in the hypercall once its ready to accept
>>> events. However, my understanding that even though I have a dedicated VCPU
>>> for
>>> this hypercall, I still may need to support hypercall continuation properly.
>>> (Is this the case?) So, my question is how exactly the need for hypercall
>> 
>> No it's not the case, the old hypercall_create_continuation() mechanism does
>> not need to be used with wait_event().
>> 
>> -- Keir
>> 
>>> preemption may affect wait_event() and wait() operations, and where would I
>>> need to do hypercall_preempt_check()?
>>> 
>>> Thank you!
>>> Ruslan
>>> 
>>> 
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@lists.xen.org
>>> http://lists.xen.org/xen-devel
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* Re: Hypercall continuation and wait_event
  2012-04-09 21:19           ` Ruslan Nikolaev
@ 2012-04-10  7:37             ` Keir Fraser
  2012-04-12 21:04               ` Ruslan Nikolaev
  0 siblings, 1 reply; 10+ messages in thread
From: Keir Fraser @ 2012-04-10  7:37 UTC (permalink / raw)
  To: Ruslan Nikolaev, xen-devel

Not sure. Did you snip some lines from the call trace that might explain why
the call trace is being generated (e.g., watchdog timeout, page fault, ...)?
From the lines you provide, we can't even tell which vcpu it is that is
dumping the call trace.

 -- Keir

On 09/04/2012 22:19, "Ruslan Nikolaev" <nruslan_devel@yahoo.com> wrote:

> Keir,
> 
> Thanks again! When I used the scheme I have described, I periodically receive
> kernel errors as shown below. Notice that I use HVM domain and also 'isolcpus'
> as a Linux kernel option to prevent a dedicated VCPU from being normally used.
> A hypercall is being made from a special kernel thread (which is bind to the
> dedicated VCPU before the call).
> 
> What could be the reason of these messages? Looks like it is something related
> to a timer.
> 
> 
> [ 1039.319957] RIP: 0010:[<ffffffff8101ba09>]  [<ffffffff8101ba09>]
> default_send_IPI_mask_sequence_phys+0x95/0xce
> [ 1039.319957] RSP: 0018:ffff88007f043c28  EFLAGS: 00000046
> [ 1039.319957] RAX: 0000000000000400 RBX: 0000000000000096 RCX:
> 0000000000000020
> [ 1039.319957] RDX: 0000000000000002 RSI: 0000000000000020 RDI:
> 0000000000000300
> [ 1039.319957] RBP: ffff88007f043c68 R08: 0000000000000000 R09:
> ffffffff8163eb20
> [ 1039.319957] R10: ffff8800ff043bad R11: 0000000000000000 R12:
> 000000000000d602
> [ 1039.319957] R13: 0000000000000002 R14: 0000000000000400 R15:
> ffffffff8163eb20
> [ 1039.319957] FS:  0000000000000000(0000) GS:ffff88007f040000(0000)
> knlGS:0000000000000000
> [ 1039.319957] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
> [ 1039.319957] CR2: 00007f74195d29be CR3: 000000007af4d000 CR4:
> 00000000000006a0
> [ 1039.319957] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
> 0000000000000000
> [ 1039.319957] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:
> 0000000000000400
> [ 1039.319957] Process swapper/2 (pid: 0, threadinfo ffff88007c4ec000, task
> ffff88007c4f1650)
> [ 1039.319957] Stack:
> [ 1039.319957]  0000000000000002 0000000400000008 ffff88007f043c88
> 0000000000002710
> [ 1039.319957]  ffffffff8161a280 ffffffff8161a340 0000000000000001
> ffffffff8161a4c0
> [ 1039.319957]  ffff88007f043c78 ffffffff8101ecc6 ffff88007f043c98
> ffffffff8101bb81
> [ 1039.319957] Call Trace:
> [ 1039.319957]  <IRQ>
> [ 1039.319957]  [<ffffffff8101ecc6>] physflat_send_IPI_all+0x12/0x14
> [ 1039.319957]  [<ffffffff8101bb81>] arch_trigger_all_cpu_backtrace+0x4b/0x6e
> [ 1039.319957]  [<ffffffff8107a25a>] __rcu_pending+0x224/0x347
> [ 1039.319957]  [<ffffffff8107aa13>] rcu_check_callbacks+0xa2/0xb4
> [ 1039.319957]  [<ffffffff810469fd>] update_process_times+0x3a/0x70
> [ 1039.319957]  [<ffffffff8105f815>] tick_sched_timer+0x70/0x9a
> [ 1039.319957]  [<ffffffff810557c0>] __run_hrtimer.isra.26+0x75/0xce
> [ 1039.319957]  [<ffffffff81055ded>] hrtimer_interrupt+0xd7/0x193
> [ 1039.319957]  [<ffffffff81005f0a>] xen_timer_interrupt+0x2f/0x155
> [ 1039.319957]  [<ffffffff81021945>] ? pvclock_clocksource_read+0x48/0xb4
> [ 1039.319957]  [<ffffffff81021945>] ? pvclock_clocksource_read+0x48/0xb4
> [ 1039.319957]  [<ffffffff81021945>] ? pvclock_clocksource_read+0x48/0xb4
> [ 1039.319957]  [<ffffffff8107542d>] handle_irq_event_percpu+0x29/0x126
> [ 1039.319957]  [<ffffffff8119064a>] ? info_for_irq+0x9/0x19
> [ 1039.319957]  [<ffffffff81077b70>] handle_percpu_irq+0x39/0x4d
> [ 1039.319957]  [<ffffffff81190510>] __xen_evtchn_do_upcall+0x147/0x1df
> [ 1039.319957]  [<ffffffff81191eae>] xen_evtchn_do_upcall+0x27/0x39
> [ 1039.319957]  [<ffffffff812987ee>] xen_hvm_callback_vector+0x6e/0x80
> [ 1039.319957]  <EOI>
> [ 1039.319957]  [<ffffffff8107ab83>] ? rcu_needs_cpu+0x110/0x1c1
> [ 1039.319957]  [<ffffffff81020ff0>] ? native_safe_halt+0x6/0x8
> [ 1039.319957]  [<ffffffff8100e8bf>] default_idle+0x27/0x44
> [ 1039.319957]  [<ffffffff81007704>] cpu_idle+0x66/0xa4
> [ 1039.319957]  [<ffffffff81286605>] start_secondary+0x1ac/0x1b1
> 
> 
> 
> Thanks,
> Ruslan
> 
> 
> ----- Original Message -----
> From: Keir Fraser <keir.xen@gmail.com>
> To: Ruslan Nikolaev <nruslan_devel@yahoo.com>; "xen-devel@lists.xen.org"
> <xen-devel@lists.xen.org>
> Cc: 
> Sent: Monday, April 9, 2012 8:58 PM
> Subject: Re: [Xen-devel] Hypercall continuation and wait_event
> 
> It means the vcpu has an interrupt pending (in the pv case, that means an
> event channel has a pending event).
> 
> 
> On 09/04/2012 21:16, "Ruslan Nikolaev" <nruslan_devel@yahoo.com> wrote:
> 
>> Keir,
>> 
>> Thanks for your replies! Just one more question about
>> local_event_need_delivery(). Under what (common) conditions I would expect to
>> have local events that need delivery?
>> 
>> Ruslan
>> 
>> 
>> 
>> ----- Original Message -----
>> From: Keir Fraser <keir.xen@gmail.com>
>> To: Ruslan Nikolaev <nruslan_devel@yahoo.com>; "xen-devel@lists.xen.org"
>> <xen-devel@lists.xen.org>
>> Cc: 
>> Sent: Monday, April 9, 2012 8:09 PM
>> Subject: Re: [Xen-devel] Hypercall continuation and wait_event
>> 
>> On 09/04/2012 20:18, "Ruslan Nikolaev" <nruslan_devel@yahoo.com> wrote:
>> 
>>> Thanks for the reply.
>>> 
>>> Since it can take arbitrarily long for an event to arrive (e.g., it is
>>> coming
>>> from a different guest on a user request), how do I need to handle this
>>> case?Does it mean that I only need to make sure that nothings get scheduled
>>> on
>>> this VCPU in the guest?
>> 
>> Nothing else *can* get scheduled on this VCPU in the guest. The VCPU will
>> sleep within wait_event within the hypercall context. Hence you must not
>> hold any hypervisor spinlocks either, for example.
>> 
>>> Also, it is not exactly clear to me how wait_event avoids the need for
>>> hypercall continuation. What about local_events_need_delivery() or
>>> softirq_pending()? Are they going to be handled by wait_event internally?
>> 
>> Your VCPU gets descheduled. Hence softirq_pending() is not your concern for
>> the duration that you're descheduled. And if local_event_need_delivery(),
>> that's too bad, they have to wait for the vcpu to wake up on the event.
>> 
>> -- Keir
>> 
>>> Ruslan
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> ----- Original Message -----
>>> From: Keir Fraser <keir.xen@gmail.com>
>>> To: Ruslan Nikolaev <nruslan_devel@yahoo.com>; "xen-devel@lists.xen.org"
>>> <xen-devel@lists.xen.org>
>>> Cc: 
>>> Sent: Monday, April 9, 2012 6:54 PM
>>> Subject: Re: [Xen-devel] Hypercall continuation and wait_event
>>> 
>>> On 09/04/2012 18:51, "Ruslan Nikolaev" <nruslan_devel@yahoo.com> wrote:
>>> 
>>>> Hi
>>>> 
>>>> I am curious how I can properly support hypercall continuation and
>>>> wait_event.
>>>> I have a dedicated VCPU in a domain which makes a special hypercall, and
>>>> the
>>>> hypercall waits for certain event to arrive. I am using queues available in
>>>> Xen, so wait_event will be invoked in the hypercall once its ready to
>>>> accept
>>>> events. However, my understanding that even though I have a dedicated VCPU
>>>> for
>>>> this hypercall, I still may need to support hypercall continuation
>>>> properly.
>>>> (Is this the case?) So, my question is how exactly the need for hypercall
>>> 
>>> No it's not the case, the old hypercall_create_continuation() mechanism does
>>> not need to be used with wait_event().
>>> 
>>> -- Keir
>>> 
>>>> preemption may affect wait_event() and wait() operations, and where would I
>>>> need to do hypercall_preempt_check()?
>>>> 
>>>> Thank you!
>>>> Ruslan
>>>> 
>>>> 
>>>> _______________________________________________
>>>> Xen-devel mailing list
>>>> Xen-devel@lists.xen.org
>>>> http://lists.xen.org/xen-devel
>>> 
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@lists.xen.org
>>> http://lists.xen.org/xen-devel
>> 
>> 
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>> 
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


* Re: Hypercall continuation and wait_event
  2012-04-10  7:37             ` Keir Fraser
@ 2012-04-12 21:04               ` Ruslan Nikolaev
  2012-04-12 22:16                 ` Keir Fraser
  0 siblings, 1 reply; 10+ messages in thread
From: Ruslan Nikolaev @ 2012-04-12 21:04 UTC (permalink / raw)
  To: xen-devel

Keir,

I have a question regarding the Xen interrupt affinity mask. Is there some way to disable Xen (virtual) interrupts on a particular CPU? I mean something like irq_default_affinity in the Linux kernel (which applies to normal SMP interrupts).

If there is no easy way to change the mask, do you know which functions I need to look at?

Thank you!

Ruslan



----- Original Message -----
From: Keir Fraser <keir.xen@gmail.com>
To: Ruslan Nikolaev <nruslan_devel@yahoo.com>; "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Cc: 
Sent: Tuesday, April 10, 2012 3:37 AM
Subject: Re: [Xen-devel] Hypercall continuation and wait_event

Not sure. Did you snip some lines from the call trace that might explain why
the call trace is being generated (e.g., watchdog timeout, page fault, ...)?
From the lines you provide, we can't even tell which vcpu it is that is
dumping the call trace.

-- Keir

On 09/04/2012 22:19, "Ruslan Nikolaev" <nruslan_devel@yahoo.com> wrote:

> Keir,
> 
> Thanks again! When I used the scheme I have described, I periodically receive
> kernel errors as shown below. Notice that I use HVM domain and also 'isolcpus'
> as a Linux kernel option to prevent a dedicated VCPU from being normally used.
> A hypercall is being made from a special kernel thread (which is bind to the
> dedicated VCPU before the call).
> 
> What could be the reason of these messages? Looks like it is something related
> to a timer.
> 
> 
> [ 1039.319957] RIP: 0010:[<ffffffff8101ba09>]  [<ffffffff8101ba09>]
> default_send_IPI_mask_sequence_phys+0x95/0xce
> [ 1039.319957] RSP: 0018:ffff88007f043c28  EFLAGS: 00000046
> [ 1039.319957] RAX: 0000000000000400 RBX: 0000000000000096 RCX:
> 0000000000000020
> [ 1039.319957] RDX: 0000000000000002 RSI: 0000000000000020 RDI:
> 0000000000000300
> [ 1039.319957] RBP: ffff88007f043c68 R08: 0000000000000000 R09:
> ffffffff8163eb20
> [ 1039.319957] R10: ffff8800ff043bad R11: 0000000000000000 R12:
> 000000000000d602
> [ 1039.319957] R13: 0000000000000002 R14: 0000000000000400 R15:
> ffffffff8163eb20
> [ 1039.319957] FS:  0000000000000000(0000) GS:ffff88007f040000(0000)
> knlGS:0000000000000000
> [ 1039.319957] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
> [ 1039.319957] CR2: 00007f74195d29be CR3: 000000007af4d000 CR4:
> 00000000000006a0
> [ 1039.319957] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
> 0000000000000000
> [ 1039.319957] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:
> 0000000000000400
> [ 1039.319957] Process swapper/2 (pid: 0, threadinfo ffff88007c4ec000, task
> ffff88007c4f1650)
> [ 1039.319957] Stack:
> [ 1039.319957]  0000000000000002 0000000400000008 ffff88007f043c88
> 0000000000002710
> [ 1039.319957]  ffffffff8161a280 ffffffff8161a340 0000000000000001
> ffffffff8161a4c0
> [ 1039.319957]  ffff88007f043c78 ffffffff8101ecc6 ffff88007f043c98
> ffffffff8101bb81
> [ 1039.319957] Call Trace:
> [ 1039.319957]  <IRQ>
> [ 1039.319957]  [<ffffffff8101ecc6>] physflat_send_IPI_all+0x12/0x14
> [ 1039.319957]  [<ffffffff8101bb81>] arch_trigger_all_cpu_backtrace+0x4b/0x6e
> [ 1039.319957]  [<ffffffff8107a25a>] __rcu_pending+0x224/0x347
> [ 1039.319957]  [<ffffffff8107aa13>] rcu_check_callbacks+0xa2/0xb4
> [ 1039.319957]  [<ffffffff810469fd>] update_process_times+0x3a/0x70
> [ 1039.319957]  [<ffffffff8105f815>] tick_sched_timer+0x70/0x9a
> [ 1039.319957]  [<ffffffff810557c0>] __run_hrtimer.isra.26+0x75/0xce
> [ 1039.319957]  [<ffffffff81055ded>] hrtimer_interrupt+0xd7/0x193
> [ 1039.319957]  [<ffffffff81005f0a>] xen_timer_interrupt+0x2f/0x155
> [ 1039.319957]  [<ffffffff81021945>] ? pvclock_clocksource_read+0x48/0xb4
> [ 1039.319957]  [<ffffffff81021945>] ? pvclock_clocksource_read+0x48/0xb4
> [ 1039.319957]  [<ffffffff81021945>] ? pvclock_clocksource_read+0x48/0xb4
> [ 1039.319957]  [<ffffffff8107542d>] handle_irq_event_percpu+0x29/0x126
> [ 1039.319957]  [<ffffffff8119064a>] ? info_for_irq+0x9/0x19
> [ 1039.319957]  [<ffffffff81077b70>] handle_percpu_irq+0x39/0x4d
> [ 1039.319957]  [<ffffffff81190510>] __xen_evtchn_do_upcall+0x147/0x1df
> [ 1039.319957]  [<ffffffff81191eae>] xen_evtchn_do_upcall+0x27/0x39
> [ 1039.319957]  [<ffffffff812987ee>] xen_hvm_callback_vector+0x6e/0x80
> [ 1039.319957]  <EOI>
> [ 1039.319957]  [<ffffffff8107ab83>] ? rcu_needs_cpu+0x110/0x1c1
> [ 1039.319957]  [<ffffffff81020ff0>] ? native_safe_halt+0x6/0x8
> [ 1039.319957]  [<ffffffff8100e8bf>] default_idle+0x27/0x44
> [ 1039.319957]  [<ffffffff81007704>] cpu_idle+0x66/0xa4
> [ 1039.319957]  [<ffffffff81286605>] start_secondary+0x1ac/0x1b1
> 
> 
> 
> Thanks,
> Ruslan
> 
> 
> ----- Original Message -----
> From: Keir Fraser <keir.xen@gmail.com>
> To: Ruslan Nikolaev <nruslan_devel@yahoo.com>; "xen-devel@lists.xen.org"
> <xen-devel@lists.xen.org>
> Cc: 
> Sent: Monday, April 9, 2012 8:58 PM
> Subject: Re: [Xen-devel] Hypercall continuation and wait_event
> 
> It means the vcpu has an interrupt pending (in the pv case, that means an
> event channel has a pending event).
> 
> 
> On 09/04/2012 21:16, "Ruslan Nikolaev" <nruslan_devel@yahoo.com> wrote:
> 
>> Keir,
>> 
>> Thanks for your replies! Just one more question about
>> local_event_need_delivery(). Under what (common) conditions I would expect to
>> have local events that need delivery?
>> 
>> Ruslan
>> 
>> 
>> 
>> ----- Original Message -----
>> From: Keir Fraser <keir.xen@gmail.com>
>> To: Ruslan Nikolaev <nruslan_devel@yahoo.com>; "xen-devel@lists.xen.org"
>> <xen-devel@lists.xen.org>
>> Cc: 
>> Sent: Monday, April 9, 2012 8:09 PM
>> Subject: Re: [Xen-devel] Hypercall continuation and wait_event
>> 
>> On 09/04/2012 20:18, "Ruslan Nikolaev" <nruslan_devel@yahoo.com> wrote:
>> 
>>> Thanks for the reply.
>>> 
>>> Since it can take arbitrarily long for an event to arrive (e.g., it is
>>> coming
>>> from a different guest on a user request), how do I need to handle this
>>> case?Does it mean that I only need to make sure that nothings get scheduled
>>> on
>>> this VCPU in the guest?
>> 
>> Nothing else *can* get scheduled on this VCPU in the guest. The VCPU will
>> sleep within wait_event within the hypercall context. Hence you must not
>> hold any hypervisor spinlocks either, for example.
>> 
>>> Also, it is not exactly clear to me how wait_event avoids the need for
>>> hypercall continuation. What about local_events_need_delivery() or
>>> softirq_pending()? Are they going to be handled by wait_event internally?
>> 
>> Your VCPU gets descheduled. Hence softirq_pending() is not your concern for
>> the duration that you're descheduled. And if local_event_need_delivery(),
>> that's too bad, they have to wait for the vcpu to wake up on the event.
>> 
>> -- Keir
>> 
>>> Ruslan
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> ----- Original Message -----
>>> From: Keir Fraser <keir.xen@gmail.com>
>>> To: Ruslan Nikolaev <nruslan_devel@yahoo.com>; "xen-devel@lists.xen.org"
>>> <xen-devel@lists.xen.org>
>>> Cc: 
>>> Sent: Monday, April 9, 2012 6:54 PM
>>> Subject: Re: [Xen-devel] Hypercall continuation and wait_event
>>> 
>>> On 09/04/2012 18:51, "Ruslan Nikolaev" <nruslan_devel@yahoo.com> wrote:
>>> 
>>>> Hi
>>>> 
>>>> I am curious how I can properly support hypercall continuation and
>>>> wait_event.
>>>> I have a dedicated VCPU in a domain which makes a special hypercall, and
>>>> the
>>>> hypercall waits for certain event to arrive. I am using queues available in
>>>> Xen, so wait_event will be invoked in the hypercall once its ready to
>>>> accept
>>>> events. However, my understanding that even though I have a dedicated VCPU
>>>> for
>>>> this hypercall, I still may need to support hypercall continuation
>>>> properly.
>>>> (Is this the case?) So, my question is how exactly the need for hypercall
>>> 
>>> No it's not the case, the old hypercall_create_continuation() mechanism does
>>> not need to be used with wait_event().
>>> 
>>> -- Keir
>>> 
>>>> preemption may affect wait_event() and wait() operations, and where would I
>>>> need to do hypercall_preempt_check()?
>>>> 
>>>> Thank you!
>>>> Ruslan
>>>> 
>>>> 
>>>> _______________________________________________
>>>> Xen-devel mailing list
>>>> Xen-devel@lists.xen.org
>>>> http://lists.xen.org/xen-devel
>>> 
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@lists.xen.org
>>> http://lists.xen.org/xen-devel
>> 
>> 
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>> 
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* Re: Hypercall continuation and wait_event
  2012-04-12 21:04               ` Ruslan Nikolaev
@ 2012-04-12 22:16                 ` Keir Fraser
  0 siblings, 0 replies; 10+ messages in thread
From: Keir Fraser @ 2012-04-12 22:16 UTC (permalink / raw)
  To: Ruslan Nikolaev, xen-devel

PV interrupts or HVM emulated interrupts? For the latter, you do it in the
same way you would for a native guest. For the former, it might depend on
the Linux kernel version, but it is possibly the same.
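
I.e., roughly the same kernel-side steering you would do on bare metal.
A sketch (irq and dedicated_cpu are placeholders; the userspace
equivalent is writing a mask to /proc/irq/<n>/smp_affinity):

#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/interrupt.h>
#include <linux/slab.h>

static int steer_irq_away(unsigned int irq, unsigned int dedicated_cpu)
{
    cpumask_var_t mask;
    int ret;

    if (!alloc_cpumask_var(&mask, GFP_KERNEL))
        return -ENOMEM;

    /* Allow the irq everywhere except on the dedicated cpu. */
    cpumask_copy(mask, cpu_online_mask);
    cpumask_clear_cpu(dedicated_cpu, mask);
    ret = irq_set_affinity(irq, mask);

    free_cpumask_var(mask);
    return ret;
}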

 -- Keir


On 12/04/2012 22:04, "Ruslan Nikolaev" <nruslan_devel@yahoo.com> wrote:

> Keir,
> 
> I have a question regarding xen interrupt affinity mask. Is there some way to
> disable xen (virtual) interrupts on a particular cpu? I mean, like
> irq_default_affinity in Linux kernel (which is for normal SMP interrupts).
> 
> If there is no easy way to change the mask, do you know what functions I need
> to look at?
> 
> Thank you!
> 
> Ruslan
> 
> 
> 
> ----- Original Message -----
> From: Keir Fraser <keir.xen@gmail.com>
> To: Ruslan Nikolaev <nruslan_devel@yahoo.com>; "xen-devel@lists.xen.org"
> <xen-devel@lists.xen.org>
> Cc: 
> Sent: Tuesday, April 10, 2012 3:37 AM
> Subject: Re: [Xen-devel] Hypercall continuation and wait_event
> 
> Not sure. Did you snip some lines from the call trace that might explain why
> the call trace is being generated (e.g., watchdog timeout, page fault, ...)?
> From the lines you provide, we can't even tell which vcpu it is that is
> dumping the call trace.
> 
> -- Keir
> 
> On 09/04/2012 22:19, "Ruslan Nikolaev" <nruslan_devel@yahoo.com> wrote:
> 
>> Keir,
>> 
>> Thanks again! When I used the scheme I have described, I periodically receive
>> kernel errors as shown below. Notice that I use HVM domain and also
>> 'isolcpus'
>> as a Linux kernel option to prevent a dedicated VCPU from being normally
>> used.
>> A hypercall is being made from a special kernel thread (which is bind to the
>> dedicated VCPU before the call).
>> 
>> What could be the reason of these messages? Looks like it is something
>> related
>> to a timer.
>> 
>> 
>> [ 1039.319957] RIP: 0010:[<ffffffff8101ba09>]  [<ffffffff8101ba09>]
>> default_send_IPI_mask_sequence_phys+0x95/0xce
>> [ 1039.319957] RSP: 0018:ffff88007f043c28  EFLAGS: 00000046
>> [ 1039.319957] RAX: 0000000000000400 RBX: 0000000000000096 RCX:
>> 0000000000000020
>> [ 1039.319957] RDX: 0000000000000002 RSI: 0000000000000020 RDI:
>> 0000000000000300
>> [ 1039.319957] RBP: ffff88007f043c68 R08: 0000000000000000 R09:
>> ffffffff8163eb20
>> [ 1039.319957] R10: ffff8800ff043bad R11: 0000000000000000 R12:
>> 000000000000d602
>> [ 1039.319957] R13: 0000000000000002 R14: 0000000000000400 R15:
>> ffffffff8163eb20
>> [ 1039.319957] FS:  0000000000000000(0000) GS:ffff88007f040000(0000)
>> knlGS:0000000000000000
>> [ 1039.319957] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
>> [ 1039.319957] CR2: 00007f74195d29be CR3: 000000007af4d000 CR4:
>> 00000000000006a0
>> [ 1039.319957] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
>> 0000000000000000
>> [ 1039.319957] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:
>> 0000000000000400
>> [ 1039.319957] Process swapper/2 (pid: 0, threadinfo ffff88007c4ec000, task
>> ffff88007c4f1650)
>> [ 1039.319957] Stack:
>> [ 1039.319957]  0000000000000002 0000000400000008 ffff88007f043c88
>> 0000000000002710
>> [ 1039.319957]  ffffffff8161a280 ffffffff8161a340 0000000000000001
>> ffffffff8161a4c0
>> [ 1039.319957]  ffff88007f043c78 ffffffff8101ecc6 ffff88007f043c98
>> ffffffff8101bb81
>> [ 1039.319957] Call Trace:
>> [ 1039.319957]  <IRQ>
>> [ 1039.319957]  [<ffffffff8101ecc6>] physflat_send_IPI_all+0x12/0x14
>> [ 1039.319957]  [<ffffffff8101bb81>] arch_trigger_all_cpu_backtrace+0x4b/0x6e
>> [ 1039.319957]  [<ffffffff8107a25a>] __rcu_pending+0x224/0x347
>> [ 1039.319957]  [<ffffffff8107aa13>] rcu_check_callbacks+0xa2/0xb4
>> [ 1039.319957]  [<ffffffff810469fd>] update_process_times+0x3a/0x70
>> [ 1039.319957]  [<ffffffff8105f815>] tick_sched_timer+0x70/0x9a
>> [ 1039.319957]  [<ffffffff810557c0>] __run_hrtimer.isra.26+0x75/0xce
>> [ 1039.319957]  [<ffffffff81055ded>] hrtimer_interrupt+0xd7/0x193
>> [ 1039.319957]  [<ffffffff81005f0a>] xen_timer_interrupt+0x2f/0x155
>> [ 1039.319957]  [<ffffffff81021945>] ? pvclock_clocksource_read+0x48/0xb4
>> [ 1039.319957]  [<ffffffff81021945>] ? pvclock_clocksource_read+0x48/0xb4
>> [ 1039.319957]  [<ffffffff81021945>] ? pvclock_clocksource_read+0x48/0xb4
>> [ 1039.319957]  [<ffffffff8107542d>] handle_irq_event_percpu+0x29/0x126
>> [ 1039.319957]  [<ffffffff8119064a>] ? info_for_irq+0x9/0x19
>> [ 1039.319957]  [<ffffffff81077b70>] handle_percpu_irq+0x39/0x4d
>> [ 1039.319957]  [<ffffffff81190510>] __xen_evtchn_do_upcall+0x147/0x1df
>> [ 1039.319957]  [<ffffffff81191eae>] xen_evtchn_do_upcall+0x27/0x39
>> [ 1039.319957]  [<ffffffff812987ee>] xen_hvm_callback_vector+0x6e/0x80
>> [ 1039.319957]  <EOI>
>> [ 1039.319957]  [<ffffffff8107ab83>] ? rcu_needs_cpu+0x110/0x1c1
>> [ 1039.319957]  [<ffffffff81020ff0>] ? native_safe_halt+0x6/0x8
>> [ 1039.319957]  [<ffffffff8100e8bf>] default_idle+0x27/0x44
>> [ 1039.319957]  [<ffffffff81007704>] cpu_idle+0x66/0xa4
>> [ 1039.319957]  [<ffffffff81286605>] start_secondary+0x1ac/0x1b1
>> 
>> 
>> 
>> Thanks,
>> Ruslan
>> 
>> 
>> ----- Original Message -----
>> From: Keir Fraser <keir.xen@gmail.com>
>> To: Ruslan Nikolaev <nruslan_devel@yahoo.com>; "xen-devel@lists.xen.org"
>> <xen-devel@lists.xen.org>
>> Cc: 
>> Sent: Monday, April 9, 2012 8:58 PM
>> Subject: Re: [Xen-devel] Hypercall continuation and wait_event
>> 
>> It means the vcpu has an interrupt pending (in the pv case, that means an
>> event channel has a pending event).
>> 
>> 
>> On 09/04/2012 21:16, "Ruslan Nikolaev" <nruslan_devel@yahoo.com> wrote:
>> 
>>> Keir,
>>> 
>>> Thanks for your replies! Just one more question about
>>> local_event_need_delivery(). Under what (common) conditions I would expect
>>> to
>>> have local events that need delivery?
>>> 
>>> Ruslan
>>> 
>>> 
>>> 
>>> ----- Original Message -----
>>> From: Keir Fraser <keir.xen@gmail.com>
>>> To: Ruslan Nikolaev <nruslan_devel@yahoo.com>; "xen-devel@lists.xen.org"
>>> <xen-devel@lists.xen.org>
>>> Cc: 
>>> Sent: Monday, April 9, 2012 8:09 PM
>>> Subject: Re: [Xen-devel] Hypercall continuation and wait_event
>>> 
>>> On 09/04/2012 20:18, "Ruslan Nikolaev" <nruslan_devel@yahoo.com> wrote:
>>> 
>>>> Thanks for the reply.
>>>> 
>>>> Since it can take arbitrarily long for an event to arrive (e.g., it is
>>>> coming
>>>> from a different guest on a user request), how do I need to handle this
>>>> case?Does it mean that I only need to make sure that nothings get scheduled
>>>> on
>>>> this VCPU in the guest?
>>> 
>>> Nothing else *can* get scheduled on this VCPU in the guest. The VCPU will
>>> sleep within wait_event within the hypercall context. Hence you must not
>>> hold any hypervisor spinlocks either, for example.
>>> 
>>>> Also, it is not exactly clear to me how wait_event avoids the need for
>>>> hypercall continuation. What about local_events_need_delivery() or
>>>> softirq_pending()? Are they going to be handled by wait_event internally?
>>> 
>>> Your VCPU gets descheduled. Hence softirq_pending() is not your concern for
>>> the duration that you're descheduled. And if local_event_need_delivery(),
>>> that's too bad, they have to wait for the vcpu to wake up on the event.
>>> 
>>> -- Keir
>>> 
>>>> Ruslan
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> ----- Original Message -----
>>>> From: Keir Fraser <keir.xen@gmail.com>
>>>> To: Ruslan Nikolaev <nruslan_devel@yahoo.com>; "xen-devel@lists.xen.org"
>>>> <xen-devel@lists.xen.org>
>>>> Cc: 
>>>> Sent: Monday, April 9, 2012 6:54 PM
>>>> Subject: Re: [Xen-devel] Hypercall continuation and wait_event
>>>> 
>>>> On 09/04/2012 18:51, "Ruslan Nikolaev" <nruslan_devel@yahoo.com> wrote:
>>>> 
>>>>> Hi
>>>>> 
>>>>> I am curious how I can properly support hypercall continuation and
>>>>> wait_event.
>>>>> I have a dedicated VCPU in a domain which makes a special hypercall, and
>>>>> the
>>>>> hypercall waits for certain event to arrive. I am using queues available
>>>>> in
>>>>> Xen, so wait_event will be invoked in the hypercall once its ready to
>>>>> accept
>>>>> events. However, my understanding that even though I have a dedicated VCPU
>>>>> for
>>>>> this hypercall, I still may need to support hypercall continuation
>>>>> properly.
>>>>> (Is this the case?) So, my question is how exactly the need for hypercall
>>>> 
>>>> No it's not the case, the old hypercall_create_continuation() mechanism
>>>> does
>>>> not need to be used with wait_event().
>>>> 
>>>> -- Keir
>>>> 
>>>>> preemption may affect wait_event() and wait() operations, and where would
>>>>> I
>>>>> need to do hypercall_preempt_check()?
>>>>> 
>>>>> Thank you!
>>>>> Ruslan
>>>>> 
>>>>> 
>>>>> _______________________________________________
>>>>> Xen-devel mailing list
>>>>> Xen-devel@lists.xen.org
>>>>> http://lists.xen.org/xen-devel
>>>> 
>>>> _______________________________________________
>>>> Xen-devel mailing list
>>>> Xen-devel@lists.xen.org
>>>> http://lists.xen.org/xen-devel
>>> 
>>> 
>>> 
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@lists.xen.org
>>> http://lists.xen.org/xen-devel
>>> 
>>> 
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@lists.xen.org
>>> http://lists.xen.org/xen-devel
>> 
>> 
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>> 
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


end of thread

Thread overview: 10 messages
2012-04-09 17:51 Hypercall continuation and wait_event Ruslan Nikolaev
2012-04-09 18:54 ` Keir Fraser
2012-04-09 19:18   ` Ruslan Nikolaev
2012-04-09 20:09     ` Keir Fraser
2012-04-09 20:16       ` Ruslan Nikolaev
2012-04-09 20:58         ` Keir Fraser
2012-04-09 21:19           ` Ruslan Nikolaev
2012-04-10  7:37             ` Keir Fraser
2012-04-12 21:04               ` Ruslan Nikolaev
2012-04-12 22:16                 ` Keir Fraser
