From: Oleksandr <olekstysh@gmail.com>
To: Julien Grall <julien.grall.oss@gmail.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
xen-devel <xen-devel@lists.xenproject.org>,
Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>,
Andrew Cooper <andrew.cooper3@citrix.com>,
George Dunlap <george.dunlap@citrix.com>,
Jan Beulich <jbeulich@suse.com>,
Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
Daniel De Graaf <dgdegra@tycho.nsa.gov>,
Julien Grall <julien.grall@arm.com>
Subject: Re: [RFC PATCH V1 04/12] xen/arm: Introduce arch specific bits for IOREQ/DM features
Date: Wed, 12 Aug 2020 18:08:11 +0300
Message-ID: <23ca366d-0f5a-fe47-874d-8cd4629ef308@gmail.com>
In-Reply-To: <5cf9c14c-3bac-ccbd-6586-1a540dbe9b8d@gmail.com>
Hi Julien
>>>>>>>> @@ -2275,6 +2282,16 @@ static void check_for_vcpu_work(void)
>>>>>>>>   */
>>>>>>>>  void leave_hypervisor_to_guest(void)
>>>>>>>>  {
>>>>>>>> +#ifdef CONFIG_IOREQ_SERVER
>>>>>>>> +    /*
>>>>>>>> +     * XXX: Check the return. Shall we call that in
>>>>>>>> +     * continue_running and context_switch instead?
>>>>>>>> +     * The benefits would be to avoid calling
>>>>>>>> +     * handle_hvm_io_completion on every return.
>>>>>>>> +     */
>>>>>>> Yeah, that could be a simple and good optimization.
>>>>>> Well, it is not as simple as it sounds :).
>>>>>> handle_hvm_io_completion() is the function in charge of marking
>>>>>> the vCPU as waiting for I/O. So we would at least need to split
>>>>>> the function.
>>>>>>
>>>>>> I wrote this TODO because I wasn't sure about the complexity of
>>>>>> handle_hvm_io_completion(current). Looking at it again, the main
>>>>>> complexity is the looping over the IOREQ servers.
>>>>>>
>>>>>> I think it would be better to optimize handle_hvm_io_completion()
>>>>>> rather than trying to hack the context_switch() or
>>>>>> continue_running().
>>>>> Well, is the idea in the proposed dirty test patch below close to
>>>>> what you expect? The patch optimizes handle_hvm_io_completion() to
>>>>> avoid extra actions if the vCPU's domain doesn't have an IOREQ
>>>>> server; alternatively, the check could be moved out of
>>>>> handle_hvm_io_completion() to avoid calling that function at all.
>>>> This looks ok to me.
>>>>
>>>>> BTW, the TODO also suggests checking the return value of
>>>>> handle_hvm_io_completion(), but I am not completely sure we can
>>>>> simply return from leave_hypervisor_to_guest() at this point.
>>>>> Could you please share your opinion?
>>>> From my understanding, handle_hvm_io_completion() may return false if
>>>> there is pending I/O or a failure.
>>> It seems so, yes.
>>>
>>>
>>>> In the former case, I think we want to call handle_hvm_io_completion()
>>>> later on. Possibly after we call do_softirq().
>>>>
>>>> I am wondering whether check_for_vcpu_work() could return whether
>>>> there is more work to do on behalf of the vCPU.
>>>>
>>>> So we could have:
>>>>
>>>> do
>>>> {
>>>>     check_for_pcpu_work();
>>>> } while (check_for_vcpu_work())
>>>>
>>>> The implementation of check_for_vcpu_work() would be:
>>>>
>>>>     if ( !handle_hvm_io_completion() )
>>>>         return true;
>>>>
>>>>     /* Rest of the existing code */
>>>>
>>>>     return false;
>>> Thank you, will give it a try.
>>>
>>> Can we behave the same way for both "pending I/O" and "failure", or
>>> do we need to distinguish them?
>> We don't need to distinguish them. In both cases, we will want to
>> process softirqs. In all the failure cases, the domain will have
>> crashed. Therefore the vCPU will be unscheduled.
>
> Got it.
>
>
>>> Probably we need some sort of safe timeout/number of attempts in
>>> order not to spin forever?
>> Well, anything based on timeout/number of attempts is flaky. How do
>> you know whether the I/O is just taking a "long time" to complete?
>>
>> But a vCPU shouldn't continue until an I/O has completed. This is
>> not very different from what a processor would do.
>>
>> In Xen's case, if an I/O never completes then it most likely means
>> that something went horribly wrong with the Device Emulator. So it
>> is most likely not safe to continue. In HW, when there is a device
>> failure, the OS may receive an SError (this is implementation
>> defined) and could act accordingly if it is able to recognize the
>> issue.
>>
>> It *might* be possible to send a virtual SError but there are a
>> couple of issues with it:
>>     * How do you detect a failure?
>>     * SErrors are implementation defined. You would need to teach
>>       your OS (or the firmware) how to deal with them.
>>
>> I would expect quite a bit of effort in order to design and implement
>> it. For now, it is probably best to just let the vCPU spin forever.
>>
>> This wouldn't be an issue for Xen as do_softirq() would be called at
>> every loop.
>
> Thank you for clarification. Fair enough and sounds reasonable.
I added logic to properly handle the return value of
handle_hvm_io_completion() as you suggested. For testing purposes I
made handle_hvm_io_completion() return false sometimes (I couldn't
trigger a real "pending I/O" failure during testing) to see how the
new logic behaved. I assume I can take this solution for the non-RFC
series (?)
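
For reference, the simulation was just a temporary hack at the top of
handle_hvm_io_completion(); it is test-only code, not part of the patch
below, and the counter name and failure rate here are made up:

    /*
     * Test-only hack: pretend the I/O hasn't completed yet on every
     * 4th invocation, forcing the new retry path in the caller.
     */
    static unsigned int test_cnt;

    if ( !(++test_cnt % 4) )
        return false;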
---
 xen/arch/arm/traps.c         | 36 ++++++++++++++++++++++--------------
 xen/common/hvm/ioreq.c       |  9 ++++++++-
 xen/include/asm-arm/domain.h |  1 +
 xen/include/xen/hvm/ioreq.h  |  5 +++++
 4 files changed, 36 insertions(+), 15 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 974c744..f74b514 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2264,12 +2264,26 @@ static void check_for_pcpu_work(void)
  * Process pending work for the vCPU. Any call should be fast or
  * implement preemption.
  */
-static void check_for_vcpu_work(void)
+static bool check_for_vcpu_work(void)
 {
     struct vcpu *v = current;
 
+#ifdef CONFIG_IOREQ_SERVER
+    if ( hvm_domain_has_ioreq_server(v->domain) )
+    {
+        bool handled;
+
+        local_irq_enable();
+        handled = handle_hvm_io_completion(v);
+        local_irq_disable();
+
+        if ( !handled )
+            return true;
+    }
+#endif
+
     if ( likely(!v->arch.need_flush_to_ram) )
-        return;
+        return false;
 
     /*
      * Give a chance for the pCPU to process work before handling the vCPU
@@ -2280,6 +2294,8 @@ static void check_for_vcpu_work(void)
     local_irq_enable();
     p2m_flush_vm(v);
     local_irq_disable();
+
+    return false;
 }
 
 /*
@@ -2290,20 +2306,12 @@ static void check_for_vcpu_work(void)
  */
 void leave_hypervisor_to_guest(void)
 {
-#ifdef CONFIG_IOREQ_SERVER
-    /*
-     * XXX: Check the return. Shall we call that in
-     * continue_running and context_switch instead?
-     * The benefits would be to avoid calling
-     * handle_hvm_io_completion on every return.
-     */
-    local_irq_enable();
-    handle_hvm_io_completion(current);
-#endif
     local_irq_disable();
 
-    check_for_vcpu_work();
-    check_for_pcpu_work();
+    do
+    {
+        check_for_pcpu_work();
+    } while ( check_for_vcpu_work() );
 
     vgic_sync_to_lrs();
diff --git a/xen/common/hvm/ioreq.c b/xen/common/hvm/ioreq.c
index 7e1fa23..81b41ab 100644
--- a/xen/common/hvm/ioreq.c
+++ b/xen/common/hvm/ioreq.c
@@ -38,9 +38,15 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
                              struct hvm_ioreq_server *s)
 {
     ASSERT(id < MAX_NR_IOREQ_SERVERS);
-    ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]);
+    ASSERT((!s && d->arch.hvm.ioreq_server.server[id]) ||
+           (s && !d->arch.hvm.ioreq_server.server[id]));
 
     d->arch.hvm.ioreq_server.server[id] = s;
+
+    if ( s )
+        d->arch.hvm.ioreq_server.nr_servers++;
+    else
+        d->arch.hvm.ioreq_server.nr_servers--;
 }
 
 /*
@@ -1415,6 +1421,7 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
 void hvm_ioreq_init(struct domain *d)
 {
     spin_lock_init(&d->arch.hvm.ioreq_server.lock);
+    d->arch.hvm.ioreq_server.nr_servers = 0;
 
     arch_hvm_ioreq_init(d);
 }
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 6a01d69..484bd1a 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -68,6 +68,7 @@ struct hvm_domain
     struct {
         spinlock_t lock;
         struct hvm_ioreq_server *server[MAX_NR_IOREQ_SERVERS];
+        unsigned int nr_servers;
     } ioreq_server;
 
     bool_t qemu_mapcache_invalidate;
diff --git a/xen/include/xen/hvm/ioreq.h b/xen/include/xen/hvm/ioreq.h
index 40b7b5e..8f78852 100644
--- a/xen/include/xen/hvm/ioreq.h
+++ b/xen/include/xen/hvm/ioreq.h
@@ -23,6 +23,11 @@
 
 #include <asm/hvm/ioreq.h>
 
+static inline bool hvm_domain_has_ioreq_server(const struct domain *d)
+{
+    return (d->arch.hvm.ioreq_server.nr_servers > 0);
+}
+
 #define GET_IOREQ_SERVER(d, id) \
     (d)->arch.hvm.ioreq_server.server[id]
--
2.7.4
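
A note on the reworked ASSERT in set_ioreq_server(): it encodes that a
slot only ever toggles between NULL and non-NULL (register into an
empty slot, or clear an occupied one, never replace one server with
another), which is what keeps the nr_servers accounting correct. An
equivalent, more compact form would be (just a sketch, not part of the
patch above):

    /*
     * Allowed transitions: NULL -> s (nr_servers++) or
     * s -> NULL (nr_servers--); never s1 -> s2.
     */
    ASSERT(!s ^ !d->arch.hvm.ioreq_server.server[id]);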
--
Regards,
Oleksandr Tyshchenko