From: Julien Grall <julien.grall.oss@gmail.com>
To: Oleksandr <olekstysh@gmail.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Julien Grall <julien.grall@arm.com>,
	Jan Beulich <jbeulich@suse.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [RFC PATCH V1 04/12] xen/arm: Introduce arch specific bits for IOREQ/DM features
Date: Mon, 10 Aug 2020 23:37:49 +0100
Message-ID: <CAJ=z9a3x7LgztG2KQKza8n3Y7xJsoO+DSmyi8Pxm1BgQOBs=xQ@mail.gmail.com>
In-Reply-To: <48a130dc-57fd-0627-d188-ddff4abf6778@gmail.com>

On Mon, 10 Aug 2020 at 21:29, Oleksandr <olekstysh@gmail.com> wrote:
>
>
> On 10.08.20 22:00, Julien Grall wrote:
>
> Hi Julien
>
> >
> >>>
> >>>>> @@ -2275,6 +2282,16 @@ static void check_for_vcpu_work(void)
> >>>>>    */
> >>>>>   void leave_hypervisor_to_guest(void)
> >>>>>   {
> >>>>> +#ifdef CONFIG_IOREQ_SERVER
> >>>>> +    /*
> >>>>> +     * XXX: Check the return. Shall we call that in
> >>>>> +     * continue_running and context_switch instead?
> >>>>> +     * The benefits would be to avoid calling
> >>>>> +     * handle_hvm_io_completion on every return.
> >>>>> +     */
> >>>>
> >>>> Yeah, that could be a simple and good optimization
> >>>
> >>> Well, it is not as simple as it sounds :).
> >>> handle_hvm_io_completion() is the function in charge to mark the
> >>> vCPU as waiting for I/O. So we would at least need to split the
> >>> function.
> >>>
> >>> I wrote this TODO because I wasn't sure about the complexity of
> >>> handle_hvm_io_completion(current). Looking at it again, the main
> >>> complexity is the looping over the IOREQ servers.
> >>>
> >>> I think it would be better to optimize handle_hvm_io_completion()
> >>> rather than trying to hack the context_switch() or continue_running().
> >> Well, is the idea in the proposed dirty test patch below close to what
> >> you expect? The patch optimizes handle_hvm_io_completion() to avoid extra
> >> actions if the vCPU's domain doesn't have an ioreq_server; alternatively,
> >> the check could be moved out of handle_hvm_io_completion() to avoid
> >> calling that function at all.
> >
> > This looks ok to me.
> >
> >> BTW, the TODO also suggests checking the return value of
> >> handle_hvm_io_completion(), but I am not completely sure we can simply
> >> just return from leave_hypervisor_to_guest() at this point. Could you
> >> please share your opinion?
> >
> > From my understanding, handle_hvm_io_completion() may return false if
> > there is pending I/O or a failure.
>
> It seems so, yes.
>
>
> >
> > In the former case, I think we want to call handle_hvm_io_completion()
> > later on. Possibly after we call do_softirq().
> >
> > I am wondering whether check_for_vcpu_work() could return whether
> > there is more work to do on behalf of the vCPU.
> >
> > So we could have:
> >
> > do
> > {
> >   check_for_pcpu_work();
> > } while ( check_for_vcpu_work() );
> >
> > The implementation of check_for_vcpu_work() would be:
> >
> > if ( !handle_hvm_io_completion() )
> >   return true;
> >
> > /* Rest of the existing code */
> >
> > return false;
>
> Thank you, will give it a try.
>
> Can we behave the same way for both "pending I/O" and "failure", or do
> we need to distinguish them?

We don't need to distinguish them. In both cases, we will want to
process softirqs. In all the failure cases, the domain will have
crashed. Therefore the vCPU will be unscheduled.
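
To make that concrete, here is a minimal sketch of the
check_for_vcpu_work() shape I have in mind (only a sketch of the
proposal in this thread, not committed code):

static bool check_for_vcpu_work(void)
{
    struct vcpu *v = current;

#ifdef CONFIG_IOREQ_SERVER
    /*
     * A pending I/O and a failure are handled alike: report
     * outstanding work so the caller processes softirqs. On a
     * failure the domain has been crashed, so the vCPU will be
     * unscheduled anyway.
     */
    if ( !handle_hvm_io_completion(v) )
        return true;
#endif

    /* Rest of the existing per-vCPU work. */

    return false;
}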

>
> Probably we need some sort of safe timeout/number of attempts in order
> not to spin forever?

Well, anything based on a timeout/number of attempts is flaky. How do
you know whether the I/O is just taking a "long time" to complete?

But a vCPU shouldn't continue until an I/O has completed. This is
not very different from what a processor would do.

In Xen's case, if an I/O never completes, then it most likely means that
something went horribly wrong with the Device Emulator. So it is most
likely not safe to continue. In hardware, when there is a device failure,
the OS may receive an SError (this is implementation defined) and
could act accordingly if it is able to recognize the issue.

It *might* be possible to send a virtual SError, but there are a couple
of issues with it:
     * How do you detect a failure?
     * SErrors are implementation defined. You would need to teach
your OS (or the firmware) how to deal with them.

I would expect quite a bit of effort in order to design and implement
it. For now, it is probably best to just let the vCPU spin forever.

This wouldn't be an issue for Xen, as do_softirq() would be called on
every iteration of the loop.
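
Putting the pieces together, the exit path would look roughly like
this (again only a sketch of the shape, not committed code;
check_for_pcpu_work() is where pending softirqs are drained via
do_softirq()):

void leave_hypervisor_to_guest(void)
{
    local_irq_disable();

    /*
     * Keep draining pCPU work (softirqs) for as long as the vCPU
     * reports outstanding work, e.g. an I/O that has not completed
     * yet. A vCPU blocked on an I/O spins here, but do_softirq()
     * still runs on every iteration.
     */
    do
    {
        check_for_pcpu_work();
    } while ( check_for_vcpu_work() );

    /* ... existing code preparing the return to the guest ... */
}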

Cheers,

