xen-devel.lists.xenproject.org archive mirror
From: Oleksandr <olekstysh@gmail.com>
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Julien Grall <julien.grall@arm.com>
Subject: Re: [PATCH V1 09/16] arm/ioreq: Introduce arch specific bits for IOREQ/DM features
Date: Thu, 24 Sep 2020 21:22:26 +0300	[thread overview]
Message-ID: <fcb40929-9487-1d20-3990-09c79cab8df8@gmail.com> (raw)
In-Reply-To: <e4009c0f-1057-f031-c3bb-6b7c850a0aa1@xen.org>


On 24.09.20 20:25, Julien Grall wrote:

Hi Julien.

>
>
> On 23/09/2020 21:16, Oleksandr wrote:
>>
>> On 23.09.20 21:03, Julien Grall wrote:
>>> Hi,
>>
>> Hi Julien
>>
>>
>>>
>>> On 10/09/2020 21:22, Oleksandr Tyshchenko wrote:
>>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>
>>> I believe I am the original author of this code...
>>
>> Sorry, will fix
>>
>>
>>>
>>>> This patch adds basic IOREQ/DM support on Arm. The subsequent
>>>> patches will improve functionality, add remaining bits as well as
>>>> address several TODOs.
>>>
>>> I find it a bit weird to add code with TODOs that are handled in the
>>> same series. Can't we just split this patch into smaller ones where
>>> everything is addressed from the start?
>> Sorry if I wasn't clear in the description. Let me clarify.
>> The corresponding RFC patch had 3 major TODOs:
>> 1. Handle properly when hvm_send_ioreq() returns IO_RETRY
>> 2. Proper ref-counting for the foreign entries in 
>> set_foreign_p2m_entry()
>> 3. Check the return value of handle_hvm_io_completion() *and* avoid 
>> calling handle_hvm_io_completion() on every return.
>>
>> TODO #1 was fixed in the current patch.
>> TODO #2 was fixed in "xen/mm: Handle properly reference in
>> set_foreign_p2m_entry() on Arm".
>> TODO #3 was partially fixed in the current patch (checking the return
>> value of handle_hvm_io_completion()).
>> The second part of TODO #3 (avoiding the call to
>> handle_hvm_io_completion() on every return) was moved to a separate
>> patch, "xen/ioreq: Introduce hvm_domain_has_ioreq_server()", and
>> addressed (or probably "improved" is a better word) there, along with
>> introducing a mechanism that makes the improvement possible.
>
> Right, none of those TODOs are described in the code. So it makes it
> more difficult to know what you are actually referring to.
>
> I would suggest reshuffling the series so the TODOs are addressed
> earlier where possible.

OK, I will try to prepare something.


>
>>
>> Could you please clarify how this patch could be split into smaller ones?
>
> This patch is going to be reduced a fair bit if you make some of the
> structures common. The next step would be to move anything that is not
> directly related to IOREQ out.


Thank you for the clarification.
Yes; however, I believed everything in this patch was directly related to 
IOREQ...


>
>
> From a quick look, there are few things that can be moved in separate 
> patches:
>    - The addition of the ASSERT_UNREACHABLE()

Did you mean that the addition of the ASSERT_UNREACHABLE() to 
arch_handle_hvm_io_completion/handle_pio can be moved to separate patches?
Sorry, I don't quite understand what the benefit would be.
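For context, the kind of defensive default arm being discussed can be sketched generically as follows. This is a hypothetical, self-contained model, not Xen code: the enum, function, and macro names are placeholders, and a plain assert() stands in for Xen's ASSERT_UNREACHABLE().

```c
#include <assert.h>

/* Stand-in for Xen's ASSERT_UNREACHABLE() (debug-build trap). */
#define ASSERT_UNREACHABLE_MODEL() assert(!"unreachable")

enum io_state_model { IO_ABORT_M, IO_HANDLED_M, IO_RETRY_M, IO_UNHANDLED_M };

/* Classify an I/O emulation outcome; the default arm documents that every
 * enumerator is meant to be handled explicitly. */
int classify(enum io_state_model s)
{
    switch ( s )
    {
    case IO_HANDLED_M:
        return 1;            /* instruction emulated, advance the PC */
    case IO_RETRY_M:
        return 2;            /* finish later */
    case IO_UNHANDLED_M:
        return 0;            /* try another way to handle it */
    case IO_ABORT_M:
        return -1;           /* inject an abort into the guest */
    default:
        /* A new enumerator added without updating this switch trips here
         * in debug builds instead of being silently mishandled. */
        ASSERT_UNREACHABLE_MODEL();
        return -1;
    }
}
```

The point of the pattern is purely diagnostic: it turns a forgotten switch arm into a loud debug-build failure rather than a quiet fallthrough.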


>    - The addition of the loop in leave_hypervisor_to_guest() as I 
> think it deserves some explanation.

I agree that the loop in leave_hypervisor_to_guest() needs an 
explanation. I will move it to a separate patch. But this way I need to 
return the corresponding TODO to this patch.
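To illustrate the control flow in question, here is a minimal model of the reworked exit path (hypothetical names and stub bodies, not the actual Xen functions): check_for_vcpu_work() now reports whether the vCPU still has outstanding work, and the pCPU work is re-checked on every iteration before entering the guest.

```c
int pending_vcpu_work; /* stand-in for e.g. a not-yet-handled ioreq completion */
int pcpu_work_runs;    /* counts how often pCPU work was processed */

static void check_for_pcpu_work_model(void)
{
    pcpu_work_runs++;
}

/* Returns nonzero while the vCPU still has outstanding work. */
static int check_for_vcpu_work_model(void)
{
    if ( pending_vcpu_work > 0 )
    {
        pending_vcpu_work--;
        return 1;
    }
    return 0;
}

void leave_hypervisor_to_guest_model(void)
{
    /* Mirrors the quoted hunk: loop until the vCPU reports no more work. */
    do
    {
        check_for_pcpu_work_model();
    } while ( check_for_vcpu_work_model() );
}
```

With two units of pending vCPU work, the pCPU work runs three times: once per retry plus a final pass before entering the guest, which is exactly the behaviour the do/while shape guarantees.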


>    - The sign extension in handle_ioserv() can possibly be abstracted. 
> Actually the code is quite similar to handle_read().

OK, I will consider that.
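For reference, the shared sign-extension logic could plausibly be factored into a helper along these lines. This is a hypothetical sketch with placeholder names, not Xen's actual code; it mirrors the quoted hunk, taking the access width in bits and the zero-extended value read by the handler.

```c
/*
 * Sign-extend a value read by an I/O handler, mirroring the logic shared
 * by handle_ioserv() and handle_read() in the quoted patch. 'size_bits'
 * is the access width in bits; callers are assumed to guarantee
 * size_bits < BITS_PER_LONG here (a full-width access needs no
 * extension, and shifting by the operand width would be undefined).
 */
unsigned long sign_extend_value(unsigned long r, unsigned int size_bits,
                                int is_signed)
{
    if ( is_signed && (r & (1UL << (size_bits - 1))) )
        r |= (~0UL) << size_bits;

    return r;
}
```

Both call sites would then reduce to a single call after reading the value, which also keeps the BUILD_BUG_ON() about register_t in one place.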


>
>>
>>>
>>>>
>>>> Please note, the "PIO handling" TODO is expected to be left
>>>> unaddressed for the current series. It is not a big issue for now,
>>>> while Xen doesn't have support for vPCI on Arm. On Arm64, PIO
>>>> accesses are only used for PCI IO BARs, and we would probably want
>>>> to expose them to the emulator as PIO accesses to make a DM
>>>> completely arch-agnostic. So "PIO handling" should be implemented
>>>> when we add support for vPCI.
>>>>
>>>> Please note, at the moment build on Arm32 is broken (see cmpxchg
>>>> usage in hvm_send_buffered_ioreq()) due to the lack of cmpxchg_64
>>>> support on Arm32. There is a patch on review to address this issue:
>>>> https://patchwork.kernel.org/patch/11715559/
>>>
>>> This has been committed.
>>
>> Thank you for the patch; I will remove the notice.
>
> For future reference, I think such notices would be better placed after 
> the --- marker, as they don't need to be part of the commit message.

Got it.


>
>
>>
>>>
>>>
>>>> +    if ( dabt.write )
>>>> +        return IO_HANDLED;
>>>> +
>>>> +    /*
>>>> +     * Sign extend if required.
>>>> +     * Note that we expect the read handler to have zeroed the bits
>>>> +     * outside the requested access size.
>>>> +     */
>>>> +    if ( dabt.sign && (r & (1UL << (size - 1))) )
>>>> +    {
>>>> +        /*
>>>> +         * We are relying on register_t using the same as
>>>> +         * an unsigned long in order to keep the 32-bit assembly
>>>> +         * code smaller.
>>>> +         */
>>>> +        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
>>>> +        r |= (~0UL) << size;
>>>> +    }
>>>> +
>>>> +    set_user_reg(regs, dabt.reg, r);
>>>> +
>>>> +    return IO_HANDLED;
>>>> +}
>>>> +
>>>> +enum io_state try_fwd_ioserv(struct cpu_user_regs *regs,
>>>> +                             struct vcpu *v, mmio_info_t *info)
>>>> +{
>>>> +    struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
>>>> +    ioreq_t p = {
>>>> +        .type = IOREQ_TYPE_COPY,
>>>> +        .addr = info->gpa,
>>>> +        .size = 1 << info->dabt.size,
>>>> +        .count = 1,
>>>> +        .dir = !info->dabt.write,
>>>> +        /*
>>>> +         * On x86, df is used by 'rep' instruction to tell the 
>>>> direction
>>>> +         * to iterate (forward or backward).
>>>> +         * On Arm, all the accesses to MMIO region will do a single
>>>> +         * memory access. So for now, we can safely always set to 0.
>>>> +         */
>>>> +        .df = 0,
>>>> +        .data = get_user_reg(regs, info->dabt.reg),
>>>> +        .state = STATE_IOREQ_READY,
>>>> +    };
>>>> +    struct hvm_ioreq_server *s = NULL;
>>>> +    enum io_state rc;
>>>> +
>>>> +    switch ( vio->io_req.state )
>>>> +    {
>>>> +    case STATE_IOREQ_NONE:
>>>> +        break;
>>>> +
>>>> +    case STATE_IORESP_READY:
>>>> +        return IO_HANDLED;
>>>> +
>>>> +    default:
>>>> +        gdprintk(XENLOG_ERR, "wrong state %u\n", vio->io_req.state);
>>>> +        return IO_ABORT;
>>>> +    }
>>>> +
>>>> +    s = hvm_select_ioreq_server(v->domain, &p);
>>>> +    if ( !s )
>>>> +        return IO_UNHANDLED;
>>>> +
>>>> +    if ( !info->dabt.valid )
>>>> +        return IO_ABORT;
>>>> +
>>>> +    vio->io_req = p;
>>>> +
>>>> +    rc = hvm_send_ioreq(s, &p, 0);
>>>> +    if ( rc != IO_RETRY || v->domain->is_shutting_down )
>>>> +        vio->io_req.state = STATE_IOREQ_NONE;
>>>> +    else if ( !hvm_ioreq_needs_completion(&vio->io_req) )
>>>> +        rc = IO_HANDLED;
>>>> +    else
>>>> +        vio->io_completion = HVMIO_mmio_completion;
>>>> +
>>>> +    return rc;
>>>> +}
>>>> +
>>>> +bool ioreq_handle_complete_mmio(void)
>>>> +{
>>>> +    struct vcpu *v = current;
>>>> +    struct cpu_user_regs *regs = guest_cpu_user_regs();
>>>> +    const union hsr hsr = { .bits = regs->hsr };
>>>> +    paddr_t addr = v->arch.hvm.hvm_io.io_req.addr;
>>>> +
>>>> +    if ( try_handle_mmio(regs, hsr, addr) == IO_HANDLED )
>>>> +    {
>>>> +        advance_pc(regs, hsr);
>>>> +        return true;
>>>> +    }
>>>> +
>>>> +    return false;
>>>> +}
>>>> +
>>>> +/*
>>>> + * Local variables:
>>>> + * mode: C
>>>> + * c-file-style: "BSD"
>>>> + * c-basic-offset: 4
>>>> + * tab-width: 4
>>>> + * indent-tabs-mode: nil
>>>> + * End:
>>>> + */
>>>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>>>> index 8f40d0e..121942c 100644
>>>> --- a/xen/arch/arm/traps.c
>>>> +++ b/xen/arch/arm/traps.c
>>>> @@ -21,6 +21,7 @@
>>>>   #include <xen/hypercall.h>
>>>>   #include <xen/init.h>
>>>>   #include <xen/iocap.h>
>>>> +#include <xen/ioreq.h>
>>>>   #include <xen/irq.h>
>>>>   #include <xen/lib.h>
>>>>   #include <xen/mem_access.h>
>>>> @@ -1384,6 +1385,9 @@ static arm_hypercall_t arm_hypercall_table[] = {
>>>>   #ifdef CONFIG_HYPFS
>>>>       HYPERCALL(hypfs_op, 5),
>>>>   #endif
>>>> +#ifdef CONFIG_IOREQ_SERVER
>>>> +    HYPERCALL(dm_op, 3),
>>>> +#endif
>>>>   };
>>>>     #ifndef NDEBUG
>>>> @@ -1955,9 +1959,14 @@ static void 
>>>> do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
>>>>               case IO_HANDLED:
>>>>                   advance_pc(regs, hsr);
>>>>                   return;
>>>> +            case IO_RETRY:
>>>> +                /* finish later */
>>>> +                return;
>>>>               case IO_UNHANDLED:
>>>>                   /* IO unhandled, try another way to handle it. */
>>>>                   break;
>>>> +            default:
>>>> +                ASSERT_UNREACHABLE();
>>>>               }
>>>>           }
>>>>   @@ -2249,12 +2258,23 @@ static void check_for_pcpu_work(void)
>>>>    * Process pending work for the vCPU. Any call should be fast or
>>>>    * implement preemption.
>>>>    */
>>>> -static void check_for_vcpu_work(void)
>>>> +static bool check_for_vcpu_work(void)
>>>>   {
>>>>       struct vcpu *v = current;
>>>>   +#ifdef CONFIG_IOREQ_SERVER
>>>> +    bool handled;
>>>> +
>>>> +    local_irq_enable();
>>>> +    handled = handle_hvm_io_completion(v);
>>>> +    local_irq_disable();
>>>> +
>>>> +    if ( !handled )
>>>> +        return true;
>>>> +#endif
>>>> +
>>>>       if ( likely(!v->arch.need_flush_to_ram) )
>>>> -        return;
>>>> +        return false;
>>>>         /*
>>>>        * Give a chance for the pCPU to process work before handling 
>>>> the vCPU
>>>> @@ -2265,6 +2285,8 @@ static void check_for_vcpu_work(void)
>>>>       local_irq_enable();
>>>>       p2m_flush_vm(v);
>>>>       local_irq_disable();
>>>> +
>>>> +    return false;
>>>>   }
>>>>     /*
>>>> @@ -2277,8 +2299,10 @@ void leave_hypervisor_to_guest(void)
>>>>   {
>>>>       local_irq_disable();
>>>>   -    check_for_vcpu_work();
>>>> -    check_for_pcpu_work();
>>>> +    do
>>>> +    {
>>>> +        check_for_pcpu_work();
>>>> +    } while ( check_for_vcpu_work() );
>>>>         vgic_sync_to_lrs();
>>>>   diff --git a/xen/include/asm-arm/domain.h 
>>>> b/xen/include/asm-arm/domain.h
>>>> index 6819a3b..d1c48d7 100644
>>>> --- a/xen/include/asm-arm/domain.h
>>>> +++ b/xen/include/asm-arm/domain.h
>>>> @@ -11,10 +11,27 @@
>>>>   #include <asm/vgic.h>
>>>>   #include <asm/vpl011.h>
>>>>   #include <public/hvm/params.h>
>>>> +#include <public/hvm/dm_op.h>
>>>> +#include <public/hvm/ioreq.h>
>>>> +
>>>> +#define MAX_NR_IOREQ_SERVERS 8
>>>>     struct hvm_domain
>>>>   {
>>>>       uint64_t              params[HVM_NR_PARAMS];
>>>> +
>>>> +    /* Guest page range used for non-default ioreq servers */
>>>> +    struct {
>>>> +        unsigned long base;
>>>> +        unsigned long mask;
>>>> +        unsigned long legacy_mask; /* indexed by HVM param number */
>>>> +    } ioreq_gfn;
>>>> +
>>>> +    /* Lock protects all other values in the sub-struct and the 
>>>> default */
>>>> +    struct {
>>>> +        spinlock_t              lock;
>>>> +        struct hvm_ioreq_server *server[MAX_NR_IOREQ_SERVERS];
>>>> +    } ioreq_server;
>>>>   };
>>>>     #ifdef CONFIG_ARM_64
>>>> @@ -91,6 +108,28 @@ struct arch_domain
>>>>   #endif
>>>>   }  __cacheline_aligned;
>>>>   +enum hvm_io_completion {
>>>> +    HVMIO_no_completion,
>>>> +    HVMIO_mmio_completion,
>>>> +    HVMIO_pio_completion
>>>> +};
>>>> +
>>>> +struct hvm_vcpu_io {
>>>> +    /* I/O request in flight to device model. */
>>>> +    enum hvm_io_completion io_completion;
>>>> +    ioreq_t                io_req;
>>>> +
>>>> +    /*
>>>> +     * HVM emulation:
>>>> +     *  Linear address @mmio_gla maps to MMIO physical frame 
>>>> @mmio_gpfn.
>>>> +     *  The latter is known to be an MMIO frame (not RAM).
>>>> +     *  This translation is only valid for accesses as per 
>>>> @mmio_access.
>>>> +     */
>>>> +    struct npfec        mmio_access;
>>>> +    unsigned long       mmio_gla;
>>>> +    unsigned long       mmio_gpfn;
>>>> +};
>>>> +
>>>
>>> Why do we need to re-define most of this? Can't this just be in 
>>> common code?
>>
>> Jan asked almost the same question in "[PATCH V1 02/16] xen/ioreq: 
>> Make x86's IOREQ feature common".
>> Please see my answer there:
>> https://patchwork.kernel.org/patch/11769105/#23637511
>>
>> Theoretically we could move this to the common code, but the question 
>> is what to do with the other fields that x86's struct hvm_vcpu_io 
>> has/needs but Arm's seemingly does not. Would it be possible to 
>> logically split struct hvm_vcpu_io into common and arch-specific 
>> parts?
>>
>> struct hvm_vcpu_io {
>>      /* I/O request in flight to device model. */
>>      enum hvm_io_completion io_completion;
>>      ioreq_t                io_req;
>>
>>      /*
>>       * HVM emulation:
>>       *  Linear address @mmio_gla maps to MMIO physical frame 
>> @mmio_gpfn.
>>       *  The latter is known to be an MMIO frame (not RAM).
>>       *  This translation is only valid for accesses as per 
>> @mmio_access.
>>       */
>>      struct npfec        mmio_access;
>>      unsigned long       mmio_gla;
>>      unsigned long       mmio_gpfn;
>>
>>      /*
>>       * We may need to handle up to 3 distinct memory accesses per
>>       * instruction.
>>       */
>>      struct hvm_mmio_cache mmio_cache[3];
>>      unsigned int mmio_cache_count;
>>
>>      /* For retries we shouldn't re-fetch the instruction. */
>>      unsigned int mmio_insn_bytes;
>>      unsigned char mmio_insn[16];
>>      struct hvmemul_cache *cache;
>>
>>      /*
>>       * For string instruction emulation we need to be able to signal a
>>       * necessary retry through other than function return codes.
>>       */
>>      bool_t mmio_retry;
>>
>>      unsigned long msix_unmask_address;
>>      unsigned long msix_snoop_address;
>>      unsigned long msix_snoop_gpa;
>>
>>      const struct g2m_ioport *g2m_ioport;
>> };
>
> I think Jan made some suggestion today. Let me know if you require 
> more input.


Yes, I am considering this now. I shared my thoughts on that a little 
earlier; could you please reply there?
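For what it's worth, one possible shape of such a common/arch split can be sketched as below. This is purely illustrative: the stand-in types and the struct names are placeholders, not Xen's actual definitions, and the real ioreq_t/npfec types live in Xen headers.

```c
/* Stand-ins for the real Xen types, just to make the sketch compile. */
typedef struct { unsigned long addr; } ioreq_t;
struct npfec { unsigned int kind; };

enum hvm_io_completion {
    HVMIO_no_completion,
    HVMIO_mmio_completion,
    HVMIO_pio_completion
};

/* Fields needed by the common IOREQ code on every architecture. */
struct hvm_vcpu_io_common {
    enum hvm_io_completion io_completion;
    ioreq_t                io_req;
};

/* x86 would embed the common part and keep its extra fields ... */
struct hvm_vcpu_io_x86 {
    struct hvm_vcpu_io_common common;
    /* x86-only emulation state would follow here, e.g. the mmio_cache,
     * mmio_insn and msix_* fields from the struct quoted above. */
    unsigned int mmio_insn_bytes;
};

/* ... while Arm would embed the same common part plus its MMIO fields. */
struct hvm_vcpu_io_arm {
    struct hvm_vcpu_io_common common;
    struct npfec        mmio_access;
    unsigned long       mmio_gla;
    unsigned long       mmio_gpfn;
};
```

The common IOREQ code would then only ever touch the embedded common part, while each architecture remains free to grow its own tail.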


-- 
Regards,

Oleksandr Tyshchenko



