From: Jan Beulich <jbeulich@suse.com>
To: Oleksandr <olekstysh@gmail.com>
Cc: "Oleksandr Tyshchenko" <oleksandr_tyshchenko@epam.com>,
	"Paul Durrant" <paul@xen.org>,
	"Andrew Cooper" <andrew.cooper3@citrix.com>,
	"Roger Pau Monné" <roger.pau@citrix.com>, "Wei Liu" <wl@xen.org>,
	"Julien Grall" <julien@xen.org>,
	"Stefano Stabellini" <sstabellini@kernel.org>,
	"Julien Grall" <julien.grall@arm.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH V3 01/23] x86/ioreq: Prepare IOREQ feature for making it common
Date: Mon, 7 Dec 2020 17:29:14 +0100
Message-ID: <2d83a093-29d3-5870-0814-229cc7f1c04b@suse.com>
In-Reply-To: <029c3dcc-fac2-5b65-703e-5d821335f2a0@gmail.com>

On 07.12.2020 16:27, Oleksandr wrote:
> On 07.12.20 13:13, Jan Beulich wrote:
>> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>>> @@ -601,7 +610,7 @@ static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s)
>>>       return rc;
>>>   }
>>>   
>>> -static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
>>> +void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
>>>   {
>>>       hvm_unmap_ioreq_gfn(s, true);
>>>       hvm_unmap_ioreq_gfn(s, false);
>> How is this now different from ...
>>
>>> @@ -674,6 +683,12 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
>>>       return rc;
>>>   }
>>>   
>>> +void arch_ioreq_server_enable(struct hvm_ioreq_server *s)
>>> +{
>>> +    hvm_remove_ioreq_gfn(s, false);
>>> +    hvm_remove_ioreq_gfn(s, true);
>>> +}
>> ... this? Imo if at all possible there should be no such duplication
>> (i.e. at least have this one simply call the earlier one).
> 
> I am afraid I don't see any duplication between the mentioned
> functions. Would you mind explaining?

Ouch - somehow my eyes considered "unmap" == "remove". I'm sorry
for the noise.

>>> @@ -1080,6 +1105,24 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
>>>       return rc;
>>>   }
>>>   
>>> +/* Called with ioreq_server lock held */
>>> +int arch_ioreq_server_map_mem_type(struct domain *d,
>>> +                                   struct hvm_ioreq_server *s,
>>> +                                   uint32_t flags)
>>> +{
>>> +    int rc = p2m_set_ioreq_server(d, flags, s);
>>> +
>>> +    if ( rc == 0 && flags == 0 )
>>> +    {
>>> +        const struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>> +
>>> +        if ( read_atomic(&p2m->ioreq.entry_count) )
>>> +            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
>>> +    }
>>> +
>>> +    return rc;
>>> +}
>>> +
>>>   /*
>>>    * Map or unmap an ioreq server to specific memory type. For now, only
>>>    * HVMMEM_ioreq_server is supported, and in the future new types can be
>>> @@ -1112,19 +1155,11 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
>>>       if ( s->emulator != current->domain )
>>>           goto out;
>>>   
>>> -    rc = p2m_set_ioreq_server(d, flags, s);
>>> +    rc = arch_ioreq_server_map_mem_type(d, s, flags);
>>>   
>>>    out:
>>>       spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>>>   
>>> -    if ( rc == 0 && flags == 0 )
>>> -    {
>>> -        struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>> -
>>> -        if ( read_atomic(&p2m->ioreq.entry_count) )
>>> -            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
>>> -    }
>>> -
>>>       return rc;
>>>   }
>> While you mention this change in the description, I'm still
>> missing justification as to why the change is safe to make. I
>> continue to think p2m_change_entry_type_global() would better
>> not be called inside the locked region, if at all possible.
> Well, I am afraid I don't have a 100% justification that the change is
> safe to make, but I also don't see an obvious reason why it would be
> unsafe (at least I didn't find a possible deadlock scenario while
> investigating the code).
> I raised a question earlier whether I could fold this check in, which
> implied calling p2m_change_entry_type_global() with the ioreq_server
> lock held.

I'm aware of the earlier discussion. But "didn't find" isn't good
enough in a case like this, and since it's likely hard to indeed
prove there's no deadlock possible, I think it's best to avoid
having to provide such a proof by avoiding the nesting.
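
(FWIW the generic pattern to worry about is the usual ABBA one. Purely
illustrative, I'm not claiming these exact paths exist:)

/* Path 1 - this function, with the change in question: */
spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
/* ... */
p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw); /* takes p2m lock */
/* ... */
spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);

/* Path 2 - hypothetical: */
p2m_lock(p2m);
/* ... something which ends up wanting the ioreq server lock ... */
spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);

Once each path holds its first lock, neither can make progress, and
ruling out the existence of any such second path is exactly the proof
I'd rather not have to provide.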

> If there is a concern with calling this inside the locked region
> (unfortunately still unclear to me at the moment), I will try to find
> another way to split hvm_map_mem_type_to_ioreq_server() without making
> a potentially unsafe change here *and* without exposing
> p2m_change_entry_type_global() to the common code. Right now, I don't
> have any ideas on how this could be split other than introducing one
> more hook here to deal with p2m_change_entry_type_global()
> (probably arch_ioreq_server_map_mem_type_complete?), but I don't
> expect it to be accepted.
> I would appreciate any ideas on that.

Is there a reason why the simplest solution (two independent
arch_*() calls) won't do? If so, what are the constraints?
Can the first one e.g. somehow indicate what needs to happen
after the lock was dropped? But the two calls look independent
right now, so I don't see any complicating factors.
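
Concretely, something along these lines is what I'd picture (using the
arch_ioreq_server_map_mem_type_complete() name you floated above; this
is only a sketch, the argument checking is elided, and whether using s
after the lock was dropped is safe would of course need checking):

int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
                                     uint32_t type, uint32_t flags)
{
    struct hvm_ioreq_server *s;
    int rc;

    /* ... argument checking as today ... */

    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);

    s = get_ioreq_server(d, id);

    rc = -ENOENT;
    if ( !s )
        goto out;

    rc = -EPERM;
    if ( s->emulator != current->domain )
        goto out;

    /* First hook: just the p2m_set_ioreq_server() part, lock held. */
    rc = arch_ioreq_server_map_mem_type(d, s, flags);

 out:
    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);

    /* Second hook: runs strictly after the lock was dropped. */
    if ( rc == 0 )
        arch_ioreq_server_map_mem_type_complete(d, s, flags);

    return rc;
}

/* The second hook would then hold what today sits after the unlock: */
void arch_ioreq_server_map_mem_type_complete(struct domain *d,
                                             struct hvm_ioreq_server *s,
                                             uint32_t flags)
{
    if ( flags == 0 )
    {
        const struct p2m_domain *p2m = p2m_get_hostp2m(d);

        if ( read_atomic(&p2m->ioreq.entry_count) )
            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
    }
}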

>>> --- a/xen/include/asm-x86/hvm/ioreq.h
>>> +++ b/xen/include/asm-x86/hvm/ioreq.h
>>> @@ -19,6 +19,25 @@
>>>   #ifndef __ASM_X86_HVM_IOREQ_H__
>>>   #define __ASM_X86_HVM_IOREQ_H__
>>>   
>>> +#define HANDLE_BUFIOREQ(s) \
>>> +    ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
>>> +
>>> +bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion);
>>> +int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s);
>>> +void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s);
>>> +void arch_ioreq_server_enable(struct hvm_ioreq_server *s);
>>> +void arch_ioreq_server_disable(struct hvm_ioreq_server *s);
>>> +void arch_ioreq_server_destroy(struct hvm_ioreq_server *s);
>>> +int arch_ioreq_server_map_mem_type(struct domain *d,
>>> +                                   struct hvm_ioreq_server *s,
>>> +                                   uint32_t flags);
>>> +bool arch_ioreq_server_destroy_all(struct domain *d);
>>> +int arch_ioreq_server_get_type_addr(const struct domain *d,
>>> +                                    const ioreq_t *p,
>>> +                                    uint8_t *type,
>>> +                                    uint64_t *addr);
>>> +void arch_ioreq_domain_init(struct domain *d);
>>> +
>>>   bool hvm_io_pending(struct vcpu *v);
>>>   bool handle_hvm_io_completion(struct vcpu *v);
>>>   bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
>> What's the plan here? Introduce them into the x86 header just
>> to later move the entire block into the common one? Wouldn't
>> it make sense to introduce the common header here right away?
>> Or do you expect to convert some of the simpler ones to inline
>> functions later on?
> The former. The subsequent patch moves the entire block(s) from here
> and from x86/hvm/ioreq.c to the common code in one go.

I think I saw it move the _other_ pieces there, and this block
left here. (FAOD my comment is about the arch_*() declarations
you add, not the patch context in view.)

> I thought it was a little bit odd to expose a header before exposing
> an implementation to the common code. Another reason is to minimize
> the number of places the current patch needs to touch.

By exposing arch_*() declarations you don't give the impression
of exposing any "implementation". These are helpers the
implementation is to invoke; I'm fine with you moving the
declarations of the functions actually constituting this
component's external interface only once you also move the
implementation to common code.
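
(Just to illustrate the distinction, a possible shape for such a common
header; path, guard, and includes below are purely illustrative:)

/* e.g. xen/include/xen/ioreq.h */
#ifndef __XEN_IOREQ_H__
#define __XEN_IOREQ_H__

#include <xen/types.h>
#include <public/hvm/ioreq.h>   /* ioreq_t */
/*
 * enum hvm_io_completion still lives in an arch header at this point
 * in the series, which is a wrinkle this sketch doesn't resolve.
 */

struct domain;
struct hvm_ioreq_server;

bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion);
int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s);
void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s);
void arch_ioreq_server_enable(struct hvm_ioreq_server *s);
void arch_ioreq_server_disable(struct hvm_ioreq_server *s);
void arch_ioreq_server_destroy(struct hvm_ioreq_server *s);
int arch_ioreq_server_map_mem_type(struct domain *d,
                                   struct hvm_ioreq_server *s,
                                   uint32_t flags);
bool arch_ioreq_server_destroy_all(struct domain *d);
int arch_ioreq_server_get_type_addr(const struct domain *d,
                                    const ioreq_t *p,
                                    uint8_t *type,
                                    uint64_t *addr);
void arch_ioreq_domain_init(struct domain *d);

#endif /* __XEN_IOREQ_H__ */

The arch_*() hook declarations could live there from the start, while
the hvm_*() ones follow only together with the implementation.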

Jan

