From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>,
	"sstabellini@kernel.org" <sstabellini@kernel.org>,
	"roger.pau@citrix.com" <roger.pau@citrix.com>
Cc: Oleksandr Tyshchenko <Oleksandr_Tyshchenko@epam.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Artem Mygaiev <Artem_Mygaiev@epam.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"george.dunlap@citrix.com" <george.dunlap@citrix.com>,
	"paul@xen.org" <paul@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Rahul Singh <rahul.singh@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
Subject: Re: [PATCH v4 02/11] vpci: cancel pending map/unmap on vpci removal
Date: Thu, 18 Nov 2021 12:57:33 +0000	[thread overview]
Message-ID: <4e6828d7-512c-2217-8c9a-a45f85a9d07d@epam.com> (raw)
In-Reply-To: <8168668f-c89b-cc5b-295f-1d472164a5f6@xen.org>

Hi, Julien!

On 16.11.21 20:02, Julien Grall wrote:
> Hi Oleksandr,
>
> On 16/11/2021 14:24, Oleksandr Andrushchenko wrote:
>>
>>
>> On 16.11.21 16:12, Jan Beulich wrote:
>>> On 16.11.2021 14:41, Oleksandr Andrushchenko wrote:
>>>>
>>>> On 16.11.21 10:23, Oleksandr Andrushchenko wrote:
>>>>> On 16.11.21 10:01, Jan Beulich wrote:
>>>>>> On 16.11.2021 08:32, Oleksandr Andrushchenko wrote:
>>>>>>> On 15.11.21 18:56, Jan Beulich wrote:
>>>>>>>> On 05.11.2021 07:56, Oleksandr Andrushchenko wrote:
>>>>>>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>>>>>>>
>>>>>>>>> When a vPCI is removed for a PCI device it is possible that we have
>>>>>>>>> scheduled a delayed work for map/unmap operations for that device.
>>>>>>>>> For example, the following scenario can illustrate the problem:
>>>>>>>>>
>>>>>>>>> pci_physdev_op
>>>>>>>>>         pci_add_device
>>>>>>>>>             init_bars -> modify_bars -> defer_map -> raise_softirq(SCHEDULE_SOFTIRQ)
>>>>>>>>>         iommu_add_device <- FAILS
>>>>>>>>>         vpci_remove_device -> xfree(pdev->vpci)
>>>>>>>>>
>>>>>>>>> leave_hypervisor_to_guest
>>>>>>>>>         vpci_process_pending: v->vpci.mem != NULL; v->vpci.pdev->vpci == NULL
>>>>>>>>>
>>>>>>>>> For the hardware domain we continue execution, as the worst that
>>>>>>>>> could happen is that MMIO mappings are left in place when the
>>>>>>>>> device has been deassigned.
>>>>>>>> Is continuing safe in this case? I.e. isn't there the risk of a NULL
>>>>>>>> deref?
>>>>>>> I think it is safe to continue
>>>>>> And why do you think so? I.e. why is there no race for Dom0 when there
>>>>>> is one for DomU?
>>>>> Well, then we need to use a lock to synchronize the two.
>>>>> I guess this needs to be the pcidevs lock, unfortunately.
>>>> The parties involved in deferred work and its cancellation:
>>>>
>>>> MMIO trap -> vpci_write -> vpci_cmd_write -> modify_bars -> defer_map
>>>>
>>>> Arm: leave_hypervisor_to_guest -> check_for_vcpu_work -> vpci_process_pending
>>>>
>>>> x86: two places -> hvm_do_resume -> vpci_process_pending
>>>>
>>>> So, both defer_map and vpci_process_pending need to be synchronized with
>>>> pcidevs_{lock|unlock}.
>>> If I were an Arm maintainer, I'm afraid I would object to the pcidevs lock
>>> getting used in leave_hypervisor_to_guest.
>> I do agree this is really not good, but it seems I am limited in choices.
>> @Stefano, @Julien, do you see any better way of doing that?
>
> I agree with Jan's concern about using pcidevs_{lock|unlock}. The lock is not fine-grained enough to be used in vpci_process_pending().
>
> I haven't yet looked at the rest of the series to be able to suggest the exact lock. But we at least want a per-domain spinlock.
>
>>
>> We were thinking about introducing a dedicated lock for vpci [1],
>> but finally decided to use pcidevs_lock for now
>
> Skimming through the thread, you decided to use pcidevs_lock because it was simpler and sufficient for the use case discussed back then. Now, we have a use case where it would be a problem to use pcidevs_lock. So I think the extra complexity is justified.
I would like to understand what this lock should be, so that I can implement it properly.
As far as I can see, we have the following options:

1. pcidevs_{lock|unlock} - considered too heavy, per host
2. pdev->vpci->lock - better, but still heavy, per PCI device
3. We may convert pdev->vpci->lock into an r/w lock
4. We may introduce a dedicated lock, e.g. the per-domain one Julien suggests (a rough sketch follows below)
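
To make option 4 a bit more concrete, below is a very rough sketch of what
Julien's per-domain lock could look like. Please note this is only my
assumption of how it might be wired up: the vpci_lock name and its placement
in struct domain do not exist in the tree.

    /* xen/include/xen/sched.h (assumed placement) */
    struct domain {
        /* ... existing fields ... */

        /*
         * Serializes vPCI init/removal against the deferred map/unmap
         * work done in vpci_process_pending().
         */
        spinlock_t vpci_lock;
    };

    /* At domain creation time: */
    spin_lock_init(&d->vpci_lock);

vpci_remove_device() and vpci_process_pending() would then both take
d->vpci_lock, so the latter can safely re-check pdev->vpci before touching it.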

To better understand the scope of the lock:
1. MMIO trap handlers (vpci_{read|write}) - already protected with pdev->vpci->lock
2. vpci_process_pending (SOFTIRQ context; see the sketch after this list)
3. Hypercalls which call pci_{add|remove|assign|deassign}_device
4. @Roger, did I miss something?
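
To illustrate the vpci_process_pending side (item 2 above), here is a sketch of
how the check could look. d->vpci_lock is the hypothetical per-domain lock from
the sketch above; v->vpci.mem and v->vpci.pdev are the fields the current code
already uses, and the -ERESTART/continuation handling of the existing code is
omitted for brevity:

    bool vpci_process_pending(struct vcpu *v)
    {
        struct domain *d = v->domain;

        if ( !v->vpci.mem )
            return false;

        spin_lock(&d->vpci_lock);            /* hypothetical lock */

        if ( !v->vpci.pdev->vpci )
        {
            /*
             * vPCI was removed while map/unmap work was still pending:
             * drop the deferred work instead of dereferencing the
             * already freed state.
             */
            rangeset_destroy(v->vpci.mem);
            v->vpci.mem = NULL;
            spin_unlock(&d->vpci_lock);
            return false;
        }

        /*
         * The existing rangeset_consume_ranges()/modify_decoding()
         * processing would run here, now protected against a concurrent
         * vpci_remove_device().
         */

        spin_unlock(&d->vpci_lock);

        return false;
    }

The point is only that the NULL check and the map/unmap processing happen under
the same lock that vpci_remove_device() would take.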

And I feel that the locking change deserves a dedicated patch: I am not sure it is a
good idea to fold it into this one, to which it is not really relevant.
>
> Cheers,
>
