From: liulongfang <liulongfang@huawei.com>
To: Alex Williamson <alex.williamson@redhat.com>,
	Jason Gunthorpe <jgg@nvidia.com>
Cc: <cohuck@redhat.com>, <linux-kernel@vger.kernel.org>,
	<linuxarm@openeuler.org>
Subject: Re: [RFC PATCH 2/3] vfio/hisilicon: register the driver to vfio
Date: Wed, 21 Apr 2021 17:59:02 +0800
Message-ID: <25d033e6-1cba-0da0-2ee7-03a14e75b8a5@huawei.com>
In-Reply-To: <20210420160457.6b91850a@x1.home.shazbot.org>

On 2021/4/21 6:04, Alex Williamson wrote:
> On Tue, 20 Apr 2021 09:59:57 -0300
> Jason Gunthorpe <jgg@nvidia.com> wrote:
> 
>> On Tue, Apr 20, 2021 at 08:50:12PM +0800, liulongfang wrote:
>>> On 2021/4/19 20:33, Jason Gunthorpe wrote:  
>>>> On Mon, Apr 19, 2021 at 08:24:40PM +0800, liulongfang wrote:
>>>>   
>>>>>> I'm also confused how this works securely at all, as a general rule a
>>>>>> VFIO PCI driver cannot access the MMIO memory of the function it is
>>>>>> planning to assign to the guest. There is a lot of danger that the
>>>>>> guest could access that MMIO space one way or another.  
>>>>>
>>>>> VF's MMIO memory is divided into two parts, one is the guest part,
>>>>> and the other is the live migration part. They do not affect each other,
>>>>> so there is no security problem.  
>>>>
>>>> AFAIK there are several scenarios where a guest can access this MMIO
>>>> memory using DMA even if it is not mapped into the guest for CPU
>>>> access.
>>>>   
>>> The hardware divides the VF's MMIO memory into two parts: the live migration
>>> driver in the host uses the live migration part, and the device driver in
>>> the guest uses the guest part. Each driver obtains the address of its own
>>> part of the VF's MMIO memory. Although the two parts are contiguous on the
>>> hardware device, by design each driver never operates on the other part,
>>> and the device hardware responds to operation commands from the two parts
>>> independently.
>>
>> It doesn't matter, the memory is still under the same PCI BDF and VFIO
>> supports scenarios where devices in the same IOMMU group are not
>> isolated from each other.
>>
>> This is why the granularity of isolation is a PCI BDF - VFIO directly
>> blocks kernel drivers from attaching to PCI BDFs that are not
>> completely isolated from the VFIO BDF.
>>
>> Bypassing this prevention and attaching a kernel driver directly to
>> the same BDF being exposed to the guest breaks that isolation model.
>>
>>> So, I still don't understand what the security risk you are talking about is,
>>> and what do you think the security design should look like?
>>> Can you elaborate on it?  
>>
>> Each security domain must have its own PCI BDF.
>>
>> The migration control registers must be on a different VF from the VF
>> being plugged into a guest and the two VFs have to be in different
>> IOMMU groups to ensure they are isolated from each other.
> 
> I think that's a solution, I don't know if it's the only solution.
> AIUI, the issue here is that we have a device specific kernel driver
> extending vfio-pci with migration support for this device by using an

If the two parts of the MMIO region are split into separate BARs on the
device, the MMIO region of the business function stays in BAR2 and the
MMIO region of the live migration function moves to BAR4. Only BAR2 is
mapped into the guest, and only BAR4 is mapped in the host.
This would solve the security issue.
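To make that split concrete, a minimal sketch of the host side is below. It
is only an illustration of the idea, not the posted driver: the structure and
function names (hisi_acc_vf_mig, HISI_ACC_VF_MIG_BAR) are hypothetical, the
BAR index is the assumption stated above, and the only kernel APIs used are
the standard pci_ioremap_bar()/iounmap(). The host-side migration code maps
BAR4 alone and never touches the guest-owned BAR2:

/*
 * Hypothetical sketch, not the posted driver: the host-side live
 * migration code maps only the migration control BAR (assumed to be
 * BAR4 here) and never touches BAR2, which stays guest-owned.
 */
#include <linux/pci.h>
#include <linux/io.h>

#define HISI_ACC_VF_MIG_BAR	4	/* assumed BAR index for migration regs */

struct hisi_acc_vf_mig {
	struct pci_dev *pdev;
	void __iomem *mig_base;		/* BAR4 mapping; BAR2 is left alone */
};

static int hisi_acc_vf_mig_map(struct hisi_acc_vf_mig *mig,
			       struct pci_dev *pdev)
{
	/* pci_ioremap_bar() maps the whole BAR and checks that it is MMIO */
	mig->mig_base = pci_ioremap_bar(pdev, HISI_ACC_VF_MIG_BAR);
	if (!mig->mig_base)
		return -ENOMEM;

	mig->pdev = pdev;
	return 0;
}

static void hisi_acc_vf_mig_unmap(struct hisi_acc_vf_mig *mig)
{
	iounmap(mig->mig_base);
	mig->mig_base = NULL;
}

With the regions in separate BARs, the guest mapping is built from BAR2 only,
so nothing the guest can reach overlaps the range the host-side code touches.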

> MMIO region of the same device.  This is susceptible to DMA
> manipulation by the user device.  Whether that's a security issue or
> not depends on how the user can break the device.  If the scope is
> limited to breaking their own device, they can do that any number of
> ways and it's not very interesting.  If the user can manipulate device
> state in order to trigger an exploit of the host-side kernel driver,
> that's obviously more of a problem.
> 
> The other side of this is that if migration support can be implemented
> entirely within the VF using this portion of the device MMIO space, why
> do we need the host kernel to support this rather than implementing it
> in userspace?  For example, QEMU could know about this device,
> manipulate the BAR size to expose only the operational portion of MMIO
> to the VM and use the remainder to support migration itself.  I'm
> afraid that just like mdev, the vfio migration uAPI is going to be used
> as an excuse to create kernel drivers simply to be able to make use of
> that uAPI.  I haven't looked at this driver to know if it has some

The accelerator device was designed to implement live migration on top of
the migration-region uAPI, so the live migration function needs a driver
that hooks into that uAPI.
Or is this set of interfaces not open to us now?
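For reference, the migration-region uAPI in question is the one declared in
include/uapi/linux/vfio.h; a trimmed, approximate sketch of the region header
it defines is below (only the device_state bits and the data-window fields
are shown, so treat it as an outline rather than the authoritative
definition):

/*
 * Approximate outline of the vfio migration region header from
 * include/uapi/linux/vfio.h (trimmed; see the header for the
 * authoritative definition and the full state machine).
 */
struct vfio_device_migration_info {
	__u32 device_state;		/* running / saving / resuming bits */
#define VFIO_DEVICE_STATE_STOP		(0)
#define VFIO_DEVICE_STATE_RUNNING	(1 << 0)
#define VFIO_DEVICE_STATE_SAVING	(1 << 1)
#define VFIO_DEVICE_STATE_RESUMING	(1 << 2)
	__u32 reserved;
	__u64 pending_bytes;		/* device state still left to read out */
	__u64 data_offset;		/* offset of the data window in the region */
	__u64 data_size;		/* valid bytes at data_offset */
};

A vendor driver exposes a region of this type to userspace; the migration
tool drives device_state and drains pending_bytes through the data window.
That is why the function has to be implemented somewhere that can back this
region, whether that is a kernel driver or, as Alex suggests, userspace code
that knows the device.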

> other reason to exist beyond what could be done through vfio-pci and
> userspace migration support.  Thanks,
> 
> Alex
> 
> .
> 
Thanks,
Longfang.

