linux-kernel.vger.kernel.org archive mirror
From: Alex Williamson <alex.williamson@redhat.com>
To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>,
	liulongfang <liulongfang@huawei.com>,
	"cohuck@redhat.com" <cohuck@redhat.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linuxarm@openeuler.org" <linuxarm@openeuler.org>
Subject: Re: [Linuxarm]  Re: [RFC PATCH 2/3] vfio/hisilicon: register the driver to vfio
Date: Thu, 13 May 2021 12:22:32 -0600	[thread overview]
Message-ID: <20210513122232.589d24d8@redhat.com> (raw)
In-Reply-To: <1035a9a9b03b43dd9f859136ed84a7f8@huawei.com>

On Thu, 13 May 2021 17:52:56 +0000
Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com> wrote:

> Hi Alex,
> 
> > -----Original Message-----
> > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > Sent: 13 May 2021 18:04
> > To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
> > Cc: Jason Gunthorpe <jgg@nvidia.com>; liulongfang
> > <liulongfang@huawei.com>; cohuck@redhat.com;
> > linux-kernel@vger.kernel.org; linuxarm@openeuler.org
> > Subject: [Linuxarm] Re: [RFC PATCH 2/3] vfio/hisilicon: register the driver to
> > vfio
> > 
> > On Thu, 13 May 2021 15:49:25 +0000
> > Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
> > wrote:
> >   
> > > > -----Original Message-----
> > > > From: Jason Gunthorpe [mailto:jgg@nvidia.com]
> > > > Sent: 13 May 2021 14:44
> > > > To: liulongfang <liulongfang@huawei.com>
> > > > Cc: Alex Williamson <alex.williamson@redhat.com>; cohuck@redhat.com;
> > > > linux-kernel@vger.kernel.org; linuxarm@openeuler.org
> > > > Subject: [Linuxarm] Re: [RFC PATCH 2/3] vfio/hisilicon: register the driver to
> > > > vfio
> > > >
> > > > On Thu, May 13, 2021 at 10:08:28AM +0800, liulongfang wrote:  
> > > > > On 2021/5/12 20:10, Jason Gunthorpe wrote:  
> > > > > > On Wed, May 12, 2021 at 04:39:43PM +0800, liulongfang wrote:
> > > > > >  
> > > > > >> Therefore, limiting the visible length of the BAR
> > > > > >> can prevent unsafe accesses to that memory.  
> > > > > >
> > > > > > The issue is DMA controlled by the guest accessing the secure BAR
> > > > > > area, not the guest CPU.
> > > > > >
> > > > > > Jason
> > > > > > .
> > > > > >  
> > > > > The secure BAR area is not presented to the guest, so the guest
> > > > > cannot include the secure BAR area when establishing DMA mappings
> > > > > for the configuration space. If the DMA controller accesses the
> > > > > secure BAR area, the access will be blocked by the SMMU.  
> > > >
> > > > There are scenarios where this is not true.
> > > >
> > > > At a minimum the mdev driver should refuse to work in those cases.
> > > >  
> > >
> > > Hi,
> > >
> > > I think the idea here is not a generic solution, but a quirk for this specific device.
> > >
> > > Something like,
> > >
> > > --- a/drivers/vfio/pci/vfio_pci.c
> > > +++ b/drivers/vfio/pci/vfio_pci.c
> > > @@ -866,7 +866,12 @@ static long vfio_pci_ioctl(struct vfio_device *core_vdev,
> > >                         break;
> > >                 case VFIO_PCI_BAR0_REGION_INDEX ... VFIO_PCI_BAR5_REGION_INDEX:
> > >                         info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
> > > -                       info.size = pci_resource_len(pdev, info.index);
> > > +
> > > +                       if (check_hisi_acc_quirk(pdev, info))
> > > +                               info.size = new_size; /* BAR is limited without migration region */
> > > +                       else
> > > +                               info.size = pci_resource_len(pdev, info.index);
> > > +
> > >                         if (!info.size) {
> > >                                 info.flags = 0;
> > >                                 break;
> > > Is this an acceptable/workable solution here?  
> > 
> > As Jason says, this only restricts CPU access to the BAR, the issue is
> > DMA access.  As the hardware vendor you may be able to guarantee that
> > a DMA transaction generated by the device targeting the remainder of
> > the BAR will always go upstream, but can you guarantee the routing
> > between the device and the SMMU?  For instance if this device can be
> > implemented as a plugin card, then it can be installed into a
> > downstream port that may not support ACS.  That downstream port may
> > implement request redirection allowing the transaction to reflect back
> > to the device without IOMMU translation.  At that point the userspace
> > driver can target the kernel driver half of the BAR and potentially
> > expose a security risk.  Thanks,  
> 
> The ACC devices on this platform are not pluggable; they are exposed
> as integrated endpoint devices. So I am not sure the above concern applies in
> this case.
> 
> I had a look at the userspace driver approach you suggested, but unfortunately
> the migration state change for the VF has to check some of the PF registers to
> confirm the state. So even if we move the implementation to QEMU, we may
> still have to use the migration uAPI to access the PF device registers.
> 
> Since the devices we are concerned with here are all integrated endpoints,
> and if the above quirk is acceptable, we can use the uAPI as done in this
> series without overly complicating things.

If you expect this device to appear only as an integrated endpoint, then
I think Jason's suggestion above is correct.  Your driver that supports
migration can refuse to load for devices where the topology is other
than expected, and you're effectively guaranteeing DMA isolation of the
user and in-kernel drivers by hardware DMA semantics and topology.

Requiring access to the PF to support the migration protocol also
suggests that an in-kernel driver to support migration is our best
option.  Thanks,

Alex



Thread overview: 32+ messages
2021-04-13  3:36 [RFC PATCH 0/3] vfio/hisilicon: add acc live migration driver Longfang Liu
2021-04-13  3:36 ` [RFC PATCH 1/3] " Longfang Liu
2021-04-13  3:36 ` [RFC PATCH 2/3] vfio/hisilicon: register the driver to vfio Longfang Liu
2021-04-15 22:01   ` Jason Gunthorpe
2021-04-19 12:24     ` liulongfang
2021-04-19 12:33       ` Jason Gunthorpe
2021-04-20 12:50         ` liulongfang
2021-04-20 12:59           ` Jason Gunthorpe
2021-04-20 13:28             ` liulongfang
2021-04-20 14:55               ` Jason Gunthorpe
2021-04-20 22:04             ` Alex Williamson
2021-04-20 23:18               ` Jason Gunthorpe
2021-04-21  9:59                 ` liulongfang
2021-04-21  9:59               ` liulongfang
2021-04-21 18:12                 ` Alex Williamson
2021-04-26 11:49                   ` liulongfang
2021-05-12  8:39                   ` liulongfang
2021-05-12 12:10                     ` Jason Gunthorpe
2021-05-13  2:08                       ` liulongfang
2021-05-13 13:44                         ` Jason Gunthorpe
2021-05-13 15:49                           ` [Linuxarm] " Shameerali Kolothum Thodi
2021-05-13 17:03                             ` Alex Williamson
2021-05-13 17:52                               ` Shameerali Kolothum Thodi
2021-05-13 18:22                                 ` Alex Williamson [this message]
2021-05-13 18:31                                   ` Shameerali Kolothum Thodi
2021-05-13 18:34                                   ` Jason Gunthorpe
2021-05-13 18:24                                 ` Jason Gunthorpe
2021-05-13 18:35                                   ` Shameerali Kolothum Thodi
2021-05-27 10:11                                   ` Shameerali Kolothum Thodi
2021-05-27 10:30                                     ` Max Gurtovoy
2021-04-13  3:36 ` [RFC PATCH 3/3] vfio/hisilicom: add debugfs for driver Longfang Liu
2021-04-15 21:35 ` [RFC PATCH 0/3] vfio/hisilicon: add acc live migration driver Alex Williamson
