From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 9 Mar 2021 23:08:37 -0700
From: Alex Williamson
To: Jason Gunthorpe
Cc: Peter Xu, "Zengtao (B)", Cornelia Huck, Kevin Tian, Andrew Morton,
        Giovanni Cabiddu, Michel Lespinasse, Jann Horn, Max Gurtovoy,
        kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Linuxarm
Subject: Re: [PATCH] vfio/pci: make the vfio_pci_mmap_fault reentrant
Message-ID: <20210309230837.394cb101@x1.home.shazbot.org>
In-Reply-To: <20210309234127.GM2356281@nvidia.com>
References: <1615201890-887-1-git-send-email-prime.zeng@hisilicon.com>
        <20210308132106.49da42e2@omen.home.shazbot.org>
        <20210308225626.GN397383@xz-x1>
        <6b98461600f74f2385b9096203fa3611@hisilicon.com>
        <20210309124609.GG2356281@nvidia.com>
        <20210309082951.75f0eb01@x1.home.shazbot.org>
        <20210309164004.GJ2356281@nvidia.com>
        <20210309184739.GD763132@xz-x1>
        <20210309122607.0b68fb9b@omen.home.shazbot.org>
        <20210309234127.GM2356281@nvidia.com>
Organization: Red Hat
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, 9 Mar 2021 19:41:27 -0400
Jason Gunthorpe wrote:

> On Tue, Mar 09, 2021 at 12:26:07PM -0700, Alex Williamson wrote:
> 
> > > In the new series, I think the fault handler becomes (untested):
> > 
> > static vm_fault_t vfio_pci_mmap_fault(struct vm_fault *vmf)
> > {
> > 	struct vm_area_struct *vma = vmf->vma;
> > 	struct vfio_pci_device *vdev = vma->vm_private_data;
> > 	unsigned long base_pfn, pgoff;
> > 	vm_fault_t ret = VM_FAULT_SIGBUS;
> > 
> > 	if (vfio_pci_bar_vma_to_pfn(vma,
> > 				    &base_pfn))
> > 		return ret;
> > 
> > 	pgoff = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
> 
> I don't think this math is completely safe, it needs to parse the
> vm_pgoff..
> 
> I'm worried userspace could split/punch/mangle a VMA using
> munmap/mremap/etc/etc in a way that does update the pg_off but is
> incompatible with the above.

Parsing vm_pgoff is done in:

static int vfio_pci_bar_vma_to_pfn(struct vm_area_struct *vma,
				   unsigned long *pfn)
{
	struct vfio_pci_device *vdev = vma->vm_private_data;
	struct pci_dev *pdev = vdev->pdev;
	int index;
	u64 pgoff;

	index = vma->vm_pgoff >> (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT);

	if (index >= VFIO_PCI_ROM_REGION_INDEX ||
	    !vdev->bar_mmap_supported[index] || !vdev->barmap[index])
		return -EINVAL;

	pgoff = vma->vm_pgoff &
		((1U << (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT)) - 1);

	*pfn = (pci_resource_start(pdev, index) >> PAGE_SHIFT) + pgoff;

	return 0;
}

But given Peter's concern about faulting individual pages, I think the
fault handler becomes:

static vm_fault_t vfio_pci_mmap_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	struct vfio_pci_device *vdev = vma->vm_private_data;
	unsigned long vaddr, pfn;
	vm_fault_t ret = VM_FAULT_SIGBUS;

	if (vfio_pci_bar_vma_to_pfn(vma, &pfn))
		return ret;

	down_read(&vdev->memory_lock);

	if (__vfio_pci_memory_enabled(vdev)) {
		for (vaddr = vma->vm_start;
		     vaddr < vma->vm_end;
		     vaddr += PAGE_SIZE, pfn++) {
			ret = vmf_insert_pfn_prot(vma, vaddr, pfn,
					pgprot_decrypted(vma->vm_page_prot));
			if (ret != VM_FAULT_NOPAGE) {
				zap_vma_ptes(vma, vma->vm_start,
					     vaddr - vma->vm_start);
				break;
			}
		}
	}

	up_read(&vdev->memory_lock);

	return ret;
}

Thanks,
Alex