Date: Tue, 9 Mar 2021 14:48:24 -0500
From: Peter Xu
To: Alex Williamson
Cc: Jason Gunthorpe, "Zengtao (B)", Cornelia Huck, Kevin Tian, Andrew Morton,
    Giovanni Cabiddu, Michel Lespinasse, Jann Horn, Max Gurtovoy,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Linuxarm
Subject: Re: [PATCH] vfio/pci: make the vfio_pci_mmap_fault reentrant
Message-ID: <20210309194824.GE763132@xz-x1>
In-Reply-To: <20210309122607.0b68fb9b@omen.home.shazbot.org>
List-ID: linux-kernel@vger.kernel.org

On Tue, Mar 09, 2021 at 12:26:07PM -0700, Alex Williamson wrote:
> On Tue, 9 Mar 2021 13:47:39 -0500
> Peter Xu wrote:
>
> > On Tue, Mar 09, 2021 at 12:40:04PM -0400, Jason Gunthorpe wrote:
> > > On Tue, Mar 09, 2021 at 08:29:51AM -0700, Alex Williamson wrote:
> > > > On Tue, 9 Mar 2021 08:46:09 -0400
> > > > Jason Gunthorpe wrote:
> > > >
> > > > > On Tue, Mar 09, 2021 at 03:49:09AM +0000, Zengtao (B) wrote:
> > > > > > Hi guys:
> > > > > >
> > > > > > Thanks for the helpful comments, after rethinking the issue, I have
> > > > > > proposed the following change:
> > > > > > 1. follow_pte instead of follow_pfn.
> > > > >
> > > > > Still no on follow_pfn, you don't need it once you use vmf_insert_pfn
> > > >
> > > > vmf_insert_pfn() only solves the BUG_ON, follow_pte() is being used
> > > > here to determine whether the translation is already present to avoid
> > > > both duplicate work in inserting the translation and allocating a
> > > > duplicate vma tracking structure.
> > >
> > > Oh.. Doing something stateful in fault is not nice at all
> > >
> > > I would rather see __vfio_pci_add_vma() search the vma_list for dups
> > > than call follow_pfn/pte..
> >
> > It seems to me that searching the vma list is still the simplest way to
> > fix the problem for the current code base.  I see io_remap_pfn_range()
> > is also used in the new series - maybe that'll need to be moved to where
> > PCI_COMMAND_MEMORY gets turned on/off in the new series (I just noticed
> > remap_pfn_range modifies vma flags..), as you suggested in the other
> > email.
>
> In the new series, I think the fault handler becomes (untested):
>
> static vm_fault_t vfio_pci_mmap_fault(struct vm_fault *vmf)
> {
> 	struct vm_area_struct *vma = vmf->vma;
> 	struct vfio_pci_device *vdev = vma->vm_private_data;
> 	unsigned long base_pfn, pgoff;
> 	vm_fault_t ret = VM_FAULT_SIGBUS;
>
> 	if (vfio_pci_bar_vma_to_pfn(vma, &base_pfn))
> 		return ret;
>
> 	pgoff = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
>
> 	down_read(&vdev->memory_lock);
>
> 	if (__vfio_pci_memory_enabled(vdev))
> 		ret = vmf_insert_pfn(vma, vmf->address, pgoff + base_pfn);
>
> 	up_read(&vdev->memory_lock);
>
> 	return ret;
> }

It's just that the initial MMIO access delay would now be spread across the
first access to each MMIO page, rather than paid once up front as with the
previous pre-fault scheme.  A userspace that cares enough about that delay
should pre-fault all pages anyway, but I wanted to raise it.  Otherwise
looks sane.

Thanks,

-- 
Peter Xu