From mboxrd@z Thu Jan  1 00:00:00 1970
From: Daniel Vetter
Date: Thu, 8 Oct 2020 10:09:54 +0200
Subject: Re: [PATCH 10/13] PCI: revoke mappings like devmem
To: Dan Williams
Cc: DRI Development, LKML, KVM list, Linux MM, Linux ARM,
 linux-samsung-soc, Linux-media@vger.kernel.org, linux-s390,
 Daniel Vetter, Jason Gunthorpe, Kees Cook, Andrew Morton,
 John Hubbard, Jérôme Glisse, Jan Kara, Bjorn Helgaas, Linux PCI
References: <20201007164426.1812530-1-daniel.vetter@ffwll.ch>
 <20201007164426.1812530-11-daniel.vetter@ffwll.ch>
List-ID: linux-media@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"

On Thu, Oct 8, 2020 at 12:29 AM Dan Williams wrote:
>
> On Wed, Oct 7, 2020 at 3:23 PM Dan Williams wrote:
> >
> > On Wed, Oct 7, 2020 at 12:49 PM Daniel Vetter wrote:
> > >
> > > On Wed, Oct 7, 2020 at 9:33 PM Dan Williams wrote:
> > > >
> > > > On Wed, Oct 7, 2020 at 11:11 AM Daniel Vetter wrote:
> > > > >
> > > > > Since 3234ac664a87 ("/dev/mem: Revoke mappings when a driver claims
> > > > > the region") /dev/mem zaps ptes when the kernel requests exclusive
> > > > > access to an iomem region. And with CONFIG_IO_STRICT_DEVMEM, this is
> > > > > the default for all driver uses.
> > > > >
> > > > > Except there's two more ways to access pci bars: sysfs and proc mmap
> > > > > support. Let's plug that hole.
> > > >
> > > > Ooh, yes, let's.
> > > >
> > > > > For revoke_devmem() to work we need to link our vma into the same
> > > > > address_space, with consistent vma->vm_pgoff. ->pgoff is already
> > > > > adjusted, because that's how (io_)remap_pfn_range works, but for the
> > > > > mapping we need to adjust vma->vm_file->f_mapping. Usually that's done
> > > > > at ->open time, but that's a bit tricky here with all the entry points
> > > > > and arch code. So instead create a fake file and adjust vma->vm_file.
> > > >
> > > > I don't think you want to share the devmem inode for this, this should
> > > > be based off the sysfs inode, of which I believe there is already only
> > > > one instance per resource. In contrast, /dev/mem can have multiple
> > > > inodes because anyone can just mknod a new character device file; the
> > > > same problem does not exist for sysfs.
> > >
> > > But then I need to find the right one, plus I also need to find the
> > > right one for the procfs side. That gets messy, and I already have no
> > > idea how to really test this. Shared address_space is the same trick
> > > we're using in drm (where we have multiple things all pointing to the
> > > same underlying resources, through different files), and it gets the
> > > job done. So that's why I figured the shared address_space is the
> > > cleaner solution, since then unmap_mapping_range takes care of
> > > iterating over all vma for us. I guess I could reimplement that logic
> > > with our own locking and everything in revoke_devmem, but that feels
> > > a bit silly. It would also solve the problem of having multiple
> > > different mknods of /dev/mem with different address_spaces behind
> > > them. Also, because of how remap_pfn_range works, all these vma use
> > > the same pgoff already anyway.
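To make that trick concrete, here is a rough, untested sketch of the
shared address_space pattern (all names made up, this is not the
actual drivers/char/mem.c code):

#include <linux/fs.h>
#include <linux/ioport.h>
#include <linux/mm.h>

/* one inode shared by every file that can mmap a given iomem range */
static struct inode *iomem_inode;

static int iomem_open(struct inode *inode, struct file *file)
{
	/*
	 * All openers share a single address_space, no matter which
	 * inode they came in through, so all their vma end up linked
	 * into the same interval tree.
	 */
	file->f_mapping = iomem_inode->i_mapping;
	return 0;
}

static void revoke_iomem_sketch(struct resource *res)
{
	/*
	 * ->pgoff is the physical pfn for all these vma (that is what
	 * (io_)remap_pfn_range sets up), so keying the byte range on
	 * the physical address zaps exactly the ptes covering the
	 * claimed region, across every vma in the shared mapping.
	 */
	unmap_mapping_range(iomem_inode->i_mapping, res->start,
			    resource_size(res), 1);
}

That is essentially all revoke_devmem() needs; the hard part is
routing every mmap entry point through an ->open like the above.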
> > True, remap_pfn_range() makes sure that ->pgoff is an absolute
> > physical address offset for all use cases. So you might be able to
> > just point proc_bus_pci_open() at the shared devmem address space. For
> > sysfs it's messier. I think you would need to somehow get the inode
> > from kernfs_fop_open() to adjust its address space, but only if the
> > bin_file will ultimately be used for PCI memory.

Just read the code a bit more, and for proc it's impossible. There's
only a single file, and before you mmap it you have to call a few
ioctls to select the right pci resource on that device you want to
mmap. That includes legacy ioport stuff, and at least for now those
don't get revoked (maybe they should, but I'm only looking at iomem
here for now). Setting the mapping too early, in ->open, would mean
that on architectures which can do ioport as mmaps (not many, but
powerpc is among them) we'd shoot down these mmaps too.

Looking at the code, there's the generic implementation, which
consults pci_iobar_pfn. And the only other implementation, for sparc,
looks similar: they separate iomem vs ioport through different pfns.
So I think this should indeed work.

> To me this seems like a new sysfs_create_bin_file() flavor that
> registers the file with the common devmem address_space.

Hm, I think we could just add an i_mapping member to struct
bin_attribute and let the normal open code set that up for us. That
should work; mmapable binary sysfs files are already a similar special
case.
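Roughly like this, as an untested sketch (the new member name and the
hook placement are my guesses at how it could look; the existing
struct members are elided):

/* include/linux/sysfs.h: struct bin_attribute grows one member */
struct bin_attribute {
	struct attribute	attr;
	size_t			size;
	/* ... existing members elided ... */
	struct address_space	*mapping;	/* NULL for normal attrs */
};

/* and the sysfs open path installs it, next to the existing
 * mmapable-bin-file special casing: */
static int sysfs_bin_open_sketch(struct bin_attribute *battr,
				 struct file *file)
{
	if (battr->mapping)
		file->f_mapping = battr->mapping;
	return 0;
}

pci_create_resource_files() could then point ->mapping of the BAR
attributes at the shared devmem address_space before registering them
with sysfs_create_bin_file(), and revoke_devmem() would shoot down
sysfs mmaps too, with no new entry points.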
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch