From: Daniel Vetter
Date: Thu, 8 Oct 2020 09:23:14 +0200
Subject: Re: [PATCH 07/13] mm: close race in generic_access_phys
To: John Hubbard
Cc: DRI Development, LKML, kvm@vger.kernel.org, Linux MM, Linux ARM,
 linux-samsung-soc, "open list:DMA BUFFER SHARING FRAMEWORK",
 linux-s390@vger.kernel.org, Jason Gunthorpe, Dan Williams, Kees Cook,
 Rik van Riel, Benjamin Herrenschmidt, Dave Airlie, Hugh Dickins,
 Andrew Morton, Jérôme Glisse, Jan Kara, Daniel Vetter

On Thu, Oct 8, 2020 at 2:44 AM John Hubbard wrote:
>
> On 10/7/20 9:44 AM, Daniel Vetter wrote:
> > Way back it was a reasonable assumption that iomem mappings never
> > change the pfn range they point at. But this has changed:
> >
> > - gpu drivers dynamically manage their memory nowadays, invalidating
> >   ptes with unmap_mapping_range when buffers get moved
> >
> > - contiguous dma allocations have moved from dedicated carvetouts to
>
> s/carvetouts/carveouts/
>
> >   cma regions. This means if we miss the unmap the pfn might contain
> >   pagecache or anon memory (well anything allocated with GFP_MOVABLE)
> >
> > - even /dev/mem now invalidates mappings when the kernel requests that
> >   iomem region when CONFIG_IO_STRICT_DEVMEM is set, see 3234ac664a87
> >   ("/dev/mem: Revoke mappings when a driver claims the region")
>
> Thanks for putting these references into the log, it's very helpful.
> ...
> > diff --git a/mm/memory.c b/mm/memory.c
> > index fcfc4ca36eba..8d467e23b44e 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -4873,28 +4873,68 @@ int follow_phys(struct vm_area_struct *vma,
> >  	return ret;
> >  }
> >
> > +/**
> > + * generic_access_phys - generic implementation for iomem mmap access
> > + * @vma: the vma to access
> > + * @addr: userspace address, not relative offset within @vma
> > + * @buf: buffer to read/write
> > + * @len: length of transfer
> > + * @write: set to FOLL_WRITE when writing, otherwise reading
> > + *
> > + * This is a generic implementation for &vm_operations_struct.access for an
> > + * iomem mapping. This callback is used by access_process_vm() when the @vma is
> > + * not page based.
> > + */
> >  int generic_access_phys(struct vm_area_struct *vma, unsigned long addr,
> >  			void *buf, int len, int write)
> >  {
> >  	resource_size_t phys_addr;
> >  	unsigned long prot = 0;
> >  	void __iomem *maddr;
> > +	pte_t *ptep, pte;
> > +	spinlock_t *ptl;
> >  	int offset = addr & (PAGE_SIZE-1);
> > +	int ret = -EINVAL;
> > +
> > +	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
> > +		return -EINVAL;
> > +
> > +retry:
> > +	if (follow_pte(vma->vm_mm, addr, &ptep, &ptl))
> > +		return -EINVAL;
> > +	pte = *ptep;
> > +	pte_unmap_unlock(ptep, ptl);
> >
> > -	if (follow_phys(vma, addr, write, &prot, &phys_addr))
> > +	prot = pgprot_val(pte_pgprot(pte));
> > +	phys_addr = (resource_size_t)pte_pfn(pte) << PAGE_SHIFT;
> > +
> > +	if ((write & FOLL_WRITE) && !pte_write(pte))
> >  		return -EINVAL;
> >
> >  	maddr = ioremap_prot(phys_addr, PAGE_ALIGN(len + offset), prot);
> >  	if (!maddr)
> >  		return -ENOMEM;
> >
> > +	if (follow_pte(vma->vm_mm, addr, &ptep, &ptl))
> > +		goto out_unmap;
> > +
> > +	if (pte_same(pte, *ptep)) {
>
> The ioremap area is something I'm sorta new to, so a newbie question:
> is it possible for the same pte to already be there, ever? If so, we'd
> be stuck in an infinite loop here. I'm sure that's not the case, but
> it's not yet obvious to me why it's impossible. Resource reservations
> maybe?

It's just buggy, it should be !pte_same. And I need to figure out how
to test this I guess.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
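
For reference, with the !pte_same fix Daniel describes folded in, the tail
of generic_access_phys() would look roughly like the sketch below. The
memcpy_toio()/memcpy_fromio() copy, the out_unmap label, and the final
unlock ordering are assumptions completing the truncated hunk above, not
lines quoted from the patch:

	/* Re-walk the page tables after ioremap_prot() (which can sleep
	 * and hence must run unlocked). If the pte changed underneath us,
	 * drop everything and start over; note the check is !pte_same,
	 * the inverse of the hunk quoted above.
	 */
	if (!pte_same(pte, *ptep)) {
		pte_unmap_unlock(ptep, ptl);
		iounmap(maddr);

		goto retry;
	}

	/* The pte is unchanged and the page table lock is held, so a
	 * concurrent unmap_mapping_range() cannot clear it until the
	 * access below has finished.
	 */
	if (write)
		memcpy_toio(maddr + offset, buf, len);
	else
		memcpy_fromio(buf, maddr + offset, len);
	ret = len;
	pte_unmap_unlock(ptep, ptl);
out_unmap:
	iounmap(maddr);

	return ret;
}

The design point of the retry loop is that ioremap_prot() cannot be called
under the pte spinlock, so the pte is sampled first, the iomem mapping is
set up, and only then is the pte re-checked under the lock; if it is still
the same, the copy proceeds with the lock held, which blocks concurrent
revocation of the mapping.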