From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jann Horn
Date: Tue, 17 Jan 2023 19:59:29 +0100
Subject: Re: [PATCH 41/41] mm: replace rw_semaphore with atomic_t in vma_lock
To: Suren Baghdasaryan
Cc: Matthew Wilcox, akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, mgorman@techsingularity.net, dave@stgolabs.net, liam.howlett@oracle.com, peterz@infradead.org, ldufour@linux.ibm.com, laurent.dufour@fr.ibm.com, paulmck@kernel.org, luto@kernel.org, songliubraving@fb.com, peterx@redhat.com, david@redhat.com, dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de, kent.overstreet@linux.dev, punit.agrawal@bytedance.com, lstoakes@gmail.com, peterjung1337@gmail.com, rientjes@google.com, axelrasmussen@google.com, joelaf@google.com, minchan@google.com, shakeelb@google.com, tatashin@google.com, edumazet@google.com, gthelen@google.com, gurua@google.com, arjunroy@google.com, soheil@google.com, hughlynch@google.com, leewalsh@google.com, posk@google.com, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org, kernel-team@android.com
References: <20230109205336.3665937-1-surenb@google.com> <20230109205336.3665937-42-surenb@google.com>
Content-Type: text/plain; charset="UTF-8"
On Tue, Jan 17, 2023 at 7:55 PM Suren Baghdasaryan wrote:
> On Tue, Jan 17, 2023 at 10:47 AM Matthew Wilcox wrote:
> >
> > On Tue, Jan 17, 2023 at 10:36:42AM -0800, Suren Baghdasaryan wrote:
> > > On Tue, Jan 17, 2023 at 10:31 AM Matthew Wilcox wrote:
> > > >
> > > > On Tue, Jan 17, 2023 at 10:26:32AM -0800, Suren Baghdasaryan wrote:
> > > > > On Tue, Jan 17, 2023 at 10:12 AM Jann Horn wrote:
> > > > > >
> > > > > > On Mon, Jan 9, 2023 at 9:55 PM Suren Baghdasaryan wrote:
> > > > > > > rw_semaphore is a sizable structure of 40 bytes and consumes
> > > > > > > considerable space for each vm_area_struct. However vma_lock has
> > > > > > > two important specifics which can be used to replace rw_semaphore
> > > > > > > with a simpler structure:
> > > > > > [...]
> > > > > > >  static inline void vma_read_unlock(struct vm_area_struct *vma)
> > > > > > >  {
> > > > > > > -        up_read(&vma->vm_lock->lock);
> > > > > > > +        if (atomic_dec_and_test(&vma->vm_lock->count))
> > > > > > > +                wake_up(&vma->vm_mm->vma_writer_wait);
> > > > > > >  }
> > > > > >
> > > > > > I haven't properly reviewed this, but this bit looks like a
> > > > > > use-after-free because you're accessing the vma after dropping your
> > > > > > reference on it. You'd have to first look up the vma->vm_mm, then do
> > > > > > the atomic_dec_and_test(), and afterwards do the wake_up() without
> > > > > > touching the vma. Or alternatively wrap the whole thing in an RCU
> > > > > > read-side critical section if the VMA is freed with RCU delay.
> > > > >
> > > > > vm_lock->count does not control the lifetime of the VMA, it's a
> > > > > counter of how many readers took the lock or it's negative if the
> > > > > lock is write-locked.
> > > >
> > > > Yes, but ...
> > > >
> > > > Task A:
> > > >     atomic_dec_and_test(&vma->vm_lock->count)
> > > >                         Task B:
> > > >                             munmap()
> > > >                             write lock
> > > >                             free VMA
> > > >                             synchronize_rcu()
> > > >                             VMA is really freed
> > > >     wake_up(&vma->vm_mm->vma_writer_wait);
> > > >
> > > > ... vma is freed.
> > > >
> > > > Now, I think this doesn't occur. I'm pretty sure that every caller of
> > > > vma_read_unlock() is holding the RCU read lock. But maybe we should
> > > > have that assertion?
> > >
> > > Yep, that's what this patch is doing
> > > https://lore.kernel.org/all/20230109205336.3665937-27-surenb@google.com/
> > > by calling vma_assert_no_reader() from __vm_area_free().
> >
> > That's not enough though. Task A still has a pointer to vma after it
> > has called atomic_dec_and_test(), even after vma has been freed by
> > Task B, and before Task A dereferences vma->vm_mm.
>
> Ah, I see your point now. I guess I'll have to store vma->vm_mm in a
> local variable and call mmgrab() before atomic_dec_and_test(), then
> use it in wake_up() and call mmdrop(). Is that what you are thinking?

You shouldn't need mmgrab()/mmdrop(), because whoever is calling you for
page fault handling must be keeping the mm_struct alive.