From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 29 Aug 2022 21:25:08 +0000
In-Reply-To: <20220829212531.3184856-1-surenb@google.com>
Mime-Version: 1.0
References: <20220829212531.3184856-1-surenb@google.com>
X-Mailer: git-send-email 2.37.2.672.g94769d06f0-goog
Message-ID: <20220829212531.3184856-6-surenb@google.com>
Subject: [RFC PATCH 05/28] mm: add per-VMA lock and helper functions to control it
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: michel@lespinasse.org, jglisse@google.com, mhocko@suse.com, vbabka@suse.cz,
 hannes@cmpxchg.org, mgorman@techsingularity.net, dave@stgolabs.net,
 willy@infradead.org, liam.howlett@oracle.com, peterz@infradead.org,
 ldufour@linux.ibm.com, laurent.dufour@fr.ibm.com, paulmck@kernel.org,
 riel@surriel.com, luto@kernel.org, songliubraving@fb.com, peterx@redhat.com,
 david@redhat.com, dhowells@redhat.com,
 hughd@google.com, bigeasy@linutronix.de, kent.overstreet@linux.dev,
 rientjes@google.com, axelrasmussen@google.com, joelaf@google.com,
 minchan@google.com, surenb@google.com, kernel-team@android.com,
 linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
 linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"

Introduce a per-VMA rw_semaphore to be used during page fault handling
instead of mmap_lock. Because there are cases when multiple VMAs need
to be exclusively locked during VMA tree modifications, instead of the
usual lock/unlock pattern we mark a VMA as locked by taking the per-VMA
lock exclusively and setting vma->vm_lock_seq to the current
mm->mm_lock_seq. When the mmap_write_lock holder is done with all
modifications and drops mmap_lock, it will increment mm->mm_lock_seq,
effectively unlocking all VMAs marked as locked.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 include/linux/mm.h        | 78 +++++++++++++++++++++++++++++++++++++++
 include/linux/mm_types.h  |  7 ++++
 include/linux/mmap_lock.h | 13 +++++++
 kernel/fork.c             |  4 ++
 mm/init-mm.c              |  3 ++
 5 files changed, 105 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7d322a979455..476bf936c5f0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -611,6 +611,83 @@ struct vm_operations_struct {
 					  unsigned long addr);
 };
 
+#ifdef CONFIG_PER_VMA_LOCK
+static inline void vma_init_lock(struct vm_area_struct *vma)
+{
+	init_rwsem(&vma->lock);
+	vma->vm_lock_seq = -1;
+}
+
+static inline void vma_mark_locked(struct vm_area_struct *vma)
+{
+	int mm_lock_seq;
+
+	mmap_assert_write_locked(vma->vm_mm);
+
+	/*
+	 * current task is holding mmap_write_lock, both vma->vm_lock_seq and
+	 * mm->mm_lock_seq can't be concurrently modified.
+	 */
+	mm_lock_seq = READ_ONCE(vma->vm_mm->mm_lock_seq);
+	if (vma->vm_lock_seq == mm_lock_seq)
+		return;
+
+	down_write(&vma->lock);
+	vma->vm_lock_seq = mm_lock_seq;
+	up_write(&vma->lock);
+}
+
+static inline bool vma_read_trylock(struct vm_area_struct *vma)
+{
+	if (unlikely(down_read_trylock(&vma->lock) == 0))
+		return false;
+
+	/*
+	 * Overflow might produce false locked result but it's not critical.
+	 * False unlocked result is critical but is impossible because we
+	 * modify and check vma->vm_lock_seq under vma->lock protection and
+	 * mm->mm_lock_seq modification invalidates all existing locks.
+	 */
+	if (vma->vm_lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq)) {
+		up_read(&vma->lock);
+		return false;
+	}
+	return true;
+}
+
+static inline void vma_read_unlock(struct vm_area_struct *vma)
+{
+	up_read(&vma->lock);
+}
+
+static inline void vma_assert_locked(struct vm_area_struct *vma)
+{
+	lockdep_assert_held(&vma->lock);
+	VM_BUG_ON_VMA(!rwsem_is_locked(&vma->lock), vma);
+}
+
+static inline void vma_assert_write_locked(struct vm_area_struct *vma, int pos)
+{
+	mmap_assert_write_locked(vma->vm_mm);
+	/*
+	 * current task is holding mmap_write_lock, both vma->vm_lock_seq and
+	 * mm->mm_lock_seq can't be concurrently modified.
+	 */
+	VM_BUG_ON_VMA(vma->vm_lock_seq != READ_ONCE(vma->vm_mm->mm_lock_seq), vma);
+}
+
+#else /* CONFIG_PER_VMA_LOCK */
+
+static inline void vma_init_lock(struct vm_area_struct *vma) {}
+static inline void vma_mark_locked(struct vm_area_struct *vma) {}
+static inline bool vma_read_trylock(struct vm_area_struct *vma)
+		{ return false; }
+static inline void vma_read_unlock(struct vm_area_struct *vma) {}
+static inline void vma_assert_locked(struct vm_area_struct *vma) {}
+static inline void vma_assert_write_locked(struct vm_area_struct *vma, int pos) {}
+
+#endif /* CONFIG_PER_VMA_LOCK */
+
 static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 {
 	static const struct vm_operations_struct dummy_vm_ops = {};
@@ -619,6 +696,7 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	vma->vm_mm = mm;
 	vma->vm_ops = &dummy_vm_ops;
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
+	vma_init_lock(vma);
 }
 
 static inline void vma_set_anonymous(struct vm_area_struct *vma)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index bed25ef7c994..6a03f59c1e78 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -486,6 +486,10 @@ struct vm_area_struct {
 	struct mempolicy *vm_policy;	/* NUMA policy for the VMA */
 #endif
 	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
+#ifdef CONFIG_PER_VMA_LOCK
+	struct rw_semaphore lock;
+	int vm_lock_seq;
+#endif
 } __randomize_layout;
 
 struct kioctx_table;
@@ -567,6 +571,9 @@ struct mm_struct {
 					  * init_mm.mmlist, and are protected
 					  * by mmlist_lock
 					  */
+#ifdef CONFIG_PER_VMA_LOCK
+		int mm_lock_seq;
+#endif
 
 		unsigned long hiwater_rss; /* High-watermark of RSS usage */
 
diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index e49ba91bb1f0..a391ae226564 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -72,6 +72,17 @@ static inline void mmap_assert_write_locked(struct mm_struct *mm)
 	VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_lock), mm);
 }
 
+#ifdef CONFIG_PER_VMA_LOCK
+static inline void vma_mark_unlocked_all(struct mm_struct *mm)
+{
+	mmap_assert_write_locked(mm);
+	/* No races during update due to exclusive mmap_lock being held */
+	WRITE_ONCE(mm->mm_lock_seq, mm->mm_lock_seq + 1);
+}
+#else
+static inline void vma_mark_unlocked_all(struct mm_struct *mm) {}
+#endif
+
 static inline void mmap_init_lock(struct mm_struct *mm)
 {
 	init_rwsem(&mm->mmap_lock);
@@ -114,12 +125,14 @@ static inline bool mmap_write_trylock(struct mm_struct *mm)
 static inline void mmap_write_unlock(struct mm_struct *mm)
 {
 	__mmap_lock_trace_released(mm, true);
+	vma_mark_unlocked_all(mm);
 	up_write(&mm->mmap_lock);
 }
 
 static inline void mmap_write_downgrade(struct mm_struct *mm)
 {
 	__mmap_lock_trace_acquire_returned(mm, false, true);
+	vma_mark_unlocked_all(mm);
 	downgrade_write(&mm->mmap_lock);
 }
 
diff --git a/kernel/fork.c b/kernel/fork.c
index 614872438393..bfab31ecd11e 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -475,6 +475,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 		 */
 		*new = data_race(*orig);
 		INIT_LIST_HEAD(&new->anon_vma_chain);
+		vma_init_lock(new);
 		new->vm_next = new->vm_prev = NULL;
 		dup_anon_vma_name(orig, new);
 	}
@@ -1130,6 +1131,9 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
 	seqcount_init(&mm->write_protect_seq);
 	mmap_init_lock(mm);
 	INIT_LIST_HEAD(&mm->mmlist);
+#ifdef CONFIG_PER_VMA_LOCK
+	WRITE_ONCE(mm->mm_lock_seq, 0);
+#endif
 	mm_pgtables_bytes_init(mm);
 	mm->map_count = 0;
 	mm->locked_vm = 0;
diff --git a/mm/init-mm.c b/mm/init-mm.c
index fbe7844d0912..8399f90d631c 100644
--- a/mm/init-mm.c
+++ b/mm/init-mm.c
@@ -37,6 +37,9 @@ struct mm_struct init_mm = {
 	.page_table_lock =  __SPIN_LOCK_UNLOCKED(init_mm.page_table_lock),
 	.arg_lock	=  __SPIN_LOCK_UNLOCKED(init_mm.arg_lock),
 	.mmlist		= LIST_HEAD_INIT(init_mm.mmlist),
+#ifdef CONFIG_PER_VMA_LOCK
+	.mm_lock_seq = 0,
+#endif
 	.user_ns	= &init_user_ns,
 	.cpu_bitmap	= CPU_BITS_NONE,
 #ifdef CONFIG_IOMMU_SVA
--
2.37.2.672.g94769d06f0-goog
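
For readers skimming the interface rather than the diff, the two sides of
the scheme are meant to compose roughly as in the sketch below. This is a
minimal illustration, not part of the patch: lookup_vma_rcu() is a
hypothetical stand-in for the RCU-safe VMA lookup that later patches in
this series add, while the vma_* helpers, mmap_write_lock/unlock() and
handle_mm_fault() are real.

	/* Read side: a page fault tries the per-VMA lock before mmap_lock. */
	static vm_fault_t fault_under_vma_lock(struct mm_struct *mm,
					       unsigned long addr,
					       unsigned int flags)
	{
		struct vm_area_struct *vma;
		vm_fault_t ret;

		rcu_read_lock();
		vma = lookup_vma_rcu(mm, addr);		/* hypothetical */
		if (!vma || !vma_read_trylock(vma)) {
			/*
			 * No VMA found, or the VMA is marked locked
			 * (vm_lock_seq == mm_lock_seq): fall back to
			 * retrying the fault under mmap_lock.
			 */
			rcu_read_unlock();
			return VM_FAULT_RETRY;
		}
		rcu_read_unlock();

		ret = handle_mm_fault(vma, addr, flags, NULL);
		vma_read_unlock(vma);
		return ret;
	}

	/* Write side: mark each affected VMA; unlocking happens in bulk. */
	static void modify_two_vmas(struct mm_struct *mm,
				    struct vm_area_struct *a,
				    struct vm_area_struct *b)
	{
		mmap_write_lock(mm);
		vma_mark_locked(a);	/* vma_read_trylock(a) now fails */
		vma_mark_locked(b);	/* vma_read_trylock(b) now fails */
		/* ... modify both VMAs ... */
		mmap_write_unlock(mm);	/* mm_lock_seq++ unlocks a and b */
	}

Note the asymmetry this buys: a writer takes each VMA's rwsem at most once
per mmap_write_lock cycle (vma_mark_locked() is a no-op when vm_lock_seq
already matches mm_lock_seq), and the bulk "unlock" is a single
mm_lock_seq increment in mmap_write_unlock(), with no need to revisit the
marked VMAs.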