From: Jann Horn
Date: Tue, 17 Jan 2023 20:51:01 +0100
Subject: Re: [PATCH 32/41] mm: prevent userfaults to be handled under per-vma lock
To: Suren Baghdasaryan
In-Reply-To: <20230109205336.3665937-33-surenb@google.com>
References: <20230109205336.3665937-1-surenb@google.com> <20230109205336.3665937-33-surenb@google.com>
Cc: akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com,
    mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org,
    mgorman@techsingularity.net, dave@stgolabs.net, willy@infradead.org,
    liam.howlett@oracle.com, peterz@infradead.org, ldufour@linux.ibm.com,
    laurent.dufour@fr.ibm.com, paulmck@kernel.org, luto@kernel.org,
    songliubraving@fb.com, peterx@redhat.com, david@redhat.com,
    dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de,
    kent.overstreet@linux.dev, punit.agrawal@bytedance.com,
    lstoakes@gmail.com, peterjung1337@gmail.com, rientjes@google.com,
    axelrasmussen@google.com, joelaf@google.com, minchan@google.com,
    shakeelb@google.com, tatashin@google.com, edumazet@google.com,
    gthelen@google.com, gurua@google.com, arjunroy@google.com,
    soheil@google.com, hughlynch@google.com, leewalsh@google.com,
    posk@google.com, linux-mm@kvack.org,
    linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
    x86@kernel.org, linux-kernel@vger.kernel.org, kernel-team@android.com

On Mon, Jan 9, 2023 at 9:55 PM Suren Baghdasaryan wrote:
> Due to the possibility of handle_userfault dropping mmap_lock, avoid fault
> handling under VMA lock and retry holding mmap_lock. This can be handled
> more gracefully in the future.
>
> Signed-off-by: Suren Baghdasaryan
> Suggested-by: Peter Xu
> ---
>  mm/memory.c | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 20806bc8b4eb..12508f4d845a 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -5273,6 +5273,13 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
>         if (!vma->anon_vma)
>                 goto inval;
>
> +       /*
> +        * Due to the possibility of userfault handler dropping mmap_lock, avoid
> +        * it for now and fall back to page fault handling under mmap_lock.
> +        */
> +       if (userfaultfd_armed(vma))
> +               goto inval;

This looks racy wrt concurrent userfaultfd_register(). I think you'll
want to do the userfaultfd_armed(vma) check _after_ locking the VMA,
and ensure that the userfaultfd code write-locks the VMA before
changing the __VM_UFFD_FLAGS in vma->vm_flags (rough sketch below).

>         if (!vma_read_trylock(vma))
>                 goto inval;
>
> --
> 2.39.0
>
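
Concretely, something like the below (untested, just to illustrate the
ordering I mean; it assumes the vma_read_trylock() helper introduced by
this series and a matching vma_read_unlock()):

        /* in lock_vma_under_rcu(), take the VMA read lock first */
        if (!vma_read_trylock(vma))
                goto inval;

        /*
         * Re-check with the VMA read lock held: if userfaultfd_register()
         * write-locks the VMA before it changes __VM_UFFD_FLAGS in
         * vma->vm_flags, the result of this check can no longer change
         * under us while we hold the read lock.
         */
        if (userfaultfd_armed(vma)) {
                vma_read_unlock(vma);
                goto inval;
        }

and on the other side, userfaultfd_register()/UFFDIO_UNREGISTER would
have to take the per-VMA write lock (whatever that primitive ends up
being called in this series) before touching vma->vm_flags and
vma->vm_userfaultfd_ctx, so the flags can't flip between the check and
the actual fault handling.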