References: <20200929133814.2834621-1-elver@google.com> <20200929133814.2834621-3-elver@google.com>
From: Marco Elver
Date: Wed, 7 Oct 2020 16:41:25 +0200
Subject: Re: [PATCH v4 02/11] x86, kfence: enable KFENCE for x86
To: Jann Horn
Cc: Andrew Morton, Alexander Potapenko, "H. Peter Anvin", "Paul E. McKenney", Andrey Konovalov, Andrey Ryabinin, Andy Lutomirski, Borislav Petkov, Catalin Marinas, Christoph Lameter, Dave Hansen, David Rientjes, Dmitry Vyukov, Eric Dumazet, Greg Kroah-Hartman, Hillf Danton, Ingo Molnar, Jonathan Cameron, Jonathan Corbet, Joonsoo Kim, Kees Cook, Mark Rutland, Pekka Enberg, Peter Zijlstra, SeongJae Park, Thomas Gleixner, Vlastimil Babka, Will Deacon, the arch/x86 maintainers, open list:DOCUMENTATION, kernel list, kasan-dev, Linux ARM, Linux-MM

On Wed, 7 Oct 2020 at 16:15, Jann Horn wrote:
>
> On Wed, Oct 7, 2020 at 3:09 PM Marco Elver wrote:
> > On Fri, 2 Oct 2020 at 07:45, Jann Horn wrote:
> > > On Tue, Sep 29, 2020 at 3:38 PM Marco Elver wrote:
> > > > Add architecture specific implementation details for KFENCE and enable
> > > > KFENCE for the x86 architecture. In particular, this implements the
> > > > required interface in <asm/kfence.h> for setting up the pool and
> > > > providing helper functions for protecting and unprotecting pages.
> > > >
> > > > For x86, we need to ensure that the pool uses 4K pages, which is done
> > > > using the set_memory_4k() helper function.
> > > [...]
> > > > diff --git a/arch/x86/include/asm/kfence.h b/arch/x86/include/asm/kfence.h
> > > [...]
> > > > +/* Protect the given page and flush TLBs. */
> > > > +static inline bool kfence_protect_page(unsigned long addr, bool protect)
> > > > +{
> > > > +	unsigned int level;
> > > > +	pte_t *pte = lookup_address(addr, &level);
> > > > +
> > > > +	if (!pte || level != PG_LEVEL_4K)
> > >
> > > Do we actually expect this to happen, or is this just a "robustness"
> > > check? If we don't expect this to happen, there should be a WARN_ON()
> > > around the condition.
> >
> > It's not obvious here, but we already have this covered with a WARN:
> > the core.c code has a KFENCE_WARN_ON, which disables KFENCE on a
> > warning.
>
> So for this specific branch: Can it ever happen? If not, please either
> remove it or add WARN_ON(). That serves two functions: It ensures that
> if something unexpected happens, we see a warning, and it hints to
> people reading the code "this isn't actually expected to happen, you
> don't have to rack your brain trying to figure out for which scenario
> this branch is intended".

Perhaps I could have been clearer: we already have this returning false
covered by a WARN plus a KFENCE disable in core.c. We'll add another
WARN_ON right here, as it doesn't hurt, and hopefully improves
readability.

> > > > +		return false;
> > > > +
> > > > +	if (protect)
> > > > +		set_pte(pte, __pte(pte_val(*pte) & ~_PAGE_PRESENT));
> > > > +	else
> > > > +		set_pte(pte, __pte(pte_val(*pte) | _PAGE_PRESENT));
> > >
> > > Hmm... do we have this helper (instead of using the existing helpers
> > > for modifying memory permissions) to work around the allocation out of
> > > the data section?
> >
> > I just played around with using the set_memory.c functions, to remind
> > myself why this didn't work. I experimented with using
> > set_memory_{np,p}() functions; set_memory_p() isn't implemented, but
> > is easily added (which I did for below experiment). However, this
> > didn't quite work:
> [...]
> > For one, smp_call_function_many_cond() doesn't want to be called with
> > interrupts disabled, and we may very well get a KFENCE allocation or
> > page fault with interrupts disabled / within interrupts.
> >
> > Therefore, to be safe, we should avoid IPIs.
>
> set_direct_map_invalid_noflush() does that, too, I think? And that's
> already implemented for both arm64 and x86.

Sure, that works. We still want the flush_tlb_one_kernel(), at least so
the local CPU's TLB is flushed.

> > It follows that setting
> > the page attribute is best-effort, and we can tolerate some
> > inaccuracy. Lazy fault handling should take care of faults after we
> > set the page as PRESENT.
> [...]
> > > Shouldn't kfence_handle_page_fault() happen after prefetch handling,
> > > at least? Maybe directly above the "oops" label?
> >
> > Good question. AFAIK it doesn't matter, as is_kfence_address() should
> > never apply for any of those that follow, right? In any case, it
> > shouldn't hurt to move it down.
>
> is_prefetch() ignores any #PF not caused by instruction fetch if it
> comes from kernel mode and the faulting instruction is one of the
> PREFETCH* instructions. (Which is not supposed to happen - the
> processor should just be ignoring the fault for PREFETCH instead of
> generating an exception AFAIK. But the comments say that this is about
> CPU bugs and stuff.) While this is probably not a big deal anymore,
> partly because the kernel doesn't use software prefetching in many
> places anymore, it seems to me like, in principle, this could also
> cause page faults that should be ignored in KFENCE regions if someone
> tries to do PREFETCH on an out-of-bounds array element or a dangling
> pointer or something.

Thanks for the clarification.
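[Editor's note: the IPI-free alternative being discussed would look roughly like the sketch below. This is C-like pseudocode against the kernel APIs named in the thread (set_direct_map_invalid_noflush(), set_direct_map_default_noflush(), flush_tlb_one_kernel()); it is not the patch that was actually merged, and the error handling is illustrative.]

```c
/* Sketch only: mirrors the discussion above, not merged kernel code. */
static inline bool kfence_protect_page(unsigned long addr, bool protect)
{
	struct page *page = virt_to_page((void *)addr);
	int err;

	/* Toggle the page in the direct map without sending IPIs. */
	if (protect)
		err = set_direct_map_invalid_noflush(page);
	else
		err = set_direct_map_default_noflush(page);
	if (err)
		return false;

	/*
	 * No IPI: flush only the local CPU's TLB. Remote CPUs may keep a
	 * stale PRESENT entry for a while; as discussed above, that
	 * inaccuracy is tolerated and resolved lazily by the fault handler.
	 */
	flush_tlb_one_kernel(addr);
	return true;
}
```

The design point here is the trade-off the thread settles on: correctness of KFENCE only requires that the *local* CPU observes the new permissions promptly, so a local-only flush avoids calling IPI machinery from contexts where interrupts may be disabled.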