From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on
	aws-us-west-2-korg-lkml-1.web.codeaurora.org
X-Spam-Level: 
X-Spam-Status: No, score=-3.8 required=3.0 tests=BAYES_00,
	HEADER_FROM_DIFFERENT_DOMAINS,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS
	autolearn=no autolearn_force=no version=3.4.0
Received: from mail.kernel.org (mail.kernel.org [198.145.29.99])
	by smtp.lore.kernel.org (Postfix) with ESMTP id EEB0CC47423
	for ; Fri, 2 Oct 2020 15:06:58 +0000 (UTC)
Received: from vger.kernel.org (vger.kernel.org [23.128.96.18])
	by mail.kernel.org (Postfix) with ESMTP id C2DB12074B
	for ; Fri, 2 Oct 2020 15:06:58 +0000 (UTC)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S2388157AbgJBPG6 (ORCPT ); Fri, 2 Oct 2020 11:06:58 -0400
Received: from foss.arm.com ([217.140.110.172]:38574 "EHLO foss.arm.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S2387939AbgJBPG5 (ORCPT ); Fri, 2 Oct 2020 11:06:57 -0400
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
	by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B12B41396;
	Fri, 2 Oct 2020 08:06:56 -0700 (PDT)
Received: from C02TD0UTHF1T.local (unknown [10.57.49.154])
	by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 45F5C3F73B;
	Fri, 2 Oct 2020 08:06:50 -0700 (PDT)
Date: Fri, 2 Oct 2020 16:06:43 +0100
From: Mark Rutland 
To: Dmitry Vyukov 
Cc: Jann Horn , Marco Elver , Andrew Morton , Alexander Potapenko ,
	"H. Peter Anvin" , "Paul E. McKenney" , Andrey Konovalov ,
	Andrey Ryabinin , Andy Lutomirski , Borislav Petkov ,
	Catalin Marinas , Christoph Lameter , Dave Hansen ,
	David Rientjes , Eric Dumazet , Greg Kroah-Hartman ,
	Hillf Danton , Ingo Molnar , Jonathan.Cameron@huawei.com,
	Jonathan Corbet , Joonsoo Kim , Kees Cook , Pekka Enberg ,
	Peter Zijlstra , sjpark@amazon.com, Thomas Gleixner ,
	Vlastimil Babka , Will Deacon , the arch/x86 maintainers ,
	"open list:DOCUMENTATION" , kernel list , kasan-dev ,
	Linux ARM , Linux-MM , SeongJae Park 
Subject: Re: [PATCH v4 01/11] mm: add Kernel Electric-Fence infrastructure
Message-ID: <20201002150643.GA5601@C02TD0UTHF1T.local>
References: <20200929133814.2834621-1-elver@google.com>
	<20200929133814.2834621-2-elver@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: 
Precedence: bulk
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Oct 02, 2020 at 04:22:59PM +0200, Dmitry Vyukov wrote:
> On Fri, Oct 2, 2020 at 9:54 AM Jann Horn wrote:
> >
> > On Fri, Oct 2, 2020 at 8:33 AM Jann Horn wrote:
> > > On Tue, Sep 29, 2020 at 3:38 PM Marco Elver wrote:
> > > > This adds the Kernel Electric-Fence (KFENCE) infrastructure. KFENCE
> > > > is a low-overhead sampling-based memory safety error detector of
> > > > heap use-after-free, invalid-free, and out-of-bounds access errors.
> > > >
> > > > KFENCE is designed to be enabled in production kernels, and has near
> > > > zero performance overhead. Compared to KASAN, KFENCE trades precision
> > > > for performance. The main motivation behind KFENCE's design is that
> > > > with enough total uptime KFENCE will detect bugs in code paths not
> > > > typically exercised by non-production test workloads. One way to
> > > > quickly achieve a large enough total uptime is to deploy the tool
> > > > across a large fleet of machines.
> > [...]
> > > > +/*
> > > > + * The pool of pages used for guard pages and objects. If supported,
> > > > + * allocated statically, so that is_kfence_address() avoids a pointer
> > > > + * load, and simply compares against a constant address. Assume that
> > > > + * if KFENCE is compiled into the kernel, it is usually enabled, and
> > > > + * the space is to be allocated one way or another.
> > > > + */
>
> KFENCE needs the range to be covered by struct page's and that's what
> creates problems for arm64. But I would assume most other users don't
> need that.

I've said this in a few other sub-threads, but the issue being attributed
to arm64 is a red herring: it indicates a more fundamental issue that also
applies to x86, and which will introduce a regression for existing
correctly-written code. I don't think that's acceptable for a feature
expected to be deployed in production kernels, especially given that the
failures are going to be non-deterministic and hard to debug. The code in
question is mostly going to be in drivers, and it's very likely you may
not hit it in local testing.

If it is critical to avoid a pointer load here, then we need to either:

* Build some infrastructure for patching constants. The x86 static_call
  work is vaguely the right shape for this. Then we can place the KFENCE
  region anywhere (e.g. within the linear/direct map), and potentially
  dynamically allocate it.

* Go audit usage of {page,phys}_to_virt() to find any va->{page,pa}->va
  round-trips, and modify that code to do something else which avoids a
  round-trip. When I last looked at this it didn't seem viable in general,
  since in many cases the physical address was the only piece of
  information which was retained.

I'd be really curious to see how using an immediate compares to loading
an __ro_after_init pointer value.

Thanks,
Mark.
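The constant-address check described in the quoted pool comment can be sketched in plain C. This is a userspace model, not the kernel's actual implementation; the pool size and names here are illustrative. The point is that a statically allocated pool has a link-time-constant address, so the check needs no load of a pool pointer:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative pool size; not the kernel's actual configuration. */
#define KFENCE_POOL_SIZE (2 * 1024 * 1024)

/*
 * Statically allocated pool: its address is a link-time constant, so the
 * range check below can compile to a compare against an immediate or a
 * PC-relative address, rather than a pointer load followed by a compare.
 */
static char kfence_pool[KFENCE_POOL_SIZE];

static inline bool is_kfence_address(const void *addr)
{
    /*
     * One unsigned subtract-and-compare covers both bounds: addresses
     * below the pool wrap around to a value >= KFENCE_POOL_SIZE.
     */
    return (uintptr_t)((const char *)addr - kfence_pool) < KFENCE_POOL_SIZE;
}
```

The alternative Mark asks about at the end of the mail would replace the static array with an `__ro_after_init` pointer to a dynamically allocated region, turning the compare into a load plus a compare, but allowing the pool to live anywhere (including inside the linear map).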
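The va->{page,pa}->va round-trip hazard can likewise be modeled in a few lines of C. All addresses, sizes, and helper names below are invented for illustration: the sketch assumes a linear map at a fixed virtual offset plus a KFENCE-style pool mapped at an unrelated virtual address, and a `phys_to_virt()`-like helper that, as on a real system, only knows how to reconstruct linear-map addresses:

```c
#include <stdint.h>

/* Invented address layout, purely for illustration. */
#define LINEAR_OFFSET   0x40000000UL  /* linear map: virt = phys + offset */
#define KFENCE_VA_BASE  0x90000000UL  /* pool's virtual mapping */
#define KFENCE_PA_BASE  0x00200000UL  /* pool's backing physical pages */
#define KFENCE_SIZE     0x00200000UL

/* Forward translation knows about both mappings, as the page tables do. */
static uintptr_t virt_to_phys_model(uintptr_t va)
{
    if (va - KFENCE_VA_BASE < KFENCE_SIZE)
        return KFENCE_PA_BASE + (va - KFENCE_VA_BASE);
    return va - LINEAR_OFFSET;
}

/* Reverse translation, like the real helper, assumes the linear map only. */
static uintptr_t phys_to_virt_model(uintptr_t pa)
{
    return pa + LINEAR_OFFSET;
}
```

A driver that stashes only the physical address of a buffer and later round-trips it gets the original virtual address back for linear-map memory, but a different (aliased) virtual address for pool memory that lives outside the linear map: the silent, hard-to-debug breakage described above.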