Date: Mon, 14 Oct 2019 16:27:17 +0100
From: Mark Rutland <mark.rutland@arm.com>
To: Daniel Axtens
Cc: gor@linux.ibm.com, x86@kernel.org, linux-kernel@vger.kernel.org,
	kasan-dev@googlegroups.com, linux-mm@kvack.org, glider@google.com,
	luto@kernel.org, Andrey Ryabinin, linuxppc-dev@lists.ozlabs.org,
	dvyukov@google.com
Subject: Re: [PATCH v8 1/5] kasan: support backing vmalloc space with real shadow memory
Message-ID: <20191014152717.GA20438@lakrids.cambridge.arm.com>
References: <20191001065834.8880-1-dja@axtens.net>
	<20191001065834.8880-2-dja@axtens.net>
	<352cb4fa-2e57-7e3b-23af-898e113bbe22@virtuozzo.com>
	<87ftjvtoo7.fsf@dja-thinkpad.axtens.net>
In-Reply-To: <87ftjvtoo7.fsf@dja-thinkpad.axtens.net>

On Tue, Oct 15, 2019 at 12:57:44AM +1100, Daniel Axtens wrote:
> Hi Andrey,
>
> >> +	/*
> >> +	 * Ensure poisoning is visible before the shadow is made visible
> >> +	 * to other CPUs.
> >> +	 */
> >> +	smp_wmb();
> >
> > I don't quite understand what this barrier does and why it's needed.
> > And if it's really needed, there should be a pairing barrier
> > on the other side, which I don't see.
>
> Mark might be better able to answer this, but my understanding is that
> we want to make sure that we never have a situation where the writes
> are reordered so that the PTE is installed before all the poisoning is
> written out. I think it follows the logic in __pte_alloc() in
> mm/memory.c:
>
> 	/*
> 	 * Ensure all pte setup (eg. pte page lock and page clearing) are
> 	 * visible before the pte is made visible to other CPUs by being
> 	 * put into page tables.

Yup. We need to ensure that if a thread sees a populated shadow PTE, the
corresponding shadow memory has been zeroed. Thus, we need to ensure
that the zeroing is observed by other CPUs before we update the PTE.

We're relying on the absence of a TLB entry preventing another CPU from
loading the corresponding shadow memory until its PTE has been populated
(after the zeroing is visible). Consequently there is no barrier on the
other side, just a control dependency (which would be insufficient on
its own).

There is a potential problem here, as Will Deacon wrote up at:

  https://lore.kernel.org/linux-arm-kernel/20190827131818.14724-1-will@kernel.org/

... in the section starting:

| *** Other architecture maintainers -- start here! ***

... whereby the CPU can spuriously fault on an access after observing a
valid PTE. For arm64 we handle the spurious fault, and it looks like x86
would need something like its vmalloc_fault() applying to the shadow
region to cater for this.

Thanks,
Mark.
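
For reference, the populate path under discussion has roughly the
following shape. This is a sketch reconstructed from the quoted hunk and
the explanation above, not the exact code from the series: the function
name kasan_populate_shadow_pte, the KASAN_VMALLOC_INVALID poison value,
and the locking details are assumptions here.

static int kasan_populate_shadow_pte(pte_t *ptep, unsigned long addr)
{
	unsigned long page;
	pte_t pte;

	/* Another CPU may have populated this PTE already. */
	if (!pte_none(*ptep))
		return 0;

	page = __get_free_page(GFP_KERNEL);
	if (!page)
		return -ENOMEM;

	/* Poison the new shadow page before anyone can reach it. */
	memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE);
	pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);

	/*
	 * Order the poisoning before the PTE write: any CPU that can
	 * observe the PTE (and hence gain a translation for the shadow
	 * page) must also observe the poisoned contents.
	 */
	smp_wmb();

	spin_lock(&init_mm.page_table_lock);
	if (likely(pte_none(*ptep))) {
		set_pte_at(&init_mm, addr, ptep, pte);
		page = 0;
	}
	spin_unlock(&init_mm.page_table_lock);

	/* Lost the race: another CPU installed a shadow page first. */
	if (page)
		free_page(page);

	return 0;
}

Note that no smp_rmb() appears anywhere: the consumer-side ordering
comes from the MMU having to observe the new PTE before it can translate
any access to the shadow page at all.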