From mboxrd@z Thu Jan 1 00:00:00 1970
References: <20220118235244.540103-1-yury.norov@gmail.com> <319b09bc-56a2-207f-6180-3cc7d8cd43d1@arm.com>
From: Yury Norov
Date: Thu, 20 Jan 2022 21:26:18 -0800
Subject: Re: [PATCH] vmap(): don't allow invalid pages
To: Robin Murphy
Cc: "Russell King (Oracle)", Matthew Wilcox, Catalin Marinas, Will Deacon, Andrew Morton, Nicholas Piggin, Ding Tianhong, Anshuman Khandual, Alexey Klimov, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Content-Type: text/plain; charset="UTF-8"

On Thu, Jan 20, 2022 at 8:37 AM Robin Murphy wrote:
>
> On 2022-01-20 13:03, Russell King (Oracle) wrote:
> > On Thu, Jan 20, 2022 at 12:22:35PM +0000, Robin Murphy wrote:
> >> On 2022-01-19 19:12, Russell King (Oracle) wrote:
> >>> On Wed, Jan 19, 2022 at 06:43:10PM +0000, Robin Murphy wrote:
> >>>> Indeed, my impression is that the only legitimate way to get hold of a page
> >>>> pointer without assumed provenance is via pfn_to_page(), which is where
> >>>> pfn_valid() comes in. Thus pfn_valid(page_to_pfn()) really *should* be a
> >>>> tautology.
> >>>
> >>> That can only be true if pfn == page_to_pfn(pfn_to_page(pfn)) for all
> >>> values of pfn.
> >>>
> >>> Given how pfn_to_page() is defined in the sparsemem case:
> >>>
> >>> #define __pfn_to_page(pfn)					\
> >>> ({	unsigned long __pfn = (pfn);				\
> >>> 	struct mem_section *__sec = __pfn_to_section(__pfn);	\
> >>> 	__section_mem_map_addr(__sec) + __pfn;			\
> >>> })
> >>> #define page_to_pfn __page_to_pfn
> >>>
> >>> that isn't the case, especially when looking at page_to_pfn():
> >>>
> >>> #define __page_to_pfn(pg)					\
> >>> ({	const struct page *__pg = (pg);				\
> >>> 	int __sec = page_to_section(__pg);			\
> >>> 	(unsigned long)(__pg - __section_mem_map_addr(__nr_to_section(__sec))); \
> >>> })
> >>>
> >>> Where:
> >>>
> >>> static inline unsigned long page_to_section(const struct page *page)
> >>> {
> >>> 	return (page->flags >> SECTIONS_PGSHIFT) & SECTIONS_MASK;
> >>> }
> >>>
> >>> So if page_to_section() returns something that is, e.g. zero for an
> >>> invalid page in a non-zero section, you're not going to end up with
> >>> the right pfn from page_to_pfn().
> >>
> >> Right, I emphasised "should" in an attempt to imply "in the absence of
> >> serious bugs that have further-reaching consequences anyway".
> >>
> >>> As I've said now a couple of times, trying to determine if a struct
> >>> page pointer is valid is the wrong question to be asking.
> >>
> >> And doing so in one single place, on the justification of avoiding an
> >> incredibly niche symptom, is even more so. Not to mention that an address
> >> size fault is one of the best possible outcomes anyway, vs. the untold
> >> damage that may stem from accesses actually going through to random parts
> >> of the physical memory map.

It's not a single place. Many exported functions check their arguments one
way or another. __vunmap() in the vfree() path, for example, checks for
address alignment, which looks quite similar to me. And later it even does
BUG_ON(!page).

> > I don't see it as a "niche" symptom.
>
> The commit message specifically cites a Data Abort "at address
> translation later". Broadly speaking, a Data Abort due to an address
> size fault only occurs if you've been lucky enough that the bogus PA
> which got mapped is so spectacularly wrong that it's beyond the range
> configured in TCR.IPS. How many other architectures even have a
> mechanism like that?
>
> > If we start off with the struct page being invalid, then the result of
> > page_to_pfn() can not be relied upon to produce something that is
> > meaningful - which is exactly why the vmap() issue arises.
> >
> > With a pfn_valid() check, we at least know that the PFN points at
> > memory.
>
> No, we know it points to some PA space which has a struct page to
> represent it. pfn_valid() only says that pfn_to_page() will yield a
> valid result. That also includes things like reserved pages covering
> non-RAM areas, where a kernel VA mapping existing at all could
> potentially be fatal to the system even if it's never explicitly
> accessed - for all we know it might be a carveout belonging to
> overly-aggressive Secure software such that even a speculative prefetch
> might trigger an instant system reset.
>
> > However, that memory could be _anything_ in the system - it
> > could be the kernel image, and it could give userspace access to
> > change kernel code.
> >
> > So, while it is useful to do a pfn_valid() check in vmap(), as I said
> > to willy, this must _not_ be the primary check. It should IMHO use
> > WARN_ON() to make it blatantly obvious that it should be something we
> > expect _not_ to trigger under normal circumstances, but is there to
> > catch programming errors elsewhere.

It actually uses WARN_ON().

> Rather, "to partially catch unrelated programming errors elsewhere,
> provided the buggy code happens to call vmap() rather than any of the
> many other functions with a struct page * argument." That's where it
> stretches my definition of "useful" just a bit too far. It's not about
> perfect being the enemy of good, it's about why vmap() should be
> special, and death by a thousand "useful" cuts - if we don't trust the
> pointer, why not check its alignment for basic plausibility first?

Because in that particular case pfn_valid() is enough. If someone else
has a real case where IS_ALIGNED() would help, I will be all for adding
that check in vmap().

> If it
> seems valid, why not check if the page flags look sensible to make sure?
> How many useful little checks is too many?

I'd put in the 'too many' group those that test for something that has
never happened to people.

> Every bit of code footprint
> and execution overhead imposed unconditionally on all end users to
> theoretically save developers' debugging time still adds up.

Not theoretically - practically! End users will value kernel stability
even when buggy drivers are installed. They will also value developers
who fix bugs quickly.

It has been noticed that DEBUG_VIRTUAL could catch this bug.
But sometimes stopping the production hardware, building a custom kernel
with many debug options in the hope that one of them will help, and then
running a suspicious driver for hours can itself take more than a day.

Thanks,
Yury

> Although on
> that note, it looks like arch/arm's pfn_valid() is still a linear scan
> of the memblock array, so the overhead of adding that for every page in
> every vmap() might not even be so small...
>
> Robin.