From: Vlastimil Babka <vbabka@suse.cz>
To: Rick Edgecombe <rick.p.edgecombe@intel.com>, dave.hansen@intel.com,
 luto@kernel.org, peterz@infradead.org, linux-mm@kvack.org, x86@kernel.org,
 akpm@linux-foundation.org, linux-hardening@vger.kernel.org,
 kernel-hardening@lists.openwall.com
Cc: ira.weiny@intel.com, rppt@kernel.org, dan.j.williams@intel.com,
 linux-kernel@vger.kernel.org
Subject: Re: [PATCH RFC 0/9] PKS write protected page tables
Date: Wed, 5 May 2021 13:08:35 +0200
In-Reply-To: <20210505003032.489164-1-rick.p.edgecombe@intel.com>
References: <20210505003032.489164-1-rick.p.edgecombe@intel.com>

On 5/5/21 2:30 AM, Rick Edgecombe wrote:
> This is a POC for write protecting page tables with PKS (Protection Keys for
> Supervisor) [1]. The basic idea is to make the page tables read only, except
> temporarily on a per-cpu basis when they need to be modified. I’m looking for
> opinions on whether people like the general direction of this in terms of
> value and implementation.
>
> Why would people want this?
> ===========================
> Page tables are the basis for many types of protections and as such, are a
> juicy target for attackers. Mapping them read-only will make them harder to
> use in attacks.
>
> This protects against an attacker that has acquired the ability to write to
> the page tables. It's not foolproof because an attacker who can execute
> arbitrary code can either disable PKS directly, or simply call the same
> functions that the kernel uses for legitimate page table writes.

Yeah, it's a good idea. I once used a similar approach locally while
debugging a problem that appeared to be stray writes hitting page tables,
and without PKS that indeed meant making the whole page table pages
read-only whenever they were not being touched by the designated code.

> Why use PKS for this?
> =====================
> PKS is an upcoming CPU feature that allows supervisor virtual memory
> permissions to be changed without flushing the TLB, like PKU does for user
> memory. Protecting page tables would normally be really expensive because you
> would have to do it with paging itself. PKS helps by providing a way to toggle
> the writability of the page tables with just a per-cpu MSR.

I can see in patch 8/9 that you are flipping the MSR around individual
operations on page table entries. In my patch I tied making the page table
writable to taking the page table lock (IIRC I only had the PTE level fully
handled, though). I wonder if that would be the better tradeoff even for
your MSR approach?

Vlastimil
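
As a rough illustration of the two hook points being compared above, here is
a kernel-style sketch. pks_pgtable_allow_write() and pks_pgtable_block_write()
are hypothetical stand-ins for whatever primitive flips the page-table pkey
in the per-CPU PKRS MSR; none of the function names or call sites below are
taken from the posted patches.

#include <linux/mm.h>		/* pte_lockptr(), pte_t, pmd_t */
#include <linux/spinlock.h>

/* Option A: toggle PKS around each individual entry write (two WRMSRs per write). */
static inline void pks_set_pte(pte_t *ptep, pte_t pte)
{
	pks_pgtable_allow_write();
	WRITE_ONCE(*ptep, pte);		/* the actual entry write, as native_set_pte() does */
	pks_pgtable_block_write();
}

/* Option B: toggle PKS once per page table lock hold, covering every write inside. */
static inline spinlock_t *pte_lock_writable(struct mm_struct *mm, pmd_t *pmd)
{
	spinlock_t *ptl = pte_lockptr(mm, pmd);

	spin_lock(ptl);			/* also disables preemption, so the per-CPU PKS state stays valid */
	pks_pgtable_allow_write();
	return ptl;
}

static inline void pte_unlock_readonly(spinlock_t *ptl)
{
	pks_pgtable_block_write();
	spin_unlock(ptl);
}

The lock-scoped variant trades a pair of MSR writes per entry update for one
pair per critical section, at the cost of a wider window in which the current
CPU can write to any page table.
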
> Performance impacts
> ===================
> Setting direct map permissions on whatever random page gets allocated for a
> page table would result in a lot of kernel range shootdowns and direct map
> large page shattering. So the way the PKS page table memory is created is
> similar to this module page clustering series[2], where a cache of pages is
> replenished from 2MB pages such that the direct map permissions and associated
> breakage is localized on the direct map. In the PKS page tables case, a PKS
> key is pre-applied to the direct map for pages in the cache.
>
> There would be some costs of memory overhead in order to protect the direct
> map page tables. There would also be some extra kernel range shootdowns to
> replenish the cache on occasion, from setting the PKS key on the direct map of
> the new pages. I don’t have any actual performance data yet.
>
> This is based on V6 [1] of the core PKS infrastructure patches. PKS
> infrastructure follow-on's are planned to enable keys to be set to the same
> permissions globally. Since this usage needs a key to be set globally
> read-only by default, a small temporary solution is hacked up in patch 8. Long
> term, PKS protected page tables would use a better and more generic solution
> to achieve this.
>
> [1] https://lore.kernel.org/lkml/20210401225833.566238-1-ira.weiny@intel.com/
> [2] https://lore.kernel.org/lkml/20210405203711.1095940-1-rick.p.edgecombe@intel.com/
>
> Thanks,
>
> Rick
>
>
> Rick Edgecombe (9):
>   list: Support getting most recent element in list_lru
>   list: Support list head not in object for list_lru
>   x86/mm/cpa: Add grouped page allocations
>   mm: Explicitly zero page table lock ptr
>   x86, mm: Use cache of page tables
>   x86/mm/cpa: Add set_memory_pks()
>   x86/mm/cpa: Add perm callbacks to grouped pages
>   x86, mm: Protect page tables with PKS
>   x86, cpa: PKS protect direct map page tables
>
>  arch/x86/boot/compressed/ident_map_64.c |   5 +
>  arch/x86/include/asm/pgalloc.h          |   6 +
>  arch/x86/include/asm/pgtable.h          |  26 +-
>  arch/x86/include/asm/pgtable_64.h       |  33 ++-
>  arch/x86/include/asm/pkeys_common.h     |   8 +-
>  arch/x86/include/asm/set_memory.h       |  23 ++
>  arch/x86/mm/init.c                      |  40 +++
>  arch/x86/mm/pat/set_memory.c            | 312 +++++++++++++++++++++++-
>  arch/x86/mm/pgtable.c                   | 144 ++++++++++-
>  include/asm-generic/pgalloc.h           |  42 +++-
>  include/linux/list_lru.h                |  26 ++
>  include/linux/mm.h                      |   7 +
>  mm/Kconfig                              |   6 +-
>  mm/list_lru.c                           |  38 ++-
>  mm/memory.c                             |   1 +
>  mm/swap.c                               |   7 +
>  mm/swap_state.c                         |   6 +
>  17 files changed, 705 insertions(+), 25 deletions(-)
>