From: Jordan Niethe
Date: Thu, 16 Sep 2021 11:52:46 +1000
Subject: Re: [PATCH v6 4/4] powerpc/64s: Initialize and use a temporary mm for patching on Radix
To: "Christopher M. Riedl"
Cc: linuxppc-dev, linux-hardening@vger.kernel.org

On Thu, Sep 16, 2021 at 10:38 AM Christopher M. Riedl wrote:
>
> On Sat Sep 11, 2021 at 4:14 AM CDT, Jordan Niethe wrote:
> > On Sat, Sep 11, 2021 at 12:39 PM Christopher M. Riedl
> > wrote:
> > >
> > > When code patching a STRICT_KERNEL_RWX kernel the page containing the
> > > address to be patched is temporarily mapped as writeable. Currently, a
> > > per-cpu vmalloc patch area is used for this purpose. While the patch
> > > area is per-cpu, the temporary page mapping is inserted into the kernel
> > > page tables for the duration of patching. The mapping is exposed to CPUs
> > > other than the patching CPU - this is undesirable from a hardening
> > > perspective. Use a temporary mm instead which keeps the mapping local to
> > > the CPU doing the patching.
> > >
> > > Use the `poking_init` init hook to prepare a temporary mm and patching
> > > address. Initialize the temporary mm by copying the init mm. Choose a
> > > randomized patching address inside the temporary mm userspace address
> > > space. The patching address is randomized between PAGE_SIZE and
> > > DEFAULT_MAP_WINDOW-PAGE_SIZE.
> > >
> > > Bits of entropy with 64K page size on BOOK3S_64:
> > >
> > >         bits of entropy = log2(DEFAULT_MAP_WINDOW_USER64 / PAGE_SIZE)
> > >
> > >         PAGE_SIZE=64K, DEFAULT_MAP_WINDOW_USER64=128TB
> > >         bits of entropy = log2(128TB / 64K)
> > >         bits of entropy = 31
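
(Aside: the entropy arithmetic above checks out. A throwaway userspace
snippet to confirm it - the 128TB map window and 64K page size from the
message are hard-coded here purely as illustrative assumptions:)

#include <stdio.h>

int main(void)
{
	unsigned long long window = 1ULL << 47; /* DEFAULT_MAP_WINDOW_USER64 = 128TB */
	unsigned long long page = 1ULL << 16;   /* PAGE_SIZE = 64K */
	int bits = 0;

	/* integer log2 of the number of page-sized slots in the window */
	for (unsigned long long n = window / page; n > 1; n >>= 1)
		bits++;

	printf("bits of entropy = %d\n", bits); /* prints 31 */
	return 0;
}
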
> > > The upper limit is DEFAULT_MAP_WINDOW due to how the Book3s64 Hash MMU
> > > operates - by default the space above DEFAULT_MAP_WINDOW is not
> > > available. Currently the Hash MMU does not use a temporary mm so
> > > technically this upper limit isn't necessary; however, a larger
> > > randomization range does not further "harden" this overall approach and
> > > future work may introduce patching with a temporary mm on Hash as well.
> > >
> > > Randomization occurs only once during initialization at boot for each
> > > possible CPU in the system.
> > >
> > > Introduce two new functions, map_patch_mm() and unmap_patch_mm(), to
> > > respectively create and remove the temporary mapping with write
> > > permissions at patching_addr. Map the page with PAGE_KERNEL to set
> > > EAA[0] for the PTE which ignores the AMR (so no need to unlock/lock
> > > KUAP) according to PowerISA v3.0b Figure 35 on Radix.
> > >
> > > Based on x86 implementation:
> > >
> > > commit 4fc19708b165
> > > ("x86/alternatives: Initialize temporary mm for patching")
> > >
> > > and:
> > >
> > > commit b3fd8e83ada0
> > > ("x86/alternatives: Use temporary mm for text poking")
> > >
> > > Signed-off-by: Christopher M. Riedl
> > >
> > > ---
> > >
> > > v6: * Small clean-ups (naming, formatting, style, etc).
> > >     * Call stop_using_temporary_mm() before pte_unmap_unlock() after
> > >       patching.
> > >     * Replace BUG_ON()s in poking_init() w/ WARN_ON()s.
> > >
> > > v5: * Only support Book3s64 Radix MMU for now.
> > >     * Use a per-cpu datastructure to hold the patching_addr and
> > >       patching_mm to avoid the need for a synchronization lock/mutex.
> > >
> > > v4: * In the previous series this was two separate patches: one to init
> > >       the temporary mm in poking_init() (unused in powerpc at the time)
> > >       and the other to use it for patching (which removed all the
> > >       per-cpu vmalloc code). Now that we use poking_init() in the
> > >       existing per-cpu vmalloc approach, that separation doesn't work
> > >       as nicely anymore so I just merged the two patches into one.
> > >     * Preload the SLB entry and hash the page for the patching_addr
> > >       when using Hash on book3s64 to avoid taking an SLB and Hash fault
> > >       during patching. The previous implementation was a hack which
> > >       changed current->mm to allow the SLB and Hash fault handlers to
> > >       work with the temporary mm since both of those code-paths always
> > >       assume mm == current->mm.
> > >     * Also (hmm - seeing a trend here) with the book3s64 Hash MMU we
> > >       have to manage the mm->context.active_cpus counter and mm cpumask
> > >       since they determine (via mm_is_thread_local()) if the TLB flush
> > >       in pte_clear() is local or not - it should always be local when
> > >       we're using the temporary mm. On book3s64's Radix MMU we can
> > >       just call local_flush_tlb_mm().
> > >     * Use HPTE_USE_KERNEL_KEY on Hash to avoid costly lock/unlock of
> > >       KUAP.
> > > ---
> > >  arch/powerpc/lib/code-patching.c | 119 +++++++++++++++++++++++++++++--
> > >  1 file changed, 112 insertions(+), 7 deletions(-)
> > >
> > > diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
> > > index e802e42c2789..af8e2a02a9dd 100644
> > > --- a/arch/powerpc/lib/code-patching.c
> > > +++ b/arch/powerpc/lib/code-patching.c
> > > @@ -11,6 +11,7 @@
> > >  #include
> > >  #include
> > >  #include
> > > +#include
> > >
> > >  #include
> > >  #include
> > > @@ -103,6 +104,7 @@ static inline void stop_using_temporary_mm(struct temp_mm *temp_mm)
> > >
> > >  static DEFINE_PER_CPU(struct vm_struct *, text_poke_area);
> > >  static DEFINE_PER_CPU(unsigned long, cpu_patching_addr);
> > > +static DEFINE_PER_CPU(struct mm_struct *, cpu_patching_mm);
> > >
> > >  static int text_area_cpu_up(unsigned int cpu)
> > >  {
> > > @@ -126,8 +128,48 @@ static int text_area_cpu_down(unsigned int cpu)
> > >         return 0;
> > >  }
> > >
> > > +static __always_inline void __poking_init_temp_mm(void)
> > > +{
> > > +       int cpu;
> > > +       spinlock_t *ptl; /* for protecting pte table */
> >
> > ptl is just used so we don't have to open code allocating a pte in
> > patching_mm isn't it?
>
> Yup - I think that comment was a copy-pasta... I'll improve it.
>
> >
> > > +       pte_t *ptep;
> > > +       struct mm_struct *patching_mm;
> > > +       unsigned long patching_addr;
> > > +
> > > +       for_each_possible_cpu(cpu) {
> > > +               patching_mm = copy_init_mm();
> > > +               WARN_ON(!patching_mm);
> >
> > Would it be okay to just let the mmu handle null pointer dereferences?
>
> In general I think yes; however, the NULL dereference wouldn't occur
> until later during actual patching so I thought an early WARN here is
> appropriate.
>
> >
> > > +               per_cpu(cpu_patching_mm, cpu) = patching_mm;
> > > +
> > > +               /*
> > > +                * Choose a randomized, page-aligned address from the range:
> > > +                * [PAGE_SIZE, DEFAULT_MAP_WINDOW - PAGE_SIZE] The lower
> > > +                * address bound is PAGE_SIZE to avoid the zero-page. The
> > > +                * upper address bound is DEFAULT_MAP_WINDOW - PAGE_SIZE to
> > > +                * stay under DEFAULT_MAP_WINDOW with the Book3s64 Hash MMU.
> > > +                */
> > > +               patching_addr = PAGE_SIZE + ((get_random_long() & PAGE_MASK)
> > > +                               % (DEFAULT_MAP_WINDOW - 2 * PAGE_SIZE));
> > > +               per_cpu(cpu_patching_addr, cpu) = patching_addr;
> >
> > On x86 the randomization depends on CONFIG_RANDOMIZE_BASE. Should it
> > be controllable here too?
>
> IIRC CONFIG_RANDOMIZE_BASE is for KASLR which IMO doesn't really have
> much to do with this.
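
FWIW, the bounds and alignment of that expression do hold up - here is a
quick standalone check I ran (userspace-only sketch; the Book3s64 64K-page
and 128TB-window constants are hard-coded assumptions, and rand() stands
in for get_random_long()):

#include <assert.h>
#include <stdlib.h>

#define PAGE_SIZE          (1UL << 16)  /* 64K */
#define PAGE_MASK          (~(PAGE_SIZE - 1))
#define DEFAULT_MAP_WINDOW (1UL << 47)  /* 128TB */

int main(void)
{
	for (int i = 0; i < 1000000; i++) {
		unsigned long r = ((unsigned long)rand() << 32) ^ (unsigned long)rand();
		unsigned long addr = PAGE_SIZE + ((r & PAGE_MASK)
				% (DEFAULT_MAP_WINDOW - 2 * PAGE_SIZE));

		/*
		 * The modulus is itself a multiple of PAGE_SIZE, so taking a
		 * page-aligned value modulo it preserves page alignment.
		 */
		assert((addr & ~PAGE_MASK) == 0);
		assert(addr >= PAGE_SIZE);                      /* avoids the zero page */
		assert(addr <= DEFAULT_MAP_WINDOW - PAGE_SIZE); /* stays under the window */
	}
	return 0;
}
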
> >
> > > +
> > > +               /*
> > > +                * PTE allocation uses GFP_KERNEL which means we need to
> > > +                * pre-allocate the PTE here because we cannot do the
> > > +                * allocation during patching when IRQs are disabled.
> > > +                */
> > > +               ptep = get_locked_pte(patching_mm, patching_addr, &ptl);
> > > +               WARN_ON(!ptep);
> > > +               pte_unmap_unlock(ptep, ptl);
> > > +       }
> > > +}
> > > +
> > >  void __init poking_init(void)
> > >  {
> > > +       if (radix_enabled()) {
> > > +               __poking_init_temp_mm();
> >
> > Should this also be done with cpuhp_setup_state()?
>
> I think I prefer doing the setup ahead of time during boot. It does
> lose the ability to free up memory after a cpu is hot unplugged but
> I'm not sure if that's a big problem.
>
> >
> > > +               return;
> > > +       }
> > > +
> > >         WARN_ON(cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
> > >                 "powerpc/text_poke:online", text_area_cpu_up,
> > >                 text_area_cpu_down) < 0);
> > > @@ -197,30 +239,93 @@ static inline int unmap_patch_area(void)
> > >         return 0;
> > >  }
> > >
> > > +struct patch_mapping {
> > > +       spinlock_t *ptl; /* for protecting pte table */
> > > +       pte_t *ptep;
> > > +       struct temp_mm temp_mm;
> > > +};
> > > +
> > > +/*
> > > + * This can be called for kernel text or a module.
> > > + */
> > > +static int map_patch_mm(const void *addr, struct patch_mapping *patch_mapping)
> > > +{
> > > +       struct page *page;
> > > +       struct mm_struct *patching_mm = __this_cpu_read(cpu_patching_mm);
> > > +       unsigned long patching_addr = __this_cpu_read(cpu_patching_addr);
> > > +
> > > +       if (is_vmalloc_or_module_addr(addr))
> > > +               page = vmalloc_to_page(addr);
> > > +       else
> > > +               page = virt_to_page(addr);
> > > +
> > > +       patch_mapping->ptep = get_locked_pte(patching_mm, patching_addr,
> > > +                                            &patch_mapping->ptl);
> > > +       if (unlikely(!patch_mapping->ptep)) {
> > > +               pr_warn("map patch: failed to allocate pte for patching\n");
> > > +               return -1;
> > > +       }
> > > +
> > > +       set_pte_at(patching_mm, patching_addr, patch_mapping->ptep,
> > > +                  pte_mkdirty(mk_pte(page, PAGE_KERNEL)));
> > > +
> > > +       init_temp_mm(&patch_mapping->temp_mm, patching_mm);
> > > +       start_using_temporary_mm(&patch_mapping->temp_mm);
> > > +
> > > +       return 0;
> > > +}
> > > +
> > > +static int unmap_patch_mm(struct patch_mapping *patch_mapping)
> > > +{
> > > +       struct mm_struct *patching_mm = __this_cpu_read(cpu_patching_mm);
> > > +       unsigned long patching_addr = __this_cpu_read(cpu_patching_addr);
> > > +
> > > +       pte_clear(patching_mm, patching_addr, patch_mapping->ptep);
> > > +
> > > +       local_flush_tlb_mm(patching_mm);
> > > +       stop_using_temporary_mm(&patch_mapping->temp_mm);
> > > +
> > > +       pte_unmap_unlock(patch_mapping->ptep, patch_mapping->ptl);
> > > +
> > > +       return 0;
> > > +}
> > > +
> > >  static int do_patch_instruction(u32 *addr, struct ppc_inst instr)
> > >  {
> > >         int err, rc = 0;
> > >         u32 *patch_addr = NULL;
> > >         unsigned long flags;
> > > +       struct patch_mapping patch_mapping;
> > >
> > >         /*
> > > -        * During early early boot patch_instruction is called
> > > -        * when text_poke_area is not ready, but we still need
> > > -        * to allow patching. We just do the plain old patching
> > > +        * During early early boot patch_instruction is called when the
> > > +        * patching_mm/text_poke_area is not ready, but we still need to allow
> > > +        * patching. We just do the plain old patching.
> > >          */
> > > -       if (!this_cpu_read(text_poke_area))
> > > -               return raw_patch_instruction(addr, instr);
> > > +       if (radix_enabled()) {
> > > +               if (!this_cpu_read(cpu_patching_mm))
> > > +                       return raw_patch_instruction(addr, instr);
> > > +       } else {
> > > +               if (!this_cpu_read(text_poke_area))
> > > +                       return raw_patch_instruction(addr, instr);
> > > +       }
> >
> > Would testing cpu_patching_addr handle both of these cases?
> >
> > Then I think it might be clearer to do something like this:
> >
> >         if (radix_enabled()) {
> >                 return patch_instruction_mm(addr, instr);
> >         }
> >
> > patch_instruction_mm() would combine map_patch_mm(), then patching and
> > unmap_patch_mm() into one function.
> >
> > IMO, a bit of code duplication would be cleaner than checking multiple
> > times for radix_enabled() and having struct patch_mapping especially
> > for maintaining state.
>
> Hmm, I think it's a good idea - I'll give it a go for the next version.
> Thanks for the suggestion!
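
For the record, roughly what I had in mind - an untested sketch only,
reusing map_patch_mm()/unmap_patch_mm() from this patch and keeping
do_patch_instruction()'s error handling; the early-boot fallback folds
in as well:

static int patch_instruction_mm(u32 *addr, struct ppc_inst instr)
{
	int err, rc = 0;
	u32 *patch_addr;
	unsigned long flags;
	struct patch_mapping patch_mapping;

	/* Early boot: the per-cpu patching mm is not set up yet */
	if (!this_cpu_read(cpu_patching_mm))
		return raw_patch_instruction(addr, instr);

	local_irq_save(flags);

	err = map_patch_mm(addr, &patch_mapping);
	if (err)
		goto out;

	patch_addr = (u32 *)(__this_cpu_read(cpu_patching_addr) |
			     offset_in_page(addr));
	rc = __patch_instruction(addr, instr, patch_addr);

	err = unmap_patch_mm(&patch_mapping);
out:
	local_irq_restore(flags);
	return rc ? rc : err;
}
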
> >
> > >
> > >         local_irq_save(flags);
> > >
> > > -       err = map_patch_area(addr);
> > > +       if (radix_enabled())
> > > +               err = map_patch_mm(addr, &patch_mapping);
> > > +       else
> > > +               err = map_patch_area(addr);
> > >         if (err)
> > >                 goto out;
> > >
> > >         patch_addr = (u32 *)(__this_cpu_read(cpu_patching_addr) | offset_in_page(addr));
> > >         rc = __patch_instruction(addr, instr, patch_addr);
> > >
> > > -       err = unmap_patch_area();
> > > +       if (radix_enabled())
> > > +               err = unmap_patch_mm(&patch_mapping);
> > > +       else
> > > +               err = unmap_patch_area();
> > >
> > >  out:
> > >         local_irq_restore(flags);
> > > --
> > > 2.32.0
> > >
> > Thanks,
> > Jordan
>