From: Christophe Leroy <christophe.leroy@c-s.fr>
To: Christopher M Riedl <cmr@informatik.wtf>, linuxppc-dev@ozlabs.org
Subject: Re: [RFC PATCH 2/3] powerpc/lib: Initialize a temporary mm for code patching
Date: Wed, 8 Apr 2020 13:01:02 +0200	[thread overview]
Message-ID: <8ff6d279-fdd9-4e4d-b4e0-f5c5cba480a4@c-s.fr> (raw)
In-Reply-To: <1782990079.111623.1585624792778@privateemail.com>



On 31/03/2020 at 05:19, Christopher M Riedl wrote:
>> On March 24, 2020 11:10 AM Christophe Leroy <christophe.leroy@c-s.fr> wrote:
>>
>>   
>> On 23/03/2020 at 05:52, Christopher M. Riedl wrote:
>>> When code patching a STRICT_KERNEL_RWX kernel the page containing the
>>> address to be patched is temporarily mapped with permissive memory
>>> protections. Currently, a per-cpu vmalloc patch area is used for this
>>> purpose. While the patch area is per-cpu, the temporary page mapping is
>>> inserted into the kernel page tables for the duration of the patching.
>>> The mapping is exposed to CPUs other than the patching CPU - this is
>>> undesirable from a hardening perspective.
>>>
>>> Use the `poking_init` init hook to prepare a temporary mm and patching
>>> address. Initialize the temporary mm by copying the init mm. Choose a
>>> randomized patching address inside the temporary mm userspace address
>>> portion. The next patch uses the temporary mm and patching address for
>>> code patching.
>>>
>>> Based on x86 implementation:
>>>
>>> commit 4fc19708b165
>>> ("x86/alternatives: Initialize temporary mm for patching")
>>>
>>> Signed-off-by: Christopher M. Riedl <cmr@informatik.wtf>
>>> ---
>>>    arch/powerpc/lib/code-patching.c | 26 ++++++++++++++++++++++++++
>>>    1 file changed, 26 insertions(+)
>>>
>>> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
>>> index 3345f039a876..18b88ecfc5a8 100644
>>> --- a/arch/powerpc/lib/code-patching.c
>>> +++ b/arch/powerpc/lib/code-patching.c
>>> @@ -11,6 +11,8 @@
>>>    #include <linux/cpuhotplug.h>
>>>    #include <linux/slab.h>
>>>    #include <linux/uaccess.h>
>>> +#include <linux/sched/task.h>
>>> +#include <linux/random.h>
>>>    
>>>    #include <asm/pgtable.h>
>>>    #include <asm/tlbflush.h>
>>> @@ -39,6 +41,30 @@ int raw_patch_instruction(unsigned int *addr, unsigned int instr)
>>>    }
>>>    
>>>    #ifdef CONFIG_STRICT_KERNEL_RWX
>>> +
>>> +__ro_after_init struct mm_struct *patching_mm;
>>> +__ro_after_init unsigned long patching_addr;
>>
>> Can we make those static?
>>
> 
> Yes, makes sense to me.
> 
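(I.e., something along these lines, untested:

	static struct mm_struct *patching_mm __ro_after_init;
	static unsigned long patching_addr __ro_after_init;

assuming nothing outside this file needs to reach them.)
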
>>> +
>>> +void __init poking_init(void)
>>> +{
>>> +	spinlock_t *ptl; /* for protecting pte table */
>>> +	pte_t *ptep;
>>> +
>>> +	patching_mm = copy_init_mm();
>>> +	BUG_ON(!patching_mm);
>>
>> Does it need to be a BUG_ON()? Can't we fail gracefully with just a
>> WARN_ON()?
>>
> 
> I'm not sure what failing gracefully would mean here. The main reason this could
> fail is if there is not enough memory to allocate the patching_mm. The
> previous implementation had this justification for BUG_ON():

But the system can continue running just fine after this failure.
Only the things that make use of code patching will fail (ftrace, kgdb, ...).

Checkpatch says: "Avoid crashing the kernel - try using WARN_ON & 
recovery code rather than BUG() or BUG_ON()"

All vital code patching has already been done previously, so I think a 
WARN_ON() should be enough, plus returning non-zero to indicate that the 
late_initcall failed.
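
Something along these lines, as an untested sketch (assuming the hook is 
allowed to return an error code, like the old late_initcall did):

	patching_mm = copy_init_mm();
	if (WARN_ON(!patching_mm))
		return -ENOMEM;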


> 
> /*
>   * Run as a late init call. This allows all the boot time patching to be done
>   * simply by patching the code, and then we're called here prior to
>   * mark_rodata_ro(), which happens after all init calls are run. Although
>   * BUG_ON() is rude, in this case it should only happen if ENOMEM, and we judge
>   * it as being preferable to a kernel that will crash later when someone tries
>   * to use patch_instruction().
>   */
> static int __init setup_text_poke_area(void)
> {
>          BUG_ON(!cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
>                  "powerpc/text_poke:online", text_area_cpu_up,
>                  text_area_cpu_down));
> 
>          return 0;
> }
> late_initcall(setup_text_poke_area);
> 
> I think the BUG_ON() is appropriate even if only to adhere to the previous
> judgement call. I can add a similar comment explaining the reasoning if
> that helps.
> 
>>> +
>>> +	/*
>>> +	 * In hash we cannot go above DEFAULT_MAP_WINDOW easily.
>>> +	 * XXX: Do we want additional bits of entropy for radix?
>>> +	 */
>>> +	patching_addr = (get_random_long() & PAGE_MASK) %
>>> +		(DEFAULT_MAP_WINDOW - PAGE_SIZE);
>>> +
>>> +	ptep = get_locked_pte(patching_mm, patching_addr, &ptl);
>>> +	BUG_ON(!ptep);
>>
>> Same here, can we fail gracefully instead ?
>>
> 
> Same reasoning as above.

Here as well, a WARN_ON() should be enough; the system will continue 
running after that.
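
Same pattern as above, as an untested sketch:

	ptep = get_locked_pte(patching_mm, patching_addr, &ptl);
	if (WARN_ON(!ptep))
		return -ENOMEM;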

> 
>>> +	pte_unmap_unlock(ptep, ptl);
>>> +}
>>> +
>>>    static DEFINE_PER_CPU(struct vm_struct *, text_poke_area);
>>>    
>>>    static int text_area_cpu_up(unsigned int cpu)
>>>
>>
>> Christophe

Christophe

Thread overview: 23+ messages
2020-03-23  4:52 [RFC PATCH 0/3] Use per-CPU temporary mappings for patching Christopher M. Riedl
2020-03-23  4:52 ` [RFC PATCH 1/3] powerpc/mm: Introduce temporary mm Christopher M. Riedl
2020-03-23  6:10   ` kbuild test robot
2020-03-24 16:07   ` Christophe Leroy
2020-03-31  2:41     ` Christopher M Riedl
2020-04-18 10:36   ` Christophe Leroy
2020-03-23  4:52 ` [RFC PATCH 2/3] powerpc/lib: Initialize a temporary mm for code patching Christopher M. Riedl
2020-03-24 16:10   ` Christophe Leroy
2020-03-31  3:19     ` Christopher M Riedl
2020-04-08 11:01       ` Christophe Leroy [this message]
2020-04-15  4:39         ` Christopher M Riedl
2020-04-17  0:57         ` Michael Ellerman
2020-04-24 13:11           ` Steven Rostedt
2020-03-23  4:52 ` [RFC PATCH 3/3] powerpc/lib: Use " Christopher M. Riedl
2020-03-24 16:25   ` Christophe Leroy
2020-04-15  5:11     ` Christopher M Riedl
2020-04-15  8:45       ` Christophe Leroy
2020-04-15 16:24         ` Christopher M Riedl
2020-03-23 11:30 ` [RFC PATCH 0/3] Use per-CPU temporary mappings for patching Christophe Leroy
2020-03-23 14:04   ` Christophe Leroy
     [not found]     ` <738663735.45060.1584982175261@privateemail.com>
2020-03-23 16:59       ` Christophe Leroy
2020-03-24 16:02     ` Christophe Leroy
2020-03-25  2:51 ` Andrew Donnellan
