From: Khalid Aziz <khalid.aziz@oracle.com>
To: Laura Abbott <labbott@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: juergh@gmail.com, tycho@tycho.ws, jsteckli@amazon.de,
	ak@linux.intel.com, torvalds@linux-foundation.org,
	liran.alon@oracle.com, keescook@google.com,
	Juerg Haefliger <juerg.haefliger@canonical.com>,
	deepa.srinivasan@oracle.com, chris.hyser@oracle.com,
	tyhicks@canonical.com, dwmw@amazon.co.uk,
	andrew.cooper3@citrix.com, jcm@redhat.com,
	boris.ostrovsky@oracle.com, kanth.ghatraju@oracle.com,
	joao.m.martins@oracle.com, jmattson@google.com,
	pradeep.vincent@oracle.com, john.haxby@oracle.com,
	tglx@linutronix.de, kirill.shutemov@linux.intel.com, hch@lst.de,
	steven.sistare@oracle.com, kernel-hardening@lists.openwall.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	Tycho Andersen <tycho@docker.com>
Subject: Re: [RFC PATCH v7 05/16] arm64/mm: Add support for XPFO
Date: Tue, 12 Feb 2019 13:34:15 -0700	[thread overview]
Message-ID: <76668db0-e87a-5294-1d71-2ab42a48425c@oracle.com> (raw)
In-Reply-To: <7497bd44-1fda-e073-ba7f-18a76577b64a@redhat.com>

[-- Attachment #1: Type: text/plain, Size: 5505 bytes --]

On 2/12/19 1:01 PM, Laura Abbott wrote:
> On 2/12/19 7:52 AM, Khalid Aziz wrote:
>> On 1/23/19 7:24 AM, Konrad Rzeszutek Wilk wrote:
>>> On Thu, Jan 10, 2019 at 02:09:37PM -0700, Khalid Aziz wrote:
>>>> From: Juerg Haefliger <juerg.haefliger@canonical.com>
>>>>
>>>> Enable support for eXclusive Page Frame Ownership (XPFO) for arm64 and
>>>> provide a hook for updating a single kernel page table entry (which is
>>>> required by the generic XPFO code).
>>>>
>>>> v6: use flush_tlb_kernel_range() instead of __flush_tlb_one()
>>>>
>>>> CC: linux-arm-kernel@lists.infradead.org
>>>> Signed-off-by: Juerg Haefliger <juerg.haefliger@canonical.com>
>>>> Signed-off-by: Tycho Andersen <tycho@docker.com>
>>>> Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
>>>> ---
>>>>   arch/arm64/Kconfig     |  1 +
>>>>   arch/arm64/mm/Makefile |  2 ++
>>>>   arch/arm64/mm/xpfo.c   | 58 ++++++++++++++++++++++++++++++++++++++++++
>>>>   3 files changed, 61 insertions(+)
>>>>   create mode 100644 arch/arm64/mm/xpfo.c
>>>>
>>>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>>>> index ea2ab0330e3a..f0a9c0007d23 100644
>>>> --- a/arch/arm64/Kconfig
>>>> +++ b/arch/arm64/Kconfig
>>>> @@ -171,6 +171,7 @@ config ARM64
>>>>       select SWIOTLB
>>>>       select SYSCTL_EXCEPTION_TRACE
>>>>       select THREAD_INFO_IN_TASK
>>>> +    select ARCH_SUPPORTS_XPFO
>>>>       help
>>>>         ARM 64-bit (AArch64) Linux support.
>>>>   diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
>>>> index 849c1df3d214..cca3808d9776 100644
>>>> --- a/arch/arm64/mm/Makefile
>>>> +++ b/arch/arm64/mm/Makefile
>>>> @@ -12,3 +12,5 @@ KASAN_SANITIZE_physaddr.o    += n
>>>>     obj-$(CONFIG_KASAN)        += kasan_init.o
>>>>   KASAN_SANITIZE_kasan_init.o    := n
>>>> +
>>>> +obj-$(CONFIG_XPFO)        += xpfo.o
>>>> diff --git a/arch/arm64/mm/xpfo.c b/arch/arm64/mm/xpfo.c
>>>> new file mode 100644
>>>> index 000000000000..678e2be848eb
>>>> --- /dev/null
>>>> +++ b/arch/arm64/mm/xpfo.c
>>>> @@ -0,0 +1,58 @@
>>>> +/*
>>>> + * Copyright (C) 2017 Hewlett Packard Enterprise Development, L.P.
>>>> + * Copyright (C) 2016 Brown University. All rights reserved.
>>>> + *
>>>> + * Authors:
>>>> + *   Juerg Haefliger <juerg.haefliger@hpe.com>
>>>> + *   Vasileios P. Kemerlis <vpk@cs.brown.edu>
>>>> + *
>>>> + * This program is free software; you can redistribute it and/or modify it
>>>> + * under the terms of the GNU General Public License version 2 as published by
>>>> + * the Free Software Foundation.
>>>> + */
>>>> +
>>>> +#include <linux/mm.h>
>>>> +#include <linux/module.h>
>>>> +
>>>> +#include <asm/tlbflush.h>
>>>> +
>>>> +/*
>>>> + * Lookup the page table entry for a virtual address and return a pointer to
>>>> + * the entry. Based on x86 tree.
>>>> + */
>>>> +static pte_t *lookup_address(unsigned long addr)
>>>> +{
>>>> +    pgd_t *pgd;
>>>> +    pud_t *pud;
>>>> +    pmd_t *pmd;
>>>> +
>>>> +    pgd = pgd_offset_k(addr);
>>>> +    if (pgd_none(*pgd))
>>>> +        return NULL;
>>>> +
>>>> +    pud = pud_offset(pgd, addr);
>>>> +    if (pud_none(*pud))
>>>> +        return NULL;
>>>> +
>>>> +    pmd = pmd_offset(pud, addr);
>>>> +    if (pmd_none(*pmd))
>>>> +        return NULL;
>>>> +
>>>> +    return pte_offset_kernel(pmd, addr);
>>>> +}
>>>> +
>>>> +/* Update a single kernel page table entry */
>>>> +inline void set_kpte(void *kaddr, struct page *page, pgprot_t prot)
>>>> +{
>>>> +    pte_t *pte = lookup_address((unsigned long)kaddr);
>>>> +
>>>> +    set_pte(pte, pfn_pte(page_to_pfn(page), prot));
>>>
>>> Though, on the other hand: what if the page is a PMD? Do you really want
>>> to do this?
>>>
>>> What if 'pte' is NULL?
>>>> +}
>>>> +
>>>> +inline void xpfo_flush_kernel_tlb(struct page *page, int order)
>>>> +{
>>>> +    unsigned long kaddr = (unsigned long)page_address(page);
>>>> +    unsigned long size = PAGE_SIZE;
>>>> +
>>>> +    flush_tlb_kernel_range(kaddr, kaddr + (1 << order) * size);
>>>
>>> Ditto here. You are assuming it is a PTE, but it may be a PMD or such.
>>> Or worse - lookup_address() could return NULL.
>>>
>>>> +}
>>>> -- 
>>>> 2.17.1
>>>>
>>
>> Hi Konrad,
>>
>> This makes sense. The x86 version of set_kpte() checks the pte for NULL and
>> also checks whether the page is a PMD. Your point about adding a level
>> parameter to lookup_address() for arm64 now makes more sense.
>>
>> Can someone with knowledge of the arm64 MMU make a recommendation here?
>>
>> Thanks,
>> Khalid
>>
> 
> arm64 can't split larger pages and requires that everything be
> mapped as pages (see [RFC PATCH v7 08/16] arm64/mm: disable
> section/contiguous mappings if XPFO is enabled). Any
> situation where we would get something other than a pte
> would be a bug.

Thanks, Laura! That helps a lot. I would think checking for a NULL pte in
set_kpte() still makes sense, since lookup_address() can return NULL.
Something like:

--- arch/arm64/mm/xpfo.c	2019-01-30 13:36:39.857185612 -0700
+++ arch/arm64/mm/xpfo.c.new	2019-02-12 13:26:47.471633031 -0700
@@ -46,6 +46,11 @@
 {
 	pte_t *pte = lookup_address((unsigned long)kaddr);

+	if (unlikely(!pte)) {
+		WARN(1, "xpfo: invalid address %p\n", kaddr);
+		return;
+	}
+
 	set_pte(pte, pfn_pte(page_to_pfn(page), prot));
 }
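
As an aside, if we ever did want the level information Konrad suggested,
lookup_address() could be extended along the lines of the x86 interface.
The sketch below is only meant to show the shape of such a change - the
helper name and the level constants are made up here, it is not part of
the patch, and per Laura's point above anything other than a regular pte
would already be a bug on arm64 with XPFO:

/*
 * Sketch only: a level-reporting variant of lookup_address(), loosely
 * modelled on the x86 interface.
 */
enum xpfo_pg_level {
	XPFO_PG_LEVEL_NONE,
	XPFO_PG_LEVEL_PTE,
	XPFO_PG_LEVEL_PMD,
	XPFO_PG_LEVEL_PUD,
};

static pte_t *lookup_address_level(unsigned long addr,
				   enum xpfo_pg_level *level)
{
	pgd_t *pgd = pgd_offset_k(addr);
	pud_t *pud;
	pmd_t *pmd;

	*level = XPFO_PG_LEVEL_NONE;
	if (pgd_none(*pgd))
		return NULL;

	pud = pud_offset(pgd, addr);
	if (pud_none(*pud))
		return NULL;
	if (pud_sect(*pud)) {
		/* Block (section) mapping at PUD level */
		*level = XPFO_PG_LEVEL_PUD;
		return (pte_t *)pud;
	}

	pmd = pmd_offset(pud, addr);
	if (pmd_none(*pmd))
		return NULL;
	if (pmd_sect(*pmd)) {
		/* Block (section) mapping at PMD level */
		*level = XPFO_PG_LEVEL_PMD;
		return (pte_t *)pmd;
	}

	*level = XPFO_PG_LEVEL_PTE;
	return pte_offset_kernel(pmd, addr);
}

set_kpte() could then refuse anything that is not a regular pte, e.g.:

	enum xpfo_pg_level level;
	pte_t *pte = lookup_address_level((unsigned long)kaddr, &level);

	if (unlikely(!pte || level != XPFO_PG_LEVEL_PTE)) {
		WARN(1, "xpfo: no pte-level mapping for %p\n", kaddr);
		return;
	}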

--
Khalid

[-- Attachment #2: pEpkey.asc --]
[-- Type: application/pgp-keys, Size: 2501 bytes --]

Thread overview: 65+ messages
2019-01-10 21:09 [RFC PATCH v7 00/16] Add support for eXclusive Page Frame Ownership Khalid Aziz
2019-01-10 21:09 ` [RFC PATCH v7 01/16] mm: add MAP_HUGETLB support to vm_mmap Khalid Aziz
2019-01-10 21:09 ` [RFC PATCH v7 02/16] x86: always set IF before oopsing from page fault Khalid Aziz
2019-01-10 21:09 ` [RFC PATCH v7 03/16] mm, x86: Add support for eXclusive Page Frame Ownership (XPFO) Khalid Aziz
2019-01-10 21:09 ` [RFC PATCH v7 04/16] swiotlb: Map the buffer if it was unmapped by XPFO Khalid Aziz
2019-01-23 14:16   ` Konrad Rzeszutek Wilk
2019-01-10 21:09 ` [RFC PATCH v7 05/16] arm64/mm: Add support for XPFO Khalid Aziz
2019-01-10 21:09   ` Khalid Aziz
2019-01-23 14:20   ` Konrad Rzeszutek Wilk
2019-01-23 14:20     ` Konrad Rzeszutek Wilk
2019-02-12 15:45     ` Khalid Aziz
2019-02-12 15:45       ` Khalid Aziz
2019-01-23 14:24   ` Konrad Rzeszutek Wilk
2019-01-23 14:24     ` Konrad Rzeszutek Wilk
2019-02-12 15:52     ` Khalid Aziz
2019-02-12 15:52       ` Khalid Aziz
2019-02-12 20:01       ` Laura Abbott
2019-02-12 20:01         ` Laura Abbott
2019-02-12 20:34         ` Khalid Aziz [this message]
2019-02-12 20:34           ` Khalid Aziz
2019-01-10 21:09 ` [RFC PATCH v7 06/16] xpfo: add primitives for mapping underlying memory Khalid Aziz
2019-01-10 21:09 ` [RFC PATCH v7 07/16] arm64/mm, xpfo: temporarily map dcache regions Khalid Aziz
2019-01-10 21:09   ` Khalid Aziz
2019-01-11 14:54   ` Tycho Andersen
2019-01-11 14:54     ` Tycho Andersen
2019-01-11 18:28     ` Khalid Aziz
2019-01-11 18:28       ` Khalid Aziz
2019-01-11 19:50       ` Tycho Andersen
2019-01-11 19:50         ` Tycho Andersen
2019-01-23 14:56   ` Konrad Rzeszutek Wilk
2019-01-23 14:56     ` Konrad Rzeszutek Wilk
2019-01-10 21:09 ` [RFC PATCH v7 08/16] arm64/mm: disable section/contiguous mappings if XPFO is enabled Khalid Aziz
2019-01-10 21:09   ` Khalid Aziz
2019-01-10 21:09 ` [RFC PATCH v7 09/16] mm: add a user_virt_to_phys symbol Khalid Aziz
2019-01-10 21:09   ` Khalid Aziz
2019-01-23 15:03   ` Konrad Rzeszutek Wilk
2019-01-23 15:03     ` Konrad Rzeszutek Wilk
2019-01-10 21:09 ` [RFC PATCH v7 10/16] lkdtm: Add test for XPFO Khalid Aziz
2019-01-10 21:09 ` [RFC PATCH v7 11/16] mm, x86: omit TLB flushing by default for XPFO page table modifications Khalid Aziz
2019-01-10 21:09 ` [RFC PATCH v7 12/16] xpfo, mm: remove dependency on CONFIG_PAGE_EXTENSION Khalid Aziz
2019-01-16 15:01   ` Julian Stecklina
2019-01-10 21:09 ` [RFC PATCH v7 13/16] xpfo, mm: optimize spinlock usage in xpfo_kunmap Khalid Aziz
2019-01-10 21:09 ` [RFC PATCH v7 14/16] EXPERIMENTAL: xpfo, mm: optimize spin lock usage in xpfo_kmap Khalid Aziz
2019-01-17  0:18   ` Laura Abbott
2019-01-17 15:14     ` Khalid Aziz
2019-01-10 21:09 ` [RFC PATCH v7 15/16] xpfo, mm: Fix hang when booting with "xpfotlbflush" Khalid Aziz
2019-01-10 21:09 ` [RFC PATCH v7 16/16] xpfo, mm: Defer TLB flushes for non-current CPUs (x86 only) Khalid Aziz
2019-01-10 23:07 ` [RFC PATCH v7 00/16] Add support for eXclusive Page Frame Ownership Kees Cook
2019-01-10 23:07   ` Kees Cook
2019-01-11  0:20   ` Khalid Aziz
2019-01-11  0:44   ` Andy Lutomirski
2019-01-11  0:44     ` Andy Lutomirski
2019-01-11 21:45     ` Khalid Aziz
2019-01-10 23:40 ` Dave Hansen
2019-01-11  9:59   ` Peter Zijlstra
2019-01-11 18:21   ` Khalid Aziz
2019-01-11 20:42     ` Dave Hansen
2019-01-11 21:06       ` Andy Lutomirski
2019-01-11 21:06         ` Andy Lutomirski
2019-01-11 23:25         ` Khalid Aziz
2019-01-11 23:23       ` Khalid Aziz
2019-01-16  1:28 ` Laura Abbott
2019-01-16 14:56 ` Julian Stecklina
2019-01-16 15:16   ` Khalid Aziz
2019-01-17 23:38 ` Laura Abbott
