From: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
To: vgoyal@redhat.com, ebiederm@xmission.com, akpm@linux-foundation.org
Cc: cpw@sgi.com, kumagai-atsushi@mxc.nes.nec.co.jp,
	lisa.mitchell@hp.com, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org, zhangyanfei@cn.fujitsu.com,
	jingbai.ma@hp.com, linux-mm@kvack.org, riel@redhat.com,
	walken@google.com, hughd@google.com,
	kosaki.motohiro@jp.fujitsu.com
Subject: [PATCH v8 9/9] vmcore: support mmap() on /proc/vmcore
Date: Thu, 23 May 2013 14:25:48 +0900	[thread overview]
Message-ID: <20130523052547.13864.83306.stgit@localhost6.localdomain6> (raw)
In-Reply-To: <20130523052421.13864.83978.stgit@localhost6.localdomain6>

This patch introduces mmap_vmcore().

Don't permit writable or executable mappings, even via a later
mprotect(), because this mmap() is aimed at reading crash dump memory.
A non-writable mapping is also a requirement of remap_pfn_range() when
mapping linear pages onto non-consecutive physical pages; see
is_cow_mapping().

Set the VM_MIXEDMAP flag so that memory can be remapped by both
remap_pfn_range() and remap_vmalloc_range_partial() at the same time
for a single vma. With it, do_munmap() can correctly clean up a vma
partially remapped by the two functions in the abnormal case. See
zap_pte_range(), vm_normal_page() and their comments for details.

On x86-32 PAE kernels, mmap() supports at most 16TB of memory. This
limitation comes from the fact that the third argument of
remap_pfn_range(), pfn, is an unsigned long, which is only 32 bits
wide on x86-32: 2^32 pfns of 4KB pages cover 16TB.

Signed-off-by: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
---

 fs/proc/vmcore.c |   86 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 86 insertions(+), 0 deletions(-)

diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index f71157d..80221d7 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -20,6 +20,7 @@
 #include <linux/init.h>
 #include <linux/crash_dump.h>
 #include <linux/list.h>
+#include <linux/vmalloc.h>
 #include <asm/uaccess.h>
 #include <asm/io.h>
 #include "internal.h"
@@ -200,9 +201,94 @@ static ssize_t read_vmcore(struct file *file, char __user *buffer,
 	return acc;
 }
 
+static int mmap_vmcore(struct file *file, struct vm_area_struct *vma)
+{
+	size_t size = vma->vm_end - vma->vm_start;
+	u64 start, end, len, tsz;
+	struct vmcore *m;
+
+	start = (u64)vma->vm_pgoff << PAGE_SHIFT;
+	end = start + size;
+
+	if (size > vmcore_size || end > vmcore_size)
+		return -EINVAL;
+
+	if (vma->vm_flags & (VM_WRITE | VM_EXEC))
+		return -EPERM;
+
+	vma->vm_flags &= ~(VM_MAYWRITE | VM_MAYEXEC);
+	vma->vm_flags |= VM_MIXEDMAP;
+
+	len = 0;
+
+	if (start < elfcorebuf_sz) {
+		u64 pfn;
+
+		tsz = elfcorebuf_sz - start;
+		if (size < tsz)
+			tsz = size;
+		pfn = __pa(elfcorebuf + start) >> PAGE_SHIFT;
+		if (remap_pfn_range(vma, vma->vm_start, pfn, tsz,
+				    vma->vm_page_prot))
+			return -EAGAIN;
+		size -= tsz;
+		start += tsz;
+		len += tsz;
+
+		if (size == 0)
+			return 0;
+	}
+
+	if (start < elfcorebuf_sz + elfnotes_sz) {
+		void *kaddr;
+
+		tsz = elfcorebuf_sz + elfnotes_sz - start;
+		if (size < tsz)
+			tsz = size;
+		kaddr = elfnotes_buf + start - elfcorebuf_sz;
+		if (remap_vmalloc_range_partial(vma, vma->vm_start + len,
+						kaddr, tsz)) {
+			do_munmap(vma->vm_mm, vma->vm_start, len);
+			return -EAGAIN;
+		}
+		size -= tsz;
+		start += tsz;
+		len += tsz;
+
+		if (size == 0)
+			return 0;
+	}
+
+	list_for_each_entry(m, &vmcore_list, list) {
+		if (start < m->offset + m->size) {
+			u64 paddr = 0;
+
+			tsz = m->offset + m->size - start;
+			if (size < tsz)
+				tsz = size;
+			paddr = m->paddr + start - m->offset;
+			if (remap_pfn_range(vma, vma->vm_start + len,
+					    paddr >> PAGE_SHIFT, tsz,
+					    vma->vm_page_prot)) {
+				do_munmap(vma->vm_mm, vma->vm_start, len);
+				return -EAGAIN;
+			}
+			size -= tsz;
+			start += tsz;
+			len += tsz;
+
+			if (size == 0)
+				return 0;
+		}
+	}
+
+	return 0;
+}
+
 static const struct file_operations proc_vmcore_operations = {
 	.read		= read_vmcore,
 	.llseek		= default_llseek,
+	.mmap		= mmap_vmcore,
 };
 
 static struct vmcore* __init get_new_element(void)

