All of lore.kernel.org
* [PATCH] kexec: add restriction on the kexec_load
@ 2016-07-20  2:00 ` zhongjiang
  0 siblings, 0 replies; 13+ messages in thread
From: zhongjiang @ 2016-07-20  2:00 UTC (permalink / raw)
  To: ebiederm, yinghai, horms, akpm; +Cc: kexec, linux-mm

From: zhong jiang <zhongjiang@huawei.com>

I hit the following problem when running trinity on my system. The
kernel is version 3.4, but mainline has the same problem. The root
cause is that the segment size is too large: it can cover most of
an area, or even the whole of memory, so a large amount of time may
be wasted obtaining a usable page, and other tasks block until the
test case quits. At the same time, the OOM killer may be triggered.

ck time:20160628120131-243c5
rlock reason:SOFT-WATCHDOG detected! on cpu 5.
CPU 5 Pid: 9485, comm: trinity-c5
RIP: 0010:[<ffffffff8111a4cf>]  [<ffffffff8111a4cf>] next_zones_zonelist+0x3f/0x60
RSP: 0018:ffff88088783bc38  EFLAGS: 00000283
RAX: ffff8808bffd9b08 RBX: ffff88088783bbb8 RCX: ffff88088783bd30
RDX: ffff88088f15a248 RSI: 0000000000000002 RDI: 0000000000000000
RBP: ffff88088783bc38 R08: ffff8808bffd8d80 R09: 0000000412c4d000
R10: 0000000412c4e000 R11: 0000000000000000 R12: 0000000000000002
R13: 0000000000000000 R14: ffff8808bffd9b00 R15: 0000000000000000
FS:  00007f91137ee700(0000) GS:ffff88089f2a0000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000000016161a CR3: 0000000887820000 CR4: 00000000000407e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process trinity-c5 (pid: 9485, threadinfo ffff88088783a000, task ffff88088f159980)
Stack:
 ffff88088783bd88 ffffffff81106eac ffff8808bffd8d80 0000000000000000
 0000000000000000 ffffffff8124c2be 0000000000000001 000000000000001e
 0000000000000000 ffffffff8124c2be 0000000000000002 ffffffff8124c2be
Call Trace:
 [<ffffffff81106eac>] __alloc_pages_nodemask+0x14c/0x8f0
 [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
 [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
 [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
 [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
 [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
 [<ffffffff8113e5ef>] alloc_pages_current+0xaf/0x120
 [<ffffffff810a0da0>] kimage_alloc_pages+0x10/0x60
 [<ffffffff810a15ad>] kimage_alloc_control_pages+0x5d/0x270
 [<ffffffff81027e85>] machine_kexec_prepare+0xe5/0x6c0
 [<ffffffff810a0d52>] ? kimage_free_page_list+0x52/0x70
 [<ffffffff810a1921>] sys_kexec_load+0x141/0x600
 [<ffffffff8115e6b0>] ? vfs_write+0x100/0x180
 [<ffffffff8145fbd9>] system_call_fastpath+0x16/0x1b

This patch adds a check to sanity_check_segment_list() to restrict
the segment size.

Signed-off-by: zhong jiang <zhongjiang@huawei.com>
---
 arch/x86/include/asm/kexec.h |  1 +
 kernel/kexec_core.c          | 12 ++++++++++++
 2 files changed, 13 insertions(+)

diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index d2434c1..b31a723 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -67,6 +67,7 @@ struct kimage;
 /* Memory to backup during crash kdump */
 #define KEXEC_BACKUP_SRC_START	(0UL)
 #define KEXEC_BACKUP_SRC_END	(640 * 1024UL)	/* 640K */
+#define KEXEC_MAX_SEGMENT_SIZE	(5 * 1024 * 1024UL)	/* 5M */
 
 /*
  * CPU does not save ss and sp on stack if execution is already
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index 448127d..35c5159 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -209,6 +209,18 @@ int sanity_check_segment_list(struct kimage *image)
 			return result;
 	}
 
+
+	/*
+	 * Verify that no segment size exceeds the specified limit.
+	 * If a segment size passed from user space is too large, a
+	 * large amount of time is wasted allocating pages, and a
+	 * soft lockup may occur.
+	 */
+	for (i = 0; i < nr_segments; i++) {
+		if (image->segment[i].memsz > KEXEC_MAX_SEGMENT_SIZE)
+			return result;
+	}
+
 	/*
 	 * Verify we have good destination addresses.  Normally
 	 * the caller is responsible for making certain we don't
-- 
1.8.3.1


* [PATCH] kexec: add restriction on the kexec_load
@ 2016-07-20  2:00 ` zhongjiang
  0 siblings, 0 replies; 13+ messages in thread
From: zhongjiang @ 2016-07-20  2:00 UTC (permalink / raw)
  To: ebiederm, yinghai, horms, akpm; +Cc: linux-mm, kexec

From: zhong jiang <zhongjiang@huawei.com>

I hit the following problem when running trinity on my system. The
kernel is version 3.4, but mainline has the same problem. The root
cause is that the segment size is too large: it can cover most of
an area, or even the whole of memory, so a large amount of time may
be wasted obtaining a usable page, and other tasks block until the
test case quits. At the same time, the OOM killer may be triggered.

ck time:20160628120131-243c5
rlock reason:SOFT-WATCHDOG detected! on cpu 5.
CPU 5 Pid: 9485, comm: trinity-c5
RIP: 0010:[<ffffffff8111a4cf>]  [<ffffffff8111a4cf>] next_zones_zonelist+0x3f/0x60
RSP: 0018:ffff88088783bc38  EFLAGS: 00000283
RAX: ffff8808bffd9b08 RBX: ffff88088783bbb8 RCX: ffff88088783bd30
RDX: ffff88088f15a248 RSI: 0000000000000002 RDI: 0000000000000000
RBP: ffff88088783bc38 R08: ffff8808bffd8d80 R09: 0000000412c4d000
R10: 0000000412c4e000 R11: 0000000000000000 R12: 0000000000000002
R13: 0000000000000000 R14: ffff8808bffd9b00 R15: 0000000000000000
FS:  00007f91137ee700(0000) GS:ffff88089f2a0000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000000016161a CR3: 0000000887820000 CR4: 00000000000407e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process trinity-c5 (pid: 9485, threadinfo ffff88088783a000, task ffff88088f159980)
Stack:
 ffff88088783bd88 ffffffff81106eac ffff8808bffd8d80 0000000000000000
 0000000000000000 ffffffff8124c2be 0000000000000001 000000000000001e
 0000000000000000 ffffffff8124c2be 0000000000000002 ffffffff8124c2be
Call Trace:
 [<ffffffff81106eac>] __alloc_pages_nodemask+0x14c/0x8f0
 [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
 [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
 [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
 [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
 [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
 [<ffffffff8113e5ef>] alloc_pages_current+0xaf/0x120
 [<ffffffff810a0da0>] kimage_alloc_pages+0x10/0x60
 [<ffffffff810a15ad>] kimage_alloc_control_pages+0x5d/0x270
 [<ffffffff81027e85>] machine_kexec_prepare+0xe5/0x6c0
 [<ffffffff810a0d52>] ? kimage_free_page_list+0x52/0x70
 [<ffffffff810a1921>] sys_kexec_load+0x141/0x600
 [<ffffffff8115e6b0>] ? vfs_write+0x100/0x180
 [<ffffffff8145fbd9>] system_call_fastpath+0x16/0x1b

This patch adds a check to sanity_check_segment_list() to restrict
the segment size.

Signed-off-by: zhong jiang <zhongjiang@huawei.com>
---
 arch/x86/include/asm/kexec.h |  1 +
 kernel/kexec_core.c          | 12 ++++++++++++
 2 files changed, 13 insertions(+)

diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index d2434c1..b31a723 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -67,6 +67,7 @@ struct kimage;
 /* Memory to backup during crash kdump */
 #define KEXEC_BACKUP_SRC_START	(0UL)
 #define KEXEC_BACKUP_SRC_END	(640 * 1024UL)	/* 640K */
+#define KEXEC_MAX_SEGMENT_SIZE	(5 * 1024 * 1024UL)	/* 5M */
 
 /*
  * CPU does not save ss and sp on stack if execution is already
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index 448127d..35c5159 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -209,6 +209,18 @@ int sanity_check_segment_list(struct kimage *image)
 			return result;
 	}
 
+
+	/*
+	 * Verify that no segment size exceeds the specified limit.
+	 * If a segment size passed from user space is too large, a
+	 * large amount of time is wasted allocating pages, and a
+	 * soft lockup may occur.
+	 */
+	for (i = 0; i < nr_segments; i++) {
+		if (image->segment[i].memsz > KEXEC_MAX_SEGMENT_SIZE)
+			return result;
+	}
+
 	/*
 	 * Verify we have good destination addresses.  Normally
 	 * the caller is responsible for making certain we don't
-- 
1.8.3.1



* Re: [PATCH] kexec: add restriction on the kexec_load
  2016-07-20  2:00 ` zhongjiang
@ 2016-07-20  2:07   ` Eric W. Biederman
  -1 siblings, 0 replies; 13+ messages in thread
From: Eric W. Biederman @ 2016-07-20  2:07 UTC (permalink / raw)
  To: zhongjiang; +Cc: yinghai, horms, akpm, kexec, linux-mm

zhongjiang <zhongjiang@huawei.com> writes:

> From: zhong jiang <zhongjiang@huawei.com>
>
> I hit the following question when run trinity in my system. The
> kernel is 3.4 version. but the mainline have same question to be
> solved. The root cause is the segment size is too large, it can
> expand the most of the area or the whole memory, therefore, it
> may waste an amount of time to abtain a useable page. and other
> cases will block until the test case quit. at the some time,
> OOM will come up.

5MiB is way too small.  I have seen vmlinux images not to mention
ramdisks that get larger than that.  Depending on the system
1GiB might not be an unreasonable ramdisk size.  AKA run an entire live
system out of a ramfs.  It works well if you have enough memory.

I think there is a practical limit at about 50% of memory (because we
need two copies in memory, the source and the destination pages), but
anything else is pretty much reasonable and should have a fair chance of
working.

A limit that reflected the reality above would be interesting.
Anything else will likely cause someone trouble in the future.

Eric
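
A minimal sketch of the kind of limit described above, placed in
sanity_check_segment_list() and assuming only the global totalram_pages
page counter; this is illustrative, not the patch as submitted:

	unsigned long total_pages = 0;

	/*
	 * Cap the combined segment size at roughly half of system RAM:
	 * kexec needs both a source and a destination copy of every
	 * page, so anything larger cannot be loaded anyway.
	 */
	for (i = 0; i < nr_segments; i++)
		total_pages += (image->segment[i].memsz + PAGE_SIZE - 1) >> PAGE_SHIFT;

	if (total_pages > totalram_pages / 2)
		return result;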

> ck time:20160628120131-243c5
> rlock reason:SOFT-WATCHDOG detected! on cpu 5.
> CPU 5 Pid: 9485, comm: trinity-c5
> RIP: 0010:[<ffffffff8111a4cf>]  [<ffffffff8111a4cf>] next_zones_zonelist+0x3f/0x60
> RSP: 0018:ffff88088783bc38  EFLAGS: 00000283
> RAX: ffff8808bffd9b08 RBX: ffff88088783bbb8 RCX: ffff88088783bd30
> RDX: ffff88088f15a248 RSI: 0000000000000002 RDI: 0000000000000000
> RBP: ffff88088783bc38 R08: ffff8808bffd8d80 R09: 0000000412c4d000
> R10: 0000000412c4e000 R11: 0000000000000000 R12: 0000000000000002
> R13: 0000000000000000 R14: ffff8808bffd9b00 R15: 0000000000000000
> FS:  00007f91137ee700(0000) GS:ffff88089f2a0000(0000) knlGS:0000000000000000
> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 000000000016161a CR3: 0000000887820000 CR4: 00000000000407e0
> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> Process trinity-c5 (pid: 9485, threadinfo ffff88088783a000, task ffff88088f159980)
> Stack:
>  ffff88088783bd88 ffffffff81106eac ffff8808bffd8d80 0000000000000000
>  0000000000000000 ffffffff8124c2be 0000000000000001 000000000000001e
>  0000000000000000 ffffffff8124c2be 0000000000000002 ffffffff8124c2be
> Call Trace:
>  [<ffffffff81106eac>] __alloc_pages_nodemask+0x14c/0x8f0
>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>  [<ffffffff8113e5ef>] alloc_pages_current+0xaf/0x120
>  [<ffffffff810a0da0>] kimage_alloc_pages+0x10/0x60
>  [<ffffffff810a15ad>] kimage_alloc_control_pages+0x5d/0x270
>  [<ffffffff81027e85>] machine_kexec_prepare+0xe5/0x6c0
>  [<ffffffff810a0d52>] ? kimage_free_page_list+0x52/0x70
>  [<ffffffff810a1921>] sys_kexec_load+0x141/0x600
>  [<ffffffff8115e6b0>] ? vfs_write+0x100/0x180
>  [<ffffffff8145fbd9>] system_call_fastpath+0x16/0x1b
>
> The patch just add condition on sanity_check_segment_list to
> restriction the segment size.
>
> Signed-off-by: zhong jiang <zhongjiang@huawei.com>
> ---
>  arch/x86/include/asm/kexec.h |  1 +
>  kernel/kexec_core.c          | 12 ++++++++++++
>  2 files changed, 13 insertions(+)
>
> diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
> index d2434c1..b31a723 100644
> --- a/arch/x86/include/asm/kexec.h
> +++ b/arch/x86/include/asm/kexec.h
> @@ -67,6 +67,7 @@ struct kimage;
>  /* Memory to backup during crash kdump */
>  #define KEXEC_BACKUP_SRC_START	(0UL)
>  #define KEXEC_BACKUP_SRC_END	(640 * 1024UL)	/* 640K */
> +#define KEXEC_MAX_SEGMENT_SIZE	(5 * 1024 * 1024UL)	/* 5M */
>  
>  /*
>   * CPU does not save ss and sp on stack if execution is already
> diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
> index 448127d..35c5159 100644
> --- a/kernel/kexec_core.c
> +++ b/kernel/kexec_core.c
> @@ -209,6 +209,18 @@ int sanity_check_segment_list(struct kimage *image)
>  			return result;
>  	}
>  
> +
> +	/* Verity all segment size donnot exceed the specified size.
> + 	 * if segment size from user space is too large,  a large 
> + 	 * amount of time will be wasted when allocating page. so,
> + 	 * softlockup may be come up.
> + 	 */
> +	for (i = 0; i< nr_segments; i++) {
> +		if (image->segment[i].memsz > KEXEC_MAX_SEGMENT_SIZE)
> +			return result;
> +	}
> +
> +
>  	/*
>  	 * Verify we have good destination addresses.  Normally
>  	 * the caller is responsible for making certain we don't


* Re: [PATCH] kexec: add restriction on the kexec_load
@ 2016-07-20  2:07   ` Eric W. Biederman
  0 siblings, 0 replies; 13+ messages in thread
From: Eric W. Biederman @ 2016-07-20  2:07 UTC (permalink / raw)
  To: zhongjiang; +Cc: kexec, akpm, horms, yinghai, linux-mm

zhongjiang <zhongjiang@huawei.com> writes:

> From: zhong jiang <zhongjiang@huawei.com>
>
> I hit the following question when run trinity in my system. The
> kernel is 3.4 version. but the mainline have same question to be
> solved. The root cause is the segment size is too large, it can
> expand the most of the area or the whole memory, therefore, it
> may waste an amount of time to abtain a useable page. and other
> cases will block until the test case quit. at the some time,
> OOM will come up.

5MiB is way too small.  I have seen vmlinux images not to mention
ramdisks that get larger than that.  Depending on the system
1GiB might not be an unreasonable ramdisk size.  AKA run an entire live
system out of a ramfs.  It works well if you have enough memory.

I think there is a practical limit at about 50% of memory (because we
need two copies in memory, the source and the destination pages), but
anything else is pretty much reasonable and should have a fair chance of
working.

A limit that reflected the reality above would be interesting.
Anything else will likely cause someone trouble in the future.

Eric

> ck time:20160628120131-243c5
> rlock reason:SOFT-WATCHDOG detected! on cpu 5.
> CPU 5 Pid: 9485, comm: trinity-c5
> RIP: 0010:[<ffffffff8111a4cf>]  [<ffffffff8111a4cf>] next_zones_zonelist+0x3f/0x60
> RSP: 0018:ffff88088783bc38  EFLAGS: 00000283
> RAX: ffff8808bffd9b08 RBX: ffff88088783bbb8 RCX: ffff88088783bd30
> RDX: ffff88088f15a248 RSI: 0000000000000002 RDI: 0000000000000000
> RBP: ffff88088783bc38 R08: ffff8808bffd8d80 R09: 0000000412c4d000
> R10: 0000000412c4e000 R11: 0000000000000000 R12: 0000000000000002
> R13: 0000000000000000 R14: ffff8808bffd9b00 R15: 0000000000000000
> FS:  00007f91137ee700(0000) GS:ffff88089f2a0000(0000) knlGS:0000000000000000
> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 000000000016161a CR3: 0000000887820000 CR4: 00000000000407e0
> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> Process trinity-c5 (pid: 9485, threadinfo ffff88088783a000, task ffff88088f159980)
> Stack:
>  ffff88088783bd88 ffffffff81106eac ffff8808bffd8d80 0000000000000000
>  0000000000000000 ffffffff8124c2be 0000000000000001 000000000000001e
>  0000000000000000 ffffffff8124c2be 0000000000000002 ffffffff8124c2be
> Call Trace:
>  [<ffffffff81106eac>] __alloc_pages_nodemask+0x14c/0x8f0
>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>  [<ffffffff8113e5ef>] alloc_pages_current+0xaf/0x120
>  [<ffffffff810a0da0>] kimage_alloc_pages+0x10/0x60
>  [<ffffffff810a15ad>] kimage_alloc_control_pages+0x5d/0x270
>  [<ffffffff81027e85>] machine_kexec_prepare+0xe5/0x6c0
>  [<ffffffff810a0d52>] ? kimage_free_page_list+0x52/0x70
>  [<ffffffff810a1921>] sys_kexec_load+0x141/0x600
>  [<ffffffff8115e6b0>] ? vfs_write+0x100/0x180
>  [<ffffffff8145fbd9>] system_call_fastpath+0x16/0x1b
>
> The patch just add condition on sanity_check_segment_list to
> restriction the segment size.
>
> Signed-off-by: zhong jiang <zhongjiang@huawei.com>
> ---
>  arch/x86/include/asm/kexec.h |  1 +
>  kernel/kexec_core.c          | 12 ++++++++++++
>  2 files changed, 13 insertions(+)
>
> diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
> index d2434c1..b31a723 100644
> --- a/arch/x86/include/asm/kexec.h
> +++ b/arch/x86/include/asm/kexec.h
> @@ -67,6 +67,7 @@ struct kimage;
>  /* Memory to backup during crash kdump */
>  #define KEXEC_BACKUP_SRC_START	(0UL)
>  #define KEXEC_BACKUP_SRC_END	(640 * 1024UL)	/* 640K */
> +#define KEXEC_MAX_SEGMENT_SIZE	(5 * 1024 * 1024UL)	/* 5M */
>  
>  /*
>   * CPU does not save ss and sp on stack if execution is already
> diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
> index 448127d..35c5159 100644
> --- a/kernel/kexec_core.c
> +++ b/kernel/kexec_core.c
> @@ -209,6 +209,18 @@ int sanity_check_segment_list(struct kimage *image)
>  			return result;
>  	}
>  
> +
> +	/* Verity all segment size donnot exceed the specified size.
> + 	 * if segment size from user space is too large,  a large 
> + 	 * amount of time will be wasted when allocating page. so,
> + 	 * softlockup may be come up.
> + 	 */
> +	for (i = 0; i< nr_segments; i++) {
> +		if (image->segment[i].memsz > KEXEC_MAX_SEGMENT_SIZE)
> +			return result;
> +	}
> +
> +
>  	/*
>  	 * Verify we have good destination addresses.  Normally
>  	 * the caller is responsible for making certain we don't


* Re: [PATCH] kexec: add restriction on the kexec_load
  2016-07-20  2:07   ` Eric W. Biederman
@ 2016-07-20  3:08     ` zhong jiang
  -1 siblings, 0 replies; 13+ messages in thread
From: zhong jiang @ 2016-07-20  3:08 UTC (permalink / raw)
  To: Eric W. Biederman; +Cc: yinghai, horms, akpm, kexec, linux-mm

On 2016/7/20 10:07, Eric W. Biederman wrote:
> zhongjiang <zhongjiang@huawei.com> writes:
>
>> From: zhong jiang <zhongjiang@huawei.com>
>>
>> I hit the following question when run trinity in my system. The
>> kernel is 3.4 version. but the mainline have same question to be
>> solved. The root cause is the segment size is too large, it can
>> expand the most of the area or the whole memory, therefore, it
>> may waste an amount of time to abtain a useable page. and other
>> cases will block until the test case quit. at the some time,
>> OOM will come up.
> 5MiB is way too small.  I have seen vmlinux images not to mention
> ramdisks that get larger than that.  Depending on the system
> 1GiB might not be an unreasonable ramdisk size.  AKA run an entire live
> system out of a ramfs.  It works well if you have enough memory.
>
> I think there is a practical limit at about 50% of memory (because we
> need two copies in memory the source and the destination pages), but
> anything else is pretty much reasonable and should have a fair chance of
> working.
>
> A limit that reflected that reality above would be interesting.
> Anything else will likely cause someone trouble in the futrue.
>
> Eric  
   OK, limit the total segment size to 50% of memory.  I agree with that.
   Can you accept the change when I resend it?
>> ck time:20160628120131-243c5
>> rlock reason:SOFT-WATCHDOG detected! on cpu 5.
>> CPU 5 Pid: 9485, comm: trinity-c5
>> RIP: 0010:[<ffffffff8111a4cf>]  [<ffffffff8111a4cf>] next_zones_zonelist+0x3f/0x60
>> RSP: 0018:ffff88088783bc38  EFLAGS: 00000283
>> RAX: ffff8808bffd9b08 RBX: ffff88088783bbb8 RCX: ffff88088783bd30
>> RDX: ffff88088f15a248 RSI: 0000000000000002 RDI: 0000000000000000
>> RBP: ffff88088783bc38 R08: ffff8808bffd8d80 R09: 0000000412c4d000
>> R10: 0000000412c4e000 R11: 0000000000000000 R12: 0000000000000002
>> R13: 0000000000000000 R14: ffff8808bffd9b00 R15: 0000000000000000
>> FS:  00007f91137ee700(0000) GS:ffff88089f2a0000(0000) knlGS:0000000000000000
>> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>> CR2: 000000000016161a CR3: 0000000887820000 CR4: 00000000000407e0
>> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>> DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
>> Process trinity-c5 (pid: 9485, threadinfo ffff88088783a000, task ffff88088f159980)
>> Stack:
>>  ffff88088783bd88 ffffffff81106eac ffff8808bffd8d80 0000000000000000
>>  0000000000000000 ffffffff8124c2be 0000000000000001 000000000000001e
>>  0000000000000000 ffffffff8124c2be 0000000000000002 ffffffff8124c2be
>> Call Trace:
>>  [<ffffffff81106eac>] __alloc_pages_nodemask+0x14c/0x8f0
>>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>>  [<ffffffff8113e5ef>] alloc_pages_current+0xaf/0x120
>>  [<ffffffff810a0da0>] kimage_alloc_pages+0x10/0x60
>>  [<ffffffff810a15ad>] kimage_alloc_control_pages+0x5d/0x270
>>  [<ffffffff81027e85>] machine_kexec_prepare+0xe5/0x6c0
>>  [<ffffffff810a0d52>] ? kimage_free_page_list+0x52/0x70
>>  [<ffffffff810a1921>] sys_kexec_load+0x141/0x600
>>  [<ffffffff8115e6b0>] ? vfs_write+0x100/0x180
>>  [<ffffffff8145fbd9>] system_call_fastpath+0x16/0x1b
>>
>> The patch just add condition on sanity_check_segment_list to
>> restriction the segment size.
>>
>> Signed-off-by: zhong jiang <zhongjiang@huawei.com>
>> ---
>>  arch/x86/include/asm/kexec.h |  1 +
>>  kernel/kexec_core.c          | 12 ++++++++++++
>>  2 files changed, 13 insertions(+)
>>
>> diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
>> index d2434c1..b31a723 100644
>> --- a/arch/x86/include/asm/kexec.h
>> +++ b/arch/x86/include/asm/kexec.h
>> @@ -67,6 +67,7 @@ struct kimage;
>>  /* Memory to backup during crash kdump */
>>  #define KEXEC_BACKUP_SRC_START	(0UL)
>>  #define KEXEC_BACKUP_SRC_END	(640 * 1024UL)	/* 640K */
>> +#define KEXEC_MAX_SEGMENT_SIZE	(5 * 1024 * 1024UL)	/* 5M */
>>  
>>  /*
>>   * CPU does not save ss and sp on stack if execution is already
>> diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
>> index 448127d..35c5159 100644
>> --- a/kernel/kexec_core.c
>> +++ b/kernel/kexec_core.c
>> @@ -209,6 +209,18 @@ int sanity_check_segment_list(struct kimage *image)
>>  			return result;
>>  	}
>>  
>> +
>> +	/* Verity all segment size donnot exceed the specified size.
>> + 	 * if segment size from user space is too large,  a large 
>> + 	 * amount of time will be wasted when allocating page. so,
>> + 	 * softlockup may be come up.
>> + 	 */
>> +	for (i = 0; i< nr_segments; i++) {
>> +		if (image->segment[i].memsz > KEXEC_MAX_SEGMENT_SIZE)
>> +			return result;
>> +	}
>> +
>> +
>>  	/*
>>  	 * Verify we have good destination addresses.  Normally
>>  	 * the caller is responsible for making certain we don't
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
>
> .
>



* Re: [PATCH] kexec: add restriction on the kexec_load
@ 2016-07-20  3:08     ` zhong jiang
  0 siblings, 0 replies; 13+ messages in thread
From: zhong jiang @ 2016-07-20  3:08 UTC (permalink / raw)
  To: Eric W. Biederman; +Cc: kexec, akpm, horms, yinghai, linux-mm

On 2016/7/20 10:07, Eric W. Biederman wrote:
> zhongjiang <zhongjiang@huawei.com> writes:
>
>> From: zhong jiang <zhongjiang@huawei.com>
>>
>> I hit the following question when run trinity in my system. The
>> kernel is 3.4 version. but the mainline have same question to be
>> solved. The root cause is the segment size is too large, it can
>> expand the most of the area or the whole memory, therefore, it
>> may waste an amount of time to abtain a useable page. and other
>> cases will block until the test case quit. at the some time,
>> OOM will come up.
> 5MiB is way too small.  I have seen vmlinux images not to mention
> ramdisks that get larger than that.  Depending on the system
> 1GiB might not be an unreasonable ramdisk size.  AKA run an entire live
> system out of a ramfs.  It works well if you have enough memory.
>
> I think there is a practical limit at about 50% of memory (because we
> need two copies in memory the source and the destination pages), but
> anything else is pretty much reasonable and should have a fair chance of
> working.
>
> A limit that reflected that reality above would be interesting.
> Anything else will likely cause someone trouble in the futrue.
>
> Eric  
   OK, limit the total segment size to 50% of memory.  I agree with that.
   Can you accept the change when I resend it?
>> ck time:20160628120131-243c5
>> rlock reason:SOFT-WATCHDOG detected! on cpu 5.
>> CPU 5 Pid: 9485, comm: trinity-c5
>> RIP: 0010:[<ffffffff8111a4cf>]  [<ffffffff8111a4cf>] next_zones_zonelist+0x3f/0x60
>> RSP: 0018:ffff88088783bc38  EFLAGS: 00000283
>> RAX: ffff8808bffd9b08 RBX: ffff88088783bbb8 RCX: ffff88088783bd30
>> RDX: ffff88088f15a248 RSI: 0000000000000002 RDI: 0000000000000000
>> RBP: ffff88088783bc38 R08: ffff8808bffd8d80 R09: 0000000412c4d000
>> R10: 0000000412c4e000 R11: 0000000000000000 R12: 0000000000000002
>> R13: 0000000000000000 R14: ffff8808bffd9b00 R15: 0000000000000000
>> FS:  00007f91137ee700(0000) GS:ffff88089f2a0000(0000) knlGS:0000000000000000
>> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>> CR2: 000000000016161a CR3: 0000000887820000 CR4: 00000000000407e0
>> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>> DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
>> Process trinity-c5 (pid: 9485, threadinfo ffff88088783a000, task ffff88088f159980)
>> Stack:
>>  ffff88088783bd88 ffffffff81106eac ffff8808bffd8d80 0000000000000000
>>  0000000000000000 ffffffff8124c2be 0000000000000001 000000000000001e
>>  0000000000000000 ffffffff8124c2be 0000000000000002 ffffffff8124c2be
>> Call Trace:
>>  [<ffffffff81106eac>] __alloc_pages_nodemask+0x14c/0x8f0
>>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>>  [<ffffffff8113e5ef>] alloc_pages_current+0xaf/0x120
>>  [<ffffffff810a0da0>] kimage_alloc_pages+0x10/0x60
>>  [<ffffffff810a15ad>] kimage_alloc_control_pages+0x5d/0x270
>>  [<ffffffff81027e85>] machine_kexec_prepare+0xe5/0x6c0
>>  [<ffffffff810a0d52>] ? kimage_free_page_list+0x52/0x70
>>  [<ffffffff810a1921>] sys_kexec_load+0x141/0x600
>>  [<ffffffff8115e6b0>] ? vfs_write+0x100/0x180
>>  [<ffffffff8145fbd9>] system_call_fastpath+0x16/0x1b
>>
>> The patch just add condition on sanity_check_segment_list to
>> restriction the segment size.
>>
>> Signed-off-by: zhong jiang <zhongjiang@huawei.com>
>> ---
>>  arch/x86/include/asm/kexec.h |  1 +
>>  kernel/kexec_core.c          | 12 ++++++++++++
>>  2 files changed, 13 insertions(+)
>>
>> diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
>> index d2434c1..b31a723 100644
>> --- a/arch/x86/include/asm/kexec.h
>> +++ b/arch/x86/include/asm/kexec.h
>> @@ -67,6 +67,7 @@ struct kimage;
>>  /* Memory to backup during crash kdump */
>>  #define KEXEC_BACKUP_SRC_START	(0UL)
>>  #define KEXEC_BACKUP_SRC_END	(640 * 1024UL)	/* 640K */
>> +#define KEXEC_MAX_SEGMENT_SIZE	(5 * 1024 * 1024UL)	/* 5M */
>>  
>>  /*
>>   * CPU does not save ss and sp on stack if execution is already
>> diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
>> index 448127d..35c5159 100644
>> --- a/kernel/kexec_core.c
>> +++ b/kernel/kexec_core.c
>> @@ -209,6 +209,18 @@ int sanity_check_segment_list(struct kimage *image)
>>  			return result;
>>  	}
>>  
>> +
>> +	/* Verity all segment size donnot exceed the specified size.
>> + 	 * if segment size from user space is too large,  a large 
>> + 	 * amount of time will be wasted when allocating page. so,
>> + 	 * softlockup may be come up.
>> + 	 */
>> +	for (i = 0; i< nr_segments; i++) {
>> +		if (image->segment[i].memsz > KEXEC_MAX_SEGMENT_SIZE)
>> +			return result;
>> +	}
>> +
>> +
>>  	/*
>>  	 * Verify we have good destination addresses.  Normally
>>  	 * the caller is responsible for making certain we don't
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
>
> .
>




* Re: [PATCH] kexec: add restriction on the kexec_load
  2016-07-20  2:07   ` Eric W. Biederman
@ 2016-07-20  3:38     ` zhong jiang
  -1 siblings, 0 replies; 13+ messages in thread
From: zhong jiang @ 2016-07-20  3:38 UTC (permalink / raw)
  To: Eric W. Biederman; +Cc: yinghai, horms, akpm, kexec, linux-mm

On 2016/7/20 10:07, Eric W. Biederman wrote:
> zhongjiang <zhongjiang@huawei.com> writes:
>
>> From: zhong jiang <zhongjiang@huawei.com>
>>
>> I hit the following question when run trinity in my system. The
>> kernel is 3.4 version. but the mainline have same question to be
>> solved. The root cause is the segment size is too large, it can
>> expand the most of the area or the whole memory, therefore, it
>> may waste an amount of time to abtain a useable page. and other
>> cases will block until the test case quit. at the some time,
>> OOM will come up.
> 5MiB is way too small.  I have seen vmlinux images not to mention
> ramdisks that get larger than that.  Depending on the system
> 1GiB might not be an unreasonable ramdisk size.  AKA run an entire live
> system out of a ramfs.  It works well if you have enough memory.
>
> I think there is a practical limit at about 50% of memory (because we
> need two copies in memory the source and the destination pages), but
> anything else is pretty much reasonable and should have a fair chance of
> working.
>
> A limit that reflected that reality above would be interesting.
> Anything else will likely cause someone trouble in the futrue.
>
> Eric
  In addition, I tested with the max segment size set to 1G on a system with 32G of memory,
  and the soft lockup still came up intermittently when running trinity.
>> ck time:20160628120131-243c5
>> rlock reason:SOFT-WATCHDOG detected! on cpu 5.
>> CPU 5 Pid: 9485, comm: trinity-c5
>> RIP: 0010:[<ffffffff8111a4cf>]  [<ffffffff8111a4cf>] next_zones_zonelist+0x3f/0x60
>> RSP: 0018:ffff88088783bc38  EFLAGS: 00000283
>> RAX: ffff8808bffd9b08 RBX: ffff88088783bbb8 RCX: ffff88088783bd30
>> RDX: ffff88088f15a248 RSI: 0000000000000002 RDI: 0000000000000000
>> RBP: ffff88088783bc38 R08: ffff8808bffd8d80 R09: 0000000412c4d000
>> R10: 0000000412c4e000 R11: 0000000000000000 R12: 0000000000000002
>> R13: 0000000000000000 R14: ffff8808bffd9b00 R15: 0000000000000000
>> FS:  00007f91137ee700(0000) GS:ffff88089f2a0000(0000) knlGS:0000000000000000
>> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>> CR2: 000000000016161a CR3: 0000000887820000 CR4: 00000000000407e0
>> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>> DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
>> Process trinity-c5 (pid: 9485, threadinfo ffff88088783a000, task ffff88088f159980)
>> Stack:
>>  ffff88088783bd88 ffffffff81106eac ffff8808bffd8d80 0000000000000000
>>  0000000000000000 ffffffff8124c2be 0000000000000001 000000000000001e
>>  0000000000000000 ffffffff8124c2be 0000000000000002 ffffffff8124c2be
>> Call Trace:
>>  [<ffffffff81106eac>] __alloc_pages_nodemask+0x14c/0x8f0
>>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>>  [<ffffffff8113e5ef>] alloc_pages_current+0xaf/0x120
>>  [<ffffffff810a0da0>] kimage_alloc_pages+0x10/0x60
>>  [<ffffffff810a15ad>] kimage_alloc_control_pages+0x5d/0x270
>>  [<ffffffff81027e85>] machine_kexec_prepare+0xe5/0x6c0
>>  [<ffffffff810a0d52>] ? kimage_free_page_list+0x52/0x70
>>  [<ffffffff810a1921>] sys_kexec_load+0x141/0x600
>>  [<ffffffff8115e6b0>] ? vfs_write+0x100/0x180
>>  [<ffffffff8145fbd9>] system_call_fastpath+0x16/0x1b
>>
>> The patch just add condition on sanity_check_segment_list to
>> restriction the segment size.
>>
>> Signed-off-by: zhong jiang <zhongjiang@huawei.com>
>> ---
>>  arch/x86/include/asm/kexec.h |  1 +
>>  kernel/kexec_core.c          | 12 ++++++++++++
>>  2 files changed, 13 insertions(+)
>>
>> diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
>> index d2434c1..b31a723 100644
>> --- a/arch/x86/include/asm/kexec.h
>> +++ b/arch/x86/include/asm/kexec.h
>> @@ -67,6 +67,7 @@ struct kimage;
>>  /* Memory to backup during crash kdump */
>>  #define KEXEC_BACKUP_SRC_START	(0UL)
>>  #define KEXEC_BACKUP_SRC_END	(640 * 1024UL)	/* 640K */
>> +#define KEXEC_MAX_SEGMENT_SIZE	(5 * 1024 * 1024UL)	/* 5M */
>>  
>>  /*
>>   * CPU does not save ss and sp on stack if execution is already
>> diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
>> index 448127d..35c5159 100644
>> --- a/kernel/kexec_core.c
>> +++ b/kernel/kexec_core.c
>> @@ -209,6 +209,18 @@ int sanity_check_segment_list(struct kimage *image)
>>  			return result;
>>  	}
>>  
>> +
>> +	/* Verity all segment size donnot exceed the specified size.
>> + 	 * if segment size from user space is too large,  a large 
>> + 	 * amount of time will be wasted when allocating page. so,
>> + 	 * softlockup may be come up.
>> + 	 */
>> +	for (i = 0; i< nr_segments; i++) {
>> +		if (image->segment[i].memsz > KEXEC_MAX_SEGMENT_SIZE)
>> +			return result;
>> +	}
>> +
>> +
>>  	/*
>>  	 * Verify we have good destination addresses.  Normally
>>  	 * the caller is responsible for making certain we don't
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
>
> .
>



* Re: [PATCH] kexec: add restriction on the kexec_load
@ 2016-07-20  3:38     ` zhong jiang
  0 siblings, 0 replies; 13+ messages in thread
From: zhong jiang @ 2016-07-20  3:38 UTC (permalink / raw)
  To: Eric W. Biederman; +Cc: kexec, akpm, horms, yinghai, linux-mm

On 2016/7/20 10:07, Eric W. Biederman wrote:
> zhongjiang <zhongjiang@huawei.com> writes:
>
>> From: zhong jiang <zhongjiang@huawei.com>
>>
>> I hit the following question when run trinity in my system. The
>> kernel is 3.4 version. but the mainline have same question to be
>> solved. The root cause is the segment size is too large, it can
>> expand the most of the area or the whole memory, therefore, it
>> may waste an amount of time to abtain a useable page. and other
>> cases will block until the test case quit. at the some time,
>> OOM will come up.
> 5MiB is way too small.  I have seen vmlinux images not to mention
> ramdisks that get larger than that.  Depending on the system
> 1GiB might not be an unreasonable ramdisk size.  AKA run an entire live
> system out of a ramfs.  It works well if you have enough memory.
>
> I think there is a practical limit at about 50% of memory (because we
> need two copies in memory the source and the destination pages), but
> anything else is pretty much reasonable and should have a fair chance of
> working.
>
> A limit that reflected that reality above would be interesting.
> Anything else will likely cause someone trouble in the futrue.
>
> Eric
  In addition, I tested with the max segment size set to 1G on a system with 32G of memory,
  and the soft lockup still came up intermittently when running trinity.
>> ck time:20160628120131-243c5
>> rlock reason:SOFT-WATCHDOG detected! on cpu 5.
>> CPU 5 Pid: 9485, comm: trinity-c5
>> RIP: 0010:[<ffffffff8111a4cf>]  [<ffffffff8111a4cf>] next_zones_zonelist+0x3f/0x60
>> RSP: 0018:ffff88088783bc38  EFLAGS: 00000283
>> RAX: ffff8808bffd9b08 RBX: ffff88088783bbb8 RCX: ffff88088783bd30
>> RDX: ffff88088f15a248 RSI: 0000000000000002 RDI: 0000000000000000
>> RBP: ffff88088783bc38 R08: ffff8808bffd8d80 R09: 0000000412c4d000
>> R10: 0000000412c4e000 R11: 0000000000000000 R12: 0000000000000002
>> R13: 0000000000000000 R14: ffff8808bffd9b00 R15: 0000000000000000
>> FS:  00007f91137ee700(0000) GS:ffff88089f2a0000(0000) knlGS:0000000000000000
>> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>> CR2: 000000000016161a CR3: 0000000887820000 CR4: 00000000000407e0
>> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>> DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
>> Process trinity-c5 (pid: 9485, threadinfo ffff88088783a000, task ffff88088f159980)
>> Stack:
>>  ffff88088783bd88 ffffffff81106eac ffff8808bffd8d80 0000000000000000
>>  0000000000000000 ffffffff8124c2be 0000000000000001 000000000000001e
>>  0000000000000000 ffffffff8124c2be 0000000000000002 ffffffff8124c2be
>> Call Trace:
>>  [<ffffffff81106eac>] __alloc_pages_nodemask+0x14c/0x8f0
>>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>>  [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
>>  [<ffffffff8113e5ef>] alloc_pages_current+0xaf/0x120
>>  [<ffffffff810a0da0>] kimage_alloc_pages+0x10/0x60
>>  [<ffffffff810a15ad>] kimage_alloc_control_pages+0x5d/0x270
>>  [<ffffffff81027e85>] machine_kexec_prepare+0xe5/0x6c0
>>  [<ffffffff810a0d52>] ? kimage_free_page_list+0x52/0x70
>>  [<ffffffff810a1921>] sys_kexec_load+0x141/0x600
>>  [<ffffffff8115e6b0>] ? vfs_write+0x100/0x180
>>  [<ffffffff8145fbd9>] system_call_fastpath+0x16/0x1b
>>
>> The patch just add condition on sanity_check_segment_list to
>> restriction the segment size.
>>
>> Signed-off-by: zhong jiang <zhongjiang@huawei.com>
>> ---
>>  arch/x86/include/asm/kexec.h |  1 +
>>  kernel/kexec_core.c          | 12 ++++++++++++
>>  2 files changed, 13 insertions(+)
>>
>> diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
>> index d2434c1..b31a723 100644
>> --- a/arch/x86/include/asm/kexec.h
>> +++ b/arch/x86/include/asm/kexec.h
>> @@ -67,6 +67,7 @@ struct kimage;
>>  /* Memory to backup during crash kdump */
>>  #define KEXEC_BACKUP_SRC_START	(0UL)
>>  #define KEXEC_BACKUP_SRC_END	(640 * 1024UL)	/* 640K */
>> +#define KEXEC_MAX_SEGMENT_SIZE	(5 * 1024 * 1024UL)	/* 5M */
>>  
>>  /*
>>   * CPU does not save ss and sp on stack if execution is already
>> diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
>> index 448127d..35c5159 100644
>> --- a/kernel/kexec_core.c
>> +++ b/kernel/kexec_core.c
>> @@ -209,6 +209,18 @@ int sanity_check_segment_list(struct kimage *image)
>>  			return result;
>>  	}
>>  
>> +
>> +	/* Verity all segment size donnot exceed the specified size.
>> + 	 * if segment size from user space is too large,  a large 
>> + 	 * amount of time will be wasted when allocating page. so,
>> + 	 * softlockup may be come up.
>> + 	 */
>> +	for (i = 0; i< nr_segments; i++) {
>> +		if (image->segment[i].memsz > KEXEC_MAX_SEGMENT_SIZE)
>> +			return result;
>> +	}
>> +
>> +
>>  	/*
>>  	 * Verify we have good destination addresses.  Normally
>>  	 * the caller is responsible for making certain we don't
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
>
> .
>




* Re: [PATCH] kexec: add restriction on the kexec_load
  2016-07-20  2:07   ` Eric W. Biederman
@ 2016-07-21  8:10     ` Dave Young
  -1 siblings, 0 replies; 13+ messages in thread
From: Dave Young @ 2016-07-21  8:10 UTC (permalink / raw)
  To: Eric W. Biederman; +Cc: zhongjiang, kexec, akpm, horms, yinghai, linux-mm

On 07/19/16 at 09:07pm, Eric W. Biederman wrote:
> zhongjiang <zhongjiang@huawei.com> writes:
> 
> > From: zhong jiang <zhongjiang@huawei.com>
> >
> > I hit the following question when run trinity in my system. The
> > kernel is 3.4 version. but the mainline have same question to be
> > solved. The root cause is the segment size is too large, it can
> > expand the most of the area or the whole memory, therefore, it
> > may waste an amount of time to abtain a useable page. and other
> > cases will block until the test case quit. at the some time,
> > OOM will come up.
> 
> 5MiB is way too small.  I have seen vmlinux images not to mention
> ramdisks that get larger than that.  Depending on the system
> 1GiB might not be an unreasonable ramdisk size.  AKA run an entire live
> system out of a ramfs.  It works well if you have enough memory.

There was a use case from Michael Holzheu involving a 1.5G ramdisk; see the
kexec-tools commit below:

commit 95741713e790fa6bde7780bbfb772ad88e81a744
Author: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Date:   Fri Oct 30 16:02:04 2015 +0100

    kexec/s390x: use mmap instead of read for slurp_file()

    The slurp_fd() function allocates memory and uses the read() system call.
    This results in double memory consumption for image and initrd:

     1) Memory allocated in user space by the kexec tool
     2) Memory allocated in kernel by the kexec() system call

    The following illustrates the use case that we have on s390x:

     1) Boot a 4 GB Linux system
     2) Copy kernel and 1,5 GB ramdisk from external source into tmpfs (ram)
     3) Use kexec to boot kernel with ramdisk

     Therefore for kexec runtime we need:

     1,5 GB (tmpfs) + 1,5 GB (kexec malloc) + 1,5 GB (kernel memory) = 4,5 GB

    This patch introduces slurp_file_mmap() which for "normal" files uses
    mmap() instead of malloc()/read(). This reduces the runtime memory
    consumption of the kexec tool as follows:

     1,5 GB (tmpfs) + 1,5 GB (kernel memory) = 3 GB

    Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
    Reviewed-by: Dave Young <dyoung@redhat.com>
    Signed-off-by: Simon Horman <horms@verge.net.au>

> 
> I think there is a practical limit at about 50% of memory (because we
> need two copies in memory the source and the destination pages), but
> anything else is pretty much reasonable and should have a fair chance of
> working.
> 
> A limit that reflected that reality above would be interesting.
> Anything else will likely cause someone trouble in the futrue.

Maybe one should test his ramdisk first to ensure it works before
really using it.

Thanks
Dave
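
For reference, the mmap()-based loading that commit describes comes
down to something like the following user-space sketch; the function
name is illustrative and this is not the actual kexec-tools code:

	#include <fcntl.h>
	#include <sys/mman.h>
	#include <sys/stat.h>
	#include <unistd.h>

	/* Map the file instead of malloc()+read(), so the tool's copy of
	 * the image is backed by the page cache rather than by a second
	 * anonymous allocation. */
	static void *map_image(const char *path, size_t *len)
	{
		struct stat st;
		void *buf = NULL;
		int fd = open(path, O_RDONLY);

		if (fd < 0)
			return NULL;
		if (fstat(fd, &st) == 0) {
			*len = st.st_size;
			buf = mmap(NULL, *len, PROT_READ, MAP_PRIVATE, fd, 0);
			if (buf == MAP_FAILED)
				buf = NULL;
		}
		close(fd);
		return buf;
	}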


* Re: [PATCH] kexec: add restriction on the kexec_load
@ 2016-07-21  8:10     ` Dave Young
  0 siblings, 0 replies; 13+ messages in thread
From: Dave Young @ 2016-07-21  8:10 UTC (permalink / raw)
  To: Eric W. Biederman; +Cc: zhongjiang, kexec, linux-mm, horms, akpm, yinghai

On 07/19/16 at 09:07pm, Eric W. Biederman wrote:
> zhongjiang <zhongjiang@huawei.com> writes:
> 
> > From: zhong jiang <zhongjiang@huawei.com>
> >
> > I hit the following question when run trinity in my system. The
> > kernel is 3.4 version. but the mainline have same question to be
> > solved. The root cause is the segment size is too large, it can
> > expand the most of the area or the whole memory, therefore, it
> > may waste an amount of time to abtain a useable page. and other
> > cases will block until the test case quit. at the some time,
> > OOM will come up.
> 
> 5MiB is way too small.  I have seen vmlinux images not to mention
> ramdisks that get larger than that.  Depending on the system
> 1GiB might not be an unreasonable ramdisk size.  AKA run an entire live
> system out of a ramfs.  It works well if you have enough memory.

There was a use case from Michael Holzheu involving a 1.5G ramdisk; see the
kexec-tools commit below:

commit 95741713e790fa6bde7780bbfb772ad88e81a744
Author: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Date:   Fri Oct 30 16:02:04 2015 +0100

    kexec/s390x: use mmap instead of read for slurp_file()

    The slurp_fd() function allocates memory and uses the read() system call.
    This results in double memory consumption for image and initrd:

     1) Memory allocated in user space by the kexec tool
     2) Memory allocated in kernel by the kexec() system call

    The following illustrates the use case that we have on s390x:

     1) Boot a 4 GB Linux system
     2) Copy kernel and 1,5 GB ramdisk from external source into tmpfs (ram)
     3) Use kexec to boot kernel with ramdisk

     Therefore for kexec runtime we need:

     1,5 GB (tmpfs) + 1,5 GB (kexec malloc) + 1,5 GB (kernel memory) = 4,5 GB

    This patch introduces slurp_file_mmap() which for "normal" files uses
    mmap() instead of malloc()/read(). This reduces the runtime memory
    consumption of the kexec tool as follows:

     1,5 GB (tmpfs) + 1,5 GB (kernel memory) = 3 GB

    Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
    Reviewed-by: Dave Young <dyoung@redhat.com>
    Signed-off-by: Simon Horman <horms@verge.net.au>

> 
> I think there is a practical limit at about 50% of memory (because we
> need two copies in memory the source and the destination pages), but
> anything else is pretty much reasonable and should have a fair chance of
> working.
> 
> A limit that reflected that reality above would be interesting.
> Anything else will likely cause someone trouble in the futrue.

Maybe one should test his ramdisk first to ensure it works before
really using it.

Thanks
Dave


* Re: [PATCH] kexec: add restriction on the kexec_load
  2016-07-21  8:10     ` Dave Young
@ 2016-07-22  5:52       ` zhong jiang
  -1 siblings, 0 replies; 13+ messages in thread
From: zhong jiang @ 2016-07-22  5:52 UTC (permalink / raw)
  To: Dave Young; +Cc: Eric W. Biederman, kexec, akpm, horms, yinghai, linux-mm

On 2016/7/21 16:10, Dave Young wrote:
> On 07/19/16 at 09:07pm, Eric W. Biederman wrote:
>> zhongjiang <zhongjiang@huawei.com> writes:
>>
>>> From: zhong jiang <zhongjiang@huawei.com>
>>>
>>> I hit the following question when run trinity in my system. The
>>> kernel is 3.4 version. but the mainline have same question to be
>>> solved. The root cause is the segment size is too large, it can
>>> expand the most of the area or the whole memory, therefore, it
>>> may waste an amount of time to abtain a useable page. and other
>>> cases will block until the test case quit. at the some time,
>>> OOM will come up.
>> 5MiB is way too small.  I have seen vmlinux images not to mention
>> ramdisks that get larger than that.  Depending on the system
>> 1GiB might not be an unreasonable ramdisk size.  AKA run an entire live
>> system out of a ramfs.  It works well if you have enough memory.
> There was a use case from Michael Holzheu about a 1.5G ramdisk, see below
> kexec-tools commit:
>
> commit 95741713e790fa6bde7780bbfb772ad88e81a744
> Author: Michael Holzheu <holzheu@linux.vnet.ibm.com>
> Date:   Fri Oct 30 16:02:04 2015 +0100
>
>     kexec/s390x: use mmap instead of read for slurp_file()
>     
>     The slurp_fd() function allocates memory and uses the read() system
> call.
>     This results in double memory consumption for image and initrd:
>     
>      1) Memory allocated in user space by the kexec tool
>      2) Memory allocated in kernel by the kexec() system call
>     
>     The following illustrates the use case that we have on s390x:
>     
>      1) Boot a 4 GB Linux system
>      2) Copy kernel and 1,5 GB ramdisk from external source into tmpfs
> (ram)
>      3) Use kexec to boot kernel with ramdisk
>     
>      Therefore for kexec runtime we need:
>     
>      1,5 GB (tmpfs) + 1,5 GB (kexec malloc) + 1,5 GB (kernel memory) =
> 4,5 GB
>     
>     This patch introduces slurp_file_mmap() which for "normal" files
> uses
>     mmap() instead of malloc()/read(). This reduces the runtime memory
>     consumption of the kexec tool as follows:
>     
>      1,5 GB (tmpfs) + 1,5 GB (kernel memory) = 3 GB
>     
>     Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
>     Reviewed-by: Dave Young <dyoung@redhat.com>
>     Signed-off-by: Simon Horman <horms@verge.net.au>
>
>> I think there is a practical limit at about 50% of memory (because we
>> need two copies in memory the source and the destination pages), but
>> anything else is pretty much reasonable and should have a fair chance of
>> working.
>>
>> A limit that reflected that reality above would be interesting.
>> Anything else will likely cause someone trouble in the futrue.
> Maybe one should test his ramdisk first to ensure it works first before
> really using it.
>
> Thanks
> Dave
>
> .
>
 Thank you for the reply.  I only test the kexec_load syscall; I don't actually run the kexec image to boot the machine.
 Recently, I hit this problem and fixed it by passing reasonable parameters to the kernel from user space,
 with no functional change.  Is that right?
 Following Eric W. Biederman's advice, I agree with that approach.
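
For context, a trinity-style exercise of the syscall comes down to
handing the kernel a segment list with an oversized memsz, roughly as
in the sketch below; the address and size are made up for illustration
and the call needs CAP_SYS_BOOT:

	#include <linux/kexec.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	int main(void)
	{
		/* One segment whose memsz spans most of RAM; per the trace
		 * above, the kernel can then spend a very long time in
		 * kimage_alloc_control_pages(). */
		struct kexec_segment seg = {
			.buf   = NULL,
			.bufsz = 0,
			.mem   = (void *)0x1000000,
			.memsz = 16UL << 30,	/* e.g. 16 GiB */
		};

		return syscall(__NR_kexec_load, 0x1000000UL, 1UL, &seg,
			       KEXEC_ARCH_DEFAULT);
	}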


* Re: [PATCH] kexec: add restriction on the kexec_load
@ 2016-07-22  5:52       ` zhong jiang
  0 siblings, 0 replies; 13+ messages in thread
From: zhong jiang @ 2016-07-22  5:52 UTC (permalink / raw)
  To: Dave Young; +Cc: kexec, linux-mm, horms, Eric W. Biederman, akpm, yinghai

On 2016/7/21 16:10, Dave Young wrote:
> On 07/19/16 at 09:07pm, Eric W. Biederman wrote:
>> zhongjiang <zhongjiang@huawei.com> writes:
>>
>>> From: zhong jiang <zhongjiang@huawei.com>
>>>
>>> I hit the following question when run trinity in my system. The
>>> kernel is 3.4 version. but the mainline have same question to be
>>> solved. The root cause is the segment size is too large, it can
>>> expand the most of the area or the whole memory, therefore, it
>>> may waste an amount of time to abtain a useable page. and other
>>> cases will block until the test case quit. at the some time,
>>> OOM will come up.
>> 5MiB is way too small.  I have seen vmlinux images not to mention
>> ramdisks that get larger than that.  Depending on the system
>> 1GiB might not be an unreasonable ramdisk size.  AKA run an entire live
>> system out of a ramfs.  It works well if you have enough memory.
> There was a use case from Michael Holzheu about a 1.5G ramdisk, see below
> kexec-tools commit:
>
> commit 95741713e790fa6bde7780bbfb772ad88e81a744
> Author: Michael Holzheu <holzheu@linux.vnet.ibm.com>
> Date:   Fri Oct 30 16:02:04 2015 +0100
>
>     kexec/s390x: use mmap instead of read for slurp_file()
>     
>     The slurp_fd() function allocates memory and uses the read() system
> call.
>     This results in double memory consumption for image and initrd:
>     
>      1) Memory allocated in user space by the kexec tool
>      2) Memory allocated in kernel by the kexec() system call
>     
>     The following illustrates the use case that we have on s390x:
>     
>      1) Boot a 4 GB Linux system
>      2) Copy kernel and 1,5 GB ramdisk from external source into tmpfs
> (ram)
>      3) Use kexec to boot kernel with ramdisk
>     
>      Therefore for kexec runtime we need:
>     
>      1,5 GB (tmpfs) + 1,5 GB (kexec malloc) + 1,5 GB (kernel memory) =
> 4,5 GB
>     
>     This patch introduces slurp_file_mmap() which for "normal" files
> uses
>     mmap() instead of malloc()/read(). This reduces the runtime memory
>     consumption of the kexec tool as follows:
>     
>      1,5 GB (tmpfs) + 1,5 GB (kernel memory) = 3 GB
>     
>     Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
>     Reviewed-by: Dave Young <dyoung@redhat.com>
>     Signed-off-by: Simon Horman <horms@verge.net.au>
>
>> I think there is a practical limit at about 50% of memory (because we
>> need two copies in memory the source and the destination pages), but
>> anything else is pretty much reasonable and should have a fair chance of
>> working.
>>
>> A limit that reflected that reality above would be interesting.
>> Anything else will likely cause someone trouble in the futrue.
> Maybe one should test his ramdisk first to ensure it works first before
> really using it.
>
> Thanks
> Dave
>
> .
>
 Thank you for the reply.  I only test the kexec_load syscall; I don't actually run the kexec image to boot the machine.
 Recently, I hit this problem and fixed it by passing reasonable parameters to the kernel from user space,
 with no functional change.  Is that right?
 Following Eric W. Biederman's advice, I agree with that approach.



* [PATCH] kexec: add restriction on the kexec_load
@ 2016-07-19  4:10 zhongjiang
  0 siblings, 0 replies; 13+ messages in thread
From: zhongjiang @ 2016-07-19  4:10 UTC (permalink / raw)
  To: ebiederm, yinghai, horms, akpm; +Cc: kexec

From: zhong jiang <zhongjiang@huawei.com>

I hit the following problem when running trinity on my system. The
kernel is version 3.4, but mainline has the same problem. The root
cause is that the segment size is too large: it can cover most of
an area, or even the whole of memory, so a large amount of time may
be wasted obtaining a usable page, and other tasks block until the
test case quits. At the same time, the OOM killer may be triggered.

ck time:20160628120131-243c5
rlock reason:SOFT-WATCHDOG detected! on cpu 5.
CPU 5 Pid: 9485, comm: trinity-c5
RIP: 0010:[<ffffffff8111a4cf>]  [<ffffffff8111a4cf>] next_zones_zonelist+0x3f/0x60
RSP: 0018:ffff88088783bc38  EFLAGS: 00000283
RAX: ffff8808bffd9b08 RBX: ffff88088783bbb8 RCX: ffff88088783bd30
RDX: ffff88088f15a248 RSI: 0000000000000002 RDI: 0000000000000000
RBP: ffff88088783bc38 R08: ffff8808bffd8d80 R09: 0000000412c4d000
R10: 0000000412c4e000 R11: 0000000000000000 R12: 0000000000000002
R13: 0000000000000000 R14: ffff8808bffd9b00 R15: 0000000000000000
FS:  00007f91137ee700(0000) GS:ffff88089f2a0000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000000016161a CR3: 0000000887820000 CR4: 00000000000407e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process trinity-c5 (pid: 9485, threadinfo ffff88088783a000, task ffff88088f159980)
Stack:
 ffff88088783bd88 ffffffff81106eac ffff8808bffd8d80 0000000000000000
 0000000000000000 ffffffff8124c2be 0000000000000001 000000000000001e
 0000000000000000 ffffffff8124c2be 0000000000000002 ffffffff8124c2be
Call Trace:
 [<ffffffff81106eac>] __alloc_pages_nodemask+0x14c/0x8f0
 [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
 [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
 [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
 [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
 [<ffffffff8124c2be>] ? trace_hardirqs_on_thunk+0x3a/0x3c
 [<ffffffff8113e5ef>] alloc_pages_current+0xaf/0x120
 [<ffffffff810a0da0>] kimage_alloc_pages+0x10/0x60
 [<ffffffff810a15ad>] kimage_alloc_control_pages+0x5d/0x270
 [<ffffffff81027e85>] machine_kexec_prepare+0xe5/0x6c0
 [<ffffffff810a0d52>] ? kimage_free_page_list+0x52/0x70
 [<ffffffff810a1921>] sys_kexec_load+0x141/0x600
 [<ffffffff8115e6b0>] ? vfs_write+0x100/0x180
 [<ffffffff8145fbd9>] system_call_fastpath+0x16/0x1b

This patch adds a check to sanity_check_segment_list() to restrict
the segment size.

Signed-off-by: zhong jiang <zhongjiang@huawei.com>
---
 arch/x86/include/asm/kexec.h |  1 +
 kernel/kexec_core.c          | 12 ++++++++++++
 2 files changed, 13 insertions(+)

diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index d2434c1..b31a723 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -67,6 +67,7 @@ struct kimage;
 /* Memory to backup during crash kdump */
 #define KEXEC_BACKUP_SRC_START	(0UL)
 #define KEXEC_BACKUP_SRC_END	(640 * 1024UL)	/* 640K */
+#define KEXEC_MAX_SEGMENT_SIZE	(5 * 1024 * 1024UL)	/* 5M */
 
 /*
  * CPU does not save ss and sp on stack if execution is already
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index 448127d..35c5159 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -209,6 +209,18 @@ int sanity_check_segment_list(struct kimage *image)
 			return result;
 	}
 
+
+	/*
+	 * Verify that no segment size exceeds the specified limit.
+	 * If a segment size passed from user space is too large, a
+	 * large amount of time is wasted allocating pages, and a
+	 * soft lockup may occur.
+	 */
+	for (i = 0; i < nr_segments; i++) {
+		if (image->segment[i].memsz > KEXEC_MAX_SEGMENT_SIZE)
+			return result;
+	}
+
 	/*
 	 * Verify we have good destination addresses.  Normally
 	 * the caller is responsible for making certain we don't
-- 
1.8.3.1



end of thread, other threads:[~2016-07-22  5:55 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-07-20  2:00 [PATCH] kexec: add restriction on the kexec_load zhongjiang
2016-07-20  2:00 ` zhongjiang
2016-07-20  2:07 ` Eric W. Biederman
2016-07-20  2:07   ` Eric W. Biederman
2016-07-20  3:08   ` zhong jiang
2016-07-20  3:08     ` zhong jiang
2016-07-20  3:38   ` zhong jiang
2016-07-20  3:38     ` zhong jiang
2016-07-21  8:10   ` Dave Young
2016-07-21  8:10     ` Dave Young
2016-07-22  5:52     ` zhong jiang
2016-07-22  5:52       ` zhong jiang
  -- strict thread matches above, loose matches on Subject: below --
2016-07-19  4:10 zhongjiang
