* [PATCH 00/11] makedumpfile: Add zstd support for makedumpfile
@ 2021-09-10 10:33 Tao Liu
2021-09-10 10:33 ` [PATCH 01/11] makedumpfile: Add dump header for zstd Tao Liu
` (11 more replies)
0 siblings, 12 replies; 20+ messages in thread
From: Tao Liu @ 2021-09-10 10:33 UTC (permalink / raw)
To: kexec; +Cc: k-hagio-ab, Tao Liu
This patch set adds ZSTD compression support to makedumpfile. With ZSTD
support, vmcore dump size and time consumption achieve a better balance than
with zlib/lzo/snappy.
How to build:
Build using make:
$ make USEZSTD=on
Performance Comparison:
How to measure
I used an x86_64 machine with 4T of memory, and tested compression levels
-3 to 4 for ZSTD, as well as zlib/lzo/snappy compression.
All testing was done in makedumpfile's single-thread mode.
For compression performance testing, in order to avoid the performance
bottleneck of disk I/O, I used the following makedumpfile command, taking
lzo compression as an example. "--dry-run" does not write any data to disk,
"--show-stat" outputs the vmcore size after compression, and the time
consumption can be collected from the output logs.
$ makedumpfile -d 0 -l /proc/kcore vmcore --dry-run --show-stat
For decompression performance testing, I only tested the (-d 31) case,
because the vmcore of the (-d 0) case is too big to fit on the disk; in
addition, reading an oversized file from disk would hit the disk I/O
bottleneck.
I triggered a kernel crash and collected a vmcore. Then I converted the
vmcore into each compression format using the following makedumpfile
command, which produces an lzo-format vmcore, for example:
$ makedumpfile -l vmcore vmcore.lzo
After all the vmcores were ready, I used the following command to perform
the decompression; the time consumption can be collected from the logs.
$ makedumpfile -F vmcore.lzo --dry-run --show-stat
Result charts
For compression:
makedumpfile -d31 | makedumpfile -d0
Compression time vmcore size | Compression time vmcore size
zstd-3 325.516446 5285179595 | 8205.452248 51715430204
zstd-2 332.069432 5319726604 | 8057.381371 51732062793
zstd-1 309.942773 5730516274 | 8138.060786 52136191571
zstd0 439.773076 4673859661 | 8873.059963 50993669657
zstd1 406.68036 4700959521 | 8259.417132 51036900055
zstd2 397.195643 4699263608 | 8230.308291 51030410942
zstd3 436.491632 4673306398 | 8803.970103 51043393637
zstd4 543.363928 4668419304 | 8991.240244 51058088514
zlib 561.217381 8514803195 | 14381.755611 78199283893
lzo 248.175953 16696411879 | 6057.528781 90020895741
snappy 231.868312 11782236674 | 5290.919894 245661288355
For decompression:
makedumpfile -d31
decompress time vmcore size
zstd-3 477.543396 5289373448
zstd-2 478.034534 5327454123
zstd-1 459.066807 5748037931
zstd0 561.687525 4680009013
zstd1 547.248917 4706358547
zstd2 544.219758 4704780719
zstd3 555.726343 4680009013
zstd4 558.031721 4675545933
zlib 630.965426 8555376229
lzo 427.292107 16849457649
snappy 446.542806 11841407957
Discussion
For zstd levels -3 to 4, compression level 2 (ZSTD_dfast) gives
the best balance between time consumption and vmcore dump size.
Comparing zstd2 with zlib/lzo/snappy, zstd2 produces the smallest vmcore
while keeping the best balance between time consumption and dump size.
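The balance claim can be sanity-checked from the -d 31 chart above by
reducing the numbers to ratios relative to zlib (a rough reading aid only,
not an additional measurement):

```python
# Times (seconds) and vmcore sizes (bytes) copied from the
# "makedumpfile -d31" compression chart above.
results = {
    "zstd2":  (397.195643,  4699263608),
    "zlib":   (561.217381,  8514803195),
    "lzo":    (248.175953, 16696411879),
    "snappy": (231.868312, 11782236674),
}

zlib_time, zlib_size = results["zlib"]
for name, (t, size) in results.items():
    print(f"{name:7s} time {t / zlib_time:4.2f}x zlib, "
          f"size {size / zlib_size:4.2f}x zlib")
# zstd2 comes out roughly 45% smaller than zlib in roughly 30% less time.
```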
Tao Liu (11):
Add dump header for zstd.
Add command-line processing for zstd
Add zstd build support
Notify zstd unsupporting when disabled
Add single thread zstd compression processing
Add parallel threads zstd compression processing
Add single thread zstd uncompression processing
Add parallel threads zstd uncompression processing
Add zstd help message
Add zstd manual description
Add zstd README description
Makefile | 5 +++
README | 5 ++-
diskdump_mod.h | 1 +
makedumpfile.8 | 7 ++--
makedumpfile.c | 101 +++++++++++++++++++++++++++++++++++++++++++++----
makedumpfile.h | 10 +++++
print_info.c | 30 ++++++++++-----
7 files changed, 138 insertions(+), 21 deletions(-)
--
2.29.2
_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec
* [PATCH 01/11] makedumpfile: Add dump header for zstd.
2021-09-10 10:33 [PATCH 00/11] makedumpfile: Add zstd support for makedumpfile Tao Liu
@ 2021-09-10 10:33 ` Tao Liu
2021-09-10 10:33 ` [PATCH 02/11] makedumpfile: Add command-line processing " Tao Liu
` (10 subsequent siblings)
11 siblings, 0 replies; 20+ messages in thread
From: Tao Liu @ 2021-09-10 10:33 UTC (permalink / raw)
To: kexec; +Cc: k-hagio-ab, Tao Liu, Coiby Xu
Signed-off-by: Tao Liu <ltao@redhat.com>
Signed-off-by: Coiby Xu <coxu@redhat.com>
---
diskdump_mod.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/diskdump_mod.h b/diskdump_mod.h
index 3733953..e4bce5c 100644
--- a/diskdump_mod.h
+++ b/diskdump_mod.h
@@ -98,6 +98,7 @@ struct kdump_sub_header {
#define DUMP_DH_COMPRESSED_INCOMPLETE 0x8
/* indicate an incomplete dumpfile */
#define DUMP_DH_EXCLUDED_VMEMMAP 0x10 /* unused vmemmap pages are excluded */
+#define DUMP_DH_COMPRESSED_ZSTD 0x20 /* page is compressed with zstd */
/* descriptor of each page for vmcore */
typedef struct page_desc {
--
2.29.2
* [PATCH 02/11] makedumpfile: Add command-line processing for zstd
2021-09-10 10:33 [PATCH 00/11] makedumpfile: Add zstd support for makedumpfile Tao Liu
2021-09-10 10:33 ` [PATCH 01/11] makedumpfile: Add dump header for zstd Tao Liu
@ 2021-09-10 10:33 ` Tao Liu
2021-09-10 10:33 ` [PATCH 03/11] makedumpfile: Add zstd build support Tao Liu
` (9 subsequent siblings)
11 siblings, 0 replies; 20+ messages in thread
From: Tao Liu @ 2021-09-10 10:33 UTC (permalink / raw)
To: kexec; +Cc: k-hagio-ab, Tao Liu, Coiby Xu
Signed-off-by: Tao Liu <ltao@redhat.com>
Signed-off-by: Coiby Xu <coxu@redhat.com>
---
makedumpfile.c | 5 ++++-
makedumpfile.h | 1 +
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/makedumpfile.c b/makedumpfile.c
index 7777157..100b407 100644
--- a/makedumpfile.c
+++ b/makedumpfile.c
@@ -11656,7 +11656,7 @@ main(int argc, char *argv[])
info->block_order = DEFAULT_ORDER;
message_level = DEFAULT_MSG_LEVEL;
- while ((opt = getopt_long(argc, argv, "b:cDd:eEFfg:hi:lL:pRvXx:", longopts,
+ while ((opt = getopt_long(argc, argv, "b:cDd:eEFfg:hi:lL:pRvXx:z", longopts,
NULL)) != -1) {
switch (opt) {
unsigned long long val;
@@ -11739,6 +11739,9 @@ main(int argc, char *argv[])
case OPT_COMPRESS_SNAPPY:
info->flag_compress = DUMP_DH_COMPRESSED_SNAPPY;
break;
+ case OPT_COMPRESS_ZSTD:
+ info->flag_compress = DUMP_DH_COMPRESSED_ZSTD;
+ break;
case OPT_XEN_PHYS_START:
info->xen_phys_start = strtoul(optarg, NULL, 0);
break;
diff --git a/makedumpfile.h b/makedumpfile.h
index bd9e2f6..46d77b0 100644
--- a/makedumpfile.h
+++ b/makedumpfile.h
@@ -2471,6 +2471,7 @@ struct elf_prstatus {
#define OPT_VERSION 'v'
#define OPT_EXCLUDE_XEN_DOM 'X'
#define OPT_VMLINUX 'x'
+#define OPT_COMPRESS_ZSTD 'z'
#define OPT_START 256
#define OPT_SPLIT OPT_START+0
#define OPT_REASSEMBLE OPT_START+1
--
2.29.2
* [PATCH 03/11] makedumpfile: Add zstd build support
2021-09-10 10:33 [PATCH 00/11] makedumpfile: Add zstd support for makedumpfile Tao Liu
2021-09-10 10:33 ` [PATCH 01/11] makedumpfile: Add dump header for zstd Tao Liu
2021-09-10 10:33 ` [PATCH 02/11] makedumpfile: Add command-line processing " Tao Liu
@ 2021-09-10 10:33 ` Tao Liu
2021-09-10 10:33 ` [PATCH 04/11] makedumpfile: Notify zstd unsupporting when disabled Tao Liu
` (8 subsequent siblings)
11 siblings, 0 replies; 20+ messages in thread
From: Tao Liu @ 2021-09-10 10:33 UTC (permalink / raw)
To: kexec; +Cc: k-hagio-ab, Tao Liu, Coiby Xu
Signed-off-by: Tao Liu <ltao@redhat.com>
Signed-off-by: Coiby Xu <coxu@redhat.com>
---
Makefile | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/Makefile b/Makefile
index 5d61a69..725c186 100644
--- a/Makefile
+++ b/Makefile
@@ -68,6 +68,11 @@ endif
CFLAGS += -DUSESNAPPY
endif
+ifeq ($(USEZSTD), on)
+LIBS := -lzstd $(LIBS)
+CFLAGS += -DUSEZSTD
+endif
+
LIBS := $(LIBS) -lpthread
try-run = $(shell set -e; \
--
2.29.2
* [PATCH 04/11] makedumpfile: Notify zstd unsupporting when disabled
2021-09-10 10:33 [PATCH 00/11] makedumpfile: Add zstd support for makedumpfile Tao Liu
` (2 preceding siblings ...)
2021-09-10 10:33 ` [PATCH 03/11] makedumpfile: Add zstd build support Tao Liu
@ 2021-09-10 10:33 ` Tao Liu
2021-09-10 10:33 ` [PATCH 05/11] makedumpfile: Add single thread zstd compression processing Tao Liu
` (7 subsequent siblings)
11 siblings, 0 replies; 20+ messages in thread
From: Tao Liu @ 2021-09-10 10:33 UTC (permalink / raw)
To: kexec; +Cc: k-hagio-ab, Tao Liu, Coiby Xu
Signed-off-by: Tao Liu <ltao@redhat.com>
Signed-off-by: Coiby Xu <coxu@redhat.com>
---
makedumpfile.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/makedumpfile.c b/makedumpfile.c
index 100b407..f3bf297 100644
--- a/makedumpfile.c
+++ b/makedumpfile.c
@@ -4171,6 +4171,15 @@ initial(void)
}
#endif
+#ifndef USEZSTD
+ if (info->flag_compress == DUMP_DH_COMPRESSED_ZSTD) {
+ MSG("'-z' option is disabled, ");
+ MSG("because this binary doesn't support zstd "
+ "compression.\n");
+ MSG("Try `make USEZSTD=on` when building.\n");
+ }
+#endif
+
if (info->flag_exclude_xen_dom && !is_xen_memory()) {
MSG("'-X' option is disable,");
MSG("because %s is not Xen's memory core image.\n", info->name_memory);
--
2.29.2
* [PATCH 05/11] makedumpfile: Add single thread zstd compression processing
2021-09-10 10:33 [PATCH 00/11] makedumpfile: Add zstd support for makedumpfile Tao Liu
` (3 preceding siblings ...)
2021-09-10 10:33 ` [PATCH 04/11] makedumpfile: Notify zstd unsupporting when disabled Tao Liu
@ 2021-09-10 10:33 ` Tao Liu
2021-09-10 10:33 ` [PATCH 06/11] makedumpfile: Add parallel threads " Tao Liu
` (6 subsequent siblings)
11 siblings, 0 replies; 20+ messages in thread
From: Tao Liu @ 2021-09-10 10:33 UTC (permalink / raw)
To: kexec; +Cc: k-hagio-ab, Tao Liu, Coiby Xu
Signed-off-by: Tao Liu <ltao@redhat.com>
Signed-off-by: Coiby Xu <coxu@redhat.com>
---
makedumpfile.c | 41 ++++++++++++++++++++++++++++++++++++-----
makedumpfile.h | 3 +++
2 files changed, 39 insertions(+), 5 deletions(-)
diff --git a/makedumpfile.c b/makedumpfile.c
index f3bf297..76a7a77 100644
--- a/makedumpfile.c
+++ b/makedumpfile.c
@@ -296,18 +296,20 @@ is_cache_page(unsigned long flags)
static inline unsigned long
calculate_len_buf_out(long page_size)
{
- unsigned long zlib = compressBound(page_size);
- unsigned long lzo = 0;
- unsigned long snappy = 0;
+ unsigned long zlib, lzo, snappy, zstd;
+ zlib = lzo = snappy = zstd = 0;
+ zlib = compressBound(page_size);
#ifdef USELZO
lzo = page_size + page_size / 16 + 64 + 3;
#endif
#ifdef USESNAPPY
snappy = snappy_max_compressed_length(page_size);
#endif
-
- return MAX(zlib, MAX(lzo, snappy));
+#ifdef USEZSTD
+ zstd = ZSTD_compressBound(page_size);
+#endif
+ return MAX(zlib, MAX(lzo, MAX(snappy, zstd)));
}
#define BITMAP_SECT_LEN 4096
@@ -7298,6 +7300,10 @@ write_kdump_header(void)
else if (info->flag_compress & DUMP_DH_COMPRESSED_SNAPPY)
dh->status |= DUMP_DH_COMPRESSED_SNAPPY;
#endif
+#ifdef USEZSTD
+ else if (info->flag_compress & DUMP_DH_COMPRESSED_ZSTD)
+ dh->status |= DUMP_DH_COMPRESSED_ZSTD;
+#endif
size = sizeof(struct disk_dump_header);
if (!write_buffer(info->fd_dumpfile, 0, dh, size, info->name_dumpfile))
@@ -8567,6 +8573,9 @@ write_kdump_pages_cyclic(struct cache_data *cd_header, struct cache_data *cd_pag
#ifdef USELZO
lzo_bytep wrkmem = NULL;
#endif
+#ifdef USEZSTD
+ ZSTD_CCtx *cctx = NULL;
+#endif
if (info->flag_elf_dumpfile)
return FALSE;
@@ -8586,6 +8595,14 @@ write_kdump_pages_cyclic(struct cache_data *cd_header, struct cache_data *cd_pag
goto out;
}
#endif
+#ifdef USEZSTD
+ if (info->flag_compress & DUMP_DH_COMPRESSED_ZSTD) {
+ if ((cctx = ZSTD_createCCtx()) == NULL) {
+ ERRMSG("Can't allocate ZSTD_CCtx.\n");
+ goto out;
+ }
+ }
+#endif
len_buf_out = calculate_len_buf_out(info->page_size);
@@ -8668,6 +8685,16 @@ write_kdump_pages_cyclic(struct cache_data *cd_header, struct cache_data *cd_pag
&& (size_out < info->page_size)) {
pd.flags = DUMP_DH_COMPRESSED_SNAPPY;
pd.size = size_out;
+#endif
+#ifdef USEZSTD
+ } else if ((info->flag_compress & DUMP_DH_COMPRESSED_ZSTD)
+ && (size_out = ZSTD_compressCCtx(cctx,
+ buf_out, len_buf_out,
+ buf, info->page_size, ZSTD_dfast))
+ && (!ZSTD_isError(size_out))
+ && (size_out < info->page_size)) {
+ pd.flags = DUMP_DH_COMPRESSED_ZSTD;
+ pd.size = size_out;
#endif
} else {
pd.flags = 0;
@@ -8688,6 +8715,10 @@ write_kdump_pages_cyclic(struct cache_data *cd_header, struct cache_data *cd_pag
out:
if (buf_out != NULL)
free(buf_out);
+#ifdef USEZSTD
+ if (cctx != NULL)
+ ZSTD_freeCCtx(cctx);
+#endif
#ifdef USELZO
if (wrkmem != NULL)
free(wrkmem);
diff --git a/makedumpfile.h b/makedumpfile.h
index 46d77b0..a1a8cc2 100644
--- a/makedumpfile.h
+++ b/makedumpfile.h
@@ -38,6 +38,9 @@
#ifdef USESNAPPY
#include <snappy-c.h>
#endif
+#ifdef USEZSTD
+#include <zstd.h>
+#endif
#include "common.h"
#include "dwarf_info.h"
#include "diskdump_mod.h"
--
2.29.2
* [PATCH 06/11] makedumpfile: Add parallel threads zstd compression processing
2021-09-10 10:33 [PATCH 00/11] makedumpfile: Add zstd support for makedumpfile Tao Liu
` (4 preceding siblings ...)
2021-09-10 10:33 ` [PATCH 05/11] makedumpfile: Add single thread zstd compression processing Tao Liu
@ 2021-09-10 10:33 ` Tao Liu
2021-09-10 10:33 ` [PATCH 07/11] makedumpfile: Add single thread zstd uncompression processing Tao Liu
` (5 subsequent siblings)
11 siblings, 0 replies; 20+ messages in thread
From: Tao Liu @ 2021-09-10 10:33 UTC (permalink / raw)
To: kexec; +Cc: k-hagio-ab, Tao Liu
Signed-off-by: Tao Liu <ltao@redhat.com>
---
makedumpfile.c | 26 ++++++++++++++++++++++++++
makedumpfile.h | 6 ++++++
2 files changed, 32 insertions(+)
diff --git a/makedumpfile.c b/makedumpfile.c
index 76a7a77..af21a84 100644
--- a/makedumpfile.c
+++ b/makedumpfile.c
@@ -3892,6 +3892,12 @@ initial_for_parallel()
strerror(errno));
return FALSE;
}
+#endif
+#ifdef USEZSTD
+ if ((ZSTD_CCTX_PARALLEL(i) = ZSTD_createCCtx()) == NULL) {
+ MSG("Can't allocate ZSTD_CCtx.\n");
+ return FALSE;
+ }
#endif
}
@@ -4000,6 +4006,10 @@ free_for_parallel()
if (WRKMEM_PARALLEL(i) != NULL)
free(WRKMEM_PARALLEL(i));
#endif
+#ifdef USEZSTD
+ if (ZSTD_CCTX_PARALLEL(i) != NULL)
+ ZSTD_freeCCtx(ZSTD_CCTX_PARALLEL(i));
+#endif
}
free(info->threads);
@@ -8166,6 +8176,9 @@ kdump_thread_function_cyclic(void *arg) {
#ifdef USELZO
lzo_bytep wrkmem = WRKMEM_PARALLEL(kdump_thread_args->thread_num);
#endif
+#ifdef USEZSTD
+ ZSTD_CCtx *cctx = ZSTD_CCTX_PARALLEL(kdump_thread_args->thread_num);
+#endif
buf = BUF_PARALLEL(kdump_thread_args->thread_num);
buf_out = BUF_OUT_PARALLEL(kdump_thread_args->thread_num);
@@ -8298,6 +8311,19 @@ kdump_thread_function_cyclic(void *arg) {
DUMP_DH_COMPRESSED_SNAPPY;
page_data_buf[index].size = size_out;
memcpy(page_data_buf[index].buf, buf_out, size_out);
+#endif
+#ifdef USEZSTD
+ } else if ((info->flag_compress
+ & DUMP_DH_COMPRESSED_ZSTD)
+ && (size_out = ZSTD_compressCCtx(cctx,
+ buf_out, kdump_thread_args->len_buf_out,
+ buf, info->page_size, ZSTD_dfast))
+ && (!ZSTD_isError(size_out))
+ && (size_out < info->page_size)) {
+ page_data_buf[index].flags =
+ DUMP_DH_COMPRESSED_ZSTD;
+ page_data_buf[index].size = size_out;
+ memcpy(page_data_buf[index].buf, buf_out, size_out);
#endif
} else {
page_data_buf[index].flags = 0;
diff --git a/makedumpfile.h b/makedumpfile.h
index a1a8cc2..d583249 100644
--- a/makedumpfile.h
+++ b/makedumpfile.h
@@ -484,6 +484,9 @@ do { \
#ifdef USELZO
#define WRKMEM_PARALLEL(i) info->parallel_info[i].wrkmem
#endif
+#ifdef USEZSTD
+#define ZSTD_CCTX_PARALLEL(i) info->parallel_info[i].zstd_cctx
+#endif
/*
* kernel version
*
@@ -1328,6 +1331,9 @@ struct parallel_info {
#ifdef USELZO
lzo_bytep wrkmem;
#endif
+#ifdef USEZSTD
+ ZSTD_CCtx *zstd_cctx;
+#endif
};
struct ppc64_vmemmap {
--
2.29.2
* [PATCH 07/11] makedumpfile: Add single thread zstd uncompression processing
2021-09-10 10:33 [PATCH 00/11] makedumpfile: Add zstd support for makedumpfile Tao Liu
` (5 preceding siblings ...)
2021-09-10 10:33 ` [PATCH 06/11] makedumpfile: Add parallel threads " Tao Liu
@ 2021-09-10 10:33 ` Tao Liu
2021-09-10 10:33 ` [PATCH 08/11] makedumpfile: Add parallel threads " Tao Liu
` (4 subsequent siblings)
11 siblings, 0 replies; 20+ messages in thread
From: Tao Liu @ 2021-09-10 10:33 UTC (permalink / raw)
To: kexec; +Cc: k-hagio-ab, Tao Liu, Coiby Xu
Signed-off-by: Tao Liu <ltao@redhat.com>
Signed-off-by: Coiby Xu <coxu@redhat.com>
---
makedumpfile.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/makedumpfile.c b/makedumpfile.c
index af21a84..e70d882 100644
--- a/makedumpfile.c
+++ b/makedumpfile.c
@@ -832,7 +832,7 @@ readpage_kdump_compressed(unsigned long long paddr, void *bufptr)
* Read page data
*/
rdbuf = pd.flags & (DUMP_DH_COMPRESSED_ZLIB | DUMP_DH_COMPRESSED_LZO |
- DUMP_DH_COMPRESSED_SNAPPY) ? buf : bufptr;
+ DUMP_DH_COMPRESSED_SNAPPY | DUMP_DH_COMPRESSED_ZSTD) ? buf : bufptr;
if (read(info->fd_memory, rdbuf, pd.size) != pd.size) {
ERRMSG("Can't read %s. %s\n",
info->name_memory, strerror(errno));
@@ -873,6 +873,14 @@ readpage_kdump_compressed(unsigned long long paddr, void *bufptr)
ERRMSG("Uncompress failed: %d\n", ret);
return FALSE;
}
+#endif
+#ifdef USEZSTD
+ } else if ((pd.flags & DUMP_DH_COMPRESSED_ZSTD)) {
+ ret = ZSTD_decompress(bufptr, info->page_size, buf, pd.size);
+ if (ZSTD_isError(ret) || (ret != info->page_size)) {
+ ERRMSG("Uncompress failed: %d\n", ret);
+ return FALSE;
+ }
#endif
}
--
2.29.2
* [PATCH 08/11] makedumpfile: Add parallel threads zstd uncompression processing
2021-09-10 10:33 [PATCH 00/11] makedumpfile: Add zstd support for makedumpfile Tao Liu
` (6 preceding siblings ...)
2021-09-10 10:33 ` [PATCH 07/11] makedumpfile: Add single thread zstd uncompression processing Tao Liu
@ 2021-09-10 10:33 ` Tao Liu
2021-09-10 10:33 ` [PATCH 09/11] makedumpfile: Add zstd help message Tao Liu
` (3 subsequent siblings)
11 siblings, 0 replies; 20+ messages in thread
From: Tao Liu @ 2021-09-10 10:33 UTC (permalink / raw)
To: kexec; +Cc: k-hagio-ab, Tao Liu
Signed-off-by: Tao Liu <ltao@redhat.com>
---
makedumpfile.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/makedumpfile.c b/makedumpfile.c
index e70d882..2514eb6 100644
--- a/makedumpfile.c
+++ b/makedumpfile.c
@@ -919,7 +919,7 @@ readpage_kdump_compressed_parallel(int fd_memory, unsigned long long paddr,
* Read page data
*/
rdbuf = pd.flags & (DUMP_DH_COMPRESSED_ZLIB | DUMP_DH_COMPRESSED_LZO |
- DUMP_DH_COMPRESSED_SNAPPY) ? buf : bufptr;
+ DUMP_DH_COMPRESSED_SNAPPY | DUMP_DH_COMPRESSED_ZSTD) ? buf : bufptr;
if (read(fd_memory, rdbuf, pd.size) != pd.size) {
ERRMSG("Can't read %s. %s\n",
info->name_memory, strerror(errno));
@@ -960,6 +960,14 @@ readpage_kdump_compressed_parallel(int fd_memory, unsigned long long paddr,
ERRMSG("Uncompress failed: %d\n", ret);
return FALSE;
}
+#endif
+#ifdef USEZSTD
+ } else if ((pd.flags & DUMP_DH_COMPRESSED_ZSTD)) {
+ ret = ZSTD_decompress(bufptr, info->page_size, buf, pd.size);
+ if (ZSTD_isError(ret) || (ret != info->page_size)) {
+ ERRMSG("Uncompress failed: %d\n", ret);
+ return FALSE;
+ }
#endif
}
--
2.29.2
* [PATCH 09/11] makedumpfile: Add zstd help message
2021-09-10 10:33 [PATCH 00/11] makedumpfile: Add zstd support for makedumpfile Tao Liu
` (7 preceding siblings ...)
2021-09-10 10:33 ` [PATCH 08/11] makedumpfile: Add parallel threads " Tao Liu
@ 2021-09-10 10:33 ` Tao Liu
2021-09-10 10:33 ` [PATCH 10/11] makedumpfile: Add zstd manual description Tao Liu
` (2 subsequent siblings)
11 siblings, 0 replies; 20+ messages in thread
From: Tao Liu @ 2021-09-10 10:33 UTC (permalink / raw)
To: kexec; +Cc: k-hagio-ab, Tao Liu, Coiby Xu
Signed-off-by: Tao Liu <ltao@redhat.com>
Signed-off-by: Coiby Xu <coxu@redhat.com>
---
print_info.c | 30 +++++++++++++++++++++---------
1 file changed, 21 insertions(+), 9 deletions(-)
diff --git a/print_info.c b/print_info.c
index 8b28554..e4bfefc 100644
--- a/print_info.c
+++ b/print_info.c
@@ -38,6 +38,11 @@ show_version(void)
MSG("snappy\tenabled\n");
#else
MSG("snappy\tdisabled\n");
+#endif
+#ifdef USEZSTD
+ MSG("zstd\tenabled\n");
+#else
+ MSG("zstd\tdisabled\n");
#endif
MSG("\n");
}
@@ -57,20 +62,26 @@ print_usage(void)
MSG(" enabled\n");
#else
MSG(" disabled ('-p' option will be ignored.)\n");
+#endif
+ MSG("zstd support:\n");
+#ifdef USEZSTD
+ MSG(" enabled\n");
+#else
+ MSG(" disabled ('-z' option will be ignored.)\n");
#endif
MSG("\n");
MSG("Usage:\n");
MSG(" Creating DUMPFILE:\n");
- MSG(" # makedumpfile [-c|-l|-p|-E] [-d DL] [-e] [-x VMLINUX|-i VMCOREINFO] VMCORE\n");
+ MSG(" # makedumpfile [-c|-l|-p|-z|-E] [-d DL] [-e] [-x VMLINUX|-i VMCOREINFO] VMCORE\n");
MSG(" DUMPFILE\n");
MSG("\n");
MSG(" Creating DUMPFILE with filtered kernel data specified through filter config\n");
MSG(" file or eppic macro:\n");
- MSG(" # makedumpfile [-c|-l|-p|-E] [-d DL] -x VMLINUX [--config FILTERCONFIGFILE]\n");
+ MSG(" # makedumpfile [-c|-l|-p|-z|-E] [-d DL] -x VMLINUX [--config FILTERCONFIGFILE]\n");
MSG(" [--eppic EPPICMACRO] VMCORE DUMPFILE\n");
MSG("\n");
MSG(" Outputting the dump data in the flattened format to the standard output:\n");
- MSG(" # makedumpfile -F [-c|-l|-p|-E] [-d DL] [-x VMLINUX|-i VMCOREINFO] VMCORE\n");
+ MSG(" # makedumpfile -F [-c|-l|-p|-z|-E] [-d DL] [-x VMLINUX|-i VMCOREINFO] VMCORE\n");
MSG("\n");
MSG(" Rearranging the dump data in the flattened format to a readable DUMPFILE:\n");
MSG(" # makedumpfile -R DUMPFILE\n");
@@ -94,26 +105,27 @@ print_usage(void)
MSG("\n");
MSG("\n");
MSG(" Creating DUMPFILE of Xen:\n");
- MSG(" # makedumpfile [-c|-l|-p|-E] [--xen-syms XEN-SYMS|--xen-vmcoreinfo VMCOREINFO]\n");
+ MSG(" # makedumpfile [-c|-l|-p|-z|-E] [--xen-syms XEN-SYMS|--xen-vmcoreinfo VMCOREINFO]\n");
MSG(" VMCORE DUMPFILE\n");
MSG("\n");
MSG(" Filtering domain-0 of Xen:\n");
- MSG(" # makedumpfile [-c|-l|-p|-E] -d DL -x vmlinux VMCORE DUMPFILE\n");
+ MSG(" # makedumpfile [-c|-l|-p|-z|-E] -d DL -x vmlinux VMCORE DUMPFILE\n");
MSG("\n");
MSG(" Generating VMCOREINFO of Xen:\n");
MSG(" # makedumpfile -g VMCOREINFO --xen-syms XEN-SYMS\n");
MSG("\n");
MSG("\n");
MSG(" Creating DUMPFILE from multiple VMCOREs generated on sadump diskset configuration:\n");
- MSG(" # makedumpfile [-c|-l|-p] [-d DL] -x VMLINUX --diskset=VMCORE1 --diskset=VMCORE2\n");
+ MSG(" # makedumpfile [-c|-l|-p|-z] [-d DL] -x VMLINUX --diskset=VMCORE1 --diskset=VMCORE2\n");
MSG(" [--diskset=VMCORE3 ..] DUMPFILE\n");
MSG("\n");
MSG("\n");
MSG("Available options:\n");
- MSG(" [-c|-l|-p]:\n");
+ MSG(" [-c|-l|-p|-z]:\n");
MSG(" Compress dump data by each page using zlib for -c option, lzo for -l option\n");
- MSG(" or snappy for -p option. A user cannot specify either of these options with\n");
- MSG(" -E option, because the ELF format does not support compressed data.\n");
+ MSG(" snappy for -p option, or zstd for -z option. A user cannot specify either of\n");
+ MSG(" these options with -E option, because the ELF format does not support\n");
+ MSG(" compressed data.\n");
MSG(" THIS IS ONLY FOR THE CRASH UTILITY.\n");
MSG("\n");
MSG(" [-e]:\n");
--
2.29.2
* [PATCH 10/11] makedumpfile: Add zstd manual description
2021-09-10 10:33 [PATCH 00/11] makedumpfile: Add zstd support for makedumpfile Tao Liu
` (8 preceding siblings ...)
2021-09-10 10:33 ` [PATCH 09/11] makedumpfile: Add zstd help message Tao Liu
@ 2021-09-10 10:33 ` Tao Liu
2021-09-10 10:33 ` [PATCH 11/11] makedumpfile: Add zstd README description Tao Liu
2021-09-14 7:04 ` [PATCH 00/11] makedumpfile: Add zstd support for makedumpfile HAGIO KAZUHITO(萩尾 一仁)
11 siblings, 0 replies; 20+ messages in thread
From: Tao Liu @ 2021-09-10 10:33 UTC (permalink / raw)
To: kexec; +Cc: k-hagio-ab, Tao Liu, Coiby Xu
Signed-off-by: Tao Liu <ltao@redhat.com>
Signed-off-by: Coiby Xu <coxu@redhat.com>
---
makedumpfile.8 | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/makedumpfile.8 b/makedumpfile.8
index f782e5f..3022d9c 100644
--- a/makedumpfile.8
+++ b/makedumpfile.8
@@ -132,10 +132,11 @@ configuration, you need to use --diskset option.
.SH OPTIONS
.TP
-\fB\-c,\-l,\-p\fR
+\fB\-c,\-l,\-p,\-z\fR
Compress dump data by each page using zlib for -c option, lzo for -l
-option or snappy for -p option.
-(-l option needs USELZO=on and -p option needs USESNAPPY=on when building)
+option, snappy for -p option or zstd for -z option.
+(-l option needs USELZO=on, -p option needs USESNAPPY=on and -z option needs
+USEZSTD=on when building)
.br
A user cannot specify this option with \-E option, because the ELF format does
not support compressed data.
--
2.29.2
* [PATCH 11/11] makedumpfile: Add zstd README description
2021-09-10 10:33 [PATCH 00/11] makedumpfile: Add zstd support for makedumpfile Tao Liu
` (9 preceding siblings ...)
2021-09-10 10:33 ` [PATCH 10/11] makedumpfile: Add zstd manual description Tao Liu
@ 2021-09-10 10:33 ` Tao Liu
2021-09-14 7:04 ` [PATCH 00/11] makedumpfile: Add zstd support for makedumpfile HAGIO KAZUHITO(萩尾 一仁)
11 siblings, 0 replies; 20+ messages in thread
From: Tao Liu @ 2021-09-10 10:33 UTC (permalink / raw)
To: kexec; +Cc: k-hagio-ab, Tao Liu, Coiby Xu
Signed-off-by: Tao Liu <ltao@redhat.com>
Signed-off-by: Coiby Xu <coxu@redhat.com>
---
README | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/README b/README
index 6629440..0b17cf1 100644
--- a/README
+++ b/README
@@ -49,7 +49,10 @@
7.Build with snappy support:
# make USESNAPPY=on ; make install
The user has to prepare snappy library.
- 8.Build the extension module for --eppic option.
+ 8.Build with zstd support:
+ # make USEZSTD=on ; make install
+ The user has to prepare zstd library.
+ 9.Build the extension module for --eppic option.
# make eppic_makedumpfile.so
The user has to prepare eppic library from the following site:
http://code.google.com/p/eppic/
--
2.29.2
* RE: [PATCH 00/11] makedumpfile: Add zstd support for makedumpfile
2021-09-10 10:33 [PATCH 00/11] makedumpfile: Add zstd support for makedumpfile Tao Liu
` (10 preceding siblings ...)
2021-09-10 10:33 ` [PATCH 11/11] makedumpfile: Add zstd README description Tao Liu
@ 2021-09-14 7:04 ` HAGIO KAZUHITO(萩尾 一仁)
2021-09-14 8:33 ` Tao Liu
11 siblings, 1 reply; 20+ messages in thread
From: HAGIO KAZUHITO(萩尾 一仁) @ 2021-09-14 7:04 UTC (permalink / raw)
To: Tao Liu, kexec
Hi Tao Liu,
Thanks for the patchset!
-----Original Message-----
> This patch set adds ZSTD compression support to makedumpfile. With ZSTD
> support, vmcore dump size and time consumption achieve a better balance than
> with zlib/lzo/snappy.
>
> How to build:
>
> Build using make:
> $ make USEZSTD=on
>
> Performance Comparison:
>
> How to measure
>
> I used an x86_64 machine with 4T of memory, and tested compression levels
> -3 to 4 for ZSTD, as well as zlib/lzo/snappy compression.
> All testing was done in makedumpfile's single-thread mode.
>
> For compression performance testing, in order to avoid the performance
> bottleneck of disk I/O, I used the following makedumpfile command, taking
> lzo compression as an example. "--dry-run" does not write any data to disk,
> "--show-stat" outputs the vmcore size after compression, and the time
> consumption can be collected from the output logs.
>
> $ makedumpfile -d 0 -l /proc/kcore vmcore --dry-run --show-stat
>
>
> For decompression performance testing, I only tested the (-d 31) case,
> because the vmcore of the (-d 0) case is too big to fit on the disk; in
> addition, reading an oversized file from disk would hit the disk I/O
> bottleneck.
>
> I triggered a kernel crash and collected a vmcore. Then I converted the
> vmcore into each compression format using the following makedumpfile
> command, which produces an lzo-format vmcore, for example:
>
> $ makedumpfile -l vmcore vmcore.lzo
>
> After all the vmcores were ready, I used the following command to perform
> the decompression; the time consumption can be collected from the logs.
>
> $ makedumpfile -F vmcore.lzo --dry-run --show-stat
>
>
> Result charts
>
> For compression:
>
> makedumpfile -d31 | makedumpfile -d0
> Compression time vmcore size | Compression time vmcore size
> zstd-3 325.516446 5285179595 | 8205.452248 51715430204
> zstd-2 332.069432 5319726604 | 8057.381371 51732062793
> zstd-1 309.942773 5730516274 | 8138.060786 52136191571
> zstd0 439.773076 4673859661 | 8873.059963 50993669657
> zstd1 406.68036 4700959521 | 8259.417132 51036900055
> zstd2 397.195643 4699263608 | 8230.308291 51030410942
> zstd3 436.491632 4673306398 | 8803.970103 51043393637
> zstd4 543.363928 4668419304 | 8991.240244 51058088514
> zlib 561.217381 8514803195 | 14381.755611 78199283893
> lzo 248.175953 16696411879 | 6057.528781 90020895741
> snappy 231.868312 11782236674 | 5290.919894 245661288355
>
> For decompression:
>
> makedumpfile -d31
> decompress time vmcore size
> zstd-3 477.543396 5289373448
> zstd-2 478.034534 5327454123
> zstd-1 459.066807 5748037931
> zstd0 561.687525 4680009013
> zstd1 547.248917 4706358547
> zstd2 544.219758 4704780719
> zstd3 555.726343 4680009013
> zstd4 558.031721 4675545933
> zlib 630.965426 8555376229
> lzo 427.292107 16849457649
> snappy 446.542806 11841407957
>
> Discussion
>
> For zstd levels -3 to 4, compression level 2 (ZSTD_dfast) gives
> the best balance between time consumption and vmcore dump size.
Do you have results for a -d 1 compression test? I think -d 0 is not
practical; I would like to see a -d 1 result for such a large vmcore.
And just out of curiosity, what version of zstd are you using?
When I tested zstd last time, compression level 1 was faster than 2, iirc.
btw, ZSTD_dfast is an enum of ZSTD_strategy, not for compression level?
(no need to update for now, I will review later)
Thanks,
Kazu
_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec
* Re: [PATCH 00/11] makedumpfile: Add zstd support for makedumpfile
2021-09-14 7:04 ` [PATCH 00/11] makedumpfile: Add zstd support for makedumpfile HAGIO KAZUHITO(萩尾 一仁)
@ 2021-09-14 8:33 ` Tao Liu
2021-09-17 1:34 ` HAGIO KAZUHITO(萩尾 一仁)
0 siblings, 1 reply; 20+ messages in thread
From: Tao Liu @ 2021-09-14 8:33 UTC (permalink / raw)
To: HAGIO KAZUHITO(萩尾 一仁); +Cc: kexec
Hi Kazu,
Thanks for reviewing the patchset!
On Tue, Sep 14, 2021 at 07:04:24AM +0000, HAGIO KAZUHITO(萩尾 一仁) wrote:
> Hi Tao Liu,
>
> Thanks for the patchset!
>
> -----Original Message-----
> > This patch set adds ZSTD compression support to makedumpfile. With ZSTD compression
> > support, the vmcore dump size and time consumption can be better balanced than
> > with zlib/lzo/snappy.
> >
> > How to build:
> >
> > Build using make:
> > $ make USEZSTD=on
> >
> > Performance Comparison:
> >
> > How to measure
> >
> > I took an x86_64 machine which had 4T of memory, and tested compression
> > levels from -3 to 4 for ZSTD, as well as zlib/lzo/snappy compression.
> > All testing was done in makedumpfile single-thread mode.
> >
> > For compression performance testing, in order to avoid the performance
> > bottleneck of disk I/O, I used the following makedumpfile command, taking
> > lzo compression as an example. "--dry-run" will not write any data to disk,
> > "--show-stat" will output the vmcore size after compression, and the time
> > consumption can be collected from the output logs.
> >
> > $ makedumpfile -d 0 -l /proc/kcore vmcore --dry-run --show-stat
> >
> >
> > For decompression performance testing, I only tested the -d 31 case,
> > because the vmcore size of the -d 0 case is too big to fit on the disk; in
> > addition, reading an oversized file from disk would hit the disk I/O
> > bottleneck.
> >
> > I triggered a kernel crash and collected a vmcore. Then I converted the
> > vmcore into each compression format using the following makedumpfile
> > command; for example, to get an lzo-format vmcore:
> >
> > $ makedumpfile -l vmcore vmcore.lzo
> >
> > After all vmcores were ready, I used the following command to perform the
> > decompression; the time consumption can be collected from the logs.
> >
> > $ makedumpfile -F vmcore.lzo --dry-run --show-stat
> >
> >
> > Result charts
> >
> > For compression:
> >
> > makedumpfile -d31 | makedumpfile -d0
> > Compression time vmcore size | Compression time vmcore size
> > zstd-3 325.516446 5285179595 | 8205.452248 51715430204
> > zstd-2 332.069432 5319726604 | 8057.381371 51732062793
> > zstd-1 309.942773 5730516274 | 8138.060786 52136191571
> > zstd0 439.773076 4673859661 | 8873.059963 50993669657
> > zstd1 406.68036 4700959521 | 8259.417132 51036900055
> > zstd2 397.195643 4699263608 | 8230.308291 51030410942
> > zstd3 436.491632 4673306398 | 8803.970103 51043393637
> > zstd4 543.363928 4668419304 | 8991.240244 51058088514
> > zlib 561.217381 8514803195 | 14381.755611 78199283893
> > lzo 248.175953 16696411879 | 6057.528781 90020895741
> > snappy 231.868312 11782236674 | 5290.919894 245661288355
> >
> > For decompression:
> >
> > makedumpfile -d31
> > decompress time vmcore size
> > zstd-3 477.543396 5289373448
> > zstd-2 478.034534 5327454123
> > zstd-1 459.066807 5748037931
> > zstd0 561.687525 4680009013
> > zstd1 547.248917 4706358547
> > zstd2 544.219758 4704780719
> > zstd3 555.726343 4680009013
> > zstd4 558.031721 4675545933
> > zlib 630.965426 8555376229
> > lzo 427.292107 16849457649
> > snappy 446.542806 11841407957
> >
> > Discussion
> >
> > For zstd levels from -3 to 4, compression level 2 (ZSTD_dfast) offers
> > the best balance between time consumption and vmcore dump size.
>
> Do you have a result of a -d 1 compression test? I think -d 0 is not
> practical; I would like to see a -d 1 result for such a large vmcore.
>
No, I haven't tested the -d 1 case. I have returned the machine which was
used for performance testing; I will borrow it and test on it again, please
wait for a while...
> And just out of curiosity, what version of zstd are you using?
> When I tested zstd last time, compression level 1 was faster than 2, iirc.
>
The OS running on the machine is Fedora 34; I used its default zstd package,
whose version is v1.4.9.
> btw, ZSTD_dfast is an enum of ZSTD_strategy, not a compression level?
Yes, it's an enum of ZSTD_strategy [1].
[1]: https://zstd.docsforge.com/dev/api-documentation/#advanced-compression-api-requires-v140
Thanks,
Tao Liu
> (no need to update for now, I will review later)
>
> Thanks,
> Kazu
>
> >
> > Among zstd2/zlib/lzo/snappy, zstd2 has the smallest vmcore size and also
> > the best balance between time consumption and vmcore dump size.
> >
> > Tao Liu (11):
> > Add dump header for zstd.
> > Add command-line processing for zstd
> > Add zstd build support
> > Notify zstd unsupporting when disabled
> > Add single thread zstd compression processing
> > Add parallel threads zstd compression processing
> > Add single thread zstd uncompression processing
> > Add parallel threads zstd uncompression processing
> > Add zstd help message
> > Add zstd manual description
> > Add zstd README description
> >
> > Makefile | 5 +++
> > README | 5 ++-
> > diskdump_mod.h | 1 +
> > makedumpfile.8 | 7 ++--
> > makedumpfile.c | 101 +++++++++++++++++++++++++++++++++++++++++++++----
> > makedumpfile.h | 10 +++++
> > print_info.c | 30 ++++++++++-----
> > 7 files changed, 138 insertions(+), 21 deletions(-)
> >
> > --
> > 2.29.2
>
* RE: [PATCH 00/11] makedumpfile: Add zstd support for makedumpfile
2021-09-14 8:33 ` Tao Liu
@ 2021-09-17 1:34 ` HAGIO KAZUHITO(萩尾 一仁)
2021-09-17 2:31 ` HAGIO KAZUHITO(萩尾 一仁)
0 siblings, 1 reply; 20+ messages in thread
From: HAGIO KAZUHITO(萩尾 一仁) @ 2021-09-17 1:34 UTC (permalink / raw)
To: Tao Liu; +Cc: kexec
Hi Tao Liu,
-----Original Message-----
> Hi Kazu,
>
> Thanks for reviewing the patchset!
>
> On Tue, Sep 14, 2021 at 07:04:24AM +0000, HAGIO KAZUHITO(萩尾 一仁) wrote:
> > Hi Tao Liu,
> >
> > Thanks for the patchset!
> >
> > -----Original Message-----
> > > This patch set adds ZSTD compression support to makedumpfile. With ZSTD compression
> > > support, the vmcore dump size and time consumption can be better balanced than
> > > with zlib/lzo/snappy.
> > >
> > > How to build:
> > >
> > > Build using make:
> > > $ make USEZSTD=on
> > >
> > > Performance Comparison:
> > >
> > > How to measure
> > >
> > > I took an x86_64 machine which had 4T of memory, and tested compression
> > > levels from -3 to 4 for ZSTD, as well as zlib/lzo/snappy compression.
> > > All testing was done in makedumpfile single-thread mode.
> > >
> > > For compression performance testing, in order to avoid the performance
> > > bottleneck of disk I/O, I used the following makedumpfile command, taking
> > > lzo compression as an example. "--dry-run" will not write any data to disk,
> > > "--show-stat" will output the vmcore size after compression, and the time
> > > consumption can be collected from the output logs.
> > >
> > > $ makedumpfile -d 0 -l /proc/kcore vmcore --dry-run --show-stat
> > >
> > >
> > > For decompression performance testing, I only tested the -d 31 case,
> > > because the vmcore size of the -d 0 case is too big to fit on the disk; in
> > > addition, reading an oversized file from disk would hit the disk I/O
> > > bottleneck.
> > >
> > > I triggered a kernel crash and collected a vmcore. Then I converted the
> > > vmcore into each compression format using the following makedumpfile
> > > command; for example, to get an lzo-format vmcore:
> > >
> > > $ makedumpfile -l vmcore vmcore.lzo
> > >
> > > After all vmcores were ready, I used the following command to perform the
> > > decompression; the time consumption can be collected from the logs.
> > >
> > > $ makedumpfile -F vmcore.lzo --dry-run --show-stat
> > >
> > >
> > > Result charts
> > >
> > > For compression:
> > >
> > > makedumpfile -d31 | makedumpfile -d0
> > > Compression time vmcore size | Compression time vmcore size
> > > zstd-3 325.516446 5285179595 | 8205.452248 51715430204
> > > zstd-2 332.069432 5319726604 | 8057.381371 51732062793
> > > zstd-1 309.942773 5730516274 | 8138.060786 52136191571
> > > zstd0 439.773076 4673859661 | 8873.059963 50993669657
> > > zstd1 406.68036 4700959521 | 8259.417132 51036900055
> > > zstd2 397.195643 4699263608 | 8230.308291 51030410942
> > > zstd3 436.491632 4673306398 | 8803.970103 51043393637
> > > zstd4 543.363928 4668419304 | 8991.240244 51058088514
> > > zlib 561.217381 8514803195 | 14381.755611 78199283893
> > > lzo 248.175953 16696411879 | 6057.528781 90020895741
> > > snappy 231.868312 11782236674 | 5290.919894 245661288355
> > >
> > > For decompression:
> > >
> > > makedumpfile -d31
> > > decompress time vmcore size
> > > zstd-3 477.543396 5289373448
> > > zstd-2 478.034534 5327454123
> > > zstd-1 459.066807 5748037931
> > > zstd0 561.687525 4680009013
> > > zstd1 547.248917 4706358547
> > > zstd2 544.219758 4704780719
> > > zstd3 555.726343 4680009013
> > > zstd4 558.031721 4675545933
> > > zlib 630.965426 8555376229
> > > lzo 427.292107 16849457649
> > > snappy 446.542806 11841407957
> > >
> > > Discussion
> > >
> > > For zstd levels from -3 to 4, compression level 2 (ZSTD_dfast) offers
> > > the best balance between time consumption and vmcore dump size.
> >
> > Do you have a result of a -d 1 compression test? I think -d 0 is not
> > practical; I would like to see a -d 1 result for such a large vmcore.
> >
>
> No, I haven't tested the -d 1 case. I have returned the machine which was
> used for performance testing; I will borrow it and test on it again, please
> wait for a while...
Thanks, it would be helpful.
>
> > And just out of curiosity, what version of zstd are you using?
> > When I tested zstd last time, compression level 1 was faster than 2, iirc.
> >
>
> The OS running on the machine is Fedora 34; I used its default zstd package,
> whose version is v1.4.9.
Thanks for the info.
>
> > btw, ZSTD_dfast is an enum of ZSTD_strategy, not a compression level?
>
> Yes, it's an enum of ZSTD_strategy [1].
ok, so it'll have to be replaced with "2" to avoid confusion.
>
> [1]: https://zstd.docsforge.com/dev/api-documentation/#advanced-compression-api-requires-v140
>
> Thanks,
> Tao Liu
>
> > (no need to update for now, I will review later)
The series almost looks good to me (though I will merge those into a patch);
just two questions remain:
- whether 2 is the best balanced compression level,
- how much faster ZSTD_decompressDCtx() is than the current ZSTD_decompress().
I'll evaluate these, but it would be helpful if you could do some, too.
I think that compression time and ratio will vary with the data, so it'll be
better to use some real data; I'm looking for it... kernel source or something.
Thanks,
Kazu
* RE: [PATCH 00/11] makedumpfile: Add zstd support for makedumpfile
2021-09-17 1:34 ` HAGIO KAZUHITO(萩尾 一仁)
@ 2021-09-17 2:31 ` HAGIO KAZUHITO(萩尾 一仁)
2021-09-17 7:03 ` HAGIO KAZUHITO(萩尾 一仁)
0 siblings, 1 reply; 20+ messages in thread
From: HAGIO KAZUHITO(萩尾 一仁) @ 2021-09-17 2:31 UTC (permalink / raw)
To: Tao Liu; +Cc: kexec
-----Original Message-----
> > > > This patch set adds ZSTD compression support to makedumpfile. With ZSTD compression
> > > > support, the vmcore dump size and time consumption can be better balanced than
> > > > with zlib/lzo/snappy.
> > > >
> > > > How to build:
> > > >
> > > > Build using make:
> > > > $ make USEZSTD=on
> > > >
> > > > Performance Comparison:
> > > >
> > > > How to measure
> > > >
> > > > I took an x86_64 machine which had 4T of memory, and tested compression
> > > > levels from -3 to 4 for ZSTD, as well as zlib/lzo/snappy compression.
> > > > All testing was done in makedumpfile single-thread mode.
> > > >
> > > > For compression performance testing, in order to avoid the performance
> > > > bottleneck of disk I/O, I used the following makedumpfile command, taking
> > > > lzo compression as an example. "--dry-run" will not write any data to disk,
> > > > "--show-stat" will output the vmcore size after compression, and the time
> > > > consumption can be collected from the output logs.
> > > >
> > > > $ makedumpfile -d 0 -l /proc/kcore vmcore --dry-run --show-stat
> > > >
> > > >
> > > > For decompression performance testing, I only tested the -d 31 case,
> > > > because the vmcore size of the -d 0 case is too big to fit on the disk; in
> > > > addition, reading an oversized file from disk would hit the disk I/O
> > > > bottleneck.
> > > >
> > > > I triggered a kernel crash and collected a vmcore. Then I converted the
> > > > vmcore into each compression format using the following makedumpfile
> > > > command; for example, to get an lzo-format vmcore:
> > > >
> > > > $ makedumpfile -l vmcore vmcore.lzo
> > > >
> > > > After all vmcores were ready, I used the following command to perform the
> > > > decompression; the time consumption can be collected from the logs.
> > > >
> > > > $ makedumpfile -F vmcore.lzo --dry-run --show-stat
> > > >
> > > >
> > > > Result charts
> > > >
> > > > For compression:
> > > >
> > > > makedumpfile -d31 | makedumpfile -d0
> > > > Compression time vmcore size | Compression time vmcore size
> > > > zstd-3 325.516446 5285179595 | 8205.452248 51715430204
> > > > zstd-2 332.069432 5319726604 | 8057.381371 51732062793
> > > > zstd-1 309.942773 5730516274 | 8138.060786 52136191571
> > > > zstd0 439.773076 4673859661 | 8873.059963 50993669657
> > > > zstd1 406.68036 4700959521 | 8259.417132 51036900055
> > > > zstd2 397.195643 4699263608 | 8230.308291 51030410942
> > > > zstd3 436.491632 4673306398 | 8803.970103 51043393637
> > > > zstd4 543.363928 4668419304 | 8991.240244 51058088514
> > > > zlib 561.217381 8514803195 | 14381.755611 78199283893
> > > > lzo 248.175953 16696411879 | 6057.528781 90020895741
> > > > snappy 231.868312 11782236674 | 5290.919894 245661288355
> > > >
> > > > For decompression:
> > > >
> > > > makedumpfile -d31
> > > > decompress time vmcore size
> > > > zstd-3 477.543396 5289373448
> > > > zstd-2 478.034534 5327454123
> > > > zstd-1 459.066807 5748037931
> > > > zstd0 561.687525 4680009013
> > > > zstd1 547.248917 4706358547
> > > > zstd2 544.219758 4704780719
> > > > zstd3 555.726343 4680009013
> > > > zstd4 558.031721 4675545933
> > > > zlib 630.965426 8555376229
> > > > lzo 427.292107 16849457649
> > > > snappy 446.542806 11841407957
> > > >
> > > > Discussion
> > > >
> > > > For zstd levels from -3 to 4, compression level 2 (ZSTD_dfast) offers
> > > > the best balance between time consumption and vmcore dump size.
> > >
> > > Do you have a result of a -d 1 compression test? I think -d 0 is not
> > > practical; I would like to see a -d 1 result for such a large vmcore.
> > >
> >
> > No, I haven't tested the -d 1 case. I have returned the machine which was
> > used for performance testing; I will borrow it and test on it again, please
> > wait for a while...
>
> Thanks, it would be helpful.
>
> >
> > > And just out of curiosity, what version of zstd are you using?
> > > When I tested zstd last time, compression level 1 was faster than 2, iirc.
> > >
> >
> > The OS running on the machine is Fedora 34; I used its default zstd package,
> > whose version is v1.4.9.
>
> Thanks for the info.
>
> >
> > > btw, ZSTD_dfast is an enum of ZSTD_strategy, not a compression level?
> >
> > Yes, it's an enum of ZSTD_strategy [1].
>
> ok, so it'll have to be replaced with "2" to avoid confusion.
>
> >
> > [1]: https://zstd.docsforge.com/dev/api-documentation/#advanced-compression-api-requires-v140
> >
> > Thanks,
> > Tao Liu
> >
> > > (no need to update for now, I will review later)
>
> The series almost looks good to me (though I will merge those into a patch);
> just two questions remain:
> - whether 2 is the best balanced compression level,
> - how much faster ZSTD_decompressDCtx() is than the current ZSTD_decompress().
Looking at this further, we will need some effort to use it, especially with
threads, and decompression is not the main usage (it's only for refiltering),
so please ignore this for now. We can improve it later if it turns out to be
much faster.
Thanks,
Kazu
>
> I'll evaluate these, but it would be helpful if you could do some, too.
>
> I think that compression time and ratio will vary with the data, so it'll be
> better to use some real data; I'm looking for it... kernel source or something.
>
> Thanks,
> Kazu
>
* RE: [PATCH 00/11] makedumpfile: Add zstd support for makedumpfile
2021-09-17 2:31 ` HAGIO KAZUHITO(萩尾 一仁)
@ 2021-09-17 7:03 ` HAGIO KAZUHITO(萩尾 一仁)
2021-09-21 9:26 ` Tao Liu
0 siblings, 1 reply; 20+ messages in thread
From: HAGIO KAZUHITO(萩尾 一仁) @ 2021-09-17 7:03 UTC (permalink / raw)
To: Tao Liu; +Cc: kexec
-----Original Message-----
> -----Original Message-----
> > > > > This patch set adds ZSTD compression support to makedumpfile. With ZSTD compression
> > > > > support, the vmcore dump size and time consumption can be better balanced than
> > > > > with zlib/lzo/snappy.
> > > > >
> > > > > How to build:
> > > > >
> > > > > Build using make:
> > > > > $ make USEZSTD=on
> > > > >
> > > > > Performance Comparison:
> > > > >
> > > > > How to measure
> > > > >
> > > > > I took an x86_64 machine which had 4T of memory, and tested compression
> > > > > levels from -3 to 4 for ZSTD, as well as zlib/lzo/snappy compression.
> > > > > All testing was done in makedumpfile single-thread mode.
> > > > >
> > > > > For compression performance testing, in order to avoid the performance
> > > > > bottleneck of disk I/O, I used the following makedumpfile command, taking
> > > > > lzo compression as an example. "--dry-run" will not write any data to disk,
> > > > > "--show-stat" will output the vmcore size after compression, and the time
> > > > > consumption can be collected from the output logs.
> > > > >
> > > > > $ makedumpfile -d 0 -l /proc/kcore vmcore --dry-run --show-stat
> > > > >
> > > > >
> > > > > For decompression performance testing, I only tested the -d 31 case,
> > > > > because the vmcore size of the -d 0 case is too big to fit on the disk; in
> > > > > addition, reading an oversized file from disk would hit the disk I/O
> > > > > bottleneck.
> > > > >
> > > > > I triggered a kernel crash and collected a vmcore. Then I converted the
> > > > > vmcore into each compression format using the following makedumpfile
> > > > > command; for example, to get an lzo-format vmcore:
> > > > >
> > > > > $ makedumpfile -l vmcore vmcore.lzo
> > > > >
> > > > > After all vmcores were ready, I used the following command to perform the
> > > > > decompression; the time consumption can be collected from the logs.
> > > > >
> > > > > $ makedumpfile -F vmcore.lzo --dry-run --show-stat
> > > > >
> > > > >
> > > > > Result charts
> > > > >
> > > > > For compression:
> > > > >
> > > > > makedumpfile -d31 | makedumpfile -d0
> > > > > Compression time vmcore size | Compression time vmcore size
> > > > > zstd-3 325.516446 5285179595 | 8205.452248 51715430204
> > > > > zstd-2 332.069432 5319726604 | 8057.381371 51732062793
> > > > > zstd-1 309.942773 5730516274 | 8138.060786 52136191571
> > > > > zstd0 439.773076 4673859661 | 8873.059963 50993669657
> > > > > zstd1 406.68036 4700959521 | 8259.417132 51036900055
> > > > > zstd2 397.195643 4699263608 | 8230.308291 51030410942
> > > > > zstd3 436.491632 4673306398 | 8803.970103 51043393637
> > > > > zstd4 543.363928 4668419304 | 8991.240244 51058088514
> > > > > zlib 561.217381 8514803195 | 14381.755611 78199283893
> > > > > lzo 248.175953 16696411879 | 6057.528781 90020895741
> > > > > snappy 231.868312 11782236674 | 5290.919894 245661288355
> > > > >
> > > > > For decompression:
> > > > >
> > > > > makedumpfile -d31
> > > > > decompress time vmcore size
> > > > > zstd-3 477.543396 5289373448
> > > > > zstd-2 478.034534 5327454123
> > > > > zstd-1 459.066807 5748037931
> > > > > zstd0 561.687525 4680009013
> > > > > zstd1 547.248917 4706358547
> > > > > zstd2 544.219758 4704780719
> > > > > zstd3 555.726343 4680009013
> > > > > zstd4 558.031721 4675545933
> > > > > zlib 630.965426 8555376229
> > > > > lzo 427.292107 16849457649
> > > > > snappy 446.542806 11841407957
> > > > >
> > > > > Discussion
> > > > >
> > > > > For zstd levels from -3 to 4, compression level 2 (ZSTD_dfast) offers
> > > > > the best balance between time consumption and vmcore dump size.
> > > >
> > > > Do you have a result of a -d 1 compression test? I think -d 0 is not
> > > > practical; I would like to see a -d 1 result for such a large vmcore.
> > > >
> > >
> > > No, I haven't tested the -d 1 case. I have returned the machine which was
> > > used for performance testing; I will borrow it and test on it again, please
> > > wait for a while...
> >
> > Thanks, it would be helpful.
> >
> > >
> > > > And just out of curiosity, what version of zstd are you using?
> > > > When I tested zstd last time, compression level 1 was faster than 2, iirc.
> > > >
> > >
> > > The OS running on the machine is Fedora 34; I used its default zstd package,
> > > whose version is v1.4.9.
> >
> > Thanks for the info.
> >
> > >
> > > > btw, ZSTD_dfast is an enum of ZSTD_strategy, not a compression level?
> > >
> > > Yes, it's an enum of ZSTD_strategy [1].
> >
> > ok, so it'll have to be replaced with "2" to avoid confusion.
> >
> > >
> > > [1]: https://zstd.docsforge.com/dev/api-documentation/#advanced-compression-api-requires-v140
> > >
> > > Thanks,
> > > Tao Liu
> > >
> > > > (no need to update for now, I will review later)
> >
> > The series almost looks good to me (though I will merge those into a patch);
> > just two questions remain:
> > - whether 2 is the best balanced compression level,
As far as I've tested on two machines this time, compression level 1 was faster
than 2. There is no large difference between them, but generally 1 should be
faster than 2 according to the zstd manual:
"The lower the level, the faster the speed (at the cost of compression)."
And as you know, level 0 is unstable; that was also the case when I tested before.
So currently I would prefer 1 rather than 2; what do you think?
Results:
* RHEL8.4 with libzstd-1.4.4 / 64GB filled with QEMU memory/images mainly
# free
total used free shared buff/cache available
Mem: 65599824 21768028 549364 4668 43282432 43078828
Swap: 32964604 4827916 28136688
makedumpfile -d 1 makedumpfile -d 31
copy sec. write bytes copy sec. write bytes
zstd1 220.979689 26456659213 9.014176 558845000
zstd2 227.774602 26402437190 9.078599 560681256
lzo 83.406291 33078995065 3.603778 810219860
* RHEL with libzstd-1.5.0 / 64GB filled with kernel source code mainly
# free
total used free shared buff/cache available
Mem: 65329632 9925536 456020 53086068 54948076 1549088
Swap: 32866300 1607424 31258876
makedumpfile -d 1 makedumpfile -d 31
zstd1 520.844189 15537080819 53.494782 1200754023
zstd2 533.912451 15469575651 53.641510 1199561609
lzo 233.370800 20780821165 23.281374 1740041245
(I used /proc/kcore, so the memory contents were not stable, but I measured
zstd 3 times and picked the middle elapsed time.)
Thanks,
Kazu
* Re: [PATCH 00/11] makedumpfile: Add zstd support for makedumpfile
2021-09-17 7:03 ` HAGIO KAZUHITO(萩尾 一仁)
@ 2021-09-21 9:26 ` Tao Liu
2021-09-22 2:21 ` HAGIO KAZUHITO(萩尾 一仁)
0 siblings, 1 reply; 20+ messages in thread
From: Tao Liu @ 2021-09-21 9:26 UTC (permalink / raw)
To: HAGIO KAZUHITO(萩尾 一仁); +Cc: kexec
Hello Kazu,
Sorry for the late reply.
On Fri, Sep 17, 2021 at 07:03:50AM +0000, HAGIO KAZUHITO(萩尾 一仁) wrote:
> -----Original Message-----
> > -----Original Message-----
> > > > > > This patch set adds ZSTD compression support to makedumpfile. With ZSTD compression
> > > > > > support, the vmcore dump size and time consumption can be better balanced than
> > > > > > with zlib/lzo/snappy.
> > > > > >
> > > > > > How to build:
> > > > > >
> > > > > > Build using make:
> > > > > > $ make USEZSTD=on
> > > > > >
> > > > > > Performance Comparison:
> > > > > >
> > > > > > How to measure
> > > > > >
> > > > > > I took an x86_64 machine which had 4T of memory, and tested compression
> > > > > > levels from -3 to 4 for ZSTD, as well as zlib/lzo/snappy compression.
> > > > > > All testing was done in makedumpfile single-thread mode.
> > > > > >
> > > > > > For compression performance testing, in order to avoid the performance
> > > > > > bottleneck of disk I/O, I used the following makedumpfile command, taking
> > > > > > lzo compression as an example. "--dry-run" will not write any data to disk,
> > > > > > "--show-stat" will output the vmcore size after compression, and the time
> > > > > > consumption can be collected from the output logs.
> > > > > >
> > > > > > $ makedumpfile -d 0 -l /proc/kcore vmcore --dry-run --show-stat
> > > > > >
> > > > > >
> > > > > > For decompression performance testing, I only tested the -d 31 case,
> > > > > > because the vmcore size of the -d 0 case is too big to fit on the disk; in
> > > > > > addition, reading an oversized file from disk would hit the disk I/O
> > > > > > bottleneck.
> > > > > >
> > > > > > I triggered a kernel crash and collected a vmcore. Then I converted the
> > > > > > vmcore into each compression format using the following makedumpfile
> > > > > > command; for example, to get an lzo-format vmcore:
> > > > > >
> > > > > > $ makedumpfile -l vmcore vmcore.lzo
> > > > > >
> > > > > > After all vmcores were ready, I used the following command to perform the
> > > > > > decompression; the time consumption can be collected from the logs.
> > > > > >
> > > > > > $ makedumpfile -F vmcore.lzo --dry-run --show-stat
> > > > > >
> > > > > >
> > > > > > Result charts
> > > > > >
> > > > > > For compression:
> > > > > >
> > > > > > makedumpfile -d31 | makedumpfile -d0
> > > > > > Compression time vmcore size | Compression time vmcore size
> > > > > > zstd-3 325.516446 5285179595 | 8205.452248 51715430204
> > > > > > zstd-2 332.069432 5319726604 | 8057.381371 51732062793
> > > > > > zstd-1 309.942773 5730516274 | 8138.060786 52136191571
> > > > > > zstd0 439.773076 4673859661 | 8873.059963 50993669657
> > > > > > zstd1 406.68036 4700959521 | 8259.417132 51036900055
> > > > > > zstd2 397.195643 4699263608 | 8230.308291 51030410942
> > > > > > zstd3 436.491632 4673306398 | 8803.970103 51043393637
> > > > > > zstd4 543.363928 4668419304 | 8991.240244 51058088514
> > > > > > zlib 561.217381 8514803195 | 14381.755611 78199283893
> > > > > > lzo 248.175953 16696411879 | 6057.528781 90020895741
> > > > > > snappy 231.868312 11782236674 | 5290.919894 245661288355
> > > > > >
> > > > > > For decompression:
> > > > > >
> > > > > > makedumpfile -d31
> > > > > > decompress time vmcore size
> > > > > > zstd-3 477.543396 5289373448
> > > > > > zstd-2 478.034534 5327454123
> > > > > > zstd-1 459.066807 5748037931
> > > > > > zstd0 561.687525 4680009013
> > > > > > zstd1 547.248917 4706358547
> > > > > > zstd2 544.219758 4704780719
> > > > > > zstd3 555.726343 4680009013
> > > > > > zstd4 558.031721 4675545933
> > > > > > zlib 630.965426 8555376229
> > > > > > lzo 427.292107 16849457649
> > > > > > snappy 446.542806 11841407957
> > > > > >
> > > > > > Discussion
> > > > > >
> > > > > > For zstd levels from -3 to 4, compression level 2 (ZSTD_dfast) offers
> > > > > > the best balance between time consumption and vmcore dump size.
> > > > >
> > > > > Do you have a result of a -d 1 compression test? I think -d 0 is not
> > > > > practical; I would like to see a -d 1 result for such a large vmcore.
> > > > >
> > > >
> > > > No, I haven't tested the -d 1 case. I have returned the machine which was
> > > > used for performance testing; I will borrow it and test on it again, please
> > > > wait for a while...
> > >
> > > Thanks, it would be helpful.
> > >
> > > >
> > > > > And just out of curiosity, what version of zstd are you using?
> > > > > When I tested zstd last time, compression level 1 was faster than 2, iirc.
> > > > >
> > > >
> > > > The OS running on the machine is Fedora 34; I used its default zstd package,
> > > > whose version is v1.4.9.
> > >
> > > Thanks for the info.
> > >
> > > >
> > > > > btw, ZSTD_dfast is an enum of ZSTD_strategy, not a compression level?
> > > >
> > > > Yes, it's an enum of ZSTD_strategy [1].
> > >
> > > ok, so it'll have to be replaced with "2" to avoid confusion.
> > >
> > > >
> > > > [1]: https://zstd.docsforge.com/dev/api-documentation/#advanced-compression-api-requires-v140
> > > >
> > > > Thanks,
> > > > Tao Liu
> > > >
> > > > > (no need to update for now, I will review later)
> > >
> > > The series almost looks good to me (though I will merge those into a patch);
> > > just two questions remain:
> > > - whether 2 is the best balanced compression level,
>
> As far as I've tested on two machines this time, compression level 1 was faster
> than 2. There is no large difference between them, but generally 1 should be
> faster than 2 according to the zstd manual:
> "The lower the level, the faster the speed (at the cost of compression)."
> And as you know, level 0 is unstable, that was the same when I tested before.
>
> So currently I would prefer 1 rather than 2, what do you think?
As mentioned before, I have run the -d 1 compression measurement on
the same x86_64 machine with 4T of memory:
compression time | vmcore size
zstd-3 4620.795194 31720632985
zstd-2 4545.636437 31716847503
zstd-1 4516.076298 32113300399
zstd0 4663.17618 30967496299
zstd1 4618.386313 31010305809
zstd2 4633.535771 31005073344
zstd3 4673.240663 30967855841
zstd4 4771.1416 30965914853
lzo 4801.958368 34920417584
zlib 4442.257105 43482765168
snappy 4433.957005 38594790371
As for decompression, I didn't get a meaningful value, because the vmcore sizes
were too large and most of the time was spent on disk I/O; thus the
decompression time measurement didn't show an obvious difference.
I agree that compression levels 1 and 2 don't differ much. I'm OK with
your preference.
>
> Results:
> * RHEL8.4 with libzstd-1.4.4 / 64GB filled with QEMU memory/images mainly
> # free
> total used free shared buff/cache available
> Mem: 65599824 21768028 549364 4668 43282432 43078828
> Swap: 32964604 4827916 28136688
>
> makedumpfile -d 1 makedumpfile -d 31
> copy sec. write bytes copy sec. write bytes
> zstd1 220.979689 26456659213 9.014176 558845000
> zstd2 227.774602 26402437190 9.078599 560681256
> lzo 83.406291 33078995065 3.603778 810219860
>
> * RHEL with libzstd-1.5.0 / 64GB filled with kernel source code mainly
> # free
> total used free shared buff/cache available
> Mem: 65329632 9925536 456020 53086068 54948076 1549088
> Swap: 32866300 1607424 31258876
>
> makedumpfile -d 1 makedumpfile -d 31
> zstd1 520.844189 15537080819 53.494782 1200754023
> zstd2 533.912451 15469575651 53.641510 1199561609
> lzo 233.370800 20780821165 23.281374 1740041245
>
> (Used /proc/kcore, so not stable memory, but measured zstd 3 times and
> picked the middle elapsed time.)
>
Thanks for sharing the results. Just out of curiosity, could you share your way
of testing as well? I can improve mine for later use.
Thanks,
Tao Liu
> Thanks,
> Kazu
>
>
_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec
^ permalink raw reply [flat|nested] 20+ messages in thread
* RE: [PATCH 00/11] makedumpfile: Add zstd support for makedumpfile
2021-09-21 9:26 ` Tao Liu
@ 2021-09-22 2:21 ` HAGIO KAZUHITO(萩尾 一仁)
2021-09-22 8:16 ` HAGIO KAZUHITO(萩尾 一仁)
0 siblings, 1 reply; 20+ messages in thread
From: HAGIO KAZUHITO(萩尾 一仁) @ 2021-09-22 2:21 UTC (permalink / raw)
To: Tao Liu; +Cc: kexec
Hi Tao Liu,
-----Original Message-----
> Hello Kazu,
>
> Sorry for the late reply.
>
> On Fri, Sep 17, 2021 at 07:03:50AM +0000, HAGIO KAZUHITO(萩尾 一仁) wrote:
> > -----Original Message-----
> > > -----Original Message-----
> > > > > > > This patch set adds ZSTD compression support to makedumpfile. With ZSTD compression
> > > > > > > support, the vmcore dump size and time consumption can have a better balance than
> > > > > > > zlib/lzo/snappy.
> > > > > > >
> > > > > > > How to build:
> > > > > > >
> > > > > > > Build using make:
> > > > > > > $ make USEZSTD=on
> > > > > > >
> > > > > > > Performance Comparison:
> > > > > > >
> > > > > > > How to measure
> > > > > > >
> > > > > > > I used an x86_64 machine which had 4T of memory, and tested ZSTD compression
> > > > > > > levels ranging from -3 to 4, as well as zlib/lzo/snappy compression.
> > > > > > > All testing was done in makedumpfile single-thread mode.
> > > > > > >
> > > > > > > As for compression performance testing, in order to avoid the performance
> > > > > > > bottleneck of disk I/O, I used the following makedumpfile command, taking
> > > > > > > lzo compression as an example. "--dry-run" will not write any data to disk,
> > > > > > > "--show-stat" will output the vmcore size after compression, and the time
> > > > > > > consumption can be collected from the output logs.
> > > > > > >
> > > > > > > $ makedumpfile -d 0 -l /proc/kcore vmcore --dry-run --show-stat
> > > > > > >
> > > > > > >
> > > > > > > As for decompression performance testing, I only tested the (-d 31) case,
> > > > > > > because the vmcore size of the (-d 0) case is too big to fit on the disk; in
> > > > > > > addition, reading an oversized file from disk would hit the disk I/O
> > > > > > > bottleneck.
> > > > > > >
> > > > > > > I triggered a kernel crash and collected a vmcore. Then I converted the
> > > > > > > vmcore into a specific compression format using the following makedumpfile
> > > > > > > command, which produces an lzo-format vmcore, for example:
> > > > > > >
> > > > > > > $ makedumpfile -l vmcore vmcore.lzo
> > > > > > >
> > > > > > > After all vmcores were ready, I used the following command to perform the
> > > > > > > decompression; the time consumption can be collected from the logs.
> > > > > > >
> > > > > > > $ makedumpfile -F vmcore.lzo --dry-run --show-stat
> > > > > > >
> > > > > > >
> > > > > > > Result charts
> > > > > > >
> > > > > > > For compression:
> > > > > > >
> > > > > > > makedumpfile -d31 | makedumpfile -d0
> > > > > > > Compression time vmcore size | Compression time vmcore size
> > > > > > > zstd-3 325.516446 5285179595 | 8205.452248 51715430204
> > > > > > > zstd-2 332.069432 5319726604 | 8057.381371 51732062793
> > > > > > > zstd-1 309.942773 5730516274 | 8138.060786 52136191571
> > > > > > > zstd0 439.773076 4673859661 | 8873.059963 50993669657
> > > > > > > zstd1 406.68036 4700959521 | 8259.417132 51036900055
> > > > > > > zstd2 397.195643 4699263608 | 8230.308291 51030410942
> > > > > > > zstd3 436.491632 4673306398 | 8803.970103 51043393637
> > > > > > > zstd4 543.363928 4668419304 | 8991.240244 51058088514
> > > > > > > zlib 561.217381 8514803195 | 14381.755611 78199283893
> > > > > > > lzo 248.175953 16696411879 | 6057.528781 90020895741
> > > > > > > snappy 231.868312 11782236674 | 5290.919894 245661288355
> > > > > > >
> > > > > > > For decompression:
> > > > > > >
> > > > > > > makedumpfile -d31
> > > > > > > decompress time vmcore size
> > > > > > > zstd-3 477.543396 5289373448
> > > > > > > zstd-2 478.034534 5327454123
> > > > > > > zstd-1 459.066807 5748037931
> > > > > > > zstd0 561.687525 4680009013
> > > > > > > zstd1 547.248917 4706358547
> > > > > > > zstd2 544.219758 4704780719
> > > > > > > zstd3 555.726343 4680009013
> > > > > > > zstd4 558.031721 4675545933
> > > > > > > zlib 630.965426 8555376229
> > > > > > > lzo 427.292107 16849457649
> > > > > > > snappy 446.542806 11841407957
> > > > > > >
> > > > > > > Discussion
> > > > > > >
> > > > > > > For zstd levels ranging from -3 to 4, compression level 2 (ZSTD_dfast) gives
> > > > > > > the best balance of time consumption and vmcore dump size.
> > > > > >
> > > > > > Do you have a result of -d 1 compression test? I think -d 0 is not
> > > > > > practical, I would like to see a -d 1 result of such a large vmcore.
> > > > > >
> > > > >
> > > > > No, I haven't tested the -d 1 case. I have returned the machine which was used
> > > > > for performance testing; I will borrow it and test on it again, please wait for
> > > > > a while...
> > > >
> > > > Thanks, it would be helpful.
> > > >
> > > > >
> > > > > > And just out of curiosity, what version of zstd are you using?
> > > > > > When I tested zstd last time, compression level 1 was faster than 2, iirc.
> > > > > >
> > > > >
> > > > > The OS running on the machine is Fedora 34; I used its default zstd package, whose
> > > > > version is v1.4.9.
> > > >
> > > > Thanks for the info.
> > > >
> > > > >
> > > > > > btw, ZSTD_dfast is an enum of ZSTD_strategy, not for compression level?
> > > > >
> > > > > Yes, it's an enum of ZSTD_strategy [1].
> > > >
> > > > ok, so it'll have to be replaced with "2" to avoid confusion.
> > > >
> > > > >
> > > > > [1]: https://zstd.docsforge.com/dev/api-documentation/#advanced-compression-api-requires-v140
> > > > >
> > > > > Thanks,
> > > > > Tao Liu
> > > > >
> > > > > > (no need to update for now, I will review later)
> > > >
> > > > The series almost looks good to me (though I will merge those into a patch);
> > > > just two questions:
> > > > - whether 2 is the best balanced compression level,
> >
> > As far as I've tested on two machines this time, compression level 1 was faster
> > than 2. There is no large difference between them, but generally 1 should be
> > faster than 2 according to the zstd manual:
> > "The lower the level, the faster the speed (at the cost of compression)."
> > And as you know, level 0 is unstable, that was the same when I tested before.
> >
> > So currently I would prefer 1 rather than 2, what do you think?
>
> As we mentioned before, I have run the -d 1 compression measurement on
> the same x86_64 machine with 4T of memory:
>
> compression time | vmcore size
> zstd-3 4620.795194 31720632985
> zstd-2 4545.636437 31716847503
> zstd-1 4516.076298 32113300399
> zstd0 4663.17618 30967496299
> zstd1 4618.386313 31010305809
> zstd2 4633.535771 31005073344
> zstd3 4673.240663 30967855841
> zstd4 4771.1416 30965914853
> lzo 4801.958368 34920417584
> zlib 4442.257105 43482765168
> snappy 4433.957005 38594790371
>
> As for the decompression, I didn't get a meaningful value, because the vmcore sizes
> were too large and most of the time was spent on disk I/O, so the decompression time
> measurements didn't show an obvious difference.
>
> I agree that compression levels 1 and 2 don't have a big difference. I'm OK with
> your preference.
OK, thank you for the testing! As we have tested, compression levels 1
and 2 often change places with each other, so let's choose 1 for general speed.
I will merge the series with some adjustments, please wait for a while.
>
> >
> > Results:
> > * RHEL8.4 with libzstd-1.4.4 / 64GB filled with QEMU memory/images mainly
> > # free
> > total used free shared buff/cache available
> > Mem: 65599824 21768028 549364 4668 43282432 43078828
> > Swap: 32964604 4827916 28136688
> >
> > makedumpfile -d 1 makedumpfile -d 31
> > copy sec. write bytes copy sec. write bytes
> > zstd1 220.979689 26456659213 9.014176 558845000
> > zstd2 227.774602 26402437190 9.078599 560681256
> > lzo 83.406291 33078995065 3.603778 810219860
> >
> > * RHEL with libzstd-1.5.0 / 64GB filled with kernel source code mainly
> > # free
> > total used free shared buff/cache available
> > Mem: 65329632 9925536 456020 53086068 54948076 1549088
> > Swap: 32866300 1607424 31258876
> >
> > makedumpfile -d 1 makedumpfile -d 31
> > zstd1 520.844189 15537080819 53.494782 1200754023
> > zstd2 533.912451 15469575651 53.641510 1199561609
> > lzo 233.370800 20780821165 23.281374 1740041245
> >
> > (Used /proc/kcore, so not stable memory, but measured zstd 3 times and
> > picked the middle elapsed time.)
> >
>
> Thanks for sharing the results. Just out of curiosity, could you share your way
> of testing as well? I can improve mine for later use.
I did these:
(1) fill memory
- the former case: on a KVM host running for over 100 days with 8 guests,
$ cat guest.img > /dev/null for some images until swap is used.
- the latter case: tar xf linux-5.14.tar.xz and copy multiple times
into /dev/shm until swap is used.
(2) run makedumpfile for /proc/kcore
I also used the command you used, and picked up the elapsed time of
"Copying data" to measure only the compression time, and the size from
"Write bytes".
# makedumpfile -z -d 1 --dry-run --show-stat /proc/kcore vmcore
...
...[Copying data ] : 9.463352 seconds
...
Write bytes : 535151008
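(The two fields read out of that log could also be extracted mechanically; here is a small, hypothetical helper — the regexes assume the exact "--show-stat" output format quoted above:)

```python
import re

def parse_stats(log: str):
    """Extract the 'Copying data' elapsed seconds and the 'Write bytes'
    count from makedumpfile --show-stat output; None if a field is absent."""
    copy = re.search(r"\[Copying data\s*\]\s*:\s*([0-9.]+)\s*seconds", log)
    wrote = re.search(r"Write bytes\s*:\s*(\d+)", log)
    return (
        float(copy.group(1)) if copy else None,
        int(wrote.group(1)) if wrote else None,
    )

if __name__ == "__main__":
    sample = (
        "STEP [Copying data       ] : 9.463352 seconds\n"
        "Write bytes     : 535151008\n"
    )
    print(parse_stats(sample))  # (9.463352, 535151008)
```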
Thanks,
Kazu
* RE: [PATCH 00/11] makedumpfile: Add zstd support for makedumpfile
2021-09-22 2:21 ` HAGIO KAZUHITO(萩尾 一仁)
@ 2021-09-22 8:16 ` HAGIO KAZUHITO(萩尾 一仁)
0 siblings, 0 replies; 20+ messages in thread
From: HAGIO KAZUHITO(萩尾 一仁) @ 2021-09-22 8:16 UTC (permalink / raw)
To: Tao Liu; +Cc: kexec
Hi Tao Liu,
Merged them into a patch and applied:
https://github.com/makedumpfile/makedumpfile/commit/afd0a6db2a0543217f8e46955a1b44b71f7e7ef3
Thanks,
Kazu
> -----Original Message-----
> Hi Tao Liu,
>
> -----Original Message-----
> > Hello Kazu,
> >
> > Sorry for the late reply.
> >
> > On Fri, Sep 17, 2021 at 07:03:50AM +0000, HAGIO KAZUHITO(萩尾 一仁) wrote:
> > > -----Original Message-----
> > > > -----Original Message-----
> > > > > > > > This patch set adds ZSTD compression support to makedumpfile. With ZSTD compression
> > > > > > > > support, the vmcore dump size and time consumption can have a better balance than
> > > > > > > > zlib/lzo/snappy.
> > > > > > > >
> > > > > > > > How to build:
> > > > > > > >
> > > > > > > > Build using make:
> > > > > > > > $ make USEZSTD=on
> > > > > > > >
> > > > > > > > Performance Comparison:
> > > > > > > >
> > > > > > > > How to measure
> > > > > > > >
> > > > > > > > I used an x86_64 machine which had 4T of memory, and tested ZSTD compression
> > > > > > > > levels ranging from -3 to 4, as well as zlib/lzo/snappy compression.
> > > > > > > > All testing was done in makedumpfile single-thread mode.
> > > > > > > >
> > > > > > > > As for compression performance testing, in order to avoid the performance
> > > > > > > > bottleneck of disk I/O, I used the following makedumpfile command, taking
> > > > > > > > lzo compression as an example. "--dry-run" will not write any data to disk,
> > > > > > > > "--show-stat" will output the vmcore size after compression, and the time
> > > > > > > > consumption can be collected from the output logs.
> > > > > > > >
> > > > > > > > $ makedumpfile -d 0 -l /proc/kcore vmcore --dry-run --show-stat
> > > > > > > >
> > > > > > > >
> > > > > > > > As for decompression performance testing, I only tested the (-d 31) case,
> > > > > > > > because the vmcore size of the (-d 0) case is too big to fit on the disk; in
> > > > > > > > addition, reading an oversized file from disk would hit the disk I/O
> > > > > > > > bottleneck.
> > > > > > > >
> > > > > > > > I triggered a kernel crash and collected a vmcore. Then I converted the
> > > > > > > > vmcore into a specific compression format using the following makedumpfile
> > > > > > > > command, which produces an lzo-format vmcore, for example:
> > > > > > > >
> > > > > > > > $ makedumpfile -l vmcore vmcore.lzo
> > > > > > > >
> > > > > > > > After all vmcores were ready, I used the following command to perform the
> > > > > > > > decompression; the time consumption can be collected from the logs.
> > > > > > > >
> > > > > > > > $ makedumpfile -F vmcore.lzo --dry-run --show-stat
> > > > > > > >
> > > > > > > >
> > > > > > > > Result charts
> > > > > > > >
> > > > > > > > For compression:
> > > > > > > >
> > > > > > > > makedumpfile -d31 | makedumpfile -d0
> > > > > > > > Compression time vmcore size | Compression time vmcore size
> > > > > > > > zstd-3 325.516446 5285179595 | 8205.452248 51715430204
> > > > > > > > zstd-2 332.069432 5319726604 | 8057.381371 51732062793
> > > > > > > > zstd-1 309.942773 5730516274 | 8138.060786 52136191571
> > > > > > > > zstd0 439.773076 4673859661 | 8873.059963 50993669657
> > > > > > > > zstd1 406.68036 4700959521 | 8259.417132 51036900055
> > > > > > > > zstd2 397.195643 4699263608 | 8230.308291 51030410942
> > > > > > > > zstd3 436.491632 4673306398 | 8803.970103 51043393637
> > > > > > > > zstd4 543.363928 4668419304 | 8991.240244 51058088514
> > > > > > > > zlib 561.217381 8514803195 | 14381.755611 78199283893
> > > > > > > > lzo 248.175953 16696411879 | 6057.528781 90020895741
> > > > > > > > snappy 231.868312 11782236674 | 5290.919894 245661288355
> > > > > > > >
> > > > > > > > For decompression:
> > > > > > > >
> > > > > > > > makedumpfile -d31
> > > > > > > > decompress time vmcore size
> > > > > > > > zstd-3 477.543396 5289373448
> > > > > > > > zstd-2 478.034534 5327454123
> > > > > > > > zstd-1 459.066807 5748037931
> > > > > > > > zstd0 561.687525 4680009013
> > > > > > > > zstd1 547.248917 4706358547
> > > > > > > > zstd2 544.219758 4704780719
> > > > > > > > zstd3 555.726343 4680009013
> > > > > > > > zstd4 558.031721 4675545933
> > > > > > > > zlib 630.965426 8555376229
> > > > > > > > lzo 427.292107 16849457649
> > > > > > > > snappy 446.542806 11841407957
> > > > > > > >
> > > > > > > > Discussion
> > > > > > > >
> > > > > > > > For zstd levels ranging from -3 to 4, compression level 2 (ZSTD_dfast) gives
> > > > > > > > the best balance of time consumption and vmcore dump size.
> > > > > > >
> > > > > > > Do you have a result of -d 1 compression test? I think -d 0 is not
> > > > > > > practical, I would like to see a -d 1 result of such a large vmcore.
> > > > > > >
> > > > > >
> > > > > > No, I haven't tested the -d 1 case. I have returned the machine which was used
> > > > > > for performance testing; I will borrow it and test on it again, please wait for
> > > > > > a while...
> > > > >
> > > > > Thanks, it would be helpful.
> > > > >
> > > > > >
> > > > > > > And just out of curiosity, what version of zstd are you using?
> > > > > > > When I tested zstd last time, compression level 1 was faster than 2, iirc.
> > > > > > >
> > > > > >
> > > > > > The OS running on the machine is Fedora 34; I used its default zstd package, whose
> > > > > > version is v1.4.9.
> > > > >
> > > > > Thanks for the info.
> > > > >
> > > > > >
> > > > > > > btw, ZSTD_dfast is an enum of ZSTD_strategy, not for compression level?
> > > > > >
> > > > > > Yes, it's an enum of ZSTD_strategy [1].
> > > > >
> > > > > ok, so it'll have to be replaced with "2" to avoid confusion.
> > > > >
> > > > > >
> > > > > > [1]: https://zstd.docsforge.com/dev/api-documentation/#advanced-compression-api-requires-v140
> > > > > >
> > > > > > Thanks,
> > > > > > Tao Liu
> > > > > >
> > > > > > > (no need to update for now, I will review later)
> > > > >
> > > > > The series almost looks good to me (though I will merge those into a patch);
> > > > > just two questions:
> > > > > - whether 2 is the best balanced compression level,
> > >
> > > As far as I've tested on two machines this time, compression level 1 was faster
> > > than 2. There is no large difference between them, but generally 1 should be
> > > faster than 2 according to the zstd manual:
> > > "The lower the level, the faster the speed (at the cost of compression)."
> > > And as you know, level 0 is unstable, that was the same when I tested before.
> > >
> > > So currently I would prefer 1 rather than 2, what do you think?
> >
> > As we mentioned before, I have run the -d 1 compression measurement on
> > the same x86_64 machine with 4T of memory:
> >
> > compression time | vmcore size
> > zstd-3 4620.795194 31720632985
> > zstd-2 4545.636437 31716847503
> > zstd-1 4516.076298 32113300399
> > zstd0 4663.17618 30967496299
> > zstd1 4618.386313 31010305809
> > zstd2 4633.535771 31005073344
> > zstd3 4673.240663 30967855841
> > zstd4 4771.1416 30965914853
> > lzo 4801.958368 34920417584
> > zlib 4442.257105 43482765168
> > snappy 4433.957005 38594790371
> >
> > As for the decompression, I didn't get a meaningful value, because the vmcore sizes
> > were too large and most of the time was spent on disk I/O, so the decompression time
> > measurements didn't show an obvious difference.
> >
> > I agree that compression levels 1 and 2 don't have a big difference. I'm OK with
> > your preference.
>
> OK, thank you for the testing! As we have tested, compression levels 1
> and 2 often change places with each other, so let's choose 1 for general speed.
>
> I will merge the series with some adjustments, please wait for a while.
>
> >
> > >
> > > Results:
> > > * RHEL8.4 with libzstd-1.4.4 / 64GB filled with QEMU memory/images mainly
> > > # free
> > > total used free shared buff/cache available
> > > Mem: 65599824 21768028 549364 4668 43282432 43078828
> > > Swap: 32964604 4827916 28136688
> > >
> > > makedumpfile -d 1 makedumpfile -d 31
> > > copy sec. write bytes copy sec. write bytes
> > > zstd1 220.979689 26456659213 9.014176 558845000
> > > zstd2 227.774602 26402437190 9.078599 560681256
> > > lzo 83.406291 33078995065 3.603778 810219860
> > >
> > > * RHEL with libzstd-1.5.0 / 64GB filled with kernel source code mainly
> > > # free
> > > total used free shared buff/cache available
> > > Mem: 65329632 9925536 456020 53086068 54948076 1549088
> > > Swap: 32866300 1607424 31258876
> > >
> > > makedumpfile -d 1 makedumpfile -d 31
> > > zstd1 520.844189 15537080819 53.494782 1200754023
> > > zstd2 533.912451 15469575651 53.641510 1199561609
> > > lzo 233.370800 20780821165 23.281374 1740041245
> > >
> > > (Used /proc/kcore, so not stable memory, but measured zstd 3 times and
> > > picked the middle elapsed time.)
> > >
> >
> > Thanks for sharing the results. Just out of curiosity, could you share your way
> > of testing as well? I can improve mine for later use.
>
> I did these:
> (1) fill memory
> - the former case: on a KVM host running for over 100 days with 8 guests,
> $ cat guest.img > /dev/null for some images until swap is used.
> - the latter case: tar xf linux-5.14.tar.xz and copy multiple times
> into /dev/shm until swap is used.
>
> (2) run makedumpfile for /proc/kcore
> I also used the command you used, and picked up the elapsed time of
> "Copying data" to measure only the compression time, and the size from
> "Write bytes".
>
> # makedumpfile -z -d 1 --dry-run --show-stat /proc/kcore vmcore
> ...
> ...[Copying data ] : 9.463352 seconds
> ...
> Write bytes : 535151008
>
> Thanks,
> Kazu
>
end of thread, other threads:[~2021-09-22 8:16 UTC | newest]
Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-09-10 10:33 [PATCH 00/11] makedumpfile: Add zstd support for makedumpfile Tao Liu
2021-09-10 10:33 ` [PATCH 01/11] makedumpfile: Add dump header for zstd Tao Liu
2021-09-10 10:33 ` [PATCH 02/11] makedumpfile: Add command-line processing " Tao Liu
2021-09-10 10:33 ` [PATCH 03/11] makedumpfile: Add zstd build support Tao Liu
2021-09-10 10:33 ` [PATCH 04/11] makedumpfile: Notify zstd unsupporting when disabled Tao Liu
2021-09-10 10:33 ` [PATCH 05/11] makedumpfile: Add single thread zstd compression processing Tao Liu
2021-09-10 10:33 ` [PATCH 06/11] makedumpfile: Add parallel threads " Tao Liu
2021-09-10 10:33 ` [PATCH 07/11] makedumpfile: Add single thread zstd uncompression processing Tao Liu
2021-09-10 10:33 ` [PATCH 08/11] makedumpfile: Add parallel threads " Tao Liu
2021-09-10 10:33 ` [PATCH 09/11] makedumpfile: Add zstd help message Tao Liu
2021-09-10 10:33 ` [PATCH 10/11] makedumpfile: Add zstd manual description Tao Liu
2021-09-10 10:33 ` [PATCH 11/11] makedumpfile: Add zstd README description Tao Liu
2021-09-14 7:04 ` [PATCH 00/11] makedumpfile: Add zstd support for makedumpfile HAGIO KAZUHITO(萩尾 一仁)
2021-09-14 8:33 ` Tao Liu
2021-09-17 1:34 ` HAGIO KAZUHITO(萩尾 一仁)
2021-09-17 2:31 ` HAGIO KAZUHITO(萩尾 一仁)
2021-09-17 7:03 ` HAGIO KAZUHITO(萩尾 一仁)
2021-09-21 9:26 ` Tao Liu
2021-09-22 2:21 ` HAGIO KAZUHITO(萩尾 一仁)
2021-09-22 8:16 ` HAGIO KAZUHITO(萩尾 一仁)