From mboxrd@z Thu Jan  1 00:00:00 1970
Subject: Recent changes (master)
From: Jens Axboe
X-Mailer: mail (GNU Mailutils 3.7)
Message-Id: <20211122130001.AC0C61BC0156@kernel.dk>
Date: Mon, 22 Nov 2021 06:00:01 -0700 (MST)
Precedence: bulk
X-Mailing-List: fio@vger.kernel.org

The following changes since commit 9f51d89c683d70cd8ab23ba09ec6e628a548af5a:

  Sync io_uring header with the kernel (2021-11-20 07:31:20 -0700)

are available in the Git repository at:

  git://git.kernel.dk/fio.git master

for you to fetch changes up to 1d08bfb018e600cc47f122fb78c02bf74b84dee8:

  t/dedupe: style fixups (2021-11-21 06:51:11 -0700)

----------------------------------------------------------------
Bar David (2):
      Mixed dedup and compression
      fio-dedup: adjusted the binary to support compression

Jens Axboe (3):
      Merge branch 'dedupe_and_compression' of https://github.com/bardavid/fio
      t/io_uring: fix 32-bit compile warnings
      t/dedupe: style fixups

 DEDUPE-TODO  |   3 --
 dedupe.c     |  12 ++++-
 io_u.c       |  29 ++++++-----
 t/dedupe.c   | 167 +++++++++++++++++++++++++++++++++++++++++++++++------------
 t/io_uring.c |   4 +-
 5 files changed, 161 insertions(+), 54 deletions(-)

---

Diff of recent changes:

diff --git a/DEDUPE-TODO b/DEDUPE-TODO
index 1f3ee9da..4b0bfd1d 100644
--- a/DEDUPE-TODO
+++ b/DEDUPE-TODO
@@ -1,6 +1,3 @@
-- Mixed buffers of dedupe-able and compressible data.
-  Major usecase in performance benchmarking of storage subsystems.
-
 - Shifted dedup-able data.
   Allow for dedup buffer generation to shift contents by random number of
   sectors (fill the gaps with uncompressible data). Some storage
diff --git a/dedupe.c b/dedupe.c
index 043a376c..fd116dfb 100644
--- a/dedupe.c
+++ b/dedupe.c
@@ -2,12 +2,14 @@
 
 int init_dedupe_working_set_seeds(struct thread_data *td)
 {
-	unsigned long long i;
+	unsigned long long i, j, num_seed_advancements;
 	struct frand_state dedupe_working_set_state = {0};
 
 	if (!td->o.dedupe_percentage || !(td->o.dedupe_mode == DEDUPE_MODE_WORKING_SET))
 		return 0;
 
+	num_seed_advancements = td->o.min_bs[DDIR_WRITE] /
+		min_not_zero(td->o.min_bs[DDIR_WRITE], (unsigned long long) td->o.compress_chunk);
 	/*
 	 * The dedupe working set keeps seeds of unique data (generated by buf_state).
 	 * Dedupe-ed pages will be generated using those seeds.
@@ -21,7 +23,13 @@ int init_dedupe_working_set_seeds(struct thread_data *td)
 	frand_copy(&dedupe_working_set_state, &td->buf_state);
 	for (i = 0; i < td->num_unique_pages; i++) {
 		frand_copy(&td->dedupe_working_set_states[i], &dedupe_working_set_state);
-		__get_next_seed(&dedupe_working_set_state);
+		/*
+		 * When compression is used the seed is advanced multiple times to
+		 * generate the buffer. We want to regenerate the same buffer when
+		 * deduping against this page
+		 */
+		for (j = 0; j < num_seed_advancements; j++)
+			__get_next_seed(&dedupe_working_set_state);
 	}
 
 	return 0;
diff --git a/io_u.c b/io_u.c
index 586a4bef..3c72d63d 100644
--- a/io_u.c
+++ b/io_u.c
@@ -2230,27 +2230,30 @@ void fill_io_buffer(struct thread_data *td, void *buf, unsigned long long min_wr
 	if (o->compress_percentage || o->dedupe_percentage) {
 		unsigned int perc = td->o.compress_percentage;
-		struct frand_state *rs;
+		struct frand_state *rs = NULL;
 		unsigned long long left = max_bs;
 		unsigned long long this_write;
 
 		do {
-			rs = get_buf_state(td);
+			/*
+			 * Buffers are either entirely dedupe-able or not.
+			 * If we choose to dedup, the buffer should undergo
+			 * the same manipulation as the original write. Which
+			 * means we should retrack the steps we took for
+			 * compression as well.
+			 */
+			if (!rs)
+				rs = get_buf_state(td);
 
 			min_write = min(min_write, left);
 
-			if (perc) {
-				this_write = min_not_zero(min_write,
-					(unsigned long long) td->o.compress_chunk);
+			this_write = min_not_zero(min_write,
+				(unsigned long long) td->o.compress_chunk);
 
-				fill_random_buf_percentage(rs, buf, perc,
-					this_write, this_write,
-					o->buffer_pattern,
-					o->buffer_pattern_bytes);
-			} else {
-				fill_random_buf(rs, buf, min_write);
-				this_write = min_write;
-			}
+			fill_random_buf_percentage(rs, buf, perc,
+						this_write, this_write,
+						o->buffer_pattern,
+						o->buffer_pattern_bytes);
 
 			buf += this_write;
 			left -= this_write;
diff --git a/t/dedupe.c b/t/dedupe.c
index 8b659c76..109ea1af 100644
--- a/t/dedupe.c
+++ b/t/dedupe.c
@@ -24,19 +24,25 @@
 #include "../lib/bloom.h"
 #include "debug.h"
 
+#include "zlib.h"
+
+struct zlib_ctrl {
+	z_stream stream;
+	unsigned char *buf_in;
+	unsigned char *buf_out;
+};
+
 struct worker_thread {
+	struct zlib_ctrl zc;
 	pthread_t thread;
-
-	volatile int done;
-
-	int fd;
 	uint64_t cur_offset;
 	uint64_t size;
-
+	unsigned long long unique_capacity;
 	unsigned long items;
 	unsigned long dupes;
 	int err;
+	int fd;
+	volatile int done;
 };
 
 struct extent {
@@ -68,6 +74,7 @@ static unsigned int odirect;
 static unsigned int collision_check;
 static unsigned int print_progress = 1;
 static unsigned int use_bloom = 1;
+static unsigned int compression = 0;
 
 static uint64_t total_size;
 static uint64_t cur_offset;
@@ -87,8 +94,9 @@ static uint64_t get_size(struct fio_file *f, struct stat *sb)
 			return 0;
 		}
 		ret = bytes;
-	} else
+	} else {
 		ret = sb->st_size;
+	}
 
 	return (ret & ~((uint64_t)blocksize - 1));
 }
@@ -120,9 +128,9 @@ static int __read_block(int fd, void *buf, off_t offset, size_t count)
 	if (ret < 0) {
 		perror("pread");
 		return 1;
-	} else if (!ret)
+	} else if (!ret) {
 		return 1;
-	else if (ret != count) {
+	} else if (ret != count) {
 		log_err("dedupe: short read on block\n");
 		return 1;
 	}
@@ -135,6 +143,34 @@ static int read_block(int fd, void *buf, off_t offset)
 	return __read_block(fd, buf, offset, blocksize);
 }
 
+static void account_unique_capacity(uint64_t offset, uint64_t *unique_capacity,
+				    struct zlib_ctrl *zc)
+{
+	z_stream *stream = &zc->stream;
+	unsigned int compressed_len;
+	int ret;
+
+	if (read_block(file.fd, zc->buf_in, offset))
+		return;
+
+	stream->next_in = zc->buf_in;
+	stream->avail_in = blocksize;
+	stream->avail_out = deflateBound(stream, blocksize);
+	stream->next_out = zc->buf_out;
+
+	ret = deflate(stream, Z_FINISH);
+	assert(ret != Z_STREAM_ERROR);
+	compressed_len = blocksize - stream->avail_out;
+
+	if (dump_output)
+		printf("offset 0x%lx compressed to %d blocksize %d ratio %.2f \n",
+			(unsigned long) offset, compressed_len, blocksize,
+			(float)compressed_len / (float)blocksize);
+
+	*unique_capacity += compressed_len;
+	deflateReset(stream);
+}
+
 static void add_item(struct chunk *c, struct item *i)
 {
 	/*
@@ -182,13 +218,15 @@ static struct chunk *alloc_chunk(void)
 	if (collision_check || dump_output) {
 		c = malloc(sizeof(struct chunk) + sizeof(struct flist_head));
 		INIT_FLIST_HEAD(&c->extent_list[0]);
-	} else
+	} else {
 		c = malloc(sizeof(struct chunk));
+	}
 
 	return c;
 }
 
-static void insert_chunk(struct item *i)
+static void insert_chunk(struct item *i, uint64_t *unique_capacity,
+			 struct zlib_ctrl *zc)
 {
 	struct fio_rb_node **p, *parent;
 	struct chunk *c;
@@ -201,11 +239,11 @@ static void insert_chunk(struct item *i)
 		c = rb_entry(parent, struct chunk, rb_node);
 		diff = memcmp(i->hash, c->hash, sizeof(i->hash));
-		if (diff < 0)
+		if (diff < 0) {
 			p = &(*p)->rb_left;
-		else if (diff > 0)
+		} else if (diff > 0) {
 			p = &(*p)->rb_right;
-		else {
+		} else {
 			int ret;
 
 			if (!collision_check)
@@ -228,12 +266,15 @@ static void insert_chunk(struct item *i)
 	memcpy(c->hash, i->hash, sizeof(i->hash));
 	rb_link_node(&c->rb_node, parent, p);
 	rb_insert_color(&c->rb_node, &rb_root);
+	if (compression)
+		account_unique_capacity(i->offset, unique_capacity, zc);
 add:
 	add_item(c, i);
 }
 
 static void insert_chunks(struct item *items, unsigned int nitems,
-			  uint64_t *ndupes)
+			  uint64_t *ndupes, uint64_t *unique_capacity,
+			  struct zlib_ctrl *zc)
 {
 	int i;
 
@@ -248,7 +289,7 @@ static void insert_chunks(struct item *items, unsigned int nitems,
 			r = bloom_set(bloom, items[i].hash, s);
 			*ndupes += r;
 		} else
-			insert_chunk(&items[i]);
+			insert_chunk(&items[i], unique_capacity, zc);
 	}
 
 	fio_sem_up(rb_lock);
@@ -277,11 +318,13 @@ static int do_work(struct worker_thread *thread, void *buf)
 	off_t offset;
 	int nitems = 0;
 	uint64_t ndupes = 0;
+	uint64_t unique_capacity = 0;
 	struct item *items;
 
 	offset = thread->cur_offset;
 
-	nblocks = read_blocks(thread->fd, buf, offset, min(thread->size, (uint64_t)chunk_size));
+	nblocks = read_blocks(thread->fd, buf, offset,
+				min(thread->size, (uint64_t) chunk_size));
 	if (!nblocks)
 		return 1;
 
@@ -296,20 +339,39 @@ static int do_work(struct worker_thread *thread, void *buf)
 		nitems++;
 	}
 
-	insert_chunks(items, nitems, &ndupes);
+	insert_chunks(items, nitems, &ndupes, &unique_capacity, &thread->zc);
 
 	free(items);
 	thread->items += nitems;
 	thread->dupes += ndupes;
+	thread->unique_capacity += unique_capacity;
 	return 0;
 }
 
+static void thread_init_zlib_control(struct worker_thread *thread)
+{
+	size_t sz;
+
+	z_stream *stream = &thread->zc.stream;
+	stream->zalloc = Z_NULL;
+	stream->zfree = Z_NULL;
+	stream->opaque = Z_NULL;
+
+	if (deflateInit(stream, Z_DEFAULT_COMPRESSION) != Z_OK)
+		return;
+
+	thread->zc.buf_in = fio_memalign(blocksize, blocksize, false);
+	sz = deflateBound(stream, blocksize);
+	thread->zc.buf_out = fio_memalign(blocksize, sz, false);
+}
+
 static void *thread_fn(void *data)
 {
 	struct worker_thread *thread = data;
 	void *buf;
 
 	buf = fio_memalign(blocksize, chunk_size, false);
 
+	thread_init_zlib_control(thread);
+
 	do {
 		if (get_work(&thread->cur_offset, &thread->size)) {
@@ -362,15 +424,17 @@ static void show_progress(struct worker_thread *threads, unsigned long total)
 			printf("%3.2f%% done (%luKiB/sec)\r", perc, this_items);
 			last_nitems = nitems;
 			fio_gettime(&last_tv, NULL);
-		} else
+		} else {
 			printf("%3.2f%% done\r", perc);
+		}
 		fflush(stdout);
 		usleep(250000);
 	};
 }
 
 static int run_dedupe_threads(struct fio_file *f, uint64_t dev_size,
-			      uint64_t *nextents, uint64_t *nchunks)
+			      uint64_t *nextents, uint64_t *nchunks,
+			      uint64_t *unique_capacity)
 {
 	struct worker_thread *threads;
 	unsigned long nitems, total_items;
@@ -398,11 +462,13 @@ static int run_dedupe_threads(struct fio_file *f, uint64_t dev_size,
 	nitems = 0;
 	*nextents = 0;
 	*nchunks = 1;
+	*unique_capacity = 0;
 	for (i = 0; i < num_threads; i++) {
 		void *ret;
 		pthread_join(threads[i].thread, &ret);
 		nitems += threads[i].items;
 		*nchunks += threads[i].dupes;
+		*unique_capacity += threads[i].unique_capacity;
 	}
 
 	printf("Threads(%u): %lu items processed\n", num_threads, nitems);
@@ -416,7 +482,7 @@ static int run_dedupe_threads(struct fio_file *f, uint64_t dev_size,
 }
 
 static int dedupe_check(const char *filename, uint64_t *nextents,
-			uint64_t *nchunks)
+			uint64_t *nchunks, uint64_t *unique_capacity)
 {
 	uint64_t dev_size;
 	struct stat sb;
@@ -451,9 +517,11 @@ static int dedupe_check(const char *filename, uint64_t *nextents,
 		bloom = bloom_new(bloom_entries);
 	}
 
-	printf("Will check <%s>, size <%llu>, using %u threads\n", filename, (unsigned long long) dev_size, num_threads);
+	printf("Will check <%s>, size <%llu>, using %u threads\n", filename,
+		(unsigned long long) dev_size, num_threads);
 
-	return run_dedupe_threads(&file, dev_size, nextents, nchunks);
+	return run_dedupe_threads(&file, dev_size, nextents, nchunks,
+					unique_capacity);
 err:
 	if (file.fd != -1)
 		close(file.fd);
@@ -466,18 +534,38 @@ static void show_chunk(struct chunk *c)
 	struct flist_head *n;
 	struct extent *e;
 
-	printf("c hash %8x %8x %8x %8x, count %lu\n", c->hash[0], c->hash[1], c->hash[2], c->hash[3], (unsigned long) c->count);
+	printf("c hash %8x %8x %8x %8x, count %lu\n", c->hash[0], c->hash[1],
+		c->hash[2], c->hash[3], (unsigned long) c->count);
 	flist_for_each(n, &c->extent_list[0]) {
 		e = flist_entry(n, struct extent, list);
 		printf("\toffset %llu\n", (unsigned long long) e->offset);
 	}
 }
 
-static void show_stat(uint64_t nextents, uint64_t nchunks, uint64_t ndupextents)
+static const char *capacity_unit[] = {"b","KB", "MB", "GB", "TB", "PB", "EB"};
+
+static uint64_t bytes_to_human_readable_unit(uint64_t n, const char **unit_out)
+{
+	uint8_t i = 0;
+
+	while (n >= 1024) {
+		i++;
+		n /= 1024;
+	}
+
+	*unit_out = capacity_unit[i];
+	return n;
+}
+
+static void show_stat(uint64_t nextents, uint64_t nchunks, uint64_t ndupextents,
+		      uint64_t unique_capacity)
 {
 	double perc, ratio;
+	const char *unit;
+	uint64_t uc_human;
 
-	printf("Extents=%lu, Unique extents=%lu", (unsigned long) nextents, (unsigned long) nchunks);
+	printf("Extents=%lu, Unique extents=%lu", (unsigned long) nextents,
+		(unsigned long) nchunks);
 	if (!bloom)
 		printf(" Duplicated extents=%lu", (unsigned long) ndupextents);
 	printf("\n");
@@ -485,22 +573,29 @@ static void show_stat(uint64_t nextents, uint64_t nchunks, uint64_t ndupextents)
 	if (nchunks) {
 		ratio = (double) nextents / (double) nchunks;
 		printf("De-dupe ratio: 1:%3.2f\n", ratio - 1.0);
-	} else
+	} else {
 		printf("De-dupe ratio: 1:infinite\n");
+	}
 
-	if (ndupextents)
-		printf("De-dupe working set at least: %3.2f%%\n", 100.0 * (double) ndupextents / (double) nextents);
+	if (ndupextents) {
+		printf("De-dupe working set at least: %3.2f%%\n",
+			100.0 * (double) ndupextents / (double) nextents);
+	}
 
 	perc = 1.00 - ((double) nchunks / (double) nextents);
 	perc *= 100.0;
 	printf("Fio setting: dedupe_percentage=%u\n", (int) (perc + 0.50));
+
+	if (compression) {
+		uc_human = bytes_to_human_readable_unit(unique_capacity, &unit);
+		printf("Unique capacity %lu%s\n", (unsigned long) uc_human, unit);
+	}
 }
 
 static void iter_rb_tree(uint64_t *nextents, uint64_t *nchunks, uint64_t *ndupextents)
 {
 	struct fio_rb_node *n;
-
 	*nchunks = *nextents = *ndupextents = 0;
 
 	n = rb_first(&rb_root);
@@ -532,18 +627,19 @@ static int usage(char *argv[])
 	log_err("\t-c\tFull collision check\n");
 	log_err("\t-B\tUse probabilistic bloom filter\n");
 	log_err("\t-p\tPrint progress indicator\n");
+	log_err("\t-C\tCalculate compressible size\n");
 	return 1;
 }
 
 int main(int argc, char *argv[])
 {
-	uint64_t nextents = 0, nchunks = 0, ndupextents = 0;
+	uint64_t nextents = 0, nchunks = 0, ndupextents = 0, unique_capacity;
 	int c, ret;
 
 	arch_init(argv);
 	debug_init();
 
-	while ((c = getopt(argc, argv, "b:t:d:o:c:p:B:")) != -1) {
+	while ((c = getopt(argc, argv, "b:t:d:o:c:p:B:C:")) != -1) {
 		switch (c) {
 		case 'b':
 			blocksize = atoi(optarg);
@@ -566,13 +662,16 @@ int main(int argc, char *argv[])
 		case 'B':
 			use_bloom = atoi(optarg);
 			break;
+		case 'C':
+			compression = atoi(optarg);
+			break;
 		case '?':
 		default:
 			return usage(argv);
 		}
 	}
 
-	if (collision_check || dump_output)
+	if (collision_check || dump_output || compression)
 		use_bloom = 0;
 
 	if (!num_threads)
@@ -586,13 +685,13 @@ int main(int argc, char *argv[])
 	rb_root = RB_ROOT;
 	rb_lock = fio_sem_init(FIO_SEM_UNLOCKED);
 
-	ret = dedupe_check(argv[optind], &nextents, &nchunks);
+	ret = dedupe_check(argv[optind], &nextents, &nchunks, &unique_capacity);
 
 	if (!ret) {
 		if (!bloom)
 			iter_rb_tree(&nextents, &nchunks, &ndupextents);
-		show_stat(nextents, nchunks, ndupextents);
+		show_stat(nextents, nchunks, ndupextents, unique_capacity);
 	}
 
 	fio_sem_remove(rb_lock);
diff --git a/t/io_uring.c b/t/io_uring.c
index 7bf215c7..a98f78fd 100644
--- a/t/io_uring.c
+++ b/t/io_uring.c
@@ -192,7 +192,7 @@ unsigned int calc_clat_percentiles(unsigned long *io_u_plat, unsigned long nr,
 	unsigned long *ovals = NULL;
 	bool is_last;
 
-	*minv = -1ULL;
+	*minv = -1UL;
 	*maxv = 0;
 
 	ovals = malloc(len * sizeof(*ovals));
@@ -498,7 +498,7 @@ static void init_io(struct submitter *s, unsigned index)
 	sqe->off = offset;
 	sqe->user_data = (unsigned long) f->fileno;
 	if (stats && stats_running)
-		sqe->user_data |= ((unsigned long)s->clock_index << 32);
+		sqe->user_data |= ((uint64_t)s->clock_index << 32);
 }
 
 static int prep_more_ios_uring(struct submitter *s, int max_ios)