From: Hannes Reinecke <hare@suse.de>
To: colyli@suse.de, linux-bcache@vger.kernel.org
Cc: linux-block@vger.kernel.org
Subject: Re: [PATCH v3 16/16] bcache: avoid extra memory consumption in struct bbio for large bucket size
Date: Thu, 16 Jul 2020 08:18:48 +0200	[thread overview]
Message-ID: <a67c4ad7-e743-a468-2b2e-d8bf82b492e4@suse.de> (raw)
In-Reply-To: <20200715143015.14957-17-colyli@suse.de>

On 7/15/20 4:30 PM, colyli@suse.de wrote:
> From: Coly Li <colyli@suse.de>
> 
> Bcache uses struct bbio to do I/Os for meta data pages like uuids,
> disk_buckets, prio_buckets, and btree nodes.
> 
> For example, when writing a btree node onto the cache device, the
> process is,
> - Allocate a struct bbio from mempool c->bio_meta.
> - A struct bio is embedded inside struct bbio; initialize bi_inline_vecs
>    for this embedded bio.
> - Call bch_bio_map() to map each meta data page to a bvec from the
>    inline bi_io_vec table.
> - Call bch_submit_bbio() to submit the bio to the underlying block layer.
> - When the I/O completes, only release the struct bbio; don't touch the
>    reference counters of the meta data pages.
> 
> The struct bbio is defined as,
> 738 struct bbio {
> 739     unsigned int            submit_time_us;
> 	[snipped]
> 748     struct bio              bio;
> 749 };
> 
> Because struct bio is embedded at the end of struct bbio, the actual
> allocated size of a struct bbio object is sizeof(struct bbio) plus the
> size of the embedded bio->bi_inline_vecs array.
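> 
> As a minimal sketch of that size calculation (illustrative only, not
> part of the patch; 'nvecs' is a placeholder for bucket_pages(c) before
> this change and meta_bucket_pages(&c->sb) after it):
> 
> 	/* total object size the mempool must provide per struct bbio */
> 	static inline size_t bbio_object_size(unsigned int nvecs)
> 	{
> 		return sizeof(struct bbio) + sizeof(struct bio_vec) * nvecs;
> 	}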
> 
> Now all meta data bucket sizes are limited by meta_bucket_pages(); if
> the bucket size is larger than meta_bucket_pages()*PAGE_SECTORS, the
> rest of the space in the bucket is unused. Therefore the maximum space
> used in a meta bucket is (1<<MAX_ORDER) pages, or
> (1<<CONFIG_FORCE_MAX_ZONEORDER) pages if it is configured.
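> 
> A rough sketch of that cap (illustrative only; the real
> meta_bucket_pages() helper was introduced earlier in this series and
> its exact definition may differ):
> 
> 	/* cap a bucket's page count at the buddy allocator limit named above */
> 	static inline unsigned int meta_bucket_pages_sketch(unsigned int nr_bucket_pages)
> 	{
> 		return min_t(unsigned int, nr_bucket_pages, 1U << MAX_ORDER);
> 	}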
> 
> Therefore for a large bucket size, it is unnecessary to calculate the
> allocation size of mempool c->bio_meta as,
> 	mempool_init_kmalloc_pool(&c->bio_meta, 2,
> 			sizeof(struct bbio) +
> 			sizeof(struct bio_vec) * bucket_pages(c))
> This is too large: the Linux buddy allocator cannot allocate that many
> contiguous pages, and the extra allocated pages would just be wasted.
> 
> This patch replaces bucket_pages() with meta_bucket_pages() in two
> places,
> - In bch_cache_set_alloc(), when initializing mempool c->bio_meta, use
>    sizeof(struct bbio) + sizeof(struct bio_vec) * meta_bucket_pages(&c->sb)
>    as the allocated object size.
> - In bch_bbio_alloc(), when calling bio_init() to set up the inline bvec
>    table bi_inline_vecs, use meta_bucket_pages() as the number of inline
>    bio vecs.
> 
> Now the maximum size of the embedded bio inside struct bbio exactly
> matches the limit of meta_bucket_pages(); no extra pages are wasted.
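> 
> For a rough sense of scale (illustrative numbers only, assuming 4 KiB
> pages and a 16-byte struct bio_vec on 64-bit): with a 16 MB bucket,
> bucket_pages(c) is 4096, so each mempool object used to reserve about
> 4096 * 16 bytes = 64 KB of inline bio_vecs, even though only up to
> meta_bucket_pages() of them can ever be carried by one meta data I/O.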
> 
> Signed-off-by: Coly Li <colyli@suse.de>
> ---
>   drivers/md/bcache/io.c    | 2 +-
>   drivers/md/bcache/super.c | 2 +-
>   2 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/md/bcache/io.c b/drivers/md/bcache/io.c
> index b25ee33b0d0b..a14a445618b4 100644
> --- a/drivers/md/bcache/io.c
> +++ b/drivers/md/bcache/io.c
> @@ -26,7 +26,7 @@ struct bio *bch_bbio_alloc(struct cache_set *c)
>   	struct bbio *b = mempool_alloc(&c->bio_meta, GFP_NOIO);
>   	struct bio *bio = &b->bio;
>   
> -	bio_init(bio, bio->bi_inline_vecs, bucket_pages(c));
> +	bio_init(bio, bio->bi_inline_vecs, meta_bucket_pages(&c->sb));
>   
>   	return bio;
>   }
> diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
> index 90494c7dead8..cade3f09661d 100644
> --- a/drivers/md/bcache/super.c
> +++ b/drivers/md/bcache/super.c
> @@ -1920,7 +1920,7 @@ struct cache_set *bch_cache_set_alloc(struct cache_sb *sb)
>   
>   	if (mempool_init_kmalloc_pool(&c->bio_meta, 2,
>   			sizeof(struct bbio) +
> -			sizeof(struct bio_vec) * bucket_pages(c)))
> +			sizeof(struct bio_vec) * meta_bucket_pages(&c->sb)))
>   		goto err;
>   
>   	if (mempool_init_kmalloc_pool(&c->fill_iter, 1, iter_size))
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


Thread overview: 33+ messages
2020-07-15 14:29 [PATCH v3 00/16] bcache: extend bucket size to 32bit width colyli
2020-07-15 14:30 ` [PATCH v3 01/16] bcache: add read_super_common() to read major part of super block colyli
2020-07-15 14:30 ` [PATCH v3 02/16] bcache: add more accurate error information in read_super_common() colyli
2020-07-15 14:30 ` [PATCH v3 03/16] bcache: disassemble the big if() checks in bch_cache_set_alloc() colyli
2020-07-15 14:30 ` [PATCH v3 04/16] bcache: fix super block seq numbers comparision in register_cache_set() colyli
2020-07-15 14:30 ` [PATCH v3 05/16] bcache: increase super block version for cache device and backing device colyli
2020-07-15 14:30 ` [PATCH v3 06/16] bcache: move bucket related code into read_super_common() colyli
2020-07-15 14:30 ` [PATCH v3 07/16] bcache: struct cache_sb is only for in-memory super block now colyli
2020-07-15 18:21   ` Christoph Hellwig
2020-07-16  3:31     ` Coly Li
2020-07-15 14:30 ` [PATCH v3 08/16] bcache: introduce meta_bucket_pages() related helper routines colyli
2020-07-15 15:36   ` Hannes Reinecke
2020-07-15 16:00     ` Coly Li
2020-07-15 14:30 ` [PATCH v3 09/16] bcache: handle c->uuids properly for bucket size > 8MB colyli
2020-07-15 15:37   ` Hannes Reinecke
2020-07-15 14:30 ` [PATCH v3 10/16] bcache: handle cache prio_buckets and disk_buckets " colyli
2020-07-15 15:38   ` Hannes Reinecke
2020-07-15 14:30 ` [PATCH v3 11/16] bcache: handle cache set verify_ondisk " colyli
2020-07-16  6:07   ` Hannes Reinecke
2020-07-15 14:30 ` [PATCH v3 12/16] bcache: handle btree node memory allocation " colyli
2020-07-16  6:08   ` Hannes Reinecke
2020-07-15 14:30 ` [PATCH v3 13/16] bcache: add bucket_size_hi into struct cache_sb_disk for large bucket colyli
2020-07-16  6:15   ` Hannes Reinecke
2020-07-16  6:41     ` Coly Li
2020-07-16  7:02       ` Hannes Reinecke
2020-07-16  7:08         ` Coly Li
2020-07-15 14:30 ` [PATCH v3 14/16] bcache: add sysfs file to display feature sets information of cache set colyli
2020-07-16  6:17   ` Hannes Reinecke
2020-07-16  6:20     ` Coly Li
2020-07-15 14:30 ` [PATCH v3 15/16] bcache: avoid extra memory allocation from mempool c->fill_iter colyli
2020-07-16  6:18   ` Hannes Reinecke
2020-07-15 14:30 ` [PATCH v3 16/16] bcache: avoid extra memory consumption in struct bbio for large bucket size colyli
2020-07-16  6:18   ` Hannes Reinecke [this message]
