From: Christoph Hellwig <hch@infradead.org>
To: "Darrick J. Wong" <djwong@kernel.org>
Cc: linux-xfs@vger.kernel.org
Subject: Re: [PATCH 1/7] xfs: increase the default parallelism levels of pwork clients
Date: Wed, 13 Jan 2021 15:49:32 +0100 [thread overview]
Message-ID: <X/8IfJj+qgnl303O@infradead.org> (raw)
In-Reply-To: <161040740189.1582286.17385075679159461086.stgit@magnolia>
> +/* Estimate the amount of parallelism available for a given device. */
> +unsigned int
> +xfs_buftarg_guess_threads(
> +	struct xfs_buftarg	*btp)
> +{
> +	int			iomin;
> +	int			ioopt;
> +
> +	/*
> +	 * The device tells us that it is non-rotational, and we take that to
> +	 * mean there are no moving parts and that the device can handle all
> +	 * the CPUs throwing IO requests at it.
> +	 */
> +	if (blk_queue_nonrot(btp->bt_bdev->bd_disk->queue))
> +		return num_online_cpus();
> +
> +	/*
> +	 * The device has a preferred and minimum IO size that suggest a RAID
> +	 * setup, so infer the number of disks and assume that the parallelism
> +	 * is equal to the disk count.
> +	 */
> +	iomin = bdev_io_min(btp->bt_bdev);
> +	ioopt = bdev_io_opt(btp->bt_bdev);
> +	if (iomin > 0 && ioopt > iomin)
> +		return ioopt / iomin;
> +
> +	/*
> +	 * The device did not indicate that it has any capabilities beyond that
> +	 * of a rotating disk with a single drive head, so we estimate no
> +	 * parallelism at all.
> +	 */
> +	return 1;
> +}
Why is this in xfs_buf.c despite having nothing to do with the buffer
cache?
Also I think we need some sort of manual override in case the guess is
wrong.