From: Daniel Wagner <dwagner@suse.de>
To: Christoph Hellwig <hch@lst.de>
Cc: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org,
	 Keith Busch <kbusch@kernel.org>,
	Sagi Grimberg <sagi@grimberg.me>, Hannes Reinecke <hare@suse.de>
Subject: Re: [PATCH v3 3/4] nvme: move ns id info to struct nvme_ns_head
Date: Thu, 7 Dec 2023 11:53:07 +0100
Message-ID: <xpbwa63z7w6fa7mykmhhd56vdm3746y2x7vvtp5xj2st5aupvz@4rrj7gxzu66c>
In-Reply-To: <20231206085436.GB24484@lst.de>

On Wed, Dec 06, 2023 at 09:54:36AM +0100, Christoph Hellwig wrote:
> On Wed, Dec 06, 2023 at 09:12:43AM +0100, Daniel Wagner wrote:
> > Move the namespace info to struct nvme_ns_head, because it's the same
> > for all associated namespaces.
> > 
> > The head pointer is accessible from the ns pointer so we could just
> > update all places with ns->x to ns->head->x. While this is okay for the
> > slow path,
> 
> Do you have any data to show that it matters?  All the I/O command
> setup functions already access the ns_head for ->ns_id, so looking
> at more fields can't really make any difference.

I've split the patch so that the first patch just moves the variables
around and changes ns->x to ns->head->x ('patched'). Then I changed the
layout of nvme_ns_head so that all variables used in nvme_setup_rw()
are in one cacheline ('cache line optimized'), and the last change is
passing the nvme_ns_head pointer around ('use nvme_ns_head directly'),
as sketched below.
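
Rough sketch of the shape of that last change (cut-down stand-in types
so it compiles on its own, not the actual kernel code; nvme_sect_to_lba()
is just one example of the helpers the series touches):

#include <stdint.h>

#define SECTOR_SHIFT 9
typedef uint64_t sector_t;

/* cut-down stand-ins for the kernel structs, only for illustration */
struct nvme_ns_head { int lba_shift; };
struct nvme_ns { struct nvme_ns_head *head; };

/* before: helpers take the namespace and chase ns->head every time */
static inline uint64_t nvme_sect_to_lba_old(struct nvme_ns *ns, sector_t sector)
{
	return sector >> (ns->head->lba_shift - SECTOR_SHIFT);
}

/* after: pass the head directly and skip one pointer dereference */
static inline uint64_t nvme_sect_to_lba_new(struct nvme_ns_head *head, sector_t sector)
{
	return sector >> (head->lba_shift - SECTOR_SHIFT);
}

int main(void) { return 0; }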

I assume nvme_setup_rw() is the most frequently used of these functions,
so I benchmarked it by issuing 4k reads. I'm sure my benchmark setup
isn't perfect, but it's what I have.

Anyway, the results suggest that moving the variables to nvme_ns_head
has a slight performance impact, but that it can be more than mitigated
by optimizing the cacheline accesses. The change to use nvme_ns_head
directly seems to eat up all the cacheline optimization gains again.

'patched' layout (dumped with pahole):

struct nvme_ns_head {
	struct list_head           list;                 /*     0    16 */
	struct srcu_struct         srcu;                 /*    16    72 */
	/* --- cacheline 1 boundary (64 bytes) was 24 bytes ago --- */
	struct nvme_subsystem *    subsys;               /*    88     8 */
	unsigned int               ns_id;                /*    96     4 */
	struct nvme_ns_ids         ids;                  /*   100    41 */

	/* XXX 3 bytes hole, try to pack */

	/* --- cacheline 2 boundary (128 bytes) was 16 bytes ago --- */
	struct list_head           entry;                /*   144    16 */
	struct kref                ref;                  /*   160     4 */
	bool                       shared;               /*   164     1 */

	/* XXX 3 bytes hole, try to pack */

	int                        instance;             /*   168     4 */

	/* XXX 4 bytes hole, try to pack */

	struct nvme_effects_log *  effects;              /*   176     8 */
	int                        lba_shift;            /*   184     4 */
	u16                        ms;                   /*   188     2 */
	u16                        pi_size;              /*   190     2 */
	/* --- cacheline 3 boundary (192 bytes) --- */
	u16                        sgs;                  /*   192     2 */

	/* XXX 2 bytes hole, try to pack */

	u32                        sws;                  /*   196     4 */
	u64                        nuse;                 /*   200     8 */
	u8                         pi_type;              /*   208     1 */
	u8                         guard_type;           /*   209     1 */

	/* XXX 6 bytes hole, try to pack */

	u64                        zsze;                 /*   216     8 */
	unsigned long              features;             /*   224     8 */
	struct ratelimit_state     rs_nuse;              /*   232   104 */
	/* --- cacheline 5 boundary (320 bytes) was 16 bytes ago --- */

[...]
}


'cache line optimized' layout (dumped with pahole):

struct nvme_ns_head {
	struct list_head           list;                 /*     0    16 */
	struct srcu_struct         srcu;                 /*    16    72 */
	/* --- cacheline 1 boundary (64 bytes) was 24 bytes ago --- */
	struct nvme_subsystem *    subsys;               /*    88     8 */
	struct nvme_ns_ids         ids;                  /*    96    41 */

	/* XXX 7 bytes hole, try to pack */

	/* --- cacheline 2 boundary (128 bytes) was 16 bytes ago --- */
	struct list_head           entry;                /*   144    16 */
	struct kref                ref;                  /*   160     4 */
	bool                       shared;               /*   164     1 */

	/* XXX 3 bytes hole, try to pack */

	int                        instance;             /*   168     4 */

	/* XXX 4 bytes hole, try to pack */

	struct nvme_effects_log *  effects;              /*   176     8 */
	u64                        nuse;                 /*   184     8 */
	/* --- cacheline 3 boundary (192 bytes) --- */
	unsigned int               ns_id;                /*   192     4 */
	int                        lba_shift;            /*   196     4 */
	u16                        ms;                   /*   200     2 */
	u16                        pi_size;              /*   202     2 */
	u8                         pi_type;              /*   204     1 */
	u8                         guard_type;           /*   205     1 */
	u16                        sgs;                  /*   206     2 */
	u32                        sws;                  /*   208     4 */
[...]
}
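
Reading the two dumps side by side: in the 'patched' layout the fields
that the optimized layout groups together are spread over three
cachelines (ns_id in cacheline 1 at offset 96, lba_shift/ms/pi_size in
cacheline 2 at offsets 184-191, sgs/sws/pi_type/guard_type in cacheline
3), while 'cache line optimized' packs them all into cacheline 3
(offsets 192-211). A tiny compilable sketch of the kind of compile-time
guard one could add for this (stand-in struct with the same field order,
not the real nvme_ns_head):

#include <stddef.h>
#include <stdint.h>

/* stand-in for the hot part of the optimized layout; the fields
 * nvme_setup_rw() reads start on a 64-byte cacheline boundary */
struct ns_head_hot {
	_Alignas(64) uint32_t ns_id;
	int32_t  lba_shift;
	uint16_t ms;
	uint16_t pi_size;
	uint8_t  pi_type;
	uint8_t  guard_type;
	uint16_t sgs;
	uint32_t sws;
};

/* fails to compile if the hot fields ever spill past one 64-byte line */
_Static_assert(offsetof(struct ns_head_hot, sws) + sizeof(uint32_t) <= 64,
	       "nvme_setup_rw() hot fields span more than one cacheline");

int main(void) { return 0; }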

fio test job:
  [global]
  name=nvme-read
  time_based
  ramp_time=30
  runtime=120
  readwrite=read
  bs=4k
  ioengine=io_uring
  direct=1
  numjobs=4
  iodepth=64
  group_reporting=1

  [nvme0]
  new_group
  filename=/dev/nvme0n1
  cpus_allowed=1-4
  cpus_allowed_policy=split

bandwidth
      'baseline'  'patched'  'cache line optimized' 'use nvme_ns_head directly'
      1608        1632       1613                   1618
      1608        1610       1634                   1618
      1623        1639       1642                   1646
      1638        1610       1640                   1619
      1637        1611       1642                   1620
avg   1622.8      1620.4     1634.2                 1624.2
stdev 14.75       14.01      12.29                  12.21

ios
      'baseline'  'patched'  'cache line optimized' 'use nvme_ns_head directly'
      65626946    66735998   66268893               66458877
      65641469    66041634   66888910               66384526
      66012335    66904002   67132768               67329550
      66589757    66013222   67132053               66491121
      66569213    66033040   67132075               66474708
avg   66087944    66345579.2 66910939.8             66627756.4
stdev 474608.24   437260.67  374068.50              394426.34
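
For clarity, avg and stdev above are the sample mean and sample standard
deviation (n - 1 in the denominator); a quick self-contained check
against the 'baseline' bandwidth column:

#include <math.h>
#include <stdio.h>

int main(void)
{
	/* 'baseline' bandwidth samples from the table above */
	double v[] = { 1608, 1608, 1623, 1638, 1637 };
	int n = sizeof(v) / sizeof(v[0]);
	double sum = 0.0, sq = 0.0;

	for (int i = 0; i < n; i++)
		sum += v[i];
	double avg = sum / n;

	for (int i = 0; i < n; i++)
		sq += (v[i] - avg) * (v[i] - avg);
	double stdev = sqrt(sq / (n - 1));

	/* prints: avg 1622.8 stdev 14.75 */
	printf("avg %.1f stdev %.2f\n", avg, stdev);
	return 0;
}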

