From: "hch@infradead.org" <hch@infradead.org>
To: Johannes Thumshirn <Johannes.Thumshirn@wdc.com>
Cc: Qu Wenruo <quwenruo.btrfs@gmx.com>,
	"hch@infradead.org" <hch@infradead.org>,
	Linux FS Devel <linux-fsdevel@vger.kernel.org>,
	"dm-devel@redhat.com" <dm-devel@redhat.com>,
	"linux-btrfs@vger.kernel.org" <linux-btrfs@vger.kernel.org>
Subject: Re: Any bio_clone_slow() implementation which doesn't share bi_io_vec?
Date: Tue, 23 Nov 2021 06:28:28 -0800	[thread overview]
Message-ID: <YZz6jAVXun8yC/6k@infradead.org> (raw)
In-Reply-To: <PH0PR04MB74169757F9CF740289B790C49B609@PH0PR04MB7416.namprd04.prod.outlook.com>

On Tue, Nov 23, 2021 at 11:39:11AM +0000, Johannes Thumshirn wrote:
> I think we have to differentiate two cases here: a "regular"
> REQ_OP_ZONE_APPEND bio and a RAID stripe REQ_OP_ZONE_APPEND bio. The first
> one (i.e. the regular REQ_OP_ZONE_APPEND bio) can't be split, because we
> cannot guarantee the order in which the device writes the data to disk.
> The RAID stripe bio, however, we can split into the two (or more) parts
> that will end up on _different_ devices. All we need to do is a) ensure it
> doesn't cross the device's zone append limit and b) clamp all
> bi_iter.bi_sector down to the start of the target zone, i.e. stick to the
> rules of REQ_OP_ZONE_APPEND.

Exactly.  A stacking driver must never split a REQ_OP_ZONE_APPEND bio.
But the file system itself can of course split it, as long as each
split-off bio has its own bi_end_io handler to record where it was
written.
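
To make the above concrete, here is a rough sketch of such a
filesystem-level split, written against the ~v5.15 block layer API.
It is not actual btrfs or kernel code: struct my_extent,
record_write_error() and the za_* names are made up for illustration.
Each chunk gets its own REQ_OP_ZONE_APPEND bio, sized within the
device's zone-append limit, aimed at the start of the target zone, and
given a private bi_end_io handler that records where the device
actually wrote the data.

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/slab.h>

struct za_chunk {
	struct my_extent	*extent;	/* hypothetical fs bookkeeping */
};

static void za_end_io(struct bio *bio)
{
	struct za_chunk *chunk = bio->bi_private;

	if (bio->bi_status) {
		record_write_error(chunk->extent);	/* hypothetical */
	} else {
		/*
		 * For REQ_OP_ZONE_APPEND the block layer reports the
		 * sector the data actually landed on in bi_iter.bi_sector
		 * at completion time.
		 */
		chunk->extent->physical =
			(u64)bio->bi_iter.bi_sector << SECTOR_SHIFT;
	}
	kfree(chunk);
	bio_put(bio);
}

static int submit_za_chunk(struct block_device *bdev, sector_t zone_start,
			   struct page *page, unsigned int len,
			   struct my_extent *extent)
{
	struct request_queue *q = bdev_get_queue(bdev);
	struct za_chunk *chunk;
	struct bio *bio;

	/* Rule a): never exceed the device's zone-append limit. */
	if (len >> SECTOR_SHIFT > queue_max_zone_append_sectors(q))
		return -EINVAL;

	chunk = kzalloc(sizeof(*chunk), GFP_NOFS);
	if (!chunk)
		return -ENOMEM;
	chunk->extent = extent;

	/* bio_alloc() from fs_bio_set does not fail with GFP_NOFS. */
	bio = bio_alloc(GFP_NOFS, 1);
	bio_set_dev(bio, bdev);
	bio->bi_opf = REQ_OP_ZONE_APPEND | REQ_SYNC;
	/* Rule b): the bio targets the zone start, not a specific LBA. */
	bio->bi_iter.bi_sector = zone_start;
	bio->bi_end_io = za_end_io;
	bio->bi_private = chunk;

	if (bio_add_zone_append_page(bio, page, len, 0) != len) {
		bio_put(bio);
		kfree(chunk);
		return -EIO;
	}

	submit_bio(bio);
	return 0;
}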


Thread overview: 15 messages

2021-11-23  6:44 Any bio_clone_slow() implementation which doesn't share bi_io_vec? Qu Wenruo
2021-11-23  7:43 ` Christoph Hellwig
2021-11-23  8:10   ` Qu Wenruo
2021-11-23  8:13     ` Christoph Hellwig
2021-11-23 11:09       ` Qu Wenruo
2021-11-23 11:39         ` Johannes Thumshirn
2021-11-23 14:28           ` hch [this message]
2021-11-23 23:07             ` Qu Wenruo
2021-11-24  6:09               ` hch
2021-11-24  6:18                 ` Qu Wenruo
2021-11-24  7:02                   ` hch
2021-11-24  7:22                     ` hch
2021-11-24  7:25               ` Naohiro Aota
2021-11-24  7:39                 ` Qu Wenruo
2021-11-26 12:33       ` Qu Wenruo
