From: Chaitanya Kulkarni <chaitanyak@nvidia.com>
To: Eric Biggers <ebiggers@kernel.org>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>
Cc: "linux-ext4@vger.kernel.org" <linux-ext4@vger.kernel.org>,
	"linux-f2fs-devel@lists.sourceforge.net" 
	<linux-f2fs-devel@lists.sourceforge.net>,
	"linux-xfs@vger.kernel.org" <linux-xfs@vger.kernel.org>,
	"linux-api@vger.kernel.org" <linux-api@vger.kernel.org>,
	"linux-fscrypt@vger.kernel.org" <linux-fscrypt@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [RFC PATCH 1/7] statx: add I/O alignment information
Date: Fri, 11 Feb 2022 11:40:21 +0000	[thread overview]
Message-ID: <1762970b-94b6-1cd0-8ae2-41a5d057f72a@nvidia.com> (raw)
In-Reply-To: <20220211061158.227688-2-ebiggers@kernel.org>

On 2/10/22 10:11 PM, Eric Biggers wrote:
> From: Eric Biggers <ebiggers@google.com>
> 
> Traditionally, the conditions for when DIO (direct I/O) is supported
> were fairly simple: filesystems either supported DIO aligned to the
> block device's logical block size, or didn't support DIO at all.
> 
> However, due to filesystem features that have been added over time (e.g.,
> data journalling, inline data, encryption, verity, compression,
> checkpoint disabling, log-structured mode), the conditions for when DIO
> is allowed on a file have gotten increasingly complex.  Whether a
> particular file supports DIO, and with what alignment, can depend on
> various file attributes and filesystem mount options, as well as which
> block device(s) the file's data is located on.
> 
> XFS has an ioctl XFS_IOC_DIOINFO which exposes this information to
> applications.  However, as discussed
> (https://lore.kernel.org/linux-fsdevel/20220120071215.123274-1-ebiggers@kernel.org/T/#u),
> this ioctl is rarely used and not known to be used outside of
> XFS-specific code.  It also was never intended to indicate when a file
> doesn't support DIO at all, and it only exposes the minimum I/O
> alignment, not the optimal I/O alignment which has been requested too.
> 
> Therefore, let's expose this information via statx().  Add the
> STATX_IOALIGN flag and three fields associated with it:
> 
> * stx_mem_align_dio: the alignment (in bytes) required for user memory
>    buffers for DIO, or 0 if DIO is not supported on the file.
> 
> * stx_offset_align_dio: the alignment (in bytes) required for file
>    offsets and I/O segment lengths for DIO, or 0 if DIO is not supported
>    on the file.  This will only be nonzero if stx_mem_align_dio is
>    nonzero, and vice versa.
> 
> * stx_offset_align_optimal: the alignment (in bytes) suggested for file
>    offsets and I/O segment lengths to get optimal performance.  This
>    applies to both DIO and buffered I/O.  It differs from stx_blocksize
>    in that stx_offset_align_optimal will contain the real optimum I/O
>    size, which may be a large value.  In contrast, for compatibility
>    reasons stx_blocksize is the minimum size needed to avoid page cache
>    read/write/modify cycles, which may be much smaller than the optimum
>    I/O size.  For more details about the motivation for this field, see
>    https://lore.kernel.org/r/20220210040304.GM59729@dread.disaster.area
> 
> Note that as with other statx() extensions, if STATX_IOALIGN isn't set
> in the returned statx struct, then these new fields won't be filled in.
> This will happen if the filesystem doesn't support STATX_IOALIGN, or if
> the file isn't a regular file.  (It might be supported on block device
> files in the future.)  It might also happen if the caller didn't include
> STATX_IOALIGN in the request mask, since statx() isn't required to
> return information that wasn't requested.
> 
> This commit adds the VFS-level plumbing for STATX_IOALIGN.  Individual
> filesystems will still need to add code to support it.
> 
> Signed-off-by: Eric Biggers <ebiggers@google.com>
> ---
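
For reference, a minimal (untested) userspace sketch of how an application
could consume these fields. It assumes the STATX_IOALIGN bit and the
stx_*_align_* names land exactly as proposed above and are present in the
installed uapi headers; on current headers these names do not exist:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
	struct statx stx;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}

	/* Ask only for the proposed I/O alignment information. */
	if (statx(AT_FDCWD, argv[1], 0, STATX_IOALIGN, &stx) != 0) {
		perror("statx");
		return 1;
	}

	/* The filesystem may not fill these fields in at all. */
	if (!(stx.stx_mask & STATX_IOALIGN)) {
		printf("STATX_IOALIGN not supported for this file\n");
		return 0;
	}

	if (stx.stx_mem_align_dio == 0) {
		printf("DIO is not supported on this file\n");
	} else {
		printf("DIO memory alignment: %u\n",
		       (unsigned int)stx.stx_mem_align_dio);
		printf("DIO offset alignment: %u\n",
		       (unsigned int)stx.stx_offset_align_dio);
	}
	printf("optimal offset alignment: %u\n",
	       (unsigned int)stx.stx_offset_align_optimal);
	return 0;
}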


I've actually worked on a similar series to export alignment and
granularity for non-trivial operations; this implementation only exports
I/O alignments (mostly for REQ_OP_WRITE/REQ_OP_READ) via statx.

This information ultimately comes from
bdev_logical_block_size() -> q->limits.logical_block_size, which is set
when a low-level driver such as nvme calls blk_queue_logical_block_size().
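
Roughly, as a simplified illustrative sketch (not the actual nvme or
filesystem code), the limit flows like this:

#include <linux/blkdev.h>

/*
 * Driver side: a low-level driver (e.g. nvme) reports the device's
 * logical block size, which ends up in q->limits.logical_block_size.
 */
static void example_driver_setup(struct request_queue *q, unsigned int lbs)
{
	blk_queue_logical_block_size(q, lbs);
}

/*
 * Filesystem side: read the limit back when deciding what minimum DIO
 * alignment to advertise for a file on this block device.
 */
static unsigned int example_fs_min_dio_alignment(struct block_device *bdev)
{
	return bdev_logical_block_size(bdev);
}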

In my experience, especially with SSDs, applications want to know
similar information about other non-trivial requests such as
REQ_OP_DISCARD/REQ_OP_WRITE_ZEROES/REQ_OP_VERIFY (work in progress,
see [1]).

It would be great to make this a generic userspace interface where the
user can ask about a specific REQ_OP_XXX: generic I/O such as
REQ_OP_READ/REQ_OP_WRITE as well as non-generic operations such as
REQ_OP_DISCARD/REQ_OP_VERIFY, etc.
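
Something along these lines (purely hypothetical, none of these names
exist in the kernel; it only illustrates the idea of querying
per-operation geometry):

#include <stdint.h>

/* Hypothetical: mirrors a subset of the kernel's REQ_OP_XXX values. */
enum example_io_op {
	EXAMPLE_OP_READ,
	EXAMPLE_OP_WRITE,
	EXAMPLE_OP_DISCARD,
	EXAMPLE_OP_WRITE_ZEROES,
	EXAMPLE_OP_VERIFY,
};

/* Hypothetical per-operation geometry returned to userspace. */
struct example_io_geometry {
	uint32_t op;		/* which operation is being queried */
	uint32_t mem_align;	/* required buffer alignment, bytes */
	uint32_t offset_align;	/* required offset/length alignment, bytes */
	uint32_t granularity;	/* preferred granularity, 0 = unsupported */
};

/*
 * Hypothetical query, e.g. via a new statx()-like call or an ioctl:
 *   ioctl(fd, EXAMPLE_IOC_GET_IO_GEOMETRY, &geom)
 */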

Since I've worked on implementing REQ_OP_VERIFY support, I don't want to
implement a separate interface for querying the granularity or alignment
of REQ_OP_VERIFY or any other non-trivial REQ_OP_XXX.

-ck

[1] https://www.spinics.net/lists/linux-xfs/msg56826.html



Thread overview: 10+ messages
2022-02-11  6:11 [RFC PATCH 0/7] make statx() return I/O alignment information Eric Biggers
2022-02-11  6:11 ` [RFC PATCH 1/7] statx: add " Eric Biggers
2022-02-11 11:40   ` Chaitanya Kulkarni [this message]
2022-02-11 11:45     ` Chaitanya Kulkarni
2022-02-11  6:11 ` [RFC PATCH 2/7] fscrypt: change fscrypt_dio_supported() to prepare for STATX_IOALIGN Eric Biggers
2022-02-11  6:11 ` [RFC PATCH 3/7] ext4: support STATX_IOALIGN Eric Biggers
2022-02-11  6:11 ` [RFC PATCH 4/7] f2fs: move f2fs_force_buffered_io() into file.c Eric Biggers
2022-02-11  6:11 ` [RFC PATCH 5/7] f2fs: don't allow DIO reads but not DIO writes Eric Biggers
2022-02-11  6:11 ` [RFC PATCH 6/7] f2fs: simplify f2fs_force_buffered_io() Eric Biggers
2022-02-11  6:11 ` [RFC PATCH 7/7] f2fs: support STATX_IOALIGN Eric Biggers
