* Regression: wrong DIO alignment check with dm-crypt
@ 2022-11-02  3:11 ` Eric Biggers
  0 siblings, 0 replies; 16+ messages in thread
From: Eric Biggers @ 2022-11-02  3:11 UTC (permalink / raw)
  To: linux-block, dm-devel, Keith Busch; +Cc: Stefan Hajnoczi

Hi,

I happened to notice the following QEMU bug report:

https://gitlab.com/qemu-project/qemu/-/issues/1290

I believe it's a regression from the following kernel commit:

    commit b1a000d3b8ec582da64bb644be633e5a0beffcbf
    Author: Keith Busch <kbusch@kernel.org>
    Date:   Fri Jun 10 12:58:29 2022 -0700

        block: relax direct io memory alignment

The bug is that if a dm-crypt device is set up with a crypto sector size (and
thus also a logical_block_size) of 4096, then the block layer now lets through
direct I/O requests to dm-crypt when the user buffer has only 512-byte
alignment, instead of the 4096-bytes expected by dm-crypt in that case.  This is
because the dma_alignment of the device-mapper device is only 511 bytes.

This has two effects in this case:

    - The error code for DIO with a misaligned buffer is now EIO, instead of
      EINVAL as expected and documented.  This is because the I/O reaches
      dm-crypt instead of being rejected by the block layer.

    - STATX_DIOALIGN reports 512 bytes for stx_dio_mem_align, instead of the
      correct value of 4096.  (Technically not a regression since STATX_DIOALIGN
      is new in v6.1, but still a bug.)
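
For illustration, a minimal userspace sketch of the misaligned-buffer case
described above (the device path is a placeholder, not taken from the report):

/*
 * Open a dm-crypt device that uses a 4096-byte crypto sector with
 * O_DIRECT, then read into a buffer that is only 512-byte aligned.
 * The documented failure is EINVAL; on the affected kernels the I/O
 * reaches dm-crypt and fails with EIO instead.
 */
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char *buf;
	int fd;

	fd = open("/dev/mapper/test-crypt", O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* page-aligned allocation, deliberately offset to 512-byte alignment */
	if (posix_memalign((void **)&buf, 4096, 8192)) {
		fprintf(stderr, "posix_memalign failed\n");
		return 1;
	}
	buf += 512;

	if (pread(fd, buf, 4096, 0) < 0)
		/* documented: EINVAL; observed on affected kernels: EIO */
		fprintf(stderr, "pread: %s\n", strerror(errno));

	close(fd);
	return 0;
}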

Any thoughts on what the correct fix is here?  Maybe the device-mapper layer
needs to set dma_alignment correctly?  Or maybe the block layer needs to set it
to 'logical_block_size - 1' by default?

- Eric

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Regression: wrong DIO alignment check with dm-crypt
  2022-11-02  3:11 ` [dm-devel] " Eric Biggers
@ 2022-11-02 14:52   ` Keith Busch
  -1 siblings, 0 replies; 16+ messages in thread
From: Keith Busch @ 2022-11-02 14:52 UTC (permalink / raw)
  To: Eric Biggers; +Cc: linux-block, dm-devel, Stefan Hajnoczi, Dmitrii Tcvetkov

[Cc'ing Dmitrii, who also reported the same issue]

On Tue, Nov 01, 2022 at 08:11:15PM -0700, Eric Biggers wrote:
> Hi,
> 
> I happened to notice the following QEMU bug report:
> 
> https://gitlab.com/qemu-project/qemu/-/issues/1290
> 
> I believe it's a regression from the following kernel commit:
> 
>     commit b1a000d3b8ec582da64bb644be633e5a0beffcbf
>     Author: Keith Busch <kbusch@kernel.org>
>     Date:   Fri Jun 10 12:58:29 2022 -0700
> 
>         block: relax direct io memory alignment
> 
> The bug is that if a dm-crypt device is set up with a crypto sector size (and
> thus also a logical_block_size) of 4096, then the block layer now lets through
> direct I/O requests to dm-crypt when the user buffer has only 512-byte
> alignment, instead of the 4096-bytes expected by dm-crypt in that case.  This is
> because the dma_alignment of the device-mapper device is only 511 bytes.
> 
> This has two effects in this case:
> 
>     - The error code for DIO with a misaligned buffer is now EIO, instead of
>       EINVAL as expected and documented.  This is because the I/O reaches
>       dm-crypt instead of being rejected by the block layer.
> 
>     - STATX_DIOALIGN reports 512 bytes for stx_dio_mem_align, instead of the
>       correct value of 4096.  (Technically not a regression since STATX_DIOALIGN
>       is new in v6.1, but still a bug.)
> 
> Any thoughts on what the correct fix is here?  Maybe the device-mapper layer
> needs to set dma_alignment correctly?  Or maybe the block layer needs to set it
> to 'logical_block_size - 1' by default?

I think the quick fix is to have the device mapper override the default
queue stacking limits to align the dma mask to logical block size.

Does dm-crypt strictly require memory alignment to match the block size,
or is this just the way the current software works that we can change?
It may take me a moment to get to the bottom of that, but after a quick
glance, it looks like dm-crypt will work fine with the 512 offsets if we
happen to have a physically contiguous multi-page bvec, and will fail
otherwise due to a predetermined set of sg segments (looking at
crypt_convert_block_aead()).
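
As a rough illustration of why the 512-byte offset matters, a small arithmetic
sketch (assuming 4K pages and the 4096-byte crypto sector from this report;
this is not dm-crypt code):

#include <stdio.h>

#define PAGE_SIZE	4096u
#define CRYPT_SECTOR	4096u	/* dm-crypt crypto sector size in this report */

int main(void)
{
	unsigned int offset = 512;			/* buffer offset into its first page */
	unsigned int first  = PAGE_SIZE - offset;	/* bytes available in the first page */
	unsigned int second = CRYPT_SECTOR - first;	/* spill into the following page */

	/* 3584 + 512: the sector spans two pages, i.e. two bvecs unless contiguous */
	printf("crypto sector split: %u + %u bytes\n", first, second);
	return 0;
}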

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Regression: wrong DIO alignment check with dm-crypt
  2022-11-02 14:52   ` [dm-devel] " Keith Busch
@ 2022-11-02 16:14     ` Keith Busch
  -1 siblings, 0 replies; 16+ messages in thread
From: Keith Busch @ 2022-11-02 16:14 UTC (permalink / raw)
  To: Eric Biggers; +Cc: linux-block, dm-devel, Stefan Hajnoczi, Dmitrii Tcvetkov

On Wed, Nov 02, 2022 at 08:52:15AM -0600, Keith Busch wrote:
> [Cc'ing Dmitrii, who also reported the same issue]
> 
> On Tue, Nov 01, 2022 at 08:11:15PM -0700, Eric Biggers wrote:
> > Hi,
> > 
> > I happened to notice the following QEMU bug report:
> > 
> > https://gitlab.com/qemu-project/qemu/-/issues/1290
> > 
> > I believe it's a regression from the following kernel commit:
> > 
> >     commit b1a000d3b8ec582da64bb644be633e5a0beffcbf
> >     Author: Keith Busch <kbusch@kernel.org>
> >     Date:   Fri Jun 10 12:58:29 2022 -0700
> > 
> >         block: relax direct io memory alignment
> > 
> > The bug is that if a dm-crypt device is set up with a crypto sector size (and
> > thus also a logical_block_size) of 4096, then the block layer now lets through
> > direct I/O requests to dm-crypt when the user buffer has only 512-byte
> > alignment, instead of the 4096-bytes expected by dm-crypt in that case.  This is
> > because the dma_alignment of the device-mapper device is only 511 bytes.
> > 
> > This has two effects in this case:
> > 
> >     - The error code for DIO with a misaligned buffer is now EIO, instead of
> >       EINVAL as expected and documented.  This is because the I/O reaches
> >       dm-crypt instead of being rejected by the block layer.
> > 
> >     - STATX_DIOALIGN reports 512 bytes for stx_dio_mem_align, instead of the
> >       correct value of 4096.  (Technically not a regression since STATX_DIOALIGN
> >       is new in v6.1, but still a bug.)
> > 
> > Any thoughts on what the correct fix is here?  Maybe the device-mapper layer
> > needs to set dma_alignment correctly?  Or maybe the block layer needs to set it
> > to 'logical_block_size - 1' by default?
> 
> I think the quick fix is to have the device mapper override the default
> queue stacking limits to align the dma mask to logical block size.

This is what I'm coming up with. Only compile tested (still setting up
an environment to actually run it).

---
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 159c6806c19b..9334e58a4c9f 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -43,6 +43,7 @@
 #include <linux/device-mapper.h>
 
 #include "dm-audit.h"
+#include "dm-core.h"
 
 #define DM_MSG_PREFIX "crypt"
 
@@ -3615,7 +3616,9 @@ static int crypt_iterate_devices(struct dm_target *ti,
 
 static void crypt_io_hints(struct dm_target *ti, struct queue_limits *limits)
 {
+	struct mapped_device *md = dm_table_get_md(ti->table);
 	struct crypt_config *cc = ti->private;
+	struct request_queue *q = md->queue;
 
 	/*
 	 * Unfortunate constraint that is required to avoid the potential
@@ -3630,6 +3633,8 @@ static void crypt_io_hints(struct dm_target *ti, struct queue_limits *limits)
 	limits->physical_block_size =
 		max_t(unsigned, limits->physical_block_size, cc->sector_size);
 	limits->io_min = max_t(unsigned, limits->io_min, cc->sector_size);
+
+	blk_queue_dma_alignment(q, limits->logical_block_size - 1);
 }
 
 static struct target_type crypt_target = {
--

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: Regression: wrong DIO alignment check with dm-crypt
  2022-11-02 16:14     ` [dm-devel] " Keith Busch
@ 2022-11-02 17:03       ` Dmitrii Tcvetkov
  -1 siblings, 0 replies; 16+ messages in thread
From: Dmitrii Tcvetkov @ 2022-11-02 17:03 UTC (permalink / raw)
  To: Keith Busch; +Cc: Eric Biggers, linux-block, dm-devel, Stefan Hajnoczi

On Wed, 2 Nov 2022 10:14:52 -0600
Keith Busch <kbusch@kernel.org> wrote:
> This is what I'm coming up with. Only compile tested (still setting up
> an environment to actually run it).
> 
> ---
> diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
> index 159c6806c19b..9334e58a4c9f 100644
> --- a/drivers/md/dm-crypt.c
> +++ b/drivers/md/dm-crypt.c
> @@ -43,6 +43,7 @@
>  #include <linux/device-mapper.h>
>  
>  #include "dm-audit.h"
> +#include "dm-core.h"
>  
>  #define DM_MSG_PREFIX "crypt"
>  
> @@ -3615,7 +3616,9 @@ static int crypt_iterate_devices(struct dm_target *ti,
>  
>  static void crypt_io_hints(struct dm_target *ti, struct queue_limits *limits)
>  {
> +	struct mapped_device *md = dm_table_get_md(ti->table);
>  	struct crypt_config *cc = ti->private;
> +	struct request_queue *q = md->queue;
>  
>  	/*
>  	 * Unfortunate constraint that is required to avoid the potential
> @@ -3630,6 +3633,8 @@ static void crypt_io_hints(struct dm_target *ti, struct queue_limits *limits)
>  	limits->physical_block_size =
>  		max_t(unsigned, limits->physical_block_size, cc->sector_size);
>  	limits->io_min = max_t(unsigned, limits->io_min, cc->sector_size);
> +
> +	blk_queue_dma_alignment(q, limits->logical_block_size - 1);
>  }
>  
>  static struct target_type crypt_target = {
> --

Applied on top of 6.1-rc3, the issue still reproduces.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [dm-devel] Regression: wrong DIO alignment check with dm-crypt
  2022-11-02  3:11 ` [dm-devel] " Eric Biggers
@ 2022-11-02 18:45   ` Mikulas Patocka
  -1 siblings, 0 replies; 16+ messages in thread
From: Mikulas Patocka @ 2022-11-02 18:45 UTC (permalink / raw)
  To: Eric Biggers; +Cc: linux-block, Keith Busch, dm-devel, Stefan Hajnoczi

Hi


On Tue, 1 Nov 2022, Eric Biggers wrote:

> Hi,
> 
> I happened to notice the following QEMU bug report:
> 
> https://gitlab.com/qemu-project/qemu/-/issues/1290
> 
> I believe it's a regression from the following kernel commit:
> 
>     commit b1a000d3b8ec582da64bb644be633e5a0beffcbf
>     Author: Keith Busch <kbusch@kernel.org>
>     Date:   Fri Jun 10 12:58:29 2022 -0700
> 
>         block: relax direct io memory alignment

I suggest reverting this patch.

> The bug is that if a dm-crypt device is set up with a crypto sector size (and
> thus also a logical_block_size) of 4096, then the block layer now lets through
> direct I/O requests to dm-crypt when the user buffer has only 512-byte
> alignment, instead of the 4096-bytes expected by dm-crypt in that case.  This is
> because the dma_alignment of the device-mapper device is only 511 bytes.

Propagating dma_alignment through the device mapper stack would be hard
(because it is not in struct queue_limits). Perhaps we could set
dma_alignment to be equivalent to logical_block_size if the above patch
cannot be reverted - but then we would hit the issue again when someone
stacks md or other devices over dm.

> This has two effects in this case:
> 
>     - The error code for DIO with a misaligned buffer is now EIO, instead of
>       EINVAL as expected and documented.  This is because the I/O reaches
>       dm-crypt instead of being rejected by the block layer.
> 
>     - STATX_DIOALIGN reports 512 bytes for stx_dio_mem_align, instead of the
>       correct value of 4096.  (Technically not a regression since STATX_DIOALIGN
>       is new in v6.1, but still a bug.)
> 
> Any thoughts on what the correct fix is here?  Maybe the device-mapper layer
> needs to set dma_alignment correctly?  Or maybe the block layer needs to set it
> to 'logical_block_size - 1' by default?
> 
> - Eric
> 

Mikulas


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [dm-devel] Regression: wrong DIO alignment check with dm-crypt
  2022-11-02 18:45   ` Mikulas Patocka
@ 2022-11-02 18:58     ` Keith Busch
  -1 siblings, 0 replies; 16+ messages in thread
From: Keith Busch @ 2022-11-02 18:58 UTC (permalink / raw)
  To: Mikulas Patocka; +Cc: Eric Biggers, linux-block, dm-devel, Stefan Hajnoczi

On Wed, Nov 02, 2022 at 02:45:10PM -0400, Mikulas Patocka wrote:
> On Tue, 1 Nov 2022, Eric Biggers wrote:
> > Hi,
> > 
> > I happened to notice the following QEMU bug report:
> > 
> > https://gitlab.com/qemu-project/qemu/-/issues/1290
> > 
> > I believe it's a regression from the following kernel commit:
> > 
> >     commit b1a000d3b8ec582da64bb644be633e5a0beffcbf
> >     Author: Keith Busch <kbusch@kernel.org>
> >     Date:   Fri Jun 10 12:58:29 2022 -0700
> > 
> >         block: relax direct io memory alignment
> 
> I suggest to revert this patch.

I hope we can make that option a last resort.
 
> > The bug is that if a dm-crypt device is set up with a crypto sector size (and
> > thus also a logical_block_size) of 4096, then the block layer now lets through
> > direct I/O requests to dm-crypt when the user buffer has only 512-byte
> > alignment, instead of the 4096-bytes expected by dm-crypt in that case.  This is
> > because the dma_alignment of the device-mapper device is only 511 bytes.
> 
> Propagating dma_alignment through the device mapper stack would be hard 
> (because it is not in struct queue_limits). Perhaps we could set 
> dma_alignment to be the equivalent to logical_block_size, if the above 
> patch could not be reverted - but the we would hit the issue again when 
> someone stacks md or other devices over dm.

It looks straightforward to relocate the attribute to the
queue_limits. If it stacks correctly, then no one would encounter a
problem no matter what md/dm combination you have.

I have something that looks promising, but I'm trying to give it a
thorough test before sending out another incomplete patch. Hopefully
ready by end of the day.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Regression: wrong DIO alignment check with dm-crypt
  2022-11-02 17:03       ` [dm-devel] " Dmitrii Tcvetkov
@ 2022-11-02 20:09         ` Keith Busch
  -1 siblings, 0 replies; 16+ messages in thread
From: Keith Busch @ 2022-11-02 20:09 UTC (permalink / raw)
  To: Dmitrii Tcvetkov; +Cc: Eric Biggers, linux-block, dm-devel, Stefan Hajnoczi

On Wed, Nov 02, 2022 at 08:03:45PM +0300, Dmitrii Tcvetkov wrote:
> 
> Applied on top of 6.1-rc3, the issue still reproduces.

Yeah, I see that now. I needed to run a dm-crypt setup to figure out how
they're actually doing this, so now I have that up and running.

I think this type of usage will require the dma_alignment to move from
the request_queue to the queue_limits. Here's that, and I've
successfully tested this with cryptsetup. I still need more work to get
mdraid atop that working on my dev machine, but I'll post this now since
it's looking better:

---
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 8bb9eef5310e..e84304959318 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -81,6 +81,7 @@ void blk_set_stacking_limits(struct queue_limits *lim)
 	lim->max_dev_sectors = UINT_MAX;
 	lim->max_write_zeroes_sectors = UINT_MAX;
 	lim->max_zone_append_sectors = UINT_MAX;
+	lim->dma_alignment = 511;
 }
 EXPORT_SYMBOL(blk_set_stacking_limits);
 
@@ -600,6 +601,7 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 
 	t->io_min = max(t->io_min, b->io_min);
 	t->io_opt = lcm_not_zero(t->io_opt, b->io_opt);
+	t->dma_alignment = max(t->dma_alignment, b->dma_alignment);
 
 	/* Set non-power-of-2 compatible chunk_sectors boundary */
 	if (b->chunk_sectors)
@@ -773,7 +775,7 @@ EXPORT_SYMBOL(blk_queue_virt_boundary);
  **/
 void blk_queue_dma_alignment(struct request_queue *q, int mask)
 {
-	q->dma_alignment = mask;
+	q->limits.dma_alignment = mask;
 }
 EXPORT_SYMBOL(blk_queue_dma_alignment);
 
@@ -795,8 +797,8 @@ void blk_queue_update_dma_alignment(struct request_queue *q, int mask)
 {
 	BUG_ON(mask > PAGE_SIZE);
 
-	if (mask > q->dma_alignment)
-		q->dma_alignment = mask;
+	if (mask > q->limits.dma_alignment)
+		q->limits.dma_alignment = mask;
 }
 EXPORT_SYMBOL(blk_queue_update_dma_alignment);
 
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 159c6806c19b..2653516bcdef 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -3630,6 +3630,7 @@ static void crypt_io_hints(struct dm_target *ti, struct queue_limits *limits)
 	limits->physical_block_size =
 		max_t(unsigned, limits->physical_block_size, cc->sector_size);
 	limits->io_min = max_t(unsigned, limits->io_min, cc->sector_size);
+	limits->dma_alignment = limits->logical_block_size - 1;
 }
 
 static struct target_type crypt_target = {
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 854b4745cdd1..69ee5ea29e2f 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -310,6 +310,13 @@ struct queue_limits {
 	unsigned char		discard_misaligned;
 	unsigned char		raid_partial_stripes_expensive;
 	enum blk_zoned_model	zoned;
+
+	/*
+	 * Drivers that set dma_alignment to less than 511 must be prepared to
+	 * handle individual bvec's that are not a multiple of a SECTOR_SIZE
+	 * due to possible offsets.
+	 */
+	unsigned int		dma_alignment;
 };
 
 typedef int (*report_zones_cb)(struct blk_zone *zone, unsigned int idx,
@@ -455,12 +462,6 @@ struct request_queue {
 	unsigned long		nr_requests;	/* Max # of requests */
 
 	unsigned int		dma_pad_mask;
-	/*
-	 * Drivers that set dma_alignment to less than 511 must be prepared to
-	 * handle individual bvec's that are not a multiple of a SECTOR_SIZE
-	 * due to possible offsets.
-	 */
-	unsigned int		dma_alignment;
 
 #ifdef CONFIG_BLK_INLINE_ENCRYPTION
 	struct blk_crypto_profile *crypto_profile;
@@ -1318,7 +1319,7 @@ static inline sector_t bdev_zone_sectors(struct block_device *bdev)
 
 static inline int queue_dma_alignment(const struct request_queue *q)
 {
-	return q ? q->dma_alignment : 511;
+	return q ? q->limits.dma_alignment : 511;
 }
 
 static inline unsigned int bdev_dma_alignment(struct block_device *bdev)
--

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: Regression: wrong DIO alignment check with dm-crypt
  2022-11-02 20:09         ` [dm-devel] " Keith Busch
@ 2022-11-03 16:38           ` Dmitrii Tcvetkov
  -1 siblings, 0 replies; 16+ messages in thread
From: Dmitrii Tcvetkov @ 2022-11-03 16:38 UTC (permalink / raw)
  To: Keith Busch; +Cc: Eric Biggers, linux-block, dm-devel, Stefan Hajnoczi

On Wed, 2 Nov 2022 14:09:54 -0600
Keith Busch <kbusch@kernel.org> wrote:

> On Wed, Nov 02, 2022 at 08:03:45PM +0300, Dmitrii Tcvetkov wrote:
> > 
> > Applied on top of 6.1-rc3, the issue still reproduces.
> 
> Yeah, I see that now. I needed to run a dm-crypt setup to figure out
> how they're actually doing this, so now I have that up and running.
> 
> I think this type of usage will require the dma_alignment to move from
> the request_queue to the queue_limits. Here's that, and I've
> successfully tested this with cryptsetup. I still need more work to
> get mdraid atop that working on my dev machine, but I'll post this
> now since it's looking better:
> 
> ---
> diff --git a/block/blk-settings.c b/block/blk-settings.c
> index 8bb9eef5310e..e84304959318 100644
> --- a/block/blk-settings.c
> +++ b/block/blk-settings.c
> @@ -81,6 +81,7 @@ void blk_set_stacking_limits(struct queue_limits *lim)
>  	lim->max_dev_sectors = UINT_MAX;
>  	lim->max_write_zeroes_sectors = UINT_MAX;
>  	lim->max_zone_append_sectors = UINT_MAX;
> +	lim->dma_alignment = 511;
>  }
>  EXPORT_SYMBOL(blk_set_stacking_limits);
>  
> @@ -600,6 +601,7 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
>  
>  	t->io_min = max(t->io_min, b->io_min);
>  	t->io_opt = lcm_not_zero(t->io_opt, b->io_opt);
> +	t->dma_alignment = max(t->dma_alignment, b->dma_alignment);
>  
>  	/* Set non-power-of-2 compatible chunk_sectors boundary */
>  	if (b->chunk_sectors)
> @@ -773,7 +775,7 @@ EXPORT_SYMBOL(blk_queue_virt_boundary);
>   **/
>  void blk_queue_dma_alignment(struct request_queue *q, int mask)
>  {
> -	q->dma_alignment = mask;
> +	q->limits.dma_alignment = mask;
>  }
>  EXPORT_SYMBOL(blk_queue_dma_alignment);
>  
> @@ -795,8 +797,8 @@ void blk_queue_update_dma_alignment(struct request_queue *q, int mask)
>  {
>  	BUG_ON(mask > PAGE_SIZE);
>  
> -	if (mask > q->dma_alignment)
> -		q->dma_alignment = mask;
> +	if (mask > q->limits.dma_alignment)
> +		q->limits.dma_alignment = mask;
>  }
>  EXPORT_SYMBOL(blk_queue_update_dma_alignment);
>  
> diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
> index 159c6806c19b..2653516bcdef 100644
> --- a/drivers/md/dm-crypt.c
> +++ b/drivers/md/dm-crypt.c
> @@ -3630,6 +3630,7 @@ static void crypt_io_hints(struct dm_target *ti, struct queue_limits *limits)
>  	limits->physical_block_size =
>  		max_t(unsigned, limits->physical_block_size, cc->sector_size);
>  	limits->io_min = max_t(unsigned, limits->io_min, cc->sector_size);
> +	limits->dma_alignment = limits->logical_block_size - 1;
>  }
>  
>  static struct target_type crypt_target = {
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index 854b4745cdd1..69ee5ea29e2f 100644
> --- a/include/linux/blkdev.h
> +++ b/include/linux/blkdev.h
> @@ -310,6 +310,13 @@ struct queue_limits {
>  	unsigned char		discard_misaligned;
>  	unsigned char		raid_partial_stripes_expensive;
>  	enum blk_zoned_model	zoned;
> +
> +	/*
> +	 * Drivers that set dma_alignment to less than 511 must be prepared to
> +	 * handle individual bvec's that are not a multiple of a SECTOR_SIZE
> +	 * due to possible offsets.
> +	 */
> +	unsigned int		dma_alignment;
>  };
>  
>  typedef int (*report_zones_cb)(struct blk_zone *zone, unsigned int idx,
> @@ -455,12 +462,6 @@ struct request_queue {
>  	unsigned long		nr_requests;	/* Max # of requests */
>  
>  	unsigned int		dma_pad_mask;
> -	/*
> -	 * Drivers that set dma_alignment to less than 511 must be prepared to
> -	 * handle individual bvec's that are not a multiple of a SECTOR_SIZE
> -	 * due to possible offsets.
> -	 */
> -	unsigned int		dma_alignment;
>  
>  #ifdef CONFIG_BLK_INLINE_ENCRYPTION
>  	struct blk_crypto_profile *crypto_profile;
> @@ -1318,7 +1319,7 @@ static inline sector_t bdev_zone_sectors(struct block_device *bdev)
>  
>  static inline int queue_dma_alignment(const struct request_queue *q)
>  {
> -	return q ? q->dma_alignment : 511;
> +	return q ? q->limits.dma_alignment : 511;
>  }
>  
>  static inline unsigned int bdev_dma_alignment(struct block_device *bdev)
> --

With this patch the issue doesn't reproduce.
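
For reference, a minimal sketch of how the reported alignment can be checked
from userspace, assuming headers new enough for the v6.1 STATX_DIOALIGN uapi
and a placeholder device path:

/*
 * Query STATX_DIOALIGN on the dm-crypt device and print the DIO alignment
 * requirements.  With the fix applied, stx_dio_mem_align should report
 * 4096 for a 4096-byte crypto sector.  Requires libc/kernel headers that
 * define STATX_DIOALIGN and the stx_dio_* fields (added in v6.1).
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
	struct statx stx;

	if (statx(AT_FDCWD, "/dev/mapper/test-crypt", 0, STATX_DIOALIGN, &stx)) {
		perror("statx");
		return 1;
	}

	if (stx.stx_mask & STATX_DIOALIGN)
		printf("stx_dio_mem_align=%u stx_dio_offset_align=%u\n",
		       stx.stx_dio_mem_align, stx.stx_dio_offset_align);
	else
		printf("STATX_DIOALIGN not reported for this file\n");

	return 0;
}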

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2022-11-04  6:50 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-11-02  3:11 Regression: wrong DIO alignment check with dm-crypt Eric Biggers
2022-11-02  3:11 ` [dm-devel] " Eric Biggers
2022-11-02 14:52 ` Keith Busch
2022-11-02 14:52   ` [dm-devel] " Keith Busch
2022-11-02 16:14   ` Keith Busch
2022-11-02 16:14     ` [dm-devel] " Keith Busch
2022-11-02 17:03     ` Dmitrii Tcvetkov
2022-11-02 17:03       ` [dm-devel] " Dmitrii Tcvetkov
2022-11-02 20:09       ` Keith Busch
2022-11-02 20:09         ` [dm-devel] " Keith Busch
2022-11-03 16:38         ` Dmitrii Tcvetkov
2022-11-03 16:38           ` [dm-devel] " Dmitrii Tcvetkov
2022-11-02 18:45 ` Mikulas Patocka
2022-11-02 18:45   ` Mikulas Patocka
2022-11-02 18:58   ` Keith Busch
2022-11-02 18:58     ` Keith Busch
