* [PATCH v2 00/63] Improve static type checking for request flags
@ 2022-06-29 23:30 Bart Van Assche
  2022-06-29 23:30 ` [PATCH v2 01/63] treewide: Rename enum req_opf into enum req_op Bart Van Assche
                   ` (63 more replies)
  0 siblings, 64 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:30 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche

Hi Jens,

A source of confusion in the block layer is that it can be nontrivial to determine
which type of flags a u32 function argument accepts. This patch series clears
up that confusion for request flags by introducing a new __bitwise type, namely
blk_opf_t. Additionally, the type 'int' is changed into 'enum req_op' where it is
used to hold a request operation.
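
To illustrate the confusion being addressed (a minimal sketch; the function
names below are made up for illustration and are not part of this series):

  /* With a plain integer type it is unclear what 'flags' accepts: */
  void submit_something(struct bio *bio, u32 flags);

  /*
   * With the types introduced by this series the intent is explicit, and
   * sparse can flag arguments of the wrong kind:
   */
  void submit_op(struct bio *bio, enum req_op op);    /* operation only */
  void submit_opf(struct bio *bio, blk_opf_t opf);    /* operation + flags */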

Analysis of the sparse warnings introduced by this conversion resulted in one
bug fix ("blktrace: Trace remapped requests correctly").

Although the number of patches in this series is significant, the risk is low
since most patches only change one integer type (int or u32) into another
integer type of the same size (enum req_op or blk_opf_t).

Please consider this patch series for kernel v5.20.

Thanks,

Bart.

Changes compared to v1:
- Combined request operation and flags into a single variable or argument.
- Changed the type of several enum req_op arguments into blk_opf_t for future
  extensibility.
- Removed the __force casts from tracing code.
- Added a few additional patches that convert additional block drivers.

Bart Van Assche (63):
  treewide: Rename enum req_opf into enum req_op
  block: Use enum req_op where appropriate
  block: Change the type of the last .rw_page() argument
  block: Change the type of req_op() and bio_op() into enum req_op
  block: Introduce the type blk_opf_t
  block: Use the new blk_opf_t type
  block/bfq: Use the new blk_opf_t type
  block/mq-deadline: Use the new blk_opf_t type
  block/kyber: Use the new blk_opf_t type
  blktrace: Trace remapped requests correctly
  blktrace: Use the new blk_opf_t type
  block/brd: Use the enum req_op type
  block/drbd: Use the enum req_op and blk_opf_t types
  block/drbd: Combine two drbd_submit_peer_request() arguments
  block/floppy: Fix a sparse warning
  block/rnbd: Use blk_opf_t where appropriate
  xen-blkback: Use the enum req_op and blk_opf_t types
  block/zram: Use enum req_op where appropriate
  nvdimm-btt: Use the enum req_op type
  um: Use enum req_op where appropriate
  dm/core: Reduce the size of struct dm_io_request
  dm/core: Rename kcopyd_job.rw into kcopyd.op
  dm/core: Combine request operation type and flags
  dm/ebs: Change 'int rw' into 'enum req_op op'
  dm/dm-flakey: Use the new blk_opf_t type
  dm/dm-integrity: Combine request operation and flags
  dm mirror log: Use the new blk_opf_t type
  dm-snap: Combine request operation type and flags
  dm/zone: Use the enum req_op type
  dm/dm-zoned: Use the enum req_op type
  md/core: Combine two sync_page_io() arguments
  md/bcache: Combine two uuid_io() arguments
  md/bcache: Combine two prio_io() arguments
  md/raid1: Use the new blk_opf_t type
  md/raid10: Use the new blk_opf_t type
  md/raid5: Use the enum req_op and blk_opf_t types
  nvme/host: Use the enum req_op and blk_opf_t types
  nvme/target: Use the new blk_opf_t type
  scsi/core: Improve static type checking
  scsi/core: Change the return type of scsi_noretry_cmd() into bool
  scsi/core: Use the new blk_opf_t type
  scsi/device_handlers: Use the new blk_opf_t type
  scsi/ufs: Rename a 'dir' argument into 'op'
  scsi/target: Use the new blk_opf_t type
  mm: Use the new blk_opf_t type
  fs/buffer: Use the new blk_opf_t type
  fs/buffer: Combine two submit_bh() and ll_rw_block() arguments
  fs/direct-io: Reduce the size of struct dio
  fs/mpage: Use the new blk_opf_t type
  fs/btrfs: Use the enum req_op and blk_opf_t types
  fs/ext4: Use the new blk_opf_t type
  fs/f2fs: Use the enum req_op and blk_opf_t types
  fs/gfs2: Use the enum req_op and blk_opf_t types
  fs/hfsplus: Use the enum req_op and blk_opf_t types
  fs/iomap: Use the new blk_opf_t type
  fs/jbd2: Fix the documentation of the jbd2_write_superblock() callers
  fs/nfs: Use enum req_op where appropriate
  fs/nilfs2: Use the enum req_op and blk_opf_t types
  fs/ntfs3: Use enum req_op where appropriate
  fs/ocfs2: Use the enum req_op and blk_opf_t types
  PM: Use the enum req_op and blk_opf_t types
  fs/xfs: Use the enum req_op and blk_opf_t types
  fs/zonefs: Use the enum req_op type for request operations

 arch/um/drivers/ubd_kern.c                  |   4 +-
 block/bfq-cgroup.c                          |  16 +--
 block/bfq-iosched.c                         |   8 +-
 block/bfq-iosched.h                         |   8 +-
 block/bio.c                                 |  10 +-
 block/blk-cgroup-rwstat.h                   |   2 +-
 block/blk-core.c                            |   8 +-
 block/blk-flush.c                           |   6 +-
 block/blk-merge.c                           |   8 +-
 block/blk-mq-debugfs.c                      |   6 +-
 block/blk-mq.c                              |  15 +--
 block/blk-mq.h                              |   6 +-
 block/blk-throttle.c                        |   7 +-
 block/blk-wbt.c                             |  18 +--
 block/blk-zoned.c                           |   7 +-
 block/blk.h                                 |   2 +-
 block/elevator.h                            |   2 +-
 block/fops.c                                |   8 +-
 block/kyber-iosched.c                       |   6 +-
 block/mq-deadline.c                         |   2 +-
 drivers/block/brd.c                         |   4 +-
 drivers/block/drbd/drbd_actlog.c            |   9 +-
 drivers/block/drbd/drbd_bitmap.c            |   3 +-
 drivers/block/drbd/drbd_int.h               |   5 +-
 drivers/block/drbd/drbd_receiver.c          |  24 ++--
 drivers/block/drbd/drbd_worker.c            |   2 +-
 drivers/block/floppy.c                      |   2 +-
 drivers/block/null_blk/main.c               |   9 +-
 drivers/block/null_blk/null_blk.h           |  12 +-
 drivers/block/null_blk/trace.h              |   2 +-
 drivers/block/null_blk/zoned.c              |   4 +-
 drivers/block/paride/pd.c                   |   2 +
 drivers/block/rnbd/rnbd-proto.h             |   7 +-
 drivers/block/xen-blkback/blkback.c         |   6 +-
 drivers/block/zram/zram_drv.c               |   4 +-
 drivers/md/bcache/super.c                   |  25 ++--
 drivers/md/dm-bufio.c                       |  26 ++---
 drivers/md/dm-ebs-target.c                  |  15 +--
 drivers/md/dm-flakey.c                      |   8 +-
 drivers/md/dm-integrity.c                   |  76 ++++++-------
 drivers/md/dm-io.c                          |  38 +++----
 drivers/md/dm-kcopyd.c                      |  26 ++---
 drivers/md/dm-log.c                         |   8 +-
 drivers/md/dm-raid.c                        |   2 +-
 drivers/md/dm-raid1.c                       |  12 +-
 drivers/md/dm-snap-persistent.c             |  25 ++--
 drivers/md/dm-writecache.c                  |  12 +-
 drivers/md/dm-zone.c                        |   2 +-
 drivers/md/dm-zoned-metadata.c              |   5 +-
 drivers/md/dm-zoned.h                       |   2 +-
 drivers/md/dm.c                             |  12 +-
 drivers/md/md-bitmap.c                      |   6 +-
 drivers/md/md.c                             |  10 +-
 drivers/md/md.h                             |   3 +-
 drivers/md/raid1.c                          |  12 +-
 drivers/md/raid10.c                         |  20 ++--
 drivers/md/raid5-cache.c                    |  12 +-
 drivers/md/raid5-ppl.c                      |  12 +-
 drivers/md/raid5.c                          |   5 +-
 drivers/nvdimm/btt.c                        |   4 +-
 drivers/nvdimm/pmem.c                       |   2 +-
 drivers/nvme/host/ioctl.c                   |   4 +-
 drivers/nvme/host/nvme.h                    |   2 +-
 drivers/nvme/target/io-cmd-bdev.c           |   3 +-
 drivers/nvme/target/zns.c                   |   6 +-
 drivers/scsi/device_handler/scsi_dh_alua.c  |   4 +-
 drivers/scsi/device_handler/scsi_dh_emc.c   |   2 +-
 drivers/scsi/device_handler/scsi_dh_hp_sw.c |   4 +-
 drivers/scsi/device_handler/scsi_dh_rdac.c  |   2 +-
 drivers/scsi/scsi_error.c                   |  16 +--
 drivers/scsi/scsi_lib.c                     |  12 +-
 drivers/scsi/scsi_priv.h                    |   2 +-
 drivers/scsi/sd_zbc.c                       |   2 +-
 drivers/target/target_core_iblock.c         |   4 +-
 drivers/ufs/core/ufshpb.c                   |   7 +-
 fs/btrfs/check-integrity.c                  |   4 +-
 fs/btrfs/compression.c                      |   6 +-
 fs/btrfs/compression.h                      |   2 +-
 fs/btrfs/extent_io.c                        |  18 +--
 fs/btrfs/inode.c                            |   4 +-
 fs/btrfs/raid56.c                           |   4 +-
 fs/buffer.c                                 |  56 ++++-----
 fs/direct-io.c                              |  37 +++---
 fs/ext4/ext4.h                              |   8 +-
 fs/ext4/fast_commit.c                       |   4 +-
 fs/ext4/mmp.c                               |   2 +-
 fs/ext4/super.c                             |  20 ++--
 fs/f2fs/data.c                              |  11 +-
 fs/f2fs/f2fs.h                              |   6 +-
 fs/f2fs/node.c                              |   2 +-
 fs/f2fs/segment.c                           |   2 +-
 fs/gfs2/bmap.c                              |   5 +-
 fs/gfs2/dir.c                               |   5 +-
 fs/gfs2/log.c                               |   4 +-
 fs/gfs2/log.h                               |   2 +-
 fs/gfs2/lops.c                              |   4 +-
 fs/gfs2/lops.h                              |   2 +-
 fs/gfs2/meta_io.c                           |  18 ++-
 fs/gfs2/quota.c                             |   2 +-
 fs/hfsplus/hfsplus_fs.h                     |   2 +-
 fs/hfsplus/part_tbl.c                       |   5 +-
 fs/hfsplus/super.c                          |   4 +-
 fs/hfsplus/wrapper.c                        |  12 +-
 fs/iomap/direct-io.c                        |   8 +-
 fs/isofs/compress.c                         |   2 +-
 fs/jbd2/commit.c                            |   8 +-
 fs/jbd2/journal.c                           |  19 ++--
 fs/jbd2/recovery.c                          |   4 +-
 fs/mpage.c                                  |   2 +-
 fs/nfs/blocklayout/blocklayout.c            |  13 +--
 fs/nilfs2/btnode.c                          |   8 +-
 fs/nilfs2/btnode.h                          |   4 +-
 fs/nilfs2/btree.c                           |   6 +-
 fs/nilfs2/gcinode.c                         |   7 +-
 fs/nilfs2/mdt.c                             |  19 ++--
 fs/ntfs/aops.c                              |   6 +-
 fs/ntfs/compress.c                          |   2 +-
 fs/ntfs/file.c                              |   2 +-
 fs/ntfs/logfile.c                           |   2 +-
 fs/ntfs/mft.c                               |   4 +-
 fs/ntfs3/file.c                             |   2 +-
 fs/ntfs3/fsntfs.c                           |   2 +-
 fs/ntfs3/inode.c                            |   2 +-
 fs/ntfs3/ntfs_fs.h                          |   2 +-
 fs/ocfs2/aops.c                             |   2 +-
 fs/ocfs2/buffer_head_io.c                   |   8 +-
 fs/ocfs2/cluster/heartbeat.c                |   9 +-
 fs/ocfs2/super.c                            |   2 +-
 fs/reiserfs/inode.c                         |   4 +-
 fs/reiserfs/journal.c                       |  12 +-
 fs/reiserfs/stree.c                         |   4 +-
 fs/reiserfs/super.c                         |   2 +-
 fs/udf/dir.c                                |   2 +-
 fs/udf/directory.c                          |   2 +-
 fs/udf/inode.c                              |   2 +-
 fs/ufs/balloc.c                             |   2 +-
 fs/xfs/xfs_bio_io.c                         |   2 +-
 fs/xfs/xfs_buf.c                            |   4 +-
 fs/xfs/xfs_linux.h                          |   2 +-
 fs/xfs/xfs_log_recover.c                    |   2 +-
 fs/zonefs/super.c                           |   5 +-
 fs/zonefs/trace.h                           |   4 +-
 include/linux/bio.h                         |  10 +-
 include/linux/blk-mq.h                      |  12 +-
 include/linux/blk_types.h                   | 119 +++++++++++---------
 include/linux/blkdev.h                      |  12 +-
 include/linux/blktrace_api.h                |   3 +-
 include/linux/buffer_head.h                 |   9 +-
 include/linux/dm-io.h                       |   4 +-
 include/linux/jbd2.h                        |   2 +-
 include/linux/writeback.h                   |   4 +-
 include/scsi/scsi_cmnd.h                    |   4 +-
 include/scsi/scsi_device.h                  |   2 +-
 include/trace/events/f2fs.h                 |  22 ++--
 include/trace/events/jbd2.h                 |  12 +-
 include/trace/events/nilfs2.h               |   4 +-
 kernel/power/swap.c                         |  29 +++--
 kernel/trace/blktrace.c                     |  51 ++++-----
 158 files changed, 722 insertions(+), 716 deletions(-)


* [PATCH v2 01/63] treewide: Rename enum req_opf into enum req_op
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
@ 2022-06-29 23:30 ` Bart Van Assche
  2022-06-29 23:30 ` [PATCH v2 02/63] block: Use enum req_op where appropriate Bart Van Assche
                   ` (62 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:30 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche, Ming Lei

The type name enum req_opf is misleading since it suggests that values of
this type include both an operation type and flags. Since values of this
type represent an operation only, change the type name into enum req_op.

Convert the enum req_op documentation into kernel-doc format. Move a few
definitions such that the enum req_op documentation occurs just above
the enum req_op definition.

The name "req_opf" was introduced by commit ef295ecf090d ("block: better op
and flags encoding").

Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 block/blk-zoned.c                 |  7 +++----
 drivers/block/null_blk/main.c     |  9 ++++-----
 drivers/block/null_blk/null_blk.h | 12 +++++-------
 drivers/block/null_blk/trace.h    |  2 +-
 drivers/block/null_blk/zoned.c    |  4 ++--
 drivers/md/dm-integrity.c         |  2 +-
 drivers/nvme/target/zns.c         |  4 ++--
 drivers/scsi/sd_zbc.c             |  2 +-
 drivers/ufs/core/ufshpb.c         |  5 ++---
 fs/zonefs/super.c                 |  5 ++---
 fs/zonefs/trace.h                 |  2 +-
 include/linux/blk_types.h         | 16 ++++++++--------
 include/linux/blkdev.h            |  2 +-
 13 files changed, 33 insertions(+), 39 deletions(-)

diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 38cd840d8838..ac0b9dfd2321 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -257,9 +257,8 @@ static int blkdev_zone_reset_all(struct block_device *bdev, gfp_t gfp_mask)
  *    The operation to execute on each zone can be a zone reset, open, close
  *    or finish request.
  */
-int blkdev_zone_mgmt(struct block_device *bdev, enum req_opf op,
-		     sector_t sector, sector_t nr_sectors,
-		     gfp_t gfp_mask)
+int blkdev_zone_mgmt(struct block_device *bdev, enum req_op op,
+		     sector_t sector, sector_t nr_sectors, gfp_t gfp_mask)
 {
 	struct request_queue *q = bdev_get_queue(bdev);
 	sector_t zone_sectors = blk_queue_zone_sectors(q);
@@ -398,7 +397,7 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
 	void __user *argp = (void __user *)arg;
 	struct request_queue *q;
 	struct blk_zone_range zrange;
-	enum req_opf op;
+	enum req_op op;
 	int ret;
 
 	if (!argp)
diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index d695ea29efa6..4f5782637639 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -1310,7 +1310,7 @@ static inline blk_status_t null_handle_badblocks(struct nullb_cmd *cmd,
 }
 
 static inline blk_status_t null_handle_memory_backed(struct nullb_cmd *cmd,
-						     enum req_opf op,
+						     enum req_op op,
 						     sector_t sector,
 						     sector_t nr_sectors)
 {
@@ -1381,9 +1381,8 @@ static inline void nullb_complete_cmd(struct nullb_cmd *cmd)
 	}
 }
 
-blk_status_t null_process_cmd(struct nullb_cmd *cmd,
-			      enum req_opf op, sector_t sector,
-			      unsigned int nr_sectors)
+blk_status_t null_process_cmd(struct nullb_cmd *cmd, enum req_op op,
+			      sector_t sector, unsigned int nr_sectors)
 {
 	struct nullb_device *dev = cmd->nq->dev;
 	blk_status_t ret;
@@ -1401,7 +1400,7 @@ blk_status_t null_process_cmd(struct nullb_cmd *cmd,
 }
 
 static blk_status_t null_handle_cmd(struct nullb_cmd *cmd, sector_t sector,
-				    sector_t nr_sectors, enum req_opf op)
+				    sector_t nr_sectors, enum req_op op)
 {
 	struct nullb_device *dev = cmd->nq->dev;
 	struct nullb *nullb = dev->nullb;
diff --git a/drivers/block/null_blk/null_blk.h b/drivers/block/null_blk/null_blk.h
index 8359b43842f2..6fbf0a1b2622 100644
--- a/drivers/block/null_blk/null_blk.h
+++ b/drivers/block/null_blk/null_blk.h
@@ -136,9 +136,8 @@ struct nullb {
 
 blk_status_t null_handle_discard(struct nullb_device *dev, sector_t sector,
 				 sector_t nr_sectors);
-blk_status_t null_process_cmd(struct nullb_cmd *cmd,
-			      enum req_opf op, sector_t sector,
-			      unsigned int nr_sectors);
+blk_status_t null_process_cmd(struct nullb_cmd *cmd, enum req_op op,
+			      sector_t sector, unsigned int nr_sectors);
 
 #ifdef CONFIG_BLK_DEV_ZONED
 int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q);
@@ -146,9 +145,8 @@ int null_register_zoned_dev(struct nullb *nullb);
 void null_free_zoned_dev(struct nullb_device *dev);
 int null_report_zones(struct gendisk *disk, sector_t sector,
 		      unsigned int nr_zones, report_zones_cb cb, void *data);
-blk_status_t null_process_zoned_cmd(struct nullb_cmd *cmd,
-				    enum req_opf op, sector_t sector,
-				    sector_t nr_sectors);
+blk_status_t null_process_zoned_cmd(struct nullb_cmd *cmd, enum req_op op,
+				    sector_t sector, sector_t nr_sectors);
 size_t null_zone_valid_read_len(struct nullb *nullb,
 				sector_t sector, unsigned int len);
 #else
@@ -164,7 +162,7 @@ static inline int null_register_zoned_dev(struct nullb *nullb)
 }
 static inline void null_free_zoned_dev(struct nullb_device *dev) {}
 static inline blk_status_t null_process_zoned_cmd(struct nullb_cmd *cmd,
-			enum req_opf op, sector_t sector, sector_t nr_sectors)
+			enum req_op op, sector_t sector, sector_t nr_sectors)
 {
 	return BLK_STS_NOTSUPP;
 }
diff --git a/drivers/block/null_blk/trace.h b/drivers/block/null_blk/trace.h
index 86d6c12c603c..6b2b370e786f 100644
--- a/drivers/block/null_blk/trace.h
+++ b/drivers/block/null_blk/trace.h
@@ -36,7 +36,7 @@ TRACE_EVENT(nullb_zone_op,
 	    TP_ARGS(cmd, zone_no, zone_cond),
 	    TP_STRUCT__entry(
 		__array(char, disk, DISK_NAME_LEN)
-		__field(enum req_opf, op)
+		__field(enum req_op, op)
 		__field(unsigned int, zone_no)
 		__field(unsigned int, zone_cond)
 	    ),
diff --git a/drivers/block/null_blk/zoned.c b/drivers/block/null_blk/zoned.c
index 2fdd7b20c224..99f45bb85080 100644
--- a/drivers/block/null_blk/zoned.c
+++ b/drivers/block/null_blk/zoned.c
@@ -600,7 +600,7 @@ static blk_status_t null_reset_zone(struct nullb_device *dev,
 	return BLK_STS_OK;
 }
 
-static blk_status_t null_zone_mgmt(struct nullb_cmd *cmd, enum req_opf op,
+static blk_status_t null_zone_mgmt(struct nullb_cmd *cmd, enum req_op op,
 				   sector_t sector)
 {
 	struct nullb_device *dev = cmd->nq->dev;
@@ -653,7 +653,7 @@ static blk_status_t null_zone_mgmt(struct nullb_cmd *cmd, enum req_opf op,
 	return ret;
 }
 
-blk_status_t null_process_zoned_cmd(struct nullb_cmd *cmd, enum req_opf op,
+blk_status_t null_process_zoned_cmd(struct nullb_cmd *cmd, enum req_op op,
 				    sector_t sector, sector_t nr_sectors)
 {
 	struct nullb_device *dev;
diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
index 3d5a0ce123c9..148978ad03a8 100644
--- a/drivers/md/dm-integrity.c
+++ b/drivers/md/dm-integrity.c
@@ -298,7 +298,7 @@ struct dm_integrity_io {
 	struct work_struct work;
 
 	struct dm_integrity_c *ic;
-	enum req_opf op;
+	enum req_op op;
 	bool fua;
 
 	struct dm_integrity_range range;
diff --git a/drivers/nvme/target/zns.c b/drivers/nvme/target/zns.c
index 82b61acf7a72..62afb7936132 100644
--- a/drivers/nvme/target/zns.c
+++ b/drivers/nvme/target/zns.c
@@ -308,7 +308,7 @@ void nvmet_bdev_execute_zone_mgmt_recv(struct nvmet_req *req)
 	queue_work(zbd_wq, &req->z.zmgmt_work);
 }
 
-static inline enum req_opf zsa_req_op(u8 zsa)
+static inline enum req_op zsa_req_op(u8 zsa)
 {
 	switch (zsa) {
 	case NVME_ZONE_OPEN:
@@ -465,7 +465,7 @@ static void nvmet_bdev_zmgmt_send_work(struct work_struct *w)
 {
 	struct nvmet_req *req = container_of(w, struct nvmet_req, z.zmgmt_work);
 	sector_t sect = nvmet_lba_to_sect(req->ns, req->cmd->zms.slba);
-	enum req_opf op = zsa_req_op(req->cmd->zms.zsa);
+	enum req_op op = zsa_req_op(req->cmd->zms.zsa);
 	struct block_device *bdev = req->ns->bdev;
 	sector_t zone_sectors = bdev_zone_sectors(bdev);
 	u16 status = NVME_SC_SUCCESS;
diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c
index 6acc4f406eb8..6d3658e36c84 100644
--- a/drivers/scsi/sd_zbc.c
+++ b/drivers/scsi/sd_zbc.c
@@ -529,7 +529,7 @@ static unsigned int sd_zbc_zone_wp_update(struct scsi_cmnd *cmd,
 	struct request *rq = scsi_cmd_to_rq(cmd);
 	struct scsi_disk *sdkp = scsi_disk(rq->q->disk);
 	unsigned int zno = blk_rq_zone_no(rq);
-	enum req_opf op = req_op(rq);
+	enum req_op op = req_op(rq);
 	unsigned long flags;
 
 	/*
diff --git a/drivers/ufs/core/ufshpb.c b/drivers/ufs/core/ufshpb.c
index de2bb8401bc4..24f1ee82c215 100644
--- a/drivers/ufs/core/ufshpb.c
+++ b/drivers/ufs/core/ufshpb.c
@@ -433,9 +433,8 @@ int ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
 	return 0;
 }
 
-static struct ufshpb_req *ufshpb_get_req(struct ufshpb_lu *hpb,
-					 int rgn_idx, enum req_opf dir,
-					 bool atomic)
+static struct ufshpb_req *ufshpb_get_req(struct ufshpb_lu *hpb, int rgn_idx,
+					 enum req_op dir, bool atomic)
 {
 	struct ufshpb_req *rq;
 	struct request *req;
diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
index 053299758deb..59e77992443c 100644
--- a/fs/zonefs/super.c
+++ b/fs/zonefs/super.c
@@ -60,8 +60,7 @@ static void zonefs_account_active(struct inode *inode)
 	}
 }
 
-static inline int zonefs_zone_mgmt(struct inode *inode,
-				   enum req_opf op)
+static inline int zonefs_zone_mgmt(struct inode *inode, enum req_op op)
 {
 	struct zonefs_inode_info *zi = ZONEFS_I(inode);
 	int ret;
@@ -525,7 +524,7 @@ static int zonefs_file_truncate(struct inode *inode, loff_t isize)
 {
 	struct zonefs_inode_info *zi = ZONEFS_I(inode);
 	loff_t old_isize;
-	enum req_opf op;
+	enum req_op op;
 	int ret = 0;
 
 	/*
diff --git a/fs/zonefs/trace.h b/fs/zonefs/trace.h
index f369d7d50303..21501da764bd 100644
--- a/fs/zonefs/trace.h
+++ b/fs/zonefs/trace.h
@@ -20,7 +20,7 @@
 #define show_dev(dev) MAJOR(dev), MINOR(dev)
 
 TRACE_EVENT(zonefs_zone_mgmt,
-	    TP_PROTO(struct inode *inode, enum req_opf op),
+	    TP_PROTO(struct inode *inode, enum req_op op),
 	    TP_ARGS(inode, op),
 	    TP_STRUCT__entry(
 			     __field(dev_t, dev)
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index a24d4078fb21..0e6a2af7ed3d 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -337,8 +337,12 @@ enum {
 
 typedef __u32 __bitwise blk_mq_req_flags_t;
 
-/*
- * Operations and flags common to the bio and request structures.
+#define REQ_OP_BITS	8
+#define REQ_OP_MASK	((1 << REQ_OP_BITS) - 1)
+#define REQ_FLAG_BITS	24
+
+/**
+ * enum req_op - Operations common to the bio and request structures.
  * We use 8 bits for encoding the operation, and the remaining 24 for flags.
  *
  * The least significant bit of the operation number indicates the data
@@ -350,11 +354,7 @@ typedef __u32 __bitwise blk_mq_req_flags_t;
  * If a operation does not transfer data the least significant bit has no
  * meaning.
  */
-#define REQ_OP_BITS	8
-#define REQ_OP_MASK	((1 << REQ_OP_BITS) - 1)
-#define REQ_FLAG_BITS	24
-
-enum req_opf {
+enum req_op {
 	/* read sectors from the device */
 	REQ_OP_READ		= 0,
 	/* write sectors to the device */
@@ -509,7 +509,7 @@ static inline bool op_is_discard(unsigned int op)
  * due to its different handling in the block layer and device response in
  * case of command failure.
  */
-static inline bool op_is_zone_mgmt(enum req_opf op)
+static inline bool op_is_zone_mgmt(enum req_op op)
 {
 	switch (op & REQ_OP_MASK) {
 	case REQ_OP_ZONE_RESET:
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index b9a94c53c6cd..a8378cae93d9 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -299,7 +299,7 @@ void blk_queue_set_zoned(struct gendisk *disk, enum blk_zoned_model model);
 int blkdev_report_zones(struct block_device *bdev, sector_t sector,
 			unsigned int nr_zones, report_zones_cb cb, void *data);
 unsigned int blkdev_nr_zones(struct gendisk *disk);
-extern int blkdev_zone_mgmt(struct block_device *bdev, enum req_opf op,
+extern int blkdev_zone_mgmt(struct block_device *bdev, enum req_op op,
 			    sector_t sectors, sector_t nr_sectors,
 			    gfp_t gfp_mask);
 int blk_revalidate_disk_zones(struct gendisk *disk,

* [PATCH v2 02/63] block: Use enum req_op where appropriate
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
  2022-06-29 23:30 ` [PATCH v2 01/63] treewide: Rename enum req_opf into enum req_op Bart Van Assche
@ 2022-06-29 23:30 ` Bart Van Assche
  2022-06-29 23:30 ` [PATCH v2 03/63] block: Change the type of the last .rw_page() argument Bart Van Assche
                   ` (61 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:30 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche, Ming Lei

Change the type of the arguments that are used to pass a REQ_OP_* value
from int or unsigned int into enum req_op to improve static type
checking.

Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 block/blk-core.c          | 6 +++---
 block/blk-mq-debugfs.c    | 2 +-
 block/blk-throttle.c      | 7 ++++---
 block/blk-wbt.c           | 2 +-
 block/blk.h               | 2 +-
 include/linux/blk_types.h | 2 +-
 include/linux/blkdev.h    | 6 +++---
 7 files changed, 14 insertions(+), 13 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 5ad7bd93077c..d11945207bcf 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -136,7 +136,7 @@ static const char *const blk_op_name[] = {
  * string format. Useful in the debugging and tracing bio or request. For
  * invalid REQ_OP_XXX it returns string "UNKNOWN".
  */
-inline const char *blk_op_str(unsigned int op)
+inline const char *blk_op_str(enum req_op op)
 {
 	const char *op_str = "UNKNOWN";
 
@@ -954,7 +954,7 @@ void update_io_ticks(struct block_device *part, unsigned long now, bool end)
 }
 
 unsigned long bdev_start_io_acct(struct block_device *bdev,
-				 unsigned int sectors, unsigned int op,
+				 unsigned int sectors, enum req_op op,
 				 unsigned long start_time)
 {
 	const int sgrp = op_stat_group(op);
@@ -995,7 +995,7 @@ unsigned long bio_start_io_acct(struct bio *bio)
 }
 EXPORT_SYMBOL_GPL(bio_start_io_acct);
 
-void bdev_end_io_acct(struct block_device *bdev, unsigned int op,
+void bdev_end_io_acct(struct block_device *bdev, enum req_op op,
 		      unsigned long start_time)
 {
 	const int sgrp = op_stat_group(op);
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index b80fae7ab1d9..745a78f412d1 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -304,7 +304,7 @@ static const char *blk_mq_rq_state_name(enum mq_rq_state rq_state)
 int __blk_mq_debugfs_rq_show(struct seq_file *m, struct request *rq)
 {
 	const struct blk_mq_ops *const mq_ops = rq->q->mq_ops;
-	const unsigned int op = req_op(rq);
+	const enum req_op op = req_op(rq);
 	const char *op_str = blk_op_str(op);
 
 	seq_printf(m, "%p {.op=", rq);
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 139b2d7a99e2..9f5fe62afff9 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -2203,8 +2203,9 @@ bool __blk_throtl_bio(struct bio *bio)
 
 #ifdef CONFIG_BLK_DEV_THROTTLING_LOW
 static void throtl_track_latency(struct throtl_data *td, sector_t size,
-	int op, unsigned long time)
+				 enum req_op op, unsigned long time)
 {
+	const bool rw = op_is_write(op);
 	struct latency_bucket *latency;
 	int index;
 
@@ -2215,10 +2216,10 @@ static void throtl_track_latency(struct throtl_data *td, sector_t size,
 
 	index = request_bucket_index(size);
 
-	latency = get_cpu_ptr(td->latency_buckets[op]);
+	latency = get_cpu_ptr(td->latency_buckets[rw]);
 	latency[index].total_latency += time;
 	latency[index].samples++;
-	put_cpu_ptr(td->latency_buckets[op]);
+	put_cpu_ptr(td->latency_buckets[rw]);
 }
 
 void blk_throtl_stat_add(struct request *rq, u64 time_ns)
diff --git a/block/blk-wbt.c b/block/blk-wbt.c
index 0c119be0e813..7bf09ae06577 100644
--- a/block/blk-wbt.c
+++ b/block/blk-wbt.c
@@ -670,7 +670,7 @@ u64 wbt_default_latency_nsec(struct request_queue *q)
 
 static int wbt_data_dir(const struct request *rq)
 {
-	const int op = req_op(rq);
+	const enum req_op op = req_op(rq);
 
 	if (op == REQ_OP_READ)
 		return READ;
diff --git a/block/blk.h b/block/blk.h
index 58ad50cacd2d..6eeddab066a1 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -160,7 +160,7 @@ static inline bool blk_discard_mergable(struct request *req)
 }
 
 static inline unsigned int blk_queue_get_max_sectors(struct request_queue *q,
-						     int op)
+						     enum req_op op)
 {
 	if (unlikely(op == REQ_OP_DISCARD || op == REQ_OP_SECURE_ERASE))
 		return min(q->limits.max_discard_sectors,
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 0e6a2af7ed3d..cce8768bc00b 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -522,7 +522,7 @@ static inline bool op_is_zone_mgmt(enum req_op op)
 	}
 }
 
-static inline int op_stat_group(unsigned int op)
+static inline int op_stat_group(enum req_op op)
 {
 	if (op_is_discard(op))
 		return STAT_DISCARD;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index a8378cae93d9..bd395daa057c 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -881,7 +881,7 @@ extern void blk_queue_exit(struct request_queue *q);
 extern void blk_sync_queue(struct request_queue *q);
 
 /* Helper to convert REQ_OP_XXX to its string format XXX */
-extern const char *blk_op_str(unsigned int op);
+extern const char *blk_op_str(enum req_op op);
 
 int blk_status_to_errno(blk_status_t status);
 blk_status_t errno_to_blk_status(int errno);
@@ -1466,9 +1466,9 @@ static inline void blk_wake_io_task(struct task_struct *waiter)
 }
 
 unsigned long bdev_start_io_acct(struct block_device *bdev,
-				 unsigned int sectors, unsigned int op,
+				 unsigned int sectors, enum req_op op,
 				 unsigned long start_time);
-void bdev_end_io_acct(struct block_device *bdev, unsigned int op,
+void bdev_end_io_acct(struct block_device *bdev, enum req_op op,
 		unsigned long start_time);
 
 void bio_start_io_acct_time(struct bio *bio, unsigned long start_time);

* [PATCH v2 03/63] block: Change the type of the last .rw_page() argument
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
  2022-06-29 23:30 ` [PATCH v2 01/63] treewide: Rename enum req_opf into enum req_op Bart Van Assche
  2022-06-29 23:30 ` [PATCH v2 02/63] block: Use enum req_op where appropriate Bart Van Assche
@ 2022-06-29 23:30 ` Bart Van Assche
  2022-06-29 23:30 ` [PATCH v2 04/63] block: Change the type of req_op() and bio_op() into enum req_op Bart Van Assche
                   ` (60 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:30 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Tejun Heo,
	Minchan Kim, Dan Williams

All .rw_page() callers pass an enum req_op value as the last argument. Make
this explicit by changing the type of the last argument into enum req_op.
See also commit 3f289dcb4b26 ("block: make bdev_ops->rw_page() take a
REQ_OP instead of bool").

Cc: Tejun Heo <tj@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/block/brd.c           | 2 +-
 drivers/block/zram/zram_drv.c | 2 +-
 drivers/nvdimm/btt.c          | 2 +-
 drivers/nvdimm/pmem.c         | 2 +-
 include/linux/blkdev.h        | 2 +-
 5 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/block/brd.c b/drivers/block/brd.c
index 9e26d5e769f3..7b82876af36e 100644
--- a/drivers/block/brd.c
+++ b/drivers/block/brd.c
@@ -310,7 +310,7 @@ static void brd_submit_bio(struct bio *bio)
 }
 
 static int brd_rw_page(struct block_device *bdev, sector_t sector,
-		       struct page *page, unsigned int op)
+		       struct page *page, enum req_op op)
 {
 	struct brd_device *brd = bdev->bd_disk->private_data;
 	int err;
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index e5233c911e43..a35b86c58aa2 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1631,7 +1631,7 @@ static void zram_slot_free_notify(struct block_device *bdev,
 }
 
 static int zram_rw_page(struct block_device *bdev, sector_t sector,
-		       struct page *page, unsigned int op)
+		       struct page *page, enum req_op op)
 {
 	int offset, ret;
 	u32 index;
diff --git a/drivers/nvdimm/btt.c b/drivers/nvdimm/btt.c
index 5e622c0d4b66..dfbf73145d16 100644
--- a/drivers/nvdimm/btt.c
+++ b/drivers/nvdimm/btt.c
@@ -1483,7 +1483,7 @@ static void btt_submit_bio(struct bio *bio)
 }
 
 static int btt_rw_page(struct block_device *bdev, sector_t sector,
-		struct page *page, unsigned int op)
+		struct page *page, enum req_op op)
 {
 	struct btt *btt = bdev->bd_disk->private_data;
 	int rc;
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index a72b81fa3242..f36efcc11f67 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -239,7 +239,7 @@ static void pmem_submit_bio(struct bio *bio)
 }
 
 static int pmem_rw_page(struct block_device *bdev, sector_t sector,
-		       struct page *page, unsigned int op)
+		       struct page *page, enum req_op op)
 {
 	struct pmem_device *pmem = bdev->bd_disk->private_data;
 	blk_status_t rc;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index bd395daa057c..0c4a53104705 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1413,7 +1413,7 @@ struct block_device_operations {
 			unsigned int flags);
 	int (*open) (struct block_device *, fmode_t);
 	void (*release) (struct gendisk *, fmode_t);
-	int (*rw_page)(struct block_device *, sector_t, struct page *, unsigned int);
+	int (*rw_page)(struct block_device *, sector_t, struct page *, enum req_op);
 	int (*ioctl) (struct block_device *, fmode_t, unsigned, unsigned long);
 	int (*compat_ioctl) (struct block_device *, fmode_t, unsigned, unsigned long);
 	unsigned int (*check_events) (struct gendisk *disk,

* [PATCH v2 04/63] block: Change the type of req_op() and bio_op() into enum req_op
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (2 preceding siblings ...)
  2022-06-29 23:30 ` [PATCH v2 03/63] block: Change the type of the last .rw_page() argument Bart Van Assche
@ 2022-06-29 23:30 ` Bart Van Assche
  2022-06-29 23:30 ` [PATCH v2 05/63] block: Introduce the type blk_opf_t Bart Van Assche
                   ` (59 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:30 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche, Ming Lei

Improve static type checking by changing the type of the value returned by
req_op() and bio_op() from unsigned int into enum req_op. Insert
'default: break;' in switch statements on the enum req_op type to prevent the
compiler from warning about enum values left unhandled in these switch
statements.
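
For example (an illustrative sketch, not code from this patch), a switch on a
value of type enum req_op that intentionally handles only a few operations
would otherwise trigger gcc's -Wswitch warning:

  switch (req_op(rq)) {
  case REQ_OP_DISCARD:
  case REQ_OP_SECURE_ERASE:
          /* handle the discard-like operations */
          break;
  default:
          /* keeps -Wswitch quiet about the enumerators left unhandled */
          break;
  }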

Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 block/blk-merge.c         | 2 ++
 drivers/block/paride/pd.c | 2 ++
 drivers/md/dm.c           | 2 ++
 include/linux/blk-mq.h    | 6 ++++--
 include/linux/blk_types.h | 6 ++++--
 5 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 0f5f42ebd0bb..9d96c9239219 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -405,6 +405,8 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
 		return 1;
 	case REQ_OP_WRITE_ZEROES:
 		return 0;
+	default:
+		break;
 	}
 
 	rq_for_each_bvec(bv, rq, iter)
diff --git a/drivers/block/paride/pd.c b/drivers/block/paride/pd.c
index c8c14c6f5c3a..f8a75bc90f70 100644
--- a/drivers/block/paride/pd.c
+++ b/drivers/block/paride/pd.c
@@ -501,6 +501,8 @@ static enum action do_pd_io_start(void)
 			return do_pd_read_start();
 		else
 			return do_pd_write_start();
+	default:
+		break;
 	}
 	return Fail;
 }
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 8872f9c63688..c6e004157da3 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1542,6 +1542,8 @@ static blk_status_t __process_abnormal_io(struct clone_info *ci,
 	case REQ_OP_WRITE_ZEROES:
 		num_bios = ti->num_write_zeroes_bios;
 		break;
+	default:
+		break;
 	}
 
 	/*
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 43aad0da3305..16b70f952a07 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -197,8 +197,10 @@ struct request {
 	void *end_io_data;
 };
 
-#define req_op(req) \
-	((req)->cmd_flags & REQ_OP_MASK)
+static inline enum req_op req_op(const struct request *req)
+{
+	return req->cmd_flags & REQ_OP_MASK;
+}
 
 static inline bool blk_rq_is_passthrough(struct request *rq)
 {
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index cce8768bc00b..e66cbe377ae8 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -463,8 +463,10 @@ enum stat_group {
 	NR_STAT_GROUPS
 };
 
-#define bio_op(bio) \
-	((bio)->bi_opf & REQ_OP_MASK)
+static inline enum req_op bio_op(const struct bio *bio)
+{
+	return bio->bi_opf & REQ_OP_MASK;
+}
 
 /* obsolete, don't use in new code */
 static inline void bio_set_op_attrs(struct bio *bio, unsigned op,

* [PATCH v2 05/63] block: Introduce the type blk_opf_t
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (3 preceding siblings ...)
  2022-06-29 23:30 ` [PATCH v2 04/63] block: Change the type of req_op() and bio_op() into enum req_op Bart Van Assche
@ 2022-06-29 23:30 ` Bart Van Assche
  2022-06-29 23:30 ` [PATCH v2 06/63] block: Use the new blk_opf_t type Bart Van Assche
                   ` (58 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:30 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche, Ming Lei

Introduce the type blk_opf_t for the request operation and flags (REQ_OP_*
and REQ_*). This type will be used to improve documentation of the block
layer code and also to allow sparse to verify whether request flags are used
correctly.
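
As a reminder of how a __bitwise type behaves (a generic sketch, not taken
from this patch): sparse treats a __bitwise typedef as a distinct base type,
so mixing it with plain integers without a __force cast is reported:

  typedef __u32 __bitwise blk_opf_t;

  blk_opf_t opf = REQ_OP_WRITE | REQ_SYNC;   /* OK: both sides are blk_opf_t */
  u32 plain = opf;                           /* sparse: different base types */
  u32 forced = (__force u32)opf;             /* explicit conversion, no warning */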

Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 include/linux/blk_types.h | 97 ++++++++++++++++++++-------------------
 1 file changed, 51 insertions(+), 46 deletions(-)

diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index e66cbe377ae8..1ef99790f6ed 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -240,6 +240,8 @@ static inline void bio_issue_init(struct bio_issue *issue,
 			((u64)size << BIO_ISSUE_SIZE_SHIFT));
 }
 
+typedef __u32 __bitwise blk_opf_t;
+
 typedef unsigned int blk_qc_t;
 #define BLK_QC_T_NONE		-1U
 
@@ -250,7 +252,7 @@ typedef unsigned int blk_qc_t;
 struct bio {
 	struct bio		*bi_next;	/* request queue link */
 	struct block_device	*bi_bdev;
-	unsigned int		bi_opf;		/* bottom bits REQ_OP, top bits
+	blk_opf_t		bi_opf;		/* bottom bits REQ_OP, top bits
 						 * req_flags.
 						 */
 	unsigned short		bi_flags;	/* BIO_* below */
@@ -338,7 +340,7 @@ enum {
 typedef __u32 __bitwise blk_mq_req_flags_t;
 
 #define REQ_OP_BITS	8
-#define REQ_OP_MASK	((1 << REQ_OP_BITS) - 1)
+#define REQ_OP_MASK	(__force blk_opf_t)((1 << REQ_OP_BITS) - 1)
 #define REQ_FLAG_BITS	24
 
 /**
@@ -356,35 +358,35 @@ typedef __u32 __bitwise blk_mq_req_flags_t;
  */
 enum req_op {
 	/* read sectors from the device */
-	REQ_OP_READ		= 0,
+	REQ_OP_READ		= (__force blk_opf_t)0,
 	/* write sectors to the device */
-	REQ_OP_WRITE		= 1,
+	REQ_OP_WRITE		= (__force blk_opf_t)1,
 	/* flush the volatile write cache */
-	REQ_OP_FLUSH		= 2,
+	REQ_OP_FLUSH		= (__force blk_opf_t)2,
 	/* discard sectors */
-	REQ_OP_DISCARD		= 3,
+	REQ_OP_DISCARD		= (__force blk_opf_t)3,
 	/* securely erase sectors */
-	REQ_OP_SECURE_ERASE	= 5,
+	REQ_OP_SECURE_ERASE	= (__force blk_opf_t)5,
 	/* write the zero filled sector many times */
-	REQ_OP_WRITE_ZEROES	= 9,
+	REQ_OP_WRITE_ZEROES	= (__force blk_opf_t)9,
 	/* Open a zone */
-	REQ_OP_ZONE_OPEN	= 10,
+	REQ_OP_ZONE_OPEN	= (__force blk_opf_t)10,
 	/* Close a zone */
-	REQ_OP_ZONE_CLOSE	= 11,
+	REQ_OP_ZONE_CLOSE	= (__force blk_opf_t)11,
 	/* Transition a zone to full */
-	REQ_OP_ZONE_FINISH	= 12,
+	REQ_OP_ZONE_FINISH	= (__force blk_opf_t)12,
 	/* write data at the current zone write pointer */
-	REQ_OP_ZONE_APPEND	= 13,
+	REQ_OP_ZONE_APPEND	= (__force blk_opf_t)13,
 	/* reset a zone write pointer */
-	REQ_OP_ZONE_RESET	= 15,
+	REQ_OP_ZONE_RESET	= (__force blk_opf_t)15,
 	/* reset all the zone present on the device */
-	REQ_OP_ZONE_RESET_ALL	= 17,
+	REQ_OP_ZONE_RESET_ALL	= (__force blk_opf_t)17,
 
 	/* Driver private requests */
-	REQ_OP_DRV_IN		= 34,
-	REQ_OP_DRV_OUT		= 35,
+	REQ_OP_DRV_IN		= (__force blk_opf_t)34,
+	REQ_OP_DRV_OUT		= (__force blk_opf_t)35,
 
-	REQ_OP_LAST,
+	REQ_OP_LAST		= (__force blk_opf_t)36,
 };
 
 enum req_flag_bits {
@@ -425,28 +427,31 @@ enum req_flag_bits {
 	__REQ_NR_BITS,		/* stops here */
 };
 
-#define REQ_FAILFAST_DEV	(1ULL << __REQ_FAILFAST_DEV)
-#define REQ_FAILFAST_TRANSPORT	(1ULL << __REQ_FAILFAST_TRANSPORT)
-#define REQ_FAILFAST_DRIVER	(1ULL << __REQ_FAILFAST_DRIVER)
-#define REQ_SYNC		(1ULL << __REQ_SYNC)
-#define REQ_META		(1ULL << __REQ_META)
-#define REQ_PRIO		(1ULL << __REQ_PRIO)
-#define REQ_NOMERGE		(1ULL << __REQ_NOMERGE)
-#define REQ_IDLE		(1ULL << __REQ_IDLE)
-#define REQ_INTEGRITY		(1ULL << __REQ_INTEGRITY)
-#define REQ_FUA			(1ULL << __REQ_FUA)
-#define REQ_PREFLUSH		(1ULL << __REQ_PREFLUSH)
-#define REQ_RAHEAD		(1ULL << __REQ_RAHEAD)
-#define REQ_BACKGROUND		(1ULL << __REQ_BACKGROUND)
-#define REQ_NOWAIT		(1ULL << __REQ_NOWAIT)
-#define REQ_CGROUP_PUNT		(1ULL << __REQ_CGROUP_PUNT)
-
-#define REQ_NOUNMAP		(1ULL << __REQ_NOUNMAP)
-#define REQ_POLLED		(1ULL << __REQ_POLLED)
-#define REQ_ALLOC_CACHE		(1ULL << __REQ_ALLOC_CACHE)
-
-#define REQ_DRV			(1ULL << __REQ_DRV)
-#define REQ_SWAP		(1ULL << __REQ_SWAP)
+#define REQ_FAILFAST_DEV	\
+			(__force blk_opf_t)(1ULL << __REQ_FAILFAST_DEV)
+#define REQ_FAILFAST_TRANSPORT	\
+			(__force blk_opf_t)(1ULL << __REQ_FAILFAST_TRANSPORT)
+#define REQ_FAILFAST_DRIVER	\
+			(__force blk_opf_t)(1ULL << __REQ_FAILFAST_DRIVER)
+#define REQ_SYNC	(__force blk_opf_t)(1ULL << __REQ_SYNC)
+#define REQ_META	(__force blk_opf_t)(1ULL << __REQ_META)
+#define REQ_PRIO	(__force blk_opf_t)(1ULL << __REQ_PRIO)
+#define REQ_NOMERGE	(__force blk_opf_t)(1ULL << __REQ_NOMERGE)
+#define REQ_IDLE	(__force blk_opf_t)(1ULL << __REQ_IDLE)
+#define REQ_INTEGRITY	(__force blk_opf_t)(1ULL << __REQ_INTEGRITY)
+#define REQ_FUA		(__force blk_opf_t)(1ULL << __REQ_FUA)
+#define REQ_PREFLUSH	(__force blk_opf_t)(1ULL << __REQ_PREFLUSH)
+#define REQ_RAHEAD	(__force blk_opf_t)(1ULL << __REQ_RAHEAD)
+#define REQ_BACKGROUND	(__force blk_opf_t)(1ULL << __REQ_BACKGROUND)
+#define REQ_NOWAIT	(__force blk_opf_t)(1ULL << __REQ_NOWAIT)
+#define REQ_CGROUP_PUNT	(__force blk_opf_t)(1ULL << __REQ_CGROUP_PUNT)
+
+#define REQ_NOUNMAP	(__force blk_opf_t)(1ULL << __REQ_NOUNMAP)
+#define REQ_POLLED	(__force blk_opf_t)(1ULL << __REQ_POLLED)
+#define REQ_ALLOC_CACHE	(__force blk_opf_t)(1ULL << __REQ_ALLOC_CACHE)
+
+#define REQ_DRV		(__force blk_opf_t)(1ULL << __REQ_DRV)
+#define REQ_SWAP	(__force blk_opf_t)(1ULL << __REQ_SWAP)
 
 #define REQ_FAILFAST_MASK \
 	(REQ_FAILFAST_DEV | REQ_FAILFAST_TRANSPORT | REQ_FAILFAST_DRIVER)
@@ -469,22 +474,22 @@ static inline enum req_op bio_op(const struct bio *bio)
 }
 
 /* obsolete, don't use in new code */
-static inline void bio_set_op_attrs(struct bio *bio, unsigned op,
-		unsigned op_flags)
+static inline void bio_set_op_attrs(struct bio *bio, enum req_op op,
+				    blk_opf_t op_flags)
 {
 	bio->bi_opf = op | op_flags;
 }
 
-static inline bool op_is_write(unsigned int op)
+static inline bool op_is_write(blk_opf_t op)
 {
-	return (op & 1);
+	return !!(op & (__force blk_opf_t)1);
 }
 
 /*
  * Check if the bio or request is one that needs special treatment in the
  * flush state machine.
  */
-static inline bool op_is_flush(unsigned int op)
+static inline bool op_is_flush(blk_opf_t op)
 {
 	return op & (REQ_FUA | REQ_PREFLUSH);
 }
@@ -494,13 +499,13 @@ static inline bool op_is_flush(unsigned int op)
  * PREFLUSH flag.  Other operations may be marked as synchronous using the
  * REQ_SYNC flag.
  */
-static inline bool op_is_sync(unsigned int op)
+static inline bool op_is_sync(blk_opf_t op)
 {
 	return (op & REQ_OP_MASK) == REQ_OP_READ ||
 		(op & (REQ_SYNC | REQ_FUA | REQ_PREFLUSH));
 }
 
-static inline bool op_is_discard(unsigned int op)
+static inline bool op_is_discard(blk_opf_t op)
 {
 	return (op & REQ_OP_MASK) == REQ_OP_DISCARD;
 }

* [PATCH v2 06/63] block: Use the new blk_opf_t type
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (4 preceding siblings ...)
  2022-06-29 23:30 ` [PATCH v2 05/63] block: Introduce the type blk_opf_t Bart Van Assche
@ 2022-06-29 23:30 ` Bart Van Assche
  2022-06-29 23:30 ` [PATCH v2 07/63] block/bfq: " Bart Van Assche
                   ` (57 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:30 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche, Ming Lei

Use the new blk_opf_t type for arguments and variables that represent
request flags or a bitwise combination of a request operation and
request flags. Rename the function arguments and the structure member that
hold a request operation and flags from 'rw' into 'opf'.

This patch does not change any functionality.
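
As a usage note (illustrative only; 'bdev' is assumed to be a struct
block_device pointer that is in scope): a blk_opf_t value combines one
REQ_OP_* operation with zero or more REQ_* flags, e.g.:

  blk_opf_t opf = REQ_OP_WRITE | REQ_SYNC | REQ_IDLE;
  struct bio *bio = bio_alloc(bdev, 1, opf, GFP_NOIO);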

Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 block/bio.c               | 10 +++++-----
 block/blk-cgroup-rwstat.h |  2 +-
 block/blk-core.c          |  2 +-
 block/blk-flush.c         |  6 +++---
 block/blk-merge.c         |  6 +++---
 block/blk-mq-debugfs.c    |  4 ++--
 block/blk-mq.c            | 15 ++++++++-------
 block/blk-mq.h            |  6 +++---
 block/blk-wbt.c           | 16 ++++++++--------
 block/elevator.h          |  2 +-
 block/fops.c              |  8 ++++----
 include/linux/bio.h       | 10 +++++-----
 include/linux/blk-mq.h    |  6 +++---
 include/linux/blkdev.h    |  2 +-
 14 files changed, 48 insertions(+), 47 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 933ea3210954..8978f7d3ae48 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -239,7 +239,7 @@ static void bio_free(struct bio *bio)
  * when IO has completed, or when the bio is released.
  */
 void bio_init(struct bio *bio, struct block_device *bdev, struct bio_vec *table,
-	      unsigned short max_vecs, unsigned int opf)
+	      unsigned short max_vecs, blk_opf_t opf)
 {
 	bio->bi_next = NULL;
 	bio->bi_bdev = bdev;
@@ -292,7 +292,7 @@ EXPORT_SYMBOL(bio_init);
  *   preserved are the ones that are initialized by bio_alloc_bioset(). See
  *   comment in struct bio.
  */
-void bio_reset(struct bio *bio, struct block_device *bdev, unsigned int opf)
+void bio_reset(struct bio *bio, struct block_device *bdev, blk_opf_t opf)
 {
 	bio_uninit(bio);
 	memset(bio, 0, BIO_RESET_BYTES);
@@ -341,7 +341,7 @@ void bio_chain(struct bio *bio, struct bio *parent)
 EXPORT_SYMBOL(bio_chain);
 
 struct bio *blk_next_bio(struct bio *bio, struct block_device *bdev,
-		unsigned int nr_pages, unsigned int opf, gfp_t gfp)
+		unsigned int nr_pages, blk_opf_t opf, gfp_t gfp)
 {
 	struct bio *new = bio_alloc(bdev, nr_pages, opf, gfp);
 
@@ -409,7 +409,7 @@ static void punt_bios_to_rescuer(struct bio_set *bs)
 }
 
 static struct bio *bio_alloc_percpu_cache(struct block_device *bdev,
-		unsigned short nr_vecs, unsigned int opf, gfp_t gfp,
+		unsigned short nr_vecs, blk_opf_t opf, gfp_t gfp,
 		struct bio_set *bs)
 {
 	struct bio_alloc_cache *cache;
@@ -468,7 +468,7 @@ static struct bio *bio_alloc_percpu_cache(struct block_device *bdev,
  * Returns: Pointer to new bio on success, NULL on failure.
  */
 struct bio *bio_alloc_bioset(struct block_device *bdev, unsigned short nr_vecs,
-			     unsigned int opf, gfp_t gfp_mask,
+			     blk_opf_t opf, gfp_t gfp_mask,
 			     struct bio_set *bs)
 {
 	gfp_t saved_gfp = gfp_mask;
diff --git a/block/blk-cgroup-rwstat.h b/block/blk-cgroup-rwstat.h
index 9f2723b34b75..6c0237148c0d 100644
--- a/block/blk-cgroup-rwstat.h
+++ b/block/blk-cgroup-rwstat.h
@@ -59,7 +59,7 @@ void blkg_rwstat_recursive_sum(struct blkcg_gq *blkg, struct blkcg_policy *pol,
  * caller is responsible for synchronizing calls to this function.
  */
 static inline void blkg_rwstat_add(struct blkg_rwstat *rwstat,
-				   unsigned int op, uint64_t val)
+				   blk_opf_t op, uint64_t val)
 {
 	struct percpu_counter *cnt;
 
diff --git a/block/blk-core.c b/block/blk-core.c
index d11945207bcf..943ff0cd7294 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1204,7 +1204,7 @@ EXPORT_SYMBOL_GPL(blk_io_schedule);
 
 int __init blk_dev_init(void)
 {
-	BUILD_BUG_ON(REQ_OP_LAST >= (1 << REQ_OP_BITS));
+	BUILD_BUG_ON((__force u32)REQ_OP_LAST >= (1 << REQ_OP_BITS));
 	BUILD_BUG_ON(REQ_OP_BITS + REQ_FLAG_BITS > 8 *
 			sizeof_field(struct request, cmd_flags));
 	BUILD_BUG_ON(REQ_OP_BITS + REQ_FLAG_BITS > 8 *
diff --git a/block/blk-flush.c b/block/blk-flush.c
index c68968724870..d20a0c6b2c66 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -94,7 +94,7 @@ enum {
 };
 
 static void blk_kick_flush(struct request_queue *q,
-			   struct blk_flush_queue *fq, unsigned int flags);
+			   struct blk_flush_queue *fq, blk_opf_t flags);
 
 static inline struct blk_flush_queue *
 blk_get_flush_queue(struct request_queue *q, struct blk_mq_ctx *ctx)
@@ -173,7 +173,7 @@ static void blk_flush_complete_seq(struct request *rq,
 {
 	struct request_queue *q = rq->q;
 	struct list_head *pending = &fq->flush_queue[fq->flush_pending_idx];
-	unsigned int cmd_flags;
+	blk_opf_t cmd_flags;
 
 	BUG_ON(rq->flush.seq & seq);
 	rq->flush.seq |= seq;
@@ -290,7 +290,7 @@ bool is_flush_rq(struct request *rq)
  *
  */
 static void blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq,
-			   unsigned int flags)
+			   blk_opf_t flags)
 {
 	struct list_head *pending = &fq->flush_queue[fq->flush_pending_idx];
 	struct request *first_rq =
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 9d96c9239219..3a928ebb6b5e 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -712,7 +712,7 @@ static int ll_merge_requests_fn(struct request_queue *q, struct request *req,
  */
 void blk_rq_set_mixed_merge(struct request *rq)
 {
-	unsigned int ff = rq->cmd_flags & REQ_FAILFAST_MASK;
+	blk_opf_t ff = rq->cmd_flags & REQ_FAILFAST_MASK;
 	struct bio *bio;
 
 	if (rq->rq_flags & RQF_MIXED_MERGE)
@@ -928,7 +928,7 @@ enum bio_merge_status {
 static enum bio_merge_status bio_attempt_back_merge(struct request *req,
 		struct bio *bio, unsigned int nr_segs)
 {
-	const int ff = bio->bi_opf & REQ_FAILFAST_MASK;
+	const blk_opf_t ff = bio->bi_opf & REQ_FAILFAST_MASK;
 
 	if (!ll_back_merge_fn(req, bio, nr_segs))
 		return BIO_MERGE_FAILED;
@@ -952,7 +952,7 @@ static enum bio_merge_status bio_attempt_back_merge(struct request *req,
 static enum bio_merge_status bio_attempt_front_merge(struct request *req,
 		struct bio *bio, unsigned int nr_segs)
 {
-	const int ff = bio->bi_opf & REQ_FAILFAST_MASK;
+	const blk_opf_t ff = bio->bi_opf & REQ_FAILFAST_MASK;
 
 	if (!ll_front_merge_fn(req, bio, nr_segs))
 		return BIO_MERGE_FAILED;
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 745a78f412d1..eed467fec9bf 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -313,8 +313,8 @@ int __blk_mq_debugfs_rq_show(struct seq_file *m, struct request *rq)
 	else
 		seq_printf(m, "%s", op_str);
 	seq_puts(m, ", .cmd_flags=");
-	blk_flags_show(m, rq->cmd_flags & ~REQ_OP_MASK, cmd_flag_name,
-		       ARRAY_SIZE(cmd_flag_name));
+	blk_flags_show(m, (__force unsigned int)(rq->cmd_flags & ~REQ_OP_MASK),
+		       cmd_flag_name, ARRAY_SIZE(cmd_flag_name));
 	seq_puts(m, ", .rq_flags=");
 	blk_flags_show(m, (__force unsigned int)rq->rq_flags, rqf_name,
 		       ARRAY_SIZE(rqf_name));
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 15c7c5c4ad22..02d6548605cc 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -508,13 +508,13 @@ static struct request *__blk_mq_alloc_requests(struct blk_mq_alloc_data *data)
 					alloc_time_ns);
 }
 
-struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
+struct request *blk_mq_alloc_request(struct request_queue *q, blk_opf_t opf,
 		blk_mq_req_flags_t flags)
 {
 	struct blk_mq_alloc_data data = {
 		.q		= q,
 		.flags		= flags,
-		.cmd_flags	= op,
+		.cmd_flags	= opf,
 		.nr_tags	= 1,
 	};
 	struct request *rq;
@@ -538,12 +538,12 @@ struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
 EXPORT_SYMBOL(blk_mq_alloc_request);
 
 struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
-	unsigned int op, blk_mq_req_flags_t flags, unsigned int hctx_idx)
+	blk_opf_t opf, blk_mq_req_flags_t flags, unsigned int hctx_idx)
 {
 	struct blk_mq_alloc_data data = {
 		.q		= q,
 		.flags		= flags,
-		.cmd_flags	= op,
+		.cmd_flags	= opf,
 		.nr_tags	= 1,
 	};
 	u64 alloc_time_ns = 0;
@@ -655,7 +655,7 @@ void blk_dump_rq_flags(struct request *rq, char *msg)
 {
 	printk(KERN_INFO "%s: dev %s: flags=%llx\n", msg,
 		rq->q->disk ? rq->q->disk->disk_name : "?",
-		(unsigned long long) rq->cmd_flags);
+		(__force unsigned long long) rq->cmd_flags);
 
 	printk(KERN_INFO "  sector %llu, nr/cnr %u/%u\n",
 	       (unsigned long long)blk_rq_pos(rq),
@@ -708,8 +708,9 @@ static void blk_print_req_error(struct request *req, blk_status_t status)
 		"phys_seg %u prio class %u\n",
 		blk_status_to_str(status),
 		req->q->disk ? req->q->disk->disk_name : "?",
-		blk_rq_pos(req), req_op(req), blk_op_str(req_op(req)),
-		req->cmd_flags & ~REQ_OP_MASK,
+		blk_rq_pos(req), (__force u32)req_op(req),
+		blk_op_str(req_op(req)),
+		(__force u32)(req->cmd_flags & ~REQ_OP_MASK),
 		req->nr_phys_segments,
 		IOPRIO_PRIO_CLASS(req->ioprio));
 }
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 54e20edf0da3..7004fe4ccb44 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -86,7 +86,7 @@ static inline struct blk_mq_hw_ctx *blk_mq_map_queue_type(struct request_queue *
 	return xa_load(&q->hctx_table, q->tag_set->map[type].mq_map[cpu]);
 }
 
-static inline enum hctx_type blk_mq_get_hctx_type(unsigned int opf)
+static inline enum hctx_type blk_mq_get_hctx_type(blk_opf_t opf)
 {
 	enum hctx_type type = HCTX_TYPE_DEFAULT;
 
@@ -107,7 +107,7 @@ static inline enum hctx_type blk_mq_get_hctx_type(unsigned int opf)
  * @ctx: software queue cpu ctx
  */
 static inline struct blk_mq_hw_ctx *blk_mq_map_queue(struct request_queue *q,
-						     unsigned int opf,
+						     blk_opf_t opf,
 						     struct blk_mq_ctx *ctx)
 {
 	return ctx->hctxs[blk_mq_get_hctx_type(opf)];
@@ -152,7 +152,7 @@ struct blk_mq_alloc_data {
 	struct request_queue *q;
 	blk_mq_req_flags_t flags;
 	unsigned int shallow_depth;
-	unsigned int cmd_flags;
+	blk_opf_t cmd_flags;
 	req_flags_t rq_flags;
 
 	/* allocate multiple requests/tags in one go */
diff --git a/block/blk-wbt.c b/block/blk-wbt.c
index 7bf09ae06577..f2e4bf1dca47 100644
--- a/block/blk-wbt.c
+++ b/block/blk-wbt.c
@@ -451,7 +451,7 @@ static bool close_io(struct rq_wb *rwb)
 
 #define REQ_HIPRIO	(REQ_SYNC | REQ_META | REQ_PRIO)
 
-static inline unsigned int get_limit(struct rq_wb *rwb, unsigned long rw)
+static inline unsigned int get_limit(struct rq_wb *rwb, blk_opf_t opf)
 {
 	unsigned int limit;
 
@@ -462,7 +462,7 @@ static inline unsigned int get_limit(struct rq_wb *rwb, unsigned long rw)
 	if (!rwb_enabled(rwb))
 		return UINT_MAX;
 
-	if ((rw & REQ_OP_MASK) == REQ_OP_DISCARD)
+	if ((opf & REQ_OP_MASK) == REQ_OP_DISCARD)
 		return rwb->wb_background;
 
 	/*
@@ -473,9 +473,9 @@ static inline unsigned int get_limit(struct rq_wb *rwb, unsigned long rw)
 	 * the idle limit, or go to normal if we haven't had competing
 	 * IO for a bit.
 	 */
-	if ((rw & REQ_HIPRIO) || wb_recent_wait(rwb) || current_is_kswapd())
+	if ((opf & REQ_HIPRIO) || wb_recent_wait(rwb) || current_is_kswapd())
 		limit = rwb->rq_depth.max_depth;
-	else if ((rw & REQ_BACKGROUND) || close_io(rwb)) {
+	else if ((opf & REQ_BACKGROUND) || close_io(rwb)) {
 		/*
 		 * If less than 100ms since we completed unrelated IO,
 		 * limit us to half the depth for background writeback.
@@ -490,13 +490,13 @@ static inline unsigned int get_limit(struct rq_wb *rwb, unsigned long rw)
 struct wbt_wait_data {
 	struct rq_wb *rwb;
 	enum wbt_flags wb_acct;
-	unsigned long rw;
+	blk_opf_t opf;
 };
 
 static bool wbt_inflight_cb(struct rq_wait *rqw, void *private_data)
 {
 	struct wbt_wait_data *data = private_data;
-	return rq_wait_inc_below(rqw, get_limit(data->rwb, data->rw));
+	return rq_wait_inc_below(rqw, get_limit(data->rwb, data->opf));
 }
 
 static void wbt_cleanup_cb(struct rq_wait *rqw, void *private_data)
@@ -510,13 +510,13 @@ static void wbt_cleanup_cb(struct rq_wait *rqw, void *private_data)
  * the timer to kick off queuing again.
  */
 static void __wbt_wait(struct rq_wb *rwb, enum wbt_flags wb_acct,
-		       unsigned long rw)
+		       blk_opf_t opf)
 {
 	struct rq_wait *rqw = get_rq_wait(rwb, wb_acct);
 	struct wbt_wait_data data = {
 		.rwb = rwb,
 		.wb_acct = wb_acct,
-		.rw = rw,
+		.opf = opf,
 	};
 
 	rq_qos_wait(rqw, &data, wbt_inflight_cb, wbt_cleanup_cb);
diff --git a/block/elevator.h b/block/elevator.h
index 16cd8bdedb7e..3f0593b3bf9d 100644
--- a/block/elevator.h
+++ b/block/elevator.h
@@ -34,7 +34,7 @@ struct elevator_mq_ops {
 	int (*request_merge)(struct request_queue *q, struct request **, struct bio *);
 	void (*request_merged)(struct request_queue *, struct request *, enum elv_merge);
 	void (*requests_merged)(struct request_queue *, struct request *, struct request *);
-	void (*limit_depth)(unsigned int, struct blk_mq_alloc_data *);
+	void (*limit_depth)(blk_opf_t, struct blk_mq_alloc_data *);
 	void (*prepare_request)(struct request *);
 	void (*finish_request)(struct request *);
 	void (*insert_requests)(struct blk_mq_hw_ctx *, struct list_head *, bool);
diff --git a/block/fops.c b/block/fops.c
index 86d3cab9bf93..dd22e8c9b76f 100644
--- a/block/fops.c
+++ b/block/fops.c
@@ -32,9 +32,9 @@ static int blkdev_get_block(struct inode *inode, sector_t iblock,
 	return 0;
 }
 
-static unsigned int dio_bio_write_op(struct kiocb *iocb)
+static blk_opf_t dio_bio_write_op(struct kiocb *iocb)
 {
-	unsigned int op = REQ_OP_WRITE | REQ_SYNC | REQ_IDLE;
+	blk_opf_t op = REQ_OP_WRITE | REQ_SYNC | REQ_IDLE;
 
 	/* avoid the need for a I/O completion work item */
 	if (iocb->ki_flags & IOCB_DSYNC)
@@ -175,7 +175,7 @@ static ssize_t __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter,
 	struct blkdev_dio *dio;
 	struct bio *bio;
 	bool is_read = (iov_iter_rw(iter) == READ), is_sync;
-	unsigned int opf = is_read ? REQ_OP_READ : dio_bio_write_op(iocb);
+	blk_opf_t opf = is_read ? REQ_OP_READ : dio_bio_write_op(iocb);
 	loff_t pos = iocb->ki_pos;
 	int ret = 0;
 
@@ -297,7 +297,7 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
 {
 	struct block_device *bdev = iocb->ki_filp->private_data;
 	bool is_read = iov_iter_rw(iter) == READ;
-	unsigned int opf = is_read ? REQ_OP_READ : dio_bio_write_op(iocb);
+	blk_opf_t opf = is_read ? REQ_OP_READ : dio_bio_write_op(iocb);
 	struct blkdev_dio *dio;
 	struct bio *bio;
 	loff_t pos = iocb->ki_pos;
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 992ee987f273..ca22b06700a9 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -405,7 +405,7 @@ extern void bioset_exit(struct bio_set *);
 extern int biovec_init_pool(mempool_t *pool, int pool_entries);
 
 struct bio *bio_alloc_bioset(struct block_device *bdev, unsigned short nr_vecs,
-			     unsigned int opf, gfp_t gfp_mask,
+			     blk_opf_t opf, gfp_t gfp_mask,
 			     struct bio_set *bs);
 struct bio *bio_kmalloc(unsigned short nr_vecs, gfp_t gfp_mask);
 extern void bio_put(struct bio *);
@@ -418,7 +418,7 @@ int bio_init_clone(struct block_device *bdev, struct bio *bio,
 extern struct bio_set fs_bio_set;
 
 static inline struct bio *bio_alloc(struct block_device *bdev,
-		unsigned short nr_vecs, unsigned int opf, gfp_t gfp_mask)
+		unsigned short nr_vecs, blk_opf_t opf, gfp_t gfp_mask)
 {
 	return bio_alloc_bioset(bdev, nr_vecs, opf, gfp_mask, &fs_bio_set);
 }
@@ -456,9 +456,9 @@ struct request_queue;
 
 extern int submit_bio_wait(struct bio *bio);
 void bio_init(struct bio *bio, struct block_device *bdev, struct bio_vec *table,
-	      unsigned short max_vecs, unsigned int opf);
+	      unsigned short max_vecs, blk_opf_t opf);
 extern void bio_uninit(struct bio *);
-void bio_reset(struct bio *bio, struct block_device *bdev, unsigned int opf);
+void bio_reset(struct bio *bio, struct block_device *bdev, blk_opf_t opf);
 void bio_chain(struct bio *, struct bio *);
 
 int bio_add_page(struct bio *, struct page *, unsigned len, unsigned off);
@@ -789,6 +789,6 @@ static inline void bio_clear_polled(struct bio *bio)
 }
 
 struct bio *blk_next_bio(struct bio *bio, struct block_device *bdev,
-		unsigned int nr_pages, unsigned int opf, gfp_t gfp);
+		unsigned int nr_pages, blk_opf_t opf, gfp_t gfp);
 
 #endif /* __LINUX_BIO_H */
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 16b70f952a07..6c016204bdda 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -79,7 +79,7 @@ struct request {
 	struct blk_mq_ctx *mq_ctx;
 	struct blk_mq_hw_ctx *mq_hctx;
 
-	unsigned int cmd_flags;		/* op and common flags */
+	blk_opf_t cmd_flags;		/* op and common flags */
 	req_flags_t rq_flags;
 
 	int tag;
@@ -714,10 +714,10 @@ enum {
 	BLK_MQ_REQ_PM		= (__force blk_mq_req_flags_t)(1 << 2),
 };
 
-struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
+struct request *blk_mq_alloc_request(struct request_queue *q, blk_opf_t opf,
 		blk_mq_req_flags_t flags);
 struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
-		unsigned int op, blk_mq_req_flags_t flags,
+		blk_opf_t opf, blk_mq_req_flags_t flags,
 		unsigned int hctx_idx);
 
 /*
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 0c4a53104705..5bdb7a74c98f 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -227,7 +227,7 @@ static inline int blk_validate_block_size(unsigned long bsize)
 	return 0;
 }
 
-static inline bool blk_op_is_passthrough(unsigned int op)
+static inline bool blk_op_is_passthrough(blk_opf_t op)
 {
 	op &= REQ_OP_MASK;
 	return op == REQ_OP_DRV_IN || op == REQ_OP_DRV_OUT;
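
The (__force ...) casts above are a direct consequence of blk_opf_t being a
__bitwise type. As a minimal sketch, not taken from the patch and assuming
the typedef introduced earlier in this series (roughly "typedef __u32
__bitwise blk_opf_t;" in <linux/blk_types.h>):

  #include <linux/blk_types.h>

  /* Combining an operation with flags stays within the __bitwise type. */
  static blk_opf_t sketch_build_opf(void)
  {
          return REQ_OP_WRITE | REQ_SYNC | REQ_IDLE;
  }

  /*
   * Handing the value to printf()-style or otherwise untyped integer code
   * needs an explicit opt-out; without the __force cast, sparse warns
   * about mixing the restricted type with a plain integer.
   */
  static unsigned int sketch_opf_to_plain(blk_opf_t opf)
  {
          return (__force unsigned int)opf;
  }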


* [PATCH v2 07/63] block/bfq: Use the new blk_opf_t type
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (5 preceding siblings ...)
  2022-06-29 23:30 ` [PATCH v2 06/63] block: Use the new blk_opf_t type Bart Van Assche
@ 2022-06-29 23:30 ` Bart Van Assche
  2022-06-30  8:33   ` Jan Kara
  2022-06-29 23:30 ` [PATCH v2 08/63] block/mq-deadline: " Bart Van Assche
                   ` (56 subsequent siblings)
  63 siblings, 1 reply; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:30 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche, Jan Kara

Use the new blk_opf_t type for arguments and variables that represent
request flags or a bitwise combination of a request operation and
request flags.

This patch does not change any functionality.

Cc: Jan Kara <jack@suse.cz>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 block/bfq-cgroup.c  | 16 ++++++++--------
 block/bfq-iosched.c |  8 ++++----
 block/bfq-iosched.h |  8 ++++----
 3 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
index 9fc605791b1e..af79028627a5 100644
--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -220,7 +220,7 @@ void bfqg_stats_update_avg_queue_size(struct bfq_group *bfqg)
 }
 
 void bfqg_stats_update_io_add(struct bfq_group *bfqg, struct bfq_queue *bfqq,
-			      unsigned int op)
+			      blk_opf_t op)
 {
 	blkg_rwstat_add(&bfqg->stats.queued, op, 1);
 	bfqg_stats_end_empty_time(&bfqg->stats);
@@ -228,18 +228,18 @@ void bfqg_stats_update_io_add(struct bfq_group *bfqg, struct bfq_queue *bfqq,
 		bfqg_stats_set_start_group_wait_time(bfqg, bfqq_group(bfqq));
 }
 
-void bfqg_stats_update_io_remove(struct bfq_group *bfqg, unsigned int op)
+void bfqg_stats_update_io_remove(struct bfq_group *bfqg, blk_opf_t op)
 {
 	blkg_rwstat_add(&bfqg->stats.queued, op, -1);
 }
 
-void bfqg_stats_update_io_merged(struct bfq_group *bfqg, unsigned int op)
+void bfqg_stats_update_io_merged(struct bfq_group *bfqg, blk_opf_t op)
 {
 	blkg_rwstat_add(&bfqg->stats.merged, op, 1);
 }
 
 void bfqg_stats_update_completion(struct bfq_group *bfqg, u64 start_time_ns,
-				  u64 io_start_time_ns, unsigned int op)
+				  u64 io_start_time_ns, blk_opf_t op)
 {
 	struct bfqg_stats *stats = &bfqg->stats;
 	u64 now = ktime_get_ns();
@@ -255,11 +255,11 @@ void bfqg_stats_update_completion(struct bfq_group *bfqg, u64 start_time_ns,
 #else /* CONFIG_BFQ_CGROUP_DEBUG */
 
 void bfqg_stats_update_io_add(struct bfq_group *bfqg, struct bfq_queue *bfqq,
-			      unsigned int op) { }
-void bfqg_stats_update_io_remove(struct bfq_group *bfqg, unsigned int op) { }
-void bfqg_stats_update_io_merged(struct bfq_group *bfqg, unsigned int op) { }
+			      blk_opf_t op) { }
+void bfqg_stats_update_io_remove(struct bfq_group *bfqg, blk_opf_t op) { }
+void bfqg_stats_update_io_merged(struct bfq_group *bfqg, blk_opf_t op) { }
 void bfqg_stats_update_completion(struct bfq_group *bfqg, u64 start_time_ns,
-				  u64 io_start_time_ns, unsigned int op) { }
+				  u64 io_start_time_ns, blk_opf_t op) { }
 void bfqg_stats_update_dequeue(struct bfq_group *bfqg) { }
 void bfqg_stats_set_start_empty_time(struct bfq_group *bfqg) { }
 void bfqg_stats_update_idle_time(struct bfq_group *bfqg) { }
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index e6d7e6b01a05..a724fc882158 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -668,7 +668,7 @@ static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit)
  * significantly affect service guarantees coming from the BFQ scheduling
  * algorithm.
  */
-static void bfq_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
+static void bfq_limit_depth(blk_opf_t op, struct blk_mq_alloc_data *data)
 {
 	struct bfq_data *bfqd = data->q->elevator->elevator_data;
 	struct bfq_io_cq *bic = bfq_bic_lookup(data->q);
@@ -6104,7 +6104,7 @@ static bool __bfq_insert_request(struct bfq_data *bfqd, struct request *rq)
 static void bfq_update_insert_stats(struct request_queue *q,
 				    struct bfq_queue *bfqq,
 				    bool idle_timer_disabled,
-				    unsigned int cmd_flags)
+				    blk_opf_t cmd_flags)
 {
 	if (!bfqq)
 		return;
@@ -6129,7 +6129,7 @@ static void bfq_update_insert_stats(struct request_queue *q,
 static inline void bfq_update_insert_stats(struct request_queue *q,
 					   struct bfq_queue *bfqq,
 					   bool idle_timer_disabled,
-					   unsigned int cmd_flags) {}
+					   blk_opf_t cmd_flags) {}
 #endif /* CONFIG_BFQ_CGROUP_DEBUG */
 
 static struct bfq_queue *bfq_init_rq(struct request *rq);
@@ -6141,7 +6141,7 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
 	struct bfq_data *bfqd = q->elevator->elevator_data;
 	struct bfq_queue *bfqq;
 	bool idle_timer_disabled = false;
-	unsigned int cmd_flags;
+	blk_opf_t cmd_flags;
 	LIST_HEAD(free);
 
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index ca8177d7bf7c..6bde1f4ecc50 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -994,11 +994,11 @@ void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg);
 
 void bfqg_stats_update_legacy_io(struct request_queue *q, struct request *rq);
 void bfqg_stats_update_io_add(struct bfq_group *bfqg, struct bfq_queue *bfqq,
-			      unsigned int op);
-void bfqg_stats_update_io_remove(struct bfq_group *bfqg, unsigned int op);
-void bfqg_stats_update_io_merged(struct bfq_group *bfqg, unsigned int op);
+			      blk_opf_t op);
+void bfqg_stats_update_io_remove(struct bfq_group *bfqg, blk_opf_t op);
+void bfqg_stats_update_io_merged(struct bfq_group *bfqg, blk_opf_t op);
 void bfqg_stats_update_completion(struct bfq_group *bfqg, u64 start_time_ns,
-				  u64 io_start_time_ns, unsigned int op);
+				  u64 io_start_time_ns, blk_opf_t op);
 void bfqg_stats_update_dequeue(struct bfq_group *bfqg);
 void bfqg_stats_set_start_empty_time(struct bfq_group *bfqg);
 void bfqg_stats_update_idle_time(struct bfq_group *bfqg);
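
For reference, a caller-side sketch (an assumption modelled on
bfq_update_insert_stats() above, not part of the patch): the stats helpers
receive a request's complete cmd_flags word, which is exactly what
blk_opf_t expresses.

  /* Sketch only; assumes it sits in bfq-iosched.c. */
  static void sketch_account_insert(struct request *rq, struct bfq_queue *bfqq)
  {
          bfqg_stats_update_io_add(bfqq_group(bfqq), bfqq, rq->cmd_flags);
  }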


* [PATCH v2 08/63] block/mq-deadline: Use the new blk_opf_t type
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (6 preceding siblings ...)
  2022-06-29 23:30 ` [PATCH v2 07/63] block/bfq: " Bart Van Assche
@ 2022-06-29 23:30 ` Bart Van Assche
  2022-06-30 23:35   ` Damien Le Moal
  2022-06-29 23:30 ` [PATCH v2 09/63] block/kyber: " Bart Van Assche
                   ` (55 subsequent siblings)
  63 siblings, 1 reply; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:30 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Damien Le Moal

Use the new blk_opf_t type for an argument that represents a bitwise
combination of a request operation and request flags.

This patch does not change any functionality.

Cc: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 block/mq-deadline.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 1a9e835e816c..c5589e9155e6 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -543,7 +543,7 @@ static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
  * Called by __blk_mq_alloc_request(). The shallow_depth value set by this
  * function is used by __blk_mq_get_tag().
  */
-static void dd_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
+static void dd_limit_depth(blk_opf_t op, struct blk_mq_alloc_data *data)
 {
 	struct deadline_data *dd = data->q->elevator->elevator_data;
 


* [PATCH v2 09/63] block/kyber: Use the new blk_opf_t type
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (7 preceding siblings ...)
  2022-06-29 23:30 ` [PATCH v2 08/63] block/mq-deadline: " Bart Van Assche
@ 2022-06-29 23:30 ` Bart Van Assche
  2022-06-29 23:30 ` [PATCH v2 10/63] blktrace: Trace remapped requests correctly Bart Van Assche
                   ` (54 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:30 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche, Omar Sandoval

Use the new blk_opf_t type for arguments that represent a bitwise
combination of a request operation and request flags.

This patch does not change any functionality.

Cc: Omar Sandoval <osandov@fb.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 block/kyber-iosched.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
index 8f7c745b4a57..102b6a311b5c 100644
--- a/block/kyber-iosched.c
+++ b/block/kyber-iosched.c
@@ -195,9 +195,9 @@ struct kyber_hctx_data {
 static int kyber_domain_wake(wait_queue_entry_t *wait, unsigned mode, int flags,
 			     void *key);
 
-static unsigned int kyber_sched_domain(unsigned int op)
+static unsigned int kyber_sched_domain(blk_opf_t opf)
 {
-	switch (op & REQ_OP_MASK) {
+	switch (opf & REQ_OP_MASK) {
 	case REQ_OP_READ:
 		return KYBER_READ;
 	case REQ_OP_WRITE:
@@ -553,7 +553,7 @@ static void rq_clear_domain_token(struct kyber_queue_data *kqd,
 	}
 }
 
-static void kyber_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
+static void kyber_limit_depth(blk_opf_t op, struct blk_mq_alloc_data *data)
 {
 	/*
 	 * We use the scheduler tags as per-hardware queue queueing tokens.
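
A small usage sketch (not from the patch): kyber_sched_domain() masks with
REQ_OP_MASK, so a combined operation+flags value selects the domain from
the operation bits alone and any flags simply ride along.

  /* Sketch only; assumes it sits in kyber-iosched.c. */
  static unsigned int sketch_domain_lookup(void)
  {
          /* REQ_RAHEAD is ignored by the mask; the result is KYBER_READ. */
          return kyber_sched_domain(REQ_OP_READ | REQ_RAHEAD);
  }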


* [PATCH v2 10/63] blktrace: Trace remapped requests correctly
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (8 preceding siblings ...)
  2022-06-29 23:30 ` [PATCH v2 09/63] block/kyber: " Bart Van Assche
@ 2022-06-29 23:30 ` Bart Van Assche
  2022-06-30  2:05   ` NOMURA JUNICHI(野村 淳一)
  2022-06-29 23:30 ` [PATCH v2 11/63] blktrace: Use the new blk_opf_t type Bart Van Assche
                   ` (53 subsequent siblings)
  63 siblings, 1 reply; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:30 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Mike Snitzer,
	Jun'ichi Nomura

Trace the remapped operation and its flags instead of only the data
direction of remapped operations. This issue was detected while analyzing
the sparse warnings triggered by the new blk_opf_t type.

Cc: Mike Snitzer <snitzer@kernel.org>
Cc: Jun'ichi Nomura <junichi.nomura@nec.com>
Fixes: b0da3f0dada7 ("Add a tracepoint for block request remapping")
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/trace/blktrace.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
index c584effe5fe9..cee788817fb3 100644
--- a/kernel/trace/blktrace.c
+++ b/kernel/trace/blktrace.c
@@ -1058,7 +1058,7 @@ static void blk_add_trace_rq_remap(void *ignore, struct request *rq, dev_t dev,
 	r.sector_from = cpu_to_be64(from);
 
 	__blk_add_trace(bt, blk_rq_pos(rq), blk_rq_bytes(rq),
-			rq_data_dir(rq), 0, BLK_TA_REMAP, 0,
+			req_op(rq), rq->cmd_flags, BLK_TA_REMAP, 0,
 			sizeof(r), &r, blk_trace_request_get_cgid(rq));
 	rcu_read_unlock();
 }
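
The information that was being lost is easy to see in a sketch (the helper
below is hypothetical, not part of the patch): rq_data_dir() collapses
every operation to READ or WRITE, while req_op() and cmd_flags preserve
what was actually remapped.

  #include <linux/blk-mq.h>

  /* Hypothetical helper, for illustration only. */
  static void sketch_remap_args(struct request *rq)
  {
          int dir = rq_data_dir(rq);        /* READ or WRITE, nothing else */
          enum req_op op = req_op(rq);      /* e.g. REQ_OP_DISCARD */
          blk_opf_t flags = rq->cmd_flags & ~REQ_OP_MASK;

          pr_debug("dir=%d op=%u flags=0x%x\n",
                   dir, (__force u32)op, (__force u32)flags);
  }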


* [PATCH v2 11/63] blktrace: Use the new blk_opf_t type
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (9 preceding siblings ...)
  2022-06-29 23:30 ` [PATCH v2 10/63] blktrace: Trace remapped requests correctly Bart Van Assche
@ 2022-06-29 23:30 ` Bart Van Assche
  2022-06-29 23:30 ` [PATCH v2 12/63] block/brd: Use the enum req_op type Bart Van Assche
                   ` (52 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:30 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Steven Rostedt

Improve static type checking by using the new blk_opf_t type for a function
argument that represents a combination of a request operation and request
flags. Rename that argument from 'op' to 'opf' to make its role clearer.

Cc: Christoph Hellwig <hch@lst.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 include/linux/blktrace_api.h |  3 ++-
 kernel/trace/blktrace.c      | 51 ++++++++++++++++++------------------
 2 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/include/linux/blktrace_api.h b/include/linux/blktrace_api.h
index f6f9b544365a..cfbda114348c 100644
--- a/include/linux/blktrace_api.h
+++ b/include/linux/blktrace_api.h
@@ -7,6 +7,7 @@
 #include <linux/compat.h>
 #include <uapi/linux/blktrace_api.h>
 #include <linux/list.h>
+#include <linux/blk_types.h>
 
 #if defined(CONFIG_BLK_DEV_IO_TRACE)
 
@@ -105,7 +106,7 @@ struct compat_blk_user_trace_setup {
 
 #endif
 
-void blk_fill_rwbs(char *rwbs, unsigned int op);
+void blk_fill_rwbs(char *rwbs, blk_opf_t opf);
 
 static inline sector_t blk_rq_trace_sector(struct request *rq)
 {
diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
index cee788817fb3..26688c16afd6 100644
--- a/kernel/trace/blktrace.c
+++ b/kernel/trace/blktrace.c
@@ -205,7 +205,7 @@ static const u32 ddir_act[2] = { BLK_TC_ACT(BLK_TC_READ),
 #define BLK_TC_PREFLUSH		BLK_TC_FLUSH
 
 /* The ilog2() calls fall out because they're constant */
-#define MASK_TC_BIT(rw, __name) ((rw & REQ_ ## __name) << \
+#define MASK_TC_BIT(rw, __name) ((__force u32)(rw & REQ_ ## __name) <<	\
 	  (ilog2(BLK_TC_ ## __name) + BLK_TC_SHIFT - __REQ_ ## __name))
 
 /*
@@ -213,8 +213,8 @@ static const u32 ddir_act[2] = { BLK_TC_ACT(BLK_TC_READ),
  * blk_io_trace structure and places it in a per-cpu subbuffer.
  */
 static void __blk_add_trace(struct blk_trace *bt, sector_t sector, int bytes,
-		     int op, int op_flags, u32 what, int error, int pdu_len,
-		     void *pdu_data, u64 cgid)
+			    const blk_opf_t opf, u32 what, int error,
+			    int pdu_len, void *pdu_data, u64 cgid)
 {
 	struct task_struct *tsk = current;
 	struct ring_buffer_event *event = NULL;
@@ -227,16 +227,17 @@ static void __blk_add_trace(struct blk_trace *bt, sector_t sector, int bytes,
 	int cpu;
 	bool blk_tracer = blk_tracer_enabled;
 	ssize_t cgid_len = cgid ? sizeof(cgid) : 0;
+	const enum req_op op = opf & REQ_OP_MASK;
 
 	if (unlikely(bt->trace_state != Blktrace_running && !blk_tracer))
 		return;
 
 	what |= ddir_act[op_is_write(op) ? WRITE : READ];
-	what |= MASK_TC_BIT(op_flags, SYNC);
-	what |= MASK_TC_BIT(op_flags, RAHEAD);
-	what |= MASK_TC_BIT(op_flags, META);
-	what |= MASK_TC_BIT(op_flags, PREFLUSH);
-	what |= MASK_TC_BIT(op_flags, FUA);
+	what |= MASK_TC_BIT(opf, SYNC);
+	what |= MASK_TC_BIT(opf, RAHEAD);
+	what |= MASK_TC_BIT(opf, META);
+	what |= MASK_TC_BIT(opf, PREFLUSH);
+	what |= MASK_TC_BIT(opf, FUA);
 	if (op == REQ_OP_DISCARD || op == REQ_OP_SECURE_ERASE)
 		what |= BLK_TC_ACT(BLK_TC_DISCARD);
 	if (op == REQ_OP_FLUSH)
@@ -842,9 +843,8 @@ static void blk_add_trace_rq(struct request *rq, blk_status_t error,
 	else
 		what |= BLK_TC_ACT(BLK_TC_FS);
 
-	__blk_add_trace(bt, blk_rq_trace_sector(rq), nr_bytes, req_op(rq),
-			rq->cmd_flags, what, blk_status_to_errno(error), 0,
-			NULL, cgid);
+	__blk_add_trace(bt, blk_rq_trace_sector(rq), nr_bytes, rq->cmd_flags,
+			what, blk_status_to_errno(error), 0, NULL, cgid);
 	rcu_read_unlock();
 }
 
@@ -903,7 +903,7 @@ static void blk_add_trace_bio(struct request_queue *q, struct bio *bio,
 	}
 
 	__blk_add_trace(bt, bio->bi_iter.bi_sector, bio->bi_iter.bi_size,
-			bio_op(bio), bio->bi_opf, what, error, 0, NULL,
+			bio->bi_opf, what, error, 0, NULL,
 			blk_trace_bio_get_cgid(q, bio));
 	rcu_read_unlock();
 }
@@ -949,7 +949,7 @@ static void blk_add_trace_plug(void *ignore, struct request_queue *q)
 	rcu_read_lock();
 	bt = rcu_dereference(q->blk_trace);
 	if (bt)
-		__blk_add_trace(bt, 0, 0, 0, 0, BLK_TA_PLUG, 0, 0, NULL, 0);
+		__blk_add_trace(bt, 0, 0, 0, BLK_TA_PLUG, 0, 0, NULL, 0);
 	rcu_read_unlock();
 }
 
@@ -969,7 +969,7 @@ static void blk_add_trace_unplug(void *ignore, struct request_queue *q,
 		else
 			what = BLK_TA_UNPLUG_TIMER;
 
-		__blk_add_trace(bt, 0, 0, 0, 0, what, 0, sizeof(rpdu), &rpdu, 0);
+		__blk_add_trace(bt, 0, 0, 0, what, 0, sizeof(rpdu), &rpdu, 0);
 	}
 	rcu_read_unlock();
 }
@@ -985,8 +985,7 @@ static void blk_add_trace_split(void *ignore, struct bio *bio, unsigned int pdu)
 		__be64 rpdu = cpu_to_be64(pdu);
 
 		__blk_add_trace(bt, bio->bi_iter.bi_sector,
-				bio->bi_iter.bi_size, bio_op(bio), bio->bi_opf,
-				BLK_TA_SPLIT,
+				bio->bi_iter.bi_size, bio->bi_opf, BLK_TA_SPLIT,
 				blk_status_to_errno(bio->bi_status),
 				sizeof(rpdu), &rpdu,
 				blk_trace_bio_get_cgid(q, bio));
@@ -1022,7 +1021,7 @@ static void blk_add_trace_bio_remap(void *ignore, struct bio *bio, dev_t dev,
 	r.sector_from = cpu_to_be64(from);
 
 	__blk_add_trace(bt, bio->bi_iter.bi_sector, bio->bi_iter.bi_size,
-			bio_op(bio), bio->bi_opf, BLK_TA_REMAP,
+			bio->bi_opf, BLK_TA_REMAP,
 			blk_status_to_errno(bio->bi_status),
 			sizeof(r), &r, blk_trace_bio_get_cgid(q, bio));
 	rcu_read_unlock();
@@ -1058,7 +1057,7 @@ static void blk_add_trace_rq_remap(void *ignore, struct request *rq, dev_t dev,
 	r.sector_from = cpu_to_be64(from);
 
 	__blk_add_trace(bt, blk_rq_pos(rq), blk_rq_bytes(rq),
-			req_op(rq), rq->cmd_flags, BLK_TA_REMAP, 0,
+			rq->cmd_flags, BLK_TA_REMAP, 0,
 			sizeof(r), &r, blk_trace_request_get_cgid(rq));
 	rcu_read_unlock();
 }
@@ -1084,7 +1083,7 @@ void blk_add_driver_data(struct request *rq, void *data, size_t len)
 		return;
 	}
 
-	__blk_add_trace(bt, blk_rq_trace_sector(rq), blk_rq_bytes(rq), 0, 0,
+	__blk_add_trace(bt, blk_rq_trace_sector(rq), blk_rq_bytes(rq), 0,
 				BLK_TA_DRV_DATA, 0, len, data,
 				blk_trace_request_get_cgid(rq));
 	rcu_read_unlock();
@@ -1881,14 +1880,14 @@ static ssize_t sysfs_blk_trace_attr_store(struct device *dev,
  *     caller with resulting string.
  *
  **/
-void blk_fill_rwbs(char *rwbs, unsigned int op)
+void blk_fill_rwbs(char *rwbs, blk_opf_t opf)
 {
 	int i = 0;
 
-	if (op & REQ_PREFLUSH)
+	if (opf & REQ_PREFLUSH)
 		rwbs[i++] = 'F';
 
-	switch (op & REQ_OP_MASK) {
+	switch (opf & REQ_OP_MASK) {
 	case REQ_OP_WRITE:
 		rwbs[i++] = 'W';
 		break;
@@ -1909,13 +1908,13 @@ void blk_fill_rwbs(char *rwbs, unsigned int op)
 		rwbs[i++] = 'N';
 	}
 
-	if (op & REQ_FUA)
+	if (opf & REQ_FUA)
 		rwbs[i++] = 'F';
-	if (op & REQ_RAHEAD)
+	if (opf & REQ_RAHEAD)
 		rwbs[i++] = 'A';
-	if (op & REQ_SYNC)
+	if (opf & REQ_SYNC)
 		rwbs[i++] = 'S';
-	if (op & REQ_META)
+	if (opf & REQ_META)
 		rwbs[i++] = 'M';
 
 	rwbs[i] = '\0';
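
A minimal caller sketch (not part of the patch): after this change a
single blk_opf_t value carries both the operation and the flags that
blk_fill_rwbs() decodes.

  #include <linux/blktrace_api.h>

  static void sketch_rwbs(void)
  {
          char rwbs[8];   /* same size the tracing code uses (RWBS_LEN) */

          blk_fill_rwbs(rwbs, REQ_OP_WRITE | REQ_FUA | REQ_SYNC);
          /* rwbs now holds "WFS": write, FUA, sync */
  }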


* [PATCH v2 12/63] block/brd: Use the enum req_op type
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (10 preceding siblings ...)
  2022-06-29 23:30 ` [PATCH v2 11/63] blktrace: Use the new blk_opf_t type Bart Van Assche
@ 2022-06-29 23:30 ` Bart Van Assche
  2022-06-29 23:30 ` [PATCH v2 13/63] block/drbd: Use the enum req_op and blk_opf_t types Bart Van Assche
                   ` (51 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:30 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche, Ming Lei

Improve static type checking by using the enum req_op type for a
function argument that represents a request operation.

Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/block/brd.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/block/brd.c b/drivers/block/brd.c
index 7b82876af36e..859499cd1ff8 100644
--- a/drivers/block/brd.c
+++ b/drivers/block/brd.c
@@ -256,7 +256,7 @@ static void copy_from_brd(void *dst, struct brd_device *brd,
  * Process a single bvec of a bio.
  */
 static int brd_do_bvec(struct brd_device *brd, struct page *page,
-			unsigned int len, unsigned int off, unsigned int op,
+			unsigned int len, unsigned int off, enum req_op op,
 			sector_t sector)
 {
 	void *mem;
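
A caller-side sketch (the call site below is an assumption, it is not part
of this hunk): bio_op() also returns enum req_op after the earlier patches
in this series, so the operation keeps its type from the bio all the way
into brd_do_bvec().

  /* Sketch only; mirrors how brd walks a bio's segments. */
  static void sketch_brd_submit(struct brd_device *brd, struct bio *bio)
  {
          struct bio_vec bvec;
          struct bvec_iter iter;

          bio_for_each_segment(bvec, bio, iter) {
                  if (brd_do_bvec(brd, bvec.bv_page, bvec.bv_len,
                                  bvec.bv_offset, bio_op(bio),
                                  iter.bi_sector))
                          break;
          }
  }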


* [PATCH v2 13/63] block/drbd: Use the enum req_op and blk_opf_t types
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (11 preceding siblings ...)
  2022-06-29 23:30 ` [PATCH v2 12/63] block/brd: Use the enum req_op type Bart Van Assche
@ 2022-06-29 23:30 ` Bart Van Assche
  2022-06-29 23:30 ` [PATCH v2 14/63] block/drbd: Combine two drbd_submit_peer_request() arguments Bart Van Assche
                   ` (50 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:30 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche,
	Christoph Böhmwalder, Lars Ellenberg, Philipp Reisner

Improve static type checking by using the enum req_op type for variables
that represent a request operation and the new blk_opf_t type for
variables that represent request flags.

Reviewed-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
Cc: Lars Ellenberg <lars.ellenberg@linbit.com>
Cc: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/block/drbd/drbd_actlog.c   |  9 +++++----
 drivers/block/drbd/drbd_bitmap.c   |  3 ++-
 drivers/block/drbd/drbd_int.h      |  6 +++---
 drivers/block/drbd/drbd_receiver.c | 11 ++++++-----
 4 files changed, 16 insertions(+), 13 deletions(-)

diff --git a/drivers/block/drbd/drbd_actlog.c b/drivers/block/drbd/drbd_actlog.c
index f5bcded3640d..e27478ae579c 100644
--- a/drivers/block/drbd/drbd_actlog.c
+++ b/drivers/block/drbd/drbd_actlog.c
@@ -124,12 +124,13 @@ void wait_until_done_or_force_detached(struct drbd_device *device, struct drbd_b
 
 static int _drbd_md_sync_page_io(struct drbd_device *device,
 				 struct drbd_backing_dev *bdev,
-				 sector_t sector, int op)
+				 sector_t sector, enum req_op op)
 {
 	struct bio *bio;
 	/* we do all our meta data IO in aligned 4k blocks. */
 	const int size = 4096;
-	int err, op_flags = 0;
+	int err;
+	blk_opf_t op_flags = 0;
 
 	device->md_io.done = 0;
 	device->md_io.error = -ENODEV;
@@ -174,7 +175,7 @@ static int _drbd_md_sync_page_io(struct drbd_device *device,
 }
 
 int drbd_md_sync_page_io(struct drbd_device *device, struct drbd_backing_dev *bdev,
-			 sector_t sector, int op)
+			 sector_t sector, enum req_op op)
 {
 	int err;
 	D_ASSERT(device, atomic_read(&device->md_io.in_use) == 1);
@@ -385,7 +386,7 @@ static int __al_write_transaction(struct drbd_device *device, struct al_transact
 		write_al_updates = rcu_dereference(device->ldev->disk_conf)->al_updates;
 		rcu_read_unlock();
 		if (write_al_updates) {
-			if (drbd_md_sync_page_io(device, device->ldev, sector, WRITE)) {
+			if (drbd_md_sync_page_io(device, device->ldev, sector, REQ_OP_WRITE)) {
 				err = -EIO;
 				drbd_chk_io_error(device, 1, DRBD_META_IO_ERROR);
 			} else {
diff --git a/drivers/block/drbd/drbd_bitmap.c b/drivers/block/drbd/drbd_bitmap.c
index bd2133ef6e0a..0845f28a48a7 100644
--- a/drivers/block/drbd/drbd_bitmap.c
+++ b/drivers/block/drbd/drbd_bitmap.c
@@ -990,7 +990,8 @@ static inline sector_t drbd_md_last_bitmap_sector(struct drbd_backing_dev *bdev)
 static void bm_page_io_async(struct drbd_bm_aio_ctx *ctx, int page_nr) __must_hold(local)
 {
 	struct drbd_device *device = ctx->device;
-	unsigned int op = (ctx->flags & BM_AIO_READ) ? REQ_OP_READ : REQ_OP_WRITE;
+	const enum req_op op = ctx->flags & BM_AIO_READ ? REQ_OP_READ :
+		REQ_OP_WRITE;
 	struct drbd_bitmap *b = device->bitmap;
 	struct bio *bio;
 	struct page *page;
diff --git a/drivers/block/drbd/drbd_int.h b/drivers/block/drbd/drbd_int.h
index 4d3efaa20b7b..ecb2ecd8d67d 100644
--- a/drivers/block/drbd/drbd_int.h
+++ b/drivers/block/drbd/drbd_int.h
@@ -1495,7 +1495,7 @@ extern int drbd_resync_finished(struct drbd_device *device);
 extern void *drbd_md_get_buffer(struct drbd_device *device, const char *intent);
 extern void drbd_md_put_buffer(struct drbd_device *device);
 extern int drbd_md_sync_page_io(struct drbd_device *device,
-		struct drbd_backing_dev *bdev, sector_t sector, int op);
+		struct drbd_backing_dev *bdev, sector_t sector, enum req_op op);
 extern void drbd_ov_out_of_sync_found(struct drbd_device *, sector_t, int);
 extern void wait_until_done_or_force_detached(struct drbd_device *device,
 		struct drbd_backing_dev *bdev, unsigned int *done);
@@ -1547,8 +1547,8 @@ extern bool drbd_rs_c_min_rate_throttle(struct drbd_device *device);
 extern bool drbd_rs_should_slow_down(struct drbd_device *device, sector_t sector,
 		bool throttle_if_app_is_waiting);
 extern int drbd_submit_peer_request(struct drbd_device *,
-				    struct drbd_peer_request *, const unsigned,
-				    const unsigned, const int);
+				    struct drbd_peer_request *, enum req_op,
+				    blk_opf_t, int);
 extern int drbd_free_peer_reqs(struct drbd_device *, struct list_head *);
 extern struct drbd_peer_request *drbd_alloc_peer_req(struct drbd_peer_device *, u64,
 						     sector_t, unsigned int,
diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c
index 6762be53f409..caf646dd91ba 100644
--- a/drivers/block/drbd/drbd_receiver.c
+++ b/drivers/block/drbd/drbd_receiver.c
@@ -1621,7 +1621,7 @@ static void drbd_issue_peer_discard_or_zero_out(struct drbd_device *device, stru
 /* TODO allocate from our own bio_set. */
 int drbd_submit_peer_request(struct drbd_device *device,
 			     struct drbd_peer_request *peer_req,
-			     const unsigned op, const unsigned op_flags,
+			     const enum req_op op, const blk_opf_t op_flags,
 			     const int fault_type)
 {
 	struct bio *bios = NULL;
@@ -2383,14 +2383,14 @@ static int wait_for_and_update_peer_seq(struct drbd_peer_device *peer_device, co
 /* see also bio_flags_to_wire()
  * DRBD_REQ_*, because we need to semantically map the flags to data packet
  * flags and back. We may replicate to other kernel versions. */
-static unsigned long wire_flags_to_bio_flags(u32 dpf)
+static blk_opf_t wire_flags_to_bio_flags(u32 dpf)
 {
 	return  (dpf & DP_RW_SYNC ? REQ_SYNC : 0) |
 		(dpf & DP_FUA ? REQ_FUA : 0) |
 		(dpf & DP_FLUSH ? REQ_PREFLUSH : 0);
 }
 
-static unsigned long wire_flags_to_bio_op(u32 dpf)
+static enum req_op wire_flags_to_bio_op(u32 dpf)
 {
 	if (dpf & DP_ZEROES)
 		return REQ_OP_WRITE_ZEROES;
@@ -2543,7 +2543,8 @@ static int receive_Data(struct drbd_connection *connection, struct packet_info *
 	struct drbd_peer_request *peer_req;
 	struct p_data *p = pi->data;
 	u32 peer_seq = be32_to_cpu(p->seq_num);
-	int op, op_flags;
+	enum req_op op;
+	blk_opf_t op_flags;
 	u32 dp_flags;
 	int err, tp;
 
@@ -4951,7 +4952,7 @@ static int receive_rs_deallocated(struct drbd_connection *connection, struct pac
 
 	if (get_ldev(device)) {
 		struct drbd_peer_request *peer_req;
-		const int op = REQ_OP_WRITE_ZEROES;
+		const enum req_op op = REQ_OP_WRITE_ZEROES;
 
 		peer_req = drbd_alloc_peer_req(peer_device, ID_SYNCER, sector,
 					       size, 0, GFP_NOIO);
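
A sketch of the receive path that consumes these helpers (the call sites
are assumptions based on the declarations above, not quoted from the
patch): the peer's wire flags are split into a typed operation and typed
flags before the peer request is submitted.

  /* Sketch only; assumes it sits in drbd_receiver.c. */
  static void sketch_decode_wire_flags(u32 dp_flags, enum req_op *op,
                                       blk_opf_t *op_flags)
  {
          *op = wire_flags_to_bio_op(dp_flags);
          *op_flags = wire_flags_to_bio_flags(dp_flags);
  }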


* [PATCH v2 14/63] block/drbd: Combine two drbd_submit_peer_request() arguments
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (12 preceding siblings ...)
  2022-06-29 23:30 ` [PATCH v2 13/63] block/drbd: Use the enum req_op and blk_opf_t types Bart Van Assche
@ 2022-06-29 23:30 ` Bart Van Assche
  2022-07-05 19:53   ` Christoph Böhmwalder
  2022-06-29 23:30 ` [PATCH v2 15/63] block/floppy: Fix a sparse warning Bart Van Assche
                   ` (49 subsequent siblings)
  63 siblings, 1 reply; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:30 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche,
	Christoph Böhmwalder, Lars Ellenberg, Philipp Reisner

Combine the drbd_submit_peer_request() 'op' and 'op_flags' arguments
into a single argument. This patch does not change any functionality.

Cc: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
Cc: Lars Ellenberg <lars.ellenberg@linbit.com>
Cc: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/block/drbd/drbd_int.h      |  3 +--
 drivers/block/drbd/drbd_receiver.c | 15 +++++++--------
 drivers/block/drbd/drbd_worker.c   |  2 +-
 3 files changed, 9 insertions(+), 11 deletions(-)

diff --git a/drivers/block/drbd/drbd_int.h b/drivers/block/drbd/drbd_int.h
index ecb2ecd8d67d..f15f2f041596 100644
--- a/drivers/block/drbd/drbd_int.h
+++ b/drivers/block/drbd/drbd_int.h
@@ -1547,8 +1547,7 @@ extern bool drbd_rs_c_min_rate_throttle(struct drbd_device *device);
 extern bool drbd_rs_should_slow_down(struct drbd_device *device, sector_t sector,
 		bool throttle_if_app_is_waiting);
 extern int drbd_submit_peer_request(struct drbd_device *,
-				    struct drbd_peer_request *, enum req_op,
-				    blk_opf_t, int);
+				    struct drbd_peer_request *, blk_opf_t, int);
 extern int drbd_free_peer_reqs(struct drbd_device *, struct list_head *);
 extern struct drbd_peer_request *drbd_alloc_peer_req(struct drbd_peer_device *, u64,
 						     sector_t, unsigned int,
diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c
index caf646dd91ba..af4c7d65490b 100644
--- a/drivers/block/drbd/drbd_receiver.c
+++ b/drivers/block/drbd/drbd_receiver.c
@@ -1621,8 +1621,7 @@ static void drbd_issue_peer_discard_or_zero_out(struct drbd_device *device, stru
 /* TODO allocate from our own bio_set. */
 int drbd_submit_peer_request(struct drbd_device *device,
 			     struct drbd_peer_request *peer_req,
-			     const enum req_op op, const blk_opf_t op_flags,
-			     const int fault_type)
+			     const blk_opf_t opf, const int fault_type)
 {
 	struct bio *bios = NULL;
 	struct bio *bio;
@@ -1668,8 +1667,7 @@ int drbd_submit_peer_request(struct drbd_device *device,
 	 * generated bio, but a bio allocated on behalf of the peer.
 	 */
 next_bio:
-	bio = bio_alloc(device->ldev->backing_bdev, nr_pages, op | op_flags,
-			GFP_NOIO);
+	bio = bio_alloc(device->ldev->backing_bdev, nr_pages, opf, GFP_NOIO);
 	/* > peer_req->i.sector, unless this is the first bio */
 	bio->bi_iter.bi_sector = sector;
 	bio->bi_private = peer_req;
@@ -2060,7 +2058,7 @@ static int recv_resync_read(struct drbd_peer_device *peer_device, sector_t secto
 	spin_unlock_irq(&device->resource->req_lock);
 
 	atomic_add(pi->size >> 9, &device->rs_sect_ev);
-	if (drbd_submit_peer_request(device, peer_req, REQ_OP_WRITE, 0,
+	if (drbd_submit_peer_request(device, peer_req, REQ_OP_WRITE,
 				     DRBD_FAULT_RS_WR) == 0)
 		return 0;
 
@@ -2682,7 +2680,7 @@ static int receive_Data(struct drbd_connection *connection, struct packet_info *
 		peer_req->flags |= EE_CALL_AL_COMPLETE_IO;
 	}
 
-	err = drbd_submit_peer_request(device, peer_req, op, op_flags,
+	err = drbd_submit_peer_request(device, peer_req, op | op_flags,
 				       DRBD_FAULT_DT_WR);
 	if (!err)
 		return 0;
@@ -2980,7 +2978,7 @@ static int receive_DataRequest(struct drbd_connection *connection, struct packet
 submit:
 	update_receiver_timing_details(connection, drbd_submit_peer_request);
 	inc_unacked(device);
-	if (drbd_submit_peer_request(device, peer_req, REQ_OP_READ, 0,
+	if (drbd_submit_peer_request(device, peer_req, REQ_OP_READ,
 				     fault_type) == 0)
 		return 0;
 
@@ -4970,7 +4968,8 @@ static int receive_rs_deallocated(struct drbd_connection *connection, struct pac
 		spin_unlock_irq(&device->resource->req_lock);
 
 		atomic_add(pi->size >> 9, &device->rs_sect_ev);
-		err = drbd_submit_peer_request(device, peer_req, op, 0, DRBD_FAULT_RS_WR);
+		err = drbd_submit_peer_request(device, peer_req, op,
+					       DRBD_FAULT_RS_WR);
 
 		if (err) {
 			spin_lock_irq(&device->resource->req_lock);
diff --git a/drivers/block/drbd/drbd_worker.c b/drivers/block/drbd/drbd_worker.c
index af3051dd8912..0bb1a900c2d5 100644
--- a/drivers/block/drbd/drbd_worker.c
+++ b/drivers/block/drbd/drbd_worker.c
@@ -405,7 +405,7 @@ static int read_for_csum(struct drbd_peer_device *peer_device, sector_t sector,
 	spin_unlock_irq(&device->resource->req_lock);
 
 	atomic_add(size >> 9, &device->rs_sect_ev);
-	if (drbd_submit_peer_request(device, peer_req, REQ_OP_READ, 0,
+	if (drbd_submit_peer_request(device, peer_req, REQ_OP_READ,
 				     DRBD_FAULT_RS_RD) == 0)
 		return 0;
 


* [PATCH v2 15/63] block/floppy: Fix a sparse warning
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (13 preceding siblings ...)
  2022-06-29 23:30 ` [PATCH v2 14/63] block/drbd: Combine two drbd_submit_peer_request() arguments Bart Van Assche
@ 2022-06-29 23:30 ` Bart Van Assche
  2022-06-29 23:30 ` [PATCH v2 16/63] block/rnbd: Use blk_opf_t where appropriate Bart Van Assche
                   ` (48 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:30 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche, Denis Efremov

Since the type of request.cmd_flags has been changed from u32 to
blk_opf_t, use the __force keyword when casting it to an integer type so
that sparse does not warn about the cast.

Cc: Denis Efremov <efremov@linux.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/block/floppy.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
index 491e7205a0db..ccad3d7b3ddd 100644
--- a/drivers/block/floppy.c
+++ b/drivers/block/floppy.c
@@ -2859,7 +2859,7 @@ static blk_status_t floppy_queue_rq(struct blk_mq_hw_ctx *hctx,
 	if (WARN(atomic_read(&usage_count) == 0,
 		 "warning: usage count=0, current_req=%p sect=%ld flags=%llx\n",
 		 current_req, (long)blk_rq_pos(current_req),
-		 (unsigned long long) current_req->cmd_flags))
+		 (__force unsigned long long) current_req->cmd_flags))
 		return BLK_STS_IOERR;
 
 	if (test_and_set_bit(0, &fdc_busy)) {
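
The same pattern applies anywhere a __bitwise value reaches a printk-style
format (sketch, not from the patch): the format expects a plain integer
type, so the value is converted back explicitly.

  /* Hypothetical helper, for illustration only. */
  static void sketch_dump_cmd_flags(const struct request *rq)
  {
          pr_info("cmd_flags=0x%llx\n",
                  (__force unsigned long long)rq->cmd_flags);
  }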


* [PATCH v2 16/63] block/rnbd: Use blk_opf_t where appropriate
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (14 preceding siblings ...)
  2022-06-29 23:30 ` [PATCH v2 15/63] block/floppy: Fix a sparse warning Bart Van Assche
@ 2022-06-29 23:30 ` Bart Van Assche
  2022-07-01  4:47   ` Jinpu Wang
  2022-06-29 23:30 ` [PATCH v2 17/63] xen-blkback: Use the enum req_op and blk_opf_t types Bart Van Assche
                   ` (47 subsequent siblings)
  63 siblings, 1 reply; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:30 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche,
	Md . Haris Iqbal, Jack Wang

Improve static type checking by using the new blk_opf_t type to represent
the combination of a request operation and request flags.

Cc: Md. Haris Iqbal <haris.iqbal@ionos.com>
Cc: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/block/rnbd/rnbd-proto.h | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/block/rnbd/rnbd-proto.h b/drivers/block/rnbd/rnbd-proto.h
index bfb08dd434d1..ea7ac8bca63c 100644
--- a/drivers/block/rnbd/rnbd-proto.h
+++ b/drivers/block/rnbd/rnbd-proto.h
@@ -229,9 +229,9 @@ static inline bool rnbd_flags_supported(u32 flags)
 	return true;
 }
 
-static inline u32 rnbd_to_bio_flags(u32 rnbd_opf)
+static inline blk_opf_t rnbd_to_bio_flags(u32 rnbd_opf)
 {
-	u32 bio_opf;
+	blk_opf_t bio_opf;
 
 	switch (rnbd_op(rnbd_opf)) {
 	case RNBD_OP_READ:
@@ -286,7 +286,8 @@ static inline u32 rq_to_rnbd_flags(struct request *rq)
 		break;
 	default:
 		WARN(1, "Unknown request type %d (flags %llu)\n",
-		     req_op(rq), (unsigned long long)rq->cmd_flags);
+		     (__force u32)req_op(rq),
+		     (__force unsigned long long)rq->cmd_flags);
 		rnbd_opf = 0;
 	}
 


* [PATCH v2 17/63] xen-blkback: Use the enum req_op and blk_opf_t types
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (15 preceding siblings ...)
  2022-06-29 23:30 ` [PATCH v2 16/63] block/rnbd: Use blk_opf_t where appropriate Bart Van Assche
@ 2022-06-29 23:30 ` Bart Van Assche
  2022-06-30  7:17   ` Roger Pau Monné
  2022-06-29 23:31 ` [PATCH v2 18/63] block/zram: Use enum req_op where appropriate Bart Van Assche
                   ` (46 subsequent siblings)
  63 siblings, 1 reply; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:30 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Roger Pau Monné

Improve static type checking by using the enum req_op type for request
operations and the new blk_opf_t type for request flags.

Cc: Roger Pau Monné <roger.pau@citrix.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/block/xen-blkback/blkback.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index a97f2bf5b01b..a5cf7f1e871c 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -442,7 +442,7 @@ static void free_req(struct xen_blkif_ring *ring, struct pending_req *req)
  * Routines for managing virtual block devices (vbds).
  */
 static int xen_vbd_translate(struct phys_req *req, struct xen_blkif *blkif,
-			     int operation)
+			     enum req_op operation)
 {
 	struct xen_vbd *vbd = &blkif->vbd;
 	int rc = -EACCES;
@@ -1193,8 +1193,8 @@ static int dispatch_rw_block_io(struct xen_blkif_ring *ring,
 	struct bio *bio = NULL;
 	struct bio **biolist = pending_req->biolist;
 	int i, nbio = 0;
-	int operation;
-	int operation_flags = 0;
+	enum req_op operation;
+	blk_opf_t operation_flags = 0;
 	struct blk_plug plug;
 	bool drain = false;
 	struct grant_page **pages = pending_req->segments;


* [PATCH v2 18/63] block/zram: Use enum req_op where appropriate
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (16 preceding siblings ...)
  2022-06-29 23:30 ` [PATCH v2 17/63] xen-blkback: Use the enum req_op and blk_opf_t types Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 19/63] nvdimm-btt: Use the enum req_op type Bart Van Assche
                   ` (45 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche, Minchan Kim

Improve static type checking by using the enum req_op type where
appropriate.

Cc: Minchan Kim <minchan@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/block/zram/zram_drv.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index a35b86c58aa2..4abeb261b833 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1523,7 +1523,7 @@ static void zram_bio_discard(struct zram *zram, u32 index,
  * Returns 1 if IO request was successfully submitted.
  */
 static int zram_bvec_rw(struct zram *zram, struct bio_vec *bvec, u32 index,
-			int offset, unsigned int op, struct bio *bio)
+			int offset, enum req_op op, struct bio *bio)
 {
 	int ret;
 


* [PATCH v2 19/63] nvdimm-btt: Use the enum req_op type
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (17 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 18/63] block/zram: Use enum req_op where appropriate Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 20/63] um: Use enum req_op where appropriate Bart Van Assche
                   ` (44 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche, Dan Williams

Improve static type checking by using the enum req_op type where
appropriate.

Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/nvdimm/btt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/nvdimm/btt.c b/drivers/nvdimm/btt.c
index dfbf73145d16..0297b7882e33 100644
--- a/drivers/nvdimm/btt.c
+++ b/drivers/nvdimm/btt.c
@@ -1422,7 +1422,7 @@ static int btt_write_pg(struct btt *btt, struct bio_integrity_payload *bip,
 
 static int btt_do_bvec(struct btt *btt, struct bio_integrity_payload *bip,
 			struct page *page, unsigned int len, unsigned int off,
-			unsigned int op, sector_t sector)
+			enum req_op op, sector_t sector)
 {
 	int ret;
 


* [PATCH v2 20/63] um: Use enum req_op where appropriate
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (18 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 19/63] nvdimm-btt: Use the enum req_op type Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 21/63] dm/core: Reduce the size of struct dm_io_request Bart Van Assche
                   ` (43 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche,
	Richard Weinberger, Anton Ivanov

Improve static type checking by using the enum req_op type instead of int
where appropriate.

Cc: Richard Weinberger <richard@nod.at>
Cc: Anton Ivanov <anton.ivanov@cambridgegreys.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 arch/um/drivers/ubd_kern.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c
index 479b79e11442..eb2d2f0f0bcc 100644
--- a/arch/um/drivers/ubd_kern.c
+++ b/arch/um/drivers/ubd_kern.c
@@ -1262,7 +1262,7 @@ static void ubd_map_req(struct ubd *dev, struct io_thread_req *io_req,
 	struct req_iterator iter;
 	int i = 0;
 	unsigned long byte_offset = io_req->offset;
-	int op = req_op(req);
+	enum req_op op = req_op(req);
 
 	if (op == REQ_OP_WRITE_ZEROES || op == REQ_OP_DISCARD) {
 		io_req->io_desc[0].buffer = NULL;
@@ -1325,7 +1325,7 @@ static int ubd_submit_request(struct ubd *dev, struct request *req)
 	int segs = 0;
 	struct io_thread_req *io_req;
 	int ret;
-	int op = req_op(req);
+	enum req_op op = req_op(req);
 
 	if (op == REQ_OP_FLUSH)
 		segs = 0;


* [PATCH v2 21/63] dm/core: Reduce the size of struct dm_io_request
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (19 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 20/63] um: Use enum req_op where appropriate Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 22/63] dm/core: Rename kcopyd_job.rw into kcopyd.op Bart Van Assche
                   ` (42 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Alasdair Kergon,
	Mike Snitzer

Combine the bi_op and bi_op_flags members into a single bi_opf member. Use
the new blk_opf_t type to improve static type checking. This patch does not
change any functionality.

Cc: Alasdair Kergon <agk@redhat.com>
Cc: Mike Snitzer <snitzer@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/md/dm-bufio.c           |  9 +++------
 drivers/md/dm-integrity.c       | 15 +++++----------
 drivers/md/dm-io.c              | 10 ++++++----
 drivers/md/dm-kcopyd.c          |  3 +--
 drivers/md/dm-log.c             |  6 ++----
 drivers/md/dm-raid1.c           | 12 +++++-------
 drivers/md/dm-snap-persistent.c |  3 +--
 drivers/md/dm-writecache.c      | 12 ++++--------
 include/linux/dm-io.h           |  4 ++--
 9 files changed, 29 insertions(+), 45 deletions(-)

diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
index 5ffa1dcf84cf..1b7acda45c78 100644
--- a/drivers/md/dm-bufio.c
+++ b/drivers/md/dm-bufio.c
@@ -582,8 +582,7 @@ static void use_dmio(struct dm_buffer *b, int rw, sector_t sector,
 {
 	int r;
 	struct dm_io_request io_req = {
-		.bi_op = rw,
-		.bi_op_flags = 0,
+		.bi_opf = rw,
 		.notify.fn = dmio_complete,
 		.notify.context = b,
 		.client = b->c->dm_io,
@@ -1341,8 +1340,7 @@ EXPORT_SYMBOL_GPL(dm_bufio_write_dirty_buffers);
 int dm_bufio_issue_flush(struct dm_bufio_client *c)
 {
 	struct dm_io_request io_req = {
-		.bi_op = REQ_OP_WRITE,
-		.bi_op_flags = REQ_PREFLUSH | REQ_SYNC,
+		.bi_opf = REQ_OP_WRITE | REQ_PREFLUSH | REQ_SYNC,
 		.mem.type = DM_IO_KMEM,
 		.mem.ptr.addr = NULL,
 		.client = c->dm_io,
@@ -1365,8 +1363,7 @@ EXPORT_SYMBOL_GPL(dm_bufio_issue_flush);
 int dm_bufio_issue_discard(struct dm_bufio_client *c, sector_t block, sector_t count)
 {
 	struct dm_io_request io_req = {
-		.bi_op = REQ_OP_DISCARD,
-		.bi_op_flags = REQ_SYNC,
+		.bi_opf = REQ_OP_DISCARD | REQ_SYNC,
 		.mem.type = DM_IO_KMEM,
 		.mem.ptr.addr = NULL,
 		.client = c->dm_io,
diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
index 148978ad03a8..2ccc103dea1e 100644
--- a/drivers/md/dm-integrity.c
+++ b/drivers/md/dm-integrity.c
@@ -557,8 +557,7 @@ static int sync_rw_sb(struct dm_integrity_c *ic, int op, int op_flags)
 	struct dm_io_region io_loc;
 	int r;
 
-	io_req.bi_op = op;
-	io_req.bi_op_flags = op_flags;
+	io_req.bi_opf = op | op_flags;
 	io_req.mem.type = DM_IO_KMEM;
 	io_req.mem.ptr.addr = ic->sb;
 	io_req.notify.fn = NULL;
@@ -1067,8 +1066,7 @@ static void rw_journal_sectors(struct dm_integrity_c *ic, int op, int op_flags,
 	pl_index = sector >> (PAGE_SHIFT - SECTOR_SHIFT);
 	pl_offset = (sector << SECTOR_SHIFT) & (PAGE_SIZE - 1);
 
-	io_req.bi_op = op;
-	io_req.bi_op_flags = op_flags;
+	io_req.bi_opf = op | op_flags;
 	io_req.mem.type = DM_IO_PAGE_LIST;
 	if (ic->journal_io)
 		io_req.mem.ptr.pl = &ic->journal_io[pl_index];
@@ -1188,8 +1186,7 @@ static void copy_from_journal(struct dm_integrity_c *ic, unsigned section, unsig
 	pl_index = sector >> (PAGE_SHIFT - SECTOR_SHIFT);
 	pl_offset = (sector << SECTOR_SHIFT) & (PAGE_SIZE - 1);
 
-	io_req.bi_op = REQ_OP_WRITE;
-	io_req.bi_op_flags = 0;
+	io_req.bi_opf = REQ_OP_WRITE;
 	io_req.mem.type = DM_IO_PAGE_LIST;
 	io_req.mem.ptr.pl = &ic->journal[pl_index];
 	io_req.mem.offset = pl_offset;
@@ -1516,8 +1513,7 @@ static void dm_integrity_flush_buffers(struct dm_integrity_c *ic, bool flush_dat
 	if (!ic->meta_dev)
 		flush_data = false;
 	if (flush_data) {
-		fr.io_req.bi_op = REQ_OP_WRITE,
-		fr.io_req.bi_op_flags = REQ_PREFLUSH | REQ_SYNC,
+		fr.io_req.bi_opf = REQ_OP_WRITE | REQ_PREFLUSH | REQ_SYNC,
 		fr.io_req.mem.type = DM_IO_KMEM,
 		fr.io_req.mem.ptr.addr = NULL,
 		fr.io_req.notify.fn = flush_notify,
@@ -2706,8 +2702,7 @@ static void integrity_recalc(struct work_struct *w)
 	if (unlikely(dm_integrity_failed(ic)))
 		goto err;
 
-	io_req.bi_op = REQ_OP_READ;
-	io_req.bi_op_flags = 0;
+	io_req.bi_opf = REQ_OP_READ;
 	io_req.mem.type = DM_IO_VMA;
 	io_req.mem.ptr.addr = ic->recalc_buffer;
 	io_req.notify.fn = NULL;
diff --git a/drivers/md/dm-io.c b/drivers/md/dm-io.c
index e4b95eaeec8c..0606e00d1817 100644
--- a/drivers/md/dm-io.c
+++ b/drivers/md/dm-io.c
@@ -489,7 +489,7 @@ static int dp_init(struct dm_io_request *io_req, struct dpages *dp,
 
 	case DM_IO_VMA:
 		flush_kernel_vmap_range(io_req->mem.ptr.vma, size);
-		if (io_req->bi_op == REQ_OP_READ) {
+		if ((io_req->bi_opf & REQ_OP_MASK) == REQ_OP_READ) {
 			dp->vma_invalidate_address = io_req->mem.ptr.vma;
 			dp->vma_invalidate_size = size;
 		}
@@ -519,11 +519,13 @@ int dm_io(struct dm_io_request *io_req, unsigned num_regions,
 
 	if (!io_req->notify.fn)
 		return sync_io(io_req->client, num_regions, where,
-			       io_req->bi_op, io_req->bi_op_flags, &dp,
+			       io_req->bi_opf & REQ_OP_MASK,
+			       io_req->bi_opf & ~REQ_OP_MASK, &dp,
 			       sync_error_bits);
 
-	return async_io(io_req->client, num_regions, where, io_req->bi_op,
-			io_req->bi_op_flags, &dp, io_req->notify.fn,
+	return async_io(io_req->client, num_regions, where,
+			io_req->bi_opf & REQ_OP_MASK,
+			io_req->bi_opf & ~REQ_OP_MASK, &dp, io_req->notify.fn,
 			io_req->notify.context);
 }
 EXPORT_SYMBOL(dm_io);
diff --git a/drivers/md/dm-kcopyd.c b/drivers/md/dm-kcopyd.c
index 37b03ab7e5c9..a99b994e2b62 100644
--- a/drivers/md/dm-kcopyd.c
+++ b/drivers/md/dm-kcopyd.c
@@ -549,8 +549,7 @@ static int run_io_job(struct kcopyd_job *job)
 {
 	int r;
 	struct dm_io_request io_req = {
-		.bi_op = job->rw,
-		.bi_op_flags = 0,
+		.bi_opf = job->rw,
 		.mem.type = DM_IO_PAGE_LIST,
 		.mem.ptr.pl = job->pages,
 		.mem.offset = 0,
diff --git a/drivers/md/dm-log.c b/drivers/md/dm-log.c
index 0c6620e7b7bf..56ad13f9347b 100644
--- a/drivers/md/dm-log.c
+++ b/drivers/md/dm-log.c
@@ -293,8 +293,7 @@ static void header_from_disk(struct log_header_core *core, struct log_header_dis
 
 static int rw_header(struct log_c *lc, int op)
 {
-	lc->io_req.bi_op = op;
-	lc->io_req.bi_op_flags = 0;
+	lc->io_req.bi_opf = op;
 
 	return dm_io(&lc->io_req, 1, &lc->header_location, NULL);
 }
@@ -307,8 +306,7 @@ static int flush_header(struct log_c *lc)
 		.count = 0,
 	};
 
-	lc->io_req.bi_op = REQ_OP_WRITE;
-	lc->io_req.bi_op_flags = REQ_PREFLUSH;
+	lc->io_req.bi_opf = REQ_OP_WRITE | REQ_PREFLUSH;
 
 	return dm_io(&lc->io_req, 1, &null_location, NULL);
 }
diff --git a/drivers/md/dm-raid1.c b/drivers/md/dm-raid1.c
index 8811d484fdd1..06a38dc32025 100644
--- a/drivers/md/dm-raid1.c
+++ b/drivers/md/dm-raid1.c
@@ -260,8 +260,7 @@ static int mirror_flush(struct dm_target *ti)
 	struct dm_io_region io[MAX_NR_MIRRORS];
 	struct mirror *m;
 	struct dm_io_request io_req = {
-		.bi_op = REQ_OP_WRITE,
-		.bi_op_flags = REQ_PREFLUSH | REQ_SYNC,
+		.bi_opf = REQ_OP_WRITE | REQ_PREFLUSH | REQ_SYNC,
 		.mem.type = DM_IO_KMEM,
 		.mem.ptr.addr = NULL,
 		.client = ms->io_client,
@@ -535,8 +534,7 @@ static void read_async_bio(struct mirror *m, struct bio *bio)
 {
 	struct dm_io_region io;
 	struct dm_io_request io_req = {
-		.bi_op = REQ_OP_READ,
-		.bi_op_flags = 0,
+		.bi_opf = REQ_OP_READ,
 		.mem.type = DM_IO_BIO,
 		.mem.ptr.bio = bio,
 		.notify.fn = read_callback,
@@ -648,9 +646,9 @@ static void do_write(struct mirror_set *ms, struct bio *bio)
 	unsigned int i;
 	struct dm_io_region io[MAX_NR_MIRRORS], *dest = io;
 	struct mirror *m;
+	blk_opf_t op_flags = bio->bi_opf & (REQ_FUA | REQ_PREFLUSH);
 	struct dm_io_request io_req = {
-		.bi_op = REQ_OP_WRITE,
-		.bi_op_flags = bio->bi_opf & (REQ_FUA | REQ_PREFLUSH),
+		.bi_opf = REQ_OP_WRITE | op_flags,
 		.mem.type = DM_IO_BIO,
 		.mem.ptr.bio = bio,
 		.notify.fn = write_callback,
@@ -659,7 +657,7 @@ static void do_write(struct mirror_set *ms, struct bio *bio)
 	};
 
 	if (bio_op(bio) == REQ_OP_DISCARD) {
-		io_req.bi_op = REQ_OP_DISCARD;
+		io_req.bi_opf = REQ_OP_DISCARD | op_flags;
 		io_req.mem.type = DM_IO_KMEM;
 		io_req.mem.ptr.addr = NULL;
 	}
diff --git a/drivers/md/dm-snap-persistent.c b/drivers/md/dm-snap-persistent.c
index 3bb5cff5d6fc..eaf969de3d3a 100644
--- a/drivers/md/dm-snap-persistent.c
+++ b/drivers/md/dm-snap-persistent.c
@@ -235,8 +235,7 @@ static int chunk_io(struct pstore *ps, void *area, chunk_t chunk, int op,
 		.count = ps->store->chunk_size,
 	};
 	struct dm_io_request io_req = {
-		.bi_op = op,
-		.bi_op_flags = op_flags,
+		.bi_opf = op | op_flags,
 		.mem.type = DM_IO_VMA,
 		.mem.ptr.vma = area,
 		.client = ps->io_client,
diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c
index d74c5a7a0ab4..2b994b3e22a7 100644
--- a/drivers/md/dm-writecache.c
+++ b/drivers/md/dm-writecache.c
@@ -523,8 +523,7 @@ static void ssd_commit_flushed(struct dm_writecache *wc, bool wait_for_ios)
 
 		region.sector += wc->start_sector;
 		atomic_inc(&endio.count);
-		req.bi_op = REQ_OP_WRITE;
-		req.bi_op_flags = REQ_SYNC;
+		req.bi_opf = REQ_OP_WRITE | REQ_SYNC;
 		req.mem.type = DM_IO_VMA;
 		req.mem.ptr.vma = (char *)wc->memory_map + (size_t)i * BITMAP_GRANULARITY;
 		req.client = wc->dm_io;
@@ -562,8 +561,7 @@ static void ssd_commit_superblock(struct dm_writecache *wc)
 
 	region.sector += wc->start_sector;
 
-	req.bi_op = REQ_OP_WRITE;
-	req.bi_op_flags = REQ_SYNC | REQ_FUA;
+	req.bi_opf = REQ_OP_WRITE | REQ_SYNC | REQ_FUA;
 	req.mem.type = DM_IO_VMA;
 	req.mem.ptr.vma = (char *)wc->memory_map;
 	req.client = wc->dm_io;
@@ -592,8 +590,7 @@ static void writecache_disk_flush(struct dm_writecache *wc, struct dm_dev *dev)
 	region.bdev = dev->bdev;
 	region.sector = 0;
 	region.count = 0;
-	req.bi_op = REQ_OP_WRITE;
-	req.bi_op_flags = REQ_PREFLUSH;
+	req.bi_opf = REQ_OP_WRITE | REQ_PREFLUSH;
 	req.mem.type = DM_IO_KMEM;
 	req.mem.ptr.addr = NULL;
 	req.client = wc->dm_io;
@@ -981,8 +978,7 @@ static int writecache_read_metadata(struct dm_writecache *wc, sector_t n_sectors
 	region.bdev = wc->ssd_dev->bdev;
 	region.sector = wc->start_sector;
 	region.count = n_sectors;
-	req.bi_op = REQ_OP_READ;
-	req.bi_op_flags = REQ_SYNC;
+	req.bi_opf = REQ_OP_READ | REQ_SYNC;
 	req.mem.type = DM_IO_VMA;
 	req.mem.ptr.vma = (char *)wc->memory_map;
 	req.client = wc->dm_io;
diff --git a/include/linux/dm-io.h b/include/linux/dm-io.h
index a52c6580cc9a..8e1c4ab5df04 100644
--- a/include/linux/dm-io.h
+++ b/include/linux/dm-io.h
@@ -13,6 +13,7 @@
 #ifdef __KERNEL__
 
 #include <linux/types.h>
+#include <linux/blk_types.h>
 
 struct dm_io_region {
 	struct block_device *bdev;
@@ -57,8 +58,7 @@ struct dm_io_notify {
  */
 struct dm_io_client;
 struct dm_io_request {
-	int bi_op;			/* REQ_OP */
-	int bi_op_flags;		/* req_flag_bits */
+	blk_opf_t	    bi_opf;	/* Request type and flags */
 	struct dm_io_memory mem;	/* Memory to use for io */
 	struct dm_io_notify notify;	/* Synchronous if notify.fn is NULL */
 	struct dm_io_client *client;	/* Client memory handler */


* [PATCH v2 22/63] dm/core: Rename kcopyd_job.rw into kcopyd.op
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (20 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 21/63] dm/core: Reduce the size of struct dm_io_request Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 23/63] dm/core: Combine request operation type and flags Bart Van Assche
                   ` (41 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Alasdair Kergon,
	Mike Snitzer

The member name 'rw' suggests that this member only ever holds the value
'READ' or 'WRITE'. Since it can also hold the value REQ_OP_WRITE_ZEROES,
rename 'rw' to 'op'. This patch does not change any functionality since
REQ_OP_READ = READ = 0 and REQ_OP_WRITE = WRITE = 1.
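
A trivial standalone check of that last claim (the constant values are
assumed to match the kernel's READ/WRITE and enum req_op definitions, and
the __bitwise annotation is omitted, so treat this as a sketch):

    #include <assert.h>

    #define READ  0   /* assumed: matches the kernel's READ/WRITE */
    #define WRITE 1
    enum req_op { REQ_OP_READ = 0, REQ_OP_WRITE = 1 };  /* simplified */

    int main(void)
    {
            /* The rename is a no-op at run time: the old and new
             * constants compare equal, so op_is_write()-style tests
             * take the same branches as before. */
            assert(READ == REQ_OP_READ);
            assert(WRITE == REQ_OP_WRITE);
            return 0;
    }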

Cc: Alasdair Kergon <agk@redhat.com>
Cc: Mike Snitzer <snitzer@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/md/dm-kcopyd.c | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/drivers/md/dm-kcopyd.c b/drivers/md/dm-kcopyd.c
index a99b994e2b62..9c8f3544e99d 100644
--- a/drivers/md/dm-kcopyd.c
+++ b/drivers/md/dm-kcopyd.c
@@ -350,9 +350,9 @@ struct kcopyd_job {
 	unsigned long write_err;
 
 	/*
-	 * Either READ or WRITE
+	 * REQ_OP_READ, REQ_OP_WRITE or REQ_OP_WRITE_ZEROES.
 	 */
-	int rw;
+	enum req_op op;
 	struct dm_io_region source;
 
 	/*
@@ -418,7 +418,8 @@ static struct kcopyd_job *pop_io_job(struct list_head *jobs,
 	 * constraint and sequential writes that are at the right position.
 	 */
 	list_for_each_entry(job, jobs, list) {
-		if (job->rw == READ || !(job->flags & BIT(DM_KCOPYD_WRITE_SEQ))) {
+		if (job->op == REQ_OP_READ ||
+		    !(job->flags & BIT(DM_KCOPYD_WRITE_SEQ))) {
 			list_del(&job->list);
 			return job;
 		}
@@ -518,7 +519,7 @@ static void complete_io(unsigned long error, void *context)
 	io_job_finish(kc->throttle);
 
 	if (error) {
-		if (op_is_write(job->rw))
+		if (op_is_write(job->op))
 			job->write_err |= error;
 		else
 			job->read_err = 1;
@@ -530,11 +531,11 @@ static void complete_io(unsigned long error, void *context)
 		}
 	}
 
-	if (op_is_write(job->rw))
+	if (op_is_write(job->op))
 		push(&kc->complete_jobs, job);
 
 	else {
-		job->rw = WRITE;
+		job->op = REQ_OP_WRITE;
 		push(&kc->io_jobs, job);
 	}
 
@@ -549,7 +550,7 @@ static int run_io_job(struct kcopyd_job *job)
 {
 	int r;
 	struct dm_io_request io_req = {
-		.bi_opf = job->rw,
+		.bi_opf = job->op,
 		.mem.type = DM_IO_PAGE_LIST,
 		.mem.ptr.pl = job->pages,
 		.mem.offset = 0,
@@ -570,7 +571,7 @@ static int run_io_job(struct kcopyd_job *job)
 
 	io_job_start(job->kc->throttle);
 
-	if (job->rw == READ)
+	if (job->op == REQ_OP_READ)
 		r = dm_io(&io_req, 1, &job->source, NULL);
 	else
 		r = dm_io(&io_req, job->num_dests, job->dests, NULL);
@@ -613,7 +614,7 @@ static int process_jobs(struct list_head *jobs, struct dm_kcopyd_client *kc,
 
 		if (r < 0) {
 			/* error this rogue job */
-			if (op_is_write(job->rw))
+			if (op_is_write(job->op))
 				job->write_err = (unsigned long) -1L;
 			else
 				job->read_err = 1;
@@ -816,7 +817,7 @@ void dm_kcopyd_copy(struct dm_kcopyd_client *kc, struct dm_io_region *from,
 	if (from) {
 		job->source = *from;
 		job->pages = NULL;
-		job->rw = READ;
+		job->op = REQ_OP_READ;
 	} else {
 		memset(&job->source, 0, sizeof job->source);
 		job->source.count = job->dests[0].count;
@@ -825,10 +826,10 @@ void dm_kcopyd_copy(struct dm_kcopyd_client *kc, struct dm_io_region *from,
 		/*
 		 * Use WRITE ZEROES to optimize zeroing if all dests support it.
 		 */
-		job->rw = REQ_OP_WRITE_ZEROES;
+		job->op = REQ_OP_WRITE_ZEROES;
 		for (i = 0; i < job->num_dests; i++)
 			if (!bdev_write_zeroes_sectors(job->dests[i].bdev)) {
-				job->rw = WRITE;
+				job->op = REQ_OP_WRITE;
 				break;
 			}
 	}


* [PATCH v2 23/63] dm/core: Combine request operation type and flags
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (21 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 22/63] dm/core: Rename kcopyd_job.rw into kcopyd.op Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 24/63] dm/ebs: Change 'int rw' into 'enum req_op op' Bart Van Assche
                   ` (40 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Alasdair Kergon,
	Mike Snitzer

Improve kernel code uniformity by combining the request operation type and
flags into a single variable. Change 'int rw' into 'enum req_op op' because
'op' is the name the block layer uses for a request operation. Use the
blk_opf_t and enum req_op types where appropriate to improve static type
checking.
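
As a condensed sketch of the resulting convention (kernel build context
assumed; it only restates what do_region() does in the hunk below):

    #include <linux/blk_types.h>

    /* Illustrative only; unused in a real build. */
    static void dm_io_opf_sketch(void)
    {
            /* Callers hand dm_io() one combined value ... */
            const blk_opf_t opf = REQ_OP_WRITE | REQ_PREFLUSH | REQ_SYNC;

            /* ... and dm-io splits it only where the bare op is needed. */
            const enum req_op op = opf & REQ_OP_MASK;     /* REQ_OP_WRITE */
            const blk_opf_t flags = opf & ~REQ_OP_MASK;   /* REQ_PREFLUSH | REQ_SYNC */

            (void)op;
            (void)flags;
    }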

Cc: Alasdair Kergon <agk@redhat.com>
Cc: Mike Snitzer <snitzer@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/md/dm-bufio.c | 19 ++++++++++---------
 drivers/md/dm-io.c    | 36 +++++++++++++++++-------------------
 drivers/md/dm.c       | 10 +++++-----
 3 files changed, 32 insertions(+), 33 deletions(-)

diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
index 1b7acda45c78..dc01ce33265b 100644
--- a/drivers/md/dm-bufio.c
+++ b/drivers/md/dm-bufio.c
@@ -577,12 +577,12 @@ static void dmio_complete(unsigned long error, void *context)
 	b->end_io(b, unlikely(error != 0) ? BLK_STS_IOERR : 0);
 }
 
-static void use_dmio(struct dm_buffer *b, int rw, sector_t sector,
+static void use_dmio(struct dm_buffer *b, enum req_op op, sector_t sector,
 		     unsigned n_sectors, unsigned offset)
 {
 	int r;
 	struct dm_io_request io_req = {
-		.bi_opf = rw,
+		.bi_opf = op,
 		.notify.fn = dmio_complete,
 		.notify.context = b,
 		.client = b->c->dm_io,
@@ -615,7 +615,7 @@ static void bio_complete(struct bio *bio)
 	b->end_io(b, status);
 }
 
-static void use_bio(struct dm_buffer *b, int rw, sector_t sector,
+static void use_bio(struct dm_buffer *b, enum req_op op, sector_t sector,
 		    unsigned n_sectors, unsigned offset)
 {
 	struct bio *bio;
@@ -629,10 +629,10 @@ static void use_bio(struct dm_buffer *b, int rw, sector_t sector,
 	bio = bio_kmalloc(vec_size, GFP_NOWAIT | __GFP_NORETRY | __GFP_NOWARN);
 	if (!bio) {
 dmio:
-		use_dmio(b, rw, sector, n_sectors, offset);
+		use_dmio(b, op, sector, n_sectors, offset);
 		return;
 	}
-	bio_init(bio, b->c->bdev, bio->bi_inline_vecs, vec_size, rw);
+	bio_init(bio, b->c->bdev, bio->bi_inline_vecs, vec_size, op);
 	bio->bi_iter.bi_sector = sector;
 	bio->bi_end_io = bio_complete;
 	bio->bi_private = b;
@@ -668,7 +668,8 @@ static inline sector_t block_to_sector(struct dm_bufio_client *c, sector_t block
 	return sector;
 }
 
-static void submit_io(struct dm_buffer *b, int rw, void (*end_io)(struct dm_buffer *, blk_status_t))
+static void submit_io(struct dm_buffer *b, enum req_op op,
+		      void (*end_io)(struct dm_buffer *, blk_status_t))
 {
 	unsigned n_sectors;
 	sector_t sector;
@@ -678,7 +679,7 @@ static void submit_io(struct dm_buffer *b, int rw, void (*end_io)(struct dm_buff
 
 	sector = block_to_sector(b->c, b->block);
 
-	if (rw != REQ_OP_WRITE) {
+	if (op != REQ_OP_WRITE) {
 		n_sectors = b->c->block_size >> SECTOR_SHIFT;
 		offset = 0;
 	} else {
@@ -697,9 +698,9 @@ static void submit_io(struct dm_buffer *b, int rw, void (*end_io)(struct dm_buff
 	}
 
 	if (b->data_mode != DATA_MODE_VMALLOC)
-		use_bio(b, rw, sector, n_sectors, offset);
+		use_bio(b, op, sector, n_sectors, offset);
 	else
-		use_dmio(b, rw, sector, n_sectors, offset);
+		use_dmio(b, op, sector, n_sectors, offset);
 }
 
 /*----------------------------------------------------------------
diff --git a/drivers/md/dm-io.c b/drivers/md/dm-io.c
index 0606e00d1817..783564533459 100644
--- a/drivers/md/dm-io.c
+++ b/drivers/md/dm-io.c
@@ -293,7 +293,7 @@ static void km_dp_init(struct dpages *dp, void *data)
 /*-----------------------------------------------------------------
  * IO routines that accept a list of pages.
  *---------------------------------------------------------------*/
-static void do_region(int op, int op_flags, unsigned region,
+static void do_region(const blk_opf_t opf, unsigned region,
 		      struct dm_io_region *where, struct dpages *dp,
 		      struct io *io)
 {
@@ -306,6 +306,7 @@ static void do_region(int op, int op_flags, unsigned region,
 	struct request_queue *q = bdev_get_queue(where->bdev);
 	sector_t num_sectors;
 	unsigned int special_cmd_max_sectors;
+	const enum req_op op = opf & REQ_OP_MASK;
 
 	/*
 	 * Reject unsupported discard and write same requests.
@@ -339,8 +340,8 @@ static void do_region(int op, int op_flags, unsigned region,
 						(PAGE_SIZE >> SECTOR_SHIFT)));
 		}
 
-		bio = bio_alloc_bioset(where->bdev, num_bvecs, op | op_flags,
-				       GFP_NOIO, &io->client->bios);
+		bio = bio_alloc_bioset(where->bdev, num_bvecs, opf, GFP_NOIO,
+				       &io->client->bios);
 		bio->bi_iter.bi_sector = where->sector + (where->count - remaining);
 		bio->bi_end_io = endio;
 		store_io_and_region_in_bio(bio, io, region);
@@ -368,7 +369,7 @@ static void do_region(int op, int op_flags, unsigned region,
 	} while (remaining);
 }
 
-static void dispatch_io(int op, int op_flags, unsigned int num_regions,
+static void dispatch_io(blk_opf_t opf, unsigned int num_regions,
 			struct dm_io_region *where, struct dpages *dp,
 			struct io *io, int sync)
 {
@@ -378,7 +379,7 @@ static void dispatch_io(int op, int op_flags, unsigned int num_regions,
 	BUG_ON(num_regions > DM_IO_MAX_REGIONS);
 
 	if (sync)
-		op_flags |= REQ_SYNC;
+		opf |= REQ_SYNC;
 
 	/*
 	 * For multiple regions we need to be careful to rewind
@@ -386,8 +387,8 @@ static void dispatch_io(int op, int op_flags, unsigned int num_regions,
 	 */
 	for (i = 0; i < num_regions; i++) {
 		*dp = old_pages;
-		if (where[i].count || (op_flags & REQ_PREFLUSH))
-			do_region(op, op_flags, i, where + i, dp, io);
+		if (where[i].count || (opf & REQ_PREFLUSH))
+			do_region(opf, i, where + i, dp, io);
 	}
 
 	/*
@@ -411,13 +412,13 @@ static void sync_io_complete(unsigned long error, void *context)
 }
 
 static int sync_io(struct dm_io_client *client, unsigned int num_regions,
-		   struct dm_io_region *where, int op, int op_flags,
-		   struct dpages *dp, unsigned long *error_bits)
+		   struct dm_io_region *where, blk_opf_t opf, struct dpages *dp,
+		   unsigned long *error_bits)
 {
 	struct io *io;
 	struct sync_io sio;
 
-	if (num_regions > 1 && !op_is_write(op)) {
+	if (num_regions > 1 && !op_is_write(opf)) {
 		WARN_ON(1);
 		return -EIO;
 	}
@@ -434,7 +435,7 @@ static int sync_io(struct dm_io_client *client, unsigned int num_regions,
 	io->vma_invalidate_address = dp->vma_invalidate_address;
 	io->vma_invalidate_size = dp->vma_invalidate_size;
 
-	dispatch_io(op, op_flags, num_regions, where, dp, io, 1);
+	dispatch_io(opf, num_regions, where, dp, io, 1);
 
 	wait_for_completion_io(&sio.wait);
 
@@ -445,12 +446,12 @@ static int sync_io(struct dm_io_client *client, unsigned int num_regions,
 }
 
 static int async_io(struct dm_io_client *client, unsigned int num_regions,
-		    struct dm_io_region *where, int op, int op_flags,
+		    struct dm_io_region *where, blk_opf_t opf,
 		    struct dpages *dp, io_notify_fn fn, void *context)
 {
 	struct io *io;
 
-	if (num_regions > 1 && !op_is_write(op)) {
+	if (num_regions > 1 && !op_is_write(opf)) {
 		WARN_ON(1);
 		fn(1, context);
 		return -EIO;
@@ -466,7 +467,7 @@ static int async_io(struct dm_io_client *client, unsigned int num_regions,
 	io->vma_invalidate_address = dp->vma_invalidate_address;
 	io->vma_invalidate_size = dp->vma_invalidate_size;
 
-	dispatch_io(op, op_flags, num_regions, where, dp, io, 0);
+	dispatch_io(opf, num_regions, where, dp, io, 0);
 	return 0;
 }
 
@@ -519,13 +520,10 @@ int dm_io(struct dm_io_request *io_req, unsigned num_regions,
 
 	if (!io_req->notify.fn)
 		return sync_io(io_req->client, num_regions, where,
-			       io_req->bi_opf & REQ_OP_MASK,
-			       io_req->bi_opf & ~REQ_OP_MASK, &dp,
-			       sync_error_bits);
+			       io_req->bi_opf, &dp, sync_error_bits);
 
 	return async_io(io_req->client, num_regions, where,
-			io_req->bi_opf & REQ_OP_MASK,
-			io_req->bi_opf & ~REQ_OP_MASK, &dp, io_req->notify.fn,
+			io_req->bi_opf, &dp, io_req->notify.fn,
 			io_req->notify.context);
 }
 EXPORT_SYMBOL(dm_io);
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index c6e004157da3..438b8791265c 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -716,7 +716,7 @@ static void dm_put_live_table_fast(struct mapped_device *md) __releases(RCU)
 }
 
 static inline struct dm_table *dm_get_live_table_bio(struct mapped_device *md,
-						     int *srcu_idx, unsigned bio_opf)
+					int *srcu_idx, blk_opf_t bio_opf)
 {
 	if (bio_opf & REQ_NOWAIT)
 		return dm_get_live_table_fast(md);
@@ -725,7 +725,7 @@ static inline struct dm_table *dm_get_live_table_bio(struct mapped_device *md,
 }
 
 static inline void dm_put_live_table_bio(struct mapped_device *md, int srcu_idx,
-					 unsigned bio_opf)
+					 blk_opf_t bio_opf)
 {
 	if (bio_opf & REQ_NOWAIT)
 		dm_put_live_table_fast(md);
@@ -1511,7 +1511,7 @@ static void __send_changing_extent_only(struct clone_info *ci, struct dm_target
 
 static bool is_abnormal_io(struct bio *bio)
 {
-	unsigned int op = bio_op(bio);
+	enum req_op op = bio_op(bio);
 
 	if (op != REQ_OP_READ && op != REQ_OP_WRITE && op != REQ_OP_FLUSH) {
 		switch (op) {
@@ -1625,7 +1625,7 @@ static blk_status_t __split_and_process_bio(struct clone_info *ci)
 	 * Only support bio polling for normal IO, and the target io is
 	 * exactly inside the dm_io instance (verified in dm_poll_dm_io)
 	 */
-	ci->submit_as_polled = ci->bio->bi_opf & REQ_POLLED;
+	ci->submit_as_polled = !!(ci->bio->bi_opf & REQ_POLLED);
 
 	len = min_t(sector_t, max_io_len(ti, ci->sector), ci->sector_count);
 	setup_split_accounting(ci, len);
@@ -1722,7 +1722,7 @@ static void dm_submit_bio(struct bio *bio)
 	struct mapped_device *md = bio->bi_bdev->bd_disk->private_data;
 	int srcu_idx;
 	struct dm_table *map;
-	unsigned bio_opf = bio->bi_opf;
+	blk_opf_t bio_opf = bio->bi_opf;
 
 	map = dm_get_live_table_bio(md, &srcu_idx, bio_opf);
 


* [PATCH v2 24/63] dm/ebs: Change 'int rw' into 'enum req_op op'
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (22 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 23/63] dm/core: Combine request operation type and flags Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 25/63] dm/dm-flakey: Use the new blk_opf_t type Bart Van Assche
                   ` (39 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Alasdair Kergon,
	Mike Snitzer, Heinz Mauelshagen

Improve static type checking by using the type 'enum req_op' instead of
'int'. Make the role of the 'rw' arguments clearer by renaming them to 'op'
(operation). This patch does not change any functionality since
REQ_OP_READ = READ = 0 and REQ_OP_WRITE = WRITE = 1.

Cc: Alasdair Kergon <agk@redhat.com>
Cc: Mike Snitzer <snitzer@kernel.org>
Cc: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/md/dm-ebs-target.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/md/dm-ebs-target.c b/drivers/md/dm-ebs-target.c
index 0221fa63f888..223e8e1a7a13 100644
--- a/drivers/md/dm-ebs-target.c
+++ b/drivers/md/dm-ebs-target.c
@@ -61,7 +61,8 @@ static inline bool __ebs_check_bs(unsigned int bs)
  *
  * copy blocks between bufio blocks and bio vector's (partial/overlapping) pages.
  */
-static int __ebs_rw_bvec(struct ebs_c *ec, int rw, struct bio_vec *bv, struct bvec_iter *iter)
+static int __ebs_rw_bvec(struct ebs_c *ec, enum req_op op, struct bio_vec *bv,
+			 struct bvec_iter *iter)
 {
 	int r = 0;
 	unsigned char *ba, *pa;
@@ -81,7 +82,7 @@ static int __ebs_rw_bvec(struct ebs_c *ec, int rw, struct bio_vec *bv, struct bv
 		cur_len = min(dm_bufio_get_block_size(ec->bufio) - buf_off, bv_len);
 
 		/* Avoid reading for writes in case bio vector's page overwrites block completely. */
-		if (rw == READ || buf_off || bv_len < dm_bufio_get_block_size(ec->bufio))
+		if (op == REQ_OP_READ || buf_off || bv_len < dm_bufio_get_block_size(ec->bufio))
 			ba = dm_bufio_read(ec->bufio, block, &b);
 		else
 			ba = dm_bufio_new(ec->bufio, block, &b);
@@ -95,7 +96,7 @@ static int __ebs_rw_bvec(struct ebs_c *ec, int rw, struct bio_vec *bv, struct bv
 		} else {
 			/* Copy data to/from bio to buffer if read/new was successful above. */
 			ba += buf_off;
-			if (rw == READ) {
+			if (op == REQ_OP_READ) {
 				memcpy(pa, ba, cur_len);
 				flush_dcache_page(bv->bv_page);
 			} else {
@@ -117,14 +118,14 @@ static int __ebs_rw_bvec(struct ebs_c *ec, int rw, struct bio_vec *bv, struct bv
 }
 
 /* READ/WRITE: iterate bio vector's copying between (partial) pages and bufio blocks. */
-static int __ebs_rw_bio(struct ebs_c *ec, int rw, struct bio *bio)
+static int __ebs_rw_bio(struct ebs_c *ec, enum req_op op, struct bio *bio)
 {
 	int r = 0, rr;
 	struct bio_vec bv;
 	struct bvec_iter iter;
 
 	bio_for_each_bvec(bv, bio, iter) {
-		rr = __ebs_rw_bvec(ec, rw, &bv, &iter);
+		rr = __ebs_rw_bvec(ec, op, &bv, &iter);
 		if (rr)
 			r = rr;
 	}
@@ -205,10 +206,10 @@ static void __ebs_process_bios(struct work_struct *ws)
 	bio_list_for_each(bio, &bios) {
 		r = -EIO;
 		if (bio_op(bio) == REQ_OP_READ)
-			r = __ebs_rw_bio(ec, READ, bio);
+			r = __ebs_rw_bio(ec, REQ_OP_READ, bio);
 		else if (bio_op(bio) == REQ_OP_WRITE) {
 			write = true;
-			r = __ebs_rw_bio(ec, WRITE, bio);
+			r = __ebs_rw_bio(ec, REQ_OP_WRITE, bio);
 		} else if (bio_op(bio) == REQ_OP_DISCARD) {
 			__ebs_forget_bio(ec, bio);
 			r = __ebs_discard_bio(ec, bio);


* [PATCH v2 25/63] dm/dm-flakey: Use the new blk_opf_t type
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (23 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 24/63] dm/ebs: Change 'int rw' into 'enum req_op op' Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 26/63] dm/dm-integrity: Combine request operation and flags Bart Van Assche
                   ` (38 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Alasdair Kergon,
	Mike Snitzer, Josef Bacik

Use the new blk_opf_t type for structure members that represent request
flags.
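
The only subtlety is that dm_read_arg() writes through a plain
'unsigned int *', so filling the __bitwise member needs a size check plus a
__force cast. A stripped-down sketch of that pattern (the helper name is
hypothetical, not something this patch adds):

    #include <linux/blk_types.h>
    #include <linux/build_bug.h>

    /* Hypothetical helper: store a plain integer into a __bitwise
     * blk_opf_t field without triggering a sparse warning. */
    static void set_opf_from_uint(blk_opf_t *dst, unsigned int raw)
    {
            BUILD_BUG_ON(sizeof(*dst) != sizeof(unsigned int));
            *(__force unsigned int *)dst = raw;
    }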

Cc: Alasdair Kergon <agk@redhat.com>
Cc: Mike Snitzer <snitzer@kernel.org>
Cc: Josef Bacik <josef@redhat.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/md/dm-flakey.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/md/dm-flakey.c b/drivers/md/dm-flakey.c
index f2305eb758a2..89fa7a68c6c4 100644
--- a/drivers/md/dm-flakey.c
+++ b/drivers/md/dm-flakey.c
@@ -32,7 +32,7 @@ struct flakey_c {
 	unsigned corrupt_bio_byte;
 	unsigned corrupt_bio_rw;
 	unsigned corrupt_bio_value;
-	unsigned corrupt_bio_flags;
+	blk_opf_t corrupt_bio_flags;
 };
 
 enum feature_flag_bits {
@@ -145,7 +145,11 @@ static int parse_features(struct dm_arg_set *as, struct flakey_c *fc,
 			/*
 			 * Only corrupt bios with these flags set.
 			 */
-			r = dm_read_arg(_args + 3, as, &fc->corrupt_bio_flags, &ti->error);
+			BUILD_BUG_ON(sizeof(fc->corrupt_bio_flags) !=
+				     sizeof(unsigned int));
+			r = dm_read_arg(_args + 3, as,
+				(__force unsigned *)&fc->corrupt_bio_flags,
+				&ti->error);
 			if (r)
 				return r;
 			argc--;


* [PATCH v2 26/63] dm/dm-integrity: Combine request operation and flags
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (24 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 25/63] dm/dm-flakey: Use the new blk_opf_t type Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 27/63] dm mirror log: Use the new blk_opf_t type Bart Van Assche
                   ` (37 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Alasdair Kergon,
	Mike Snitzer, Eric Biggers

Combine the request operation type and request flags into a single
argument. Improve static type checking by using the enum req_op type for
variables that represent a request operation and the new blk_opf_t type for
variables that represent request flags.
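
Combining the two values is lossless because the operation occupies the low
REQ_OP_BITS bits and every request flag sits above them. A standalone sanity
check of that assumption (flag bit positions are stand-ins, so treat this as
a sketch):

    #include <assert.h>

    #define REQ_OP_BITS 8                       /* as in include/linux/blk_types.h */
    #define REQ_OP_MASK ((1u << REQ_OP_BITS) - 1)

    int main(void)
    {
            unsigned int op = 1;                           /* stand-in for REQ_OP_WRITE */
            unsigned int flags = (1u << 11) | (1u << 17);  /* stand-ins for two req_flag_bits */
            unsigned int opf = op | flags;

            /* Splitting with REQ_OP_MASK recovers both halves exactly. */
            assert((opf & REQ_OP_MASK) == op);
            assert((opf & ~REQ_OP_MASK) == flags);
            return 0;
    }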

Cc: Alasdair Kergon <agk@redhat.com>
Cc: Mike Snitzer <snitzer@kernel.org>
Cc: Eric Biggers <ebiggers@google.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/md/dm-integrity.c | 63 +++++++++++++++++++++------------------
 1 file changed, 34 insertions(+), 29 deletions(-)

diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
index 2ccc103dea1e..c60f9b2ece2d 100644
--- a/drivers/md/dm-integrity.c
+++ b/drivers/md/dm-integrity.c
@@ -551,13 +551,14 @@ static int sb_mac(struct dm_integrity_c *ic, bool wr)
 	return 0;
 }
 
-static int sync_rw_sb(struct dm_integrity_c *ic, int op, int op_flags)
+static int sync_rw_sb(struct dm_integrity_c *ic, blk_opf_t opf)
 {
 	struct dm_io_request io_req;
 	struct dm_io_region io_loc;
+	const enum req_op op = opf & REQ_OP_MASK;
 	int r;
 
-	io_req.bi_opf = op | op_flags;
+	io_req.bi_opf = opf;
 	io_req.mem.type = DM_IO_KMEM;
 	io_req.mem.ptr.addr = ic->sb;
 	io_req.notify.fn = NULL;
@@ -1049,8 +1050,9 @@ static void complete_journal_io(unsigned long error, void *context)
 	complete_journal_op(comp);
 }
 
-static void rw_journal_sectors(struct dm_integrity_c *ic, int op, int op_flags,
-			       unsigned sector, unsigned n_sectors, struct journal_completion *comp)
+static void rw_journal_sectors(struct dm_integrity_c *ic, blk_opf_t opf,
+			       unsigned sector, unsigned n_sectors,
+			       struct journal_completion *comp)
 {
 	struct dm_io_request io_req;
 	struct dm_io_region io_loc;
@@ -1066,7 +1068,7 @@ static void rw_journal_sectors(struct dm_integrity_c *ic, int op, int op_flags,
 	pl_index = sector >> (PAGE_SHIFT - SECTOR_SHIFT);
 	pl_offset = (sector << SECTOR_SHIFT) & (PAGE_SIZE - 1);
 
-	io_req.bi_opf = op | op_flags;
+	io_req.bi_opf = opf;
 	io_req.mem.type = DM_IO_PAGE_LIST;
 	if (ic->journal_io)
 		io_req.mem.ptr.pl = &ic->journal_io[pl_index];
@@ -1086,7 +1088,8 @@ static void rw_journal_sectors(struct dm_integrity_c *ic, int op, int op_flags,
 
 	r = dm_io(&io_req, 1, &io_loc, NULL);
 	if (unlikely(r)) {
-		dm_integrity_io_error(ic, op == REQ_OP_READ ? "reading journal" : "writing journal", r);
+		dm_integrity_io_error(ic, (opf & REQ_OP_MASK) == REQ_OP_READ ?
+				      "reading journal" : "writing journal", r);
 		if (comp) {
 			WARN_ONCE(1, "asynchronous dm_io failed: %d", r);
 			complete_journal_io(-1UL, comp);
@@ -1094,15 +1097,16 @@ static void rw_journal_sectors(struct dm_integrity_c *ic, int op, int op_flags,
 	}
 }
 
-static void rw_journal(struct dm_integrity_c *ic, int op, int op_flags, unsigned section,
-		       unsigned n_sections, struct journal_completion *comp)
+static void rw_journal(struct dm_integrity_c *ic, blk_opf_t opf,
+		       unsigned section, unsigned n_sections,
+		       struct journal_completion *comp)
 {
 	unsigned sector, n_sectors;
 
 	sector = section * ic->journal_section_sectors;
 	n_sectors = n_sections * ic->journal_section_sectors;
 
-	rw_journal_sectors(ic, op, op_flags, sector, n_sectors, comp);
+	rw_journal_sectors(ic, opf, sector, n_sectors, comp);
 }
 
 static void write_journal(struct dm_integrity_c *ic, unsigned commit_start, unsigned commit_sections)
@@ -1127,7 +1131,7 @@ static void write_journal(struct dm_integrity_c *ic, unsigned commit_start, unsi
 			for (i = 0; i < commit_sections; i++)
 				rw_section_mac(ic, commit_start + i, true);
 		}
-		rw_journal(ic, REQ_OP_WRITE, REQ_FUA | REQ_SYNC, commit_start,
+		rw_journal(ic, REQ_OP_WRITE | REQ_FUA | REQ_SYNC, commit_start,
 			   commit_sections, &io_comp);
 	} else {
 		unsigned to_end;
@@ -1139,7 +1143,8 @@ static void write_journal(struct dm_integrity_c *ic, unsigned commit_start, unsi
 			crypt_comp_1.in_flight = (atomic_t)ATOMIC_INIT(0);
 			encrypt_journal(ic, true, commit_start, to_end, &crypt_comp_1);
 			if (try_wait_for_completion(&crypt_comp_1.comp)) {
-				rw_journal(ic, REQ_OP_WRITE, REQ_FUA, commit_start, to_end, &io_comp);
+				rw_journal(ic, REQ_OP_WRITE | REQ_FUA,
+					   commit_start, to_end, &io_comp);
 				reinit_completion(&crypt_comp_1.comp);
 				crypt_comp_1.in_flight = (atomic_t)ATOMIC_INIT(0);
 				encrypt_journal(ic, true, 0, commit_sections - to_end, &crypt_comp_1);
@@ -1150,17 +1155,17 @@ static void write_journal(struct dm_integrity_c *ic, unsigned commit_start, unsi
 				crypt_comp_2.in_flight = (atomic_t)ATOMIC_INIT(0);
 				encrypt_journal(ic, true, 0, commit_sections - to_end, &crypt_comp_2);
 				wait_for_completion_io(&crypt_comp_1.comp);
-				rw_journal(ic, REQ_OP_WRITE, REQ_FUA, commit_start, to_end, &io_comp);
+				rw_journal(ic, REQ_OP_WRITE | REQ_FUA, commit_start, to_end, &io_comp);
 				wait_for_completion_io(&crypt_comp_2.comp);
 			}
 		} else {
 			for (i = 0; i < to_end; i++)
 				rw_section_mac(ic, commit_start + i, true);
-			rw_journal(ic, REQ_OP_WRITE, REQ_FUA, commit_start, to_end, &io_comp);
+			rw_journal(ic, REQ_OP_WRITE | REQ_FUA, commit_start, to_end, &io_comp);
 			for (i = 0; i < commit_sections - to_end; i++)
 				rw_section_mac(ic, i, true);
 		}
-		rw_journal(ic, REQ_OP_WRITE, REQ_FUA, 0, commit_sections - to_end, &io_comp);
+		rw_journal(ic, REQ_OP_WRITE | REQ_FUA, 0, commit_sections - to_end, &io_comp);
 	}
 
 	wait_for_completion_io(&io_comp.comp);
@@ -2622,7 +2627,7 @@ static void recalc_write_super(struct dm_integrity_c *ic)
 	if (dm_integrity_failed(ic))
 		return;
 
-	r = sync_rw_sb(ic, REQ_OP_WRITE, 0);
+	r = sync_rw_sb(ic, REQ_OP_WRITE);
 	if (unlikely(r))
 		dm_integrity_io_error(ic, "writing superblock", r);
 }
@@ -2795,7 +2800,7 @@ static void bitmap_block_work(struct work_struct *w)
 	if (bio_list_empty(&waiting))
 		return;
 
-	rw_journal_sectors(ic, REQ_OP_WRITE, REQ_FUA | REQ_SYNC,
+	rw_journal_sectors(ic, REQ_OP_WRITE | REQ_FUA | REQ_SYNC,
 			   bbs->idx * (BITMAP_BLOCK_SIZE >> SECTOR_SHIFT),
 			   BITMAP_BLOCK_SIZE >> SECTOR_SHIFT, NULL);
 
@@ -2841,7 +2846,7 @@ static void bitmap_flush_work(struct work_struct *work)
 	block_bitmap_op(ic, ic->journal, 0, limit, BITMAP_OP_CLEAR);
 	block_bitmap_op(ic, ic->may_write_bitmap, 0, limit, BITMAP_OP_CLEAR);
 
-	rw_journal_sectors(ic, REQ_OP_WRITE, REQ_FUA | REQ_SYNC, 0,
+	rw_journal_sectors(ic, REQ_OP_WRITE | REQ_FUA | REQ_SYNC, 0,
 			   ic->n_bitmap_blocks * (BITMAP_BLOCK_SIZE >> SECTOR_SHIFT), NULL);
 
 	spin_lock_irq(&ic->endio_wait.lock);
@@ -2913,7 +2918,7 @@ static void replay_journal(struct dm_integrity_c *ic)
 
 	if (!ic->just_formatted) {
 		DEBUG_print("reading journal\n");
-		rw_journal(ic, REQ_OP_READ, 0, 0, ic->journal_sections, NULL);
+		rw_journal(ic, REQ_OP_READ, 0, ic->journal_sections, NULL);
 		if (ic->journal_io)
 			DEBUG_bytes(lowmem_page_address(ic->journal_io[0].page), 64, "read journal");
 		if (ic->journal_io) {
@@ -3108,7 +3113,7 @@ static void dm_integrity_postsuspend(struct dm_target *ti)
 		/* set to 0 to test bitmap replay code */
 		init_journal(ic, 0, ic->journal_sections, 0);
 		ic->sb->flags &= ~cpu_to_le32(SB_FLAG_DIRTY_BITMAP);
-		r = sync_rw_sb(ic, REQ_OP_WRITE, REQ_FUA);
+		r = sync_rw_sb(ic, REQ_OP_WRITE | REQ_FUA);
 		if (unlikely(r))
 			dm_integrity_io_error(ic, "writing superblock", r);
 #endif
@@ -3131,23 +3136,23 @@ static void dm_integrity_resume(struct dm_target *ti)
 		if (ic->provided_data_sectors > old_provided_data_sectors &&
 		    ic->mode == 'B' &&
 		    ic->sb->log2_blocks_per_bitmap_bit == ic->log2_blocks_per_bitmap_bit) {
-			rw_journal_sectors(ic, REQ_OP_READ, 0, 0,
+			rw_journal_sectors(ic, REQ_OP_READ, 0,
 					   ic->n_bitmap_blocks * (BITMAP_BLOCK_SIZE >> SECTOR_SHIFT), NULL);
 			block_bitmap_op(ic, ic->journal, old_provided_data_sectors,
 					ic->provided_data_sectors - old_provided_data_sectors, BITMAP_OP_SET);
-			rw_journal_sectors(ic, REQ_OP_WRITE, REQ_FUA | REQ_SYNC, 0,
+			rw_journal_sectors(ic, REQ_OP_WRITE | REQ_FUA | REQ_SYNC, 0,
 					   ic->n_bitmap_blocks * (BITMAP_BLOCK_SIZE >> SECTOR_SHIFT), NULL);
 		}
 
 		ic->sb->provided_data_sectors = cpu_to_le64(ic->provided_data_sectors);
-		r = sync_rw_sb(ic, REQ_OP_WRITE, REQ_FUA);
+		r = sync_rw_sb(ic, REQ_OP_WRITE | REQ_FUA);
 		if (unlikely(r))
 			dm_integrity_io_error(ic, "writing superblock", r);
 	}
 
 	if (ic->sb->flags & cpu_to_le32(SB_FLAG_DIRTY_BITMAP)) {
 		DEBUG_print("resume dirty_bitmap\n");
-		rw_journal_sectors(ic, REQ_OP_READ, 0, 0,
+		rw_journal_sectors(ic, REQ_OP_READ, 0,
 				   ic->n_bitmap_blocks * (BITMAP_BLOCK_SIZE >> SECTOR_SHIFT), NULL);
 		if (ic->mode == 'B') {
 			if (ic->sb->log2_blocks_per_bitmap_bit == ic->log2_blocks_per_bitmap_bit &&
@@ -3166,7 +3171,7 @@ static void dm_integrity_resume(struct dm_target *ti)
 				block_bitmap_op(ic, ic->recalc_bitmap, 0, ic->provided_data_sectors, BITMAP_OP_SET);
 				block_bitmap_op(ic, ic->may_write_bitmap, 0, ic->provided_data_sectors, BITMAP_OP_SET);
 				block_bitmap_op(ic, ic->journal, 0, ic->provided_data_sectors, BITMAP_OP_SET);
-				rw_journal_sectors(ic, REQ_OP_WRITE, REQ_FUA | REQ_SYNC, 0,
+				rw_journal_sectors(ic, REQ_OP_WRITE | REQ_FUA | REQ_SYNC, 0,
 						   ic->n_bitmap_blocks * (BITMAP_BLOCK_SIZE >> SECTOR_SHIFT), NULL);
 				ic->sb->flags |= cpu_to_le32(SB_FLAG_RECALCULATING);
 				ic->sb->recalc_sector = cpu_to_le64(0);
@@ -3182,7 +3187,7 @@ static void dm_integrity_resume(struct dm_target *ti)
 			replay_journal(ic);
 			ic->sb->flags &= ~cpu_to_le32(SB_FLAG_DIRTY_BITMAP);
 		}
-		r = sync_rw_sb(ic, REQ_OP_WRITE, REQ_FUA);
+		r = sync_rw_sb(ic, REQ_OP_WRITE | REQ_FUA);
 		if (unlikely(r))
 			dm_integrity_io_error(ic, "writing superblock", r);
 	} else {
@@ -3194,7 +3199,7 @@ static void dm_integrity_resume(struct dm_target *ti)
 		if (ic->mode == 'B') {
 			ic->sb->flags |= cpu_to_le32(SB_FLAG_DIRTY_BITMAP);
 			ic->sb->log2_blocks_per_bitmap_bit = ic->log2_blocks_per_bitmap_bit;
-			r = sync_rw_sb(ic, REQ_OP_WRITE, REQ_FUA);
+			r = sync_rw_sb(ic, REQ_OP_WRITE | REQ_FUA);
 			if (unlikely(r))
 				dm_integrity_io_error(ic, "writing superblock", r);
 
@@ -3210,7 +3215,7 @@ static void dm_integrity_resume(struct dm_target *ti)
 				block_bitmap_op(ic, ic->may_write_bitmap, le64_to_cpu(ic->sb->recalc_sector),
 						ic->provided_data_sectors - le64_to_cpu(ic->sb->recalc_sector), BITMAP_OP_SET);
 			}
-			rw_journal_sectors(ic, REQ_OP_WRITE, REQ_FUA | REQ_SYNC, 0,
+			rw_journal_sectors(ic, REQ_OP_WRITE | REQ_FUA | REQ_SYNC, 0,
 					   ic->n_bitmap_blocks * (BITMAP_BLOCK_SIZE >> SECTOR_SHIFT), NULL);
 		}
 	}
@@ -4251,7 +4256,7 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned argc, char **argv)
 		goto bad;
 	}
 
-	r = sync_rw_sb(ic, REQ_OP_READ, 0);
+	r = sync_rw_sb(ic, REQ_OP_READ);
 	if (r) {
 		ti->error = "Error reading superblock";
 		goto bad;
@@ -4495,7 +4500,7 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned argc, char **argv)
 			ti->error = "Error initializing journal";
 			goto bad;
 		}
-		r = sync_rw_sb(ic, REQ_OP_WRITE, REQ_FUA);
+		r = sync_rw_sb(ic, REQ_OP_WRITE | REQ_FUA);
 		if (r) {
 			ti->error = "Error initializing superblock";
 			goto bad;


* [PATCH v2 27/63] dm mirror log: Use the new blk_opf_t type
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (25 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 26/63] dm/dm-integrity: Combine request operation and flags Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 28/63] dm-snap: Combine request operation type and flags Bart Van Assche
                   ` (36 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Alasdair Kergon,
	Mike Snitzer, Mikulas Patocka

Improve static type checking by using the enum req_op type for the
rw_header() argument that represents a request operation.

Cc: Alasdair Kergon <agk@redhat.com>
Cc: Mike Snitzer <snitzer@kernel.org>
Cc: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/md/dm-log.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/md/dm-log.c b/drivers/md/dm-log.c
index 56ad13f9347b..cf10fa667797 100644
--- a/drivers/md/dm-log.c
+++ b/drivers/md/dm-log.c
@@ -291,7 +291,7 @@ static void header_from_disk(struct log_header_core *core, struct log_header_dis
 	core->nr_regions = le64_to_cpu(disk->nr_regions);
 }
 
-static int rw_header(struct log_c *lc, int op)
+static int rw_header(struct log_c *lc, enum req_op op)
 {
 	lc->io_req.bi_opf = op;
 


* [PATCH v2 28/63] dm-snap: Combine request operation type and flags
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (26 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 27/63] dm mirror log: Use the new blk_opf_t type Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 29/63] dm/zone: Use the enum req_op type Bart Van Assche
                   ` (35 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Alasdair Kergon,
	Mike Snitzer

Pass the request operation and its flags as a single argument to improve
kernel code uniformity.
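
For instance, the exception-commit flush in the hunk below goes from two
arguments to one combined blk_opf_t value (shown side by side purely for
illustration):

    /* before */
    area_io(ps, REQ_OP_WRITE, REQ_PREFLUSH | REQ_FUA | REQ_SYNC);
    /* after */
    area_io(ps, REQ_OP_WRITE | REQ_PREFLUSH | REQ_FUA | REQ_SYNC);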

Cc: Alasdair Kergon <agk@redhat.com>
Cc: Mike Snitzer <snitzer@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/md/dm-snap-persistent.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/drivers/md/dm-snap-persistent.c b/drivers/md/dm-snap-persistent.c
index eaf969de3d3a..f46f930eedf9 100644
--- a/drivers/md/dm-snap-persistent.c
+++ b/drivers/md/dm-snap-persistent.c
@@ -226,8 +226,8 @@ static void do_metadata(struct work_struct *work)
 /*
  * Read or write a chunk aligned and sized block of data from a device.
  */
-static int chunk_io(struct pstore *ps, void *area, chunk_t chunk, int op,
-		    int op_flags, int metadata)
+static int chunk_io(struct pstore *ps, void *area, chunk_t chunk, blk_opf_t opf,
+		    int metadata)
 {
 	struct dm_io_region where = {
 		.bdev = dm_snap_cow(ps->store->snap)->bdev,
@@ -235,7 +235,7 @@ static int chunk_io(struct pstore *ps, void *area, chunk_t chunk, int op,
 		.count = ps->store->chunk_size,
 	};
 	struct dm_io_request io_req = {
-		.bi_opf = op | op_flags,
+		.bi_opf = opf,
 		.mem.type = DM_IO_VMA,
 		.mem.ptr.vma = area,
 		.client = ps->io_client,
@@ -281,11 +281,11 @@ static void skip_metadata(struct pstore *ps)
  * Read or write a metadata area.  Remembering to skip the first
  * chunk which holds the header.
  */
-static int area_io(struct pstore *ps, int op, int op_flags)
+static int area_io(struct pstore *ps, blk_opf_t opf)
 {
 	chunk_t chunk = area_location(ps, ps->current_area);
 
-	return chunk_io(ps, ps->area, chunk, op, op_flags, 0);
+	return chunk_io(ps, ps->area, chunk, opf, 0);
 }
 
 static void zero_memory_area(struct pstore *ps)
@@ -296,7 +296,7 @@ static void zero_memory_area(struct pstore *ps)
 static int zero_disk_area(struct pstore *ps, chunk_t area)
 {
 	return chunk_io(ps, ps->zero_area, area_location(ps, area),
-			REQ_OP_WRITE, 0, 0);
+			REQ_OP_WRITE, 0);
 }
 
 static int read_header(struct pstore *ps, int *new_snapshot)
@@ -328,7 +328,7 @@ static int read_header(struct pstore *ps, int *new_snapshot)
 	if (r)
 		return r;
 
-	r = chunk_io(ps, ps->header_area, 0, REQ_OP_READ, 0, 1);
+	r = chunk_io(ps, ps->header_area, 0, REQ_OP_READ, 1);
 	if (r)
 		goto bad;
 
@@ -389,7 +389,7 @@ static int write_header(struct pstore *ps)
 	dh->version = cpu_to_le32(ps->version);
 	dh->chunk_size = cpu_to_le32(ps->store->chunk_size);
 
-	return chunk_io(ps, ps->header_area, 0, REQ_OP_WRITE, 0, 1);
+	return chunk_io(ps, ps->header_area, 0, REQ_OP_WRITE, 1);
 }
 
 /*
@@ -733,8 +733,8 @@ static void persistent_commit_exception(struct dm_exception_store *store,
 	/*
 	 * Commit exceptions to disk.
 	 */
-	if (ps->valid && area_io(ps, REQ_OP_WRITE,
-				 REQ_PREFLUSH | REQ_FUA | REQ_SYNC))
+	if (ps->valid && area_io(ps, REQ_OP_WRITE | REQ_PREFLUSH | REQ_FUA |
+				 REQ_SYNC))
 		ps->valid = 0;
 
 	/*
@@ -774,7 +774,7 @@ static int persistent_prepare_merge(struct dm_exception_store *store,
 			return 0;
 
 		ps->current_area--;
-		r = area_io(ps, REQ_OP_READ, 0);
+		r = area_io(ps, REQ_OP_READ);
 		if (r < 0)
 			return r;
 		ps->current_committed = ps->exceptions_per_area;
@@ -811,7 +811,7 @@ static int persistent_commit_merge(struct dm_exception_store *store,
 	for (i = 0; i < nr_merged; i++)
 		clear_exception(ps, ps->current_committed - 1 - i);
 
-	r = area_io(ps, REQ_OP_WRITE, REQ_PREFLUSH | REQ_FUA);
+	r = area_io(ps, REQ_OP_WRITE | REQ_PREFLUSH | REQ_FUA);
 	if (r < 0)
 		return r;
 


* [PATCH v2 29/63] dm/zone: Use the enum req_op type
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (27 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 28/63] dm-snap: Combine request operation type and flags Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-30 23:36   ` Damien Le Moal
  2022-06-29 23:31 ` [PATCH v2 30/63] dm/dm-zoned: " Bart Van Assche
                   ` (34 subsequent siblings)
  63 siblings, 1 reply; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Damien Le Moal

Use the enum req_op type for request operations instead of unsigned int.
This patch fixes a sparse warning introduced by making enum req_op
__bitwise.
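
For context, a sketch of the struct after this change, with the reason for
the warning noted inline (illustrative comment, not a literal sparse
message):

    struct orig_bio_details {
            enum req_op op;         /* a plain 'unsigned int' here made sparse
                                     * warn wherever this member meets __bitwise
                                     * request operation values such as bio_op() */
            unsigned int nr_sectors;
    };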

Cc: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/md/dm-zone.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/md/dm-zone.c b/drivers/md/dm-zone.c
index 3e7b1fe1580b..2e87237a35a5 100644
--- a/drivers/md/dm-zone.c
+++ b/drivers/md/dm-zone.c
@@ -361,7 +361,7 @@ static int dm_update_zone_wp_offset(struct mapped_device *md, unsigned int zno,
 }
 
 struct orig_bio_details {
-	unsigned int op;
+	enum req_op op;
 	unsigned int nr_sectors;
 };
 


* [PATCH v2 30/63] dm/dm-zoned: Use the enum req_op type
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (28 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 29/63] dm/zone: Use the enum req_op type Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-30 23:36   ` Damien Le Moal
  2022-06-29 23:31 ` [PATCH v2 31/63] md/core: Combine two sync_page_io() arguments Bart Van Assche
                   ` (33 subsequent siblings)
  63 siblings, 1 reply; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Alasdair Kergon,
	Mike Snitzer, Damien Le Moal

Improve static type checking by using the enum req_op type for arguments
that represent a request operation.

Cc: Alasdair Kergon <agk@redhat.com>
Cc: Mike Snitzer <snitzer@kernel.org>
Cc: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/md/dm-zoned-metadata.c | 5 +++--
 drivers/md/dm-zoned.h          | 2 +-
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/md/dm-zoned-metadata.c b/drivers/md/dm-zoned-metadata.c
index d1ea66114d14..34db364c23a8 100644
--- a/drivers/md/dm-zoned-metadata.c
+++ b/drivers/md/dm-zoned-metadata.c
@@ -737,7 +737,7 @@ static int dmz_write_mblock(struct dmz_metadata *zmd, struct dmz_mblock *mblk,
 /*
  * Read/write a metadata block.
  */
-static int dmz_rdwr_block(struct dmz_dev *dev, int op,
+static int dmz_rdwr_block(struct dmz_dev *dev, enum req_op op,
 			  sector_t block, struct page *page)
 {
 	struct bio *bio;
@@ -2045,7 +2045,8 @@ struct dm_zone *dmz_get_zone_for_reclaim(struct dmz_metadata *zmd,
  * allocated and used to map the chunk.
  * The zone returned will be set to the active state.
  */
-struct dm_zone *dmz_get_chunk_mapping(struct dmz_metadata *zmd, unsigned int chunk, int op)
+struct dm_zone *dmz_get_chunk_mapping(struct dmz_metadata *zmd,
+				      unsigned int chunk, enum req_op op)
 {
 	struct dmz_mblock *dmap_mblk = zmd->map_mblk[chunk >> DMZ_MAP_ENTRIES_SHIFT];
 	struct dmz_map *dmap = (struct dmz_map *) dmap_mblk->data;
diff --git a/drivers/md/dm-zoned.h b/drivers/md/dm-zoned.h
index a02744a0846c..265494d3f711 100644
--- a/drivers/md/dm-zoned.h
+++ b/drivers/md/dm-zoned.h
@@ -248,7 +248,7 @@ struct dm_zone *dmz_get_zone_for_reclaim(struct dmz_metadata *zmd,
 					 unsigned int dev_idx, bool idle);
 
 struct dm_zone *dmz_get_chunk_mapping(struct dmz_metadata *zmd,
-				      unsigned int chunk, int op);
+				      unsigned int chunk, enum req_op op);
 void dmz_put_chunk_mapping(struct dmz_metadata *zmd, struct dm_zone *zone);
 struct dm_zone *dmz_get_chunk_buffer(struct dmz_metadata *zmd,
 				     struct dm_zone *dzone);


* [PATCH v2 31/63] md/core: Combine two sync_page_io() arguments
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (29 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 30/63] dm/dm-zoned: " Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 32/63] md/bcache: Combine two uuid_io() arguments Bart Van Assche
                   ` (32 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche, Song Liu

Improve the uniformity of request operation and flags handling in the
kernel by passing them as a single argument.
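
Call sites then collapse the two trailing I/O arguments into one, e.g.
(illustrative, mirroring the hunks below):

    /* before */
    sync_page_io(rdev, 0, size, rdev->sb_page, REQ_OP_READ, 0, true);
    /* after: the operation and flags travel in a single blk_opf_t argument */
    sync_page_io(rdev, 0, size, rdev->sb_page, REQ_OP_READ, true);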

Cc: Song Liu <song@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/md/dm-raid.c     |  2 +-
 drivers/md/md-bitmap.c   |  2 +-
 drivers/md/md.c          | 10 +++++-----
 drivers/md/md.h          |  3 +--
 drivers/md/raid1.c       |  8 ++++----
 drivers/md/raid10.c      | 10 +++++-----
 drivers/md/raid5-cache.c | 12 ++++++------
 drivers/md/raid5-ppl.c   | 12 ++++++------
 8 files changed, 29 insertions(+), 30 deletions(-)

diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
index 9526ccbedafb..fdd6616632c9 100644
--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -2036,7 +2036,7 @@ static int read_disk_sb(struct md_rdev *rdev, int size, bool force_reload)
 
 	rdev->sb_loaded = 0;
 
-	if (!sync_page_io(rdev, 0, size, rdev->sb_page, REQ_OP_READ, 0, true)) {
+	if (!sync_page_io(rdev, 0, size, rdev->sb_page, REQ_OP_READ, true)) {
 		DMERR("Failed to read superblock of device at position %d",
 		      rdev->raid_disk);
 		md_error(rdev->mddev, rdev);
diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
index d87f674ab762..0a21b8317103 100644
--- a/drivers/md/md-bitmap.c
+++ b/drivers/md/md-bitmap.c
@@ -165,7 +165,7 @@ static int read_sb_page(struct mddev *mddev, loff_t offset,
 
 		if (sync_page_io(rdev, target,
 				 roundup(size, bdev_logical_block_size(rdev->bdev)),
-				 page, REQ_OP_READ, 0, true)) {
+				 page, REQ_OP_READ, true)) {
 			page->index = index;
 			return 0;
 		}
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 076255ec9ba1..31c26edff353 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -993,15 +993,15 @@ int md_super_wait(struct mddev *mddev)
 }
 
 int sync_page_io(struct md_rdev *rdev, sector_t sector, int size,
-		 struct page *page, int op, int op_flags, bool metadata_op)
+		 struct page *page, blk_opf_t opf, bool metadata_op)
 {
 	struct bio bio;
 	struct bio_vec bvec;
 
 	if (metadata_op && rdev->meta_bdev)
-		bio_init(&bio, rdev->meta_bdev, &bvec, 1, op | op_flags);
+		bio_init(&bio, rdev->meta_bdev, &bvec, 1, opf);
 	else
-		bio_init(&bio, rdev->bdev, &bvec, 1, op | op_flags);
+		bio_init(&bio, rdev->bdev, &bvec, 1, opf);
 
 	if (metadata_op)
 		bio.bi_iter.bi_sector = sector + rdev->sb_start;
@@ -1024,7 +1024,7 @@ static int read_disk_sb(struct md_rdev *rdev, int size)
 	if (rdev->sb_loaded)
 		return 0;
 
-	if (!sync_page_io(rdev, 0, size, rdev->sb_page, REQ_OP_READ, 0, true))
+	if (!sync_page_io(rdev, 0, size, rdev->sb_page, REQ_OP_READ, true))
 		goto fail;
 	rdev->sb_loaded = 1;
 	return 0;
@@ -1722,7 +1722,7 @@ static int super_1_load(struct md_rdev *rdev, struct md_rdev *refdev, int minor_
 			return -EINVAL;
 		bb_sector = (long long)offset;
 		if (!sync_page_io(rdev, bb_sector, sectors << 9,
-				  rdev->bb_page, REQ_OP_READ, 0, true))
+				  rdev->bb_page, REQ_OP_READ, true))
 			return -EIO;
 		bbp = (__le64 *)page_address(rdev->bb_page);
 		rdev->badblocks.shift = sb->bblog_shift;
diff --git a/drivers/md/md.h b/drivers/md/md.h
index cf2cbb17acbd..b4f84b27bdef 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -738,8 +738,7 @@ extern void md_super_write(struct mddev *mddev, struct md_rdev *rdev,
 			   sector_t sector, int size, struct page *page);
 extern int md_super_wait(struct mddev *mddev);
 extern int sync_page_io(struct md_rdev *rdev, sector_t sector, int size,
-			struct page *page, int op, int op_flags,
-			bool metadata_op);
+		struct page *page, blk_opf_t opf, bool metadata_op);
 extern void md_do_sync(struct md_thread *thread);
 extern void md_new_event(void);
 extern void md_allow_write(struct mddev *mddev);
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 258d4eb2d63c..e7cd1d30d68a 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1988,9 +1988,9 @@ static void end_sync_write(struct bio *bio)
 }
 
 static int r1_sync_page_io(struct md_rdev *rdev, sector_t sector,
-			    int sectors, struct page *page, int rw)
+			   int sectors, struct page *page, int rw)
 {
-	if (sync_page_io(rdev, sector, sectors << 9, page, rw, 0, false))
+	if (sync_page_io(rdev, sector, sectors << 9, page, rw, false))
 		/* success */
 		return 1;
 	if (rw == WRITE) {
@@ -2057,7 +2057,7 @@ static int fix_sync_read_error(struct r1bio *r1_bio)
 				rdev = conf->mirrors[d].rdev;
 				if (sync_page_io(rdev, sect, s<<9,
 						 pages[idx],
-						 REQ_OP_READ, 0, false)) {
+						 REQ_OP_READ, false)) {
 					success = 1;
 					break;
 				}
@@ -2305,7 +2305,7 @@ static void fix_read_error(struct r1conf *conf, int read_disk,
 				atomic_inc(&rdev->nr_pending);
 				rcu_read_unlock();
 				if (sync_page_io(rdev, sect, s<<9,
-					 conf->tmppage, REQ_OP_READ, 0, false))
+					 conf->tmppage, REQ_OP_READ, false))
 					success = 1;
 				rdev_dec_pending(rdev, mddev);
 				if (success)
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index d589f823feb1..910d3cd73105 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -2512,7 +2512,7 @@ static void fix_recovery_read_error(struct r10bio *r10_bio)
 				  addr,
 				  s << 9,
 				  pages[idx],
-				  REQ_OP_READ, 0, false);
+				  REQ_OP_READ, false);
 		if (ok) {
 			rdev = conf->mirrors[dw].rdev;
 			addr = r10_bio->devs[1].addr + sect;
@@ -2520,7 +2520,7 @@ static void fix_recovery_read_error(struct r10bio *r10_bio)
 					  addr,
 					  s << 9,
 					  pages[idx],
-					  REQ_OP_WRITE, 0, false);
+					  REQ_OP_WRITE, false);
 			if (!ok) {
 				set_bit(WriteErrorSeen, &rdev->flags);
 				if (!test_and_set_bit(WantReplacement,
@@ -2644,7 +2644,7 @@ static int r10_sync_page_io(struct md_rdev *rdev, sector_t sector,
 	if (is_badblock(rdev, sector, sectors, &first_bad, &bad_sectors)
 	    && (rw == READ || test_bit(WriteErrorSeen, &rdev->flags)))
 		return -1;
-	if (sync_page_io(rdev, sector, sectors << 9, page, rw, 0, false))
+	if (sync_page_io(rdev, sector, sectors << 9, page, rw, false))
 		/* success */
 		return 1;
 	if (rw == WRITE) {
@@ -2726,7 +2726,7 @@ static void fix_read_error(struct r10conf *conf, struct mddev *mddev, struct r10
 						       sect,
 						       s<<9,
 						       conf->tmppage,
-						       REQ_OP_READ, 0, false);
+						       REQ_OP_READ, false);
 				rdev_dec_pending(rdev, mddev);
 				rcu_read_lock();
 				if (success)
@@ -5107,7 +5107,7 @@ static int handle_reshape_read_error(struct mddev *mddev,
 					       addr,
 					       s << 9,
 					       pages[idx],
-					       REQ_OP_READ, 0, false);
+					       REQ_OP_READ, false);
 			rdev_dec_pending(rdev, mddev);
 			rcu_read_lock();
 			if (success)
diff --git a/drivers/md/raid5-cache.c b/drivers/md/raid5-cache.c
index 83c184eddbda..6f2dd73128b0 100644
--- a/drivers/md/raid5-cache.c
+++ b/drivers/md/raid5-cache.c
@@ -1788,7 +1788,7 @@ static int r5l_log_write_empty_meta_block(struct r5l_log *log, sector_t pos,
 	mb = page_address(page);
 	mb->checksum = cpu_to_le32(crc32c_le(log->uuid_checksum,
 					     mb, PAGE_SIZE));
-	if (!sync_page_io(log->rdev, pos, PAGE_SIZE, page, REQ_OP_WRITE,
+	if (!sync_page_io(log->rdev, pos, PAGE_SIZE, page, REQ_OP_WRITE |
 			  REQ_SYNC | REQ_FUA, false)) {
 		__free_page(page);
 		return -EIO;
@@ -1898,7 +1898,7 @@ r5l_recovery_replay_one_stripe(struct r5conf *conf,
 			atomic_inc(&rdev->nr_pending);
 			rcu_read_unlock();
 			sync_page_io(rdev, sh->sector, PAGE_SIZE,
-				     sh->dev[disk_index].page, REQ_OP_WRITE, 0,
+				     sh->dev[disk_index].page, REQ_OP_WRITE,
 				     false);
 			rdev_dec_pending(rdev, rdev->mddev);
 			rcu_read_lock();
@@ -1908,7 +1908,7 @@ r5l_recovery_replay_one_stripe(struct r5conf *conf,
 			atomic_inc(&rrdev->nr_pending);
 			rcu_read_unlock();
 			sync_page_io(rrdev, sh->sector, PAGE_SIZE,
-				     sh->dev[disk_index].page, REQ_OP_WRITE, 0,
+				     sh->dev[disk_index].page, REQ_OP_WRITE,
 				     false);
 			rdev_dec_pending(rrdev, rrdev->mddev);
 			rcu_read_lock();
@@ -2394,7 +2394,7 @@ r5c_recovery_rewrite_data_only_stripes(struct r5l_log *log,
 						  PAGE_SIZE));
 				kunmap_atomic(addr);
 				sync_page_io(log->rdev, write_pos, PAGE_SIZE,
-					     dev->page, REQ_OP_WRITE, 0, false);
+					     dev->page, REQ_OP_WRITE, false);
 				write_pos = r5l_ring_add(log, write_pos,
 							 BLOCK_SECTORS);
 				offset += sizeof(__le32) +
@@ -2406,7 +2406,7 @@ r5c_recovery_rewrite_data_only_stripes(struct r5l_log *log,
 		mb->checksum = cpu_to_le32(crc32c_le(log->uuid_checksum,
 						     mb, PAGE_SIZE));
 		sync_page_io(log->rdev, ctx->pos, PAGE_SIZE, page,
-			     REQ_OP_WRITE, REQ_SYNC | REQ_FUA, false);
+			     REQ_OP_WRITE | REQ_SYNC | REQ_FUA, false);
 		sh->log_start = ctx->pos;
 		list_add_tail(&sh->r5c, &log->stripe_in_journal_list);
 		atomic_inc(&log->stripe_in_journal_count);
@@ -2971,7 +2971,7 @@ static int r5l_load_log(struct r5l_log *log)
 	if (!page)
 		return -ENOMEM;
 
-	if (!sync_page_io(rdev, cp, PAGE_SIZE, page, REQ_OP_READ, 0, false)) {
+	if (!sync_page_io(rdev, cp, PAGE_SIZE, page, REQ_OP_READ, false)) {
 		ret = -EIO;
 		goto ioerr;
 	}
diff --git a/drivers/md/raid5-ppl.c b/drivers/md/raid5-ppl.c
index 0a2e4806b1ec..98988cb26295 100644
--- a/drivers/md/raid5-ppl.c
+++ b/drivers/md/raid5-ppl.c
@@ -897,7 +897,7 @@ static int ppl_recover_entry(struct ppl_log *log, struct ppl_header_entry *e,
 				 __func__, indent, "", rdev->bdev,
 				 (unsigned long long)sector);
 			if (!sync_page_io(rdev, sector, block_size, page2,
-					REQ_OP_READ, 0, false)) {
+					REQ_OP_READ, false)) {
 				md_error(mddev, rdev);
 				pr_debug("%s:%*s read failed!\n", __func__,
 					 indent, "");
@@ -919,7 +919,7 @@ static int ppl_recover_entry(struct ppl_log *log, struct ppl_header_entry *e,
 				 (unsigned long long)(ppl_sector + i));
 			if (!sync_page_io(log->rdev,
 					ppl_sector - log->rdev->data_offset + i,
-					block_size, page2, REQ_OP_READ, 0,
+					block_size, page2, REQ_OP_READ,
 					false)) {
 				pr_debug("%s:%*s read failed!\n", __func__,
 					 indent, "");
@@ -946,7 +946,7 @@ static int ppl_recover_entry(struct ppl_log *log, struct ppl_header_entry *e,
 			 (unsigned long long)parity_sector,
 			 parity_rdev->bdev);
 		if (!sync_page_io(parity_rdev, parity_sector, block_size,
-				page1, REQ_OP_WRITE, 0, false)) {
+				  page1, REQ_OP_WRITE, false)) {
 			pr_debug("%s:%*s parity write error!\n", __func__,
 				 indent, "");
 			md_error(mddev, parity_rdev);
@@ -998,7 +998,7 @@ static int ppl_recover(struct ppl_log *log, struct ppl_header *pplhdr,
 			int s = pp_size > PAGE_SIZE ? PAGE_SIZE : pp_size;
 
 			if (!sync_page_io(rdev, sector - rdev->data_offset,
-					s, page, REQ_OP_READ, 0, false)) {
+					s, page, REQ_OP_READ, false)) {
 				md_error(mddev, rdev);
 				ret = -EIO;
 				goto out;
@@ -1062,7 +1062,7 @@ static int ppl_write_empty_header(struct ppl_log *log)
 
 	if (!sync_page_io(rdev, rdev->ppl.sector - rdev->data_offset,
 			  PPL_HEADER_SIZE, page, REQ_OP_WRITE | REQ_SYNC |
-			  REQ_FUA, 0, false)) {
+			  REQ_FUA, false)) {
 		md_error(rdev->mddev, rdev);
 		ret = -EIO;
 	}
@@ -1100,7 +1100,7 @@ static int ppl_load_distributed(struct ppl_log *log)
 		if (!sync_page_io(rdev,
 				  rdev->ppl.sector - rdev->data_offset +
 				  pplhdr_offset, PAGE_SIZE, page, REQ_OP_READ,
-				  0, false)) {
+				  false)) {
 			md_error(mddev, rdev);
 			ret = -EIO;
 			/* if not able to read - don't recover any PPL */

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 32/63] md/bcache: Combine two uuid_io() arguments
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (30 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 31/63] md/core: Combine two sync_page_io() arguments Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 33/63] md/bcache: Combine two prio_io() arguments Bart Van Assche
                   ` (31 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche, Coly Li

Improve uniformity in the kernel's handling of request operations and
flags by passing these as a single argument.

Cc: Coly Li <colyli@suse.de>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/md/bcache/super.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 9dd752d272f6..a2f61a2225d2 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -414,8 +414,8 @@ static void uuid_io_unlock(struct closure *cl)
 	up(&c->uuid_write_mutex);
 }
 
-static void uuid_io(struct cache_set *c, int op, unsigned long op_flags,
-		    struct bkey *k, struct closure *parent)
+static void uuid_io(struct cache_set *c, blk_opf_t opf, struct bkey *k,
+		    struct closure *parent)
 {
 	struct closure *cl = &c->uuid_write;
 	struct uuid_entry *u;
@@ -429,22 +429,22 @@ static void uuid_io(struct cache_set *c, int op, unsigned long op_flags,
 	for (i = 0; i < KEY_PTRS(k); i++) {
 		struct bio *bio = bch_bbio_alloc(c);
 
-		bio->bi_opf = REQ_SYNC | REQ_META | op_flags;
+		bio->bi_opf = opf | REQ_SYNC | REQ_META;
 		bio->bi_iter.bi_size = KEY_SIZE(k) << 9;
 
 		bio->bi_end_io	= uuid_endio;
 		bio->bi_private = cl;
-		bio_set_op_attrs(bio, op, REQ_SYNC|REQ_META|op_flags);
 		bch_bio_map(bio, c->uuids);
 
 		bch_submit_bbio(bio, c, k, i);
 
-		if (op != REQ_OP_WRITE)
+		if ((opf & REQ_OP_MASK) != REQ_OP_WRITE)
 			break;
 	}
 
 	bch_extent_to_text(buf, sizeof(buf), k);
-	pr_debug("%s UUIDs at %s\n", op == REQ_OP_WRITE ? "wrote" : "read", buf);
+	pr_debug("%s UUIDs at %s\n", (opf & REQ_OP_MASK) == REQ_OP_WRITE ?
+		 "wrote" : "read", buf);
 
 	for (u = c->uuids; u < c->uuids + c->nr_uuids; u++)
 		if (!bch_is_zero(u->uuid, 16))
@@ -463,7 +463,7 @@ static char *uuid_read(struct cache_set *c, struct jset *j, struct closure *cl)
 		return "bad uuid pointer";
 
 	bkey_copy(&c->uuid_bucket, k);
-	uuid_io(c, REQ_OP_READ, 0, k, cl);
+	uuid_io(c, REQ_OP_READ, k, cl);
 
 	if (j->version < BCACHE_JSET_VERSION_UUIDv1) {
 		struct uuid_entry_v0	*u0 = (void *) c->uuids;
@@ -511,7 +511,7 @@ static int __uuid_write(struct cache_set *c)
 
 	size =  meta_bucket_pages(&ca->sb) * PAGE_SECTORS;
 	SET_KEY_SIZE(&k.key, size);
-	uuid_io(c, REQ_OP_WRITE, 0, &k.key, &cl);
+	uuid_io(c, REQ_OP_WRITE, &k.key, &cl);
 	closure_sync(&cl);
 
 	/* Only one bucket used for uuid write */
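
To make the new calling convention concrete, here is a minimal sketch (not
taken from the patch; do_meta_io() is a made-up stand-in for uuid_io()) of
how one blk_opf_t argument carries both the operation and the flags, and how
the callee recovers the operation with REQ_OP_MASK:

	static void do_meta_io(struct cache_set *c, blk_opf_t opf)
	{
		/* recover the operation from the combined value */
		const enum req_op op = opf & REQ_OP_MASK;

		if (op != REQ_OP_WRITE)
			return;
		/* write path ... */
	}

	static void do_meta_io_caller(struct cache_set *c)
	{
		/* operation and flags travel together in a single value */
		do_meta_io(c, REQ_OP_WRITE | REQ_SYNC);
	}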

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 33/63] md/bcache: Combine two prio_io() arguments
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (31 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 32/63] md/bcache: Combine two uuid_io() arguments Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 34/63] md/raid1: Use the new blk_opf_t type Bart Van Assche
                   ` (30 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche, Coly Li

Improve uniformity in the kernel's handling of request operations and
flags by passing these as a single argument.

Cc: Coly Li <colyli@suse.de>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/md/bcache/super.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index a2f61a2225d2..ba3909bb6bea 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -587,8 +587,7 @@ static void prio_endio(struct bio *bio)
 	closure_put(&ca->prio);
 }
 
-static void prio_io(struct cache *ca, uint64_t bucket, int op,
-		    unsigned long op_flags)
+static void prio_io(struct cache *ca, uint64_t bucket, blk_opf_t opf)
 {
 	struct closure *cl = &ca->prio;
 	struct bio *bio = bch_bbio_alloc(ca->set);
@@ -601,7 +600,7 @@ static void prio_io(struct cache *ca, uint64_t bucket, int op,
 
 	bio->bi_end_io	= prio_endio;
 	bio->bi_private = ca;
-	bio_set_op_attrs(bio, op, REQ_SYNC|REQ_META|op_flags);
+	bio->bi_opf = opf | REQ_SYNC | REQ_META;
 	bch_bio_map(bio, ca->disk_buckets);
 
 	closure_bio_submit(ca->set, bio, &ca->prio);
@@ -661,7 +660,7 @@ int bch_prio_write(struct cache *ca, bool wait)
 		BUG_ON(bucket == -1);
 
 		mutex_unlock(&ca->set->bucket_lock);
-		prio_io(ca, bucket, REQ_OP_WRITE, 0);
+		prio_io(ca, bucket, REQ_OP_WRITE);
 		mutex_lock(&ca->set->bucket_lock);
 
 		ca->prio_buckets[i] = bucket;
@@ -705,7 +704,7 @@ static int prio_read(struct cache *ca, uint64_t bucket)
 			ca->prio_last_buckets[bucket_nr] = bucket;
 			bucket_nr++;
 
-			prio_io(ca, bucket, REQ_OP_READ, 0);
+			prio_io(ca, bucket, REQ_OP_READ);
 
 			if (p->csum !=
 			    bch_crc64(&p->magic, meta_bucket_bytes(&ca->sb) - 8)) {
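
For reference, the dropped bio_set_op_attrs() helper is (roughly, going from
memory) nothing more than an OR into bi_opf, which is why the direct
assignment in this patch is equivalent:

	/* legacy helper, approximately: */
	#define bio_set_op_attrs(bio, op, op_flags) \
		((bio)->bi_opf = (op) | (op_flags))

	/* old: operation and flags arrive separately */
	bio_set_op_attrs(bio, op, REQ_SYNC | REQ_META | op_flags);

	/* new: the caller already combined them into one blk_opf_t */
	bio->bi_opf = opf | REQ_SYNC | REQ_META;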

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 34/63] md/raid1: Use the new blk_opf_t type
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (32 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 33/63] md/bcache: Combine two prio_io() arguments Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-30 18:41   ` Song Liu
  2022-06-29 23:31 ` [PATCH v2 35/63] md/raid10: " Bart Van Assche
                   ` (29 subsequent siblings)
  63 siblings, 1 reply; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche, Song Liu

Improve static type checking by using the new blk_opf_t type for
variables that represent request flags.

Cc: Song Liu <song@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/md/raid1.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index e7cd1d30d68a..8180b57be040 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1220,8 +1220,8 @@ static void raid1_read_request(struct mddev *mddev, struct bio *bio,
 	struct raid1_info *mirror;
 	struct bio *read_bio;
 	struct bitmap *bitmap = mddev->bitmap;
-	const int op = bio_op(bio);
-	const unsigned long do_sync = (bio->bi_opf & REQ_SYNC);
+	const enum req_op op = bio_op(bio);
+	const blk_opf_t do_sync = bio->bi_opf & REQ_SYNC;
 	int max_sectors;
 	int rdisk;
 	bool r1bio_existed = !!r1_bio;
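
The reason do_sync gets the blk_opf_t type is that the masked value is still
a flag set (either 0 or REQ_SYNC) and is later OR-ed back into the cloned
bio; a sketch of the pattern (the last line approximates code further down
in raid1_read_request() and is not part of this hunk):

	const enum req_op op = bio_op(bio);			/* REQ_OP_READ here */
	const blk_opf_t do_sync = bio->bi_opf & REQ_SYNC;	/* 0 or REQ_SYNC */

	/* ... */
	read_bio->bi_opf = op | do_sync;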

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 35/63] md/raid10: Use the new blk_opf_t type
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (33 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 34/63] md/raid1: Use the new blk_opf_t type Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-30 18:41   ` Song Liu
  2022-06-29 23:31 ` [PATCH v2 36/63] md/raid5: Use the enum req_op and blk_opf_t types Bart Van Assche
                   ` (28 subsequent siblings)
  63 siblings, 1 reply; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche, Song Liu

Improve static type checking by using the new blk_opf_t type for
variables that represent request flags.

Cc: Song Liu <song@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/md/raid10.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 910d3cd73105..ac8bc7d2565a 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1136,8 +1136,8 @@ static void raid10_read_request(struct mddev *mddev, struct bio *bio,
 {
 	struct r10conf *conf = mddev->private;
 	struct bio *read_bio;
-	const int op = bio_op(bio);
-	const unsigned long do_sync = (bio->bi_opf & REQ_SYNC);
+	const enum req_op op = bio_op(bio);
+	const blk_opf_t do_sync = bio->bi_opf & REQ_SYNC;
 	int max_sectors;
 	struct md_rdev *rdev;
 	char b[BDEVNAME_SIZE];
@@ -1230,9 +1230,9 @@ static void raid10_write_one_disk(struct mddev *mddev, struct r10bio *r10_bio,
 				  struct bio *bio, bool replacement,
 				  int n_copy)
 {
-	const int op = bio_op(bio);
-	const unsigned long do_sync = (bio->bi_opf & REQ_SYNC);
-	const unsigned long do_fua = (bio->bi_opf & REQ_FUA);
+	const enum req_op op = bio_op(bio);
+	const blk_opf_t do_sync = bio->bi_opf & REQ_SYNC;
+	const blk_opf_t do_fua = bio->bi_opf & REQ_FUA;
 	unsigned long flags;
 	struct blk_plug_cb *cb;
 	struct raid1_plug_cb *plug = NULL;

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 36/63] md/raid5: Use the enum req_op and blk_opf_t types
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (34 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 35/63] md/raid10: " Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-30 18:41   ` Song Liu
  2022-06-29 23:31 ` [PATCH v2 37/63] nvme/host: " Bart Van Assche
                   ` (27 subsequent siblings)
  63 siblings, 1 reply; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche, Song Liu

Improve static type checking by using the enum req_op type for variables
that represent a request operation and the new blk_opf_t type for
variables that represent request flags.

Cc: Song Liu <song@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/md/raid5.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 5d09256d7f81..b11d8b6a2dc2 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -1082,7 +1082,8 @@ static void ops_run_io(struct stripe_head *sh, struct stripe_head_state *s)
 	should_defer = conf->batch_bio_dispatch && conf->group_cnt;
 
 	for (i = disks; i--; ) {
-		int op, op_flags = 0;
+		enum req_op op;
+		blk_opf_t op_flags = 0;
 		int replace_only = 0;
 		struct bio *bi, *rbi;
 		struct md_rdev *rdev, *rrdev = NULL;
@@ -5896,7 +5897,7 @@ static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
 			(unsigned long long)logical_sector);
 
 		sh = raid5_get_active_stripe(conf, new_sector, previous,
-				       (bi->bi_opf & REQ_RAHEAD), 0);
+					     !!(bi->bi_opf & REQ_RAHEAD), 0);
 		if (sh) {
 			if (unlikely(previous)) {
 				/* expansion might have moved on while waiting for a
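
The !! added in the second hunk collapses the flag test to a plain 0 or 1
before it is handed to a plain integer parameter; without it, sparse would
warn about mixing the new __bitwise type with an ordinary int. A sketch
(warning wording approximate):

	blk_opf_t opf = bi->bi_opf;
	int ra;

	ra = opf & REQ_RAHEAD;		/* sparse: different base types */
	ra = !!(opf & REQ_RAHEAD);	/* plain 0 or 1, no warning */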

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 37/63] nvme/host: Use the enum req_op and blk_opf_t types
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (35 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 36/63] md/raid5: Use the enum req_op and blk_opf_t types Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 38/63] nvme/target: Use the new blk_opf_t type Bart Van Assche
                   ` (26 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Keith Busch,
	Sagi Grimberg

Improve static type checking by using the enum req_op type for variables
that represent a request operation and the new blk_opf_t type for
variables that represent request flags.

Cc: Christoph Hellwig <hch@lst.de>
Cc: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/nvme/host/ioctl.c | 4 ++--
 drivers/nvme/host/nvme.h  | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
index a2e89db1cd63..27614bee7380 100644
--- a/drivers/nvme/host/ioctl.c
+++ b/drivers/nvme/host/ioctl.c
@@ -68,7 +68,7 @@ static struct request *nvme_alloc_user_request(struct request_queue *q,
 		struct nvme_command *cmd, void __user *ubuffer,
 		unsigned bufflen, void __user *meta_buffer, unsigned meta_len,
 		u32 meta_seed, void **metap, unsigned timeout, bool vec,
-		unsigned int rq_flags, blk_mq_req_flags_t blk_flags)
+		blk_opf_t rq_flags, blk_mq_req_flags_t blk_flags)
 {
 	bool write = nvme_is_write(cmd);
 	struct nvme_ns *ns = q->queuedata;
@@ -407,7 +407,7 @@ static int nvme_uring_cmd_io(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
 	struct nvme_uring_data d;
 	struct nvme_command c;
 	struct request *req;
-	unsigned int rq_flags = 0;
+	blk_opf_t rq_flags = 0;
 	blk_mq_req_flags_t blk_flags = 0;
 	void *meta = NULL;
 
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 0da94b233fed..ac9753674d5d 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -733,7 +733,7 @@ void nvme_wait_freeze(struct nvme_ctrl *ctrl);
 int nvme_wait_freeze_timeout(struct nvme_ctrl *ctrl, long timeout);
 void nvme_start_freeze(struct nvme_ctrl *ctrl);
 
-static inline unsigned int nvme_req_op(struct nvme_command *cmd)
+static inline enum req_op nvme_req_op(struct nvme_command *cmd)
 {
 	return nvme_is_write(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
 }

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 38/63] nvme/target: Use the new blk_opf_t type
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (36 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 37/63] nvme/host: " Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 39/63] scsi/core: Improve static type checking Bart Van Assche
                   ` (25 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Sagi Grimberg,
	Chaitanya Kulkarni

Improve static type checking by using the new blk_opf_t type for variables
that represent a request operation combined with request flags.

Cc: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Cc: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/nvme/target/io-cmd-bdev.c | 3 ++-
 drivers/nvme/target/zns.c         | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 27a72504d31c..306d97b3840f 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -246,7 +246,8 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
 	struct scatterlist *sg;
 	struct blk_plug plug;
 	sector_t sector;
-	int op, i, rc;
+	blk_opf_t op;
+	int i, rc;
 	struct sg_mapping_iter prot_miter;
 	unsigned int iter_flags;
 	unsigned int total_len = nvmet_rw_data_len(req) + req->metadata_len;
diff --git a/drivers/nvme/target/zns.c b/drivers/nvme/target/zns.c
index 62afb7936132..c5e315d57d60 100644
--- a/drivers/nvme/target/zns.c
+++ b/drivers/nvme/target/zns.c
@@ -525,7 +525,7 @@ static void nvmet_bdev_zone_append_bio_done(struct bio *bio)
 void nvmet_bdev_execute_zone_append(struct nvmet_req *req)
 {
 	sector_t sect = nvmet_lba_to_sect(req->ns, req->cmd->rw.slba);
-	const unsigned int op = REQ_OP_ZONE_APPEND | REQ_SYNC | REQ_IDLE;
+	const blk_opf_t op = REQ_OP_ZONE_APPEND | REQ_SYNC | REQ_IDLE;
 	u16 status = NVME_SC_SUCCESS;
 	unsigned int total_len = 0;
 	struct scatterlist *sg;

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 39/63] scsi/core: Improve static type checking
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (37 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 38/63] nvme/target: Use the new blk_opf_t type Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 40/63] scsi/core: Change the return type of scsi_noretry_cmd() into bool Bart Van Assche
                   ` (24 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche,
	Martin K . Petersen, Ming Lei, Hannes Reinecke, John Garry,
	Mike Christie

Improve static type checking by using the new blk_opf_t type for the
combination of a request operation and its flags.

Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Hannes Reinecke <hare@suse.de>
Cc: John Garry <john.garry@huawei.com>
Cc: Mike Christie <michael.christie@oracle.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/scsi/scsi_lib.c  | 6 +++---
 include/scsi/scsi_cmnd.h | 4 ++--
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index cdf0056582d5..126997e464cb 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1125,12 +1125,12 @@ static void scsi_initialize_rq(struct request *rq)
 	cmd->retries = 0;
 }
 
-struct request *scsi_alloc_request(struct request_queue *q,
-		unsigned int op, blk_mq_req_flags_t flags)
+struct request *scsi_alloc_request(struct request_queue *q, blk_opf_t opf,
+				   blk_mq_req_flags_t flags)
 {
 	struct request *rq;
 
-	rq = blk_mq_alloc_request(q, op, flags);
+	rq = blk_mq_alloc_request(q, opf, flags);
 	if (!IS_ERR(rq))
 		scsi_initialize_rq(rq);
 	return rq;
diff --git a/include/scsi/scsi_cmnd.h b/include/scsi/scsi_cmnd.h
index 1e80e70dfa92..bac55decf900 100644
--- a/include/scsi/scsi_cmnd.h
+++ b/include/scsi/scsi_cmnd.h
@@ -386,7 +386,7 @@ static inline unsigned scsi_transfer_length(struct scsi_cmnd *scmd)
 extern void scsi_build_sense(struct scsi_cmnd *scmd, int desc,
 			     u8 key, u8 asc, u8 ascq);
 
-struct request *scsi_alloc_request(struct request_queue *q,
-		unsigned int op, blk_mq_req_flags_t flags);
+struct request *scsi_alloc_request(struct request_queue *q, blk_opf_t opf,
+				   blk_mq_req_flags_t flags);
 
 #endif /* _SCSI_SCSI_CMND_H */
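
What the stricter prototype buys, concretely: blk_opf_t is a __bitwise
typedef (introduced earlier in the series, roughly "typedef __u32 __bitwise
blk_opf_t;"), so sparse refuses to silently mix it with plain integers. A
hypothetical caller, with q and rq assumed and not taken from any patch:

	struct request *rq;

	/* fine: REQ_OP_* and REQ_* values carry the blk_opf_t type */
	rq = scsi_alloc_request(q, REQ_OP_READ, BLK_MQ_REQ_NOWAIT);

	/* the legacy plain-int WRITE constant used to slip through the old
	 * 'unsigned int op' parameter; against blk_opf_t, sparse reports a
	 * base-type mismatch */
	rq = scsi_alloc_request(q, WRITE, 0);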

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 40/63] scsi/core: Change the return type of scsi_noretry_cmd() into bool
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (38 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 39/63] scsi/core: Improve static type checking Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 41/63] scsi/core: Use the new blk_opf_t type Bart Van Assche
                   ` (23 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche,
	Martin K . Petersen, Ming Lei, Hannes Reinecke, John Garry,
	Mike Christie

This patch prepares for introducing the new blk_opf_t type in the SCSI core.
Since the value returned by scsi_noretry_cmd() is only used in boolean
expressions, this patch does not change any functionality.

Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Hannes Reinecke <hare@suse.de>
Cc: John Garry <john.garry@huawei.com>
Cc: Mike Christie <michael.christie@oracle.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/scsi/scsi_error.c | 16 ++++++++--------
 drivers/scsi/scsi_priv.h  |  2 +-
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
index 49ef864df581..4b8686162ddd 100644
--- a/drivers/scsi/scsi_error.c
+++ b/drivers/scsi/scsi_error.c
@@ -1779,7 +1779,7 @@ static void scsi_eh_offline_sdevs(struct list_head *work_q,
  * scsi_noretry_cmd - determine if command should be failed fast
  * @scmd:	SCSI cmd to examine.
  */
-int scsi_noretry_cmd(struct scsi_cmnd *scmd)
+bool scsi_noretry_cmd(struct scsi_cmnd *scmd)
 {
 	struct request *req = scsi_cmd_to_rq(scmd);
 
@@ -1789,19 +1789,19 @@ int scsi_noretry_cmd(struct scsi_cmnd *scmd)
 	case DID_TIME_OUT:
 		goto check_type;
 	case DID_BUS_BUSY:
-		return req->cmd_flags & REQ_FAILFAST_TRANSPORT;
+		return !!(req->cmd_flags & REQ_FAILFAST_TRANSPORT);
 	case DID_PARITY:
-		return req->cmd_flags & REQ_FAILFAST_DEV;
+		return !!(req->cmd_flags & REQ_FAILFAST_DEV);
 	case DID_ERROR:
 		if (get_status_byte(scmd) == SAM_STAT_RESERVATION_CONFLICT)
-			return 0;
+			return false;
 		fallthrough;
 	case DID_SOFT_ERROR:
-		return req->cmd_flags & REQ_FAILFAST_DRIVER;
+		return !!(req->cmd_flags & REQ_FAILFAST_DRIVER);
 	}
 
 	if (!scsi_status_is_check_condition(scmd->result))
-		return 0;
+		return false;
 
 check_type:
 	/*
@@ -1809,9 +1809,9 @@ int scsi_noretry_cmd(struct scsi_cmnd *scmd)
 	 * the check condition was retryable.
 	 */
 	if (req->cmd_flags & REQ_FAILFAST_DEV || blk_rq_is_passthrough(req))
-		return 1;
+		return true;
 
-	return 0;
+	return false;
 }
 
 /**
diff --git a/drivers/scsi/scsi_priv.h b/drivers/scsi/scsi_priv.h
index 5c4786310a31..90fbffca7eb0 100644
--- a/drivers/scsi/scsi_priv.h
+++ b/drivers/scsi/scsi_priv.h
@@ -82,7 +82,7 @@ void scsi_eh_ready_devs(struct Scsi_Host *shost,
 			struct list_head *done_q);
 int scsi_eh_get_sense(struct list_head *work_q,
 		      struct list_head *done_q);
-int scsi_noretry_cmd(struct scsi_cmnd *scmd);
+bool scsi_noretry_cmd(struct scsi_cmnd *scmd);
 void scsi_eh_done(struct scsi_cmnd *scmd);
 
 /* scsi_lib.c */

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 41/63] scsi/core: Use the new blk_opf_t type
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (39 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 40/63] scsi/core: Change the return type of scsi_noretry_cmd() into bool Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 42/63] scsi/device_handlers: " Bart Van Assche
                   ` (22 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche,
	Martin K . Petersen, Ming Lei, Hannes Reinecke, John Garry,
	Mike Christie

Use the new blk_opf_t type for arguments and variables that represent
request flags. Use the !! operator in scsi_noretry_cmd() to convert the
blk_opf_t type into a boolean. This patch does not change any functionality.

Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Hannes Reinecke <hare@suse.de>
Cc: John Garry <john.garry@huawei.com>
Cc: Mike Christie <michael.christie@oracle.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/scsi/scsi_lib.c    | 6 +++---
 include/scsi/scsi_device.h | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 126997e464cb..d07d193e7d78 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -209,8 +209,8 @@ void scsi_queue_insert(struct scsi_cmnd *cmd, int reason)
 int __scsi_execute(struct scsi_device *sdev, const unsigned char *cmd,
 		 int data_direction, void *buffer, unsigned bufflen,
 		 unsigned char *sense, struct scsi_sense_hdr *sshdr,
-		 int timeout, int retries, u64 flags, req_flags_t rq_flags,
-		 int *resid)
+		 int timeout, int retries, blk_opf_t flags,
+		 req_flags_t rq_flags, int *resid)
 {
 	struct request *req;
 	struct scsi_cmnd *scmd;
@@ -633,7 +633,7 @@ static blk_status_t scsi_result_to_blk_status(struct scsi_cmnd *cmd, int result)
  */
 static unsigned int scsi_rq_err_bytes(const struct request *rq)
 {
-	unsigned int ff = rq->cmd_flags & REQ_FAILFAST_MASK;
+	blk_opf_t ff = rq->cmd_flags & REQ_FAILFAST_MASK;
 	unsigned int bytes = 0;
 	struct bio *bio;
 
diff --git a/include/scsi/scsi_device.h b/include/scsi/scsi_device.h
index 7cf5f3b7589f..2493bd65351a 100644
--- a/include/scsi/scsi_device.h
+++ b/include/scsi/scsi_device.h
@@ -457,7 +457,7 @@ extern void scsi_sanitize_inquiry_string(unsigned char *s, int len);
 extern int __scsi_execute(struct scsi_device *sdev, const unsigned char *cmd,
 			int data_direction, void *buffer, unsigned bufflen,
 			unsigned char *sense, struct scsi_sense_hdr *sshdr,
-			int timeout, int retries, u64 flags,
+			int timeout, int retries, blk_opf_t flags,
 			req_flags_t rq_flags, int *resid);
 /* Make sure any sense buffer is the correct size. */
 #define scsi_execute(sdev, cmd, data_direction, buffer, bufflen, sense,	\

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 42/63] scsi/device_handlers: Use the new blk_opf_t type
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (40 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 41/63] scsi/core: Use the new blk_opf_t type Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 43/63] scsi/ufs: Rename a 'dir' argument into 'op' Bart Van Assche
                   ` (21 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Hannes Reinecke

Improve static type checking by using the new blk_opf_t type for variables
that represent request flags.

Cc: Hannes Reinecke <hare@suse.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/scsi/device_handler/scsi_dh_alua.c  | 4 ++--
 drivers/scsi/device_handler/scsi_dh_emc.c   | 2 +-
 drivers/scsi/device_handler/scsi_dh_hp_sw.c | 4 ++--
 drivers/scsi/device_handler/scsi_dh_rdac.c  | 2 +-
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/scsi/device_handler/scsi_dh_alua.c b/drivers/scsi/device_handler/scsi_dh_alua.c
index 1d9be771f3ee..610a51538f03 100644
--- a/drivers/scsi/device_handler/scsi_dh_alua.c
+++ b/drivers/scsi/device_handler/scsi_dh_alua.c
@@ -127,7 +127,7 @@ static int submit_rtpg(struct scsi_device *sdev, unsigned char *buff,
 		       int bufflen, struct scsi_sense_hdr *sshdr, int flags)
 {
 	u8 cdb[MAX_COMMAND_SIZE];
-	int req_flags = REQ_FAILFAST_DEV | REQ_FAILFAST_TRANSPORT |
+	blk_opf_t req_flags = REQ_FAILFAST_DEV | REQ_FAILFAST_TRANSPORT |
 		REQ_FAILFAST_DRIVER;
 
 	/* Prepare the command. */
@@ -157,7 +157,7 @@ static int submit_stpg(struct scsi_device *sdev, int group_id,
 	u8 cdb[MAX_COMMAND_SIZE];
 	unsigned char stpg_data[8];
 	int stpg_len = 8;
-	int req_flags = REQ_FAILFAST_DEV | REQ_FAILFAST_TRANSPORT |
+	blk_opf_t req_flags = REQ_FAILFAST_DEV | REQ_FAILFAST_TRANSPORT |
 		REQ_FAILFAST_DRIVER;
 
 	/* Prepare the data buffer */
diff --git a/drivers/scsi/device_handler/scsi_dh_emc.c b/drivers/scsi/device_handler/scsi_dh_emc.c
index bd28ec6cfb72..2e21ab447873 100644
--- a/drivers/scsi/device_handler/scsi_dh_emc.c
+++ b/drivers/scsi/device_handler/scsi_dh_emc.c
@@ -239,7 +239,7 @@ static int send_trespass_cmd(struct scsi_device *sdev,
 	unsigned char cdb[MAX_COMMAND_SIZE];
 	int err, res = SCSI_DH_OK, len;
 	struct scsi_sense_hdr sshdr;
-	u64 req_flags = REQ_FAILFAST_DEV | REQ_FAILFAST_TRANSPORT |
+	blk_opf_t req_flags = REQ_FAILFAST_DEV | REQ_FAILFAST_TRANSPORT |
 		REQ_FAILFAST_DRIVER;
 
 	if (csdev->flags & CLARIION_SHORT_TRESPASS) {
diff --git a/drivers/scsi/device_handler/scsi_dh_hp_sw.c b/drivers/scsi/device_handler/scsi_dh_hp_sw.c
index 4a3f7831a2d6..0d2cfa60aa06 100644
--- a/drivers/scsi/device_handler/scsi_dh_hp_sw.c
+++ b/drivers/scsi/device_handler/scsi_dh_hp_sw.c
@@ -83,7 +83,7 @@ static int hp_sw_tur(struct scsi_device *sdev, struct hp_sw_dh_data *h)
 	unsigned char cmd[6] = { TEST_UNIT_READY };
 	struct scsi_sense_hdr sshdr;
 	int ret = SCSI_DH_OK, res;
-	u64 req_flags = REQ_FAILFAST_DEV | REQ_FAILFAST_TRANSPORT |
+	blk_opf_t req_flags = REQ_FAILFAST_DEV | REQ_FAILFAST_TRANSPORT |
 		REQ_FAILFAST_DRIVER;
 
 retry:
@@ -121,7 +121,7 @@ static int hp_sw_start_stop(struct hp_sw_dh_data *h)
 	struct scsi_device *sdev = h->sdev;
 	int res, rc = SCSI_DH_OK;
 	int retry_cnt = HP_SW_RETRIES;
-	u64 req_flags = REQ_FAILFAST_DEV | REQ_FAILFAST_TRANSPORT |
+	blk_opf_t req_flags = REQ_FAILFAST_DEV | REQ_FAILFAST_TRANSPORT |
 		REQ_FAILFAST_DRIVER;
 
 retry:
diff --git a/drivers/scsi/device_handler/scsi_dh_rdac.c b/drivers/scsi/device_handler/scsi_dh_rdac.c
index 66652ab409cc..bf8754741f85 100644
--- a/drivers/scsi/device_handler/scsi_dh_rdac.c
+++ b/drivers/scsi/device_handler/scsi_dh_rdac.c
@@ -536,7 +536,7 @@ static void send_mode_select(struct work_struct *work)
 	unsigned char cdb[MAX_COMMAND_SIZE];
 	struct scsi_sense_hdr sshdr;
 	unsigned int data_size;
-	u64 req_flags = REQ_FAILFAST_DEV | REQ_FAILFAST_TRANSPORT |
+	blk_opf_t req_flags = REQ_FAILFAST_DEV | REQ_FAILFAST_TRANSPORT |
 		REQ_FAILFAST_DRIVER;
 
 	spin_lock(&ctlr->ms_lock);

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 43/63] scsi/ufs: Rename a 'dir' argument into 'op'
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (41 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 42/63] scsi/device_handlers: " Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 44/63] scsi/target: Use the new blk_opf_t type Bart Van Assche
                   ` (20 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche,
	Martin K . Petersen, Avri Altman

Improve consistency of the kernel code by renaming a request operation
argument from 'dir' into 'op'.

Cc: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Avri Altman <avri.altman@wdc.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/ufs/core/ufshpb.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/ufs/core/ufshpb.c b/drivers/ufs/core/ufshpb.c
index 24f1ee82c215..a1a7a1175a5a 100644
--- a/drivers/ufs/core/ufshpb.c
+++ b/drivers/ufs/core/ufshpb.c
@@ -434,7 +434,7 @@ int ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
 }
 
 static struct ufshpb_req *ufshpb_get_req(struct ufshpb_lu *hpb, int rgn_idx,
-					 enum req_op dir, bool atomic)
+					 enum req_op op, bool atomic)
 {
 	struct ufshpb_req *rq;
 	struct request *req;
@@ -445,7 +445,7 @@ static struct ufshpb_req *ufshpb_get_req(struct ufshpb_lu *hpb, int rgn_idx,
 		return NULL;
 
 retry:
-	req = blk_mq_alloc_request(hpb->sdev_ufs_lu->request_queue, dir,
+	req = blk_mq_alloc_request(hpb->sdev_ufs_lu->request_queue, op,
 			      BLK_MQ_REQ_NOWAIT);
 
 	if (!atomic && (PTR_ERR(req) == -EWOULDBLOCK) && (--retries > 0)) {

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 44/63] scsi/target: Use the new blk_opf_t type
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (42 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 43/63] scsi/ufs: Rename a 'dir' argument into 'op' Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 45/63] mm: " Bart Van Assche
                   ` (19 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche, Mike Christie

Improve static type checking by using the new blk_opf_t type for variables
that represent a request operation combined with request flags.

Cc: Mike Christie <michael.christie@oracle.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/target/target_core_iblock.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/target/target_core_iblock.c b/drivers/target/target_core_iblock.c
index 378c80313a0f..5fef19af88df 100644
--- a/drivers/target/target_core_iblock.c
+++ b/drivers/target/target_core_iblock.c
@@ -343,7 +343,7 @@ static void iblock_bio_done(struct bio *bio)
 }
 
 static struct bio *iblock_get_bio(struct se_cmd *cmd, sector_t lba, u32 sg_num,
-				  unsigned int opf)
+				  blk_opf_t opf)
 {
 	struct iblock_dev *ib_dev = IBLOCK_DEV(cmd->se_dev);
 	struct bio *bio;
@@ -719,7 +719,7 @@ iblock_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
 	struct bio_list list;
 	struct scatterlist *sg;
 	u32 sg_num = sgl_nents;
-	unsigned int opf;
+	blk_opf_t opf;
 	unsigned bio_cnt;
 	int i, rc;
 	struct sg_mapping_iter prot_miter;

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 45/63] mm: Use the new blk_opf_t type
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (43 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 44/63] scsi/target: Use the new blk_opf_t type Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 46/63] fs/buffer: " Bart Van Assche
                   ` (18 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche, Andrew Morton

Improve static type checking by using the new blk_opf_t type for block
layer request flags.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 include/linux/writeback.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index b8c9610c2313..3f045f6d6c4f 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -101,9 +101,9 @@ struct writeback_control {
 #endif
 };
 
-static inline int wbc_to_write_flags(struct writeback_control *wbc)
+static inline blk_opf_t wbc_to_write_flags(struct writeback_control *wbc)
 {
-	int flags = 0;
+	blk_opf_t flags = 0;
 
 	if (wbc->punt_to_cgroup)
 		flags = REQ_CGROUP_PUNT;
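
The returned value is a pure flag set that writeback callers OR together
with a request operation; a sketch of the typical consumer (the combined
single argument of submit_bh_wbc() shown here only exists after the
fs/buffer patches later in this series):

	blk_opf_t write_flags = wbc_to_write_flags(wbc);	/* e.g. 0 or REQ_SYNC */

	submit_bh_wbc(REQ_OP_WRITE | write_flags, bh, wbc);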

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 46/63] fs/buffer: Use the new blk_opf_t type
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (44 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 45/63] mm: " Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-30  8:34   ` Jan Kara
  2022-06-29 23:31 ` [PATCH v2 47/63] fs/buffer: Combine two submit_bh() and ll_rw_block() arguments Bart Van Assche
                   ` (17 subsequent siblings)
  63 siblings, 1 reply; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Matthew Wilcox,
	Jan Kara

Improve static type checking by using the new blk_opf_t type for block layer
request flags. Change WRITE into REQ_OP_WRITE. This patch does not change
any functionality since REQ_OP_WRITE == WRITE == 1.

Cc: Christoph Hellwig <hch@lst.de>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Jan Kara <jack@suse.cz>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 fs/buffer.c                 | 21 +++++++++++----------
 include/linux/buffer_head.h |  9 +++++----
 2 files changed, 16 insertions(+), 14 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 898c7f301b1b..4a00b61f35ec 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -52,8 +52,8 @@
 #include "internal.h"
 
 static int fsync_buffers_list(spinlock_t *lock, struct list_head *list);
-static int submit_bh_wbc(int op, int op_flags, struct buffer_head *bh,
-			 struct writeback_control *wbc);
+static int submit_bh_wbc(enum req_op op, blk_opf_t op_flags,
+			 struct buffer_head *bh, struct writeback_control *wbc);
 
 #define BH_ENTRY(list) list_entry((list), struct buffer_head, b_assoc_buffers)
 
@@ -1716,7 +1716,7 @@ int __block_write_full_page(struct inode *inode, struct page *page,
 	struct buffer_head *bh, *head;
 	unsigned int blocksize, bbits;
 	int nr_underway = 0;
-	int write_flags = wbc_to_write_flags(wbc);
+	blk_opf_t write_flags = wbc_to_write_flags(wbc);
 
 	head = create_page_buffers(page, inode,
 					(1 << BH_Dirty)|(1 << BH_Uptodate));
@@ -2994,8 +2994,8 @@ static void end_bio_bh_io_sync(struct bio *bio)
 	bio_put(bio);
 }
 
-static int submit_bh_wbc(int op, int op_flags, struct buffer_head *bh,
-			 struct writeback_control *wbc)
+static int submit_bh_wbc(enum req_op op, blk_opf_t op_flags,
+			 struct buffer_head *bh, struct writeback_control *wbc)
 {
 	struct bio *bio;
 
@@ -3040,7 +3040,7 @@ static int submit_bh_wbc(int op, int op_flags, struct buffer_head *bh,
 	return 0;
 }
 
-int submit_bh(int op, int op_flags, struct buffer_head *bh)
+int submit_bh(enum req_op op, blk_opf_t op_flags, struct buffer_head *bh)
 {
 	return submit_bh_wbc(op, op_flags, bh, NULL);
 }
@@ -3072,7 +3072,8 @@ EXPORT_SYMBOL(submit_bh);
  * All of the buffers must be for the same device, and must also be a
  * multiple of the current approved size for the device.
  */
-void ll_rw_block(int op, int op_flags,  int nr, struct buffer_head *bhs[])
+void ll_rw_block(enum req_op op, blk_opf_t op_flags, int nr,
+		 struct buffer_head *bhs[])
 {
 	int i;
 
@@ -3081,7 +3082,7 @@ void ll_rw_block(int op, int op_flags,  int nr, struct buffer_head *bhs[])
 
 		if (!trylock_buffer(bh))
 			continue;
-		if (op == WRITE) {
+		if (op == REQ_OP_WRITE) {
 			if (test_clear_buffer_dirty(bh)) {
 				bh->b_end_io = end_buffer_write_sync;
 				get_bh(bh);
@@ -3101,7 +3102,7 @@ void ll_rw_block(int op, int op_flags,  int nr, struct buffer_head *bhs[])
 }
 EXPORT_SYMBOL(ll_rw_block);
 
-void write_dirty_buffer(struct buffer_head *bh, int op_flags)
+void write_dirty_buffer(struct buffer_head *bh, blk_opf_t op_flags)
 {
 	lock_buffer(bh);
 	if (!test_clear_buffer_dirty(bh)) {
@@ -3119,7 +3120,7 @@ EXPORT_SYMBOL(write_dirty_buffer);
  * and then start new I/O and then wait upon it.  The caller must have a ref on
  * the buffer_head.
  */
-int __sync_dirty_buffer(struct buffer_head *bh, int op_flags)
+int __sync_dirty_buffer(struct buffer_head *bh, blk_opf_t op_flags)
 {
 	int ret = 0;
 
diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index c9d1463bb20f..9795df9400bd 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -9,6 +9,7 @@
 #define _LINUX_BUFFER_HEAD_H
 
 #include <linux/types.h>
+#include <linux/blk_types.h>
 #include <linux/fs.h>
 #include <linux/linkage.h>
 #include <linux/pagemap.h>
@@ -201,11 +202,11 @@ struct buffer_head *alloc_buffer_head(gfp_t gfp_flags);
 void free_buffer_head(struct buffer_head * bh);
 void unlock_buffer(struct buffer_head *bh);
 void __lock_buffer(struct buffer_head *bh);
-void ll_rw_block(int, int, int, struct buffer_head * bh[]);
+void ll_rw_block(enum req_op, blk_opf_t, int, struct buffer_head * bh[]);
 int sync_dirty_buffer(struct buffer_head *bh);
-int __sync_dirty_buffer(struct buffer_head *bh, int op_flags);
-void write_dirty_buffer(struct buffer_head *bh, int op_flags);
-int submit_bh(int, int, struct buffer_head *);
+int __sync_dirty_buffer(struct buffer_head *bh, blk_opf_t op_flags);
+void write_dirty_buffer(struct buffer_head *bh, blk_opf_t op_flags);
+int submit_bh(enum req_op, blk_opf_t, struct buffer_head *);
 void write_boundary_block(struct block_device *bdev,
 			sector_t bblock, unsigned blocksize);
 int bh_uptodate_or_lock(struct buffer_head *bh);
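
A small illustration of the "REQ_OP_WRITE == WRITE == 1" claim (the helper
name below is made up, not part of the patch): the comparison compiles to
exactly the same code either way; spelling it REQ_OP_WRITE simply matches
the enum req_op type of the argument.

	#include <linux/blk_types.h>
	#include <linux/types.h>

	/* behaves exactly like the old 'op == WRITE' test, because WRITE (the
	 * plain data-direction constant) and REQ_OP_WRITE are both 1 */
	static bool op_is_buffer_write(enum req_op op)
	{
		return op == REQ_OP_WRITE;
	}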

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 47/63] fs/buffer: Combine two submit_bh() and ll_rw_block() arguments
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (45 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 46/63] fs/buffer: " Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-30 18:43   ` Song Liu
  2022-06-29 23:31 ` [PATCH v2 48/63] fs/direct-io: Reduce the size of struct dio Bart Van Assche
                   ` (16 subsequent siblings)
  63 siblings, 1 reply; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Alexander Viro,
	Song Liu

Both submit_bh() and ll_rw_block() accept a request operation type and
request flags as their first two arguments. Micro-optimize these two
functions by combining these first two arguments into a single argument.
This patch does not change the behavior of any of the modified code.
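
In short, the calling convention changes as follows (both the old and the
new forms are taken from the hunks below):

	submit_bh(REQ_OP_WRITE, REQ_SYNC, bh);		/* old: two arguments */
	submit_bh(REQ_OP_WRITE | REQ_SYNC, bh);		/* new: one blk_opf_t */

	ll_rw_block(REQ_OP_READ, 0, 1, &bh);		/* old: explicit zero flags */
	ll_rw_block(REQ_OP_READ, 1, &bh);		/* new: the zero goes away */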

Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Song Liu <song@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/md/md-bitmap.c      |  4 +--
 fs/buffer.c                 | 53 +++++++++++++++++++------------------
 fs/ext4/fast_commit.c       |  2 +-
 fs/ext4/mmp.c               |  2 +-
 fs/ext4/super.c             |  6 ++---
 fs/gfs2/bmap.c              |  5 ++--
 fs/gfs2/dir.c               |  5 ++--
 fs/gfs2/meta_io.c           |  9 +++----
 fs/gfs2/quota.c             |  2 +-
 fs/isofs/compress.c         |  2 +-
 fs/jbd2/commit.c            |  8 +++---
 fs/jbd2/journal.c           |  4 +--
 fs/jbd2/recovery.c          |  4 +--
 fs/nilfs2/btnode.c          |  2 +-
 fs/nilfs2/gcinode.c         |  2 +-
 fs/nilfs2/mdt.c             |  2 +-
 fs/ntfs/aops.c              |  6 ++---
 fs/ntfs/compress.c          |  2 +-
 fs/ntfs/file.c              |  2 +-
 fs/ntfs/logfile.c           |  2 +-
 fs/ntfs/mft.c               |  4 +--
 fs/ntfs3/file.c             |  2 +-
 fs/ntfs3/inode.c            |  2 +-
 fs/ocfs2/aops.c             |  2 +-
 fs/ocfs2/buffer_head_io.c   |  8 +++---
 fs/ocfs2/super.c            |  2 +-
 fs/reiserfs/inode.c         |  4 +--
 fs/reiserfs/journal.c       | 12 ++++-----
 fs/reiserfs/stree.c         |  4 +--
 fs/reiserfs/super.c         |  2 +-
 fs/udf/dir.c                |  2 +-
 fs/udf/directory.c          |  2 +-
 fs/udf/inode.c              |  2 +-
 fs/ufs/balloc.c             |  2 +-
 include/linux/buffer_head.h |  4 +--
 35 files changed, 88 insertions(+), 90 deletions(-)

diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
index 0a21b8317103..bf6dffadbe6f 100644
--- a/drivers/md/md-bitmap.c
+++ b/drivers/md/md-bitmap.c
@@ -302,7 +302,7 @@ static void write_page(struct bitmap *bitmap, struct page *page, int wait)
 			atomic_inc(&bitmap->pending_writes);
 			set_buffer_locked(bh);
 			set_buffer_mapped(bh);
-			submit_bh(REQ_OP_WRITE, REQ_SYNC, bh);
+			submit_bh(REQ_OP_WRITE | REQ_SYNC, bh);
 			bh = bh->b_this_page;
 		}
 
@@ -394,7 +394,7 @@ static int read_page(struct file *file, unsigned long index,
 			atomic_inc(&bitmap->pending_writes);
 			set_buffer_locked(bh);
 			set_buffer_mapped(bh);
-			submit_bh(REQ_OP_READ, 0, bh);
+			submit_bh(REQ_OP_READ, bh);
 		}
 		blk_cur++;
 		bh = bh->b_this_page;
diff --git a/fs/buffer.c b/fs/buffer.c
index 4a00b61f35ec..af53569930bb 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -52,8 +52,8 @@
 #include "internal.h"
 
 static int fsync_buffers_list(spinlock_t *lock, struct list_head *list);
-static int submit_bh_wbc(enum req_op op, blk_opf_t op_flags,
-			 struct buffer_head *bh, struct writeback_control *wbc);
+static int submit_bh_wbc(blk_opf_t opf, struct buffer_head *bh,
+			 struct writeback_control *wbc);
 
 #define BH_ENTRY(list) list_entry((list), struct buffer_head, b_assoc_buffers)
 
@@ -562,7 +562,7 @@ void write_boundary_block(struct block_device *bdev,
 	struct buffer_head *bh = __find_get_block(bdev, bblock + 1, blocksize);
 	if (bh) {
 		if (buffer_dirty(bh))
-			ll_rw_block(REQ_OP_WRITE, 0, 1, &bh);
+			ll_rw_block(REQ_OP_WRITE, 1, &bh);
 		put_bh(bh);
 	}
 }
@@ -1174,7 +1174,7 @@ static struct buffer_head *__bread_slow(struct buffer_head *bh)
 	} else {
 		get_bh(bh);
 		bh->b_end_io = end_buffer_read_sync;
-		submit_bh(REQ_OP_READ, 0, bh);
+		submit_bh(REQ_OP_READ, bh);
 		wait_on_buffer(bh);
 		if (buffer_uptodate(bh))
 			return bh;
@@ -1342,7 +1342,7 @@ void __breadahead(struct block_device *bdev, sector_t block, unsigned size)
 {
 	struct buffer_head *bh = __getblk(bdev, block, size);
 	if (likely(bh)) {
-		ll_rw_block(REQ_OP_READ, REQ_RAHEAD, 1, &bh);
+		ll_rw_block(REQ_OP_READ | REQ_RAHEAD, 1, &bh);
 		brelse(bh);
 	}
 }
@@ -1353,7 +1353,7 @@ void __breadahead_gfp(struct block_device *bdev, sector_t block, unsigned size,
 {
 	struct buffer_head *bh = __getblk_gfp(bdev, block, size, gfp);
 	if (likely(bh)) {
-		ll_rw_block(REQ_OP_READ, REQ_RAHEAD, 1, &bh);
+		ll_rw_block(REQ_OP_READ | REQ_RAHEAD, 1, &bh);
 		brelse(bh);
 	}
 }
@@ -1804,7 +1804,7 @@ int __block_write_full_page(struct inode *inode, struct page *page,
 	do {
 		struct buffer_head *next = bh->b_this_page;
 		if (buffer_async_write(bh)) {
-			submit_bh_wbc(REQ_OP_WRITE, write_flags, bh, wbc);
+			submit_bh_wbc(REQ_OP_WRITE | write_flags, bh, wbc);
 			nr_underway++;
 		}
 		bh = next;
@@ -1858,7 +1858,7 @@ int __block_write_full_page(struct inode *inode, struct page *page,
 		struct buffer_head *next = bh->b_this_page;
 		if (buffer_async_write(bh)) {
 			clear_buffer_dirty(bh);
-			submit_bh_wbc(REQ_OP_WRITE, write_flags, bh, wbc);
+			submit_bh_wbc(REQ_OP_WRITE | write_flags, bh, wbc);
 			nr_underway++;
 		}
 		bh = next;
@@ -2033,7 +2033,7 @@ int __block_write_begin_int(struct folio *folio, loff_t pos, unsigned len,
 		if (!buffer_uptodate(bh) && !buffer_delay(bh) &&
 		    !buffer_unwritten(bh) &&
 		     (block_start < from || block_end > to)) {
-			ll_rw_block(REQ_OP_READ, 0, 1, &bh);
+			ll_rw_block(REQ_OP_READ, 1, &bh);
 			*wait_bh++=bh;
 		}
 	}
@@ -2334,7 +2334,7 @@ int block_read_full_folio(struct folio *folio, get_block_t *get_block)
 		if (buffer_uptodate(bh))
 			end_buffer_async_read(bh, 1);
 		else
-			submit_bh(REQ_OP_READ, 0, bh);
+			submit_bh(REQ_OP_READ, bh);
 	}
 	return 0;
 }
@@ -2665,7 +2665,7 @@ int nobh_write_begin(struct address_space *mapping, loff_t pos, unsigned len,
 		if (block_start < from || block_end > to) {
 			lock_buffer(bh);
 			bh->b_end_io = end_buffer_read_nobh;
-			submit_bh(REQ_OP_READ, 0, bh);
+			submit_bh(REQ_OP_READ, bh);
 			nr_reads++;
 		}
 	}
@@ -2915,7 +2915,7 @@ int block_truncate_page(struct address_space *mapping,
 
 	if (!buffer_uptodate(bh) && !buffer_delay(bh) && !buffer_unwritten(bh)) {
 		err = -EIO;
-		ll_rw_block(REQ_OP_READ, 0, 1, &bh);
+		ll_rw_block(REQ_OP_READ, 1, &bh);
 		wait_on_buffer(bh);
 		/* Uhhuh. Read error. Complain and punt. */
 		if (!buffer_uptodate(bh))
@@ -2994,9 +2994,10 @@ static void end_bio_bh_io_sync(struct bio *bio)
 	bio_put(bio);
 }
 
-static int submit_bh_wbc(enum req_op op, blk_opf_t op_flags,
-			 struct buffer_head *bh, struct writeback_control *wbc)
+static int submit_bh_wbc(blk_opf_t opf, struct buffer_head *bh,
+			 struct writeback_control *wbc)
 {
+	const enum req_op op = opf & REQ_OP_MASK;
 	struct bio *bio;
 
 	BUG_ON(!buffer_locked(bh));
@@ -3012,11 +3013,11 @@ static int submit_bh_wbc(enum req_op op, blk_opf_t op_flags,
 		clear_buffer_write_io_error(bh);
 
 	if (buffer_meta(bh))
-		op_flags |= REQ_META;
+		opf |= REQ_META;
 	if (buffer_prio(bh))
-		op_flags |= REQ_PRIO;
+		opf |= REQ_PRIO;
 
-	bio = bio_alloc(bh->b_bdev, 1, op | op_flags, GFP_NOIO);
+	bio = bio_alloc(bh->b_bdev, 1, opf, GFP_NOIO);
 
 	fscrypt_set_bio_crypt_ctx_bh(bio, bh, GFP_NOIO);
 
@@ -3040,9 +3041,9 @@ static int submit_bh_wbc(enum req_op op, blk_opf_t op_flags,
 	return 0;
 }
 
-int submit_bh(enum req_op op, blk_opf_t op_flags, struct buffer_head *bh)
+int submit_bh(blk_opf_t opf, struct buffer_head *bh)
 {
-	return submit_bh_wbc(op, op_flags, bh, NULL);
+	return submit_bh_wbc(opf, bh, NULL);
 }
 EXPORT_SYMBOL(submit_bh);
 
@@ -3072,9 +3073,9 @@ EXPORT_SYMBOL(submit_bh);
  * All of the buffers must be for the same device, and must also be a
  * multiple of the current approved size for the device.
  */
-void ll_rw_block(enum req_op op, blk_opf_t op_flags, int nr,
-		 struct buffer_head *bhs[])
+void ll_rw_block(const blk_opf_t opf, int nr, struct buffer_head *bhs[])
 {
+	const enum req_op op = opf & REQ_OP_MASK;
 	int i;
 
 	for (i = 0; i < nr; i++) {
@@ -3086,14 +3087,14 @@ void ll_rw_block(enum req_op op, blk_opf_t op_flags, int nr,
 			if (test_clear_buffer_dirty(bh)) {
 				bh->b_end_io = end_buffer_write_sync;
 				get_bh(bh);
-				submit_bh(op, op_flags, bh);
+				submit_bh(opf, bh);
 				continue;
 			}
 		} else {
 			if (!buffer_uptodate(bh)) {
 				bh->b_end_io = end_buffer_read_sync;
 				get_bh(bh);
-				submit_bh(op, op_flags, bh);
+				submit_bh(opf, bh);
 				continue;
 			}
 		}
@@ -3111,7 +3112,7 @@ void write_dirty_buffer(struct buffer_head *bh, blk_opf_t op_flags)
 	}
 	bh->b_end_io = end_buffer_write_sync;
 	get_bh(bh);
-	submit_bh(REQ_OP_WRITE, op_flags, bh);
+	submit_bh(REQ_OP_WRITE | op_flags, bh);
 }
 EXPORT_SYMBOL(write_dirty_buffer);
 
@@ -3138,7 +3139,7 @@ int __sync_dirty_buffer(struct buffer_head *bh, blk_opf_t op_flags)
 
 		get_bh(bh);
 		bh->b_end_io = end_buffer_write_sync;
-		ret = submit_bh(REQ_OP_WRITE, op_flags, bh);
+		ret = submit_bh(REQ_OP_WRITE | op_flags, bh);
 		wait_on_buffer(bh);
 		if (!ret && !buffer_uptodate(bh))
 			ret = -EIO;
@@ -3366,7 +3367,7 @@ int bh_submit_read(struct buffer_head *bh)
 
 	get_bh(bh);
 	bh->b_end_io = end_buffer_read_sync;
-	submit_bh(REQ_OP_READ, 0, bh);
+	submit_bh(REQ_OP_READ, bh);
 	wait_on_buffer(bh);
 	if (buffer_uptodate(bh))
 		return 0;
diff --git a/fs/ext4/fast_commit.c b/fs/ext4/fast_commit.c
index 795a60ad1897..0df5482c6c1c 100644
--- a/fs/ext4/fast_commit.c
+++ b/fs/ext4/fast_commit.c
@@ -668,7 +668,7 @@ static void ext4_fc_submit_bh(struct super_block *sb, bool is_tail)
 	set_buffer_dirty(bh);
 	set_buffer_uptodate(bh);
 	bh->b_end_io = ext4_end_buffer_io_sync;
-	submit_bh(REQ_OP_WRITE, write_flags, bh);
+	submit_bh(REQ_OP_WRITE | write_flags, bh);
 	EXT4_SB(sb)->s_fc_bh = NULL;
 }
 
diff --git a/fs/ext4/mmp.c b/fs/ext4/mmp.c
index 79d05e464c43..a0140626894e 100644
--- a/fs/ext4/mmp.c
+++ b/fs/ext4/mmp.c
@@ -52,7 +52,7 @@ static int write_mmp_block(struct super_block *sb, struct buffer_head *bh)
 	lock_buffer(bh);
 	bh->b_end_io = end_buffer_write_sync;
 	get_bh(bh);
-	submit_bh(REQ_OP_WRITE, REQ_SYNC | REQ_META | REQ_PRIO, bh);
+	submit_bh(REQ_OP_WRITE | REQ_SYNC | REQ_META | REQ_PRIO, bh);
 	wait_on_buffer(bh);
 	sb_end_write(sb);
 	if (unlikely(!buffer_uptodate(bh)))
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 845f2f8aee5f..24922184b622 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -171,7 +171,7 @@ static inline void __ext4_read_bh(struct buffer_head *bh, int op_flags,
 
 	bh->b_end_io = end_io ? end_io : end_buffer_read_sync;
 	get_bh(bh);
-	submit_bh(REQ_OP_READ, op_flags, bh);
+	submit_bh(REQ_OP_READ | op_flags, bh);
 }
 
 void ext4_read_bh_nowait(struct buffer_head *bh, int op_flags,
@@ -5939,8 +5939,8 @@ static int ext4_commit_super(struct super_block *sb)
 	/* Clear potential dirty bit if it was journalled update */
 	clear_buffer_dirty(sbh);
 	sbh->b_end_io = end_buffer_write_sync;
-	submit_bh(REQ_OP_WRITE,
-		  REQ_SYNC | (test_opt(sb, BARRIER) ? REQ_FUA : 0), sbh);
+	submit_bh(REQ_OP_WRITE | REQ_SYNC |
+		  (test_opt(sb, BARRIER) ? REQ_FUA : 0), sbh);
 	wait_on_buffer(sbh);
 	if (buffer_write_io_error(sbh)) {
 		ext4_msg(sb, KERN_ERR, "I/O error while writing "
diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
index b6697333bb2b..3bdb2c668a71 100644
--- a/fs/gfs2/bmap.c
+++ b/fs/gfs2/bmap.c
@@ -310,9 +310,8 @@ static void gfs2_metapath_ra(struct gfs2_glock *gl, __be64 *start, __be64 *end)
 		if (trylock_buffer(rabh)) {
 			if (!buffer_uptodate(rabh)) {
 				rabh->b_end_io = end_buffer_read_sync;
-				submit_bh(REQ_OP_READ,
-					  REQ_RAHEAD | REQ_META | REQ_PRIO,
-					  rabh);
+				submit_bh(REQ_OP_READ | REQ_RAHEAD | REQ_META |
+					  REQ_PRIO, rabh);
 				continue;
 			}
 			unlock_buffer(rabh);
diff --git a/fs/gfs2/dir.c b/fs/gfs2/dir.c
index 42b7dfffb5e7..a0562dd1bada 100644
--- a/fs/gfs2/dir.c
+++ b/fs/gfs2/dir.c
@@ -1508,9 +1508,8 @@ static void gfs2_dir_readahead(struct inode *inode, unsigned hsize, u32 index,
 				continue;
 			}
 			bh->b_end_io = end_buffer_read_sync;
-			submit_bh(REQ_OP_READ,
-				  REQ_RAHEAD | REQ_META | REQ_PRIO,
-				  bh);
+			submit_bh(REQ_OP_READ | REQ_RAHEAD | REQ_META |
+				  REQ_PRIO, bh);
 			continue;
 		}
 		brelse(bh);
diff --git a/fs/gfs2/meta_io.c b/fs/gfs2/meta_io.c
index 868dcc71b581..3570739f005d 100644
--- a/fs/gfs2/meta_io.c
+++ b/fs/gfs2/meta_io.c
@@ -75,7 +75,7 @@ static int gfs2_aspace_writepage(struct page *page, struct writeback_control *wb
 	do {
 		struct buffer_head *next = bh->b_this_page;
 		if (buffer_async_write(bh)) {
-			submit_bh(REQ_OP_WRITE, write_flags, bh);
+			submit_bh(REQ_OP_WRITE | write_flags, bh);
 			nr_underway++;
 		}
 		bh = next;
@@ -527,7 +527,7 @@ struct buffer_head *gfs2_meta_ra(struct gfs2_glock *gl, u64 dblock, u32 extlen)
 	if (buffer_uptodate(first_bh))
 		goto out;
 	if (!buffer_locked(first_bh))
-		ll_rw_block(REQ_OP_READ, REQ_META | REQ_PRIO, 1, &first_bh);
+		ll_rw_block(REQ_OP_READ | REQ_META | REQ_PRIO, 1, &first_bh);
 
 	dblock++;
 	extlen--;
@@ -536,9 +536,8 @@ struct buffer_head *gfs2_meta_ra(struct gfs2_glock *gl, u64 dblock, u32 extlen)
 		bh = gfs2_getbuf(gl, dblock, CREATE);
 
 		if (!buffer_uptodate(bh) && !buffer_locked(bh))
-			ll_rw_block(REQ_OP_READ,
-				    REQ_RAHEAD | REQ_META | REQ_PRIO,
-				    1, &bh);
+			ll_rw_block(REQ_OP_READ | REQ_RAHEAD | REQ_META |
+				    REQ_PRIO, 1, &bh);
 		brelse(bh);
 		dblock++;
 		extlen--;
diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c
index 59d727a4ae2c..c98a7faa67d3 100644
--- a/fs/gfs2/quota.c
+++ b/fs/gfs2/quota.c
@@ -746,7 +746,7 @@ static int gfs2_write_buf_to_page(struct gfs2_inode *ip, unsigned long index,
 		if (PageUptodate(page))
 			set_buffer_uptodate(bh);
 		if (!buffer_uptodate(bh)) {
-			ll_rw_block(REQ_OP_READ, REQ_META | REQ_PRIO, 1, &bh);
+			ll_rw_block(REQ_OP_READ | REQ_META | REQ_PRIO, 1, &bh);
 			wait_on_buffer(bh);
 			if (!buffer_uptodate(bh))
 				goto unlock_out;
diff --git a/fs/isofs/compress.c b/fs/isofs/compress.c
index 95a19f25d61c..b466172eec25 100644
--- a/fs/isofs/compress.c
+++ b/fs/isofs/compress.c
@@ -82,7 +82,7 @@ static loff_t zisofs_uncompress_block(struct inode *inode, loff_t block_start,
 		return 0;
 	}
 	haveblocks = isofs_get_blocks(inode, blocknum, bhs, needblocks);
-	ll_rw_block(REQ_OP_READ, 0, haveblocks, bhs);
+	ll_rw_block(REQ_OP_READ, haveblocks, bhs);
 
 	curbh = 0;
 	curpage = 0;
diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c
index eb315e81f1a6..890b5543a1c5 100644
--- a/fs/jbd2/commit.c
+++ b/fs/jbd2/commit.c
@@ -155,10 +155,10 @@ static int journal_submit_commit_record(journal_t *journal,
 
 	if (journal->j_flags & JBD2_BARRIER &&
 	    !jbd2_has_feature_async_commit(journal))
-		ret = submit_bh(REQ_OP_WRITE,
-			REQ_SYNC | REQ_PREFLUSH | REQ_FUA, bh);
+		ret = submit_bh(REQ_OP_WRITE | REQ_SYNC | REQ_PREFLUSH |
+				REQ_FUA, bh);
 	else
-		ret = submit_bh(REQ_OP_WRITE, REQ_SYNC, bh);
+		ret = submit_bh(REQ_OP_WRITE | REQ_SYNC, bh);
 
 	*cbh = bh;
 	return ret;
@@ -763,7 +763,7 @@ void jbd2_journal_commit_transaction(journal_t *journal)
 				clear_buffer_dirty(bh);
 				set_buffer_uptodate(bh);
 				bh->b_end_io = journal_end_buffer_io_sync;
-				submit_bh(REQ_OP_WRITE, REQ_SYNC, bh);
+				submit_bh(REQ_OP_WRITE | REQ_SYNC, bh);
 			}
 			cond_resched();
 
diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
index c0cbeeaec2d1..fd169a356da9 100644
--- a/fs/jbd2/journal.c
+++ b/fs/jbd2/journal.c
@@ -1636,7 +1636,7 @@ static int jbd2_write_superblock(journal_t *journal, int write_flags)
 		sb->s_checksum = jbd2_superblock_csum(journal, sb);
 	get_bh(bh);
 	bh->b_end_io = end_buffer_write_sync;
-	ret = submit_bh(REQ_OP_WRITE, write_flags, bh);
+	ret = submit_bh(REQ_OP_WRITE | write_flags, bh);
 	wait_on_buffer(bh);
 	if (buffer_write_io_error(bh)) {
 		clear_buffer_write_io_error(bh);
@@ -1898,7 +1898,7 @@ static int journal_get_superblock(journal_t *journal)
 
 	J_ASSERT(bh != NULL);
 	if (!buffer_uptodate(bh)) {
-		ll_rw_block(REQ_OP_READ, 0, 1, &bh);
+		ll_rw_block(REQ_OP_READ, 1, &bh);
 		wait_on_buffer(bh);
 		if (!buffer_uptodate(bh)) {
 			printk(KERN_ERR
diff --git a/fs/jbd2/recovery.c b/fs/jbd2/recovery.c
index 8ca3527189f8..e699d6ab2c0e 100644
--- a/fs/jbd2/recovery.c
+++ b/fs/jbd2/recovery.c
@@ -100,7 +100,7 @@ static int do_readahead(journal_t *journal, unsigned int start)
 		if (!buffer_uptodate(bh) && !buffer_locked(bh)) {
 			bufs[nbufs++] = bh;
 			if (nbufs == MAXBUF) {
-				ll_rw_block(REQ_OP_READ, 0, nbufs, bufs);
+				ll_rw_block(REQ_OP_READ, nbufs, bufs);
 				journal_brelse_array(bufs, nbufs);
 				nbufs = 0;
 			}
@@ -109,7 +109,7 @@ static int do_readahead(journal_t *journal, unsigned int start)
 	}
 
 	if (nbufs)
-		ll_rw_block(REQ_OP_READ, 0, nbufs, bufs);
+		ll_rw_block(REQ_OP_READ, nbufs, bufs);
 	err = 0;
 
 failed:
diff --git a/fs/nilfs2/btnode.c b/fs/nilfs2/btnode.c
index ca611ac09f7c..5c39efbf733f 100644
--- a/fs/nilfs2/btnode.c
+++ b/fs/nilfs2/btnode.c
@@ -122,7 +122,7 @@ int nilfs_btnode_submit_block(struct address_space *btnc, __u64 blocknr,
 	bh->b_blocknr = pblocknr; /* set block address for read */
 	bh->b_end_io = end_buffer_read_sync;
 	get_bh(bh);
-	submit_bh(mode, mode_flags, bh);
+	submit_bh(mode | mode_flags, bh);
 	bh->b_blocknr = blocknr; /* set back to the given block address */
 	*submit_ptr = pblocknr;
 	err = 0;
diff --git a/fs/nilfs2/gcinode.c b/fs/nilfs2/gcinode.c
index 04fdd420eae7..847def8af315 100644
--- a/fs/nilfs2/gcinode.c
+++ b/fs/nilfs2/gcinode.c
@@ -92,7 +92,7 @@ int nilfs_gccache_submit_read_data(struct inode *inode, sector_t blkoff,
 	bh->b_blocknr = pbn;
 	bh->b_end_io = end_buffer_read_sync;
 	get_bh(bh);
-	submit_bh(REQ_OP_READ, 0, bh);
+	submit_bh(REQ_OP_READ, bh);
 	if (vbn)
 		bh->b_blocknr = vbn;
  out:
diff --git a/fs/nilfs2/mdt.c b/fs/nilfs2/mdt.c
index d29a0f2b9c16..66e8811c2528 100644
--- a/fs/nilfs2/mdt.c
+++ b/fs/nilfs2/mdt.c
@@ -148,7 +148,7 @@ nilfs_mdt_submit_block(struct inode *inode, unsigned long blkoff,
 
 	bh->b_end_io = end_buffer_read_sync;
 	get_bh(bh);
-	submit_bh(mode, mode_flags, bh);
+	submit_bh(mode | mode_flags, bh);
 	ret = 0;
 
 	trace_nilfs2_mdt_submit_block(inode, inode->i_ino, blkoff, mode);
diff --git a/fs/ntfs/aops.c b/fs/ntfs/aops.c
index 9e3964ea2ea0..b5765fdb3a47 100644
--- a/fs/ntfs/aops.c
+++ b/fs/ntfs/aops.c
@@ -342,7 +342,7 @@ static int ntfs_read_block(struct page *page)
 		for (i = 0; i < nr; i++) {
 			tbh = arr[i];
 			if (likely(!buffer_uptodate(tbh)))
-				submit_bh(REQ_OP_READ, 0, tbh);
+				submit_bh(REQ_OP_READ, tbh);
 			else
 				ntfs_end_buffer_async_read(tbh, 1);
 		}
@@ -859,7 +859,7 @@ static int ntfs_write_block(struct page *page, struct writeback_control *wbc)
 	do {
 		struct buffer_head *next = bh->b_this_page;
 		if (buffer_async_write(bh)) {
-			submit_bh(REQ_OP_WRITE, 0, bh);
+			submit_bh(REQ_OP_WRITE, bh);
 			need_end_writeback = false;
 		}
 		bh = next;
@@ -1187,7 +1187,7 @@ static int ntfs_write_mst_block(struct page *page,
 		BUG_ON(!buffer_mapped(tbh));
 		get_bh(tbh);
 		tbh->b_end_io = end_buffer_write_sync;
-		submit_bh(REQ_OP_WRITE, 0, tbh);
+		submit_bh(REQ_OP_WRITE, tbh);
 	}
 	/* Synchronize the mft mirror now if not @sync. */
 	if (is_mft && !sync)
diff --git a/fs/ntfs/compress.c b/fs/ntfs/compress.c
index a60f543e7557..587e9b187873 100644
--- a/fs/ntfs/compress.c
+++ b/fs/ntfs/compress.c
@@ -658,7 +658,7 @@ int ntfs_read_compressed_block(struct page *page)
 		}
 		get_bh(tbh);
 		tbh->b_end_io = end_buffer_read_sync;
-		submit_bh(REQ_OP_READ, 0, tbh);
+		submit_bh(REQ_OP_READ, tbh);
 	}
 
 	/* Wait for io completion on all buffer heads. */
diff --git a/fs/ntfs/file.c b/fs/ntfs/file.c
index a8abe2296514..46ed69b86c33 100644
--- a/fs/ntfs/file.c
+++ b/fs/ntfs/file.c
@@ -537,7 +537,7 @@ static inline int ntfs_submit_bh_for_read(struct buffer_head *bh)
 	lock_buffer(bh);
 	get_bh(bh);
 	bh->b_end_io = end_buffer_read_sync;
-	return submit_bh(REQ_OP_READ, 0, bh);
+	return submit_bh(REQ_OP_READ, bh);
 }
 
 /**
diff --git a/fs/ntfs/logfile.c b/fs/ntfs/logfile.c
index bc1bf217b38e..6ce60ffc6ac0 100644
--- a/fs/ntfs/logfile.c
+++ b/fs/ntfs/logfile.c
@@ -807,7 +807,7 @@ bool ntfs_empty_logfile(struct inode *log_vi)
 			 * completed ignore errors afterwards as we can assume
 			 * that if one buffer worked all of them will work.
 			 */
-			submit_bh(REQ_OP_WRITE, 0, bh);
+			submit_bh(REQ_OP_WRITE, bh);
 			if (should_wait) {
 				should_wait = false;
 				wait_on_buffer(bh);
diff --git a/fs/ntfs/mft.c b/fs/ntfs/mft.c
index 0d62cd5bb7f8..f7bf5ce960cc 100644
--- a/fs/ntfs/mft.c
+++ b/fs/ntfs/mft.c
@@ -583,7 +583,7 @@ int ntfs_sync_mft_mirror(ntfs_volume *vol, const unsigned long mft_no,
 			clear_buffer_dirty(tbh);
 			get_bh(tbh);
 			tbh->b_end_io = end_buffer_write_sync;
-			submit_bh(REQ_OP_WRITE, 0, tbh);
+			submit_bh(REQ_OP_WRITE, tbh);
 		}
 		/* Wait on i/o completion of buffers. */
 		for (i_bhs = 0; i_bhs < nr_bhs; i_bhs++) {
@@ -780,7 +780,7 @@ int write_mft_record_nolock(ntfs_inode *ni, MFT_RECORD *m, int sync)
 		clear_buffer_dirty(tbh);
 		get_bh(tbh);
 		tbh->b_end_io = end_buffer_write_sync;
-		submit_bh(REQ_OP_WRITE, 0, tbh);
+		submit_bh(REQ_OP_WRITE, tbh);
 	}
 	/* Synchronize the mft mirror now if not @sync. */
 	if (!sync && ni->mft_no < vol->mftmirr_size)
diff --git a/fs/ntfs3/file.c b/fs/ntfs3/file.c
index 8e9d2b35175f..4a21745711fe 100644
--- a/fs/ntfs3/file.c
+++ b/fs/ntfs3/file.c
@@ -242,7 +242,7 @@ static int ntfs_zero_range(struct inode *inode, u64 vbo, u64 vbo_to)
 				lock_buffer(bh);
 				bh->b_end_io = end_buffer_read_sync;
 				get_bh(bh);
-				submit_bh(REQ_OP_READ, 0, bh);
+				submit_bh(REQ_OP_READ, bh);
 
 				wait_on_buffer(bh);
 				if (!buffer_uptodate(bh)) {
diff --git a/fs/ntfs3/inode.c b/fs/ntfs3/inode.c
index be4ebdd8048b..d100a063def2 100644
--- a/fs/ntfs3/inode.c
+++ b/fs/ntfs3/inode.c
@@ -629,7 +629,7 @@ static noinline int ntfs_get_block_vbo(struct inode *inode, u64 vbo,
 			bh->b_size = block_size;
 			off = vbo & (PAGE_SIZE - 1);
 			set_bh_page(bh, page, off);
-			ll_rw_block(REQ_OP_READ, 0, 1, &bh);
+			ll_rw_block(REQ_OP_READ, 1, &bh);
 			wait_on_buffer(bh);
 			if (!buffer_uptodate(bh)) {
 				err = -EIO;
diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c
index 35d40a67204c..304ed2be1b83 100644
--- a/fs/ocfs2/aops.c
+++ b/fs/ocfs2/aops.c
@@ -638,7 +638,7 @@ int ocfs2_map_page_blocks(struct page *page, u64 *p_blkno,
 			   !buffer_new(bh) &&
 			   ocfs2_should_read_blk(inode, page, block_start) &&
 			   (block_start < from || block_end > to)) {
-			ll_rw_block(REQ_OP_READ, 0, 1, &bh);
+			ll_rw_block(REQ_OP_READ, 1, &bh);
 			*wait_bh++=bh;
 		}
 
diff --git a/fs/ocfs2/buffer_head_io.c b/fs/ocfs2/buffer_head_io.c
index e7758778abef..196638a22b48 100644
--- a/fs/ocfs2/buffer_head_io.c
+++ b/fs/ocfs2/buffer_head_io.c
@@ -64,7 +64,7 @@ int ocfs2_write_block(struct ocfs2_super *osb, struct buffer_head *bh,
 
 	get_bh(bh); /* for end_buffer_write_sync() */
 	bh->b_end_io = end_buffer_write_sync;
-	submit_bh(REQ_OP_WRITE, 0, bh);
+	submit_bh(REQ_OP_WRITE, bh);
 
 	wait_on_buffer(bh);
 
@@ -147,7 +147,7 @@ int ocfs2_read_blocks_sync(struct ocfs2_super *osb, u64 block,
 
 		get_bh(bh); /* for end_buffer_read_sync() */
 		bh->b_end_io = end_buffer_read_sync;
-		submit_bh(REQ_OP_READ, 0, bh);
+		submit_bh(REQ_OP_READ, bh);
 	}
 
 read_failure:
@@ -328,7 +328,7 @@ int ocfs2_read_blocks(struct ocfs2_caching_info *ci, u64 block, int nr,
 			if (validate)
 				set_buffer_needs_validate(bh);
 			bh->b_end_io = end_buffer_read_sync;
-			submit_bh(REQ_OP_READ, 0, bh);
+			submit_bh(REQ_OP_READ, bh);
 			continue;
 		}
 	}
@@ -449,7 +449,7 @@ int ocfs2_write_super_or_backup(struct ocfs2_super *osb,
 	get_bh(bh); /* for end_buffer_write_sync() */
 	bh->b_end_io = end_buffer_write_sync;
 	ocfs2_compute_meta_ecc(osb->sb, bh->b_data, &di->i_check);
-	submit_bh(REQ_OP_WRITE, 0, bh);
+	submit_bh(REQ_OP_WRITE, bh);
 
 	wait_on_buffer(bh);
 
diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
index f7298816d8d9..e68807196076 100644
--- a/fs/ocfs2/super.c
+++ b/fs/ocfs2/super.c
@@ -1785,7 +1785,7 @@ static int ocfs2_get_sector(struct super_block *sb,
 	if (!buffer_dirty(*bh))
 		clear_buffer_uptodate(*bh);
 	unlock_buffer(*bh);
-	ll_rw_block(REQ_OP_READ, 0, 1, bh);
+	ll_rw_block(REQ_OP_READ, 1, bh);
 	wait_on_buffer(*bh);
 	if (!buffer_uptodate(*bh)) {
 		mlog_errno(-EIO);
diff --git a/fs/reiserfs/inode.c b/fs/reiserfs/inode.c
index 0cffe054b78e..23f542d1748b 100644
--- a/fs/reiserfs/inode.c
+++ b/fs/reiserfs/inode.c
@@ -2664,7 +2664,7 @@ static int reiserfs_write_full_page(struct page *page,
 	do {
 		struct buffer_head *next = bh->b_this_page;
 		if (buffer_async_write(bh)) {
-			submit_bh(REQ_OP_WRITE, 0, bh);
+			submit_bh(REQ_OP_WRITE, bh);
 			nr++;
 		}
 		put_bh(bh);
@@ -2724,7 +2724,7 @@ static int reiserfs_write_full_page(struct page *page,
 		struct buffer_head *next = bh->b_this_page;
 		if (buffer_async_write(bh)) {
 			clear_buffer_dirty(bh);
-			submit_bh(REQ_OP_WRITE, 0, bh);
+			submit_bh(REQ_OP_WRITE, bh);
 			nr++;
 		}
 		put_bh(bh);
diff --git a/fs/reiserfs/journal.c b/fs/reiserfs/journal.c
index d8cc9a366124..94addfcefede 100644
--- a/fs/reiserfs/journal.c
+++ b/fs/reiserfs/journal.c
@@ -650,7 +650,7 @@ static void submit_logged_buffer(struct buffer_head *bh)
 		BUG();
 	if (!buffer_uptodate(bh))
 		BUG();
-	submit_bh(REQ_OP_WRITE, 0, bh);
+	submit_bh(REQ_OP_WRITE, bh);
 }
 
 static void submit_ordered_buffer(struct buffer_head *bh)
@@ -660,7 +660,7 @@ static void submit_ordered_buffer(struct buffer_head *bh)
 	clear_buffer_dirty(bh);
 	if (!buffer_uptodate(bh))
 		BUG();
-	submit_bh(REQ_OP_WRITE, 0, bh);
+	submit_bh(REQ_OP_WRITE, bh);
 }
 
 #define CHUNK_SIZE 32
@@ -868,7 +868,7 @@ static int write_ordered_buffers(spinlock_t * lock,
 		 */
 		if (buffer_dirty(bh) && unlikely(bh->b_page->mapping == NULL)) {
 			spin_unlock(lock);
-			ll_rw_block(REQ_OP_WRITE, 0, 1, &bh);
+			ll_rw_block(REQ_OP_WRITE, 1, &bh);
 			spin_lock(lock);
 		}
 		put_bh(bh);
@@ -1054,7 +1054,7 @@ static int flush_commit_list(struct super_block *s,
 		if (tbh) {
 			if (buffer_dirty(tbh)) {
 		            depth = reiserfs_write_unlock_nested(s);
-			    ll_rw_block(REQ_OP_WRITE, 0, 1, &tbh);
+			    ll_rw_block(REQ_OP_WRITE, 1, &tbh);
 			    reiserfs_write_lock_nested(s, depth);
 			}
 			put_bh(tbh) ;
@@ -2240,7 +2240,7 @@ static int journal_read_transaction(struct super_block *sb,
 		}
 	}
 	/* read in the log blocks, memcpy to the corresponding real block */
-	ll_rw_block(REQ_OP_READ, 0, get_desc_trans_len(desc), log_blocks);
+	ll_rw_block(REQ_OP_READ, get_desc_trans_len(desc), log_blocks);
 	for (i = 0; i < get_desc_trans_len(desc); i++) {
 
 		wait_on_buffer(log_blocks[i]);
@@ -2342,7 +2342,7 @@ static struct buffer_head *reiserfs_breada(struct block_device *dev,
 		} else
 			bhlist[j++] = bh;
 	}
-	ll_rw_block(REQ_OP_READ, 0, j, bhlist);
+	ll_rw_block(REQ_OP_READ, j, bhlist);
 	for (i = 1; i < j; i++)
 		brelse(bhlist[i]);
 	bh = bhlist[0];
diff --git a/fs/reiserfs/stree.c b/fs/reiserfs/stree.c
index ef42729216d1..9a293609a022 100644
--- a/fs/reiserfs/stree.c
+++ b/fs/reiserfs/stree.c
@@ -579,7 +579,7 @@ static int search_by_key_reada(struct super_block *s,
 		if (!buffer_uptodate(bh[j])) {
 			if (depth == -1)
 				depth = reiserfs_write_unlock_nested(s);
-			ll_rw_block(REQ_OP_READ, REQ_RAHEAD, 1, bh + j);
+			ll_rw_block(REQ_OP_READ | REQ_RAHEAD, 1, bh + j);
 		}
 		brelse(bh[j]);
 	}
@@ -685,7 +685,7 @@ int search_by_key(struct super_block *sb, const struct cpu_key *key,
 			if (!buffer_uptodate(bh) && depth == -1)
 				depth = reiserfs_write_unlock_nested(sb);
 
-			ll_rw_block(REQ_OP_READ, 0, 1, &bh);
+			ll_rw_block(REQ_OP_READ, 1, &bh);
 			wait_on_buffer(bh);
 
 			if (depth != -1)
diff --git a/fs/reiserfs/super.c b/fs/reiserfs/super.c
index cfb7c44c7366..c88cd2ce0665 100644
--- a/fs/reiserfs/super.c
+++ b/fs/reiserfs/super.c
@@ -1702,7 +1702,7 @@ static int read_super_block(struct super_block *s, int offset)
 /* after journal replay, reread all bitmap and super blocks */
 static int reread_meta_blocks(struct super_block *s)
 {
-	ll_rw_block(REQ_OP_READ, 0, 1, &SB_BUFFER_WITH_SB(s));
+	ll_rw_block(REQ_OP_READ, 1, &SB_BUFFER_WITH_SB(s));
 	wait_on_buffer(SB_BUFFER_WITH_SB(s));
 	if (!buffer_uptodate(SB_BUFFER_WITH_SB(s))) {
 		reiserfs_warning(s, "reiserfs-2504", "error reading the super");
diff --git a/fs/udf/dir.c b/fs/udf/dir.c
index 42e3e551fa4c..cad3772f9dbe 100644
--- a/fs/udf/dir.c
+++ b/fs/udf/dir.c
@@ -130,7 +130,7 @@ static int udf_readdir(struct file *file, struct dir_context *ctx)
 					brelse(tmp);
 			}
 			if (num) {
-				ll_rw_block(REQ_OP_READ, REQ_RAHEAD, num, bha);
+				ll_rw_block(REQ_OP_READ | REQ_RAHEAD, num, bha);
 				for (i = 0; i < num; i++)
 					brelse(bha[i]);
 			}
diff --git a/fs/udf/directory.c b/fs/udf/directory.c
index 73720320f0ab..a2adf6293093 100644
--- a/fs/udf/directory.c
+++ b/fs/udf/directory.c
@@ -89,7 +89,7 @@ struct fileIdentDesc *udf_fileident_read(struct inode *dir, loff_t *nf_pos,
 					brelse(tmp);
 			}
 			if (num) {
-				ll_rw_block(REQ_OP_READ, REQ_RAHEAD, num, bha);
+				ll_rw_block(REQ_OP_READ | REQ_RAHEAD, num, bha);
 				for (i = 0; i < num; i++)
 					brelse(bha[i]);
 			}
diff --git a/fs/udf/inode.c b/fs/udf/inode.c
index edc88716751a..8d06daed549f 100644
--- a/fs/udf/inode.c
+++ b/fs/udf/inode.c
@@ -1214,7 +1214,7 @@ struct buffer_head *udf_bread(struct inode *inode, udf_pblk_t block,
 	if (buffer_uptodate(bh))
 		return bh;
 
-	ll_rw_block(REQ_OP_READ, 0, 1, &bh);
+	ll_rw_block(REQ_OP_READ, 1, &bh);
 
 	wait_on_buffer(bh);
 	if (buffer_uptodate(bh))
diff --git a/fs/ufs/balloc.c b/fs/ufs/balloc.c
index 075d3d9114c8..bd810d8239f2 100644
--- a/fs/ufs/balloc.c
+++ b/fs/ufs/balloc.c
@@ -296,7 +296,7 @@ static void ufs_change_blocknr(struct inode *inode, sector_t beg,
 			if (!buffer_mapped(bh))
 					map_bh(bh, inode->i_sb, oldb + pos);
 			if (!buffer_uptodate(bh)) {
-				ll_rw_block(REQ_OP_READ, 0, 1, &bh);
+				ll_rw_block(REQ_OP_READ, 1, &bh);
 				wait_on_buffer(bh);
 				if (!buffer_uptodate(bh)) {
 					ufs_error(inode->i_sb, __func__,
diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index 9795df9400bd..bb68eb6407da 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -202,11 +202,11 @@ struct buffer_head *alloc_buffer_head(gfp_t gfp_flags);
 void free_buffer_head(struct buffer_head * bh);
 void unlock_buffer(struct buffer_head *bh);
 void __lock_buffer(struct buffer_head *bh);
-void ll_rw_block(enum req_op, blk_opf_t, int, struct buffer_head * bh[]);
+void ll_rw_block(blk_opf_t, int, struct buffer_head * bh[]);
 int sync_dirty_buffer(struct buffer_head *bh);
 int __sync_dirty_buffer(struct buffer_head *bh, blk_opf_t op_flags);
 void write_dirty_buffer(struct buffer_head *bh, blk_opf_t op_flags);
-int submit_bh(enum req_op, blk_opf_t, struct buffer_head *);
+int submit_bh(blk_opf_t, struct buffer_head *);
 void write_boundary_block(struct block_device *bdev,
 			sector_t bblock, unsigned blocksize);
 int bh_uptodate_or_lock(struct buffer_head *bh);

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 48/63] fs/direct-io: Reduce the size of struct dio
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (46 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 47/63] fs/buffer: Combine two submit_bh() and ll_rw_block() arguments Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-30  8:50   ` Jan Kara
  2022-06-29 23:31 ` [PATCH v2 49/63] fs/mpage: Use the new blk_opf_t type Bart Van Assche
                   ` (15 subsequent siblings)
  63 siblings, 1 reply; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Al Viro,
	Darrick J . Wong, Jan Kara

Reduce the size of struct dio by combining the 'op' and 'op_flags'
members into the new 'opf' member. Use the new blk_opf_t type to improve
static type checking. This patch does not change any functionality.
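
Since the REQ_OP_* codes occupy the low bits of the combined value and
the REQ_* flags occupy the higher bits, helpers such as op_is_write()
keep working on the merged 'opf' member. A minimal sketch, not part of
this patch, assuming only the definitions from include/linux/blk_types.h
(the helper name below is made up):

	static bool dio_is_sync_write(blk_opf_t opf)
	{
		/* op_is_write() only inspects the REQ_OP_* bits. */
		return op_is_write(opf) && (opf & REQ_SYNC);
	}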

Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Darrick J. Wong <djwong@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 fs/direct-io.c | 37 ++++++++++++++++++++-----------------
 1 file changed, 20 insertions(+), 17 deletions(-)

diff --git a/fs/direct-io.c b/fs/direct-io.c
index 840752006f60..b72706d163f5 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -117,8 +117,7 @@ struct dio_submit {
 /* dio_state communicated between submission path and end_io */
 struct dio {
 	int flags;			/* doesn't change */
-	int op;
-	int op_flags;
+	blk_opf_t opf;			/* request operation type and flags */
 	struct gendisk *bio_disk;
 	struct inode *inode;
 	loff_t i_size;			/* i_size when submitted */
@@ -154,6 +153,11 @@ struct dio {
 
 static struct kmem_cache *dio_cache __read_mostly;
 
+static inline bool op_is_read(blk_opf_t opf)
+{
+	return !op_is_write(opf);
+}
+
 /*
  * How many pages are in the queue?
  */
@@ -172,7 +176,7 @@ static inline int dio_refill_pages(struct dio *dio, struct dio_submit *sdio)
 	ret = iov_iter_get_pages(sdio->iter, dio->pages, LONG_MAX, DIO_PAGES,
 				&sdio->from);
 
-	if (ret < 0 && sdio->blocks_available && (dio->op == REQ_OP_WRITE)) {
+	if (ret < 0 && sdio->blocks_available && op_is_write(dio->opf)) {
 		struct page *page = ZERO_PAGE(0);
 		/*
 		 * A memory fault, but the filesystem has some outstanding
@@ -251,7 +255,7 @@ static ssize_t dio_complete(struct dio *dio, ssize_t ret, unsigned int flags)
 		transferred = dio->result;
 
 		/* Check for short read case */
-		if ((dio->op == REQ_OP_READ) &&
+		if (op_is_read(dio->opf) &&
 		    ((offset + transferred) > dio->i_size))
 			transferred = dio->i_size - offset;
 		/* ignore EFAULT if some IO has been done */
@@ -286,7 +290,7 @@ static ssize_t dio_complete(struct dio *dio, ssize_t ret, unsigned int flags)
 	 * zeros from unwritten extents.
 	 */
 	if (flags & DIO_COMPLETE_INVALIDATE &&
-	    ret > 0 && dio->op == REQ_OP_WRITE &&
+	    ret > 0 && op_is_write(dio->opf) &&
 	    dio->inode->i_mapping->nrpages) {
 		err = invalidate_inode_pages2_range(dio->inode->i_mapping,
 					offset >> PAGE_SHIFT,
@@ -305,7 +309,7 @@ static ssize_t dio_complete(struct dio *dio, ssize_t ret, unsigned int flags)
 		 */
 		dio->iocb->ki_pos += transferred;
 
-		if (ret > 0 && dio->op == REQ_OP_WRITE)
+		if (ret > 0 && op_is_write(dio->opf))
 			ret = generic_write_sync(dio->iocb, ret);
 		dio->iocb->ki_complete(dio->iocb, ret);
 	}
@@ -353,7 +357,7 @@ static void dio_bio_end_aio(struct bio *bio)
 		 */
 		if (dio->result)
 			defer_completion = dio->defer_completion ||
-					   (dio->op == REQ_OP_WRITE &&
+					   (op_is_write(dio->opf) &&
 					    dio->inode->i_mapping->nrpages);
 		if (defer_completion) {
 			INIT_WORK(&dio->complete_work, dio_aio_complete_work);
@@ -396,7 +400,7 @@ dio_bio_alloc(struct dio *dio, struct dio_submit *sdio,
 	 * bio_alloc() is guaranteed to return a bio when allowed to sleep and
 	 * we request a valid number of vectors.
 	 */
-	bio = bio_alloc(bdev, nr_vecs, dio->op | dio->op_flags, GFP_KERNEL);
+	bio = bio_alloc(bdev, nr_vecs, dio->opf, GFP_KERNEL);
 	bio->bi_iter.bi_sector = first_sector;
 	if (dio->is_async)
 		bio->bi_end_io = dio_bio_end_aio;
@@ -426,7 +430,7 @@ static inline void dio_bio_submit(struct dio *dio, struct dio_submit *sdio)
 	dio->refcount++;
 	spin_unlock_irqrestore(&dio->bio_lock, flags);
 
-	if (dio->is_async && dio->op == REQ_OP_READ && dio->should_dirty)
+	if (dio->is_async && op_is_read(dio->opf) && dio->should_dirty)
 		bio_set_pages_dirty(bio);
 
 	dio->bio_disk = bio->bi_bdev->bd_disk;
@@ -492,7 +496,7 @@ static struct bio *dio_await_one(struct dio *dio)
 static blk_status_t dio_bio_complete(struct dio *dio, struct bio *bio)
 {
 	blk_status_t err = bio->bi_status;
-	bool should_dirty = dio->op == REQ_OP_READ && dio->should_dirty;
+	bool should_dirty = op_is_read(dio->opf) && dio->should_dirty;
 
 	if (err) {
 		if (err == BLK_STS_AGAIN && (bio->bi_opf & REQ_NOWAIT))
@@ -653,7 +657,7 @@ static int get_more_blocks(struct dio *dio, struct dio_submit *sdio,
 		 * which may decide to handle it or also return an unmapped
 		 * buffer head.
 		 */
-		create = dio->op == REQ_OP_WRITE;
+		create = op_is_write(dio->opf);
 		if (dio->flags & DIO_SKIP_HOLES) {
 			i_size = i_size_read(dio->inode);
 			if (i_size && fs_startblk <= (i_size - 1) >> i_blkbits)
@@ -804,7 +808,7 @@ submit_page_section(struct dio *dio, struct dio_submit *sdio, struct page *page,
 	int ret = 0;
 	int boundary = sdio->boundary;	/* dio_send_cur_page may clear it */
 
-	if (dio->op == REQ_OP_WRITE) {
+	if (op_is_write(dio->opf)) {
 		/*
 		 * Read accounting is performed in submit_bio()
 		 */
@@ -992,7 +996,7 @@ static int do_direct_IO(struct dio *dio, struct dio_submit *sdio,
 				loff_t i_size_aligned;
 
 				/* AKPM: eargh, -ENOTBLK is a hack */
-				if (dio->op == REQ_OP_WRITE) {
+				if (op_is_write(dio->opf)) {
 					put_page(page);
 					return -ENOTBLK;
 				}
@@ -1196,12 +1200,11 @@ ssize_t __blockdev_direct_IO(struct kiocb *iocb, struct inode *inode,
 
 	dio->inode = inode;
 	if (iov_iter_rw(iter) == WRITE) {
-		dio->op = REQ_OP_WRITE;
-		dio->op_flags = REQ_SYNC | REQ_IDLE;
+		dio->opf = REQ_OP_WRITE | REQ_SYNC | REQ_IDLE;
 		if (iocb->ki_flags & IOCB_NOWAIT)
-			dio->op_flags |= REQ_NOWAIT;
+			dio->opf |= REQ_NOWAIT;
 	} else {
-		dio->op = REQ_OP_READ;
+		dio->opf = REQ_OP_READ;
 	}
 
 	/*

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 49/63] fs/mpage: Use the new blk_opf_t type
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (47 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 48/63] fs/direct-io: Reduce the size of struct dio Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 50/63] fs/btrfs: Use the enum req_op and blk_opf_t types Bart Van Assche
                   ` (14 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche, Al Viro

Improve static type checking by using the new blk_opf_t type for the
combination of a block layer request operation and block layer request
flags.
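
The __bitwise annotation on blk_opf_t is what lets sparse flag
accidental mixing of plain integers and request flags. A small
illustration of the kind of misuse this catches (hypothetical code,
assuming sparse's usual handling of __bitwise types):

	blk_opf_t opf = REQ_OP_READ | REQ_RAHEAD;	/* OK */
	unsigned int raw = opf;	/* sparse: different base types */
	opf = 0x800;		/* sparse: expected restricted blk_opf_t */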

Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 fs/mpage.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/mpage.c b/fs/mpage.c
index 0d25f44f5707..5830705672dd 100644
--- a/fs/mpage.c
+++ b/fs/mpage.c
@@ -145,7 +145,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 	struct block_device *bdev = NULL;
 	int length;
 	int fully_mapped = 1;
-	int op = REQ_OP_READ;
+	blk_opf_t op = REQ_OP_READ;
 	unsigned nblocks;
 	unsigned relative_block;
 	gfp_t gfp = mapping_gfp_constraint(page->mapping, GFP_KERNEL);

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 50/63] fs/btrfs: Use the enum req_op and blk_opf_t types
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (48 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 49/63] fs/mpage: Use the new blk_opf_t type Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-30 11:37   ` David Sterba
  2022-06-29 23:31 ` [PATCH v2 51/63] fs/ext4: Use the new blk_opf_t type Bart Van Assche
                   ` (13 subsequent siblings)
  63 siblings, 1 reply; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, David Sterba,
	Josef Bacik

Improve static type checking by using the enum req_op type for variables
that represent a request operation and the new blk_opf_t type for
variables that represent request flags.

Cc: David Sterba <dsterba@suse.com>
Cc: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 fs/btrfs/check-integrity.c |  4 ++--
 fs/btrfs/compression.c     |  6 +++---
 fs/btrfs/compression.h     |  2 +-
 fs/btrfs/extent_io.c       | 18 +++++++++---------
 fs/btrfs/inode.c           |  4 ++--
 fs/btrfs/raid56.c          |  4 ++--
 6 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/fs/btrfs/check-integrity.c b/fs/btrfs/check-integrity.c
index 5d20137b7b67..98c6e5feab19 100644
--- a/fs/btrfs/check-integrity.c
+++ b/fs/btrfs/check-integrity.c
@@ -152,7 +152,7 @@ struct btrfsic_block {
 	struct btrfsic_block *next_in_same_bio;
 	void *orig_bio_private;
 	bio_end_io_t *orig_bio_end_io;
-	int submit_bio_bh_rw;
+	blk_opf_t submit_bio_bh_rw;
 	u64 flush_gen; /* only valid if !never_written */
 };
 
@@ -1681,7 +1681,7 @@ static void btrfsic_process_written_block(struct btrfsic_dev_state *dev_state,
 					  u64 dev_bytenr, char **mapped_datav,
 					  unsigned int num_pages,
 					  struct bio *bio, int *bio_is_patched,
-					  int submit_bio_bh_rw)
+					  blk_opf_t submit_bio_bh_rw)
 {
 	int is_metadata;
 	struct btrfsic_block *block;
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index f4564f32f6d9..a82b9f17f476 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -455,7 +455,7 @@ static blk_status_t submit_compressed_bio(struct btrfs_fs_info *fs_info,
 
 
 static struct bio *alloc_compressed_bio(struct compressed_bio *cb, u64 disk_bytenr,
-					unsigned int opf, bio_end_io_t endio_func,
+					blk_opf_t opf, bio_end_io_t endio_func,
 					u64 *next_stripe_start)
 {
 	struct btrfs_fs_info *fs_info = btrfs_sb(cb->inode->i_sb);
@@ -505,7 +505,7 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
 				 unsigned int compressed_len,
 				 struct page **compressed_pages,
 				 unsigned int nr_pages,
-				 unsigned int write_flags,
+				 blk_opf_t write_flags,
 				 struct cgroup_subsys_state *blkcg_css,
 				 bool writeback)
 {
@@ -517,7 +517,7 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
 	blk_status_t ret;
 	int skip_sum = inode->flags & BTRFS_INODE_NODATASUM;
 	const bool use_append = btrfs_use_zone_append(inode, disk_start);
-	const unsigned int bio_op = use_append ? REQ_OP_ZONE_APPEND : REQ_OP_WRITE;
+	const enum req_op bio_op = use_append ? REQ_OP_ZONE_APPEND : REQ_OP_WRITE;
 
 	ASSERT(IS_ALIGNED(start, fs_info->sectorsize) &&
 	       IS_ALIGNED(len, fs_info->sectorsize));
diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
index 2707404389a5..2b56d63e01ce 100644
--- a/fs/btrfs/compression.h
+++ b/fs/btrfs/compression.h
@@ -99,7 +99,7 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
 				  unsigned int compressed_len,
 				  struct page **compressed_pages,
 				  unsigned int nr_pages,
-				  unsigned int write_flags,
+				  blk_opf_t write_flags,
 				  struct cgroup_subsys_state *blkcg_css,
 				  bool writeback);
 void btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 04e36343da3a..60a20df353e7 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -3357,7 +3357,7 @@ static int calc_bio_boundaries(struct btrfs_bio_ctrl *bio_ctrl,
 static int alloc_new_bio(struct btrfs_inode *inode,
 			 struct btrfs_bio_ctrl *bio_ctrl,
 			 struct writeback_control *wbc,
-			 unsigned int opf,
+			 blk_opf_t opf,
 			 bio_end_io_t end_io_func,
 			 u64 disk_bytenr, u32 offset, u64 file_offset,
 			 enum btrfs_compression_type compress_type)
@@ -3437,7 +3437,7 @@ static int alloc_new_bio(struct btrfs_inode *inode,
  * @prev_bio_flags:  flags of previous bio to see if we can merge the current one
  * @compress_type:   compress type for current bio
  */
-static int submit_extent_page(unsigned int opf,
+static int submit_extent_page(blk_opf_t opf,
 			      struct writeback_control *wbc,
 			      struct btrfs_bio_ctrl *bio_ctrl,
 			      struct page *page, u64 disk_bytenr,
@@ -3615,7 +3615,7 @@ __get_extent_map(struct inode *inode, struct page *page, size_t pg_offset,
  */
 static int btrfs_do_readpage(struct page *page, struct extent_map **em_cached,
 		      struct btrfs_bio_ctrl *bio_ctrl,
-		      unsigned int read_flags, u64 *prev_em_start)
+		      blk_opf_t read_flags, u64 *prev_em_start)
 {
 	struct inode *inode = page->mapping->host;
 	struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
@@ -3983,8 +3983,8 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
 	int saved_ret = 0;
 	int ret = 0;
 	int nr = 0;
-	u32 opf = REQ_OP_WRITE;
-	const unsigned int write_flags = wbc_to_write_flags(wbc);
+	enum req_op op = REQ_OP_WRITE;
+	const blk_opf_t write_flags = wbc_to_write_flags(wbc);
 	bool has_error = false;
 	bool compressed;
 
@@ -4058,7 +4058,7 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
 		iosize = min(min(em_end, end + 1), dirty_range_end) - cur;
 
 		if (btrfs_use_zone_append(inode, em->block_start))
-			opf = REQ_OP_ZONE_APPEND;
+			op = REQ_OP_ZONE_APPEND;
 
 		free_extent_map(em);
 		em = NULL;
@@ -4094,7 +4094,7 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
 		 */
 		btrfs_page_clear_dirty(fs_info, page, cur, iosize);
 
-		ret = submit_extent_page(opf | write_flags, wbc,
+		ret = submit_extent_page(op | write_flags, wbc,
 					 &epd->bio_ctrl, page,
 					 disk_bytenr, iosize,
 					 cur - page_offset(page),
@@ -4575,7 +4575,7 @@ static int write_one_subpage_eb(struct extent_buffer *eb,
 {
 	struct btrfs_fs_info *fs_info = eb->fs_info;
 	struct page *page = eb->pages[0];
-	unsigned int write_flags = wbc_to_write_flags(wbc) | REQ_META;
+	blk_opf_t write_flags = wbc_to_write_flags(wbc) | REQ_META;
 	bool no_dirty_ebs = false;
 	int ret;
 
@@ -4620,7 +4620,7 @@ static noinline_for_stack int write_one_eb(struct extent_buffer *eb,
 {
 	u64 disk_bytenr = eb->start;
 	int i, num_pages;
-	unsigned int write_flags = wbc_to_write_flags(wbc) | REQ_META;
+	blk_opf_t write_flags = wbc_to_write_flags(wbc) | REQ_META;
 	int ret = 0;
 
 	prepare_eb_write(eb);
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 05e0c4a5affd..f8378c949be4 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -485,7 +485,7 @@ struct async_chunk {
 	struct page *locked_page;
 	u64 start;
 	u64 end;
-	unsigned int write_flags;
+	blk_opf_t write_flags;
 	struct list_head extents;
 	struct cgroup_subsys_state *blkcg_css;
 	struct btrfs_work work;
@@ -1435,7 +1435,7 @@ static int cow_file_range_async(struct btrfs_inode *inode,
 	int i;
 	bool should_compress;
 	unsigned nofs_flag;
-	const unsigned int write_flags = wbc_to_write_flags(wbc);
+	const blk_opf_t write_flags = wbc_to_write_flags(wbc);
 
 	unlock_extent(&inode->io_tree, start, end);
 
diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
index a5b623ee6fac..c520412d1f86 100644
--- a/fs/btrfs/raid56.c
+++ b/fs/btrfs/raid56.c
@@ -1136,7 +1136,7 @@ static int rbio_add_io_sector(struct btrfs_raid_bio *rbio,
 			      unsigned int stripe_nr,
 			      unsigned int sector_nr,
 			      unsigned long bio_max_len,
-			      unsigned int opf)
+			      enum req_op op)
 {
 	const u32 sectorsize = rbio->bioc->fs_info->sectorsize;
 	struct bio *last = bio_list->tail;
@@ -1181,7 +1181,7 @@ static int rbio_add_io_sector(struct btrfs_raid_bio *rbio,
 
 	/* put a new bio on the list */
 	bio = bio_alloc(stripe->dev->bdev, max(bio_max_len >> PAGE_SHIFT, 1UL),
-			opf, GFP_NOFS);
+			op, GFP_NOFS);
 	bio->bi_iter.bi_sector = disk_start >> 9;
 	bio->bi_private = rbio;
 

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 51/63] fs/ext4: Use the new blk_opf_t type
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (49 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 50/63] fs/btrfs: Use the enum req_op and blk_opf_t types Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 52/63] fs/f2fs: Use the enum req_op and blk_opf_t types Bart Van Assche
                   ` (12 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Theodore Ts'o

Improve static type checking by using the new blk_opf_t type for
variables that represent request flags.

Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 fs/ext4/ext4.h        |  8 ++++----
 fs/ext4/fast_commit.c |  2 +-
 fs/ext4/super.c       | 14 +++++++-------
 3 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 75b8d81b2469..29fc575a4eb6 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -3058,14 +3058,14 @@ extern unsigned int ext4_list_backups(struct super_block *sb,
 
 /* super.c */
 extern struct buffer_head *ext4_sb_bread(struct super_block *sb,
-					 sector_t block, int op_flags);
+					 sector_t block, blk_opf_t op_flags);
 extern struct buffer_head *ext4_sb_bread_unmovable(struct super_block *sb,
 						   sector_t block);
-extern void ext4_read_bh_nowait(struct buffer_head *bh, int op_flags,
+extern void ext4_read_bh_nowait(struct buffer_head *bh, blk_opf_t op_flags,
 				bh_end_io_t *end_io);
-extern int ext4_read_bh(struct buffer_head *bh, int op_flags,
+extern int ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags,
 			bh_end_io_t *end_io);
-extern int ext4_read_bh_lock(struct buffer_head *bh, int op_flags, bool wait);
+extern int ext4_read_bh_lock(struct buffer_head *bh, blk_opf_t op_flags, bool wait);
 extern void ext4_sb_breadahead_unmovable(struct super_block *sb, sector_t block);
 extern int ext4_seq_options_show(struct seq_file *seq, void *offset);
 extern int ext4_calculate_overhead(struct super_block *sb);
diff --git a/fs/ext4/fast_commit.c b/fs/ext4/fast_commit.c
index 0df5482c6c1c..eb4c8ad1bb61 100644
--- a/fs/ext4/fast_commit.c
+++ b/fs/ext4/fast_commit.c
@@ -658,7 +658,7 @@ void ext4_fc_track_range(handle_t *handle, struct inode *inode, ext4_lblk_t star
 
 static void ext4_fc_submit_bh(struct super_block *sb, bool is_tail)
 {
-	int write_flags = REQ_SYNC;
+	blk_opf_t write_flags = REQ_SYNC;
 	struct buffer_head *bh = EXT4_SB(sb)->s_fc_bh;
 
 	/* Add REQ_FUA | REQ_PREFLUSH only its tail */
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 24922184b622..2c68dec63e54 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -159,7 +159,7 @@ MODULE_ALIAS("ext3");
 #define IS_EXT3_SB(sb) ((sb)->s_bdev->bd_holder == &ext3_fs_type)
 
 
-static inline void __ext4_read_bh(struct buffer_head *bh, int op_flags,
+static inline void __ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags,
 				  bh_end_io_t *end_io)
 {
 	/*
@@ -174,7 +174,7 @@ static inline void __ext4_read_bh(struct buffer_head *bh, int op_flags,
 	submit_bh(REQ_OP_READ | op_flags, bh);
 }
 
-void ext4_read_bh_nowait(struct buffer_head *bh, int op_flags,
+void ext4_read_bh_nowait(struct buffer_head *bh, blk_opf_t op_flags,
 			 bh_end_io_t *end_io)
 {
 	BUG_ON(!buffer_locked(bh));
@@ -186,7 +186,7 @@ void ext4_read_bh_nowait(struct buffer_head *bh, int op_flags,
 	__ext4_read_bh(bh, op_flags, end_io);
 }
 
-int ext4_read_bh(struct buffer_head *bh, int op_flags, bh_end_io_t *end_io)
+int ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags, bh_end_io_t *end_io)
 {
 	BUG_ON(!buffer_locked(bh));
 
@@ -203,7 +203,7 @@ int ext4_read_bh(struct buffer_head *bh, int op_flags, bh_end_io_t *end_io)
 	return -EIO;
 }
 
-int ext4_read_bh_lock(struct buffer_head *bh, int op_flags, bool wait)
+int ext4_read_bh_lock(struct buffer_head *bh, blk_opf_t op_flags, bool wait)
 {
 	if (trylock_buffer(bh)) {
 		if (wait)
@@ -227,8 +227,8 @@ int ext4_read_bh_lock(struct buffer_head *bh, int op_flags, bool wait)
  * return.
  */
 static struct buffer_head *__ext4_sb_bread_gfp(struct super_block *sb,
-					       sector_t block, int op_flags,
-					       gfp_t gfp)
+					       sector_t block,
+					       blk_opf_t op_flags, gfp_t gfp)
 {
 	struct buffer_head *bh;
 	int ret;
@@ -248,7 +248,7 @@ static struct buffer_head *__ext4_sb_bread_gfp(struct super_block *sb,
 }
 
 struct buffer_head *ext4_sb_bread(struct super_block *sb, sector_t block,
-				   int op_flags)
+				   blk_opf_t op_flags)
 {
 	return __ext4_sb_bread_gfp(sb, block, op_flags, __GFP_MOVABLE);
 }

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 52/63] fs/f2fs: Use the enum req_op and blk_opf_t types
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (50 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 51/63] fs/ext4: Use the new blk_opf_t type Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 53/63] fs/gfs2: " Bart Van Assche
                   ` (11 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche, Jaegeuk Kim

Improve static type checking by using the enum req_op type for variables
that represent a request operation and the new blk_opf_t type for
variables that represent request flags.
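
One wrinkle is the f2fs tracepoints: __print_flags() works on plain
integer values, so both the masked flags and the symbolic table entries
in the hunk below gain __force casts that strip the __bitwise attribute;
without them sparse would complain about the restricted blk_opf_t being
converted to a plain integer. The pattern, roughly (hypothetical macro
name):

	#define SHOW_OPF_BIT(flag, name)	{ (__force u32)(flag), name }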

Cc: Jaegeuk Kim <jaegeuk@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 fs/f2fs/data.c              | 11 ++++++-----
 fs/f2fs/f2fs.h              |  6 +++---
 fs/f2fs/node.c              |  2 +-
 fs/f2fs/segment.c           |  2 +-
 include/trace/events/f2fs.h | 22 +++++++++++-----------
 5 files changed, 22 insertions(+), 21 deletions(-)

diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 7fcbcf979737..5c13ee321940 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -387,11 +387,11 @@ int f2fs_target_device_index(struct f2fs_sb_info *sbi, block_t blkaddr)
 	return 0;
 }
 
-static unsigned int f2fs_io_flags(struct f2fs_io_info *fio)
+static blk_opf_t f2fs_io_flags(struct f2fs_io_info *fio)
 {
 	unsigned int temp_mask = (1 << NR_TEMP_TYPE) - 1;
 	unsigned int fua_flag, meta_flag, io_flag;
-	unsigned int op_flags = 0;
+	blk_opf_t op_flags = 0;
 
 	if (fio->op != REQ_OP_WRITE)
 		return 0;
@@ -999,7 +999,7 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio)
 }
 
 static struct bio *f2fs_grab_read_bio(struct inode *inode, block_t blkaddr,
-				      unsigned nr_pages, unsigned op_flag,
+				      unsigned nr_pages, blk_opf_t op_flag,
 				      pgoff_t first_idx, bool for_write)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
@@ -1047,7 +1047,8 @@ static struct bio *f2fs_grab_read_bio(struct inode *inode, block_t blkaddr,
 
 /* This can handle encryption stuffs */
 static int f2fs_submit_page_read(struct inode *inode, struct page *page,
-				 block_t blkaddr, int op_flags, bool for_write)
+				 block_t blkaddr, blk_opf_t op_flags,
+				 bool for_write)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	struct bio *bio;
@@ -1181,7 +1182,7 @@ int f2fs_get_block(struct dnode_of_data *dn, pgoff_t index)
 }
 
 struct page *f2fs_get_read_data_page(struct inode *inode, pgoff_t index,
-						int op_flags, bool for_write)
+				     blk_opf_t op_flags, bool for_write)
 {
 	struct address_space *mapping = inode->i_mapping;
 	struct dnode_of_data dn;
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index d9bbecd008d2..868170b72de9 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -1183,8 +1183,8 @@ struct f2fs_io_info {
 	nid_t ino;		/* inode number */
 	enum page_type type;	/* contains DATA/NODE/META/META_FLUSH */
 	enum temp_type temp;	/* contains HOT/WARM/COLD */
-	int op;			/* contains REQ_OP_ */
-	int op_flags;		/* req_flag_bits */
+	enum req_op op;		/* contains REQ_OP_ */
+	blk_opf_t op_flags;	/* req_flag_bits */
 	block_t new_blkaddr;	/* new block address to be written */
 	block_t old_blkaddr;	/* old block address before Cow */
 	struct page *page;	/* page to be written */
@@ -3741,7 +3741,7 @@ int f2fs_reserve_new_block(struct dnode_of_data *dn);
 int f2fs_get_block(struct dnode_of_data *dn, pgoff_t index);
 int f2fs_reserve_block(struct dnode_of_data *dn, pgoff_t index);
 struct page *f2fs_get_read_data_page(struct inode *inode, pgoff_t index,
-			int op_flags, bool for_write);
+			blk_opf_t op_flags, bool for_write);
 struct page *f2fs_find_data_page(struct inode *inode, pgoff_t index);
 struct page *f2fs_get_lock_data_page(struct inode *inode, pgoff_t index,
 			bool for_write);
diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
index cf6f7fc83c08..04a145f1dcfc 100644
--- a/fs/f2fs/node.c
+++ b/fs/f2fs/node.c
@@ -1327,7 +1327,7 @@ struct page *f2fs_new_node_page(struct dnode_of_data *dn, unsigned int ofs)
  * 0: f2fs_put_page(page, 0)
  * LOCKED_PAGE or error: f2fs_put_page(page, 1)
  */
-static int read_node_page(struct page *page, int op_flags)
+static int read_node_page(struct page *page, blk_opf_t op_flags)
 {
 	struct f2fs_sb_info *sbi = F2FS_P_SB(page);
 	struct node_info ni;
diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
index 874c1b9c41a2..c7afc588cf26 100644
--- a/fs/f2fs/segment.c
+++ b/fs/f2fs/segment.c
@@ -1082,7 +1082,7 @@ static int __submit_discard_cmd(struct f2fs_sb_info *sbi,
 	struct discard_cmd_control *dcc = SM_I(sbi)->dcc_info;
 	struct list_head *wait_list = (dpolicy->type == DPOLICY_FSTRIM) ?
 					&(dcc->fstrim_list) : &(dcc->wait_list);
-	int flag = dpolicy->sync ? REQ_SYNC : 0;
+	blk_opf_t flag = dpolicy->sync ? REQ_SYNC : 0;
 	block_t lstart, start, len, total_len;
 	int err = 0;
 
diff --git a/include/trace/events/f2fs.h b/include/trace/events/f2fs.h
index 513e889ef8aa..f1e922237736 100644
--- a/include/trace/events/f2fs.h
+++ b/include/trace/events/f2fs.h
@@ -66,7 +66,7 @@ TRACE_DEFINE_ENUM(CP_RESIZE);
 
 #define F2FS_OP_FLAGS (REQ_RAHEAD | REQ_SYNC | REQ_META | REQ_PRIO |	\
 			REQ_PREFLUSH | REQ_FUA)
-#define F2FS_BIO_FLAG_MASK(t)	(t & F2FS_OP_FLAGS)
+#define F2FS_BIO_FLAG_MASK(t) (__force u32)((t) & F2FS_OP_FLAGS)
 
 #define show_bio_type(op,op_flags)	show_bio_op(op),		\
 						show_bio_op_flags(op_flags)
@@ -75,12 +75,12 @@ TRACE_DEFINE_ENUM(CP_RESIZE);
 
 #define show_bio_op_flags(flags)					\
 	__print_flags(F2FS_BIO_FLAG_MASK(flags), "|",			\
-		{ REQ_RAHEAD,		"R" },				\
-		{ REQ_SYNC,		"S" },				\
-		{ REQ_META,		"M" },				\
-		{ REQ_PRIO,		"P" },				\
-		{ REQ_PREFLUSH,		"PF" },				\
-		{ REQ_FUA,		"FUA" })
+		{ (__force u32)REQ_RAHEAD,	"R" },			\
+		{ (__force u32)REQ_SYNC,	"S" },			\
+		{ (__force u32)REQ_META,	"M" },			\
+		{ (__force u32)REQ_PRIO,	"P" },			\
+		{ (__force u32)REQ_PREFLUSH,	"PF" },			\
+		{ (__force u32)REQ_FUA,		"FUA" })
 
 #define show_data_type(type)						\
 	__print_symbolic(type,						\
@@ -1036,8 +1036,8 @@ DECLARE_EVENT_CLASS(f2fs__submit_page_bio,
 		__field(pgoff_t, index)
 		__field(block_t, old_blkaddr)
 		__field(block_t, new_blkaddr)
-		__field(int, op)
-		__field(int, op_flags)
+		__field(enum req_op, op)
+		__field(blk_opf_t, op_flags)
 		__field(int, temp)
 		__field(int, type)
 	),
@@ -1092,8 +1092,8 @@ DECLARE_EVENT_CLASS(f2fs__bio,
 	TP_STRUCT__entry(
 		__field(dev_t,	dev)
 		__field(dev_t,	target)
-		__field(int,	op)
-		__field(int,	op_flags)
+		__field(enum req_op,	op)
+		__field(blk_opf_t,	op_flags)
 		__field(int,	type)
 		__field(sector_t,	sector)
 		__field(unsigned int,	size)

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 53/63] fs/gfs2: Use the enum req_op and blk_opf_t types
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (51 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 52/63] fs/f2fs: Use the enum req_op and blk_opf_t types Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-07-07  6:03   ` Andreas Gruenbacher
  2022-06-29 23:31 ` [PATCH v2 54/63] fs/hfsplus: " Bart Van Assche
                   ` (10 subsequent siblings)
  63 siblings, 1 reply; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Bob Peterson,
	Andreas Gruenbacher

Improve static type checking by using the enum req_op type for variables
that represent a request operation and the new blk_opf_t type for
variables that represent request flags. Combine the first two
gfs2_submit_bhs() arguments into a single argument.

Cc: Bob Peterson <rpeterso@redhat.com>
Cc: Andreas Gruenbacher <agruenba@redhat.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 fs/gfs2/log.c     | 4 ++--
 fs/gfs2/log.h     | 2 +-
 fs/gfs2/lops.c    | 4 ++--
 fs/gfs2/lops.h    | 2 +-
 fs/gfs2/meta_io.c | 9 ++++-----
 5 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c
index f0ee3ff6f9a8..eec4159b08aa 100644
--- a/fs/gfs2/log.c
+++ b/fs/gfs2/log.c
@@ -823,7 +823,7 @@ void gfs2_flush_revokes(struct gfs2_sbd *sdp)
 
 void gfs2_write_log_header(struct gfs2_sbd *sdp, struct gfs2_jdesc *jd,
 			   u64 seq, u32 tail, u32 lblock, u32 flags,
-			   int op_flags)
+			   blk_opf_t op_flags)
 {
 	struct gfs2_log_header *lh;
 	u32 hash, crc;
@@ -905,7 +905,7 @@ void gfs2_write_log_header(struct gfs2_sbd *sdp, struct gfs2_jdesc *jd,
 
 static void log_write_header(struct gfs2_sbd *sdp, u32 flags)
 {
-	int op_flags = REQ_PREFLUSH | REQ_FUA | REQ_META | REQ_SYNC;
+	blk_opf_t op_flags = REQ_PREFLUSH | REQ_FUA | REQ_META | REQ_SYNC;
 	enum gfs2_freeze_state state = atomic_read(&sdp->sd_freeze_state);
 
 	gfs2_assert_withdraw(sdp, (state != SFS_FROZEN));
diff --git a/fs/gfs2/log.h b/fs/gfs2/log.h
index fc905c2af53c..653cffcbf869 100644
--- a/fs/gfs2/log.h
+++ b/fs/gfs2/log.h
@@ -82,7 +82,7 @@ extern void gfs2_log_reserve(struct gfs2_sbd *sdp, struct gfs2_trans *tr,
 			     unsigned int *extra_revokes);
 extern void gfs2_write_log_header(struct gfs2_sbd *sdp, struct gfs2_jdesc *jd,
 				  u64 seq, u32 tail, u32 lblock, u32 flags,
-				  int op_flags);
+				  blk_opf_t op_flags);
 extern void gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl,
 			   u32 type);
 extern void gfs2_log_commit(struct gfs2_sbd *sdp, struct gfs2_trans *trans);
diff --git a/fs/gfs2/lops.c b/fs/gfs2/lops.c
index 6ba51cbb94cf..90a2d7bc91c4 100644
--- a/fs/gfs2/lops.c
+++ b/fs/gfs2/lops.c
@@ -238,7 +238,7 @@ static void gfs2_end_log_write(struct bio *bio)
  * there is no pending bio, then this is a no-op.
  */
 
-void gfs2_log_submit_bio(struct bio **biop, int opf)
+void gfs2_log_submit_bio(struct bio **biop, blk_opf_t opf)
 {
 	struct bio *bio = *biop;
 	if (bio) {
@@ -292,7 +292,7 @@ static struct bio *gfs2_log_alloc_bio(struct gfs2_sbd *sdp, u64 blkno,
  */
 
 static struct bio *gfs2_log_get_bio(struct gfs2_sbd *sdp, u64 blkno,
-				    struct bio **biop, int op,
+				    struct bio **biop, enum req_op op,
 				    bio_end_io_t *end_io, bool flush)
 {
 	struct bio *bio = *biop;
diff --git a/fs/gfs2/lops.h b/fs/gfs2/lops.h
index f707601597dc..1412ffba1d44 100644
--- a/fs/gfs2/lops.h
+++ b/fs/gfs2/lops.h
@@ -16,7 +16,7 @@ extern u64 gfs2_log_bmap(struct gfs2_jdesc *jd, unsigned int lbn);
 extern void gfs2_log_write(struct gfs2_sbd *sdp, struct gfs2_jdesc *jd,
 			   struct page *page, unsigned size, unsigned offset,
 			   u64 blkno);
-extern void gfs2_log_submit_bio(struct bio **biop, int opf);
+extern void gfs2_log_submit_bio(struct bio **biop, blk_opf_t opf);
 extern void gfs2_pin(struct gfs2_sbd *sdp, struct buffer_head *bh);
 extern int gfs2_find_jhead(struct gfs2_jdesc *jd,
 			   struct gfs2_log_header_host *head, bool keep_cache);
diff --git a/fs/gfs2/meta_io.c b/fs/gfs2/meta_io.c
index 3570739f005d..7e70e0ba5a6c 100644
--- a/fs/gfs2/meta_io.c
+++ b/fs/gfs2/meta_io.c
@@ -34,7 +34,7 @@ static int gfs2_aspace_writepage(struct page *page, struct writeback_control *wb
 {
 	struct buffer_head *bh, *head;
 	int nr_underway = 0;
-	int write_flags = REQ_META | REQ_PRIO | wbc_to_write_flags(wbc);
+	blk_opf_t write_flags = REQ_META | REQ_PRIO | wbc_to_write_flags(wbc);
 
 	BUG_ON(!PageLocked(page));
 	BUG_ON(!page_has_buffers(page));
@@ -217,14 +217,13 @@ static void gfs2_meta_read_endio(struct bio *bio)
  * Submit several consecutive buffer head I/O requests as a single bio I/O
  * request.  (See submit_bh_wbc.)
  */
-static void gfs2_submit_bhs(int op, int op_flags, struct buffer_head *bhs[],
-			    int num)
+static void gfs2_submit_bhs(blk_opf_t opf, struct buffer_head *bhs[], int num)
 {
 	while (num > 0) {
 		struct buffer_head *bh = *bhs;
 		struct bio *bio;
 
-		bio = bio_alloc(bh->b_bdev, num, op | op_flags, GFP_NOIO);
+		bio = bio_alloc(bh->b_bdev, num, opf, GFP_NOIO);
 		bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9);
 		while (num > 0) {
 			bh = *bhs;
@@ -288,7 +287,7 @@ int gfs2_meta_read(struct gfs2_glock *gl, u64 blkno, int flags,
 		}
 	}
 
-	gfs2_submit_bhs(REQ_OP_READ, REQ_META | REQ_PRIO, bhs, num);
+	gfs2_submit_bhs(REQ_OP_READ | REQ_META | REQ_PRIO, bhs, num);
 	if (!(flags & DIO_WAIT))
 		return 0;
 

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 54/63] fs/hfsplus: Use the enum req_op and blk_opf_t types
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (52 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 53/63] fs/gfs2: " Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 55/63] fs/iomap: Use the new blk_opf_t type Bart Van Assche
                   ` (9 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche

Improve static type checking by using the enum req_op type for variables
that represent a request operation and the new blk_opf_t type for
variables that represent request flags. Combine the last two
hfsplus_submit_bio() arguments into a single argument.
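
With the operation and flags folded into a single argument, callers OR
them together and the callee recovers the bare operation with
REQ_OP_MASK, as in the hunks below. The resulting convention, in sketch
form (placeholder variables):

	/* caller: pass the operation and any flags OR'ed together */
	hfsplus_submit_bio(sb, sector, buf, NULL, REQ_OP_WRITE | REQ_SYNC);

	/* callee: mask off the flags before comparing against REQ_OP_* */
	const enum req_op op = opf & REQ_OP_MASK;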

Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 fs/hfsplus/hfsplus_fs.h |  2 +-
 fs/hfsplus/part_tbl.c   |  5 ++---
 fs/hfsplus/super.c      |  4 ++--
 fs/hfsplus/wrapper.c    | 12 ++++++------
 4 files changed, 11 insertions(+), 12 deletions(-)

diff --git a/fs/hfsplus/hfsplus_fs.h b/fs/hfsplus/hfsplus_fs.h
index 396e73aa0961..a5db2e3b2980 100644
--- a/fs/hfsplus/hfsplus_fs.h
+++ b/fs/hfsplus/hfsplus_fs.h
@@ -525,7 +525,7 @@ int hfsplus_compare_dentry(const struct dentry *dentry, unsigned int len,
 
 /* wrapper.c */
 int hfsplus_submit_bio(struct super_block *sb, sector_t sector, void *buf,
-		       void **data, int op, int op_flags);
+		       void **data, blk_opf_t opf);
 int hfsplus_read_wrapper(struct super_block *sb);
 
 /*
diff --git a/fs/hfsplus/part_tbl.c b/fs/hfsplus/part_tbl.c
index 63164ebc52fa..9ec21664eda6 100644
--- a/fs/hfsplus/part_tbl.c
+++ b/fs/hfsplus/part_tbl.c
@@ -112,8 +112,7 @@ static int hfs_parse_new_pmap(struct super_block *sb, void *buf,
 		if ((u8 *)pm - (u8 *)buf >= buf_size) {
 			res = hfsplus_submit_bio(sb,
 						 *part_start + HFS_PMAP_BLK + i,
-						 buf, (void **)&pm, REQ_OP_READ,
-						 0);
+						 buf, (void **)&pm, REQ_OP_READ);
 			if (res)
 				return res;
 		}
@@ -137,7 +136,7 @@ int hfs_part_find(struct super_block *sb,
 		return -ENOMEM;
 
 	res = hfsplus_submit_bio(sb, *part_start + HFS_PMAP_BLK,
-				 buf, &data, REQ_OP_READ, 0);
+				 buf, &data, REQ_OP_READ);
 	if (res)
 		goto out;
 
diff --git a/fs/hfsplus/super.c b/fs/hfsplus/super.c
index 8479add998b5..122ed89ebf9f 100644
--- a/fs/hfsplus/super.c
+++ b/fs/hfsplus/super.c
@@ -221,7 +221,7 @@ static int hfsplus_sync_fs(struct super_block *sb, int wait)
 
 	error2 = hfsplus_submit_bio(sb,
 				   sbi->part_start + HFSPLUS_VOLHEAD_SECTOR,
-				   sbi->s_vhdr_buf, NULL, REQ_OP_WRITE,
+				   sbi->s_vhdr_buf, NULL, REQ_OP_WRITE |
 				   REQ_SYNC);
 	if (!error)
 		error = error2;
@@ -230,7 +230,7 @@ static int hfsplus_sync_fs(struct super_block *sb, int wait)
 
 	error2 = hfsplus_submit_bio(sb,
 				  sbi->part_start + sbi->sect_count - 2,
-				  sbi->s_backup_vhdr_buf, NULL, REQ_OP_WRITE,
+				  sbi->s_backup_vhdr_buf, NULL, REQ_OP_WRITE |
 				  REQ_SYNC);
 	if (!error)
 		error2 = error;
diff --git a/fs/hfsplus/wrapper.c b/fs/hfsplus/wrapper.c
index 0b8ad6586df5..0b791adf02e5 100644
--- a/fs/hfsplus/wrapper.c
+++ b/fs/hfsplus/wrapper.c
@@ -45,8 +45,9 @@ struct hfsplus_wd {
  * will work correctly.
  */
 int hfsplus_submit_bio(struct super_block *sb, sector_t sector,
-		       void *buf, void **data, int op, int op_flags)
+		       void *buf, void **data, blk_opf_t opf)
 {
+	const enum req_op op = opf & REQ_OP_MASK;
 	struct bio *bio;
 	int ret = 0;
 	u64 io_size;
@@ -63,10 +64,10 @@ int hfsplus_submit_bio(struct super_block *sb, sector_t sector,
 	offset = start & (io_size - 1);
 	sector &= ~((io_size >> HFSPLUS_SECTOR_SHIFT) - 1);
 
-	bio = bio_alloc(sb->s_bdev, 1, op | op_flags, GFP_NOIO);
+	bio = bio_alloc(sb->s_bdev, 1, opf, GFP_NOIO);
 	bio->bi_iter.bi_sector = sector;
 
-	if (op != WRITE && data)
+	if (op != REQ_OP_WRITE && data)
 		*data = (u8 *)buf + offset;
 
 	while (io_size > 0) {
@@ -184,7 +185,7 @@ int hfsplus_read_wrapper(struct super_block *sb)
 reread:
 	error = hfsplus_submit_bio(sb, part_start + HFSPLUS_VOLHEAD_SECTOR,
 				   sbi->s_vhdr_buf, (void **)&sbi->s_vhdr,
-				   REQ_OP_READ, 0);
+				   REQ_OP_READ);
 	if (error)
 		goto out_free_backup_vhdr;
 
@@ -216,8 +217,7 @@ int hfsplus_read_wrapper(struct super_block *sb)
 
 	error = hfsplus_submit_bio(sb, part_start + part_size - 2,
 				   sbi->s_backup_vhdr_buf,
-				   (void **)&sbi->s_backup_vhdr, REQ_OP_READ,
-				   0);
+				   (void **)&sbi->s_backup_vhdr, REQ_OP_READ);
 	if (error)
 		goto out_free_backup_vhdr;
 

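A minimal sketch of the resulting calling convention, for reference (illustrative only, not part of the patch; sketch_write_vhdr() and the sector value are made up, the prototype is the one from hfsplus_fs.h above):

#include "hfsplus_fs.h"

static int sketch_write_vhdr(struct super_block *sb, void *buf)
{
        /* One combined blk_opf_t argument instead of (op, op_flags). */
        return hfsplus_submit_bio(sb, 0 /* placeholder sector */, buf, NULL,
                                  REQ_OP_WRITE | REQ_SYNC);
}
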
^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 55/63] fs/iomap: Use the new blk_opf_t type
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (53 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 54/63] fs/hfsplus: " Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 56/63] fs/jbd2: Fix the documentation of the jbd2_write_superblock() callers Bart Van Assche
                   ` (8 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche, Al Viro

Improve static type checking by using the enum req_op type for variables
that represent a request operation and the new blk_opf_t type for
the combination of a request operation and request flags.

Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 fs/iomap/direct-io.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index 5d098adba443..18a3d9357dce 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -52,7 +52,7 @@ struct iomap_dio {
 };
 
 static struct bio *iomap_dio_alloc_bio(const struct iomap_iter *iter,
-		struct iomap_dio *dio, unsigned short nr_vecs, unsigned int opf)
+		struct iomap_dio *dio, unsigned short nr_vecs, blk_opf_t opf)
 {
 	if (dio->dops && dio->dops->bio_set)
 		return bio_alloc_bioset(iter->iomap.bdev, nr_vecs, opf,
@@ -212,10 +212,10 @@ static void iomap_dio_zero(const struct iomap_iter *iter, struct iomap_dio *dio,
  * mapping, and whether or not we want FUA.  Note that we can end up
  * clearing the WRITE_FUA flag in the dio request.
  */
-static inline unsigned int iomap_dio_bio_opflags(struct iomap_dio *dio,
+static inline blk_opf_t iomap_dio_bio_opflags(struct iomap_dio *dio,
 		const struct iomap *iomap, bool use_fua)
 {
-	unsigned int opflags = REQ_SYNC | REQ_IDLE;
+	blk_opf_t opflags = REQ_SYNC | REQ_IDLE;
 
 	if (!(dio->flags & IOMAP_DIO_WRITE)) {
 		WARN_ON_ONCE(iomap->flags & IOMAP_F_ZONE_APPEND);
@@ -244,7 +244,7 @@ static loff_t iomap_dio_bio_iter(const struct iomap_iter *iter,
 	unsigned int fs_block_size = i_blocksize(inode), pad;
 	loff_t length = iomap_length(iter);
 	loff_t pos = iter->pos;
-	unsigned int bio_opf;
+	blk_opf_t bio_opf;
 	struct bio *bio;
 	bool need_zeroout = false;
 	bool use_fua = false;

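Background on why the int -> blk_opf_t conversions above matter (illustrative sketch, not from the patch; sketch_opflags() is made up and only loosely mirrors iomap_dio_bio_opflags()): blk_opf_t is a __bitwise type, so sparse accepts combinations of REQ_* values but would flag a bare integer mixed into it.

#include <linux/blk_types.h>

static blk_opf_t sketch_opflags(bool use_fua)
{
        blk_opf_t opflags = REQ_SYNC | REQ_IDLE;	/* REQ_* values combine cleanly */

        if (use_fua)
                opflags |= REQ_FUA;
        /* opflags |= 1; would draw a sparse warning: bare int vs. __bitwise */
        return opflags;
}
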
^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 56/63] fs/jbd2: Fix the documentation of the jbd2_write_superblock() callers
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (54 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 55/63] fs/iomap: Use the new blk_opf_t type Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 57/63] fs/nfs: Use enum req_op where appropriate Bart Van Assche
                   ` (7 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Mike Christie,
	Theodore Ts'o

Commit 2a222ca992c3 ("fs: have submit_bh users pass in op and flags
separately") renamed the jbd2_write_superblock() 'write_op' argument into
'write_flags'. Propagate this change to the jbd2_write_superblock()
callers. Additionally, change the type of 'write_flags' into blk_opf_t.

Cc: Mike Christie <michael.christie@oracle.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 fs/jbd2/journal.c           | 15 ++++++++-------
 include/linux/jbd2.h        |  2 +-
 include/trace/events/jbd2.h | 12 ++++++------
 3 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
index fd169a356da9..40c376f5d6bc 100644
--- a/fs/jbd2/journal.c
+++ b/fs/jbd2/journal.c
@@ -1602,7 +1602,7 @@ static int journal_reset(journal_t *journal)
  * This function expects that the caller will have locked the journal
  * buffer head, and will return with it unlocked
  */
-static int jbd2_write_superblock(journal_t *journal, int write_flags)
+static int jbd2_write_superblock(journal_t *journal, blk_opf_t write_flags)
 {
 	struct buffer_head *bh = journal->j_sb_buffer;
 	journal_superblock_t *sb = journal->j_superblock;
@@ -1659,13 +1659,14 @@ static int jbd2_write_superblock(journal_t *journal, int write_flags)
  * @journal: The journal to update.
  * @tail_tid: TID of the new transaction at the tail of the log
  * @tail_block: The first block of the transaction at the tail of the log
- * @write_op: With which operation should we write the journal sb
+ * @write_flags: Flags for the journal sb write operation
  *
  * Update a journal's superblock information about log tail and write it to
  * disk, waiting for the IO to complete.
  */
 int jbd2_journal_update_sb_log_tail(journal_t *journal, tid_t tail_tid,
-				     unsigned long tail_block, int write_op)
+				    unsigned long tail_block,
+				    blk_opf_t write_flags)
 {
 	journal_superblock_t *sb = journal->j_superblock;
 	int ret;
@@ -1685,7 +1686,7 @@ int jbd2_journal_update_sb_log_tail(journal_t *journal, tid_t tail_tid,
 	sb->s_sequence = cpu_to_be32(tail_tid);
 	sb->s_start    = cpu_to_be32(tail_block);
 
-	ret = jbd2_write_superblock(journal, write_op);
+	ret = jbd2_write_superblock(journal, write_flags);
 	if (ret)
 		goto out;
 
@@ -1702,12 +1703,12 @@ int jbd2_journal_update_sb_log_tail(journal_t *journal, tid_t tail_tid,
 /**
  * jbd2_mark_journal_empty() - Mark on disk journal as empty.
  * @journal: The journal to update.
- * @write_op: With which operation should we write the journal sb
+ * @write_flags: Flags for the journal sb write operation
  *
  * Update a journal's dynamic superblock fields to show that journal is empty.
  * Write updated superblock to disk waiting for IO to complete.
  */
-static void jbd2_mark_journal_empty(journal_t *journal, int write_op)
+static void jbd2_mark_journal_empty(journal_t *journal, blk_opf_t write_flags)
 {
 	journal_superblock_t *sb = journal->j_superblock;
 	bool had_fast_commit = false;
@@ -1733,7 +1734,7 @@ static void jbd2_mark_journal_empty(journal_t *journal, int write_op)
 		had_fast_commit = true;
 	}
 
-	jbd2_write_superblock(journal, write_op);
+	jbd2_write_superblock(journal, write_flags);
 
 	if (had_fast_commit)
 		jbd2_set_feature_fast_commit(journal);
diff --git a/include/linux/jbd2.h b/include/linux/jbd2.h
index e79d6e0b14e8..dc1724131300 100644
--- a/include/linux/jbd2.h
+++ b/include/linux/jbd2.h
@@ -1557,7 +1557,7 @@ extern int	   jbd2_journal_wipe       (journal_t *, int);
 extern int	   jbd2_journal_skip_recovery	(journal_t *);
 extern void	   jbd2_journal_update_sb_errno(journal_t *);
 extern int	   jbd2_journal_update_sb_log_tail	(journal_t *, tid_t,
-				unsigned long, int);
+				unsigned long, blk_opf_t);
 extern void	   jbd2_journal_abort      (journal_t *, int);
 extern int	   jbd2_journal_errno      (journal_t *);
 extern void	   jbd2_journal_ack_err    (journal_t *);
diff --git a/include/trace/events/jbd2.h b/include/trace/events/jbd2.h
index a4dfe005983d..99f783c384bb 100644
--- a/include/trace/events/jbd2.h
+++ b/include/trace/events/jbd2.h
@@ -355,22 +355,22 @@ TRACE_EVENT(jbd2_update_log_tail,
 
 TRACE_EVENT(jbd2_write_superblock,
 
-	TP_PROTO(journal_t *journal, int write_op),
+	TP_PROTO(journal_t *journal, blk_opf_t write_flags),
 
-	TP_ARGS(journal, write_op),
+	TP_ARGS(journal, write_flags),
 
 	TP_STRUCT__entry(
 		__field(	dev_t,  dev			)
-		__field(	  int,  write_op		)
+		__field(    blk_opf_t,  write_flags		)
 	),
 
 	TP_fast_assign(
 		__entry->dev		= journal->j_fs_dev->bd_dev;
-		__entry->write_op	= write_op;
+		__entry->write_flags	= write_flags;
 	),
 
-	TP_printk("dev %d,%d write_op %x", MAJOR(__entry->dev),
-		  MINOR(__entry->dev), __entry->write_op)
+	TP_printk("dev %d,%d write_flags %x", MAJOR(__entry->dev),
+		  MINOR(__entry->dev), (__force u32)__entry->write_flags)
 );
 
 TRACE_EVENT(jbd2_lock_buffer_stall,

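A note on the (__force u32) cast introduced in TP_printk() above (sketch, not from the patch; sketch_log_flags() is made up): because blk_opf_t is __bitwise, handing it to a plain %x conversion requires stripping the annotation for sparse, in tracing and elsewhere.

#include <linux/blk_types.h>
#include <linux/printk.h>

static void sketch_log_flags(blk_opf_t write_flags)
{
        pr_debug("write_flags %x\n", (__force u32)write_flags);
}
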
^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 57/63] fs/nfs: Use enum req_op where appropriate
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (55 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 56/63] fs/jbd2: Fix the documentation of the jbd2_write_superblock() callers Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 58/63] fs/nilfs2: Use the enum req_op and blk_opf_t types Bart Van Assche
                   ` (6 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Trond Myklebust,
	Anna Schumaker

Improve static type checking by using enum req_op for request operations.
Rename an 'rw' argument into 'op' since that name is typically used for
request operations. This patch does not change any functionality. Note:
REQ_OP_READ = READ = 0 and REQ_OP_WRITE = WRITE = 1.

Cc: Trond Myklebust <trond.myklebust@hammerspace.com>
Cc: Anna Schumaker <anna@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 fs/nfs/blocklayout/blocklayout.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
index 79a8b451791f..943aeea1eb16 100644
--- a/fs/nfs/blocklayout/blocklayout.c
+++ b/fs/nfs/blocklayout/blocklayout.c
@@ -121,7 +121,7 @@ static bool offset_in_map(u64 offset, struct pnfs_block_dev_map *map)
 }
 
 static struct bio *
-do_add_page_to_bio(struct bio *bio, int npg, int rw, sector_t isect,
+do_add_page_to_bio(struct bio *bio, int npg, enum req_op op, sector_t isect,
 		struct page *page, struct pnfs_block_dev_map *map,
 		struct pnfs_block_extent *be, bio_end_io_t end_io,
 		struct parallel_io *par, unsigned int offset, int *len)
@@ -131,7 +131,7 @@ do_add_page_to_bio(struct bio *bio, int npg, int rw, sector_t isect,
 	u64 disk_addr, end;
 
 	dprintk("%s: npg %d rw %d isect %llu offset %u len %d\n", __func__,
-		npg, rw, (unsigned long long)isect, offset, *len);
+		npg, (__force u32)op, (unsigned long long)isect, offset, *len);
 
 	/* translate to device offset */
 	isect += be->be_v_offset;
@@ -154,7 +154,7 @@ do_add_page_to_bio(struct bio *bio, int npg, int rw, sector_t isect,
 
 retry:
 	if (!bio) {
-		bio = bio_alloc(map->bdev, bio_max_segs(npg), rw, GFP_NOIO);
+		bio = bio_alloc(map->bdev, bio_max_segs(npg), op, GFP_NOIO);
 		bio->bi_iter.bi_sector = disk_addr >> SECTOR_SHIFT;
 		bio->bi_end_io = end_io;
 		bio->bi_private = par;
@@ -291,7 +291,7 @@ bl_read_pagelist(struct nfs_pgio_header *header)
 		} else {
 			bio = do_add_page_to_bio(bio,
 						 header->page_array.npages - i,
-						 READ,
+						 REQ_OP_READ,
 						 isect, pages[i], &map, &be,
 						 bl_end_io_read, par,
 						 pg_offset, &pg_len);
@@ -420,9 +420,8 @@ bl_write_pagelist(struct nfs_pgio_header *header, int sync)
 
 		pg_len = PAGE_SIZE;
 		bio = do_add_page_to_bio(bio, header->page_array.npages - i,
-					 WRITE, isect, pages[i], &map, &be,
-					 bl_end_io_write, par,
-					 0, &pg_len);
+					 REQ_OP_WRITE, isect, pages[i], &map,
+					 &be, bl_end_io_write, par, 0, &pg_len);
 		if (IS_ERR(bio)) {
 			header->pnfs_error = PTR_ERR(bio);
 			bio = NULL;

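The numeric equivalence noted in the commit message can be spelled out as build-time checks; a sketch (not part of the patch, assuming <linux/blk_types.h> plus the usual headers for READ/WRITE and static_assert, and ignoring sparse's __bitwise annotations):

static_assert(REQ_OP_READ == READ);	/* both are 0 */
static_assert(REQ_OP_WRITE == WRITE);	/* both are 1 */
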
^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 58/63] fs/nilfs2: Use the enum req_op and blk_opf_t types
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (56 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 57/63] fs/nfs: Use enum req_op where appropriate Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-07-01  7:03   ` Ryusuke Konishi
  2022-06-29 23:31 ` [PATCH v2 59/63] fs/ntfs3: Use enum req_op where appropriate Bart Van Assche
                   ` (5 subsequent siblings)
  63 siblings, 1 reply; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Ryusuke Konishi

Improve static type checking by using the enum req_op type for variables
that represent a request operation and the new blk_opf_t type for
variables that represent request flags. Combine the 'mode' and
'mode_flags' arguments of nilfs_btnode_submit_block() into a single
argument 'opf'.

Cc: Ryusuke Konishi <konishi.ryusuke@gmail.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 fs/nilfs2/btnode.c            |  8 ++++----
 fs/nilfs2/btnode.h            |  4 ++--
 fs/nilfs2/btree.c             |  6 +++---
 fs/nilfs2/gcinode.c           |  5 ++---
 fs/nilfs2/mdt.c               | 19 ++++++++++---------
 include/trace/events/nilfs2.h |  4 ++--
 6 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/fs/nilfs2/btnode.c b/fs/nilfs2/btnode.c
index 5c39efbf733f..e74fda212620 100644
--- a/fs/nilfs2/btnode.c
+++ b/fs/nilfs2/btnode.c
@@ -70,7 +70,7 @@ nilfs_btnode_create_block(struct address_space *btnc, __u64 blocknr)
 }
 
 int nilfs_btnode_submit_block(struct address_space *btnc, __u64 blocknr,
-			      sector_t pblocknr, int mode, int mode_flags,
+			      sector_t pblocknr, blk_opf_t opf,
 			      struct buffer_head **pbh, sector_t *submit_ptr)
 {
 	struct buffer_head *bh;
@@ -103,13 +103,13 @@ int nilfs_btnode_submit_block(struct address_space *btnc, __u64 blocknr,
 		}
 	}
 
-	if (mode_flags & REQ_RAHEAD) {
+	if (opf & REQ_RAHEAD) {
 		if (pblocknr != *submit_ptr + 1 || !trylock_buffer(bh)) {
 			err = -EBUSY; /* internal code */
 			brelse(bh);
 			goto out_locked;
 		}
-	} else { /* mode == READ */
+	} else { /* opf == REQ_OP_READ */
 		lock_buffer(bh);
 	}
 	if (buffer_uptodate(bh)) {
@@ -122,7 +122,7 @@ int nilfs_btnode_submit_block(struct address_space *btnc, __u64 blocknr,
 	bh->b_blocknr = pblocknr; /* set block address for read */
 	bh->b_end_io = end_buffer_read_sync;
 	get_bh(bh);
-	submit_bh(mode | mode_flags, bh);
+	submit_bh(opf, bh);
 	bh->b_blocknr = blocknr; /* set back to the given block address */
 	*submit_ptr = pblocknr;
 	err = 0;
diff --git a/fs/nilfs2/btnode.h b/fs/nilfs2/btnode.h
index bd5544e63a01..4bc5612dff94 100644
--- a/fs/nilfs2/btnode.h
+++ b/fs/nilfs2/btnode.h
@@ -34,8 +34,8 @@ void nilfs_init_btnc_inode(struct inode *btnc_inode);
 void nilfs_btnode_cache_clear(struct address_space *);
 struct buffer_head *nilfs_btnode_create_block(struct address_space *btnc,
 					      __u64 blocknr);
-int nilfs_btnode_submit_block(struct address_space *, __u64, sector_t, int,
-			      int, struct buffer_head **, sector_t *);
+int nilfs_btnode_submit_block(struct address_space *, __u64, sector_t,
+			      blk_opf_t, struct buffer_head **, sector_t *);
 void nilfs_btnode_delete(struct buffer_head *);
 int nilfs_btnode_prepare_change_key(struct address_space *,
 				    struct nilfs_btnode_chkey_ctxt *);
diff --git a/fs/nilfs2/btree.c b/fs/nilfs2/btree.c
index f544c22fff78..9f4d9432d38a 100644
--- a/fs/nilfs2/btree.c
+++ b/fs/nilfs2/btree.c
@@ -477,7 +477,7 @@ static int __nilfs_btree_get_block(const struct nilfs_bmap *btree, __u64 ptr,
 	sector_t submit_ptr = 0;
 	int ret;
 
-	ret = nilfs_btnode_submit_block(btnc, ptr, 0, REQ_OP_READ, 0, &bh,
+	ret = nilfs_btnode_submit_block(btnc, ptr, 0, REQ_OP_READ, &bh,
 					&submit_ptr);
 	if (ret) {
 		if (ret != -EEXIST)
@@ -495,8 +495,8 @@ static int __nilfs_btree_get_block(const struct nilfs_bmap *btree, __u64 ptr,
 			ptr2 = nilfs_btree_node_get_ptr(ra->node, i, ra->ncmax);
 
 			ret = nilfs_btnode_submit_block(btnc, ptr2, 0,
-							REQ_OP_READ, REQ_RAHEAD,
-							&ra_bh, &submit_ptr);
+						REQ_OP_READ | REQ_RAHEAD,
+						&ra_bh, &submit_ptr);
 			if (likely(!ret || ret == -EEXIST))
 				brelse(ra_bh);
 			else if (ret != -EBUSY)
diff --git a/fs/nilfs2/gcinode.c b/fs/nilfs2/gcinode.c
index 847def8af315..b0d22ff24b67 100644
--- a/fs/nilfs2/gcinode.c
+++ b/fs/nilfs2/gcinode.c
@@ -129,9 +129,8 @@ int nilfs_gccache_submit_read_node(struct inode *inode, sector_t pbn,
 	struct inode *btnc_inode = NILFS_I(inode)->i_assoc_inode;
 	int ret;
 
-	ret = nilfs_btnode_submit_block(btnc_inode->i_mapping,
-					vbn ? : pbn, pbn, REQ_OP_READ, 0,
-					out_bh, &pbn);
+	ret = nilfs_btnode_submit_block(btnc_inode->i_mapping, vbn ? : pbn, pbn,
+					REQ_OP_READ, out_bh, &pbn);
 	if (ret == -EEXIST) /* internal code (cache hit) */
 		ret = 0;
 	return ret;
diff --git a/fs/nilfs2/mdt.c b/fs/nilfs2/mdt.c
index 66e8811c2528..cbf4fa60eea2 100644
--- a/fs/nilfs2/mdt.c
+++ b/fs/nilfs2/mdt.c
@@ -111,8 +111,8 @@ static int nilfs_mdt_create_block(struct inode *inode, unsigned long block,
 }
 
 static int
-nilfs_mdt_submit_block(struct inode *inode, unsigned long blkoff,
-		       int mode, int mode_flags, struct buffer_head **out_bh)
+nilfs_mdt_submit_block(struct inode *inode, unsigned long blkoff, blk_opf_t opf,
+		       struct buffer_head **out_bh)
 {
 	struct buffer_head *bh;
 	__u64 blknum = 0;
@@ -126,12 +126,12 @@ nilfs_mdt_submit_block(struct inode *inode, unsigned long blkoff,
 	if (buffer_uptodate(bh))
 		goto out;
 
-	if (mode_flags & REQ_RAHEAD) {
+	if (opf & REQ_RAHEAD) {
 		if (!trylock_buffer(bh)) {
 			ret = -EBUSY;
 			goto failed_bh;
 		}
-	} else /* mode == READ */
+	} else /* opf == REQ_OP_READ */
 		lock_buffer(bh);
 
 	if (buffer_uptodate(bh)) {
@@ -148,10 +148,11 @@ nilfs_mdt_submit_block(struct inode *inode, unsigned long blkoff,
 
 	bh->b_end_io = end_buffer_read_sync;
 	get_bh(bh);
-	submit_bh(mode | mode_flags, bh);
+	submit_bh(opf, bh);
 	ret = 0;
 
-	trace_nilfs2_mdt_submit_block(inode, inode->i_ino, blkoff, mode);
+	trace_nilfs2_mdt_submit_block(inode, inode->i_ino, blkoff,
+				      opf & REQ_OP_MASK);
  out:
 	get_bh(bh);
 	*out_bh = bh;
@@ -172,7 +173,7 @@ static int nilfs_mdt_read_block(struct inode *inode, unsigned long block,
 	int i, nr_ra_blocks = NILFS_MDT_MAX_RA_BLOCKS;
 	int err;
 
-	err = nilfs_mdt_submit_block(inode, block, REQ_OP_READ, 0, &first_bh);
+	err = nilfs_mdt_submit_block(inode, block, REQ_OP_READ, &first_bh);
 	if (err == -EEXIST) /* internal code */
 		goto out;
 
@@ -182,8 +183,8 @@ static int nilfs_mdt_read_block(struct inode *inode, unsigned long block,
 	if (readahead) {
 		blkoff = block + 1;
 		for (i = 0; i < nr_ra_blocks; i++, blkoff++) {
-			err = nilfs_mdt_submit_block(inode, blkoff, REQ_OP_READ,
-						     REQ_RAHEAD, &bh);
+			err = nilfs_mdt_submit_block(inode, blkoff,
+						REQ_OP_READ | REQ_RAHEAD, &bh);
 			if (likely(!err || err == -EEXIST))
 				brelse(bh);
 			else if (err != -EBUSY)
diff --git a/include/trace/events/nilfs2.h b/include/trace/events/nilfs2.h
index 84ee31fc04cc..8efc6236f57c 100644
--- a/include/trace/events/nilfs2.h
+++ b/include/trace/events/nilfs2.h
@@ -192,7 +192,7 @@ TRACE_EVENT(nilfs2_mdt_submit_block,
 	    TP_PROTO(struct inode *inode,
 		     unsigned long ino,
 		     unsigned long blkoff,
-		     int mode),
+		     enum req_op mode),
 
 	    TP_ARGS(inode, ino, blkoff, mode),
 
@@ -200,7 +200,7 @@ TRACE_EVENT(nilfs2_mdt_submit_block,
 		    __field(struct inode *, inode)
 		    __field(unsigned long, ino)
 		    __field(unsigned long, blkoff)
-		    __field(int, mode)
+		    __field(enum req_op, mode)
 	    ),
 
 	    TP_fast_assign(

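A minimal sketch of how the single 'opf' argument carries both the operation and the REQ_RAHEAD hint seen in the hunks above (illustrative only; sketch_submit() is made up):

#include <linux/blk_types.h>
#include <linux/errno.h>

static int sketch_submit(blk_opf_t opf)
{
        if (opf & REQ_RAHEAD)		/* opportunistic read-ahead */
                return -EBUSY;		/* best effort, as in the patch */
        return 0;			/* plain REQ_OP_READ */
}
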
^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 59/63] fs/ntfs3: Use enum req_op where appropriate
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (57 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 58/63] fs/nilfs2: Use the enum req_op and blk_opf_t types Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 60/63] fs/ocfs2: Use the enum req_op and blk_opf_t types Bart Van Assche
                   ` (4 subsequent siblings)
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Konstantin Komarov

Improve static type checking by using enum req_op instead of u32 for
block layer request operations.

Cc: Konstantin Komarov <almaz.alexandrovich@paragon-software.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 fs/ntfs3/fsntfs.c  | 2 +-
 fs/ntfs3/ntfs_fs.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/ntfs3/fsntfs.c b/fs/ntfs3/fsntfs.c
index 3de5700a9b83..1835e35199c2 100644
--- a/fs/ntfs3/fsntfs.c
+++ b/fs/ntfs3/fsntfs.c
@@ -1448,7 +1448,7 @@ int ntfs_write_bh(struct ntfs_sb_info *sbi, struct NTFS_RECORD_HEADER *rhdr,
  */
 int ntfs_bio_pages(struct ntfs_sb_info *sbi, const struct runs_tree *run,
 		   struct page **pages, u32 nr_pages, u64 vbo, u32 bytes,
-		   u32 op)
+		   enum req_op op)
 {
 	int err = 0;
 	struct bio *new, *bio = NULL;
diff --git a/fs/ntfs3/ntfs_fs.h b/fs/ntfs3/ntfs_fs.h
index 8de129a6419b..3a8abf13143e 100644
--- a/fs/ntfs3/ntfs_fs.h
+++ b/fs/ntfs3/ntfs_fs.h
@@ -617,7 +617,7 @@ int ntfs_write_bh(struct ntfs_sb_info *sbi, struct NTFS_RECORD_HEADER *rhdr,
 		  struct ntfs_buffers *nb, int sync);
 int ntfs_bio_pages(struct ntfs_sb_info *sbi, const struct runs_tree *run,
 		   struct page **pages, u32 nr_pages, u64 vbo, u32 bytes,
-		   u32 op);
+		   enum req_op op);
 int ntfs_bio_fill_1(struct ntfs_sb_info *sbi, const struct runs_tree *run);
 int ntfs_vbo_to_lbo(struct ntfs_sb_info *sbi, const struct runs_tree *run,
 		    u64 vbo, u64 *lbo, u64 *bytes);

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 60/63] fs/ocfs2: Use the enum req_op and blk_opf_t types
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (58 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 59/63] fs/ntfs3: Use enum req_op where appropriate Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-07-01  1:47   ` Joseph Qi
  2022-06-29 23:31 ` [PATCH v2 61/63] PM: " Bart Van Assche
                   ` (3 subsequent siblings)
  63 siblings, 1 reply; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Mark Fasheh,
	Joel Becker, Joseph Qi

Improve static type checking by using the enum req_op type for variables
that represent a request operation and the new blk_opf_t type for
variables that represent request flags. Combine the last two
o2hb_setup_one_bio() arguments into a single argument.

Cc: Mark Fasheh <mark@fasheh.com>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Joseph Qi <joseph.qi@linux.alibaba.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 fs/ocfs2/cluster/heartbeat.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/fs/ocfs2/cluster/heartbeat.c b/fs/ocfs2/cluster/heartbeat.c
index ea0e70c0fce0..5d7e303b0293 100644
--- a/fs/ocfs2/cluster/heartbeat.c
+++ b/fs/ocfs2/cluster/heartbeat.c
@@ -503,8 +503,7 @@ static void o2hb_bio_end_io(struct bio *bio)
 static struct bio *o2hb_setup_one_bio(struct o2hb_region *reg,
 				      struct o2hb_bio_wait_ctxt *wc,
 				      unsigned int *current_slot,
-				      unsigned int max_slots, int op,
-				      int op_flags)
+				      unsigned int max_slots, blk_opf_t opf)
 {
 	int len, current_page;
 	unsigned int vec_len, vec_start;
@@ -518,7 +517,7 @@ static struct bio *o2hb_setup_one_bio(struct o2hb_region *reg,
 	 * GFP_KERNEL that the local node can get fenced. It would be
 	 * nicest if we could pre-allocate these bios and avoid this
 	 * all together. */
-	bio = bio_alloc(reg->hr_bdev, 16, op | op_flags, GFP_ATOMIC);
+	bio = bio_alloc(reg->hr_bdev, 16, opf, GFP_ATOMIC);
 	if (!bio) {
 		mlog(ML_ERROR, "Could not alloc slots BIO!\n");
 		bio = ERR_PTR(-ENOMEM);
@@ -566,7 +565,7 @@ static int o2hb_read_slots(struct o2hb_region *reg,
 
 	while(current_slot < max_slots) {
 		bio = o2hb_setup_one_bio(reg, &wc, &current_slot, max_slots,
-					 REQ_OP_READ, 0);
+					 REQ_OP_READ);
 		if (IS_ERR(bio)) {
 			status = PTR_ERR(bio);
 			mlog_errno(status);
@@ -598,7 +597,7 @@ static int o2hb_issue_node_write(struct o2hb_region *reg,
 
 	slot = o2nm_this_node();
 
-	bio = o2hb_setup_one_bio(reg, write_wc, &slot, slot+1, REQ_OP_WRITE,
+	bio = o2hb_setup_one_bio(reg, write_wc, &slot, slot+1, REQ_OP_WRITE |
 				 REQ_SYNC);
 	if (IS_ERR(bio)) {
 		status = PTR_ERR(bio);

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 61/63] PM: Use the enum req_op and blk_opf_t types
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (59 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 60/63] fs/ocfs2: Use the enum req_op and blk_opf_t types Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-30 15:21   ` Rafael J. Wysocki
  2022-06-29 23:31 ` [PATCH v2 62/63] fs/xfs: " Bart Van Assche
                   ` (2 subsequent siblings)
  63 siblings, 1 reply; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Rafael J . Wysocki

Improve static type checking by using the enum req_op type for variables
that represent a request operation and the new blk_opf_t type for
variables that represent request flags. Combine the first two
hib_submit_io() arguments into a single argument.

Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 kernel/power/swap.c | 29 +++++++++++++----------------
 1 file changed, 13 insertions(+), 16 deletions(-)

diff --git a/kernel/power/swap.c b/kernel/power/swap.c
index 91fffdd2c7fb..277434b6c0bf 100644
--- a/kernel/power/swap.c
+++ b/kernel/power/swap.c
@@ -269,15 +269,14 @@ static void hib_end_io(struct bio *bio)
 	bio_put(bio);
 }
 
-static int hib_submit_io(int op, int op_flags, pgoff_t page_off, void *addr,
-		struct hib_bio_batch *hb)
+static int hib_submit_io(blk_opf_t opf, pgoff_t page_off, void *addr,
+			 struct hib_bio_batch *hb)
 {
 	struct page *page = virt_to_page(addr);
 	struct bio *bio;
 	int error = 0;
 
-	bio = bio_alloc(hib_resume_bdev, 1, op | op_flags,
-			GFP_NOIO | __GFP_HIGH);
+	bio = bio_alloc(hib_resume_bdev, 1, opf, GFP_NOIO | __GFP_HIGH);
 	bio->bi_iter.bi_sector = page_off * (PAGE_SIZE >> 9);
 
 	if (bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE) {
@@ -317,8 +316,7 @@ static int mark_swapfiles(struct swap_map_handle *handle, unsigned int flags)
 {
 	int error;
 
-	hib_submit_io(REQ_OP_READ, 0, swsusp_resume_block,
-		      swsusp_header, NULL);
+	hib_submit_io(REQ_OP_READ, swsusp_resume_block, swsusp_header, NULL);
 	if (!memcmp("SWAP-SPACE",swsusp_header->sig, 10) ||
 	    !memcmp("SWAPSPACE2",swsusp_header->sig, 10)) {
 		memcpy(swsusp_header->orig_sig,swsusp_header->sig, 10);
@@ -331,7 +329,7 @@ static int mark_swapfiles(struct swap_map_handle *handle, unsigned int flags)
 		swsusp_header->flags = flags;
 		if (flags & SF_CRC32_MODE)
 			swsusp_header->crc32 = handle->crc32;
-		error = hib_submit_io(REQ_OP_WRITE, REQ_SYNC,
+		error = hib_submit_io(REQ_OP_WRITE | REQ_SYNC,
 				      swsusp_resume_block, swsusp_header, NULL);
 	} else {
 		pr_err("Swap header not found!\n");
@@ -408,7 +406,7 @@ static int write_page(void *buf, sector_t offset, struct hib_bio_batch *hb)
 	} else {
 		src = buf;
 	}
-	return hib_submit_io(REQ_OP_WRITE, REQ_SYNC, offset, src, hb);
+	return hib_submit_io(REQ_OP_WRITE | REQ_SYNC, offset, src, hb);
 }
 
 static void release_swap_writer(struct swap_map_handle *handle)
@@ -1003,7 +1001,7 @@ static int get_swap_reader(struct swap_map_handle *handle,
 			return -ENOMEM;
 		}
 
-		error = hib_submit_io(REQ_OP_READ, 0, offset, tmp->map, NULL);
+		error = hib_submit_io(REQ_OP_READ, offset, tmp->map, NULL);
 		if (error) {
 			release_swap_reader(handle);
 			return error;
@@ -1027,7 +1025,7 @@ static int swap_read_page(struct swap_map_handle *handle, void *buf,
 	offset = handle->cur->entries[handle->k];
 	if (!offset)
 		return -EFAULT;
-	error = hib_submit_io(REQ_OP_READ, 0, offset, buf, hb);
+	error = hib_submit_io(REQ_OP_READ, offset, buf, hb);
 	if (error)
 		return error;
 	if (++handle->k >= MAP_PAGE_ENTRIES) {
@@ -1526,8 +1524,7 @@ int swsusp_check(void)
 	if (!IS_ERR(hib_resume_bdev)) {
 		set_blocksize(hib_resume_bdev, PAGE_SIZE);
 		clear_page(swsusp_header);
-		error = hib_submit_io(REQ_OP_READ, 0,
-					swsusp_resume_block,
+		error = hib_submit_io(REQ_OP_READ, swsusp_resume_block,
 					swsusp_header, NULL);
 		if (error)
 			goto put;
@@ -1535,7 +1532,7 @@ int swsusp_check(void)
 		if (!memcmp(HIBERNATE_SIG, swsusp_header->sig, 10)) {
 			memcpy(swsusp_header->sig, swsusp_header->orig_sig, 10);
 			/* Reset swap signature now */
-			error = hib_submit_io(REQ_OP_WRITE, REQ_SYNC,
+			error = hib_submit_io(REQ_OP_WRITE | REQ_SYNC,
 						swsusp_resume_block,
 						swsusp_header, NULL);
 		} else {
@@ -1586,11 +1583,11 @@ int swsusp_unmark(void)
 {
 	int error;
 
-	hib_submit_io(REQ_OP_READ, 0, swsusp_resume_block,
-		      swsusp_header, NULL);
+	hib_submit_io(REQ_OP_READ, swsusp_resume_block,
+			swsusp_header, NULL);
 	if (!memcmp(HIBERNATE_SIG,swsusp_header->sig, 10)) {
 		memcpy(swsusp_header->sig,swsusp_header->orig_sig, 10);
-		error = hib_submit_io(REQ_OP_WRITE, REQ_SYNC,
+		error = hib_submit_io(REQ_OP_WRITE | REQ_SYNC,
 					swsusp_resume_block,
 					swsusp_header, NULL);
 	} else {

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 62/63] fs/xfs: Use the enum req_op and blk_opf_t types
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (60 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 61/63] PM: " Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-29 23:31 ` [PATCH v2 63/63] fs/zonefs: Use the enum req_op type for request operations Bart Van Assche
  2022-07-13 21:48 ` [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
  63 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Darrick J . Wong

Improve static type checking by using the enum req_op type for variables
that represent a request operation and the new blk_opf_t type for the
combination of a request operation with request flags.

Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 fs/xfs/xfs_bio_io.c      | 2 +-
 fs/xfs/xfs_buf.c         | 4 ++--
 fs/xfs/xfs_linux.h       | 2 +-
 fs/xfs/xfs_log_recover.c | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/fs/xfs/xfs_bio_io.c b/fs/xfs/xfs_bio_io.c
index ae4345b37621..fe21c76f75b8 100644
--- a/fs/xfs/xfs_bio_io.c
+++ b/fs/xfs/xfs_bio_io.c
@@ -15,7 +15,7 @@ xfs_rw_bdev(
 	sector_t		sector,
 	unsigned int		count,
 	char			*data,
-	unsigned int		op)
+	enum req_op		op)
 
 {
 	unsigned int		is_vmalloc = is_vmalloc_addr(data);
diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index bf4e60871068..5e8f40d8c052 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -1416,7 +1416,7 @@ xfs_buf_ioapply_map(
 	int		map,
 	int		*buf_offset,
 	int		*count,
-	int		op)
+	blk_opf_t	op)
 {
 	int		page_index;
 	unsigned int	total_nr_pages = bp->b_page_count;
@@ -1493,7 +1493,7 @@ _xfs_buf_ioapply(
 	struct xfs_buf	*bp)
 {
 	struct blk_plug	plug;
-	int		op;
+	blk_opf_t	op;
 	int		offset;
 	int		size;
 	int		i;
diff --git a/fs/xfs/xfs_linux.h b/fs/xfs/xfs_linux.h
index cb9105d667db..f9878021e7d0 100644
--- a/fs/xfs/xfs_linux.h
+++ b/fs/xfs/xfs_linux.h
@@ -196,7 +196,7 @@ static inline uint64_t howmany_64(uint64_t x, uint32_t y)
 }
 
 int xfs_rw_bdev(struct block_device *bdev, sector_t sector, unsigned int count,
-		char *data, unsigned int op);
+		char *data, enum req_op op);
 
 #define ASSERT_ALWAYS(expr)	\
 	(likely(expr) ? (void)0 : assfail(NULL, #expr, __FILE__, __LINE__))
diff --git a/fs/xfs/xfs_log_recover.c b/fs/xfs/xfs_log_recover.c
index 5f7e4e6e33ce..940c8107cbd4 100644
--- a/fs/xfs/xfs_log_recover.c
+++ b/fs/xfs/xfs_log_recover.c
@@ -122,7 +122,7 @@ xlog_do_io(
 	xfs_daddr_t		blk_no,
 	unsigned int		nbblks,
 	char			*data,
-	unsigned int		op)
+	enum req_op		op)
 {
 	int			error;
 

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 63/63] fs/zonefs: Use the enum req_op type for request operations
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (61 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 62/63] fs/xfs: " Bart Van Assche
@ 2022-06-29 23:31 ` Bart Van Assche
  2022-06-30  6:02   ` Johannes Thumshirn
  2022-06-30 23:39   ` Damien Le Moal
  2022-07-13 21:48 ` [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
  63 siblings, 2 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-06-29 23:31 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Damien Le Moal,
	Naohiro Aota, Johannes Thumshirn

Cc: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Cc: Naohiro Aota <naohiro.aota@wdc.com>
Cc: Johannes Thumshirn <jth@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 fs/zonefs/trace.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/zonefs/trace.h b/fs/zonefs/trace.h
index 21501da764bd..42edcfd393ed 100644
--- a/fs/zonefs/trace.h
+++ b/fs/zonefs/trace.h
@@ -25,7 +25,7 @@ TRACE_EVENT(zonefs_zone_mgmt,
 	    TP_STRUCT__entry(
 			     __field(dev_t, dev)
 			     __field(ino_t, ino)
-			     __field(int, op)
+			     __field(enum req_op, op)
 			     __field(sector_t, sector)
 			     __field(sector_t, nr_sectors)
 	    ),

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* RE: [PATCH v2 10/63] blktrace: Trace remapped requests correctly
  2022-06-29 23:30 ` [PATCH v2 10/63] blktrace: Trace remapped requests correctly Bart Van Assche
@ 2022-06-30  2:05   ` NOMURA JUNICHI(野村 淳一)
  2022-06-30 18:13     ` Bart Van Assche
  0 siblings, 1 reply; 96+ messages in thread
From: NOMURA JUNICHI(野村 淳一) @ 2022-06-30  2:05 UTC (permalink / raw)
  To: Bart Van Assche, Jens Axboe; +Cc: linux-block, Christoph Hellwig, Mike Snitzer

From: Bart Van Assche <bvanassche@acm.org>
> Trace the remapped operation and its flags instead of only the data
> direction of remapped operations. This issue was detected by analyzing
> the warnings reported by sparse related to the new blk_opf_t type.
> 
> Cc: Mike Snitzer <snitzer@kernel.org>
> Cc: Jun'ichi Nomura <junichi.nomura@nec.com>
> Fixes: b0da3f0dada7 ("Add a tracepoint for block request remapping")
.. 
>  	__blk_add_trace(bt, blk_rq_pos(rq), blk_rq_bytes(rq),
> -			rq_data_dir(rq), 0, BLK_TA_REMAP, 0,
> +			req_op(rq), rq->cmd_flags, BLK_TA_REMAP, 0,
>  			sizeof(r), &r, blk_trace_request_get_cgid(rq));

Thank you.  I think the change is fine but what it really fixes is
1b9a9ab78b0a ("blktrace: use op accessors"), where the arguments of
__blk_add_trace() were extended.

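For context, a sketch of what the old rq_data_dir()-based call cannot express (illustrative only; the helper name is made up, req_op() and the REQ_OP_* values are the usual block layer accessors):

#include <linux/blk-mq.h>

/*
 * True for requests whose operation cannot be recovered from the data
 * direction alone, e.g. REQ_OP_DISCARD or REQ_OP_FLUSH.
 */
static bool sketch_op_lost_by_data_dir(const struct request *rq)
{
        return req_op(rq) != REQ_OP_READ && req_op(rq) != REQ_OP_WRITE;
}
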
-- 
Jun'ichi Nomura, NEC Corporation / NEC Solution Innovators, Ltd.


^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 63/63] fs/zonefs: Use the enum req_op type for request operations
  2022-06-29 23:31 ` [PATCH v2 63/63] fs/zonefs: Use the enum req_op type for request operations Bart Van Assche
@ 2022-06-30  6:02   ` Johannes Thumshirn
  2022-06-30 23:39   ` Damien Le Moal
  1 sibling, 0 replies; 96+ messages in thread
From: Johannes Thumshirn @ 2022-06-30  6:02 UTC (permalink / raw)
  To: Bart Van Assche, Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Naohiro Aota,
	Johannes Thumshirn

On 30.06.22 01:35, Bart Van Assche wrote:
> Cc: Damien Le Moal <damien.lemoal@opensource.wdc.com>
> Cc: Naohiro Aota <naohiro.aota@wdc.com>
> Cc: Johannes Thumshirn <jth@kernel.org>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> ---
>  fs/zonefs/trace.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/fs/zonefs/trace.h b/fs/zonefs/trace.h
> index 21501da764bd..42edcfd393ed 100644
> --- a/fs/zonefs/trace.h
> +++ b/fs/zonefs/trace.h
> @@ -25,7 +25,7 @@ TRACE_EVENT(zonefs_zone_mgmt,
>  	    TP_STRUCT__entry(
>  			     __field(dev_t, dev)
>  			     __field(ino_t, ino)
> -			     __field(int, op)
> +			     __field(enum req_op, op)
>  			     __field(sector_t, sector)
>  			     __field(sector_t, nr_sectors)
>  	    ),
> 
Looks good,
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 17/63] xen-blkback: Use the enum req_op and blk_opf_t types
  2022-06-29 23:30 ` [PATCH v2 17/63] xen-blkback: Use the enum req_op and blk_opf_t types Bart Van Assche
@ 2022-06-30  7:17   ` Roger Pau Monné
  0 siblings, 0 replies; 96+ messages in thread
From: Roger Pau Monné @ 2022-06-30  7:17 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: Jens Axboe, linux-block, Christoph Hellwig

On Wed, Jun 29, 2022 at 04:30:59PM -0700, Bart Van Assche wrote:
> Improve static type checking by using the enum req_op type for request
> operations and the new blk_opf_t type for request flags.
> 
> Cc: Roger Pau Monné <roger.pau@citrix.com>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>

Seems sensible:

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 07/63] block/bfq: Use the new blk_opf_t type
  2022-06-29 23:30 ` [PATCH v2 07/63] block/bfq: " Bart Van Assche
@ 2022-06-30  8:33   ` Jan Kara
  0 siblings, 0 replies; 96+ messages in thread
From: Jan Kara @ 2022-06-30  8:33 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: Jens Axboe, linux-block, Christoph Hellwig, Jan Kara

On Wed 29-06-22 16:30:49, Bart Van Assche wrote:
> Use the new blk_opf_t type for arguments and variables that represent
> request flags or a bitwise combination of a request operation and
> request flags.
> 
> This patch does not change any functionality.
> 
> Cc: Jan Kara <jack@suse.cz>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>

Sure. Feel free to add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  block/bfq-cgroup.c  | 16 ++++++++--------
>  block/bfq-iosched.c |  8 ++++----
>  block/bfq-iosched.h |  8 ++++----
>  3 files changed, 16 insertions(+), 16 deletions(-)
> 
> diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
> index 9fc605791b1e..af79028627a5 100644
> --- a/block/bfq-cgroup.c
> +++ b/block/bfq-cgroup.c
> @@ -220,7 +220,7 @@ void bfqg_stats_update_avg_queue_size(struct bfq_group *bfqg)
>  }
>  
>  void bfqg_stats_update_io_add(struct bfq_group *bfqg, struct bfq_queue *bfqq,
> -			      unsigned int op)
> +			      blk_opf_t op)
>  {
>  	blkg_rwstat_add(&bfqg->stats.queued, op, 1);
>  	bfqg_stats_end_empty_time(&bfqg->stats);
> @@ -228,18 +228,18 @@ void bfqg_stats_update_io_add(struct bfq_group *bfqg, struct bfq_queue *bfqq,
>  		bfqg_stats_set_start_group_wait_time(bfqg, bfqq_group(bfqq));
>  }
>  
> -void bfqg_stats_update_io_remove(struct bfq_group *bfqg, unsigned int op)
> +void bfqg_stats_update_io_remove(struct bfq_group *bfqg, blk_opf_t op)
>  {
>  	blkg_rwstat_add(&bfqg->stats.queued, op, -1);
>  }
>  
> -void bfqg_stats_update_io_merged(struct bfq_group *bfqg, unsigned int op)
> +void bfqg_stats_update_io_merged(struct bfq_group *bfqg, blk_opf_t op)
>  {
>  	blkg_rwstat_add(&bfqg->stats.merged, op, 1);
>  }
>  
>  void bfqg_stats_update_completion(struct bfq_group *bfqg, u64 start_time_ns,
> -				  u64 io_start_time_ns, unsigned int op)
> +				  u64 io_start_time_ns, blk_opf_t op)
>  {
>  	struct bfqg_stats *stats = &bfqg->stats;
>  	u64 now = ktime_get_ns();
> @@ -255,11 +255,11 @@ void bfqg_stats_update_completion(struct bfq_group *bfqg, u64 start_time_ns,
>  #else /* CONFIG_BFQ_CGROUP_DEBUG */
>  
>  void bfqg_stats_update_io_add(struct bfq_group *bfqg, struct bfq_queue *bfqq,
> -			      unsigned int op) { }
> -void bfqg_stats_update_io_remove(struct bfq_group *bfqg, unsigned int op) { }
> -void bfqg_stats_update_io_merged(struct bfq_group *bfqg, unsigned int op) { }
> +			      blk_opf_t op) { }
> +void bfqg_stats_update_io_remove(struct bfq_group *bfqg, blk_opf_t op) { }
> +void bfqg_stats_update_io_merged(struct bfq_group *bfqg, blk_opf_t op) { }
>  void bfqg_stats_update_completion(struct bfq_group *bfqg, u64 start_time_ns,
> -				  u64 io_start_time_ns, unsigned int op) { }
> +				  u64 io_start_time_ns, blk_opf_t op) { }
>  void bfqg_stats_update_dequeue(struct bfq_group *bfqg) { }
>  void bfqg_stats_set_start_empty_time(struct bfq_group *bfqg) { }
>  void bfqg_stats_update_idle_time(struct bfq_group *bfqg) { }
> diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
> index e6d7e6b01a05..a724fc882158 100644
> --- a/block/bfq-iosched.c
> +++ b/block/bfq-iosched.c
> @@ -668,7 +668,7 @@ static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit)
>   * significantly affect service guarantees coming from the BFQ scheduling
>   * algorithm.
>   */
> -static void bfq_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
> +static void bfq_limit_depth(blk_opf_t op, struct blk_mq_alloc_data *data)
>  {
>  	struct bfq_data *bfqd = data->q->elevator->elevator_data;
>  	struct bfq_io_cq *bic = bfq_bic_lookup(data->q);
> @@ -6104,7 +6104,7 @@ static bool __bfq_insert_request(struct bfq_data *bfqd, struct request *rq)
>  static void bfq_update_insert_stats(struct request_queue *q,
>  				    struct bfq_queue *bfqq,
>  				    bool idle_timer_disabled,
> -				    unsigned int cmd_flags)
> +				    blk_opf_t cmd_flags)
>  {
>  	if (!bfqq)
>  		return;
> @@ -6129,7 +6129,7 @@ static void bfq_update_insert_stats(struct request_queue *q,
>  static inline void bfq_update_insert_stats(struct request_queue *q,
>  					   struct bfq_queue *bfqq,
>  					   bool idle_timer_disabled,
> -					   unsigned int cmd_flags) {}
> +					   blk_opf_t cmd_flags) {}
>  #endif /* CONFIG_BFQ_CGROUP_DEBUG */
>  
>  static struct bfq_queue *bfq_init_rq(struct request *rq);
> @@ -6141,7 +6141,7 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
>  	struct bfq_data *bfqd = q->elevator->elevator_data;
>  	struct bfq_queue *bfqq;
>  	bool idle_timer_disabled = false;
> -	unsigned int cmd_flags;
> +	blk_opf_t cmd_flags;
>  	LIST_HEAD(free);
>  
>  #ifdef CONFIG_BFQ_GROUP_IOSCHED
> diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
> index ca8177d7bf7c..6bde1f4ecc50 100644
> --- a/block/bfq-iosched.h
> +++ b/block/bfq-iosched.h
> @@ -994,11 +994,11 @@ void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg);
>  
>  void bfqg_stats_update_legacy_io(struct request_queue *q, struct request *rq);
>  void bfqg_stats_update_io_add(struct bfq_group *bfqg, struct bfq_queue *bfqq,
> -			      unsigned int op);
> -void bfqg_stats_update_io_remove(struct bfq_group *bfqg, unsigned int op);
> -void bfqg_stats_update_io_merged(struct bfq_group *bfqg, unsigned int op);
> +			      blk_opf_t op);
> +void bfqg_stats_update_io_remove(struct bfq_group *bfqg, blk_opf_t op);
> +void bfqg_stats_update_io_merged(struct bfq_group *bfqg, blk_opf_t op);
>  void bfqg_stats_update_completion(struct bfq_group *bfqg, u64 start_time_ns,
> -				  u64 io_start_time_ns, unsigned int op);
> +				  u64 io_start_time_ns, blk_opf_t op);
>  void bfqg_stats_update_dequeue(struct bfq_group *bfqg);
>  void bfqg_stats_set_start_empty_time(struct bfq_group *bfqg);
>  void bfqg_stats_update_idle_time(struct bfq_group *bfqg);
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 46/63] fs/buffer: Use the new blk_opf_t type
  2022-06-29 23:31 ` [PATCH v2 46/63] fs/buffer: " Bart Van Assche
@ 2022-06-30  8:34   ` Jan Kara
  0 siblings, 0 replies; 96+ messages in thread
From: Jan Kara @ 2022-06-30  8:34 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, linux-block, Christoph Hellwig, Matthew Wilcox, Jan Kara

On Wed 29-06-22 16:31:28, Bart Van Assche wrote:
> Improve static type checking by using the new blk_opf_t type for block layer
> request flags. Change WRITE into REQ_OP_WRITE. This patch does not change
> any functionality since REQ_OP_WRITE == WRITE == 1.
> 
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Jan Kara <jack@suse.cz>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>

Looks good to me. Feel free to add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  fs/buffer.c                 | 21 +++++++++++----------
>  include/linux/buffer_head.h |  9 +++++----
>  2 files changed, 16 insertions(+), 14 deletions(-)
> 
> diff --git a/fs/buffer.c b/fs/buffer.c
> index 898c7f301b1b..4a00b61f35ec 100644
> --- a/fs/buffer.c
> +++ b/fs/buffer.c
> @@ -52,8 +52,8 @@
>  #include "internal.h"
>  
>  static int fsync_buffers_list(spinlock_t *lock, struct list_head *list);
> -static int submit_bh_wbc(int op, int op_flags, struct buffer_head *bh,
> -			 struct writeback_control *wbc);
> +static int submit_bh_wbc(enum req_op op, blk_opf_t op_flags,
> +			 struct buffer_head *bh, struct writeback_control *wbc);
>  
>  #define BH_ENTRY(list) list_entry((list), struct buffer_head, b_assoc_buffers)
>  
> @@ -1716,7 +1716,7 @@ int __block_write_full_page(struct inode *inode, struct page *page,
>  	struct buffer_head *bh, *head;
>  	unsigned int blocksize, bbits;
>  	int nr_underway = 0;
> -	int write_flags = wbc_to_write_flags(wbc);
> +	blk_opf_t write_flags = wbc_to_write_flags(wbc);
>  
>  	head = create_page_buffers(page, inode,
>  					(1 << BH_Dirty)|(1 << BH_Uptodate));
> @@ -2994,8 +2994,8 @@ static void end_bio_bh_io_sync(struct bio *bio)
>  	bio_put(bio);
>  }
>  
> -static int submit_bh_wbc(int op, int op_flags, struct buffer_head *bh,
> -			 struct writeback_control *wbc)
> +static int submit_bh_wbc(enum req_op op, blk_opf_t op_flags,
> +			 struct buffer_head *bh, struct writeback_control *wbc)
>  {
>  	struct bio *bio;
>  
> @@ -3040,7 +3040,7 @@ static int submit_bh_wbc(int op, int op_flags, struct buffer_head *bh,
>  	return 0;
>  }
>  
> -int submit_bh(int op, int op_flags, struct buffer_head *bh)
> +int submit_bh(enum req_op op, blk_opf_t op_flags, struct buffer_head *bh)
>  {
>  	return submit_bh_wbc(op, op_flags, bh, NULL);
>  }
> @@ -3072,7 +3072,8 @@ EXPORT_SYMBOL(submit_bh);
>   * All of the buffers must be for the same device, and must also be a
>   * multiple of the current approved size for the device.
>   */
> -void ll_rw_block(int op, int op_flags,  int nr, struct buffer_head *bhs[])
> +void ll_rw_block(enum req_op op, blk_opf_t op_flags, int nr,
> +		 struct buffer_head *bhs[])
>  {
>  	int i;
>  
> @@ -3081,7 +3082,7 @@ void ll_rw_block(int op, int op_flags,  int nr, struct buffer_head *bhs[])
>  
>  		if (!trylock_buffer(bh))
>  			continue;
> -		if (op == WRITE) {
> +		if (op == REQ_OP_WRITE) {
>  			if (test_clear_buffer_dirty(bh)) {
>  				bh->b_end_io = end_buffer_write_sync;
>  				get_bh(bh);
> @@ -3101,7 +3102,7 @@ void ll_rw_block(int op, int op_flags,  int nr, struct buffer_head *bhs[])
>  }
>  EXPORT_SYMBOL(ll_rw_block);
>  
> -void write_dirty_buffer(struct buffer_head *bh, int op_flags)
> +void write_dirty_buffer(struct buffer_head *bh, blk_opf_t op_flags)
>  {
>  	lock_buffer(bh);
>  	if (!test_clear_buffer_dirty(bh)) {
> @@ -3119,7 +3120,7 @@ EXPORT_SYMBOL(write_dirty_buffer);
>   * and then start new I/O and then wait upon it.  The caller must have a ref on
>   * the buffer_head.
>   */
> -int __sync_dirty_buffer(struct buffer_head *bh, int op_flags)
> +int __sync_dirty_buffer(struct buffer_head *bh, blk_opf_t op_flags)
>  {
>  	int ret = 0;
>  
> diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
> index c9d1463bb20f..9795df9400bd 100644
> --- a/include/linux/buffer_head.h
> +++ b/include/linux/buffer_head.h
> @@ -9,6 +9,7 @@
>  #define _LINUX_BUFFER_HEAD_H
>  
>  #include <linux/types.h>
> +#include <linux/blk_types.h>
>  #include <linux/fs.h>
>  #include <linux/linkage.h>
>  #include <linux/pagemap.h>
> @@ -201,11 +202,11 @@ struct buffer_head *alloc_buffer_head(gfp_t gfp_flags);
>  void free_buffer_head(struct buffer_head * bh);
>  void unlock_buffer(struct buffer_head *bh);
>  void __lock_buffer(struct buffer_head *bh);
> -void ll_rw_block(int, int, int, struct buffer_head * bh[]);
> +void ll_rw_block(enum req_op, blk_opf_t, int, struct buffer_head * bh[]);
>  int sync_dirty_buffer(struct buffer_head *bh);
> -int __sync_dirty_buffer(struct buffer_head *bh, int op_flags);
> -void write_dirty_buffer(struct buffer_head *bh, int op_flags);
> -int submit_bh(int, int, struct buffer_head *);
> +int __sync_dirty_buffer(struct buffer_head *bh, blk_opf_t op_flags);
> +void write_dirty_buffer(struct buffer_head *bh, blk_opf_t op_flags);
> +int submit_bh(enum req_op, blk_opf_t, struct buffer_head *);
>  void write_boundary_block(struct block_device *bdev,
>  			sector_t bblock, unsigned blocksize);
>  int bh_uptodate_or_lock(struct buffer_head *bh);
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 48/63] fs/direct-io: Reduce the size of struct dio
  2022-06-29 23:31 ` [PATCH v2 48/63] fs/direct-io: Reduce the size of struct dio Bart Van Assche
@ 2022-06-30  8:50   ` Jan Kara
  2022-06-30 19:06     ` Bart Van Assche
  0 siblings, 1 reply; 96+ messages in thread
From: Jan Kara @ 2022-06-30  8:50 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, linux-block, Christoph Hellwig, Al Viro,
	Darrick J . Wong, Jan Kara

On Wed 29-06-22 16:31:30, Bart Van Assche wrote:
> Reduce the size of struct dio by combining the 'op' and 'op_flags' into
> the new 'opf' member. Use the new blk_opf_t type to improve static type
> checking. This patch does not change any functionality.
> 
> Cc: Al Viro <viro@zeniv.linux.org.uk>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Darrick J. Wong <djwong@kernel.org>
> Cc: Jan Kara <jack@suse.cz>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>

Just one nit below.

> diff --git a/fs/direct-io.c b/fs/direct-io.c
> index 840752006f60..b72706d163f5 100644
> --- a/fs/direct-io.c
> +++ b/fs/direct-io.c
> @@ -117,8 +117,7 @@ struct dio_submit {
>  /* dio_state communicated between submission path and end_io */
>  struct dio {
>  	int flags;			/* doesn't change */
> -	int op;
> -	int op_flags;
> +	blk_opf_t opf;			/* request operation type and flags */
>  	struct gendisk *bio_disk;
>  	struct inode *inode;
>  	loff_t i_size;			/* i_size when submitted */
> @@ -154,6 +153,11 @@ struct dio {
>  
>  static struct kmem_cache *dio_cache __read_mostly;
>  
> +static inline bool op_is_read(blk_opf_t opf)
> +{
> +	return !op_is_write(opf);
> +}
> +

This is a bit dangerous although currently OK for direct IO code. I'm just
afraid someone will extend direct IO code (unlikely) or copy-paste this to
a place where there can be more operations than read & write... Maybe just
add here a comment like /* Direct IO code does only reads or writes */.
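
For instance, the helper from the patch would then read (sketch):

/* Direct IO code does only reads or writes. */
static inline bool op_is_read(blk_opf_t opf)
{
	return !op_is_write(opf);
}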

Otherwise feel free to add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 50/63] fs/btrfs: Use the enum req_op and blk_opf_t types
  2022-06-29 23:31 ` [PATCH v2 50/63] fs/btrfs: Use the enum req_op and blk_opf_t types Bart Van Assche
@ 2022-06-30 11:37   ` David Sterba
  0 siblings, 0 replies; 96+ messages in thread
From: David Sterba @ 2022-06-30 11:37 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, linux-block, Christoph Hellwig, David Sterba, Josef Bacik

On Wed, Jun 29, 2022 at 04:31:32PM -0700, Bart Van Assche wrote:
> Improve static type checking by using the enum req_op type for variables
> that represent a request operation and the new blk_opf_t type for
> variables that represent request flags.
> 
> Cc: David Sterba <dsterba@suse.com>
> Cc: Josef Bacik <josef@toxicpanda.com>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>

Acked-by: David Sterba <dsterba@suse.com>

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 61/63] PM: Use the enum req_op and blk_opf_t types
  2022-06-29 23:31 ` [PATCH v2 61/63] PM: " Bart Van Assche
@ 2022-06-30 15:21   ` Rafael J. Wysocki
  0 siblings, 0 replies; 96+ messages in thread
From: Rafael J. Wysocki @ 2022-06-30 15:21 UTC (permalink / raw)
  To: Bart Van Assche, Jens Axboe; +Cc: linux-block, Christoph Hellwig

On 6/30/2022 1:31 AM, Bart Van Assche wrote:
> Improve static type checking by using the enum req_op type for variables
> that represent a request operation and the new blk_opf_t type for
> variables that represent request flags. Combine the first two
> hib_submit_io() arguments into a single argument.
>
> Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>

Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


> ---
>   kernel/power/swap.c | 29 +++++++++++++----------------
>   1 file changed, 13 insertions(+), 16 deletions(-)
>
> diff --git a/kernel/power/swap.c b/kernel/power/swap.c
> index 91fffdd2c7fb..277434b6c0bf 100644
> --- a/kernel/power/swap.c
> +++ b/kernel/power/swap.c
> @@ -269,15 +269,14 @@ static void hib_end_io(struct bio *bio)
>   	bio_put(bio);
>   }
>   
> -static int hib_submit_io(int op, int op_flags, pgoff_t page_off, void *addr,
> -		struct hib_bio_batch *hb)
> +static int hib_submit_io(blk_opf_t opf, pgoff_t page_off, void *addr,
> +			 struct hib_bio_batch *hb)
>   {
>   	struct page *page = virt_to_page(addr);
>   	struct bio *bio;
>   	int error = 0;
>   
> -	bio = bio_alloc(hib_resume_bdev, 1, op | op_flags,
> -			GFP_NOIO | __GFP_HIGH);
> +	bio = bio_alloc(hib_resume_bdev, 1, opf, GFP_NOIO | __GFP_HIGH);
>   	bio->bi_iter.bi_sector = page_off * (PAGE_SIZE >> 9);
>   
>   	if (bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE) {
> @@ -317,8 +316,7 @@ static int mark_swapfiles(struct swap_map_handle *handle, unsigned int flags)
>   {
>   	int error;
>   
> -	hib_submit_io(REQ_OP_READ, 0, swsusp_resume_block,
> -		      swsusp_header, NULL);
> +	hib_submit_io(REQ_OP_READ, swsusp_resume_block, swsusp_header, NULL);
>   	if (!memcmp("SWAP-SPACE",swsusp_header->sig, 10) ||
>   	    !memcmp("SWAPSPACE2",swsusp_header->sig, 10)) {
>   		memcpy(swsusp_header->orig_sig,swsusp_header->sig, 10);
> @@ -331,7 +329,7 @@ static int mark_swapfiles(struct swap_map_handle *handle, unsigned int flags)
>   		swsusp_header->flags = flags;
>   		if (flags & SF_CRC32_MODE)
>   			swsusp_header->crc32 = handle->crc32;
> -		error = hib_submit_io(REQ_OP_WRITE, REQ_SYNC,
> +		error = hib_submit_io(REQ_OP_WRITE | REQ_SYNC,
>   				      swsusp_resume_block, swsusp_header, NULL);
>   	} else {
>   		pr_err("Swap header not found!\n");
> @@ -408,7 +406,7 @@ static int write_page(void *buf, sector_t offset, struct hib_bio_batch *hb)
>   	} else {
>   		src = buf;
>   	}
> -	return hib_submit_io(REQ_OP_WRITE, REQ_SYNC, offset, src, hb);
> +	return hib_submit_io(REQ_OP_WRITE | REQ_SYNC, offset, src, hb);
>   }
>   
>   static void release_swap_writer(struct swap_map_handle *handle)
> @@ -1003,7 +1001,7 @@ static int get_swap_reader(struct swap_map_handle *handle,
>   			return -ENOMEM;
>   		}
>   
> -		error = hib_submit_io(REQ_OP_READ, 0, offset, tmp->map, NULL);
> +		error = hib_submit_io(REQ_OP_READ, offset, tmp->map, NULL);
>   		if (error) {
>   			release_swap_reader(handle);
>   			return error;
> @@ -1027,7 +1025,7 @@ static int swap_read_page(struct swap_map_handle *handle, void *buf,
>   	offset = handle->cur->entries[handle->k];
>   	if (!offset)
>   		return -EFAULT;
> -	error = hib_submit_io(REQ_OP_READ, 0, offset, buf, hb);
> +	error = hib_submit_io(REQ_OP_READ, offset, buf, hb);
>   	if (error)
>   		return error;
>   	if (++handle->k >= MAP_PAGE_ENTRIES) {
> @@ -1526,8 +1524,7 @@ int swsusp_check(void)
>   	if (!IS_ERR(hib_resume_bdev)) {
>   		set_blocksize(hib_resume_bdev, PAGE_SIZE);
>   		clear_page(swsusp_header);
> -		error = hib_submit_io(REQ_OP_READ, 0,
> -					swsusp_resume_block,
> +		error = hib_submit_io(REQ_OP_READ, swsusp_resume_block,
>   					swsusp_header, NULL);
>   		if (error)
>   			goto put;
> @@ -1535,7 +1532,7 @@ int swsusp_check(void)
>   		if (!memcmp(HIBERNATE_SIG, swsusp_header->sig, 10)) {
>   			memcpy(swsusp_header->sig, swsusp_header->orig_sig, 10);
>   			/* Reset swap signature now */
> -			error = hib_submit_io(REQ_OP_WRITE, REQ_SYNC,
> +			error = hib_submit_io(REQ_OP_WRITE | REQ_SYNC,
>   						swsusp_resume_block,
>   						swsusp_header, NULL);
>   		} else {
> @@ -1586,11 +1583,11 @@ int swsusp_unmark(void)
>   {
>   	int error;
>   
> -	hib_submit_io(REQ_OP_READ, 0, swsusp_resume_block,
> -		      swsusp_header, NULL);
> +	hib_submit_io(REQ_OP_READ, swsusp_resume_block,
> +			swsusp_header, NULL);
>   	if (!memcmp(HIBERNATE_SIG,swsusp_header->sig, 10)) {
>   		memcpy(swsusp_header->sig,swsusp_header->orig_sig, 10);
> -		error = hib_submit_io(REQ_OP_WRITE, REQ_SYNC,
> +		error = hib_submit_io(REQ_OP_WRITE | REQ_SYNC,
>   					swsusp_resume_block,
>   					swsusp_header, NULL);
>   	} else {



^ permalink raw reply	[flat|nested] 96+ messages in thread
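
For context, the conversion above follows the pattern used throughout the
series: the operation and its flags travel together in one blk_opf_t value,
and sparse can enforce the typing because blk_opf_t is declared __bitwise.
A simplified sketch of the idea, not taken verbatim from the patches (the
real definitions live in include/linux/blk_types.h; the function names and
the REQ_SYNC bit position below are illustrative):

#ifndef __CHECKER__	/* outside sparse the annotations are no-ops */
#define __bitwise
#define __force
#endif

typedef unsigned int __bitwise blk_opf_t;  /* operation in the low bits, flags above */

enum req_op {
	REQ_OP_READ	= (__force blk_opf_t)0,
	REQ_OP_WRITE	= (__force blk_opf_t)1,
	/* ... */
};

#define REQ_SYNC	((__force blk_opf_t)(1U << 11))

/* Old style: sparse cannot tell the operation argument from the flags. */
int submit_io_old(int op, int op_flags);

/* New style: passing a bare int, or a value of an unrelated __bitwise type,
 * draws a "restricted blk_opf_t" warning from sparse. */
int submit_io_new(blk_opf_t opf);

A call then reads submit_io_new(REQ_OP_WRITE | REQ_SYNC) instead of
submit_io_old(REQ_OP_WRITE, REQ_SYNC), matching the hib_submit_io() hunks
above.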

* Re: [PATCH v2 10/63] blktrace: Trace remapped requests correctly
  2022-06-30  2:05   ` NOMURA JUNICHI(野村 淳一)
@ 2022-06-30 18:13     ` Bart Van Assche
  2022-06-30 23:56       ` NOMURA JUNICHI(野村 淳一)
  0 siblings, 1 reply; 96+ messages in thread
From: Bart Van Assche @ 2022-06-30 18:13 UTC (permalink / raw)
  To: NOMURA JUNICHI(野村 淳一), Jens Axboe
  Cc: linux-block, Christoph Hellwig, Mike Snitzer

On 6/29/22 19:05, NOMURA JUNICHI(野村 淳一) wrote:
> From: Bart Van Assche <bvanassche@acm.org>
>> Trace the remapped operation and its flags instead of only the data
>> direction of remapped operations. This issue was detected by analyzing
>> the warnings reported by sparse related to the new blk_opf_t type.
>>
>> Cc: Mike Snitzer <snitzer@kernel.org>
>> Cc: Jun'ichi Nomura <junichi.nomura@nec.com>
>> Fixes: b0da3f0dada7 ("Add a tracepoint for block request remapping")
> ..
>>   	__blk_add_trace(bt, blk_rq_pos(rq), blk_rq_bytes(rq),
>> -			rq_data_dir(rq), 0, BLK_TA_REMAP, 0,
>> +			req_op(rq), rq->cmd_flags, BLK_TA_REMAP, 0,
>>   			sizeof(r), &r, blk_trace_request_get_cgid(rq));
> 
> Thank you.  I think the change is fine but what it really fixes is
> 1b9a9ab78b0a ("blktrace: use op accessors"), where the arguments of
> __blk_add_trace() were extended.

Thanks for the feedback. I will update the "Fixes" tag.

BTW, does the above reply count as a "Reviewed-by"?

Thanks,

Bart.



^ permalink raw reply	[flat|nested] 96+ messages in thread
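
To spell out the bug being fixed: rq_data_dir() collapses every operation to
a read/write data direction, so tracing only the direction (plus a hard-coded
0 for the flags) mislabels any remapped request that is not a plain read or
write. A hedged sketch of the distinction, with the helper semantics
simplified from the block layer headers:

/* Roughly what the helpers report (illustrative; see blkdev.h/blk_types.h). */
static inline bool op_is_write(blk_opf_t op)
{
	return (__force unsigned int)op & 1;	/* write-type ops have odd numbers */
}
#define rq_data_dir(rq)	(op_is_write(req_op(rq)) ? WRITE : READ)

/*
 * For a remapped REQ_OP_DISCARD | REQ_SYNC request:
 *   before: __blk_add_trace(..., rq_data_dir(rq), 0, ...)
 *           -> recorded as a plain WRITE, flags dropped
 *   after:  __blk_add_trace(..., req_op(rq), rq->cmd_flags, ...)
 *           -> recorded as REQ_OP_DISCARD with REQ_SYNC preserved
 */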

* Re: [PATCH v2 34/63] md/raid1: Use the new blk_opf_t type
  2022-06-29 23:31 ` [PATCH v2 34/63] md/raid1: Use the new blk_opf_t type Bart Van Assche
@ 2022-06-30 18:41   ` Song Liu
  0 siblings, 0 replies; 96+ messages in thread
From: Song Liu @ 2022-06-30 18:41 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: Jens Axboe, linux-block, Christoph Hellwig

On Wed, Jun 29, 2022 at 4:32 PM Bart Van Assche <bvanassche@acm.org> wrote:
>
> Improve static type checking by using the new blk_opf_t type for
> variables that represent request flags.
>
> Cc: Song Liu <song@kernel.org>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>

Acked-by: Song Liu <song@kernel.org>

> ---
>  drivers/md/raid1.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
> index e7cd1d30d68a..8180b57be040 100644
> --- a/drivers/md/raid1.c
> +++ b/drivers/md/raid1.c
> @@ -1220,8 +1220,8 @@ static void raid1_read_request(struct mddev *mddev, struct bio *bio,
>         struct raid1_info *mirror;
>         struct bio *read_bio;
>         struct bitmap *bitmap = mddev->bitmap;
> -       const int op = bio_op(bio);
> -       const unsigned long do_sync = (bio->bi_opf & REQ_SYNC);
> +       const enum req_op op = bio_op(bio);
> +       const blk_opf_t do_sync = bio->bi_opf & REQ_SYNC;
>         int max_sectors;
>         int rdisk;
>         bool r1bio_existed = !!r1_bio;

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 35/63] md/raid10: Use the new blk_opf_t type
  2022-06-29 23:31 ` [PATCH v2 35/63] md/raid10: " Bart Van Assche
@ 2022-06-30 18:41   ` Song Liu
  0 siblings, 0 replies; 96+ messages in thread
From: Song Liu @ 2022-06-30 18:41 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: Jens Axboe, linux-block, Christoph Hellwig

On Wed, Jun 29, 2022 at 4:32 PM Bart Van Assche <bvanassche@acm.org> wrote:
>
> Improve static type checking by using the new blk_opf_t type for
> variables that represent request flags.
>
> Cc: Song Liu <song@kernel.org>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Acked-by: Song Liu <song@kernel.org>

> ---
>  drivers/md/raid10.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
> index 910d3cd73105..ac8bc7d2565a 100644
> --- a/drivers/md/raid10.c
> +++ b/drivers/md/raid10.c
> @@ -1136,8 +1136,8 @@ static void raid10_read_request(struct mddev *mddev, struct bio *bio,
>  {
>         struct r10conf *conf = mddev->private;
>         struct bio *read_bio;
> -       const int op = bio_op(bio);
> -       const unsigned long do_sync = (bio->bi_opf & REQ_SYNC);
> +       const enum req_op op = bio_op(bio);
> +       const blk_opf_t do_sync = bio->bi_opf & REQ_SYNC;
>         int max_sectors;
>         struct md_rdev *rdev;
>         char b[BDEVNAME_SIZE];
> @@ -1230,9 +1230,9 @@ static void raid10_write_one_disk(struct mddev *mddev, struct r10bio *r10_bio,
>                                   struct bio *bio, bool replacement,
>                                   int n_copy)
>  {
> -       const int op = bio_op(bio);
> -       const unsigned long do_sync = (bio->bi_opf & REQ_SYNC);
> -       const unsigned long do_fua = (bio->bi_opf & REQ_FUA);
> +       const enum req_op op = bio_op(bio);
> +       const blk_opf_t do_sync = bio->bi_opf & REQ_SYNC;
> +       const blk_opf_t do_fua = bio->bi_opf & REQ_FUA;
>         unsigned long flags;
>         struct blk_plug_cb *cb;
>         struct raid1_plug_cb *plug = NULL;

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 36/63] md/raid5: Use the enum req_op and blk_opf_t types
  2022-06-29 23:31 ` [PATCH v2 36/63] md/raid5: Use the enum req_op and blk_opf_t types Bart Van Assche
@ 2022-06-30 18:41   ` Song Liu
  0 siblings, 0 replies; 96+ messages in thread
From: Song Liu @ 2022-06-30 18:41 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: Jens Axboe, linux-block, Christoph Hellwig

On Wed, Jun 29, 2022 at 4:32 PM Bart Van Assche <bvanassche@acm.org> wrote:
>
> Improve static type checking by using the enum req_op type for variables
> that represent a request operation and the new blk_opf_t type for
> variables that represent request flags.
>
> Cc: Song Liu <song@kernel.org>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>

Acked-by: Song Liu <song@kernel.org>

> ---
>  drivers/md/raid5.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> index 5d09256d7f81..b11d8b6a2dc2 100644
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
> @@ -1082,7 +1082,8 @@ static void ops_run_io(struct stripe_head *sh, struct stripe_head_state *s)
>         should_defer = conf->batch_bio_dispatch && conf->group_cnt;
>
>         for (i = disks; i--; ) {
> -               int op, op_flags = 0;
> +               enum req_op op;
> +               blk_opf_t op_flags = 0;
>                 int replace_only = 0;
>                 struct bio *bi, *rbi;
>                 struct md_rdev *rdev, *rrdev = NULL;
> @@ -5896,7 +5897,7 @@ static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
>                         (unsigned long long)logical_sector);
>
>                 sh = raid5_get_active_stripe(conf, new_sector, previous,
> -                                      (bi->bi_opf & REQ_RAHEAD), 0);
> +                                            !!(bi->bi_opf & REQ_RAHEAD), 0);
>                 if (sh) {
>                         if (unlikely(previous)) {
>                                 /* expansion might have moved on while waiting for a

^ permalink raw reply	[flat|nested] 96+ messages in thread
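
The !! added in the last hunk is the idiom this series uses whenever a
__bitwise flag test feeds a plain int parameter: bi->bi_opf & REQ_RAHEAD has
type blk_opf_t, so passing it straight into an int argument would draw a
sparse restricted-type warning, while !! reduces it to a plain 0 or 1 first.
A small illustration (the helper below is a stand-in; the real
raid5_get_active_stripe() takes more arguments):

/* Sketch only; assumes kernel headers for struct bio and REQ_RAHEAD. */
static void get_stripe(int noblock);

static void example(struct bio *bi)
{
	/* get_stripe(bi->bi_opf & REQ_RAHEAD);
	 *     ^ sparse: restricted blk_opf_t degrades to integer */
	get_stripe(!!(bi->bi_opf & REQ_RAHEAD));  /* plain int 0 or 1, same intent */
}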

* Re: [PATCH v2 47/63] fs/buffer: Combine two submit_bh() and ll_rw_block() arguments
  2022-06-29 23:31 ` [PATCH v2 47/63] fs/buffer: Combine two submit_bh() and ll_rw_block() arguments Bart Van Assche
@ 2022-06-30 18:43   ` Song Liu
  0 siblings, 0 replies; 96+ messages in thread
From: Song Liu @ 2022-06-30 18:43 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, linux-block, Christoph Hellwig, Alexander Viro

On Wed, Jun 29, 2022 at 4:33 PM Bart Van Assche <bvanassche@acm.org> wrote:
>
> Both submit_bh() and ll_rw_block() accept a request operation type and
> request flags as their first two arguments. Micro-optimize these two
> functions by combining these first two arguments into a single argument.
> This patch does not change the behavior of any of the modified code.
>
> Cc: Alexander Viro <viro@zeniv.linux.org.uk>
> Cc: Song Liu <song@kernel.org>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> ---
>  drivers/md/md-bitmap.c      |  4 +--

For the md bits:

Acked-by: Song Liu <song@kernel.org>

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 48/63] fs/direct-io: Reduce the size of struct dio
  2022-06-30  8:50   ` Jan Kara
@ 2022-06-30 19:06     ` Bart Van Assche
  2022-07-01 10:58       ` Jan Kara
  0 siblings, 1 reply; 96+ messages in thread
From: Bart Van Assche @ 2022-06-30 19:06 UTC (permalink / raw)
  To: Jan Kara
  Cc: Jens Axboe, linux-block, Christoph Hellwig, Al Viro, Darrick J . Wong

On 6/30/22 01:50, Jan Kara wrote:
> On Wed 29-06-22 16:31:30, Bart Van Assche wrote:
>> +static inline bool op_is_read(blk_opf_t opf)
>> +{
>> +	return !op_is_write(opf);
>> +}
>> +
> 
> This is a bit dangerous although currently OK for direct IO code. I'm just
> afraid someone will extend direct IO code (unlikely) or copy-paste this to
> a place where there can be more operations than read & write... Maybe just
> add here a comment like /* Direct IO code does only reads or writes */.

Hi Jan,

Since source code comments tend to get overlooked, how about replacing
this patch by the patch below?

Thanks,

Bart.

---
  fs/direct-io.c | 40 +++++++++++++++++++++++-----------------
  1 file changed, 23 insertions(+), 17 deletions(-)

diff --git a/fs/direct-io.c b/fs/direct-io.c
index 840752006f60..94b71440c332 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -117,8 +117,7 @@ struct dio_submit {
  /* dio_state communicated between submission path and end_io */
  struct dio {
  	int flags;			/* doesn't change */
-	int op;
-	int op_flags;
+	blk_opf_t opf;			/* request operation type and flags */
  	struct gendisk *bio_disk;
  	struct inode *inode;
  	loff_t i_size;			/* i_size when submitted */
@@ -167,12 +166,13 @@ static inline unsigned dio_pages_present(struct dio_submit *sdio)
   */
  static inline int dio_refill_pages(struct dio *dio, struct dio_submit *sdio)
  {
+	const enum req_op dio_op = dio->opf & REQ_OP_MASK;
  	ssize_t ret;

  	ret = iov_iter_get_pages(sdio->iter, dio->pages, LONG_MAX, DIO_PAGES,
  				&sdio->from);

-	if (ret < 0 && sdio->blocks_available && (dio->op == REQ_OP_WRITE)) {
+	if (ret < 0 && sdio->blocks_available && dio_op == REQ_OP_WRITE) {
  		struct page *page = ZERO_PAGE(0);
  		/*
  		 * A memory fault, but the filesystem has some outstanding
@@ -234,6 +234,7 @@ static inline struct page *dio_get_page(struct dio *dio,
   */
  static ssize_t dio_complete(struct dio *dio, ssize_t ret, unsigned int flags)
  {
+	const enum req_op dio_op = dio->opf & REQ_OP_MASK;
  	loff_t offset = dio->iocb->ki_pos;
  	ssize_t transferred = 0;
  	int err;
@@ -251,7 +252,7 @@ static ssize_t dio_complete(struct dio *dio, ssize_t ret, unsigned int flags)
  		transferred = dio->result;

  		/* Check for short read case */
-		if ((dio->op == REQ_OP_READ) &&
+		if (dio_op == REQ_OP_READ &&
  		    ((offset + transferred) > dio->i_size))
  			transferred = dio->i_size - offset;
  		/* ignore EFAULT if some IO has been done */
@@ -286,7 +287,7 @@ static ssize_t dio_complete(struct dio *dio, ssize_t ret, unsigned int flags)
  	 * zeros from unwritten extents.
  	 */
  	if (flags & DIO_COMPLETE_INVALIDATE &&
-	    ret > 0 && dio->op == REQ_OP_WRITE &&
+	    ret > 0 && dio_op == REQ_OP_WRITE &&
  	    dio->inode->i_mapping->nrpages) {
  		err = invalidate_inode_pages2_range(dio->inode->i_mapping,
  					offset >> PAGE_SHIFT,
@@ -305,7 +306,7 @@ static ssize_t dio_complete(struct dio *dio, ssize_t ret, unsigned int flags)
  		 */
  		dio->iocb->ki_pos += transferred;

-		if (ret > 0 && dio->op == REQ_OP_WRITE)
+		if (ret > 0 && dio_op == REQ_OP_WRITE)
  			ret = generic_write_sync(dio->iocb, ret);
  		dio->iocb->ki_complete(dio->iocb, ret);
  	}
@@ -329,6 +330,7 @@ static blk_status_t dio_bio_complete(struct dio *dio, struct bio *bio);
  static void dio_bio_end_aio(struct bio *bio)
  {
  	struct dio *dio = bio->bi_private;
+	const enum req_op dio_op = dio->opf & REQ_OP_MASK;
  	unsigned long remaining;
  	unsigned long flags;
  	bool defer_completion = false;
@@ -353,7 +355,7 @@ static void dio_bio_end_aio(struct bio *bio)
  		 */
  		if (dio->result)
  			defer_completion = dio->defer_completion ||
-					   (dio->op == REQ_OP_WRITE &&
+					   (dio_op == REQ_OP_WRITE &&
  					    dio->inode->i_mapping->nrpages);
  		if (defer_completion) {
  			INIT_WORK(&dio->complete_work, dio_aio_complete_work);
@@ -396,7 +398,7 @@ dio_bio_alloc(struct dio *dio, struct dio_submit *sdio,
  	 * bio_alloc() is guaranteed to return a bio when allowed to sleep and
  	 * we request a valid number of vectors.
  	 */
-	bio = bio_alloc(bdev, nr_vecs, dio->op | dio->op_flags, GFP_KERNEL);
+	bio = bio_alloc(bdev, nr_vecs, dio->opf, GFP_KERNEL);
  	bio->bi_iter.bi_sector = first_sector;
  	if (dio->is_async)
  		bio->bi_end_io = dio_bio_end_aio;
@@ -415,6 +417,7 @@ dio_bio_alloc(struct dio *dio, struct dio_submit *sdio,
   */
  static inline void dio_bio_submit(struct dio *dio, struct dio_submit *sdio)
  {
+	const enum req_op dio_op = dio->opf & REQ_OP_MASK;
  	struct bio *bio = sdio->bio;
  	unsigned long flags;

@@ -426,7 +429,7 @@ static inline void dio_bio_submit(struct dio *dio, struct dio_submit *sdio)
  	dio->refcount++;
  	spin_unlock_irqrestore(&dio->bio_lock, flags);

-	if (dio->is_async && dio->op == REQ_OP_READ && dio->should_dirty)
+	if (dio->is_async && dio_op == REQ_OP_READ && dio->should_dirty)
  		bio_set_pages_dirty(bio);

  	dio->bio_disk = bio->bi_bdev->bd_disk;
@@ -492,7 +495,8 @@ static struct bio *dio_await_one(struct dio *dio)
  static blk_status_t dio_bio_complete(struct dio *dio, struct bio *bio)
  {
  	blk_status_t err = bio->bi_status;
-	bool should_dirty = dio->op == REQ_OP_READ && dio->should_dirty;
+	const enum req_op dio_op = dio->opf & REQ_OP_MASK;
+	bool should_dirty = dio_op == REQ_OP_READ && dio->should_dirty;

  	if (err) {
  		if (err == BLK_STS_AGAIN && (bio->bi_opf & REQ_NOWAIT))
@@ -619,6 +623,7 @@ static int dio_set_defer_completion(struct dio *dio)
  static int get_more_blocks(struct dio *dio, struct dio_submit *sdio,
  			   struct buffer_head *map_bh)
  {
+	const enum req_op dio_op = dio->opf & REQ_OP_MASK;
  	int ret;
  	sector_t fs_startblk;	/* Into file, in filesystem-sized blocks */
  	sector_t fs_endblk;	/* Into file, in filesystem-sized blocks */
@@ -653,7 +658,7 @@ static int get_more_blocks(struct dio *dio, struct dio_submit *sdio,
  		 * which may decide to handle it or also return an unmapped
  		 * buffer head.
  		 */
-		create = dio->op == REQ_OP_WRITE;
+		create = dio_op == REQ_OP_WRITE;
  		if (dio->flags & DIO_SKIP_HOLES) {
  			i_size = i_size_read(dio->inode);
  			if (i_size && fs_startblk <= (i_size - 1) >> i_blkbits)
@@ -801,10 +806,11 @@ submit_page_section(struct dio *dio, struct dio_submit *sdio, struct page *page,
  		    unsigned offset, unsigned len, sector_t blocknr,
  		    struct buffer_head *map_bh)
  {
+	const enum req_op dio_op = dio->opf & REQ_OP_MASK;
  	int ret = 0;
  	int boundary = sdio->boundary;	/* dio_send_cur_page may clear it */

-	if (dio->op == REQ_OP_WRITE) {
+	if (dio_op == REQ_OP_WRITE) {
  		/*
  		 * Read accounting is performed in submit_bio()
  		 */
@@ -917,6 +923,7 @@ static inline void dio_zero_block(struct dio *dio, struct dio_submit *sdio,
  static int do_direct_IO(struct dio *dio, struct dio_submit *sdio,
  			struct buffer_head *map_bh)
  {
+	const enum req_op dio_op = dio->opf & REQ_OP_MASK;
  	const unsigned blkbits = sdio->blkbits;
  	const unsigned i_blkbits = blkbits + sdio->blkfactor;
  	int ret = 0;
@@ -992,7 +999,7 @@ static int do_direct_IO(struct dio *dio, struct dio_submit *sdio,
  				loff_t i_size_aligned;

  				/* AKPM: eargh, -ENOTBLK is a hack */
-				if (dio->op == REQ_OP_WRITE) {
+				if (dio_op == REQ_OP_WRITE) {
  					put_page(page);
  					return -ENOTBLK;
  				}
@@ -1196,12 +1203,11 @@ ssize_t __blockdev_direct_IO(struct kiocb *iocb, struct inode *inode,

  	dio->inode = inode;
  	if (iov_iter_rw(iter) == WRITE) {
-		dio->op = REQ_OP_WRITE;
-		dio->op_flags = REQ_SYNC | REQ_IDLE;
+		dio->opf = REQ_OP_WRITE | REQ_SYNC | REQ_IDLE;
  		if (iocb->ki_flags & IOCB_NOWAIT)
-			dio->op_flags |= REQ_NOWAIT;
+			dio->opf |= REQ_NOWAIT;
  	} else {
-		dio->op = REQ_OP_READ;
+		dio->opf = REQ_OP_READ;
  	}

  	/*

^ permalink raw reply related	[flat|nested] 96+ messages in thread
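
The hazard Jan describes follows from how the op helpers classify
operations: op_is_write() keys off the low bit of the op number, so every
even-numbered operation would satisfy an op_is_read() defined as
!op_is_write(). A hedged sketch (op values as in blk_types.h, reproduced here
only for illustration):

#include <stdbool.h>

enum {				/* illustrative subset of enum req_op */
	REQ_OP_READ	= 0,
	REQ_OP_WRITE	= 1,
	REQ_OP_FLUSH	= 2,
	REQ_OP_DISCARD	= 3,
};

static inline bool op_is_write(unsigned int op)
{
	return op & 1;		/* write-type operations have odd numbers */
}

/* !op_is_write(REQ_OP_FLUSH) is true, yet a flush is not a read. Direct I/O
 * only ever submits READ or WRITE, which is why the helper was "currently
 * OK"; the replacement patch above sidesteps the question by comparing
 * dio->opf & REQ_OP_MASK against REQ_OP_READ / REQ_OP_WRITE directly. */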

* Re: [PATCH v2 08/63] block/mq-deadline: Use the new blk_opf_t type
  2022-06-29 23:30 ` [PATCH v2 08/63] block/mq-deadline: " Bart Van Assche
@ 2022-06-30 23:35   ` Damien Le Moal
  0 siblings, 0 replies; 96+ messages in thread
From: Damien Le Moal @ 2022-06-30 23:35 UTC (permalink / raw)
  To: Bart Van Assche, Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal

On 6/30/22 08:30, Bart Van Assche wrote:
> Use the new blk_opf_t type for an argument that represents a bitwise
> combination of a request operation and request flags.
> 
> This patch does not change any functionality.
> 
> Cc: Damien Le Moal <damien.lemoal@wdc.com>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> ---
>  block/mq-deadline.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/block/mq-deadline.c b/block/mq-deadline.c
> index 1a9e835e816c..c5589e9155e6 100644
> --- a/block/mq-deadline.c
> +++ b/block/mq-deadline.c
> @@ -543,7 +543,7 @@ static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
>   * Called by __blk_mq_alloc_request(). The shallow_depth value set by this
>   * function is used by __blk_mq_get_tag().
>   */
> -static void dd_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
> +static void dd_limit_depth(blk_opf_t op, struct blk_mq_alloc_data *data)
>  {
>  	struct deadline_data *dd = data->q->elevator->elevator_data;
>  

Looks good.

Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>

-- 
Damien Le Moal
Western Digital Research

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 29/63] dm/zone: Use the enum req_op type
  2022-06-29 23:31 ` [PATCH v2 29/63] dm/zone: Use the enum req_op type Bart Van Assche
@ 2022-06-30 23:36   ` Damien Le Moal
  0 siblings, 0 replies; 96+ messages in thread
From: Damien Le Moal @ 2022-06-30 23:36 UTC (permalink / raw)
  To: Bart Van Assche, Jens Axboe; +Cc: linux-block, Christoph Hellwig

On 6/30/22 08:31, Bart Van Assche wrote:
> Use the enum req_op type for request operations instead of unsigned int.
> This patch fixes a sparse warning that has been introduced by making
> enum req_op __bitwise.
> 
> Cc: Damien Le Moal <damien.lemoal@opensource.wdc.com>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> ---
>  drivers/md/dm-zone.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/md/dm-zone.c b/drivers/md/dm-zone.c
> index 3e7b1fe1580b..2e87237a35a5 100644
> --- a/drivers/md/dm-zone.c
> +++ b/drivers/md/dm-zone.c
> @@ -361,7 +361,7 @@ static int dm_update_zone_wp_offset(struct mapped_device *md, unsigned int zno,
>  }
>  
>  struct orig_bio_details {
> -	unsigned int op;
> +	enum req_op op;
>  	unsigned int nr_sectors;
>  };
>  

Looks good.

Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>

-- 
Damien Le Moal
Western Digital Research

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 30/63] dm/dm-zoned: Use the enum req_op type
  2022-06-29 23:31 ` [PATCH v2 30/63] dm/dm-zoned: " Bart Van Assche
@ 2022-06-30 23:36   ` Damien Le Moal
  0 siblings, 0 replies; 96+ messages in thread
From: Damien Le Moal @ 2022-06-30 23:36 UTC (permalink / raw)
  To: Bart Van Assche, Jens Axboe
  Cc: linux-block, Christoph Hellwig, Alasdair Kergon, Mike Snitzer

On 6/30/22 08:31, Bart Van Assche wrote:
> Improve static type checking by using the enum req_op type for arguments
> that represent a request operation.
> 
> Cc: Alasdair Kergon <agk@redhat.com>
> Cc: Mike Snitzer <snitzer@kernel.org>
> Cc: Damien Le Moal <damien.lemoal@wdc.com>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> ---
>  drivers/md/dm-zoned-metadata.c | 5 +++--
>  drivers/md/dm-zoned.h          | 2 +-
>  2 files changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/md/dm-zoned-metadata.c b/drivers/md/dm-zoned-metadata.c
> index d1ea66114d14..34db364c23a8 100644
> --- a/drivers/md/dm-zoned-metadata.c
> +++ b/drivers/md/dm-zoned-metadata.c
> @@ -737,7 +737,7 @@ static int dmz_write_mblock(struct dmz_metadata *zmd, struct dmz_mblock *mblk,
>  /*
>   * Read/write a metadata block.
>   */
> -static int dmz_rdwr_block(struct dmz_dev *dev, int op,
> +static int dmz_rdwr_block(struct dmz_dev *dev, enum req_op op,
>  			  sector_t block, struct page *page)
>  {
>  	struct bio *bio;
> @@ -2045,7 +2045,8 @@ struct dm_zone *dmz_get_zone_for_reclaim(struct dmz_metadata *zmd,
>   * allocated and used to map the chunk.
>   * The zone returned will be set to the active state.
>   */
> -struct dm_zone *dmz_get_chunk_mapping(struct dmz_metadata *zmd, unsigned int chunk, int op)
> +struct dm_zone *dmz_get_chunk_mapping(struct dmz_metadata *zmd,
> +				      unsigned int chunk, enum req_op op)
>  {
>  	struct dmz_mblock *dmap_mblk = zmd->map_mblk[chunk >> DMZ_MAP_ENTRIES_SHIFT];
>  	struct dmz_map *dmap = (struct dmz_map *) dmap_mblk->data;
> diff --git a/drivers/md/dm-zoned.h b/drivers/md/dm-zoned.h
> index a02744a0846c..265494d3f711 100644
> --- a/drivers/md/dm-zoned.h
> +++ b/drivers/md/dm-zoned.h
> @@ -248,7 +248,7 @@ struct dm_zone *dmz_get_zone_for_reclaim(struct dmz_metadata *zmd,
>  					 unsigned int dev_idx, bool idle);
>  
>  struct dm_zone *dmz_get_chunk_mapping(struct dmz_metadata *zmd,
> -				      unsigned int chunk, int op);
> +				      unsigned int chunk, enum req_op op);
>  void dmz_put_chunk_mapping(struct dmz_metadata *zmd, struct dm_zone *zone);
>  struct dm_zone *dmz_get_chunk_buffer(struct dmz_metadata *zmd,
>  				     struct dm_zone *dzone);

Looks good.

Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>

-- 
Damien Le Moal
Western Digital Research

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 63/63] fs/zonefs: Use the enum req_op type for request operations
  2022-06-29 23:31 ` [PATCH v2 63/63] fs/zonefs: Use the enum req_op type for request operations Bart Van Assche
  2022-06-30  6:02   ` Johannes Thumshirn
@ 2022-06-30 23:39   ` Damien Le Moal
  2022-07-07 17:58     ` Bart Van Assche
  1 sibling, 1 reply; 96+ messages in thread
From: Damien Le Moal @ 2022-06-30 23:39 UTC (permalink / raw)
  To: Bart Van Assche, Jens Axboe
  Cc: linux-block, Christoph Hellwig, Naohiro Aota, Johannes Thumshirn

On 6/30/22 08:31, Bart Van Assche wrote:
> Cc: Damien Le Moal <damien.lemoal@opensource.wdc.com>
> Cc: Naohiro Aota <naohiro.aota@wdc.com>
> Cc: Johannes Thumshirn <jth@kernel.org>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> ---
>  fs/zonefs/trace.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/fs/zonefs/trace.h b/fs/zonefs/trace.h
> index 21501da764bd..42edcfd393ed 100644
> --- a/fs/zonefs/trace.h
> +++ b/fs/zonefs/trace.h
> @@ -25,7 +25,7 @@ TRACE_EVENT(zonefs_zone_mgmt,
>  	    TP_STRUCT__entry(
>  			     __field(dev_t, dev)
>  			     __field(ino_t, ino)
> -			     __field(int, op)
> +			     __field(enum req_op, op)
>  			     __field(sector_t, sector)
>  			     __field(sector_t, nr_sectors)
>  	    ),

Nit: the patch title could be more to the point, E.g.

fs/zonefs: Use the enum req_op type for zone operation trace events

Otherwise, looks good.

Acked-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>

-- 
Damien Le Moal
Western Digital Research

^ permalink raw reply	[flat|nested] 96+ messages in thread

* RE: [PATCH v2 10/63] blktrace: Trace remapped requests correctly
  2022-06-30 18:13     ` Bart Van Assche
@ 2022-06-30 23:56       ` NOMURA JUNICHI(野村 淳一)
  0 siblings, 0 replies; 96+ messages in thread
From: NOMURA JUNICHI(野村 淳一) @ 2022-06-30 23:56 UTC (permalink / raw)
  To: Bart Van Assche, Jens Axboe; +Cc: linux-block, Christoph Hellwig, Mike Snitzer


From: Bart Van Assche <bvanassche@acm.org>
> > Thank you.  I think the change is fine but what it really fixes is
> > 1b9a9ab78b0a ("blktrace: use op accessors"), where the arguments of
> > __blk_add_trace() were extended.
> 
> Thanks for the feedback. I will update the "Fixes" tag.
> 
> BTW, does the above reply count as a "Reviewed-by"?

Yes.

-- 
Jun'ichi Nomura, NEC Corporation / NEC Solution Innovators, Ltd.


^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 60/63] fs/ocfs2: Use the enum req_op and blk_opf_t types
  2022-06-29 23:31 ` [PATCH v2 60/63] fs/ocfs2: Use the enum req_op and blk_opf_t types Bart Van Assche
@ 2022-07-01  1:47   ` Joseph Qi
  2022-07-01 13:48     ` Bart Van Assche
  0 siblings, 1 reply; 96+ messages in thread
From: Joseph Qi @ 2022-07-01  1:47 UTC (permalink / raw)
  To: Bart Van Assche, Jens Axboe
  Cc: linux-block, Christoph Hellwig, Mark Fasheh, Joel Becker



On 6/30/22 7:31 AM, Bart Van Assche wrote:
> Improve static type checking by using the enum req_op type for variables
> that represent a request operation and the new blk_opf_t type for
> variables that represent request flags. Combine the last two
> o2hb_setup_one_bio() arguments into a single argument.
> 
> Cc: Mark Fasheh <mark@fasheh.com>
> Cc: Joel Becker <jlbec@evilplan.org>
> Cc: Joseph Qi <joseph.qi@linux.alibaba.com>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> ---
>  fs/ocfs2/cluster/heartbeat.c | 9 ++++-----
>  1 file changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/fs/ocfs2/cluster/heartbeat.c b/fs/ocfs2/cluster/heartbeat.c
> index ea0e70c0fce0..5d7e303b0293 100644
> --- a/fs/ocfs2/cluster/heartbeat.c
> +++ b/fs/ocfs2/cluster/heartbeat.c
> @@ -503,8 +503,7 @@ static void o2hb_bio_end_io(struct bio *bio)
>  static struct bio *o2hb_setup_one_bio(struct o2hb_region *reg,
>  				      struct o2hb_bio_wait_ctxt *wc,
>  				      unsigned int *current_slot,
> -				      unsigned int max_slots, int op,
> -				      int op_flags)
> +				      unsigned int max_slots, blk_opf_t opf)
>  {
>  	int len, current_page;
>  	unsigned int vec_len, vec_start;
> @@ -518,7 +517,7 @@ static struct bio *o2hb_setup_one_bio(struct o2hb_region *reg,
>  	 * GFP_KERNEL that the local node can get fenced. It would be
>  	 * nicest if we could pre-allocate these bios and avoid this
>  	 * all together. */
> -	bio = bio_alloc(reg->hr_bdev, 16, op | op_flags, GFP_ATOMIC);
> +	bio = bio_alloc(reg->hr_bdev, 16, opf, GFP_ATOMIC);
>  	if (!bio) {
>  		mlog(ML_ERROR, "Could not alloc slots BIO!\n");
>  		bio = ERR_PTR(-ENOMEM);
> @@ -566,7 +565,7 @@ static int o2hb_read_slots(struct o2hb_region *reg,
>  
>  	while(current_slot < max_slots) {
>  		bio = o2hb_setup_one_bio(reg, &wc, &current_slot, max_slots,
> -					 REQ_OP_READ, 0);
> +					 REQ_OP_READ);
>  		if (IS_ERR(bio)) {
>  			status = PTR_ERR(bio);
>  			mlog_errno(status);
> @@ -598,7 +597,7 @@ static int o2hb_issue_node_write(struct o2hb_region *reg,
>  
>  	slot = o2nm_this_node();
>  
> -	bio = o2hb_setup_one_bio(reg, write_wc, &slot, slot+1, REQ_OP_WRITE,
> +	bio = o2hb_setup_one_bio(reg, write_wc, &slot, slot+1, REQ_OP_WRITE |
>  				 REQ_SYNC);

Better to keep 'REQ_OP_WRITE | REQ_SYNC' on the same line.
Other looks fine.
Reviewed-by: Joseph Qi <joseph.qi@linux.alibaba.com>

>  	if (IS_ERR(bio)) {
>  		status = PTR_ERR(bio);

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 16/63] block/rnbd: Use blk_opf_t where appropriate
  2022-06-29 23:30 ` [PATCH v2 16/63] block/rnbd: Use blk_opf_t where appropriate Bart Van Assche
@ 2022-07-01  4:47   ` Jinpu Wang
  0 siblings, 0 replies; 96+ messages in thread
From: Jinpu Wang @ 2022-07-01  4:47 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, linux-block, Christoph Hellwig, Md . Haris Iqbal

On Thu, Jun 30, 2022 at 1:32 AM Bart Van Assche <bvanassche@acm.org> wrote:
>
> Improve static type checking by using the new blk_opf_t type to represent
> the combination of a request operation and request flags.
>
> Cc: Md. Haris Iqbal <haris.iqbal@ionos.com>
> Cc: Jack Wang <jinpu.wang@ionos.com>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Acked-by: Jack Wang <jinpu.wang@ionos.com>
Thanks!
> ---
>  drivers/block/rnbd/rnbd-proto.h | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/block/rnbd/rnbd-proto.h b/drivers/block/rnbd/rnbd-proto.h
> index bfb08dd434d1..ea7ac8bca63c 100644
> --- a/drivers/block/rnbd/rnbd-proto.h
> +++ b/drivers/block/rnbd/rnbd-proto.h
> @@ -229,9 +229,9 @@ static inline bool rnbd_flags_supported(u32 flags)
>         return true;
>  }
>
> -static inline u32 rnbd_to_bio_flags(u32 rnbd_opf)
> +static inline blk_opf_t rnbd_to_bio_flags(u32 rnbd_opf)
>  {
> -       u32 bio_opf;
> +       blk_opf_t bio_opf;
>
>         switch (rnbd_op(rnbd_opf)) {
>         case RNBD_OP_READ:
> @@ -286,7 +286,8 @@ static inline u32 rq_to_rnbd_flags(struct request *rq)
>                 break;
>         default:
>                 WARN(1, "Unknown request type %d (flags %llu)\n",
> -                    req_op(rq), (unsigned long long)rq->cmd_flags);
> +                    (__force u32)req_op(rq),
> +                    (__force unsigned long long)rq->cmd_flags);
>                 rnbd_opf = 0;
>         }
>

^ permalink raw reply	[flat|nested] 96+ messages in thread
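
The (__force ...) casts in the WARN() above are the standard escape hatch for
handing a __bitwise value to a printk-style format: sparse flags any implicit
conversion of a restricted type to a plain integer, and __force documents
that the conversion is intentional. A minimal hedged example (kernel context
assumed; log_request() is an illustrative name):

static void log_request(struct request *rq)
{
	/* pr_warn("op %u\n", req_op(rq));  <- sparse: restricted type warning */
	pr_warn("op %u flags 0x%x\n",
		(__force unsigned int)req_op(rq),
		(__force unsigned int)(rq->cmd_flags & ~REQ_OP_MASK));
}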

* Re: [PATCH v2 58/63] fs/nilfs2: Use the enum req_op and blk_opf_t types
  2022-06-29 23:31 ` [PATCH v2 58/63] fs/nilfs2: Use the enum req_op and blk_opf_t types Bart Van Assche
@ 2022-07-01  7:03   ` Ryusuke Konishi
  0 siblings, 0 replies; 96+ messages in thread
From: Ryusuke Konishi @ 2022-07-01  7:03 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: Jens Axboe, linux-block, Christoph Hellwig

On Thu, Jun 30, 2022 at 8:33 AM Bart Van Assche wrote:
>
> Improve static type checking by using the enum req_op type for variables
> that represent a request operation and the new blk_opf_t type for
> variables that represent request flags. Combine the 'mode' and
> 'mode_flags' arguments of nilfs_btnode_submit_block into a single
> argument 'opf'.
>
> Cc: Ryusuke Konishi <konishi.ryusuke@gmail.com>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> ---
>  fs/nilfs2/btnode.c            |  8 ++++----
>  fs/nilfs2/btnode.h            |  4 ++--
>  fs/nilfs2/btree.c             |  6 +++---
>  fs/nilfs2/gcinode.c           |  5 ++---
>  fs/nilfs2/mdt.c               | 19 ++++++++++---------
>  include/trace/events/nilfs2.h |  4 ++--
>  6 files changed, 23 insertions(+), 23 deletions(-)

Acked-by: Ryusuke Konishi <konishi.ryusuke@gmail.com>

Looks good.  Thank you!

Ryusuke Konishi

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 48/63] fs/direct-io: Reduce the size of struct dio
  2022-06-30 19:06     ` Bart Van Assche
@ 2022-07-01 10:58       ` Jan Kara
  0 siblings, 0 replies; 96+ messages in thread
From: Jan Kara @ 2022-07-01 10:58 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jan Kara, Jens Axboe, linux-block, Christoph Hellwig, Al Viro,
	Darrick J . Wong

On Thu 30-06-22 12:06:30, Bart Van Assche wrote:
> On 6/30/22 01:50, Jan Kara wrote:
> > On Wed 29-06-22 16:31:30, Bart Van Assche wrote:
> > > +static inline bool op_is_read(blk_opf_t opf)
> > > +{
> > > +	return !op_is_write(opf);
> > > +}
> > > +
> > 
> > This is a bit dangerous although currently OK for direct IO code. I'm just
> > afraid someone will extend direct IO code (unlikely) or copy-paste this to
> > a place where there can be more operations than read & write... Maybe just
> > add here a comment like /* Direct IO code does only reads or writes */.
> 
> Hi Jan,
> 
> Since source code comments tend to get overlooked, how about replacing
> this patch by the patch below?

Looks good to me. Feel free to add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  fs/direct-io.c | 40 +++++++++++++++++++++++-----------------
>  1 file changed, 23 insertions(+), 17 deletions(-)
> 
> diff --git a/fs/direct-io.c b/fs/direct-io.c
> index 840752006f60..94b71440c332 100644
> --- a/fs/direct-io.c
> +++ b/fs/direct-io.c
> @@ -117,8 +117,7 @@ struct dio_submit {
>  /* dio_state communicated between submission path and end_io */
>  struct dio {
>  	int flags;			/* doesn't change */
> -	int op;
> -	int op_flags;
> +	blk_opf_t opf;			/* request operation type and flags */
>  	struct gendisk *bio_disk;
>  	struct inode *inode;
>  	loff_t i_size;			/* i_size when submitted */
> @@ -167,12 +166,13 @@ static inline unsigned dio_pages_present(struct dio_submit *sdio)
>   */
>  static inline int dio_refill_pages(struct dio *dio, struct dio_submit *sdio)
>  {
> +	const enum req_op dio_op = dio->opf & REQ_OP_MASK;
>  	ssize_t ret;
> 
>  	ret = iov_iter_get_pages(sdio->iter, dio->pages, LONG_MAX, DIO_PAGES,
>  				&sdio->from);
> 
> -	if (ret < 0 && sdio->blocks_available && (dio->op == REQ_OP_WRITE)) {
> +	if (ret < 0 && sdio->blocks_available && dio_op == REQ_OP_WRITE) {
>  		struct page *page = ZERO_PAGE(0);
>  		/*
>  		 * A memory fault, but the filesystem has some outstanding
> @@ -234,6 +234,7 @@ static inline struct page *dio_get_page(struct dio *dio,
>   */
>  static ssize_t dio_complete(struct dio *dio, ssize_t ret, unsigned int flags)
>  {
> +	const enum req_op dio_op = dio->opf & REQ_OP_MASK;
>  	loff_t offset = dio->iocb->ki_pos;
>  	ssize_t transferred = 0;
>  	int err;
> @@ -251,7 +252,7 @@ static ssize_t dio_complete(struct dio *dio, ssize_t ret, unsigned int flags)
>  		transferred = dio->result;
> 
>  		/* Check for short read case */
> -		if ((dio->op == REQ_OP_READ) &&
> +		if (dio_op == REQ_OP_READ &&
>  		    ((offset + transferred) > dio->i_size))
>  			transferred = dio->i_size - offset;
>  		/* ignore EFAULT if some IO has been done */
> @@ -286,7 +287,7 @@ static ssize_t dio_complete(struct dio *dio, ssize_t ret, unsigned int flags)
>  	 * zeros from unwritten extents.
>  	 */
>  	if (flags & DIO_COMPLETE_INVALIDATE &&
> -	    ret > 0 && dio->op == REQ_OP_WRITE &&
> +	    ret > 0 && dio_op == REQ_OP_WRITE &&
>  	    dio->inode->i_mapping->nrpages) {
>  		err = invalidate_inode_pages2_range(dio->inode->i_mapping,
>  					offset >> PAGE_SHIFT,
> @@ -305,7 +306,7 @@ static ssize_t dio_complete(struct dio *dio, ssize_t ret, unsigned int flags)
>  		 */
>  		dio->iocb->ki_pos += transferred;
> 
> -		if (ret > 0 && dio->op == REQ_OP_WRITE)
> +		if (ret > 0 && dio_op == REQ_OP_WRITE)
>  			ret = generic_write_sync(dio->iocb, ret);
>  		dio->iocb->ki_complete(dio->iocb, ret);
>  	}
> @@ -329,6 +330,7 @@ static blk_status_t dio_bio_complete(struct dio *dio, struct bio *bio);
>  static void dio_bio_end_aio(struct bio *bio)
>  {
>  	struct dio *dio = bio->bi_private;
> +	const enum req_op dio_op = dio->opf & REQ_OP_MASK;
>  	unsigned long remaining;
>  	unsigned long flags;
>  	bool defer_completion = false;
> @@ -353,7 +355,7 @@ static void dio_bio_end_aio(struct bio *bio)
>  		 */
>  		if (dio->result)
>  			defer_completion = dio->defer_completion ||
> -					   (dio->op == REQ_OP_WRITE &&
> +					   (dio_op == REQ_OP_WRITE &&
>  					    dio->inode->i_mapping->nrpages);
>  		if (defer_completion) {
>  			INIT_WORK(&dio->complete_work, dio_aio_complete_work);
> @@ -396,7 +398,7 @@ dio_bio_alloc(struct dio *dio, struct dio_submit *sdio,
>  	 * bio_alloc() is guaranteed to return a bio when allowed to sleep and
>  	 * we request a valid number of vectors.
>  	 */
> -	bio = bio_alloc(bdev, nr_vecs, dio->op | dio->op_flags, GFP_KERNEL);
> +	bio = bio_alloc(bdev, nr_vecs, dio->opf, GFP_KERNEL);
>  	bio->bi_iter.bi_sector = first_sector;
>  	if (dio->is_async)
>  		bio->bi_end_io = dio_bio_end_aio;
> @@ -415,6 +417,7 @@ dio_bio_alloc(struct dio *dio, struct dio_submit *sdio,
>   */
>  static inline void dio_bio_submit(struct dio *dio, struct dio_submit *sdio)
>  {
> +	const enum req_op dio_op = dio->opf & REQ_OP_MASK;
>  	struct bio *bio = sdio->bio;
>  	unsigned long flags;
> 
> @@ -426,7 +429,7 @@ static inline void dio_bio_submit(struct dio *dio, struct dio_submit *sdio)
>  	dio->refcount++;
>  	spin_unlock_irqrestore(&dio->bio_lock, flags);
> 
> -	if (dio->is_async && dio->op == REQ_OP_READ && dio->should_dirty)
> +	if (dio->is_async && dio_op == REQ_OP_READ && dio->should_dirty)
>  		bio_set_pages_dirty(bio);
> 
>  	dio->bio_disk = bio->bi_bdev->bd_disk;
> @@ -492,7 +495,8 @@ static struct bio *dio_await_one(struct dio *dio)
>  static blk_status_t dio_bio_complete(struct dio *dio, struct bio *bio)
>  {
>  	blk_status_t err = bio->bi_status;
> -	bool should_dirty = dio->op == REQ_OP_READ && dio->should_dirty;
> +	const enum req_op dio_op = dio->opf & REQ_OP_MASK;
> +	bool should_dirty = dio_op == REQ_OP_READ && dio->should_dirty;
> 
>  	if (err) {
>  		if (err == BLK_STS_AGAIN && (bio->bi_opf & REQ_NOWAIT))
> @@ -619,6 +623,7 @@ static int dio_set_defer_completion(struct dio *dio)
>  static int get_more_blocks(struct dio *dio, struct dio_submit *sdio,
>  			   struct buffer_head *map_bh)
>  {
> +	const enum req_op dio_op = dio->opf & REQ_OP_MASK;
>  	int ret;
>  	sector_t fs_startblk;	/* Into file, in filesystem-sized blocks */
>  	sector_t fs_endblk;	/* Into file, in filesystem-sized blocks */
> @@ -653,7 +658,7 @@ static int get_more_blocks(struct dio *dio, struct dio_submit *sdio,
>  		 * which may decide to handle it or also return an unmapped
>  		 * buffer head.
>  		 */
> -		create = dio->op == REQ_OP_WRITE;
> +		create = dio_op == REQ_OP_WRITE;
>  		if (dio->flags & DIO_SKIP_HOLES) {
>  			i_size = i_size_read(dio->inode);
>  			if (i_size && fs_startblk <= (i_size - 1) >> i_blkbits)
> @@ -801,10 +806,11 @@ submit_page_section(struct dio *dio, struct dio_submit *sdio, struct page *page,
>  		    unsigned offset, unsigned len, sector_t blocknr,
>  		    struct buffer_head *map_bh)
>  {
> +	const enum req_op dio_op = dio->opf & REQ_OP_MASK;
>  	int ret = 0;
>  	int boundary = sdio->boundary;	/* dio_send_cur_page may clear it */
> 
> -	if (dio->op == REQ_OP_WRITE) {
> +	if (dio_op == REQ_OP_WRITE) {
>  		/*
>  		 * Read accounting is performed in submit_bio()
>  		 */
> @@ -917,6 +923,7 @@ static inline void dio_zero_block(struct dio *dio, struct dio_submit *sdio,
>  static int do_direct_IO(struct dio *dio, struct dio_submit *sdio,
>  			struct buffer_head *map_bh)
>  {
> +	const enum req_op dio_op = dio->opf & REQ_OP_MASK;
>  	const unsigned blkbits = sdio->blkbits;
>  	const unsigned i_blkbits = blkbits + sdio->blkfactor;
>  	int ret = 0;
> @@ -992,7 +999,7 @@ static int do_direct_IO(struct dio *dio, struct dio_submit *sdio,
>  				loff_t i_size_aligned;
> 
>  				/* AKPM: eargh, -ENOTBLK is a hack */
> -				if (dio->op == REQ_OP_WRITE) {
> +				if (dio_op == REQ_OP_WRITE) {
>  					put_page(page);
>  					return -ENOTBLK;
>  				}
> @@ -1196,12 +1203,11 @@ ssize_t __blockdev_direct_IO(struct kiocb *iocb, struct inode *inode,
> 
>  	dio->inode = inode;
>  	if (iov_iter_rw(iter) == WRITE) {
> -		dio->op = REQ_OP_WRITE;
> -		dio->op_flags = REQ_SYNC | REQ_IDLE;
> +		dio->opf = REQ_OP_WRITE | REQ_SYNC | REQ_IDLE;
>  		if (iocb->ki_flags & IOCB_NOWAIT)
> -			dio->op_flags |= REQ_NOWAIT;
> +			dio->opf |= REQ_NOWAIT;
>  	} else {
> -		dio->op = REQ_OP_READ;
> +		dio->opf = REQ_OP_READ;
>  	}
> 
>  	/*
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 60/63] fs/ocfs2: Use the enum req_op and blk_opf_t types
  2022-07-01  1:47   ` Joseph Qi
@ 2022-07-01 13:48     ` Bart Van Assche
  0 siblings, 0 replies; 96+ messages in thread
From: Bart Van Assche @ 2022-07-01 13:48 UTC (permalink / raw)
  To: Joseph Qi, Jens Axboe
  Cc: linux-block, Christoph Hellwig, Mark Fasheh, Joel Becker

On 6/30/22 18:47, Joseph Qi wrote:
> On 6/30/22 7:31 AM, Bart Van Assche wrote:
>> -	bio = o2hb_setup_one_bio(reg, write_wc, &slot, slot+1, REQ_OP_WRITE,
>> +	bio = o2hb_setup_one_bio(reg, write_wc, &slot, slot+1, REQ_OP_WRITE |
>>   				 REQ_SYNC);
> 
> Better to let 'REQ_OP_WRITE | REQ_SYNC' at the same line.
> Other looks fine.
> Reviewed-by: Joseph Qi <joseph.qi@linux.alibaba.com>

Hi Joseph,

I will make sure that REQ_OP_WRITE | REQ_SYNC appears on the same line. 
Thanks for the review.

Bart.

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 14/63] block/drbd: Combine two drbd_submit_peer_request() arguments
  2022-06-29 23:30 ` [PATCH v2 14/63] block/drbd: Combine two drbd_submit_peer_request() arguments Bart Van Assche
@ 2022-07-05 19:53   ` Christoph Böhmwalder
  0 siblings, 0 replies; 96+ messages in thread
From: Christoph Böhmwalder @ 2022-07-05 19:53 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: linux-block, Christoph Hellwig, Lars Ellenberg, Philipp Reisner,
	Jens Axboe

Am 30.06.22 um 01:30 schrieb Bart Van Assche:
> Combine the drbd_submit_peer_request() 'op' and 'op_flags' arguments
> into a single argument. This patch does not change any functionality.
> 
> Cc: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
> Cc: Lars Ellenberg <lars.ellenberg@linbit.com>
> Cc: Philipp Reisner <philipp.reisner@linbit.com>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> ---
>  drivers/block/drbd/drbd_int.h      |  3 +--
>  drivers/block/drbd/drbd_receiver.c | 15 +++++++--------
>  drivers/block/drbd/drbd_worker.c   |  2 +-
>  3 files changed, 9 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/block/drbd/drbd_int.h b/drivers/block/drbd/drbd_int.h
> index ecb2ecd8d67d..f15f2f041596 100644
> --- a/drivers/block/drbd/drbd_int.h
> +++ b/drivers/block/drbd/drbd_int.h
> @@ -1547,8 +1547,7 @@ extern bool drbd_rs_c_min_rate_throttle(struct drbd_device *device);
>  extern bool drbd_rs_should_slow_down(struct drbd_device *device, sector_t sector,
>  		bool throttle_if_app_is_waiting);
>  extern int drbd_submit_peer_request(struct drbd_device *,
> -				    struct drbd_peer_request *, enum req_op,
> -				    blk_opf_t, int);
> +				    struct drbd_peer_request *, blk_opf_t, int);
>  extern int drbd_free_peer_reqs(struct drbd_device *, struct list_head *);
>  extern struct drbd_peer_request *drbd_alloc_peer_req(struct drbd_peer_device *, u64,
>  						     sector_t, unsigned int,
> diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c
> index caf646dd91ba..af4c7d65490b 100644
> --- a/drivers/block/drbd/drbd_receiver.c
> +++ b/drivers/block/drbd/drbd_receiver.c
> @@ -1621,8 +1621,7 @@ static void drbd_issue_peer_discard_or_zero_out(struct drbd_device *device, stru
>  /* TODO allocate from our own bio_set. */
>  int drbd_submit_peer_request(struct drbd_device *device,
>  			     struct drbd_peer_request *peer_req,
> -			     const enum req_op op, const blk_opf_t op_flags,
> -			     const int fault_type)
> +			     const blk_opf_t opf, const int fault_type)
>  {
>  	struct bio *bios = NULL;
>  	struct bio *bio;
> @@ -1668,8 +1667,7 @@ int drbd_submit_peer_request(struct drbd_device *device,
>  	 * generated bio, but a bio allocated on behalf of the peer.
>  	 */
>  next_bio:
> -	bio = bio_alloc(device->ldev->backing_bdev, nr_pages, op | op_flags,
> -			GFP_NOIO);
> +	bio = bio_alloc(device->ldev->backing_bdev, nr_pages, opf, GFP_NOIO);
>  	/* > peer_req->i.sector, unless this is the first bio */
>  	bio->bi_iter.bi_sector = sector;
>  	bio->bi_private = peer_req;
> @@ -2060,7 +2058,7 @@ static int recv_resync_read(struct drbd_peer_device *peer_device, sector_t secto
>  	spin_unlock_irq(&device->resource->req_lock);
>  
>  	atomic_add(pi->size >> 9, &device->rs_sect_ev);
> -	if (drbd_submit_peer_request(device, peer_req, REQ_OP_WRITE, 0,
> +	if (drbd_submit_peer_request(device, peer_req, REQ_OP_WRITE,
>  				     DRBD_FAULT_RS_WR) == 0)
>  		return 0;
>  
> @@ -2682,7 +2680,7 @@ static int receive_Data(struct drbd_connection *connection, struct packet_info *
>  		peer_req->flags |= EE_CALL_AL_COMPLETE_IO;
>  	}
>  
> -	err = drbd_submit_peer_request(device, peer_req, op, op_flags,
> +	err = drbd_submit_peer_request(device, peer_req, op | op_flags,
>  				       DRBD_FAULT_DT_WR);
>  	if (!err)
>  		return 0;
> @@ -2980,7 +2978,7 @@ static int receive_DataRequest(struct drbd_connection *connection, struct packet
>  submit:
>  	update_receiver_timing_details(connection, drbd_submit_peer_request);
>  	inc_unacked(device);
> -	if (drbd_submit_peer_request(device, peer_req, REQ_OP_READ, 0,
> +	if (drbd_submit_peer_request(device, peer_req, REQ_OP_READ,
>  				     fault_type) == 0)
>  		return 0;
>  
> @@ -4970,7 +4968,8 @@ static int receive_rs_deallocated(struct drbd_connection *connection, struct pac
>  		spin_unlock_irq(&device->resource->req_lock);
>  
>  		atomic_add(pi->size >> 9, &device->rs_sect_ev);
> -		err = drbd_submit_peer_request(device, peer_req, op, 0, DRBD_FAULT_RS_WR);
> +		err = drbd_submit_peer_request(device, peer_req, op,
> +					       DRBD_FAULT_RS_WR);
>  
>  		if (err) {
>  			spin_lock_irq(&device->resource->req_lock);
> diff --git a/drivers/block/drbd/drbd_worker.c b/drivers/block/drbd/drbd_worker.c
> index af3051dd8912..0bb1a900c2d5 100644
> --- a/drivers/block/drbd/drbd_worker.c
> +++ b/drivers/block/drbd/drbd_worker.c
> @@ -405,7 +405,7 @@ static int read_for_csum(struct drbd_peer_device *peer_device, sector_t sector,
>  	spin_unlock_irq(&device->resource->req_lock);
>  
>  	atomic_add(size >> 9, &device->rs_sect_ev);
> -	if (drbd_submit_peer_request(device, peer_req, REQ_OP_READ, 0,
> +	if (drbd_submit_peer_request(device, peer_req, REQ_OP_READ,
>  				     DRBD_FAULT_RS_RD) == 0)
>  		return 0;
>  

Reviewed-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>

Thanks!

-- 
Christoph Böhmwalder
LINBIT | Keeping the Digital World Running
DRBD HA —  Disaster Recovery — Software defined Storage

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 53/63] fs/gfs2: Use the enum req_op and blk_opf_t types
  2022-06-29 23:31 ` [PATCH v2 53/63] fs/gfs2: " Bart Van Assche
@ 2022-07-07  6:03   ` Andreas Gruenbacher
  0 siblings, 0 replies; 96+ messages in thread
From: Andreas Gruenbacher @ 2022-07-07  6:03 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: Jens Axboe, linux-block, Christoph Hellwig, Bob Peterson

On Thu, Jun 30, 2022 at 1:33 AM Bart Van Assche <bvanassche@acm.org> wrote:
> Improve static type checking by using the enum req_op type for variables
> that represent a request operation and the new blk_opf_t type for
> variables that represent request flags. Combine the first two
> gfs2_submit_bhs() arguments into a single argument.
>
> Cc: Bob Peterson <rpeterso@redhat.com>
> Cc: Andreas Gruenbacher <agruenba@redhat.com>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> ---
>  fs/gfs2/log.c     | 4 ++--
>  fs/gfs2/log.h     | 2 +-
>  fs/gfs2/lops.c    | 4 ++--
>  fs/gfs2/lops.h    | 2 +-
>  fs/gfs2/meta_io.c | 9 ++++-----
>  5 files changed, 10 insertions(+), 11 deletions(-)
>
> diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c
> index f0ee3ff6f9a8..eec4159b08aa 100644
> --- a/fs/gfs2/log.c
> +++ b/fs/gfs2/log.c
> @@ -823,7 +823,7 @@ void gfs2_flush_revokes(struct gfs2_sbd *sdp)
>
>  void gfs2_write_log_header(struct gfs2_sbd *sdp, struct gfs2_jdesc *jd,
>                            u64 seq, u32 tail, u32 lblock, u32 flags,
> -                          int op_flags)
> +                          blk_opf_t op_flags)
>  {
>         struct gfs2_log_header *lh;
>         u32 hash, crc;
> @@ -905,7 +905,7 @@ void gfs2_write_log_header(struct gfs2_sbd *sdp, struct gfs2_jdesc *jd,
>
>  static void log_write_header(struct gfs2_sbd *sdp, u32 flags)
>  {
> -       int op_flags = REQ_PREFLUSH | REQ_FUA | REQ_META | REQ_SYNC;
> +       blk_opf_t op_flags = REQ_PREFLUSH | REQ_FUA | REQ_META | REQ_SYNC;
>         enum gfs2_freeze_state state = atomic_read(&sdp->sd_freeze_state);
>
>         gfs2_assert_withdraw(sdp, (state != SFS_FROZEN));
> diff --git a/fs/gfs2/log.h b/fs/gfs2/log.h
> index fc905c2af53c..653cffcbf869 100644
> --- a/fs/gfs2/log.h
> +++ b/fs/gfs2/log.h
> @@ -82,7 +82,7 @@ extern void gfs2_log_reserve(struct gfs2_sbd *sdp, struct gfs2_trans *tr,
>                              unsigned int *extra_revokes);
>  extern void gfs2_write_log_header(struct gfs2_sbd *sdp, struct gfs2_jdesc *jd,
>                                   u64 seq, u32 tail, u32 lblock, u32 flags,
> -                                 int op_flags);
> +                                 blk_opf_t op_flags);
>  extern void gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl,
>                            u32 type);
>  extern void gfs2_log_commit(struct gfs2_sbd *sdp, struct gfs2_trans *trans);
> diff --git a/fs/gfs2/lops.c b/fs/gfs2/lops.c
> index 6ba51cbb94cf..90a2d7bc91c4 100644
> --- a/fs/gfs2/lops.c
> +++ b/fs/gfs2/lops.c
> @@ -238,7 +238,7 @@ static void gfs2_end_log_write(struct bio *bio)
>   * there is no pending bio, then this is a no-op.
>   */
>
> -void gfs2_log_submit_bio(struct bio **biop, int opf)
> +void gfs2_log_submit_bio(struct bio **biop, blk_opf_t opf)
>  {
>         struct bio *bio = *biop;
>         if (bio) {
> @@ -292,7 +292,7 @@ static struct bio *gfs2_log_alloc_bio(struct gfs2_sbd *sdp, u64 blkno,
>   */
>
>  static struct bio *gfs2_log_get_bio(struct gfs2_sbd *sdp, u64 blkno,
> -                                   struct bio **biop, int op,
> +                                   struct bio **biop, enum req_op op,
>                                     bio_end_io_t *end_io, bool flush)
>  {
>         struct bio *bio = *biop;
> diff --git a/fs/gfs2/lops.h b/fs/gfs2/lops.h
> index f707601597dc..1412ffba1d44 100644
> --- a/fs/gfs2/lops.h
> +++ b/fs/gfs2/lops.h
> @@ -16,7 +16,7 @@ extern u64 gfs2_log_bmap(struct gfs2_jdesc *jd, unsigned int lbn);
>  extern void gfs2_log_write(struct gfs2_sbd *sdp, struct gfs2_jdesc *jd,
>                            struct page *page, unsigned size, unsigned offset,
>                            u64 blkno);
> -extern void gfs2_log_submit_bio(struct bio **biop, int opf);
> +extern void gfs2_log_submit_bio(struct bio **biop, blk_opf_t opf);
>  extern void gfs2_pin(struct gfs2_sbd *sdp, struct buffer_head *bh);
>  extern int gfs2_find_jhead(struct gfs2_jdesc *jd,
>                            struct gfs2_log_header_host *head, bool keep_cache);
> diff --git a/fs/gfs2/meta_io.c b/fs/gfs2/meta_io.c
> index 3570739f005d..7e70e0ba5a6c 100644
> --- a/fs/gfs2/meta_io.c
> +++ b/fs/gfs2/meta_io.c
> @@ -34,7 +34,7 @@ static int gfs2_aspace_writepage(struct page *page, struct writeback_control *wb
>  {
>         struct buffer_head *bh, *head;
>         int nr_underway = 0;
> -       int write_flags = REQ_META | REQ_PRIO | wbc_to_write_flags(wbc);
> +       blk_opf_t write_flags = REQ_META | REQ_PRIO | wbc_to_write_flags(wbc);
>
>         BUG_ON(!PageLocked(page));
>         BUG_ON(!page_has_buffers(page));
> @@ -217,14 +217,13 @@ static void gfs2_meta_read_endio(struct bio *bio)
>   * Submit several consecutive buffer head I/O requests as a single bio I/O
>   * request.  (See submit_bh_wbc.)
>   */
> -static void gfs2_submit_bhs(int op, int op_flags, struct buffer_head *bhs[],
> -                           int num)
> +static void gfs2_submit_bhs(blk_opf_t opf, struct buffer_head *bhs[], int num)
>  {
>         while (num > 0) {
>                 struct buffer_head *bh = *bhs;
>                 struct bio *bio;
>
> -               bio = bio_alloc(bh->b_bdev, num, op | op_flags, GFP_NOIO);
> +               bio = bio_alloc(bh->b_bdev, num, opf, GFP_NOIO);
>                 bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9);
>                 while (num > 0) {
>                         bh = *bhs;
> @@ -288,7 +287,7 @@ int gfs2_meta_read(struct gfs2_glock *gl, u64 blkno, int flags,
>                 }
>         }
>
> -       gfs2_submit_bhs(REQ_OP_READ, REQ_META | REQ_PRIO, bhs, num);
> +       gfs2_submit_bhs(REQ_OP_READ | REQ_META | REQ_PRIO, bhs, num);
>         if (!(flags & DIO_WAIT))
>                 return 0;
>

Looking good from a gfs2 point of view, thanks.

Reviewed-by: Andreas Gruenbacher <agruenba@redhat.com>

Andreas


^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 63/63] fs/zonefs: Use the enum req_op type for request operations
  2022-06-30 23:39   ` Damien Le Moal
@ 2022-07-07 17:58     ` Bart Van Assche
  2022-07-07 22:07       ` Damien Le Moal
  0 siblings, 1 reply; 96+ messages in thread
From: Bart Van Assche @ 2022-07-07 17:58 UTC (permalink / raw)
  To: Damien Le Moal, Jens Axboe
  Cc: linux-block, Christoph Hellwig, Naohiro Aota, Johannes Thumshirn

On 6/30/22 16:39, Damien Le Moal wrote:
> On 6/30/22 08:31, Bart Van Assche wrote:
>> Cc: Damien Le Moal <damien.lemoal@opensource.wdc.com>
>> Cc: Naohiro Aota <naohiro.aota@wdc.com>
>> Cc: Johannes Thumshirn <jth@kernel.org>
>> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
>> ---
>>   fs/zonefs/trace.h | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/fs/zonefs/trace.h b/fs/zonefs/trace.h
>> index 21501da764bd..42edcfd393ed 100644
>> --- a/fs/zonefs/trace.h
>> +++ b/fs/zonefs/trace.h
>> @@ -25,7 +25,7 @@ TRACE_EVENT(zonefs_zone_mgmt,
>>   	    TP_STRUCT__entry(
>>   			     __field(dev_t, dev)
>>   			     __field(ino_t, ino)
>> -			     __field(int, op)
>> +			     __field(enum req_op, op)
>>   			     __field(sector_t, sector)
>>   			     __field(sector_t, nr_sectors)
>>   	    ),
> 
> Nit: the patch title could be more to the point, E.g.
> 
> fs/zonefs: Use the enum req_op type for zone operation trace events
> 
> Otherwise, looks good.
> 
> Acked-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>

How about using the title "fs/zonefs: Use the enum req_op type for 
tracing request operations"?

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 63/63] fs/zonefs: Use the enum req_op type for request operations
  2022-07-07 17:58     ` Bart Van Assche
@ 2022-07-07 22:07       ` Damien Le Moal
  0 siblings, 0 replies; 96+ messages in thread
From: Damien Le Moal @ 2022-07-07 22:07 UTC (permalink / raw)
  To: Bart Van Assche, Jens Axboe
  Cc: linux-block, Christoph Hellwig, Naohiro Aota, Johannes Thumshirn

On 7/8/22 02:58, Bart Van Assche wrote:
> On 6/30/22 16:39, Damien Le Moal wrote:
>> On 6/30/22 08:31, Bart Van Assche wrote:
>>> Cc: Damien Le Moal <damien.lemoal@opensource.wdc.com>
>>> Cc: Naohiro Aota <naohiro.aota@wdc.com>
>>> Cc: Johannes Thumshirn <jth@kernel.org>
>>> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
>>> ---
>>>   fs/zonefs/trace.h | 2 +-
>>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/fs/zonefs/trace.h b/fs/zonefs/trace.h
>>> index 21501da764bd..42edcfd393ed 100644
>>> --- a/fs/zonefs/trace.h
>>> +++ b/fs/zonefs/trace.h
>>> @@ -25,7 +25,7 @@ TRACE_EVENT(zonefs_zone_mgmt,
>>>   	    TP_STRUCT__entry(
>>>   			     __field(dev_t, dev)
>>>   			     __field(ino_t, ino)
>>> -			     __field(int, op)
>>> +			     __field(enum req_op, op)
>>>   			     __field(sector_t, sector)
>>>   			     __field(sector_t, nr_sectors)
>>>   	    ),
>>
>> Nit: the patch title could be more to the point, E.g.
>>
>> fs/zonefs: Use the enum req_op type for zone operation trace events
>>
>> Otherwise, looks good.
>>
>> Acked-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
> 
> How about using the title "fs/zonefs: Use the enum req_op type for 
> tracing request operations"?

Works too.

> 
> Thanks,
> 
> Bart.


-- 
Damien Le Moal
Western Digital Research


* Re: [PATCH v2 00/63] Improve static type checking for request flags
  2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
                   ` (62 preceding siblings ...)
  2022-06-29 23:31 ` [PATCH v2 63/63] fs/zonefs: Use the enum req_op type for request operations Bart Van Assche
@ 2022-07-13 21:48 ` Bart Van Assche
  2022-07-13 23:46   ` Jens Axboe
  63 siblings, 1 reply; 96+ messages in thread
From: Bart Van Assche @ 2022-07-13 21:48 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig

On 6/29/22 16:30, Bart Van Assche wrote:
> A source of confusion in the block layer is that it can be nontrivial to determine
> which type of flags a u32 function argument accepts. This patch series clears
> up that confusion for request flags by introducing a new __bitwise type, namely
> blk_opf_t. Additionally, the type 'int' is changed into 'enum req_op' where used
> to hold a request operation.
> 
> Analysis of the sparse warnings introduced by this conversion resulted in one
> bug fix ("blktrace: Trace remap operations correctly").
> 
> Although the number of patches in this series is significant, the risk of this
> patch series is low since most patches involve changing one integer type (int
> or u32) into another integer type of the same size (enum req_op or blk_opf_t).
> 
> Please consider this patch series for kernel v5.20.

(replying to my own email)

Hi Jens,

I think that everyone who is interested in reviewing this patch series 
has had sufficient time to review the patches in this patch series. Do 
you prefer to queue this patch series for kernel v5.20 or kernel v5.21?

Thanks,

Bart.
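
For readers unfamiliar with the __bitwise annotation mentioned in the quoted
cover letter, a minimal stand-alone sketch of the mechanism; the definitions
below are assumptions for illustration rather than the kernel's actual ones:

/* __bitwise is a sparse annotation: it expands to nothing for a normal
 * compile and to __attribute__((bitwise)) when checked with sparse
 * (__CHECKER__ defined). */
#ifdef __CHECKER__
#define __bitwise	__attribute__((bitwise))
#define __force		__attribute__((force))
#else
#define __bitwise
#define __force
#endif

typedef unsigned int __bitwise blk_opf_t;

#define REQ_META	((__force blk_opf_t)(1u << 8))	/* bit position is illustrative */

static void submit(blk_opf_t opf)
{
	(void)opf;	/* a real implementation would queue the request */
}

int main(void)
{
	submit(REQ_META);	/* fine: a correctly typed flag value */
	/* submit(0x100); */	/* sparse would flag this: a plain integer
				 * used where a bitwise type is expected */
	return 0;
}

When such code is run through sparse, mixing a plain integer into a
bitwise-typed argument produces a warning, which is the kind of static
check the quoted cover letter relies on.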


* Re: [PATCH v2 00/63] Improve static type checking for request flags
  2022-07-13 21:48 ` [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
@ 2022-07-13 23:46   ` Jens Axboe
  2022-07-13 23:49     ` Bart Van Assche
  0 siblings, 1 reply; 96+ messages in thread
From: Jens Axboe @ 2022-07-13 23:46 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: linux-block, Christoph Hellwig

On 7/13/22 3:48 PM, Bart Van Assche wrote:
> On 6/29/22 16:30, Bart Van Assche wrote:
>> A source of confusion in the block layer is that it can be nontrivial to determine
>> which type of flags a u32 function argument accepts. This patch series clears
>> up that confusion for request flags by introducing a new __bitwise type, namely
>> blk_opf_t. Additionally, the type 'int' is changed into 'enum req_op' where used
>> to hold a request operation.
>>
>> Analysis of the sparse warnings introduced by this conversion resulted in one
>> bug fix ("blktrace: Trace remap operations correctly").
>>
>> Although the number of patches in this series is significant, the risk of this
>> patch series is low since most patches involve changing one integer type (int
>> or u32) into another integer type of the same size (enum req_op or blk_opf_t).
>>
>> Please consider this patch series for kernel v5.20.
> 
> (replying to my own email)
> 
> Hi Jens,
> 
> I think that everyone who is interested in reviewing this patch series
> has had sufficient time to review the patches in this patch series. Do
> you prefer to queue this patch series for kernel v5.20 or kernel
> v5.21?

I've been pondering the same. I'm fine with going for 5.20 as this will
be a pain to maintain, but the first patch doesn't even apply to my
for-5.20/block branch...

-- 
Jens Axboe



* Re: [PATCH v2 00/63] Improve static type checking for request flags
  2022-07-13 23:46   ` Jens Axboe
@ 2022-07-13 23:49     ` Bart Van Assche
  2022-07-13 23:57       ` Jens Axboe
  0 siblings, 1 reply; 96+ messages in thread
From: Bart Van Assche @ 2022-07-13 23:49 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig

On 7/13/22 16:46, Jens Axboe wrote:
> On 7/13/22 3:48 PM, Bart Van Assche wrote:
>> I think that everyone who is interested in reviewing this patch series
>> has had sufficient time to review the patches in this patch series. Do
>> you prefer to queue this patch series for kernel v5.20 or kernel
>> v5.21?
> 
> I've been pondering the same. I'm fine with going for 5.20 as this will
> be a pain to maintain, but the first patch doesn't even apply to my
> for-5.20/block branch...

Hi Jens,

This patch series was prepared against your for-next branch. I will 
rebase it on top of your for-5.20/block branch, retest and repost it.

Thanks,

Bart.




* Re: [PATCH v2 00/63] Improve static type checking for request flags
  2022-07-13 23:49     ` Bart Van Assche
@ 2022-07-13 23:57       ` Jens Axboe
  0 siblings, 0 replies; 96+ messages in thread
From: Jens Axboe @ 2022-07-13 23:57 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: linux-block, Christoph Hellwig

On 7/13/22 5:49 PM, Bart Van Assche wrote:
> On 7/13/22 16:46, Jens Axboe wrote:
>> On 7/13/22 3:48 PM, Bart Van Assche wrote:
>>> I think that everyone who is interested in reviewing this patch series
>>> has had sufficient time to review the patches in this patch series. Do
>>> you prefer to queue this patch series for kernel v5.20 or kernel
>>> v5.21?
>>
>> I've been pondering the same. I'm fine with going for 5.20 as this will
>> be a pain to maintain, but the first patch doesn't even apply to my
>> for-5.20/block branch...
> 
> Hi Jens,
> 
> This patch series was prepared against your for-next branch. I will
> rebase it on top of your for-5.20/block branch, retest and repost it.

It probably just conflicts with changes added after you posted this one,
though not that many went in I think. In any case, please do respin on
the current branch and we'll see what it looks like.

-- 
Jens Axboe



end of thread (newest message: 2022-07-13 23:57 UTC)

Thread overview: 96+ messages
2022-06-29 23:30 [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
2022-06-29 23:30 ` [PATCH v2 01/63] treewide: Rename enum req_opf into enum req_op Bart Van Assche
2022-06-29 23:30 ` [PATCH v2 02/63] block: Use enum req_op where appropriate Bart Van Assche
2022-06-29 23:30 ` [PATCH v2 03/63] block: Change the type of the last .rw_page() argument Bart Van Assche
2022-06-29 23:30 ` [PATCH v2 04/63] block: Change the type of req_op() and bio_op() into enum req_op Bart Van Assche
2022-06-29 23:30 ` [PATCH v2 05/63] block: Introduce the type blk_opf_t Bart Van Assche
2022-06-29 23:30 ` [PATCH v2 06/63] block: Use the new blk_opf_t type Bart Van Assche
2022-06-29 23:30 ` [PATCH v2 07/63] block/bfq: " Bart Van Assche
2022-06-30  8:33   ` Jan Kara
2022-06-29 23:30 ` [PATCH v2 08/63] block/mq-deadline: " Bart Van Assche
2022-06-30 23:35   ` Damien Le Moal
2022-06-29 23:30 ` [PATCH v2 09/63] block/kyber: " Bart Van Assche
2022-06-29 23:30 ` [PATCH v2 10/63] blktrace: Trace remapped requests correctly Bart Van Assche
2022-06-30  2:05   ` NOMURA JUNICHI(野村 淳一)
2022-06-30 18:13     ` Bart Van Assche
2022-06-30 23:56       ` NOMURA JUNICHI(野村 淳一)
2022-06-29 23:30 ` [PATCH v2 11/63] blktrace: Use the new blk_opf_t type Bart Van Assche
2022-06-29 23:30 ` [PATCH v2 12/63] block/brd: Use the enum req_op type Bart Van Assche
2022-06-29 23:30 ` [PATCH v2 13/63] block/drbd: Use the enum req_op and blk_opf_t types Bart Van Assche
2022-06-29 23:30 ` [PATCH v2 14/63] block/drbd: Combine two drbd_submit_peer_request() arguments Bart Van Assche
2022-07-05 19:53   ` Christoph Böhmwalder
2022-06-29 23:30 ` [PATCH v2 15/63] block/floppy: Fix a sparse warning Bart Van Assche
2022-06-29 23:30 ` [PATCH v2 16/63] block/rnbd: Use blk_opf_t where appropriate Bart Van Assche
2022-07-01  4:47   ` Jinpu Wang
2022-06-29 23:30 ` [PATCH v2 17/63] xen-blkback: Use the enum req_op and blk_opf_t types Bart Van Assche
2022-06-30  7:17   ` Roger Pau Monné
2022-06-29 23:31 ` [PATCH v2 18/63] block/zram: Use enum req_op where appropriate Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 19/63] nvdimm-btt: Use the enum req_op type Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 20/63] um: Use enum req_op where appropriate Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 21/63] dm/core: Reduce the size of struct dm_io_request Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 22/63] dm/core: Rename kcopyd_job.rw into kcopyd.op Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 23/63] dm/core: Combine request operation type and flags Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 24/63] dm/ebs: Change 'int rw' into 'enum req_op op' Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 25/63] dm/dm-flakey: Use the new blk_opf_t type Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 26/63] dm/dm-integrity: Combine request operation and flags Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 27/63] dm mirror log: Use the new blk_opf_t type Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 28/63] dm-snap: Combine request operation type and flags Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 29/63] dm/zone: Use the enum req_op type Bart Van Assche
2022-06-30 23:36   ` Damien Le Moal
2022-06-29 23:31 ` [PATCH v2 30/63] dm/dm-zoned: " Bart Van Assche
2022-06-30 23:36   ` Damien Le Moal
2022-06-29 23:31 ` [PATCH v2 31/63] md/core: Combine two sync_page_io() arguments Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 32/63] md/bcache: Combine two uuid_io() arguments Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 33/63] md/bcache: Combine two prio_io() arguments Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 34/63] md/raid1: Use the new blk_opf_t type Bart Van Assche
2022-06-30 18:41   ` Song Liu
2022-06-29 23:31 ` [PATCH v2 35/63] md/raid10: " Bart Van Assche
2022-06-30 18:41   ` Song Liu
2022-06-29 23:31 ` [PATCH v2 36/63] md/raid5: Use the enum req_op and blk_opf_t types Bart Van Assche
2022-06-30 18:41   ` Song Liu
2022-06-29 23:31 ` [PATCH v2 37/63] nvme/host: " Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 38/63] nvme/target: Use the new blk_opf_t type Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 39/63] scsi/core: Improve static type checking Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 40/63] scsi/core: Change the return type of scsi_noretry_cmd() into bool Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 41/63] scsi/core: Use the new blk_opf_t type Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 42/63] scsi/device_handlers: " Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 43/63] scsi/ufs: Rename a 'dir' argument into 'op' Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 44/63] scsi/target: Use the new blk_opf_t type Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 45/63] mm: " Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 46/63] fs/buffer: " Bart Van Assche
2022-06-30  8:34   ` Jan Kara
2022-06-29 23:31 ` [PATCH v2 47/63] fs/buffer: Combine two submit_bh() and ll_rw_block() arguments Bart Van Assche
2022-06-30 18:43   ` Song Liu
2022-06-29 23:31 ` [PATCH v2 48/63] fs/direct-io: Reduce the size of struct dio Bart Van Assche
2022-06-30  8:50   ` Jan Kara
2022-06-30 19:06     ` Bart Van Assche
2022-07-01 10:58       ` Jan Kara
2022-06-29 23:31 ` [PATCH v2 49/63] fs/mpage: Use the new blk_opf_t type Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 50/63] fs/btrfs: Use the enum req_op and blk_opf_t types Bart Van Assche
2022-06-30 11:37   ` David Sterba
2022-06-29 23:31 ` [PATCH v2 51/63] fs/ext4: Use the new blk_opf_t type Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 52/63] fs/f2fs: Use the enum req_op and blk_opf_t types Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 53/63] fs/gfs2: " Bart Van Assche
2022-07-07  6:03   ` Andreas Gruenbacher
2022-06-29 23:31 ` [PATCH v2 54/63] fs/hfsplus: " Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 55/63] fs/iomap: Use the new blk_opf_t type Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 56/63] fs/jbd2: Fix the documentation of the jbd2_write_superblock() callers Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 57/63] fs/nfs: Use enum req_op where appropriate Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 58/63] fs/nilfs2: Use the enum req_op and blk_opf_t types Bart Van Assche
2022-07-01  7:03   ` Ryusuke Konishi
2022-06-29 23:31 ` [PATCH v2 59/63] fs/ntfs3: Use enum req_op where appropriate Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 60/63] fs/ocfs2: Use the enum req_op and blk_opf_t types Bart Van Assche
2022-07-01  1:47   ` Joseph Qi
2022-07-01 13:48     ` Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 61/63] PM: " Bart Van Assche
2022-06-30 15:21   ` Rafael J. Wysocki
2022-06-29 23:31 ` [PATCH v2 62/63] fs/xfs: " Bart Van Assche
2022-06-29 23:31 ` [PATCH v2 63/63] fs/zonefs: Use the enum req_op type for request operations Bart Van Assche
2022-06-30  6:02   ` Johannes Thumshirn
2022-06-30 23:39   ` Damien Le Moal
2022-07-07 17:58     ` Bart Van Assche
2022-07-07 22:07       ` Damien Le Moal
2022-07-13 21:48 ` [PATCH v2 00/63] Improve static type checking for request flags Bart Van Assche
2022-07-13 23:46   ` Jens Axboe
2022-07-13 23:49     ` Bart Van Assche
2022-07-13 23:57       ` Jens Axboe
