* [PATCH 0/4] eliminate SECTOR related magic numbers and duplicated conversions
@ 2020-05-05 11:55 Zhen Lei
2020-05-05 11:55 ` [PATCH 1/4] block: Move SECTORS_PER_PAGE and SECTORS_PER_PAGE_SHIFT definitions into <linux/blkdev.h> Zhen Lei
` (4 more replies)
0 siblings, 5 replies; 13+ messages in thread
From: Zhen Lei @ 2020-05-05 11:55 UTC (permalink / raw)
To: Minchan Kim, Nitin Gupta, Sergey Senozhatsky, Jens Axboe,
linux-block, Andrew Morton, linux-mm, Alasdair Kergon,
Mike Snitzer, dm-devel, Song Liu, linux-raid, linux-kernel
Cc: Zhen Lei
When I studied the code of mm/swap, I noticed that "1 << (PAGE_SHIFT - 9)"
appears many times, so I have tried to clean it up.
1. Replace "1 << (PAGE_SHIFT - 9)" or similar with SECTORS_PER_PAGE
2. Replace "PAGE_SHIFT - 9" with SECTORS_PER_PAGE_SHIFT
3. Replace "9" with SECTOR_SHIFT
4. Replace "512" with SECTOR_SIZE
No functional change.
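The replacements above are pure renames, so no behavior changes. Assuming a 4 KiB page (PAGE_SHIFT == 12 is an illustrative assumption here; the kernel derives it per architecture from <asm/page.h>), the equivalences can be sketched as:

```c
#include <assert.h>

/* Illustrative values; in the kernel these come from <asm/page.h>. */
#define PAGE_SHIFT 12
#define PAGE_SIZE (1UL << PAGE_SHIFT)

#define SECTOR_SHIFT 9
#define SECTOR_SIZE (1 << SECTOR_SHIFT)
#define SECTORS_PER_PAGE_SHIFT (PAGE_SHIFT - SECTOR_SHIFT)
#define SECTORS_PER_PAGE (1 << SECTORS_PER_PAGE_SHIFT)

/* Each named macro expands to exactly the magic number it replaces. */
static inline int equivalences_hold(void)
{
	return SECTOR_SIZE == 512 &&
	       SECTORS_PER_PAGE_SHIFT == (PAGE_SHIFT - 9) &&
	       SECTORS_PER_PAGE == (1 << (PAGE_SHIFT - 9)) &&
	       PAGE_SIZE / SECTOR_SIZE == SECTORS_PER_PAGE;
}
```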
Zhen Lei (4):
block: Move SECTORS_PER_PAGE and SECTORS_PER_PAGE_SHIFT definitions
into <linux/blkdev.h>
mm/swap: use SECTORS_PER_PAGE_SHIFT to clean up code
block: use SECTORS_PER_PAGE_SHIFT and SECTORS_PER_PAGE to clean up
code
mtd: eliminate SECTOR related magic numbers
block/blk-settings.c | 8 ++++----
block/partitions/core.c | 4 ++--
drivers/block/zram/zram_drv.h | 2 --
drivers/md/dm-table.c | 2 +-
drivers/md/raid1.c | 4 ++--
drivers/md/raid10.c | 10 +++++-----
drivers/md/raid5-cache.c | 10 +++++-----
include/linux/blkdev.h | 10 ++++++++--
mm/page_io.c | 4 ++--
mm/swapfile.c | 12 ++++++------
10 files changed, 35 insertions(+), 31 deletions(-)
--
2.26.0.106.g9fadedd
^ permalink raw reply [flat|nested] 13+ messages in thread
* [PATCH 1/4] block: Move SECTORS_PER_PAGE and SECTORS_PER_PAGE_SHIFT definitions into <linux/blkdev.h>
2020-05-05 11:55 [PATCH 0/4] eliminate SECTOR related magic numbers and duplicated conversions Zhen Lei
@ 2020-05-05 11:55 ` Zhen Lei
2020-05-05 12:10 ` Matthew Wilcox
2020-05-05 11:55 ` [PATCH 2/4] mm/swap: use SECTORS_PER_PAGE_SHIFT to clean up code Zhen Lei
` (3 subsequent siblings)
4 siblings, 1 reply; 13+ messages in thread
From: Zhen Lei @ 2020-05-05 11:55 UTC (permalink / raw)
To: Minchan Kim, Nitin Gupta, Sergey Senozhatsky, Jens Axboe,
linux-block, Andrew Morton, linux-mm, Alasdair Kergon,
Mike Snitzer, dm-devel, Song Liu, linux-raid, linux-kernel
Cc: Zhen Lei
This is similar to commit 233bde21a ("block: Move SECTOR_SIZE and
SECTOR_SHIFT definitions into <linux/blkdev.h>"); it prepares for cleaning
up dozens of duplicated conversions.
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
drivers/block/zram/zram_drv.h | 2 --
include/linux/blkdev.h | 10 ++++++++--
2 files changed, 8 insertions(+), 4 deletions(-)
diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
index f2fd46daa760..12309175d55e 100644
--- a/drivers/block/zram/zram_drv.h
+++ b/drivers/block/zram/zram_drv.h
@@ -21,8 +21,6 @@
#include "zcomp.h"
-#define SECTORS_PER_PAGE_SHIFT (PAGE_SHIFT - SECTOR_SHIFT)
-#define SECTORS_PER_PAGE (1 << SECTORS_PER_PAGE_SHIFT)
#define ZRAM_LOGICAL_BLOCK_SHIFT 12
#define ZRAM_LOGICAL_BLOCK_SIZE (1 << ZRAM_LOGICAL_BLOCK_SHIFT)
#define ZRAM_SECTOR_PER_LOGICAL_BLOCK \
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 32868fbedc9e..a752e1c80bf0 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -904,10 +904,16 @@ static inline struct request_queue *bdev_get_queue(struct block_device *bdev)
* multiple of 512 bytes. Hence these two constants.
*/
#ifndef SECTOR_SHIFT
-#define SECTOR_SHIFT 9
+#define SECTOR_SHIFT 9
#endif
#ifndef SECTOR_SIZE
-#define SECTOR_SIZE (1 << SECTOR_SHIFT)
+#define SECTOR_SIZE (1 << SECTOR_SHIFT)
+#endif
+#ifndef SECTORS_PER_PAGE_SHIFT
+#define SECTORS_PER_PAGE_SHIFT (PAGE_SHIFT - SECTOR_SHIFT)
+#endif
+#ifndef SECTORS_PER_PAGE
+#define SECTORS_PER_PAGE (1 << SECTORS_PER_PAGE_SHIFT)
#endif
/*
--
2.26.0.106.g9fadedd
* [PATCH 2/4] mm/swap: use SECTORS_PER_PAGE_SHIFT to clean up code
2020-05-05 11:55 [PATCH 0/4] eliminate SECTOR related magic numbers and duplicated conversions Zhen Lei
2020-05-05 11:55 ` [PATCH 1/4] block: Move SECTORS_PER_PAGE and SECTORS_PER_PAGE_SHIFT definitions into <linux/blkdev.h> Zhen Lei
@ 2020-05-05 11:55 ` Zhen Lei
2020-05-05 17:25 ` Matthew Wilcox
2020-05-05 11:55 ` [PATCH 3/4] block: use SECTORS_PER_PAGE_SHIFT and SECTORS_PER_PAGE " Zhen Lei
` (2 subsequent siblings)
4 siblings, 1 reply; 13+ messages in thread
From: Zhen Lei @ 2020-05-05 11:55 UTC (permalink / raw)
To: Minchan Kim, Nitin Gupta, Sergey Senozhatsky, Jens Axboe,
linux-block, Andrew Morton, linux-mm, Alasdair Kergon,
Mike Snitzer, dm-devel, Song Liu, linux-raid, linux-kernel
Cc: Zhen Lei
1. Replace "PAGE_SHIFT - 9" with SECTORS_PER_PAGE_SHIFT
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
mm/page_io.c | 4 ++--
mm/swapfile.c | 12 ++++++------
2 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/mm/page_io.c b/mm/page_io.c
index b1d4f4558e6b..51f25238fd9a 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -38,7 +38,7 @@ static struct bio *get_swap_bio(gfp_t gfp_flags,
bio->bi_iter.bi_sector = map_swap_page(page, &bdev);
bio_set_dev(bio, bdev);
- bio->bi_iter.bi_sector <<= PAGE_SHIFT - 9;
+ bio->bi_iter.bi_sector <<= SECTORS_PER_PAGE_SHIFT;
bio->bi_end_io = end_io;
bio_add_page(bio, page, PAGE_SIZE * hpage_nr_pages(page), 0);
@@ -266,7 +266,7 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
static sector_t swap_page_sector(struct page *page)
{
- return (sector_t)__page_file_index(page) << (PAGE_SHIFT - 9);
+ return (sector_t)__page_file_index(page) << SECTORS_PER_PAGE_SHIFT;
}
static inline void count_swpout_vm_event(struct page *page)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 5871a2aa86a5..0871023c0166 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -177,8 +177,8 @@ static int discard_swap(struct swap_info_struct *si)
/* Do not discard the swap header page! */
se = first_se(si);
- start_block = (se->start_block + 1) << (PAGE_SHIFT - 9);
- nr_blocks = ((sector_t)se->nr_pages - 1) << (PAGE_SHIFT - 9);
+ start_block = (se->start_block + 1) << SECTORS_PER_PAGE_SHIFT;
+ nr_blocks = ((sector_t)se->nr_pages - 1) << SECTORS_PER_PAGE_SHIFT;
if (nr_blocks) {
err = blkdev_issue_discard(si->bdev, start_block,
nr_blocks, GFP_KERNEL, 0);
@@ -188,8 +188,8 @@ static int discard_swap(struct swap_info_struct *si)
}
for (se = next_se(se); se; se = next_se(se)) {
- start_block = se->start_block << (PAGE_SHIFT - 9);
- nr_blocks = (sector_t)se->nr_pages << (PAGE_SHIFT - 9);
+ start_block = se->start_block << SECTORS_PER_PAGE_SHIFT;
+ nr_blocks = (sector_t)se->nr_pages << SECTORS_PER_PAGE_SHIFT;
err = blkdev_issue_discard(si->bdev, start_block,
nr_blocks, GFP_KERNEL, 0);
@@ -240,8 +240,8 @@ static void discard_swap_cluster(struct swap_info_struct *si,
start_page += nr_blocks;
nr_pages -= nr_blocks;
- start_block <<= PAGE_SHIFT - 9;
- nr_blocks <<= PAGE_SHIFT - 9;
+ start_block <<= SECTORS_PER_PAGE_SHIFT;
+ nr_blocks <<= SECTORS_PER_PAGE_SHIFT;
if (blkdev_issue_discard(si->bdev, start_block,
nr_blocks, GFP_NOIO, 0))
break;
--
2.26.0.106.g9fadedd
* [PATCH 3/4] block: use SECTORS_PER_PAGE_SHIFT and SECTORS_PER_PAGE to clean up code
2020-05-05 11:55 [PATCH 0/4] eliminate SECTOR related magic numbers and duplicated conversions Zhen Lei
2020-05-05 11:55 ` [PATCH 1/4] block: Move SECTORS_PER_PAGE and SECTORS_PER_PAGE_SHIFT definitions into <linux/blkdev.h> Zhen Lei
2020-05-05 11:55 ` [PATCH 2/4] mm/swap: use SECTORS_PER_PAGE_SHIFT to clean up code Zhen Lei
@ 2020-05-05 11:55 ` Zhen Lei
2020-05-05 11:55 ` [PATCH 4/4] mtd: eliminate SECTOR related magic numbers Zhen Lei
2020-05-05 17:32 ` [PATCH 0/4] eliminate SECTOR related magic numbers and duplicated conversions antlists
4 siblings, 0 replies; 13+ messages in thread
From: Zhen Lei @ 2020-05-05 11:55 UTC (permalink / raw)
To: Minchan Kim, Nitin Gupta, Sergey Senozhatsky, Jens Axboe,
linux-block, Andrew Morton, linux-mm, Alasdair Kergon,
Mike Snitzer, dm-devel, Song Liu, linux-raid, linux-kernel
Cc: Zhen Lei
1. Replace "1 << (PAGE_SHIFT - 9)" with SECTORS_PER_PAGE
2. Replace "PAGE_SHIFT - 9" with SECTORS_PER_PAGE_SHIFT
3. Replace "9" with SECTOR_SHIFT
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
block/blk-settings.c | 8 ++++----
block/partitions/core.c | 4 ++--
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 14397b4c4b53..a62037a7ff0f 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -149,8 +149,8 @@ void blk_queue_max_hw_sectors(struct request_queue *q, unsigned int max_hw_secto
struct queue_limits *limits = &q->limits;
unsigned int max_sectors;
- if ((max_hw_sectors << 9) < PAGE_SIZE) {
- max_hw_sectors = 1 << (PAGE_SHIFT - 9);
+ if ((max_hw_sectors << SECTOR_SHIFT) < PAGE_SIZE) {
+ max_hw_sectors = SECTORS_PER_PAGE;
printk(KERN_INFO "%s: set to minimum %d\n",
__func__, max_hw_sectors);
}
@@ -159,7 +159,7 @@ void blk_queue_max_hw_sectors(struct request_queue *q, unsigned int max_hw_secto
max_sectors = min_not_zero(max_hw_sectors, limits->max_dev_sectors);
max_sectors = min_t(unsigned int, max_sectors, BLK_DEF_MAX_SECTORS);
limits->max_sectors = max_sectors;
- q->backing_dev_info->io_pages = max_sectors >> (PAGE_SHIFT - 9);
+ q->backing_dev_info->io_pages = max_sectors >> SECTORS_PER_PAGE_SHIFT;
}
EXPORT_SYMBOL(blk_queue_max_hw_sectors);
@@ -630,7 +630,7 @@ void disk_stack_limits(struct gendisk *disk, struct block_device *bdev,
}
t->backing_dev_info->io_pages =
- t->limits.max_sectors >> (PAGE_SHIFT - 9);
+ t->limits.max_sectors >> SECTORS_PER_PAGE_SHIFT;
}
EXPORT_SYMBOL(disk_stack_limits);
diff --git a/block/partitions/core.c b/block/partitions/core.c
index 9ef48a8cff86..6e44ac840ca0 100644
--- a/block/partitions/core.c
+++ b/block/partitions/core.c
@@ -641,7 +641,7 @@ void *read_part_sector(struct parsed_partitions *state, sector_t n, Sector *p)
}
page = read_mapping_page(mapping,
- (pgoff_t)(n >> (PAGE_SHIFT - 9)), NULL);
+ (pgoff_t)(n >> SECTORS_PER_PAGE_SHIFT), NULL);
if (IS_ERR(page))
goto out;
if (PageError(page))
@@ -649,7 +649,7 @@ void *read_part_sector(struct parsed_partitions *state, sector_t n, Sector *p)
p->v = page;
return (unsigned char *)page_address(page) +
- ((n & ((1 << (PAGE_SHIFT - 9)) - 1)) << SECTOR_SHIFT);
+ ((n & (SECTORS_PER_PAGE - 1)) << SECTOR_SHIFT);
out_put_page:
put_page(page);
out:
--
2.26.0.106.g9fadedd
* [PATCH 4/4] mtd: eliminate SECTOR related magic numbers
2020-05-05 11:55 [PATCH 0/4] eliminate SECTOR related magic numbers and duplicated conversions Zhen Lei
` (2 preceding siblings ...)
2020-05-05 11:55 ` [PATCH 3/4] block: use SECTORS_PER_PAGE_SHIFT and SECTORS_PER_PAGE " Zhen Lei
@ 2020-05-05 11:55 ` Zhen Lei
2020-05-05 17:32 ` [PATCH 0/4] eliminate SECTOR related magic numbers and duplicated conversions antlists
4 siblings, 0 replies; 13+ messages in thread
From: Zhen Lei @ 2020-05-05 11:55 UTC (permalink / raw)
To: Minchan Kim, Nitin Gupta, Sergey Senozhatsky, Jens Axboe,
linux-block, Andrew Morton, linux-mm, Alasdair Kergon,
Mike Snitzer, dm-devel, Song Liu, linux-raid, linux-kernel
Cc: Zhen Lei
1. Replace "1 << (PAGE_SHIFT - 9)" or similar with SECTORS_PER_PAGE
2. Replace "PAGE_SHIFT - 9" with SECTORS_PER_PAGE_SHIFT
3. Replace "9" with SECTOR_SHIFT
4. Replace "512" with SECTOR_SIZE
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
drivers/md/dm-table.c | 2 +-
drivers/md/raid1.c | 4 ++--
drivers/md/raid10.c | 10 +++++-----
drivers/md/raid5-cache.c | 10 +++++-----
4 files changed, 13 insertions(+), 13 deletions(-)
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 0a2cc197f62b..cf9d85ec66fd 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1964,7 +1964,7 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
#endif
/* Allow reads to exceed readahead limits */
- q->backing_dev_info->io_pages = limits->max_sectors >> (PAGE_SHIFT - 9);
+ q->backing_dev_info->io_pages = limits->max_sectors >> SECTORS_PER_PAGE_SHIFT;
}
unsigned int dm_table_get_num_targets(struct dm_table *t)
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index cd810e195086..35d3fa22dd54 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -2129,7 +2129,7 @@ static void process_checks(struct r1bio *r1_bio)
int vcnt;
/* Fix variable parts of all bios */
- vcnt = (r1_bio->sectors + PAGE_SIZE / 512 - 1) >> (PAGE_SHIFT - 9);
+ vcnt = (r1_bio->sectors + SECTORS_PER_PAGE - 1) >> SECTORS_PER_PAGE_SHIFT;
for (i = 0; i < conf->raid_disks * 2; i++) {
blk_status_t status;
struct bio *b = r1_bio->bios[i];
@@ -2148,7 +2148,7 @@ static void process_checks(struct r1bio *r1_bio)
b->bi_private = rp;
/* initialize bvec table again */
- md_bio_reset_resync_pages(b, rp, r1_bio->sectors << 9);
+ md_bio_reset_resync_pages(b, rp, r1_bio->sectors << SECTOR_SHIFT);
}
for (primary = 0; primary < conf->raid_disks * 2; primary++)
if (r1_bio->bios[primary]->bi_end_io == end_sync_read &&
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index ec136e44aef7..3202953a800d 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -2025,11 +2025,11 @@ static void sync_request_write(struct mddev *mddev, struct r10bio *r10_bio)
first = i;
fbio = r10_bio->devs[i].bio;
- fbio->bi_iter.bi_size = r10_bio->sectors << 9;
+ fbio->bi_iter.bi_size = r10_bio->sectors << SECTOR_SHIFT;
fbio->bi_iter.bi_idx = 0;
fpages = get_resync_pages(fbio)->pages;
- vcnt = (r10_bio->sectors + (PAGE_SIZE >> 9) - 1) >> (PAGE_SHIFT - 9);
+ vcnt = (r10_bio->sectors + SECTORS_PER_PAGE - 1) >> SECTORS_PER_PAGE_SHIFT;
/* now find blocks with errors */
for (i=0 ; i < conf->copies ; i++) {
int j, d;
@@ -2054,13 +2054,13 @@ static void sync_request_write(struct mddev *mddev, struct r10bio *r10_bio)
int sectors = r10_bio->sectors;
for (j = 0; j < vcnt; j++) {
int len = PAGE_SIZE;
- if (sectors < (len / 512))
- len = sectors * 512;
+ if (sectors < (len / SECTOR_SIZE))
+ len = sectors * SECTOR_SIZE;
if (memcmp(page_address(fpages[j]),
page_address(tpages[j]),
len))
break;
- sectors -= len/512;
+ sectors -= len / SECTOR_SIZE;
}
if (j == vcnt)
continue;
diff --git a/drivers/md/raid5-cache.c b/drivers/md/raid5-cache.c
index 9b6da759dca2..5bd8e2b51341 100644
--- a/drivers/md/raid5-cache.c
+++ b/drivers/md/raid5-cache.c
@@ -833,7 +833,7 @@ static void r5l_append_payload_meta(struct r5l_log *log, u16 type,
payload->header.type = cpu_to_le16(type);
payload->header.flags = cpu_to_le16(0);
payload->size = cpu_to_le32((1 + !!checksum2_valid) <<
- (PAGE_SHIFT - 9));
+ SECTORS_PER_PAGE_SHIFT);
payload->location = cpu_to_le64(location);
payload->checksum[0] = cpu_to_le32(checksum1);
if (checksum2_valid)
@@ -1042,7 +1042,7 @@ int r5l_write_stripe(struct r5l_log *log, struct stripe_head *sh)
mutex_lock(&log->io_mutex);
/* meta + data */
- reserve = (1 + write_disks) << (PAGE_SHIFT - 9);
+ reserve = (1 + write_disks) << SECTORS_PER_PAGE_SHIFT;
if (log->r5c_journal_mode == R5C_JOURNAL_MODE_WRITE_THROUGH) {
if (!r5l_has_free_space(log, reserve)) {
@@ -2053,7 +2053,7 @@ r5l_recovery_verify_data_checksum_for_mb(struct r5l_log *log,
le32_to_cpu(payload->size));
mb_offset += sizeof(struct r5l_payload_data_parity) +
sizeof(__le32) *
- (le32_to_cpu(payload->size) >> (PAGE_SHIFT - 9));
+ (le32_to_cpu(payload->size) >> SECTORS_PER_PAGE_SHIFT);
}
}
@@ -2199,7 +2199,7 @@ r5c_recovery_analyze_meta_block(struct r5l_log *log,
mb_offset += sizeof(struct r5l_payload_data_parity) +
sizeof(__le32) *
- (le32_to_cpu(payload->size) >> (PAGE_SHIFT - 9));
+ (le32_to_cpu(payload->size) >> SECTORS_PER_PAGE_SHIFT);
}
return 0;
@@ -2916,7 +2916,7 @@ int r5c_cache_data(struct r5l_log *log, struct stripe_head *sh)
mutex_lock(&log->io_mutex);
/* meta + data */
- reserve = (1 + pages) << (PAGE_SHIFT - 9);
+ reserve = (1 + pages) << SECTORS_PER_PAGE_SHIFT;
if (test_bit(R5C_LOG_CRITICAL, &conf->cache_state) &&
sh->log_start == MaxSector)
--
2.26.0.106.g9fadedd
* Re: [PATCH 1/4] block: Move SECTORS_PER_PAGE and SECTORS_PER_PAGE_SHIFT definitions into <linux/blkdev.h>
2020-05-05 11:55 ` [PATCH 1/4] block: Move SECTORS_PER_PAGE and SECTORS_PER_PAGE_SHIFT definitions into <linux/blkdev.h> Zhen Lei
@ 2020-05-05 12:10 ` Matthew Wilcox
2020-05-06 4:06 ` Leizhen (ThunderTown)
0 siblings, 1 reply; 13+ messages in thread
From: Matthew Wilcox @ 2020-05-05 12:10 UTC (permalink / raw)
To: Zhen Lei
Cc: Minchan Kim, Nitin Gupta, Sergey Senozhatsky, Jens Axboe,
linux-block, Andrew Morton, linux-mm, Alasdair Kergon,
Mike Snitzer, dm-devel, Song Liu, linux-raid, linux-kernel
On Tue, May 05, 2020 at 07:55:40PM +0800, Zhen Lei wrote:
> +#ifndef SECTORS_PER_PAGE_SHIFT
> +#define SECTORS_PER_PAGE_SHIFT (PAGE_SHIFT - SECTOR_SHIFT)
> +#endif
> +#ifndef SECTORS_PER_PAGE
> +#define SECTORS_PER_PAGE (1 << SECTORS_PER_PAGE_SHIFT)
> #endif
I find SECTORS_PER_PAGE_SHIFT quite hard to read. I had a quick skim
of your other patches, and it seems to me that we could replace
'<< SECTORS_PER_PAGE_SHIFT' with '* SECTORS_PER_PAGE' and it would be
more readable in every case.
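The swap suggested here relies on the shift/multiply equivalence for a power-of-two factor; a minimal sketch of that equivalence, assuming PAGE_SHIFT is 12 (the common 4 KiB-page case):

```c
#include <assert.h>

#define PAGE_SHIFT 12   /* assumed for illustration */
#define SECTOR_SHIFT 9
#define SECTORS_PER_PAGE_SHIFT (PAGE_SHIFT - SECTOR_SHIFT)
#define SECTORS_PER_PAGE (1 << SECTORS_PER_PAGE_SHIFT)

/* For any sector count x, the shift and the multiply agree, since
 * SECTORS_PER_PAGE is a power of two; the compiler generates the
 * same code either way, so only readability differs. */
static inline int shift_equals_multiply(unsigned long long x)
{
	return (x << SECTORS_PER_PAGE_SHIFT) == x * SECTORS_PER_PAGE;
}
```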
* Re: [PATCH 2/4] mm/swap: use SECTORS_PER_PAGE_SHIFT to clean up code
2020-05-05 11:55 ` [PATCH 2/4] mm/swap: use SECTORS_PER_PAGE_SHIFT to clean up code Zhen Lei
@ 2020-05-05 17:25 ` Matthew Wilcox
2020-05-06 1:33 ` Leizhen (ThunderTown)
0 siblings, 1 reply; 13+ messages in thread
From: Matthew Wilcox @ 2020-05-05 17:25 UTC (permalink / raw)
To: Zhen Lei
Cc: Minchan Kim, Nitin Gupta, Sergey Senozhatsky, Jens Axboe,
linux-block, Andrew Morton, linux-mm, Alasdair Kergon,
Mike Snitzer, dm-devel, Song Liu, linux-raid, linux-kernel
On Tue, May 05, 2020 at 07:55:41PM +0800, Zhen Lei wrote:
> +++ b/mm/swapfile.c
> @@ -177,8 +177,8 @@ static int discard_swap(struct swap_info_struct *si)
>
> /* Do not discard the swap header page! */
> se = first_se(si);
> - start_block = (se->start_block + 1) << (PAGE_SHIFT - 9);
> - nr_blocks = ((sector_t)se->nr_pages - 1) << (PAGE_SHIFT - 9);
> + start_block = (se->start_block + 1) << SECTORS_PER_PAGE_SHIFT;
> + nr_blocks = ((sector_t)se->nr_pages - 1) << SECTORS_PER_PAGE_SHIFT;
Thinking about this some more, wouldn't this look better?
start_block = page_sectors(se->start_block + 1);
nr_block = page_sectors(se->nr_pages - 1);
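A page_sectors() helper along these lines (hypothetical at the time of this thread, not code from the series) might be sketched as follows, with PAGE_SHIFT assumed to be 12 for illustration:

```c
#include <assert.h>

typedef unsigned long long sector_t;  /* stand-in for the kernel type */

#define PAGE_SHIFT 12   /* assumed for illustration */
#define SECTOR_SHIFT 9
#define SECTORS_PER_PAGE_SHIFT (PAGE_SHIFT - SECTOR_SHIFT)

/* Hypothetical helper: convert a count of pages into a count of
 * 512-byte sectors. */
static inline sector_t page_sectors(sector_t npages)
{
	return npages << SECTORS_PER_PAGE_SHIFT;
}
```

With such a helper, the caller reads as `start_block = page_sectors(se->start_block + 1);`, hiding the shift entirely.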
* Re: [PATCH 0/4] eliminate SECTOR related magic numbers and duplicated conversions
2020-05-05 11:55 [PATCH 0/4] eliminate SECTOR related magic numbers and duplicated conversions Zhen Lei
` (3 preceding siblings ...)
2020-05-05 11:55 ` [PATCH 4/4] mtd: eliminate SECTOR related magic numbers Zhen Lei
@ 2020-05-05 17:32 ` antlists
2020-05-05 18:01 ` Matthew Wilcox
4 siblings, 1 reply; 13+ messages in thread
From: antlists @ 2020-05-05 17:32 UTC (permalink / raw)
To: Zhen Lei, Minchan Kim, Nitin Gupta, Sergey Senozhatsky,
Jens Axboe, linux-block, Andrew Morton, linux-mm,
Alasdair Kergon, Mike Snitzer, dm-devel, Song Liu, linux-raid,
linux-kernel
On 05/05/2020 12:55, Zhen Lei wrote:
> When I studied the code of mm/swap, I noticed that "1 << (PAGE_SHIFT - 9)"
> appears many times, so I have tried to clean it up.
>
> 1. Replace "1 << (PAGE_SHIFT - 9)" or similar with SECTORS_PER_PAGE
> 2. Replace "PAGE_SHIFT - 9" with SECTORS_PER_PAGE_SHIFT
> 3. Replace "9" with SECTOR_SHIFT
> 4. Replace "512" with SECTOR_SIZE
Naive question - what is happening with 4096-byte sectors? Do we need
to forward-plan?
Cheers,
Wol
* Re: [PATCH 0/4] eliminate SECTOR related magic numbers and duplicated conversions
2020-05-05 17:32 ` [PATCH 0/4] eliminate SECTOR related magic numbers and duplicated conversions antlists
@ 2020-05-05 18:01 ` Matthew Wilcox
0 siblings, 0 replies; 13+ messages in thread
From: Matthew Wilcox @ 2020-05-05 18:01 UTC (permalink / raw)
To: antlists
Cc: Zhen Lei, Minchan Kim, Nitin Gupta, Sergey Senozhatsky,
Jens Axboe, linux-block, Andrew Morton, linux-mm,
Alasdair Kergon, Mike Snitzer, dm-devel, Song Liu, linux-raid,
linux-kernel
On Tue, May 05, 2020 at 06:32:36PM +0100, antlists wrote:
> On 05/05/2020 12:55, Zhen Lei wrote:
> > When I studied the code of mm/swap, I noticed that "1 << (PAGE_SHIFT - 9)"
> > appears many times, so I have tried to clean it up.
> >
> > 1. Replace "1 << (PAGE_SHIFT - 9)" or similar with SECTORS_PER_PAGE
> > 2. Replace "PAGE_SHIFT - 9" with SECTORS_PER_PAGE_SHIFT
> > 3. Replace "9" with SECTOR_SHIFT
> > 4. Replace "512" with SECTOR_SIZE
>
> Naive question - what is happening with 4096-byte sectors? Do we need to
> forward-plan?
They're fully supported already, but Linux defines a sector to be 512
bytes. So we multiply by 8 and divide by 8 a few times unnecessarily,
but it's not worth making the sector size a per-device property.
Good thought, though.
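The multiply-by-8/divide-by-8 round trip described above can be sketched as follows; the 4096-byte device sector size is an assumption for illustration (it follows from the 4K-sector devices being discussed):

```c
#include <assert.h>

#define SECTOR_SHIFT 9      /* Linux fixes the kernel sector at 512 bytes */
#define DEV_BLOCK_SHIFT 12  /* assumed: a device with 4096-byte sectors */

/* Device block -> 512-byte kernel sectors: multiply by 8. */
static inline unsigned long long dev_block_to_sectors(unsigned long long blk)
{
	return blk << (DEV_BLOCK_SHIFT - SECTOR_SHIFT);
}

/* 512-byte kernel sectors -> device block: divide by 8.
 * The round trip is exact for aligned I/O, which is why keeping a
 * fixed 512-byte kernel sector costs only these cheap shifts. */
static inline unsigned long long sectors_to_dev_block(unsigned long long sect)
{
	return sect >> (DEV_BLOCK_SHIFT - SECTOR_SHIFT);
}
```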
* Re: [PATCH 2/4] mm/swap: use SECTORS_PER_PAGE_SHIFT to clean up code
2020-05-05 17:25 ` Matthew Wilcox
@ 2020-05-06 1:33 ` Leizhen (ThunderTown)
2020-05-06 3:47 ` Leizhen (ThunderTown)
0 siblings, 1 reply; 13+ messages in thread
From: Leizhen (ThunderTown) @ 2020-05-06 1:33 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Minchan Kim, Nitin Gupta, Sergey Senozhatsky, Jens Axboe,
linux-block, Andrew Morton, linux-mm, Alasdair Kergon,
Mike Snitzer, dm-devel, Song Liu, linux-raid, linux-kernel
On 2020/5/6 1:25, Matthew Wilcox wrote:
> On Tue, May 05, 2020 at 07:55:41PM +0800, Zhen Lei wrote:
>> +++ b/mm/swapfile.c
>> @@ -177,8 +177,8 @@ static int discard_swap(struct swap_info_struct *si)
>>
>> /* Do not discard the swap header page! */
>> se = first_se(si);
>> - start_block = (se->start_block + 1) << (PAGE_SHIFT - 9);
>> - nr_blocks = ((sector_t)se->nr_pages - 1) << (PAGE_SHIFT - 9);
>> + start_block = (se->start_block + 1) << SECTORS_PER_PAGE_SHIFT;
>> + nr_blocks = ((sector_t)se->nr_pages - 1) << SECTORS_PER_PAGE_SHIFT;
>
> Thinking about this some more, wouldn't this look better?
>
> start_block = page_sectors(se->start_block + 1);
> nr_block = page_sectors(se->nr_pages - 1);
>
OK, that's fine, it's clearer. And this way, the lines won't exceed 80 columns.
* Re: [PATCH 2/4] mm/swap: use SECTORS_PER_PAGE_SHIFT to clean up code
2020-05-06 1:33 ` Leizhen (ThunderTown)
@ 2020-05-06 3:47 ` Leizhen (ThunderTown)
2020-05-06 9:16 ` Leizhen (ThunderTown)
0 siblings, 1 reply; 13+ messages in thread
From: Leizhen (ThunderTown) @ 2020-05-06 3:47 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Minchan Kim, Nitin Gupta, Sergey Senozhatsky, Jens Axboe,
linux-block, Andrew Morton, linux-mm, Alasdair Kergon,
Mike Snitzer, dm-devel, Song Liu, linux-raid, linux-kernel
On 2020/5/6 9:33, Leizhen (ThunderTown) wrote:
>
>
> On 2020/5/6 1:25, Matthew Wilcox wrote:
>> On Tue, May 05, 2020 at 07:55:41PM +0800, Zhen Lei wrote:
>>> +++ b/mm/swapfile.c
>>> @@ -177,8 +177,8 @@ static int discard_swap(struct swap_info_struct *si)
>>>
>>> /* Do not discard the swap header page! */
>>> se = first_se(si);
>>> - start_block = (se->start_block + 1) << (PAGE_SHIFT - 9);
>>> - nr_blocks = ((sector_t)se->nr_pages - 1) << (PAGE_SHIFT - 9);
>>> + start_block = (se->start_block + 1) << SECTORS_PER_PAGE_SHIFT;
>>> + nr_blocks = ((sector_t)se->nr_pages - 1) << SECTORS_PER_PAGE_SHIFT;
>>
>> Thinking about this some more, wouldn't this look better?
>>
>> start_block = page_sectors(se->start_block + 1);
>> nr_block = page_sectors(se->nr_pages - 1);
>>
>
> OK, that's fine, it's clearer. And this way, the lines won't exceed 80 columns.
Should we rename "page_sectors" to "page_to_sectors"? We may also need to
define "sectors_to_page".
* Re: [PATCH 1/4] block: Move SECTORS_PER_PAGE and SECTORS_PER_PAGE_SHIFT definitions into <linux/blkdev.h>
2020-05-05 12:10 ` Matthew Wilcox
@ 2020-05-06 4:06 ` Leizhen (ThunderTown)
0 siblings, 0 replies; 13+ messages in thread
From: Leizhen (ThunderTown) @ 2020-05-06 4:06 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Minchan Kim, Nitin Gupta, Sergey Senozhatsky, Jens Axboe,
linux-block, Andrew Morton, linux-mm, Alasdair Kergon,
Mike Snitzer, dm-devel, Song Liu, linux-raid, linux-kernel
On 2020/5/5 20:10, Matthew Wilcox wrote:
> On Tue, May 05, 2020 at 07:55:40PM +0800, Zhen Lei wrote:
>> +#ifndef SECTORS_PER_PAGE_SHIFT
>> +#define SECTORS_PER_PAGE_SHIFT (PAGE_SHIFT - SECTOR_SHIFT)
>> +#endif
>> +#ifndef SECTORS_PER_PAGE
>> +#define SECTORS_PER_PAGE (1 << SECTORS_PER_PAGE_SHIFT)
>> #endif
>
> I find SECTORS_PER_PAGE_SHIFT quite hard to read. I had a quick skim
> of your other patches, and it seems to me that we could replace
> '<< SECTORS_PER_PAGE_SHIFT' with '* SECTORS_PER_PAGE' and it would be
> more readable in every case.
OK, I will delete SECTORS_PER_PAGE_SHIFT and replace the shifts with {*|/}
SECTORS_PER_PAGE where they are not suitable to be replaced by
sectors_to_page()/page_to_sectors().
* Re: [PATCH 2/4] mm/swap: use SECTORS_PER_PAGE_SHIFT to clean up code
2020-05-06 3:47 ` Leizhen (ThunderTown)
@ 2020-05-06 9:16 ` Leizhen (ThunderTown)
0 siblings, 0 replies; 13+ messages in thread
From: Leizhen (ThunderTown) @ 2020-05-06 9:16 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Minchan Kim, Nitin Gupta, Sergey Senozhatsky, Jens Axboe,
linux-block, Andrew Morton, linux-mm, Alasdair Kergon,
Mike Snitzer, dm-devel, Song Liu, linux-raid, linux-kernel
On 2020/5/6 11:47, Leizhen (ThunderTown) wrote:
>
>
> On 2020/5/6 9:33, Leizhen (ThunderTown) wrote:
>>
>>
>> On 2020/5/6 1:25, Matthew Wilcox wrote:
>>> On Tue, May 05, 2020 at 07:55:41PM +0800, Zhen Lei wrote:
>>>> +++ b/mm/swapfile.c
>>>> @@ -177,8 +177,8 @@ static int discard_swap(struct swap_info_struct *si)
>>>>
>>>> /* Do not discard the swap header page! */
>>>> se = first_se(si);
>>>> - start_block = (se->start_block + 1) << (PAGE_SHIFT - 9);
>>>> - nr_blocks = ((sector_t)se->nr_pages - 1) << (PAGE_SHIFT - 9);
>>>> + start_block = (se->start_block + 1) << SECTORS_PER_PAGE_SHIFT;
>>>> + nr_blocks = ((sector_t)se->nr_pages - 1) << SECTORS_PER_PAGE_SHIFT;
>>>
>>> Thinking about this some more, wouldn't this look better?
>>>
>>> start_block = page_sectors(se->start_block + 1);
>>> nr_block = page_sectors(se->nr_pages - 1);
>>>
>>
>> OK, that's fine, it's clearer. And this way, the lines won't exceed 80 columns.
>
> Should we rename "page_sectors" to "page_to_sectors"? We may also need to
> define "sectors_to_page".
Change "sectors_to_page" to "sectors_to_npage", where npage means "number of
pages" or "page number", to distinguish it from helpers like "pfn_to_page()",
which return a pointer to "struct page".
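Under the naming being discussed, the inverse pair might look like this (a sketch of the proposal, not code from the series; the PAGE_SHIFT value is assumed for illustration):

```c
#include <assert.h>

#define PAGE_SHIFT 12   /* assumed for illustration */
#define SECTOR_SHIFT 9
#define SECTORS_PER_PAGE_SHIFT (PAGE_SHIFT - SECTOR_SHIFT)

/* Hypothetical pair of converters using the proposed names. */
static inline unsigned long long page_to_sectors(unsigned long long npages)
{
	return npages << SECTORS_PER_PAGE_SHIFT;
}

/* "npage" here is a count of pages, not a struct page pointer,
 * which is what distinguishes it from pfn_to_page() and friends. */
static inline unsigned long long sectors_to_npage(unsigned long long sectors)
{
	return sectors >> SECTORS_PER_PAGE_SHIFT;
}
```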