On 2020/9/17 at 12:01 AM, David Sterba wrote:
> On Tue, Sep 15, 2020 at 01:35:17PM +0800, Qu Wenruo wrote:
>> generic_bin_search() distinguishes between reading a key which doesn't
>> cross a page and one which does. However this distinction is not
>> necessary since read_extent_buffer handles both cases transparently.
>>
>> Just use read_extent_buffer to streamline the code.
>>
>> Signed-off-by: Qu Wenruo
>> ---
>>  fs/btrfs/ctree.c | 13 ++-----------
>>  1 file changed, 2 insertions(+), 11 deletions(-)
>>
>> diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
>> index cd1cd673bc0b..e204e1320745 100644
>> --- a/fs/btrfs/ctree.c
>> +++ b/fs/btrfs/ctree.c
>> @@ -1697,7 +1697,6 @@ static noinline int generic_bin_search(struct extent_buffer *eb,
>>  	}
>>
>>  	while (low < high) {
>> -		unsigned long oip;
>>  		unsigned long offset;
>>  		struct btrfs_disk_key *tmp;
>>  		struct btrfs_disk_key unaligned;
>> @@ -1705,17 +1704,9 @@ static noinline int generic_bin_search(struct extent_buffer *eb,
>>
>>  		mid = (low + high) / 2;
>>  		offset = p + mid * item_size;
>> -		oip = offset_in_page(offset);
>>
>> -		if (oip + key_size <= PAGE_SIZE) {
>> -			const unsigned long idx = offset >> PAGE_SHIFT;
>> -			char *kaddr = page_address(eb->pages[idx]);
>> -
>> -			tmp = (struct btrfs_disk_key *)(kaddr + oip);
>> -		} else {
>> -			read_extent_buffer(eb, &unaligned, offset, key_size);
>> -			tmp = &unaligned;
>> -		}
>> +		read_extent_buffer(eb, &unaligned, offset, key_size);
>> +		tmp = &unaligned;
>
> Reading from the first page is a performance optimization on systems
> with 4K pages, ie. the majority. I'm not in favor of removing it just to
> make the code look nicer.
>

For 4K page systems, the optimization only saves the cost of one
read_extent_buffer() call, and inside read_extent_buffer() we still end
up doing the same thing.

Otherwise we would need to manually call get_eb_page_offset() here to
make the fast path work for subpage.

I wonder if it's really worth it.

Thanks,
Qu
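
For reference, roughly what keeping the fast path for subpage might look
like is sketched below. This is only a sketch: get_eb_page_offset() is
the helper mentioned above, and get_eb_page_index() is an assumed
companion helper for the page index, so the exact names and signatures
may differ from whatever lands with the subpage patchset.

		mid = (low + high) / 2;
		offset = p + mid * item_size;
		/*
		 * Sketch only: get_eb_page_offset() is assumed to return the
		 * offset inside the backing page (taking eb->start into
		 * account for subpage), and get_eb_page_index() is an assumed
		 * helper returning the index into eb->pages[].
		 */
		oip = get_eb_page_offset(eb, offset);

		if (oip + key_size <= PAGE_SIZE) {
			/* Fast path: the key does not cross a page boundary */
			const unsigned long idx = get_eb_page_index(eb, offset);
			char *kaddr = page_address(eb->pages[idx]);

			tmp = (struct btrfs_disk_key *)(kaddr + oip);
		} else {
			/* Slow path: the key crosses a page boundary */
			read_extent_buffer(eb, &unaligned, offset, key_size);
			tmp = &unaligned;
		}

That keeps the direct page_address() access for the common case, at the
cost of the extra helper calls and the extra branch the patch above
removes.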