* [PATCH] Btrfs: optimize key searches in btrfs_search_slot
@ 2013-08-29 12:44 Filipe David Borba Manana
  2013-08-29 13:42 ` [PATCH v2] " Filipe David Borba Manana
                   ` (6 more replies)
  0 siblings, 7 replies; 24+ messages in thread
From: Filipe David Borba Manana @ 2013-08-29 12:44 UTC (permalink / raw)
  To: linux-btrfs; +Cc: sbehrens, Filipe David Borba Manana

When the binary search returns 0 (exact match), the target key
will necessarily be at slot 0 of all nodes below the current one,
so in this case the binary search is not needed because it will
always return 0, and we waste time doing it, holding node locks
for longer than necessary, etc.
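
The invariant behind this can be sketched in user space: in a btrfs-style
B+ tree, the key at slot N of an interior node is a copy of the first key
(slot 0) of the child it points to, so an exact match at one level pins
the key to slot 0 at every level below. A minimal toy model follows
(simplified fixed-size nodes and integer keys, not the real btrfs
structures):

```c
#include <assert.h>
#include <stddef.h>

/* Toy interior/leaf node: keys[i] of an interior node duplicates the
 * first key (slot 0) of children[i], as in btrfs. */
struct toy_node {
	int nr;
	long keys[8];
	struct toy_node *children[8];	/* all NULL at leaf level */
};

/* Plain lower-bound binary search.  Returns 0 on exact match, 1
 * otherwise, and stores the slot in *slot (same contract as the
 * bin_search the patch builds on). */
static int toy_bin_search(const struct toy_node *n, long key, int *slot)
{
	int low = 0, high = n->nr;

	while (low < high) {
		int mid = (low + high) / 2;

		if (n->keys[mid] < key)
			low = mid + 1;
		else
			high = mid;
	}
	*slot = low;
	return (low < n->nr && n->keys[low] == key) ? 0 : 1;
}

/* Descend from the root.  Once one level matches exactly (prev_cmp == 0),
 * the key must be at slot 0 of every node below, so the binary search is
 * skipped there -- the shape of the optimization in this patch. */
static long toy_search(struct toy_node *n, long key, int *leaf_slot)
{
	int prev_cmp = -1;
	int slot = 0;

	while (n) {
		if (prev_cmp != 0) {
			prev_cmp = toy_bin_search(n, key, &slot);
		} else {
			assert(n->keys[0] == key);	/* the invariant */
			slot = 0;
		}
		if (!n->children[0]) {			/* reached a leaf */
			*leaf_slot = slot;
			return n->keys[slot];
		}
		if (prev_cmp != 0 && slot > 0)
			slot--;	/* child whose key range covers 'key' */
		n = n->children[slot];
	}
	return -1;
}
```

Here toy_search caches the previous comparison result in prev_cmp, so
once it becomes 0 all lower levels take slot 0 without searching.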

Below follow histograms with the times spent on the current approach of
doing a binary search when the previous binary search returned 0, and
times for the new approach, which directly picks the first item/child
node in the leaf/node.

Current approach:

Count: 5013
Range: 25.000 - 497.000; Mean: 82.767; Median: 64.000; Stddev: 49.972
Percentiles:  90th: 141.000; 95th: 182.000; 99th: 287.000
  25.000 -   33.930:   211 ######
  33.930 -   45.927:   277 ########
  45.927 -   62.045:  1834 #####################################################
  62.045 -   83.699:  1203 ###################################
  83.699 -  112.789:   609 ##################
 112.789 -  151.872:   450 #############
 151.872 -  204.377:   246 #######
 204.377 -  274.917:   124 ####
 274.917 -  369.684:    48 #
 369.684 -  497.000:    11 |

Approach proposed by this patch:

Count: 5013
Range: 10.000 - 8303.000; Mean: 28.505; Median: 18.000; Stddev: 119.147
Percentiles:  90th: 49.000; 95th: 74.000; 99th: 115.000
 10.000 -   20.339:  3160 #####################################################
 20.339 -   40.397:  1131 ###################
 40.397 -   79.308:   507 #########
 79.308 -  154.794:   199 ###
154.794 -  301.232:    14 |
301.232 -  585.313:     1 |
585.313 - 8303.000:     1 |

These samples were captured during a run of the btrfs tests 001, 002 and
004 in the xfstests, with a leaf/node size of 4Kb.
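
For reference, the summary lines in the histograms (mean, median, Nth
percentile) are ordinary sample statistics; they can be recomputed from
raw timing samples with a generic helper like the one below (this is not
the tool that produced the numbers above, just an illustration of how
such figures are derived, using the nearest-rank percentile definition):

```c
#include <assert.h>
#include <stdlib.h>

static int cmp_double(const void *a, const void *b)
{
	double x = *(const double *)a, y = *(const double *)b;

	return (x > y) - (x < y);
}

/* Sorts the samples in place and computes the mean, the median and the
 * nearest-rank p-th percentile (0 < p <= 100). */
static void sample_stats(double *s, size_t n, double p,
			 double *mean, double *median, double *pctl)
{
	size_t i, rank;
	double sum = 0.0;

	qsort(s, n, sizeof(*s), cmp_double);
	for (i = 0; i < n; i++)
		sum += s[i];
	*mean = sum / n;
	*median = (n % 2) ? s[n / 2] : (s[n / 2 - 1] + s[n / 2]) / 2.0;
	rank = (size_t)((p / 100.0) * n + 0.5);	/* nearest rank */
	if (rank == 0)
		rank = 1;
	if (rank > n)
		rank = n;
	*pctl = s[rank - 1];
}
```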

Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
---
 fs/btrfs/ctree.c |   61 ++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 59 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
index 5fa521b..5b20eec 100644
--- a/fs/btrfs/ctree.c
+++ b/fs/btrfs/ctree.c
@@ -2426,6 +2426,59 @@ done:
 	return ret;
 }
 
+static int key_search(struct extent_buffer *b, struct btrfs_key *key,
+		      int level, int *prev_cmp, int *slot)
+{
+	unsigned long eb_offset = 0;
+	unsigned long len_left = b->len;
+	char *kaddr = NULL;
+	unsigned long map_start = 0;
+	unsigned long map_len = 0;
+	unsigned long offset;
+	struct btrfs_disk_key *k = NULL;
+	struct btrfs_disk_key unaligned;
+
+	if (*prev_cmp != 0) {
+		*prev_cmp = bin_search(b, key, level, slot);
+		return *prev_cmp;
+	}
+
+	if (level == 0)
+		offset = offsetof(struct btrfs_leaf, items);
+	else
+		offset = offsetof(struct btrfs_node, ptrs);
+
+	/*
+	 * Map the entire extent buffer, otherwise callers can't access
+	 * all keys/items of the leaf/node. Specially needed for case
+	 * where leaf/node size is greater than page cache size.
+	 */
+	while (len_left > 0) {
+		unsigned long len = min(PAGE_CACHE_SIZE, len_left);
+		int err;
+
+		err = map_private_extent_buffer(b, eb_offset, len, &kaddr,
+						&map_start, &map_len);
+		len_left -= len;
+		eb_offset += len;
+		if (k)
+			continue;
+		if (!err) {
+			k = (struct btrfs_disk_key *)(kaddr + offset -
+						      map_start);
+		} else {
+			read_extent_buffer(b, &unaligned,
+					   offset, sizeof(unaligned));
+			k = &unaligned;
+		}
+	}
+
+	BUG_ON(comp_keys(k, key) != 0);
+	*slot = 0;
+
+	return 0;
+}
+
 /*
  * look for key in the tree.  path is filled in with nodes along the way
  * if key is found, we return zero and you can find the item in the leaf
@@ -2454,6 +2507,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
 	int write_lock_level = 0;
 	u8 lowest_level = 0;
 	int min_write_lock_level;
+	int prev_cmp;
 
 	lowest_level = p->lowest_level;
 	WARN_ON(lowest_level && ins_len > 0);
@@ -2484,6 +2538,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
 	min_write_lock_level = write_lock_level;
 
 again:
+	prev_cmp = -1;
 	/*
 	 * we try very hard to do read locks on the root
 	 */
@@ -2584,7 +2639,7 @@ cow_done:
 		if (!cow)
 			btrfs_unlock_up_safe(p, level + 1);
 
-		ret = bin_search(b, key, level, &slot);
+		ret = key_search(b, key, level, &prev_cmp, &slot);
 
 		if (level != 0) {
 			int dec = 0;
@@ -2719,6 +2774,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
 	int level;
 	int lowest_unlock = 1;
 	u8 lowest_level = 0;
+	int prev_cmp;
 
 	lowest_level = p->lowest_level;
 	WARN_ON(p->nodes[0] != NULL);
@@ -2729,6 +2785,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
 	}
 
 again:
+	prev_cmp = -1;
 	b = get_old_root(root, time_seq);
 	level = btrfs_header_level(b);
 	p->locks[level] = BTRFS_READ_LOCK;
@@ -2746,7 +2803,7 @@ again:
 		 */
 		btrfs_unlock_up_safe(p, level + 1);
 
-		ret = bin_search(b, key, level, &slot);
+		ret = key_search(b, key, level, &prev_cmp, &slot);
 
 		if (level != 0) {
 			int dec = 0;
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v2] Btrfs: optimize key searches in btrfs_search_slot
  2013-08-29 12:44 [PATCH] Btrfs: optimize key searches in btrfs_search_slot Filipe David Borba Manana
@ 2013-08-29 13:42 ` Filipe David Borba Manana
  2013-08-29 13:49 ` [PATCH] " Josef Bacik
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 24+ messages in thread
From: Filipe David Borba Manana @ 2013-08-29 13:42 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Filipe David Borba Manana

When the binary search returns 0 (exact match), the target key
will necessarily be at slot 0 of all nodes below the current one,
so in this case the binary search is not needed because it will
always return 0, and we waste time doing it, holding node locks
for longer than necessary, etc.

Below follow histograms with the times spent on the current approach of
doing a binary search when the previous binary search returned 0, and
times for the new approach, which directly picks the first item/child
node in the leaf/node.

Current approach:

Count: 5013
Range: 25.000 - 497.000; Mean: 82.767; Median: 64.000; Stddev: 49.972
Percentiles:  90th: 141.000; 95th: 182.000; 99th: 287.000
  25.000 -   33.930:   211 ######
  33.930 -   45.927:   277 ########
  45.927 -   62.045:  1834 #####################################################
  62.045 -   83.699:  1203 ###################################
  83.699 -  112.789:   609 ##################
 112.789 -  151.872:   450 #############
 151.872 -  204.377:   246 #######
 204.377 -  274.917:   124 ####
 274.917 -  369.684:    48 #
 369.684 -  497.000:    11 |

Approach proposed by this patch:

Count: 5013
Range: 10.000 - 8303.000; Mean: 28.505; Median: 18.000; Stddev: 119.147
Percentiles:  90th: 49.000; 95th: 74.000; 99th: 115.000
 10.000 -   20.339:  3160 #####################################################
 20.339 -   40.397:  1131 ###################
 40.397 -   79.308:   507 #########
 79.308 -  154.794:   199 ###
154.794 -  301.232:    14 |
301.232 -  585.313:     1 |
585.313 - 8303.000:     1 |

These samples were captured during a run of the btrfs tests 001, 002 and
004 in the xfstests, with a leaf/node size of 4Kb.

Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
---

V2: Simplified code, removed unnecessary code.
 fs/btrfs/ctree.c |   44 ++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 42 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
index 5fa521b..a159270 100644
--- a/fs/btrfs/ctree.c
+++ b/fs/btrfs/ctree.c
@@ -2426,6 +2426,42 @@ done:
 	return ret;
 }
 
+static int key_search(struct extent_buffer *b, struct btrfs_key *key,
+		      int level, int *prev_cmp, int *slot)
+{
+	char *kaddr = NULL;
+	unsigned long map_start = 0;
+	unsigned long map_len = 0;
+	unsigned long offset;
+	struct btrfs_disk_key *k = NULL;
+	struct btrfs_disk_key unaligned;
+	int err;
+
+	if (*prev_cmp != 0) {
+		*prev_cmp = bin_search(b, key, level, slot);
+		return *prev_cmp;
+	}
+
+	if (level == 0)
+		offset = offsetof(struct btrfs_leaf, items);
+	else
+		offset = offsetof(struct btrfs_node, ptrs);
+
+	err = map_private_extent_buffer(b, offset, sizeof(struct btrfs_disk_key),
+					&kaddr, &map_start, &map_len);
+	if (!err) {
+		k = (struct btrfs_disk_key *)(kaddr + offset - map_start);
+	} else {
+		read_extent_buffer(b, &unaligned, offset, sizeof(unaligned));
+		k = &unaligned;
+	}
+
+	BUG_ON(comp_keys(k, key) != 0);
+	*slot = 0;
+
+	return 0;
+}
+
 /*
  * look for key in the tree.  path is filled in with nodes along the way
  * if key is found, we return zero and you can find the item in the leaf
@@ -2454,6 +2490,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
 	int write_lock_level = 0;
 	u8 lowest_level = 0;
 	int min_write_lock_level;
+	int prev_cmp;
 
 	lowest_level = p->lowest_level;
 	WARN_ON(lowest_level && ins_len > 0);
@@ -2484,6 +2521,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
 	min_write_lock_level = write_lock_level;
 
 again:
+	prev_cmp = -1;
 	/*
 	 * we try very hard to do read locks on the root
 	 */
@@ -2584,7 +2622,7 @@ cow_done:
 		if (!cow)
 			btrfs_unlock_up_safe(p, level + 1);
 
-		ret = bin_search(b, key, level, &slot);
+		ret = key_search(b, key, level, &prev_cmp, &slot);
 
 		if (level != 0) {
 			int dec = 0;
@@ -2719,6 +2757,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
 	int level;
 	int lowest_unlock = 1;
 	u8 lowest_level = 0;
+	int prev_cmp;
 
 	lowest_level = p->lowest_level;
 	WARN_ON(p->nodes[0] != NULL);
@@ -2729,6 +2768,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
 	}
 
 again:
+	prev_cmp = -1;
 	b = get_old_root(root, time_seq);
 	level = btrfs_header_level(b);
 	p->locks[level] = BTRFS_READ_LOCK;
@@ -2746,7 +2786,7 @@ again:
 		 */
 		btrfs_unlock_up_safe(p, level + 1);
 
-		ret = bin_search(b, key, level, &slot);
+		ret = key_search(b, key, level, &prev_cmp, &slot);
 
 		if (level != 0) {
 			int dec = 0;
-- 
1.7.9.5



* Re: [PATCH] Btrfs: optimize key searches in btrfs_search_slot
  2013-08-29 12:44 [PATCH] Btrfs: optimize key searches in btrfs_search_slot Filipe David Borba Manana
  2013-08-29 13:42 ` [PATCH v2] " Filipe David Borba Manana
@ 2013-08-29 13:49 ` Josef Bacik
  2013-08-29 13:53   ` Filipe David Manana
  2013-08-29 13:59 ` [PATCH v3] " Filipe David Borba Manana
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 24+ messages in thread
From: Josef Bacik @ 2013-08-29 13:49 UTC (permalink / raw)
  To: Filipe David Borba Manana; +Cc: linux-btrfs, sbehrens

On Thu, Aug 29, 2013 at 01:44:13PM +0100, Filipe David Borba Manana wrote:
> When the binary search returns 0 (exact match), the target key
> will necessarily be at slot 0 of all nodes below the current one,
> so in this case the binary search is not needed because it will
> always return 0, and we waste time doing it, holding node locks
> for longer than necessary, etc.
> 
> Below follow histograms with the times spent on the current approach of
> doing a binary search when the previous binary search returned 0, and
> times for the new approach, which directly picks the first item/child
> node in the leaf/node.
> 
> Current approach:
> 
> Count: 5013
> Range: 25.000 - 497.000; Mean: 82.767; Median: 64.000; Stddev: 49.972
> Percentiles:  90th: 141.000; 95th: 182.000; 99th: 287.000
>   25.000 -   33.930:   211 ######
>   33.930 -   45.927:   277 ########
>   45.927 -   62.045:  1834 #####################################################
>   62.045 -   83.699:  1203 ###################################
>   83.699 -  112.789:   609 ##################
>  112.789 -  151.872:   450 #############
>  151.872 -  204.377:   246 #######
>  204.377 -  274.917:   124 ####
>  274.917 -  369.684:    48 #
>  369.684 -  497.000:    11 |
> 
> Approach proposed by this patch:
> 
> Count: 5013
> Range: 10.000 - 8303.000; Mean: 28.505; Median: 18.000; Stddev: 119.147
> Percentiles:  90th: 49.000; 95th: 74.000; 99th: 115.000
>  10.000 -   20.339:  3160 #####################################################
>  20.339 -   40.397:  1131 ###################
>  40.397 -   79.308:   507 #########
>  79.308 -  154.794:   199 ###
> 154.794 -  301.232:    14 |
> 301.232 -  585.313:     1 |
> 585.313 - 8303.000:     1 |
> 
> These samples were captured during a run of the btrfs tests 001, 002 and
> 004 in the xfstests, with a leaf/node size of 4Kb.
> 
> Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
> ---
>  fs/btrfs/ctree.c |   61 ++++++++++++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 59 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
> index 5fa521b..5b20eec 100644
> --- a/fs/btrfs/ctree.c
> +++ b/fs/btrfs/ctree.c
> @@ -2426,6 +2426,59 @@ done:
>  	return ret;
>  }
>  
> +static int key_search(struct extent_buffer *b, struct btrfs_key *key,
> +		      int level, int *prev_cmp, int *slot)
> +{
> +	unsigned long eb_offset = 0;
> +	unsigned long len_left = b->len;
> +	char *kaddr = NULL;
> +	unsigned long map_start = 0;
> +	unsigned long map_len = 0;
> +	unsigned long offset;
> +	struct btrfs_disk_key *k = NULL;
> +	struct btrfs_disk_key unaligned;
> +
> +	if (*prev_cmp != 0) {
> +		*prev_cmp = bin_search(b, key, level, slot);
> +		return *prev_cmp;
> +	}
> +
> +	if (level == 0)
> +		offset = offsetof(struct btrfs_leaf, items);
> +	else
> +		offset = offsetof(struct btrfs_node, ptrs);
> +
> +	/*
> +	 * Map the entire extent buffer, otherwise callers can't access
> +	 * all keys/items of the leaf/node. Specially needed for case
> +	 * where leaf/node size is greater than page cache size.
> +	 */
> +	while (len_left > 0) {
> +		unsigned long len = min(PAGE_CACHE_SIZE, len_left);
> +		int err;
> +
> +		err = map_private_extent_buffer(b, eb_offset, len, &kaddr,
> +						&map_start, &map_len);
> +		len_left -= len;
> +		eb_offset += len;
> +		if (k)
> +			continue;
> +		if (!err) {
> +			k = (struct btrfs_disk_key *)(kaddr + offset -
> +						      map_start);
> +		} else {
> +			read_extent_buffer(b, &unaligned,
> +					   offset, sizeof(unaligned));
> +			k = &unaligned;
> +		}
> +	}
> +

This confuses me, if we're at slot 0 we should be at the front of the first
page, no matter what, so why not just read the first key and carry on?

> +	BUG_ON(comp_keys(k, key) != 0);

Please use the ASSERT() macro.  Thanks,

Josef
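
For readers outside the thread: BUG_ON() is always compiled in and halts
the kernel on failure, whereas the ASSERT() Josef refers to is a
btrfs-local macro that disappears entirely unless the debug config
option is enabled. A user-space sketch of that general pattern (the real
btrfs definition lives in its headers and may differ in detail;
CONFIG_BTRFS_ASSERT here stands in for the kernel config symbol):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Debug-only assertion: compiled out unless CONFIG_BTRFS_ASSERT is
 * defined, so the check costs nothing in production builds. */
#ifdef CONFIG_BTRFS_ASSERT
#define ASSERT(expr)						\
	do {							\
		if (!(expr)) {					\
			fprintf(stderr,				\
				"assertion failed: %s:%d (%s)\n",\
				__FILE__, __LINE__, #expr);	\
			abort();				\
		}						\
	} while (0)
#else
#define ASSERT(expr) ((void)0)
#endif

/* Returns 1 only if a failing ASSERT() was compiled out. */
static int assert_compiled_out(void)
{
	ASSERT(1 == 2);
	return 1;
}
```

With the debug option off, the expression is never even evaluated, which
is why it is preferable to BUG_ON() for sanity checks on hot paths like
this one.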


* Re: [PATCH] Btrfs: optimize key searches in btrfs_search_slot
  2013-08-29 13:49 ` [PATCH] " Josef Bacik
@ 2013-08-29 13:53   ` Filipe David Manana
  0 siblings, 0 replies; 24+ messages in thread
From: Filipe David Manana @ 2013-08-29 13:53 UTC (permalink / raw)
  To: Josef Bacik; +Cc: linux-btrfs, Stefan Behrens

On Thu, Aug 29, 2013 at 2:49 PM, Josef Bacik <jbacik@fusionio.com> wrote:
> On Thu, Aug 29, 2013 at 01:44:13PM +0100, Filipe David Borba Manana wrote:
>> When the binary search returns 0 (exact match), the target key
>> will necessarily be at slot 0 of all nodes below the current one,
>> so in this case the binary search is not needed because it will
>> always return 0, and we waste time doing it, holding node locks
>> for longer than necessary, etc.
>>
>> Below follow histograms with the times spent on the current approach of
>> doing a binary search when the previous binary search returned 0, and
>> times for the new approach, which directly picks the first item/child
>> node in the leaf/node.
>>
>> Current approach:
>>
>> Count: 5013
>> Range: 25.000 - 497.000; Mean: 82.767; Median: 64.000; Stddev: 49.972
>> Percentiles:  90th: 141.000; 95th: 182.000; 99th: 287.000
>>   25.000 -   33.930:   211 ######
>>   33.930 -   45.927:   277 ########
>>   45.927 -   62.045:  1834 #####################################################
>>   62.045 -   83.699:  1203 ###################################
>>   83.699 -  112.789:   609 ##################
>>  112.789 -  151.872:   450 #############
>>  151.872 -  204.377:   246 #######
>>  204.377 -  274.917:   124 ####
>>  274.917 -  369.684:    48 #
>>  369.684 -  497.000:    11 |
>>
>> Approach proposed by this patch:
>>
>> Count: 5013
>> Range: 10.000 - 8303.000; Mean: 28.505; Median: 18.000; Stddev: 119.147
>> Percentiles:  90th: 49.000; 95th: 74.000; 99th: 115.000
>>  10.000 -   20.339:  3160 #####################################################
>>  20.339 -   40.397:  1131 ###################
>>  40.397 -   79.308:   507 #########
>>  79.308 -  154.794:   199 ###
>> 154.794 -  301.232:    14 |
>> 301.232 -  585.313:     1 |
>> 585.313 - 8303.000:     1 |
>>
>> These samples were captured during a run of the btrfs tests 001, 002 and
>> 004 in the xfstests, with a leaf/node size of 4Kb.
>>
>> Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
>> ---
>>  fs/btrfs/ctree.c |   61 ++++++++++++++++++++++++++++++++++++++++++++++++++++--
>>  1 file changed, 59 insertions(+), 2 deletions(-)
>>
>> diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
>> index 5fa521b..5b20eec 100644
>> --- a/fs/btrfs/ctree.c
>> +++ b/fs/btrfs/ctree.c
>> @@ -2426,6 +2426,59 @@ done:
>>       return ret;
>>  }
>>
>> +static int key_search(struct extent_buffer *b, struct btrfs_key *key,
>> +                   int level, int *prev_cmp, int *slot)
>> +{
>> +     unsigned long eb_offset = 0;
>> +     unsigned long len_left = b->len;
>> +     char *kaddr = NULL;
>> +     unsigned long map_start = 0;
>> +     unsigned long map_len = 0;
>> +     unsigned long offset;
>> +     struct btrfs_disk_key *k = NULL;
>> +     struct btrfs_disk_key unaligned;
>> +
>> +     if (*prev_cmp != 0) {
>> +             *prev_cmp = bin_search(b, key, level, slot);
>> +             return *prev_cmp;
>> +     }
>> +
>> +     if (level == 0)
>> +             offset = offsetof(struct btrfs_leaf, items);
>> +     else
>> +             offset = offsetof(struct btrfs_node, ptrs);
>> +
>> +     /*
>> +      * Map the entire extent buffer, otherwise callers can't access
>> +      * all keys/items of the leaf/node. Specially needed for case
>> +      * where leaf/node size is greater than page cache size.
>> +      */
>> +     while (len_left > 0) {
>> +             unsigned long len = min(PAGE_CACHE_SIZE, len_left);
>> +             int err;
>> +
>> +             err = map_private_extent_buffer(b, eb_offset, len, &kaddr,
>> +                                             &map_start, &map_len);
>> +             len_left -= len;
>> +             eb_offset += len;
>> +             if (k)
>> +                     continue;
>> +             if (!err) {
>> +                     k = (struct btrfs_disk_key *)(kaddr + offset -
>> +                                                   map_start);
>> +             } else {
>> +                     read_extent_buffer(b, &unaligned,
>> +                                        offset, sizeof(unaligned));
>> +                     k = &unaligned;
>> +             }
>> +     }
>> +
>
> This confuses me, if we're at slot 0 we should be at the front of the first
> page, no matter what, so why not just read the first key and carry on?

Correct, that was a mistake of mine, corrected in the second version of
the patch. I was getting NULL pointer dereferences in
read_extent_buffer when the leaf/node size was bigger than the page
cache size, but that turned out to be my own error; there is no need to
map the whole buffer in page-size units.

>
>> +     BUG_ON(comp_keys(k, key) != 0);
>
> Please use the ASSERT() macro.  Thanks,

Ok, updating it.
Thanks Josef.

>
> Josef



-- 
Filipe David Manana,

"Reasonable men adapt themselves to the world.
 Unreasonable men adapt the world to themselves.
 That's why all progress depends on unreasonable men."


* [PATCH v3] Btrfs: optimize key searches in btrfs_search_slot
  2013-08-29 12:44 [PATCH] Btrfs: optimize key searches in btrfs_search_slot Filipe David Borba Manana
  2013-08-29 13:42 ` [PATCH v2] " Filipe David Borba Manana
  2013-08-29 13:49 ` [PATCH] " Josef Bacik
@ 2013-08-29 13:59 ` Filipe David Borba Manana
  2013-08-29 18:08   ` Zach Brown
  2013-08-29 19:21 ` [PATCH v4] " Filipe David Borba Manana
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 24+ messages in thread
From: Filipe David Borba Manana @ 2013-08-29 13:59 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Filipe David Borba Manana

When the binary search returns 0 (exact match), the target key
will necessarily be at slot 0 of all nodes below the current one,
so in this case the binary search is not needed because it will
always return 0, and we waste time doing it, holding node locks
for longer than necessary, etc.

Below follow histograms with the times spent on the current approach of
doing a binary search when the previous binary search returned 0, and
times for the new approach, which directly picks the first item/child
node in the leaf/node.

Current approach:

Count: 5013
Range: 25.000 - 497.000; Mean: 82.767; Median: 64.000; Stddev: 49.972
Percentiles:  90th: 141.000; 95th: 182.000; 99th: 287.000
  25.000 -   33.930:   211 ######
  33.930 -   45.927:   277 ########
  45.927 -   62.045:  1834 #####################################################
  62.045 -   83.699:  1203 ###################################
  83.699 -  112.789:   609 ##################
 112.789 -  151.872:   450 #############
 151.872 -  204.377:   246 #######
 204.377 -  274.917:   124 ####
 274.917 -  369.684:    48 #
 369.684 -  497.000:    11 |

Approach proposed by this patch:

Count: 5013
Range: 10.000 - 8303.000; Mean: 28.505; Median: 18.000; Stddev: 119.147
Percentiles:  90th: 49.000; 95th: 74.000; 99th: 115.000
 10.000 -   20.339:  3160 #####################################################
 20.339 -   40.397:  1131 ###################
 40.397 -   79.308:   507 #########
 79.308 -  154.794:   199 ###
154.794 -  301.232:    14 |
301.232 -  585.313:     1 |
585.313 - 8303.000:     1 |

These samples were captured during a run of the btrfs tests 001, 002 and
004 in the xfstests, with a leaf/node size of 4Kb.

Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
---

V2: Simplified code, removed unnecessary code.
V3: Replaced BUG_ON() with the new ASSERT() from Josef.

 fs/btrfs/ctree.c |   44 ++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 42 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
index 5fa521b..b69dd46 100644
--- a/fs/btrfs/ctree.c
+++ b/fs/btrfs/ctree.c
@@ -2426,6 +2426,42 @@ done:
 	return ret;
 }
 
+static int key_search(struct extent_buffer *b, struct btrfs_key *key,
+		      int level, int *prev_cmp, int *slot)
+{
+	char *kaddr = NULL;
+	unsigned long map_start = 0;
+	unsigned long map_len = 0;
+	unsigned long offset;
+	struct btrfs_disk_key *k = NULL;
+	struct btrfs_disk_key unaligned;
+	int err;
+
+	if (*prev_cmp != 0) {
+		*prev_cmp = bin_search(b, key, level, slot);
+		return *prev_cmp;
+	}
+
+	if (level == 0)
+		offset = offsetof(struct btrfs_leaf, items);
+	else
+		offset = offsetof(struct btrfs_node, ptrs);
+
+	err = map_private_extent_buffer(b, offset, sizeof(struct btrfs_disk_key),
+					&kaddr, &map_start, &map_len);
+	if (!err) {
+		k = (struct btrfs_disk_key *)(kaddr + offset - map_start);
+	} else {
+		read_extent_buffer(b, &unaligned, offset, sizeof(unaligned));
+		k = &unaligned;
+	}
+
+	ASSERT(comp_keys(k, key) == 0);
+	*slot = 0;
+
+	return 0;
+}
+
 /*
  * look for key in the tree.  path is filled in with nodes along the way
  * if key is found, we return zero and you can find the item in the leaf
@@ -2454,6 +2490,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
 	int write_lock_level = 0;
 	u8 lowest_level = 0;
 	int min_write_lock_level;
+	int prev_cmp;
 
 	lowest_level = p->lowest_level;
 	WARN_ON(lowest_level && ins_len > 0);
@@ -2484,6 +2521,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
 	min_write_lock_level = write_lock_level;
 
 again:
+	prev_cmp = -1;
 	/*
 	 * we try very hard to do read locks on the root
 	 */
@@ -2584,7 +2622,7 @@ cow_done:
 		if (!cow)
 			btrfs_unlock_up_safe(p, level + 1);
 
-		ret = bin_search(b, key, level, &slot);
+		ret = key_search(b, key, level, &prev_cmp, &slot);
 
 		if (level != 0) {
 			int dec = 0;
@@ -2719,6 +2757,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
 	int level;
 	int lowest_unlock = 1;
 	u8 lowest_level = 0;
+	int prev_cmp;
 
 	lowest_level = p->lowest_level;
 	WARN_ON(p->nodes[0] != NULL);
@@ -2729,6 +2768,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
 	}
 
 again:
+	prev_cmp = -1;
 	b = get_old_root(root, time_seq);
 	level = btrfs_header_level(b);
 	p->locks[level] = BTRFS_READ_LOCK;
@@ -2746,7 +2786,7 @@ again:
 		 */
 		btrfs_unlock_up_safe(p, level + 1);
 
-		ret = bin_search(b, key, level, &slot);
+		ret = key_search(b, key, level, &prev_cmp, &slot);
 
 		if (level != 0) {
 			int dec = 0;
-- 
1.7.9.5



* Re: [PATCH v3] Btrfs: optimize key searches in btrfs_search_slot
  2013-08-29 13:59 ` [PATCH v3] " Filipe David Borba Manana
@ 2013-08-29 18:08   ` Zach Brown
  2013-08-29 18:35     ` Josef Bacik
  2013-08-29 18:41     ` Filipe David Manana
  0 siblings, 2 replies; 24+ messages in thread
From: Zach Brown @ 2013-08-29 18:08 UTC (permalink / raw)
  To: Filipe David Borba Manana; +Cc: linux-btrfs

On Thu, Aug 29, 2013 at 02:59:26PM +0100, Filipe David Borba Manana wrote:
> When the binary search returns 0 (exact match), the target key
> will necessarily be at slot 0 of all nodes below the current one,
> so in this case the binary search is not needed because it will
> always return 0, and we waste time doing it, holding node locks
> for longer than necessary, etc.
> 
> Below follow histograms with the times spent on the current approach of
> doing a binary search when the previous binary search returned 0, and
> times for the new approach, which directly picks the first item/child
> node in the leaf/node.
> 
> Count: 5013
> Range: 25.000 - 497.000; Mean: 82.767; Median: 64.000; Stddev: 49.972
> Percentiles:  90th: 141.000; 95th: 182.000; 99th: 287.000

> Count: 5013
> Range: 10.000 - 8303.000; Mean: 28.505; Median: 18.000; Stddev: 119.147
> Percentiles:  90th: 49.000; 95th: 74.000; 99th: 115.000

Where'd the giant increase in the range max come from?  Just jittery
measurement?  Maybe get a lot more data points to smooth that out?

> +static int key_search(struct extent_buffer *b, struct btrfs_key *key,
> +		      int level, int *prev_cmp, int *slot)
> +{
> +	char *kaddr = NULL;
> +	unsigned long map_start = 0;
> +	unsigned long map_len = 0;
> +	unsigned long offset;
> +	struct btrfs_disk_key *k = NULL;
> +	struct btrfs_disk_key unaligned;
> +	int err;
> +
> +	if (*prev_cmp != 0) {
> +		*prev_cmp = bin_search(b, key, level, slot);
> +		return *prev_cmp;
> +	}


> +	*slot = 0;
> +
> +	return 0;

So this is the actual work done by the function.

> +
> +	if (level == 0)
> +		offset = offsetof(struct btrfs_leaf, items);
> +	else
> +		offset = offsetof(struct btrfs_node, ptrs);

(+10 fragility points for assuming that the key starts each struct
instead of using [0].key)

> +
> +	err = map_private_extent_buffer(b, offset, sizeof(struct btrfs_disk_key),
> +					&kaddr, &map_start, &map_len);
> +	if (!err) {
> +		k = (struct btrfs_disk_key *)(kaddr + offset - map_start);
> +	} else {
> +		read_extent_buffer(b, &unaligned, offset, sizeof(unaligned));
> +		k = &unaligned;
> +	}
> +
> +	ASSERT(comp_keys(k, key) == 0);

All of the rest of the function, including most of the local variables,
is overhead for that assertion.  We don't actually care about the
relative sorted key position of the two keys so we don't need smart
field-aware comparisions.  We can use a dumb memcmp.

We can replace all that stuff with two easy memcmp_extent_buffers()
which vanish if ASSERT is a nop. 

	if (level)
		ASSERT(!memcmp_extent_buffer(b, key,
			offsetof(struct btrfs_node, ptrs[0].key),
			sizeof(*key)));
	else
		ASSERT(!memcmp_extent_buffer(b, key,
			offsetof(struct btrfs_leaf, items[0].key),
			sizeof(*key)));

Right?

- z


* Re: [PATCH v3] Btrfs: optimize key searches in btrfs_search_slot
  2013-08-29 18:08   ` Zach Brown
@ 2013-08-29 18:35     ` Josef Bacik
  2013-08-29 19:00       ` Zach Brown
  2013-08-29 18:41     ` Filipe David Manana
  1 sibling, 1 reply; 24+ messages in thread
From: Josef Bacik @ 2013-08-29 18:35 UTC (permalink / raw)
  To: Zach Brown; +Cc: Filipe David Borba Manana, linux-btrfs

On Thu, Aug 29, 2013 at 11:08:16AM -0700, Zach Brown wrote:
> On Thu, Aug 29, 2013 at 02:59:26PM +0100, Filipe David Borba Manana wrote:
> > When the binary search returns 0 (exact match), the target key
> > will necessarily be at slot 0 of all nodes below the current one,
> > so in this case the binary search is not needed because it will
> > always return 0, and we waste time doing it, holding node locks
> > for longer than necessary, etc.
> > 
> > Below follow histograms with the times spent on the current approach of
> > doing a binary search when the previous binary search returned 0, and
> > times for the new approach, which directly picks the first item/child
> > node in the leaf/node.
> > 
> > Count: 5013
> > Range: 25.000 - 497.000; Mean: 82.767; Median: 64.000; Stddev: 49.972
> > Percentiles:  90th: 141.000; 95th: 182.000; 99th: 287.000
> 
> > Count: 5013
> > Range: 10.000 - 8303.000; Mean: 28.505; Median: 18.000; Stddev: 119.147
> > Percentiles:  90th: 49.000; 95th: 74.000; 99th: 115.000
> 
> Where'd the giant increase in the range max come from?  Just jittery
> measurement?  Maybe get a lot more data points to smooth that out?
> 
> > +static int key_search(struct extent_buffer *b, struct btrfs_key *key,
> > +		      int level, int *prev_cmp, int *slot)
> > +{
> > +	char *kaddr = NULL;
> > +	unsigned long map_start = 0;
> > +	unsigned long map_len = 0;
> > +	unsigned long offset;
> > +	struct btrfs_disk_key *k = NULL;
> > +	struct btrfs_disk_key unaligned;
> > +	int err;
> > +
> > +	if (*prev_cmp != 0) {
> > +		*prev_cmp = bin_search(b, key, level, slot);
> > +		return *prev_cmp;
> > +	}
> 
> 
> > +	*slot = 0;
> > +
> > +	return 0;
> 
> So this is the actual work done by the function.
> 
> > +
> > +	if (level == 0)
> > +		offset = offsetof(struct btrfs_leaf, items);
> > +	else
> > +		offset = offsetof(struct btrfs_node, ptrs);
> 
> (+10 fragility points for assuming that the key starts each struct
> instead of using [0].key)
> 
> > +
> > +	err = map_private_extent_buffer(b, offset, sizeof(struct btrfs_disk_key),
> > +					&kaddr, &map_start, &map_len);
> > +	if (!err) {
> > +		k = (struct btrfs_disk_key *)(kaddr + offset - map_start);
> > +	} else {
> > +		read_extent_buffer(b, &unaligned, offset, sizeof(unaligned));
> > +		k = &unaligned;
> > +	}
> > +
> > +	ASSERT(comp_keys(k, key) == 0);
> 
> All of the rest of the function, including most of the local variables,
> is overhead for that assertion.  We don't actually care about the
> relative sorted key position of the two keys so we don't need smart
> field-aware comparisions.  We can use a dumb memcmp.
> 
> We can replace all that stuff with two easy memcmp_extent_buffers()
> which vanish if ASSERT is a nop. 
> 

Actually we can't since we have a cpu key and the keys in the eb are disk keys.
So maybe keep what we have here and wrap it completely in CONFIG_BTRFS_ASSERT?

Josef

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v3] Btrfs: optimize key searches in btrfs_search_slot
  2013-08-29 18:08   ` Zach Brown
  2013-08-29 18:35     ` Josef Bacik
@ 2013-08-29 18:41     ` Filipe David Manana
  2013-08-29 19:02       ` Zach Brown
  1 sibling, 1 reply; 24+ messages in thread
From: Filipe David Manana @ 2013-08-29 18:41 UTC (permalink / raw)
  To: Zach Brown; +Cc: linux-btrfs

On Thu, Aug 29, 2013 at 7:08 PM, Zach Brown <zab@redhat.com> wrote:
> On Thu, Aug 29, 2013 at 02:59:26PM +0100, Filipe David Borba Manana wrote:
>> When the binary search returns 0 (exact match), the target key
>> will necessarily be at slot 0 of all nodes below the current one,
>> so in this case the binary search is not needed because it will
>> always return 0, and we waste time doing it, holding node locks
>> for longer than necessary, etc.
>>
>> Below follow histograms with the times spent on the current approach of
>> doing a binary search when the previous binary search returned 0, and
>> times for the new approach, which directly picks the first item/child
>> node in the leaf/node.
>>
>> Count: 5013
>> Range: 25.000 - 497.000; Mean: 82.767; Median: 64.000; Stddev: 49.972
>> Percentiles:  90th: 141.000; 95th: 182.000; 99th: 287.000
>
>> Count: 5013
>> Range: 10.000 - 8303.000; Mean: 28.505; Median: 18.000; Stddev: 119.147
>> Percentiles:  90th: 49.000; 95th: 74.000; 99th: 115.000
>
> Where'd the giant increase in the range max come from?  Just jittery
> measurement?  Maybe get a lot more data points to smooth that out?

Correct, just jittery.

>
>> +static int key_search(struct extent_buffer *b, struct btrfs_key *key,
>> +                   int level, int *prev_cmp, int *slot)
>> +{
>> +     char *kaddr = NULL;
>> +     unsigned long map_start = 0;
>> +     unsigned long map_len = 0;
>> +     unsigned long offset;
>> +     struct btrfs_disk_key *k = NULL;
>> +     struct btrfs_disk_key unaligned;
>> +     int err;
>> +
>> +     if (*prev_cmp != 0) {
>> +             *prev_cmp = bin_search(b, key, level, slot);
>> +             return *prev_cmp;
>> +     }
>
>
>> +     *slot = 0;
>> +
>> +     return 0;
>
> So this is the actual work done by the function.

Correct. That and the very first if statement in the function.

>
>> +
>> +     if (level == 0)
>> +             offset = offsetof(struct btrfs_leaf, items);
>> +     else
>> +             offset = offsetof(struct btrfs_node, ptrs);
>
> (+10 fragility points for assuming that the key starts each struct
> instead of using [0].key)

Ok. I just copied that from ctree.c:bin_search(). I guess that gives
another +10 fragility points.
Thanks for pointing it out.

>
>> +
>> +     err = map_private_extent_buffer(b, offset, sizeof(struct btrfs_disk_key),
>> +                                     &kaddr, &map_start, &map_len);
>> +     if (!err) {
>> +             k = (struct btrfs_disk_key *)(kaddr + offset - map_start);
>> +     } else {
>> +             read_extent_buffer(b, &unaligned, offset, sizeof(unaligned));
>> +             k = &unaligned;
>> +     }
>> +
>> +     ASSERT(comp_keys(k, key) == 0);
>
> All of the rest of the function, including most of the local variables,
> is overhead for that assertion.  We don't actually care about the
> relative sorted key position of the two keys so we don't need smart
> field-aware comparisons.  We can use a dumb memcmp.
>
> We can replace all that stuff with two easy memcmp_extent_buffers()
> which vanish if ASSERT is a nop.
>
>         if (level)
>                 ASSERT(!memcmp_extent_buffer(b, key,
>                         offsetof(struct btrfs_node, ptrs[0].key),
>                         sizeof(*key)));
>         else
>                 ASSERT(!memcmp_extent_buffer(b, key,
>                         offsetof(struct btrfs_leaf, items[0].key),
>                         sizeof(*key)));
>
> Right?

No, and as Josef just pointed out, that way we would be comparing a
btrfs_key with a btrfs_disk_key, which is wrong due to endianness
differences.

So I'll go with Josef's suggestion in the following mail about wrapping
the code in a CONFIG_BTRFS_ASSERT #ifdef block.

>
> - z



-- 
Filipe David Manana,

"Reasonable men adapt themselves to the world.
 Unreasonable men adapt the world to themselves.
 That's why all progress depends on unreasonable men."

* Re: [PATCH v3] Btrfs: optimize key searches in btrfs_search_slot
  2013-08-29 18:35     ` Josef Bacik
@ 2013-08-29 19:00       ` Zach Brown
  0 siblings, 0 replies; 24+ messages in thread
From: Zach Brown @ 2013-08-29 19:00 UTC (permalink / raw)
  To: Josef Bacik; +Cc: Filipe David Borba Manana, linux-btrfs

> > We can replace all that stuff with two easy memcmp_extent_buffers()
> > which vanish if ASSERT is a nop. 
> 
> Actually we can't since we have a cpu key and the keys in the eb are disk keys.
> So maybe keep what we have here and wrap it completely in CONFIG_BTRFS_ASSERT?

I could have sworn that I checked that the input was a disk key.

In that case, then, I'd move all this off into a helper function,
called from the asserts, that swabs to a disk key and then does the
memcmp.  All this fiddly assert junk (which just compares keys!)
doesn't belong hand-coded in this trivial helper.

- z

* Re: [PATCH v3] Btrfs: optimize key searches in btrfs_search_slot
  2013-08-29 18:41     ` Filipe David Manana
@ 2013-08-29 19:02       ` Zach Brown
  0 siblings, 0 replies; 24+ messages in thread
From: Zach Brown @ 2013-08-29 19:02 UTC (permalink / raw)
  To: Filipe David Manana; +Cc: linux-btrfs

> >> +     if (level == 0)
> >> +             offset = offsetof(struct btrfs_leaf, items);
> >> +     else
> >> +             offset = offsetof(struct btrfs_node, ptrs);
> >
> > (+10 fragility points for assuming that the key starts each struct
> > instead of using [0].key)
> 
> Ok. I just copied that from ctree.c:bin_search(). I guess that gives
> another +10 fragility points.
> Thanks for pointing it out.

Yeah.  Don't worry, you have quite a way to go before building up
personal fragility points that come anywhere near the wealth of
fragility points that btrfs has in the bank :).

- z

* [PATCH v4] Btrfs: optimize key searches in btrfs_search_slot
  2013-08-29 12:44 [PATCH] Btrfs: optimize key searches in btrfs_search_slot Filipe David Borba Manana
                   ` (2 preceding siblings ...)
  2013-08-29 13:59 ` [PATCH v3] " Filipe David Borba Manana
@ 2013-08-29 19:21 ` Filipe David Borba Manana
  2013-08-30 14:14   ` David Sterba
  2013-08-30 14:46 ` [PATCH v5] " Filipe David Borba Manana
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 24+ messages in thread
From: Filipe David Borba Manana @ 2013-08-29 19:21 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Filipe David Borba Manana

When the binary search returns 0 (exact match), the target key
will necessarily be at slot 0 of all nodes below the current one,
so in this case the binary search is not needed because it will
always return 0, and we waste time doing it, holding node locks
for longer than necessary, etc.

Below follow histograms with the times spent on the current approach of
doing a binary search when the previous binary search returned 0, and
times for the new approach, which directly picks the first item/child
node in the leaf/node.

Current approach:

Count: 5013
Range: 25.000 - 497.000; Mean: 82.767; Median: 64.000; Stddev: 49.972
Percentiles:  90th: 141.000; 95th: 182.000; 99th: 287.000
  25.000 -   33.930:   211 ######
  33.930 -   45.927:   277 ########
  45.927 -   62.045:  1834 #####################################################
  62.045 -   83.699:  1203 ###################################
  83.699 -  112.789:   609 ##################
 112.789 -  151.872:   450 #############
 151.872 -  204.377:   246 #######
 204.377 -  274.917:   124 ####
 274.917 -  369.684:    48 #
 369.684 -  497.000:    11 |

Approach proposed by this patch:

Count: 5013
Range: 10.000 - 8303.000; Mean: 28.505; Median: 18.000; Stddev: 119.147
Percentiles:  90th: 49.000; 95th: 74.000; 99th: 115.000
 10.000 -   20.339:  3160 #####################################################
 20.339 -   40.397:  1131 ###################
 40.397 -   79.308:   507 #########
 79.308 -  154.794:   199 ###
154.794 -  301.232:    14 |
301.232 -  585.313:     1 |
585.313 - 8303.000:     1 |

These samples were captured during a run of the btrfs tests 001, 002 and
004 in the xfstests, with a leaf/node size of 4Kb.

Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
---

V2: Simplified code, removed unnecessary code.
V3: Replaced BUG_ON() with the new ASSERT() from Josef.
V4: Addressed latest comments from Zach Brown and Josef Bacik.
    Surrounded all code that is used for the assertion with a
    #ifdef CONFIG_BTRFS_ASSERT ... #endif block. Also changed
    offset arguments to be more strictly correct.

 fs/btrfs/ctree.c |   43 +++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 41 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
index 5fa521b..a48cbb2 100644
--- a/fs/btrfs/ctree.c
+++ b/fs/btrfs/ctree.c
@@ -2426,6 +2426,41 @@ done:
 	return ret;
 }
 
+static void key_search_validate(struct extent_buffer *b,
+				struct btrfs_key *key,
+				int level)
+{
+#ifdef CONFIG_BTRFS_ASSERT
+	struct btrfs_disk_key disk_key;
+
+	btrfs_cpu_key_to_disk(&disk_key, key);
+
+	if (level == 0)
+		ASSERT(!memcmp_extent_buffer(b, &disk_key,
+		    offsetof(struct btrfs_leaf, items[0].key),
+		    sizeof(disk_key)));
+	else
+		ASSERT(!memcmp_extent_buffer(b, &disk_key,
+		    offsetof(struct btrfs_node, ptrs[0].key),
+		    sizeof(disk_key)));
+#endif
+}
+
+static int key_search(struct extent_buffer *b, struct btrfs_key *key,
+		      int level, int *prev_cmp, int *slot)
+{
+
+	if (*prev_cmp != 0) {
+		*prev_cmp = bin_search(b, key, level, slot);
+		return *prev_cmp;
+	}
+
+	key_search_validate(b, key, level);
+	*slot = 0;
+
+	return 0;
+}
+
 /*
  * look for key in the tree.  path is filled in with nodes along the way
  * if key is found, we return zero and you can find the item in the leaf
@@ -2454,6 +2489,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
 	int write_lock_level = 0;
 	u8 lowest_level = 0;
 	int min_write_lock_level;
+	int prev_cmp;
 
 	lowest_level = p->lowest_level;
 	WARN_ON(lowest_level && ins_len > 0);
@@ -2484,6 +2520,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
 	min_write_lock_level = write_lock_level;
 
 again:
+	prev_cmp = -1;
 	/*
 	 * we try very hard to do read locks on the root
 	 */
@@ -2584,7 +2621,7 @@ cow_done:
 		if (!cow)
 			btrfs_unlock_up_safe(p, level + 1);
 
-		ret = bin_search(b, key, level, &slot);
+		ret = key_search(b, key, level, &prev_cmp, &slot);
 
 		if (level != 0) {
 			int dec = 0;
@@ -2719,6 +2756,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
 	int level;
 	int lowest_unlock = 1;
 	u8 lowest_level = 0;
+	int prev_cmp;
 
 	lowest_level = p->lowest_level;
 	WARN_ON(p->nodes[0] != NULL);
@@ -2729,6 +2767,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
 	}
 
 again:
+	prev_cmp = -1;
 	b = get_old_root(root, time_seq);
 	level = btrfs_header_level(b);
 	p->locks[level] = BTRFS_READ_LOCK;
@@ -2746,7 +2785,7 @@ again:
 		 */
 		btrfs_unlock_up_safe(p, level + 1);
 
-		ret = bin_search(b, key, level, &slot);
+		ret = key_search(b, key, level, &prev_cmp, &slot);
 
 		if (level != 0) {
 			int dec = 0;
-- 
1.7.9.5


* Re: [PATCH v4] Btrfs: optimize key searches in btrfs_search_slot
  2013-08-29 19:21 ` [PATCH v4] " Filipe David Borba Manana
@ 2013-08-30 14:14   ` David Sterba
  2013-08-30 14:47     ` Filipe David Manana
  0 siblings, 1 reply; 24+ messages in thread
From: David Sterba @ 2013-08-30 14:14 UTC (permalink / raw)
  To: Filipe David Borba Manana; +Cc: linux-btrfs

On Thu, Aug 29, 2013 at 08:21:51PM +0100, Filipe David Borba Manana wrote:
> Count: 5013
> Range: 25.000 - 497.000; Mean: 82.767; Median: 64.000; Stddev: 49.972
> Percentiles:  90th: 141.000; 95th: 182.000; 99th: 287.000
>   25.000 -   33.930:   211 ######
>   33.930 -   45.927:   277 ########
>   45.927 -   62.045:  1834 #####################################################
>   62.045 -   83.699:  1203 ###################################
>   83.699 -  112.789:   609 ##################
>  112.789 -  151.872:   450 #############
>  151.872 -  204.377:   246 #######
>  204.377 -  274.917:   124 ####
>  274.917 -  369.684:    48 #
>  369.684 -  497.000:    11 |
> 
> Approach proposed by this patch:
> 
> Count: 5013
> Range: 10.000 - 8303.000; Mean: 28.505; Median: 18.000; Stddev: 119.147
> Percentiles:  90th: 49.000; 95th: 74.000; 99th: 115.000
>  10.000 -   20.339:  3160 #####################################################
>  20.339 -   40.397:  1131 ###################
>  40.397 -   79.308:   507 #########
>  79.308 -  154.794:   199 ###
> 154.794 -  301.232:    14 |
> 301.232 -  585.313:     1 |
> 585.313 - 8303.000:     1 |

The statistics do not change from patch to patch+1 but you're doing
changes that may affect performance, can you please update them as well?

thanks,
david

* [PATCH v5] Btrfs: optimize key searches in btrfs_search_slot
  2013-08-29 12:44 [PATCH] Btrfs: optimize key searches in btrfs_search_slot Filipe David Borba Manana
                   ` (3 preceding siblings ...)
  2013-08-29 19:21 ` [PATCH v4] " Filipe David Borba Manana
@ 2013-08-30 14:46 ` Filipe David Borba Manana
  2013-08-31 11:08   ` Miao Xie
  2013-08-31 12:54 ` [PATCH v6] " Filipe David Borba Manana
  2013-09-01 10:39 ` [PATCH v7] " Filipe David Borba Manana
  6 siblings, 1 reply; 24+ messages in thread
From: Filipe David Borba Manana @ 2013-08-30 14:46 UTC (permalink / raw)
  To: linux-btrfs; +Cc: dsterba, jbacik, Filipe David Borba Manana

When the binary search returns 0 (exact match), the target key
will necessarily be at slot 0 of all nodes below the current one,
so in this case the binary search is not needed because it will
always return 0, and we waste time doing it, holding node locks
for longer than necessary, etc.

Below follow histograms with the times spent on the current approach of
doing a binary search when the previous binary search returned 0, and
times for the new approach, which directly picks the first item/child
node in the leaf/node.

Current approach:

Count: 6682
Range: 35.000 - 8370.000; Mean: 85.837; Median: 75.000; Stddev: 106.429
Percentiles:  90th: 124.000; 95th: 145.000; 99th: 206.000
  35.000 -   61.080:  1235 ################
  61.080 -  106.053:  4207 #####################################################
 106.053 -  183.606:  1122 ##############
 183.606 -  317.341:   111 #
 317.341 -  547.959:     6 |
 547.959 - 8370.000:     1 |

Approach proposed by this patch:

Count: 6682
Range:  6.000 - 135.000; Mean: 16.690; Median: 16.000; Stddev:  7.160
Percentiles:  90th: 23.000; 95th: 27.000; 99th: 40.000
   6.000 -    8.418:    58 #
   8.418 -   11.670:  1149 #########################
  11.670 -   16.046:  2418 #####################################################
  16.046 -   21.934:  2098 ##############################################
  21.934 -   29.854:   744 ################
  29.854 -   40.511:   154 ###
  40.511 -   54.848:    41 #
  54.848 -   74.136:     5 |
  74.136 -  100.087:     9 |
 100.087 -  135.000:     6 |

These samples were captured during a run of the btrfs tests 001, 002 and
004 in the xfstests, with a leaf/node size of 4Kb.

Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
---

V2: Simplified code, removed unnecessary code.
V3: Replaced BUG_ON() with the new ASSERT() from Josef.
V4: Addressed latest comments from Zach Brown and Josef Bacik.
    Surrounded all code that is used for the assertion with a
    #ifdef CONFIG_BTRFS_ASSERT ... #endif block. Also changed
    offset arguments to be more strictly correct.
V5: Updated histograms to reflect latest version of the code.

 fs/btrfs/ctree.c |   42 ++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 40 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
index 5fa521b..6434672 100644
--- a/fs/btrfs/ctree.c
+++ b/fs/btrfs/ctree.c
@@ -2426,6 +2426,40 @@ done:
 	return ret;
 }
 
+static void key_search_validate(struct extent_buffer *b,
+				struct btrfs_key *key,
+				int level)
+{
+#ifdef CONFIG_BTRFS_ASSERT
+	struct btrfs_disk_key disk_key;
+
+	btrfs_cpu_key_to_disk(&disk_key, key);
+
+	if (level == 0)
+		ASSERT(!memcmp_extent_buffer(b, &disk_key,
+		    offsetof(struct btrfs_leaf, items[0].key),
+		    sizeof(disk_key)));
+	else
+		ASSERT(!memcmp_extent_buffer(b, &disk_key,
+		    offsetof(struct btrfs_node, ptrs[0].key),
+		    sizeof(disk_key)));
+#endif
+}
+
+static int key_search(struct extent_buffer *b, struct btrfs_key *key,
+		      int level, int *prev_cmp, int *slot)
+{
+	if (*prev_cmp != 0) {
+		*prev_cmp = bin_search(b, key, level, slot);
+		return *prev_cmp;
+	}
+
+	key_search_validate(b, key, level);
+	*slot = 0;
+
+	return 0;
+}
+
 /*
  * look for key in the tree.  path is filled in with nodes along the way
  * if key is found, we return zero and you can find the item in the leaf
@@ -2454,6 +2488,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
 	int write_lock_level = 0;
 	u8 lowest_level = 0;
 	int min_write_lock_level;
+	int prev_cmp;
 
 	lowest_level = p->lowest_level;
 	WARN_ON(lowest_level && ins_len > 0);
@@ -2484,6 +2519,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
 	min_write_lock_level = write_lock_level;
 
 again:
+	prev_cmp = -1;
 	/*
 	 * we try very hard to do read locks on the root
 	 */
@@ -2584,7 +2620,7 @@ cow_done:
 		if (!cow)
 			btrfs_unlock_up_safe(p, level + 1);
 
-		ret = bin_search(b, key, level, &slot);
+		ret = key_search(b, key, level, &prev_cmp, &slot);
 
 		if (level != 0) {
 			int dec = 0;
@@ -2719,6 +2755,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
 	int level;
 	int lowest_unlock = 1;
 	u8 lowest_level = 0;
+	int prev_cmp;
 
 	lowest_level = p->lowest_level;
 	WARN_ON(p->nodes[0] != NULL);
@@ -2729,6 +2766,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
 	}
 
 again:
+	prev_cmp = -1;
 	b = get_old_root(root, time_seq);
 	level = btrfs_header_level(b);
 	p->locks[level] = BTRFS_READ_LOCK;
@@ -2746,7 +2784,7 @@ again:
 		 */
 		btrfs_unlock_up_safe(p, level + 1);
 
-		ret = bin_search(b, key, level, &slot);
+		ret = key_search(b, key, level, &prev_cmp, &slot);
 
 		if (level != 0) {
 			int dec = 0;
-- 
1.7.9.5


* Re: [PATCH v4] Btrfs: optimize key searches in btrfs_search_slot
  2013-08-30 14:14   ` David Sterba
@ 2013-08-30 14:47     ` Filipe David Manana
  2013-08-30 14:59       ` David Sterba
  0 siblings, 1 reply; 24+ messages in thread
From: Filipe David Manana @ 2013-08-30 14:47 UTC (permalink / raw)
  To: dsterba, Filipe David Borba Manana, linux-btrfs

On Fri, Aug 30, 2013 at 3:14 PM, David Sterba <dsterba@suse.cz> wrote:
> On Thu, Aug 29, 2013 at 08:21:51PM +0100, Filipe David Borba Manana wrote:
>> Count: 5013
>> Range: 25.000 - 497.000; Mean: 82.767; Median: 64.000; Stddev: 49.972
>> Percentiles:  90th: 141.000; 95th: 182.000; 99th: 287.000
>>   25.000 -   33.930:   211 ######
>>   33.930 -   45.927:   277 ########
>>   45.927 -   62.045:  1834 #####################################################
>>   62.045 -   83.699:  1203 ###################################
>>   83.699 -  112.789:   609 ##################
>>  112.789 -  151.872:   450 #############
>>  151.872 -  204.377:   246 #######
>>  204.377 -  274.917:   124 ####
>>  274.917 -  369.684:    48 #
>>  369.684 -  497.000:    11 |
>>
>> Approach proposed by this patch:
>>
>> Count: 5013
>> Range: 10.000 - 8303.000; Mean: 28.505; Median: 18.000; Stddev: 119.147
>> Percentiles:  90th: 49.000; 95th: 74.000; 99th: 115.000
>>  10.000 -   20.339:  3160 #####################################################
>>  20.339 -   40.397:  1131 ###################
>>  40.397 -   79.308:   507 #########
>>  79.308 -  154.794:   199 ###
>> 154.794 -  301.232:    14 |
>> 301.232 -  585.313:     1 |
>> 585.313 - 8303.000:     1 |
>
> The statistics do not change from patch to patch+1 but you're doing
> changes that may affect performance, can you please update them as well?

Sure.
They're actually better now :)
Patch following with updated histograms.

>
> thanks,
> david



-- 
Filipe David Manana,

"Reasonable men adapt themselves to the world.
 Unreasonable men adapt the world to themselves.
 That's why all progress depends on unreasonable men."

* Re: [PATCH v4] Btrfs: optimize key searches in btrfs_search_slot
  2013-08-30 14:47     ` Filipe David Manana
@ 2013-08-30 14:59       ` David Sterba
  2013-08-30 15:10         ` Filipe David Manana
  0 siblings, 1 reply; 24+ messages in thread
From: David Sterba @ 2013-08-30 14:59 UTC (permalink / raw)
  To: Filipe David Manana; +Cc: dsterba, linux-btrfs

On Fri, Aug 30, 2013 at 03:47:21PM +0100, Filipe David Manana wrote:
> Sure.
> They're actually better now :)

Thanks. The numbers changed in both samples, but the relative difference
is still 2x improvement in this particular test.

david

* Re: [PATCH v4] Btrfs: optimize key searches in btrfs_search_slot
  2013-08-30 14:59       ` David Sterba
@ 2013-08-30 15:10         ` Filipe David Manana
  0 siblings, 0 replies; 24+ messages in thread
From: Filipe David Manana @ 2013-08-30 15:10 UTC (permalink / raw)
  To: dsterba, Filipe David Manana, linux-btrfs

On Fri, Aug 30, 2013 at 3:59 PM, David Sterba <dsterba@suse.cz> wrote:
> On Fri, Aug 30, 2013 at 03:47:21PM +0100, Filipe David Manana wrote:
>> Sure.
>> They're actually better now :)
>
> Thanks. The numbers changed in both samples, but the relative difference
> is still 2x improvement in this particular test.

I tend to favor the percentiles above everything else, and for this
last comparison they're all about 5x better. These times are for a
single node/leaf search. The higher the level (the root being the
highest) at which an exact match first happens, the better it is for
the overall tree search operation, since this optimal code path gets
executed more times.

>
> david



-- 
Filipe David Manana,

"Reasonable men adapt themselves to the world.
 Unreasonable men adapt the world to themselves.
 That's why all progress depends on unreasonable men."

* Re: [PATCH v5] Btrfs: optimize key searches in btrfs_search_slot
  2013-08-30 14:46 ` [PATCH v5] " Filipe David Borba Manana
@ 2013-08-31 11:08   ` Miao Xie
  0 siblings, 0 replies; 24+ messages in thread
From: Miao Xie @ 2013-08-31 11:08 UTC (permalink / raw)
  To: Filipe David Borba Manana; +Cc: linux-btrfs, dsterba, jbacik

On Fri, 30 Aug 2013 15:46:43 +0100, Filipe David Borba Manana wrote:
> When the binary search returns 0 (exact match), the target key
> will necessarily be at slot 0 of all nodes below the current one,
> so in this case the binary search is not needed because it will
> always return 0, and we waste time doing it, holding node locks
> for longer than necessary, etc.
> 
> Below follow histograms with the times spent on the current approach of
> doing a binary search when the previous binary search returned 0, and
> times for the new approach, which directly picks the first item/child
> node in the leaf/node.
> 
> Current approach:
> 
> Count: 6682
> Range: 35.000 - 8370.000; Mean: 85.837; Median: 75.000; Stddev: 106.429
> Percentiles:  90th: 124.000; 95th: 145.000; 99th: 206.000
>   35.000 -   61.080:  1235 ################
>   61.080 -  106.053:  4207 #####################################################
>  106.053 -  183.606:  1122 ##############
>  183.606 -  317.341:   111 #
>  317.341 -  547.959:     6 |
>  547.959 - 8370.000:     1 |
> 
> Approach proposed by this patch:
> 
> Count: 6682
> Range:  6.000 - 135.000; Mean: 16.690; Median: 16.000; Stddev:  7.160
> Percentiles:  90th: 23.000; 95th: 27.000; 99th: 40.000
>    6.000 -    8.418:    58 #
>    8.418 -   11.670:  1149 #########################
>   11.670 -   16.046:  2418 #####################################################
>   16.046 -   21.934:  2098 ##############################################
>   21.934 -   29.854:   744 ################
>   29.854 -   40.511:   154 ###
>   40.511 -   54.848:    41 #
>   54.848 -   74.136:     5 |
>   74.136 -  100.087:     9 |
>  100.087 -  135.000:     6 |
> 
> These samples were captured during a run of the btrfs tests 001, 002 and
> 004 in the xfstests, with a leaf/node size of 4Kb.
> 
> Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
> ---
> 
> V2: Simplified code, removed unnecessary code.
> V3: Replaced BUG_ON() with the new ASSERT() from Josef.
> V4: Addressed latest comments from Zach Brown and Josef Bacik.
>     Surrounded all code that is used for the assertion with a
>     #ifdef CONFIG_BTRFS_ASSERT ... #endif block. Also changed
>     offset arguments to be more strictly correct.
> V5: Updated histograms to reflect latest version of the code.
> 
>  fs/btrfs/ctree.c |   42 ++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 40 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
> index 5fa521b..6434672 100644
> --- a/fs/btrfs/ctree.c
> +++ b/fs/btrfs/ctree.c
> @@ -2426,6 +2426,40 @@ done:
>  	return ret;
>  }
>  
> +static void key_search_validate(struct extent_buffer *b,
> +				struct btrfs_key *key,
> +				int level)
> +{
> +#ifdef CONFIG_BTRFS_ASSERT
> +	struct btrfs_disk_key disk_key;
> +
> +	btrfs_cpu_key_to_disk(&disk_key, key);
> +
> +	if (level == 0)
> +		ASSERT(!memcmp_extent_buffer(b, &disk_key,
> +		    offsetof(struct btrfs_leaf, items[0].key),
> +		    sizeof(disk_key)));
> +	else
> +		ASSERT(!memcmp_extent_buffer(b, &disk_key,
> +		    offsetof(struct btrfs_node, ptrs[0].key),
> +		    sizeof(disk_key)));
> +#endif
> +}

I think it is better to move the #ifdef out of key_search_validate()
and make the function return the check result, then

> +
> +static int key_search(struct extent_buffer *b, struct btrfs_key *key,
> +		      int level, int *prev_cmp, int *slot)
> +{
> +	if (*prev_cmp != 0) {
> +		*prev_cmp = bin_search(b, key, level, slot);
> +		return *prev_cmp;
> +	}
> +
> +	key_search_validate(b, key, level);

ASSERT(key_search_validate(b, key, level));

It can make the compiler happy when CONFIG_BTRFS_ASSERT is not set.

Thanks
Miao

> +	*slot = 0;
> +
> +	return 0;
> +}
> +
>  /*
>   * look for key in the tree.  path is filled in with nodes along the way
>   * if key is found, we return zero and you can find the item in the leaf
> @@ -2454,6 +2488,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
>  	int write_lock_level = 0;
>  	u8 lowest_level = 0;
>  	int min_write_lock_level;
> +	int prev_cmp;
>  
>  	lowest_level = p->lowest_level;
>  	WARN_ON(lowest_level && ins_len > 0);
> @@ -2484,6 +2519,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
>  	min_write_lock_level = write_lock_level;
>  
>  again:
> +	prev_cmp = -1;
>  	/*
>  	 * we try very hard to do read locks on the root
>  	 */
> @@ -2584,7 +2620,7 @@ cow_done:
>  		if (!cow)
>  			btrfs_unlock_up_safe(p, level + 1);
>  
> -		ret = bin_search(b, key, level, &slot);
> +		ret = key_search(b, key, level, &prev_cmp, &slot);
>  
>  		if (level != 0) {
>  			int dec = 0;
> @@ -2719,6 +2755,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
>  	int level;
>  	int lowest_unlock = 1;
>  	u8 lowest_level = 0;
> +	int prev_cmp;
>  
>  	lowest_level = p->lowest_level;
>  	WARN_ON(p->nodes[0] != NULL);
> @@ -2729,6 +2766,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
>  	}
>  
>  again:
> +	prev_cmp = -1;
>  	b = get_old_root(root, time_seq);
>  	level = btrfs_header_level(b);
>  	p->locks[level] = BTRFS_READ_LOCK;
> @@ -2746,7 +2784,7 @@ again:
>  		 */
>  		btrfs_unlock_up_safe(p, level + 1);
>  
> -		ret = bin_search(b, key, level, &slot);
> +		ret = key_search(b, key, level, &prev_cmp, &slot);
>  
>  		if (level != 0) {
>  			int dec = 0;
> 


* [PATCH v6] Btrfs: optimize key searches in btrfs_search_slot
  2013-08-29 12:44 [PATCH] Btrfs: optimize key searches in btrfs_search_slot Filipe David Borba Manana
                   ` (4 preceding siblings ...)
  2013-08-30 14:46 ` [PATCH v5] " Filipe David Borba Manana
@ 2013-08-31 12:54 ` Filipe David Borba Manana
  2013-09-01  7:21   ` Miao Xie
  2013-09-01 10:39 ` [PATCH v7] " Filipe David Borba Manana
  6 siblings, 1 reply; 24+ messages in thread
From: Filipe David Borba Manana @ 2013-08-31 12:54 UTC (permalink / raw)
  To: linux-btrfs; +Cc: miaox, jbacik, Filipe David Borba Manana

When the binary search returns 0 (exact match), the target key
will necessarily be at slot 0 of all nodes below the current one,
so in this case the binary search is not needed because it will
always return 0, and we waste time doing it, holding node locks
for longer than necessary, etc.

Below follow histograms with the times spent on the current approach of
doing a binary search when the previous binary search returned 0, and
times for the new approach, which directly picks the first item/child
node in the leaf/node.

Current approach:

Count: 6682
Range: 35.000 - 8370.000; Mean: 85.837; Median: 75.000; Stddev: 106.429
Percentiles:  90th: 124.000; 95th: 145.000; 99th: 206.000
  35.000 -   61.080:  1235 ################
  61.080 -  106.053:  4207 #####################################################
 106.053 -  183.606:  1122 ##############
 183.606 -  317.341:   111 #
 317.341 -  547.959:     6 |
 547.959 - 8370.000:     1 |

Approach proposed by this patch:

Count: 6682
Range:  6.000 - 135.000; Mean: 16.690; Median: 16.000; Stddev:  7.160
Percentiles:  90th: 23.000; 95th: 27.000; 99th: 40.000
   6.000 -    8.418:    58 #
   8.418 -   11.670:  1149 #########################
  11.670 -   16.046:  2418 #####################################################
  16.046 -   21.934:  2098 ##############################################
  21.934 -   29.854:   744 ################
  29.854 -   40.511:   154 ###
  40.511 -   54.848:    41 #
  54.848 -   74.136:     5 |
  74.136 -  100.087:     9 |
 100.087 -  135.000:     6 |

These samples were captured during a run of the btrfs tests 001, 002 and
004 in the xfstests, with a leaf/node size of 4Kb.

Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
---

V2: Simplified code, removed unnecessary code.
V3: Replaced BUG_ON() with the new ASSERT() from Josef.
V4: Addressed latest comments from Zach Brown and Josef Bacik.
    Surrounded all code that is used for the assertion with a
    #ifdef CONFIG_BTRFS_ASSERT ... #endif block. Also changed
    offset arguments to be more strictly correct.
V5: Updated histograms to reflect latest version of the code.
V6: Use single assert macro and no more #ifdef CONFIG_BTRFS_ASSERT
    ... #endif logic, as suggested by Miao Xie.

 fs/btrfs/ctree.c |   39 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 37 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
index 5fa521b..5f38157 100644
--- a/fs/btrfs/ctree.c
+++ b/fs/btrfs/ctree.c
@@ -2426,6 +2426,37 @@ done:
 	return ret;
 }
 
+static int key_search_validate(struct extent_buffer *b,
+			       struct btrfs_key *key,
+			       int level)
+{
+	struct btrfs_disk_key disk_key;
+	unsigned long offset;
+
+	btrfs_cpu_key_to_disk(&disk_key, key);
+
+	if (level == 0)
+		offset = offsetof(struct btrfs_leaf, items[0].key);
+	else
+		offset = offsetof(struct btrfs_node, ptrs[0].key);
+
+	return !memcmp_extent_buffer(b, &disk_key, offset, sizeof(disk_key));
+}
+
+static int key_search(struct extent_buffer *b, struct btrfs_key *key,
+		      int level, int *prev_cmp, int *slot)
+{
+	if (*prev_cmp != 0) {
+		*prev_cmp = bin_search(b, key, level, slot);
+		return *prev_cmp;
+	}
+
+	ASSERT(key_search_validate(b, key, level));
+	*slot = 0;
+
+	return 0;
+}
+
 /*
  * look for key in the tree.  path is filled in with nodes along the way
  * if key is found, we return zero and you can find the item in the leaf
@@ -2454,6 +2485,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
 	int write_lock_level = 0;
 	u8 lowest_level = 0;
 	int min_write_lock_level;
+	int prev_cmp;
 
 	lowest_level = p->lowest_level;
 	WARN_ON(lowest_level && ins_len > 0);
@@ -2484,6 +2516,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
 	min_write_lock_level = write_lock_level;
 
 again:
+	prev_cmp = -1;
 	/*
 	 * we try very hard to do read locks on the root
 	 */
@@ -2584,7 +2617,7 @@ cow_done:
 		if (!cow)
 			btrfs_unlock_up_safe(p, level + 1);
 
-		ret = bin_search(b, key, level, &slot);
+		ret = key_search(b, key, level, &prev_cmp, &slot);
 
 		if (level != 0) {
 			int dec = 0;
@@ -2719,6 +2752,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
 	int level;
 	int lowest_unlock = 1;
 	u8 lowest_level = 0;
+	int prev_cmp;
 
 	lowest_level = p->lowest_level;
 	WARN_ON(p->nodes[0] != NULL);
@@ -2729,6 +2763,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
 	}
 
 again:
+	prev_cmp = -1;
 	b = get_old_root(root, time_seq);
 	level = btrfs_header_level(b);
 	p->locks[level] = BTRFS_READ_LOCK;
@@ -2746,7 +2781,7 @@ again:
 		 */
 		btrfs_unlock_up_safe(p, level + 1);
 
-		ret = bin_search(b, key, level, &slot);
+		ret = key_search(b, key, level, &prev_cmp, &slot);
 
 		if (level != 0) {
 			int dec = 0;
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCH v6] Btrfs: optimize key searches in btrfs_search_slot
  2013-08-31 12:54 ` [PATCH v6] " Filipe David Borba Manana
@ 2013-09-01  7:21   ` Miao Xie
  2013-09-01 10:32     ` Filipe David Manana
  0 siblings, 1 reply; 24+ messages in thread
From: Miao Xie @ 2013-09-01  7:21 UTC (permalink / raw)
  To: Filipe David Borba Manana; +Cc: linux-btrfs, jbacik

On Sat, 31 Aug 2013 13:54:56 +0100, Filipe David Borba Manana wrote:
> When the binary search returns 0 (exact match), the target key
> will necessarily be at slot 0 of all nodes below the current one,
> so in this case the binary search is not needed because it will
> always return 0, and we waste time doing it, holding node locks
> for longer than necessary, etc.
> 
> Below follow histograms with the times spent on the current approach of
> doing a binary search when the previous binary search returned 0, and
> times for the new approach, which directly picks the first item/child
> node in the leaf/node.
> 
> Current approach:
> 
> Count: 6682
> Range: 35.000 - 8370.000; Mean: 85.837; Median: 75.000; Stddev: 106.429
> Percentiles:  90th: 124.000; 95th: 145.000; 99th: 206.000
>   35.000 -   61.080:  1235 ################
>   61.080 -  106.053:  4207 #####################################################
>  106.053 -  183.606:  1122 ##############
>  183.606 -  317.341:   111 #
>  317.341 -  547.959:     6 |
>  547.959 - 8370.000:     1 |
> 
> Approach proposed by this patch:
> 
> Count: 6682
> Range:  6.000 - 135.000; Mean: 16.690; Median: 16.000; Stddev:  7.160
> Percentiles:  90th: 23.000; 95th: 27.000; 99th: 40.000
>    6.000 -    8.418:    58 #
>    8.418 -   11.670:  1149 #########################
>   11.670 -   16.046:  2418 #####################################################
>   16.046 -   21.934:  2098 ##############################################
>   21.934 -   29.854:   744 ################
>   29.854 -   40.511:   154 ###
>   40.511 -   54.848:    41 #
>   54.848 -   74.136:     5 |
>   74.136 -  100.087:     9 |
>  100.087 -  135.000:     6 |
> 
> These samples were captured during a run of the btrfs tests 001, 002 and
> 004 in the xfstests, with a leaf/node size of 4Kb.
> 
> Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
> Signed-off-by: Josef Bacik <jbacik@fusionio.com>
> ---
> 
> V2: Simplified code, removed unnecessary code.
> V3: Replaced BUG_ON() with the new ASSERT() from Josef.
> V4: Addressed latest comments from Zach Brown and Josef Bacik.
>     Surrounded all code that is used for the assertion with a
>     #ifdef CONFIG_BTRFS_ASSERT ... #endif block. Also changed
>     offset arguments to be more strictly correct.
> V5: Updated histograms to reflect latest version of the code.
> V6: Use single assert macro and no more #ifdef CONFIG_BTRFS_ASSERT
>     ... #endif logic, as suggested by Miao Xie.
> 
>  fs/btrfs/ctree.c |   39 +++++++++++++++++++++++++++++++++++++--
>  1 file changed, 37 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
> index 5fa521b..5f38157 100644
> --- a/fs/btrfs/ctree.c
> +++ b/fs/btrfs/ctree.c
> @@ -2426,6 +2426,37 @@ done:
>  	return ret;
>  }
>  
> +static int key_search_validate(struct extent_buffer *b,
> +			       struct btrfs_key *key,
> +			       int level)
> +{
> +	struct btrfs_disk_key disk_key;
> +	unsigned long offset;
> +
> +	btrfs_cpu_key_to_disk(&disk_key, key);
> +
> +	if (level == 0)
> +		offset = offsetof(struct btrfs_leaf, items[0].key);
> +	else
> +		offset = offsetof(struct btrfs_node, ptrs[0].key);
> +
> +	return !memcmp_extent_buffer(b, &disk_key, offset, sizeof(disk_key));
> +}

Maybe I didn't explain clearly in the previous mail: what I suggested was to
move the "#ifdef CONFIG_BTRFS_ASSERT" outside the function, not to remove it.
The final code is:

#ifdef CONFIG_BTRFS_ASSERT
static int key_search_validate()
{
}
#endif

static int key_search()
{
	...
	ASSERT(key_search_validate(b, key, level));
	...
}

If there is no "#ifdef CONFIG_BTRFS_ASSERT" wrapper around key_search_validate(),
the compiler will emit an unused-function warning when CONFIG_BTRFS_ASSERT is not set.

Thanks
Miao

> +
> +static int key_search(struct extent_buffer *b, struct btrfs_key *key,
> +		      int level, int *prev_cmp, int *slot)
> +{
> +	if (*prev_cmp != 0) {
> +		*prev_cmp = bin_search(b, key, level, slot);
> +		return *prev_cmp;
> +	}
> +
> +	ASSERT(key_search_validate(b, key, level));
> +	*slot = 0;
> +
> +	return 0;
> +}
> +
>  /*
>   * look for key in the tree.  path is filled in with nodes along the way
>   * if key is found, we return zero and you can find the item in the leaf
> @@ -2454,6 +2485,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
>  	int write_lock_level = 0;
>  	u8 lowest_level = 0;
>  	int min_write_lock_level;
> +	int prev_cmp;
>  
>  	lowest_level = p->lowest_level;
>  	WARN_ON(lowest_level && ins_len > 0);
> @@ -2484,6 +2516,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
>  	min_write_lock_level = write_lock_level;
>  
>  again:
> +	prev_cmp = -1;
>  	/*
>  	 * we try very hard to do read locks on the root
>  	 */
> @@ -2584,7 +2617,7 @@ cow_done:
>  		if (!cow)
>  			btrfs_unlock_up_safe(p, level + 1);
>  
> -		ret = bin_search(b, key, level, &slot);
> +		ret = key_search(b, key, level, &prev_cmp, &slot);
>  
>  		if (level != 0) {
>  			int dec = 0;
> @@ -2719,6 +2752,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
>  	int level;
>  	int lowest_unlock = 1;
>  	u8 lowest_level = 0;
> +	int prev_cmp;
>  
>  	lowest_level = p->lowest_level;
>  	WARN_ON(p->nodes[0] != NULL);
> @@ -2729,6 +2763,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
>  	}
>  
>  again:
> +	prev_cmp = -1;
>  	b = get_old_root(root, time_seq);
>  	level = btrfs_header_level(b);
>  	p->locks[level] = BTRFS_READ_LOCK;
> @@ -2746,7 +2781,7 @@ again:
>  		 */
>  		btrfs_unlock_up_safe(p, level + 1);
>  
> -		ret = bin_search(b, key, level, &slot);
> +		ret = key_search(b, key, level, &prev_cmp, &slot);
>  
>  		if (level != 0) {
>  			int dec = 0;
> 



* Re: [PATCH v6] Btrfs: optimize key searches in btrfs_search_slot
  2013-09-01  7:21   ` Miao Xie
@ 2013-09-01 10:32     ` Filipe David Manana
  0 siblings, 0 replies; 24+ messages in thread
From: Filipe David Manana @ 2013-09-01 10:32 UTC (permalink / raw)
  To: Miao Xie; +Cc: linux-btrfs, Josef Bacik

On Sun, Sep 1, 2013 at 8:21 AM, Miao Xie <miaox@cn.fujitsu.com> wrote:
> On Sat, 31 Aug 2013 13:54:56 +0100, Filipe David Borba Manana wrote:
>> When the binary search returns 0 (exact match), the target key
>> will necessarily be at slot 0 of all nodes below the current one,
>> so in this case the binary search is not needed because it will
>> always return 0, and we waste time doing it, holding node locks
>> for longer than necessary, etc.
>>
>> Below follow histograms with the times spent on the current approach of
>> doing a binary search when the previous binary search returned 0, and
>> times for the new approach, which directly picks the first item/child
>> node in the leaf/node.
>>
>> Current approach:
>>
>> Count: 6682
>> Range: 35.000 - 8370.000; Mean: 85.837; Median: 75.000; Stddev: 106.429
>> Percentiles:  90th: 124.000; 95th: 145.000; 99th: 206.000
>>   35.000 -   61.080:  1235 ################
>>   61.080 -  106.053:  4207 #####################################################
>>  106.053 -  183.606:  1122 ##############
>>  183.606 -  317.341:   111 #
>>  317.341 -  547.959:     6 |
>>  547.959 - 8370.000:     1 |
>>
>> Approach proposed by this patch:
>>
>> Count: 6682
>> Range:  6.000 - 135.000; Mean: 16.690; Median: 16.000; Stddev:  7.160
>> Percentiles:  90th: 23.000; 95th: 27.000; 99th: 40.000
>>    6.000 -    8.418:    58 #
>>    8.418 -   11.670:  1149 #########################
>>   11.670 -   16.046:  2418 #####################################################
>>   16.046 -   21.934:  2098 ##############################################
>>   21.934 -   29.854:   744 ################
>>   29.854 -   40.511:   154 ###
>>   40.511 -   54.848:    41 #
>>   54.848 -   74.136:     5 |
>>   74.136 -  100.087:     9 |
>>  100.087 -  135.000:     6 |
>>
>> These samples were captured during a run of the btrfs tests 001, 002 and
>> 004 in the xfstests, with a leaf/node size of 4Kb.
>>
>> Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
>> Signed-off-by: Josef Bacik <jbacik@fusionio.com>
>> ---
>>
>> V2: Simplified code, removed unnecessary code.
>> V3: Replaced BUG_ON() with the new ASSERT() from Josef.
>> V4: Addressed latest comments from Zach Brown and Josef Bacik.
>>     Surrounded all code that is used for the assertion with a
>>     #ifdef CONFIG_BTRFS_ASSERT ... #endif block. Also changed
>>     offset arguments to be more strictly correct.
>> V5: Updated histograms to reflect latest version of the code.
>> V6: Use single assert macro and no more #ifdef CONFIG_BTRFS_ASSERT
>>     ... #endif logic, as suggested by Miao Xie.
>>
>>  fs/btrfs/ctree.c |   39 +++++++++++++++++++++++++++++++++++++--
>>  1 file changed, 37 insertions(+), 2 deletions(-)
>>
>> diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
>> index 5fa521b..5f38157 100644
>> --- a/fs/btrfs/ctree.c
>> +++ b/fs/btrfs/ctree.c
>> @@ -2426,6 +2426,37 @@ done:
>>       return ret;
>>  }
>>
>> +static int key_search_validate(struct extent_buffer *b,
>> +                            struct btrfs_key *key,
>> +                            int level)
>> +{
>> +     struct btrfs_disk_key disk_key;
>> +     unsigned long offset;
>> +
>> +     btrfs_cpu_key_to_disk(&disk_key, key);
>> +
>> +     if (level == 0)
>> +             offset = offsetof(struct btrfs_leaf, items[0].key);
>> +     else
>> +             offset = offsetof(struct btrfs_node, ptrs[0].key);
>> +
>> +     return !memcmp_extent_buffer(b, &disk_key, offset, sizeof(disk_key));
>> +}
>
> Maybe I didn't explain clearly in the previous mail: what I suggested was to
> move the "#ifdef CONFIG_BTRFS_ASSERT" outside the function, not to remove it.
> The final code is:
>
> #ifdef CONFIG_BTRFS_ASSERT
> static int key_search_validate()
> {
> }
> #endif
>
> static int key_search()
> {
>         ...
>         ASSERT(key_search_validate(b, key, level));
>         ...
> }
>
> If there is no "#ifdef CONFIG_BTRFS_ASSERT" wrapper around key_search_validate(),
> the compiler will emit an unused-function warning when CONFIG_BTRFS_ASSERT is not set.

OK, I misunderstood what you meant before. If the goal is not to remove the
#ifdef ... #endif, then honestly I don't see what value the suggestion brings
compared to patch v5; it seems purely a style preference (and it's highly
subjective whether it's better or not). Nevertheless I'm fine with it, and
hopefully everyone else will be too.

thanks

>
> Thanks
> Miao
>
>> +
>> +static int key_search(struct extent_buffer *b, struct btrfs_key *key,
>> +                   int level, int *prev_cmp, int *slot)
>> +{
>> +     if (*prev_cmp != 0) {
>> +             *prev_cmp = bin_search(b, key, level, slot);
>> +             return *prev_cmp;
>> +     }
>> +
>> +     ASSERT(key_search_validate(b, key, level));
>> +     *slot = 0;
>> +
>> +     return 0;
>> +}
>> +
>>  /*
>>   * look for key in the tree.  path is filled in with nodes along the way
>>   * if key is found, we return zero and you can find the item in the leaf
>> @@ -2454,6 +2485,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
>>       int write_lock_level = 0;
>>       u8 lowest_level = 0;
>>       int min_write_lock_level;
>> +     int prev_cmp;
>>
>>       lowest_level = p->lowest_level;
>>       WARN_ON(lowest_level && ins_len > 0);
>> @@ -2484,6 +2516,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
>>       min_write_lock_level = write_lock_level;
>>
>>  again:
>> +     prev_cmp = -1;
>>       /*
>>        * we try very hard to do read locks on the root
>>        */
>> @@ -2584,7 +2617,7 @@ cow_done:
>>               if (!cow)
>>                       btrfs_unlock_up_safe(p, level + 1);
>>
>> -             ret = bin_search(b, key, level, &slot);
>> +             ret = key_search(b, key, level, &prev_cmp, &slot);
>>
>>               if (level != 0) {
>>                       int dec = 0;
>> @@ -2719,6 +2752,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
>>       int level;
>>       int lowest_unlock = 1;
>>       u8 lowest_level = 0;
>> +     int prev_cmp;
>>
>>       lowest_level = p->lowest_level;
>>       WARN_ON(p->nodes[0] != NULL);
>> @@ -2729,6 +2763,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
>>       }
>>
>>  again:
>> +     prev_cmp = -1;
>>       b = get_old_root(root, time_seq);
>>       level = btrfs_header_level(b);
>>       p->locks[level] = BTRFS_READ_LOCK;
>> @@ -2746,7 +2781,7 @@ again:
>>                */
>>               btrfs_unlock_up_safe(p, level + 1);
>>
>> -             ret = bin_search(b, key, level, &slot);
>> +             ret = key_search(b, key, level, &prev_cmp, &slot);
>>
>>               if (level != 0) {
>>                       int dec = 0;
>>
>



-- 
Filipe David Manana,

"Reasonable men adapt themselves to the world.
 Unreasonable men adapt the world to themselves.
 That's why all progress depends on unreasonable men."


* [PATCH v7] Btrfs: optimize key searches in btrfs_search_slot
  2013-08-29 12:44 [PATCH] Btrfs: optimize key searches in btrfs_search_slot Filipe David Borba Manana
                   ` (5 preceding siblings ...)
  2013-08-31 12:54 ` [PATCH v6] " Filipe David Borba Manana
@ 2013-09-01 10:39 ` Filipe David Borba Manana
  2013-09-02 13:39   ` David Sterba
  6 siblings, 1 reply; 24+ messages in thread
From: Filipe David Borba Manana @ 2013-09-01 10:39 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Filipe David Borba Manana, Josef Bacik

When the binary search returns 0 (exact match), the target key
will necessarily be at slot 0 of all nodes below the current one,
so in this case the binary search is not needed because it will
always return 0, and we waste time doing it, holding node locks
for longer than necessary, etc.

Below follow histograms with the times spent on the current approach of
doing a binary search when the previous binary search returned 0, and
times for the new approach, which directly picks the first item/child
node in the leaf/node.

Current approach:

Count: 6682
Range: 35.000 - 8370.000; Mean: 85.837; Median: 75.000; Stddev: 106.429
Percentiles:  90th: 124.000; 95th: 145.000; 99th: 206.000
  35.000 -   61.080:  1235 ################
  61.080 -  106.053:  4207 #####################################################
 106.053 -  183.606:  1122 ##############
 183.606 -  317.341:   111 #
 317.341 -  547.959:     6 |
 547.959 - 8370.000:     1 |

Approach proposed by this patch:

Count: 6682
Range:  6.000 - 135.000; Mean: 16.690; Median: 16.000; Stddev:  7.160
Percentiles:  90th: 23.000; 95th: 27.000; 99th: 40.000
   6.000 -    8.418:    58 #
   8.418 -   11.670:  1149 #########################
  11.670 -   16.046:  2418 #####################################################
  16.046 -   21.934:  2098 ##############################################
  21.934 -   29.854:   744 ################
  29.854 -   40.511:   154 ###
  40.511 -   54.848:    41 #
  54.848 -   74.136:     5 |
  74.136 -  100.087:     9 |
 100.087 -  135.000:     6 |

These samples were captured during a run of the btrfs tests 001, 002 and
004 in the xfstests, with a leaf/node size of 4Kb.

Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
---

V2: Simplified code, removed unnecessary code.
V3: Replaced BUG_ON() with the new ASSERT() from Josef.
V4: Addressed latest comments from Zach Brown and Josef Bacik.
    Surrounded all code that is used for the assertion with a
    #ifdef CONFIG_BTRFS_ASSERT ... #endif block. Also changed
    offset arguments to be more strictly correct.
V5: Updated histograms to reflect latest version of the code.
V6: Use single assert macro and no more #ifdef CONFIG_BTRFS_ASSERT
    ... #endif logic, as suggested by Miao Xie.
V7: Added back the #ifdef ... #endif logic, to avoid compiler
    warning about unused function when CONFIG_BTRFS_ASSERT is
    not enabled.

 fs/btrfs/ctree.c |   41 +++++++++++++++++++++++++++++++++++++++--
 1 file changed, 39 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
index 5fa521b..4d602f7 100644
--- a/fs/btrfs/ctree.c
+++ b/fs/btrfs/ctree.c
@@ -2426,6 +2426,39 @@ done:
 	return ret;
 }
 
+#ifdef CONFIG_BTRFS_ASSERT
+static int key_search_validate(struct extent_buffer *b,
+			       struct btrfs_key *key,
+			       int level)
+{
+	struct btrfs_disk_key disk_key;
+	unsigned long offset;
+
+	btrfs_cpu_key_to_disk(&disk_key, key);
+
+	if (level == 0)
+		offset = offsetof(struct btrfs_leaf, items[0].key);
+	else
+		offset = offsetof(struct btrfs_node, ptrs[0].key);
+
+	return !memcmp_extent_buffer(b, &disk_key, offset, sizeof(disk_key));
+}
+#endif
+
+static int key_search(struct extent_buffer *b, struct btrfs_key *key,
+		      int level, int *prev_cmp, int *slot)
+{
+	if (*prev_cmp != 0) {
+		*prev_cmp = bin_search(b, key, level, slot);
+		return *prev_cmp;
+	}
+
+	ASSERT(key_search_validate(b, key, level));
+	*slot = 0;
+
+	return 0;
+}
+
 /*
  * look for key in the tree.  path is filled in with nodes along the way
  * if key is found, we return zero and you can find the item in the leaf
@@ -2454,6 +2487,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
 	int write_lock_level = 0;
 	u8 lowest_level = 0;
 	int min_write_lock_level;
+	int prev_cmp;
 
 	lowest_level = p->lowest_level;
 	WARN_ON(lowest_level && ins_len > 0);
@@ -2484,6 +2518,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
 	min_write_lock_level = write_lock_level;
 
 again:
+	prev_cmp = -1;
 	/*
 	 * we try very hard to do read locks on the root
 	 */
@@ -2584,7 +2619,7 @@ cow_done:
 		if (!cow)
 			btrfs_unlock_up_safe(p, level + 1);
 
-		ret = bin_search(b, key, level, &slot);
+		ret = key_search(b, key, level, &prev_cmp, &slot);
 
 		if (level != 0) {
 			int dec = 0;
@@ -2719,6 +2754,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
 	int level;
 	int lowest_unlock = 1;
 	u8 lowest_level = 0;
+	int prev_cmp;
 
 	lowest_level = p->lowest_level;
 	WARN_ON(p->nodes[0] != NULL);
@@ -2729,6 +2765,7 @@ int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
 	}
 
 again:
+	prev_cmp = -1;
 	b = get_old_root(root, time_seq);
 	level = btrfs_header_level(b);
 	p->locks[level] = BTRFS_READ_LOCK;
@@ -2746,7 +2783,7 @@ again:
 		 */
 		btrfs_unlock_up_safe(p, level + 1);
 
-		ret = bin_search(b, key, level, &slot);
+		ret = key_search(b, key, level, &prev_cmp, &slot);
 
 		if (level != 0) {
 			int dec = 0;
-- 
1.7.9.5



* Re: [PATCH v7] Btrfs: optimize key searches in btrfs_search_slot
  2013-09-01 10:39 ` [PATCH v7] " Filipe David Borba Manana
@ 2013-09-02 13:39   ` David Sterba
  2013-09-02 14:40     ` Filipe David Manana
  0 siblings, 1 reply; 24+ messages in thread
From: David Sterba @ 2013-09-02 13:39 UTC (permalink / raw)
  To: Filipe David Borba Manana; +Cc: linux-btrfs, Josef Bacik

On Sun, Sep 01, 2013 at 11:39:28AM +0100, Filipe David Borba Manana wrote:
> +#ifdef CONFIG_BTRFS_ASSERT
> +static int key_search_validate(struct extent_buffer *b,
> +			       struct btrfs_key *key,
> +			       int level)
> +{
...
> +}
> +#endif
> +
> +static int key_search(struct extent_buffer *b, struct btrfs_key *key,
> +		      int level, int *prev_cmp, int *slot)
> +{
> +	if (*prev_cmp != 0) {
> +		*prev_cmp = bin_search(b, key, level, slot);
> +		return *prev_cmp;
> +	}
> +
> +	ASSERT(key_search_validate(b, key, level));

But what if I want to use key_search_validate() outside the context of an
ASSERT? I don't see a reason why the function needs to be under #ifdef
CONFIG_BTRFS_ASSERT / #endif at all.

> +	*slot = 0;
> +
> +	return 0;
> +}


* Re: [PATCH v7] Btrfs: optimize key searches in btrfs_search_slot
  2013-09-02 13:39   ` David Sterba
@ 2013-09-02 14:40     ` Filipe David Manana
  2013-09-02 14:52       ` David Sterba
  0 siblings, 1 reply; 24+ messages in thread
From: Filipe David Manana @ 2013-09-02 14:40 UTC (permalink / raw)
  To: dsterba, Filipe David Borba Manana, linux-btrfs, Josef Bacik

On Mon, Sep 2, 2013 at 2:39 PM, David Sterba <dsterba@suse.cz> wrote:
> On Sun, Sep 01, 2013 at 11:39:28AM +0100, Filipe David Borba Manana wrote:
>> +#ifdef CONFIG_BTRFS_ASSERT
>> +static int key_search_validate(struct extent_buffer *b,
>> +                            struct btrfs_key *key,
>> +                            int level)
>> +{
> ...
>> +}
>> +#endif
>> +
>> +static int key_search(struct extent_buffer *b, struct btrfs_key *key,
>> +                   int level, int *prev_cmp, int *slot)
>> +{
>> +     if (*prev_cmp != 0) {
>> +             *prev_cmp = bin_search(b, key, level, slot);
>> +             return *prev_cmp;
>> +     }
>> +
>> +     ASSERT(key_search_validate(b, key, level));
>
> But what if I want to use key_search_validate out of the context of an
> ASSERT ?

Right, but right now nothing else uses it. Should the need for it arise,
it's trivial to address.

> I don't see a reason why the function needs to be under #ifdef
> BTRFS_ASSERT / #endif at all.

To avoid the compiler warning, as mentioned before.

Between patch versions v5 to v7, I don't have any strong preference.
All have correct, small and simple code.

>
>> +     *slot = 0;
>> +
>> +     return 0;
>> +}



-- 
Filipe David Manana,

"Reasonable men adapt themselves to the world.
 Unreasonable men adapt the world to themselves.
 That's why all progress depends on unreasonable men."


* Re: [PATCH v7] Btrfs: optimize key searches in btrfs_search_slot
  2013-09-02 14:40     ` Filipe David Manana
@ 2013-09-02 14:52       ` David Sterba
  0 siblings, 0 replies; 24+ messages in thread
From: David Sterba @ 2013-09-02 14:52 UTC (permalink / raw)
  To: Filipe David Manana; +Cc: dsterba, linux-btrfs, Josef Bacik

On Mon, Sep 02, 2013 at 03:40:39PM +0100, Filipe David Manana wrote:
> Between patch versions v5 to v7, I don't have any strong preference.
> All have correct, small and simple code.

I'm ok with v7.


end of thread, newest message: 2013-09-02 14:52 UTC

Thread overview: 24+ messages
2013-08-29 12:44 [PATCH] Btrfs: optimize key searches in btrfs_search_slot Filipe David Borba Manana
2013-08-29 13:42 ` [PATCH v2] " Filipe David Borba Manana
2013-08-29 13:49 ` [PATCH] " Josef Bacik
2013-08-29 13:53   ` Filipe David Manana
2013-08-29 13:59 ` [PATCH v3] " Filipe David Borba Manana
2013-08-29 18:08   ` Zach Brown
2013-08-29 18:35     ` Josef Bacik
2013-08-29 19:00       ` Zach Brown
2013-08-29 18:41     ` Filipe David Manana
2013-08-29 19:02       ` Zach Brown
2013-08-29 19:21 ` [PATCH v4] " Filipe David Borba Manana
2013-08-30 14:14   ` David Sterba
2013-08-30 14:47     ` Filipe David Manana
2013-08-30 14:59       ` David Sterba
2013-08-30 15:10         ` Filipe David Manana
2013-08-30 14:46 ` [PATCH v5] " Filipe David Borba Manana
2013-08-31 11:08   ` Miao Xie
2013-08-31 12:54 ` [PATCH v6] " Filipe David Borba Manana
2013-09-01  7:21   ` Miao Xie
2013-09-01 10:32     ` Filipe David Manana
2013-09-01 10:39 ` [PATCH v7] " Filipe David Borba Manana
2013-09-02 13:39   ` David Sterba
2013-09-02 14:40     ` Filipe David Manana
2013-09-02 14:52       ` David Sterba
