From: Bart Van Assche <bvanassche@acm.org>
To: Yu Kuai <yukuai3@huawei.com>,
	axboe@kernel.dk, andriy.shevchenko@linux.intel.com,
	john.garry@huawei.com, ming.lei@redhat.com
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	yi.zhang@huawei.com
Subject: Re: [PATCH -next RFC v2 8/8] sbitmap: wake up the number of threads based on required tags
Date: Fri, 8 Apr 2022 07:31:02 -0700	[thread overview]
Message-ID: <5d84c02f-62c6-6418-6629-cebd42dc2ca5@acm.org> (raw)
In-Reply-To: <20220408073916.1428590-9-yukuai3@huawei.com>

On 4/8/22 00:39, Yu Kuai wrote:
> Always waking up 'wake_batch' threads intensifies competition, and split
> I/O won't be issued continuously. Now that the number of tags required by
> a huge I/O is recorded, it is safe to wake up based on the required tags.
> 
> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
> ---
>   lib/sbitmap.c | 22 +++++++++++++++++++++-
>   1 file changed, 21 insertions(+), 1 deletion(-)
> 
> diff --git a/lib/sbitmap.c b/lib/sbitmap.c
> index 8d01e02ea4b1..eac9fa5c2b4d 100644
> --- a/lib/sbitmap.c
> +++ b/lib/sbitmap.c
> @@ -614,6 +614,26 @@ static inline void sbq_update_preemption(struct sbitmap_queue *sbq,
>   	WRITE_ONCE(sbq->force_tag_preemption, force);
>   }
>   
> +static unsigned int get_wake_nr(struct sbq_wait_state *ws, unsigned int nr_tags)

Consider renaming "get_wake_nr()" to "nr_to_wake_up()".

> +{
> +	struct sbq_wait *wait;
> +	struct wait_queue_entry *entry;
> +	unsigned int nr = 1;
> +
> +	spin_lock_irq(&ws->wait.lock);
> +	list_for_each_entry(entry, &ws->wait.head, entry) {
> +		wait = container_of(entry, struct sbq_wait, wait);
> +		if (nr_tags <= wait->nr_tags)
> +			break;
> +
> +		nr++;
> +		nr_tags -= wait->nr_tags;
> +	}
> +	spin_unlock_irq(&ws->wait.lock);
> +
> +	return nr;
> +}
> +
>   static bool __sbq_wake_up(struct sbitmap_queue *sbq)
>   {
>   	struct sbq_wait_state *ws;
> @@ -648,7 +668,7 @@ static bool __sbq_wake_up(struct sbitmap_queue *sbq)
>   	smp_mb__before_atomic();
>   	atomic_set(&ws->wait_cnt, wake_batch);
>   	sbq_update_preemption(sbq, wake_batch);
> -	wake_up_nr(&ws->wait, wake_batch);
> +	wake_up_nr(&ws->wait, get_wake_nr(ws, wake_batch));
>   
>   	return true;
>   }

ws->wait.lock is unlocked after the number of threads to wake up has 
been computed and is locked again by wake_up_nr(). The ws->wait.head 
list may be modified after get_wake_nr() returns and before wake_up_nr() 
is called. Isn't that a race condition?
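
One way to close that window would be to compute the count and issue the
wakeup while still holding ws->wait.lock, e.g. via __wake_up_locked(). A
minimal sketch of that idea (the helper name is made up for illustration
and is not part of the patch):

static void sbq_wake_nr_locked(struct sbq_wait_state *ws, unsigned int nr_tags)
{
	struct wait_queue_entry *entry;
	struct sbq_wait *wait;
	unsigned int nr = 1;

	spin_lock_irq(&ws->wait.lock);
	/* Same walk as get_wake_nr(): count waiters until nr_tags is used up. */
	list_for_each_entry(entry, &ws->wait.head, entry) {
		wait = container_of(entry, struct sbq_wait, wait);
		if (nr_tags <= wait->nr_tags)
			break;

		nr++;
		nr_tags -= wait->nr_tags;
	}
	/*
	 * Wake up while the lock is still held so that the list that was
	 * just walked is the same list that gets woken.
	 */
	__wake_up_locked(&ws->wait, TASK_NORMAL, nr);
	spin_unlock_irq(&ws->wait.lock);
}

That keeps the walk and the wakeup atomic with respect to modifications of
the wait list.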

Thanks,

Bart.

Thread overview: 28+ messages
2022-04-08  7:39 [PATCH -next RFC v2 0/8] improve tag allocation under heavy load Yu Kuai
2022-04-08  7:39 ` [PATCH -next RFC v2 1/8] sbitmap: record the number of waiters for each waitqueue Yu Kuai
2022-04-08  7:39 ` [PATCH -next RFC v2 2/8] blk-mq: call 'bt_wait_ptr()' later in blk_mq_get_tag() Yu Kuai
2022-04-08 14:20   ` Bart Van Assche
2022-04-09  2:09     ` yukuai (C)
2022-04-08  7:39 ` [PATCH -next RFC v2 3/8] sbitmap: make sure waitqueues are balanced Yu Kuai
2022-04-15  6:31   ` Li, Ming
2022-04-15  7:07     ` yukuai (C)
2022-04-08  7:39 ` [PATCH -next RFC v2 4/8] blk-mq: don't preempt tag under heavy load Yu Kuai
2022-04-08 14:24   ` Bart Van Assche
2022-04-09  2:38     ` yukuai (C)
2022-04-08  7:39 ` [PATCH -next RFC v2 5/8] sbitmap: force tag preemption if free tags are sufficient Yu Kuai
2022-04-08  7:39 ` [PATCH -next RFC v2 6/8] blk-mq: force tag preemption for split bios Yu Kuai
2022-04-08  7:39 ` [PATCH -next RFC v2 7/8] blk-mq: record how many tags are needed for splited bio Yu Kuai
2022-04-08  7:39 ` [PATCH -next RFC v2 8/8] sbitmap: wake up the number of threads based on required tags Yu Kuai
2022-04-08 14:31   ` Bart Van Assche [this message]
2022-04-09  2:19     ` yukuai (C)
2022-04-08 21:13   ` Bart Van Assche
2022-04-09  2:17     ` yukuai (C)
2022-04-09  4:16       ` Bart Van Assche
2022-04-09  7:01         ` yukuai (C)
2022-04-12  3:20           ` Bart Van Assche
2022-04-08 19:10 ` [PATCH -next RFC v2 0/8] improve tag allocation under heavy load Jens Axboe
2022-04-09  2:26   ` yukuai (C)
2022-04-09  2:28     ` Jens Axboe
2022-04-09  2:34       ` yukuai (C)
2022-04-09  7:14       ` yukuai (C)
2022-04-09 21:31       ` Bart Van Assche
