linux-kernel.vger.kernel.org archive mirror
From: Juergen Gross <jgross@suse.com>
To: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-block@vger.kernel.org, konrad.wilk@oracle.com,
	axboe@kernel.dk, boris.ostrovsky@oracle.com
Subject: Re: [PATCH 1/4] xen/blkback: don't keep persistent grants too long
Date: Tue, 7 Aug 2018 08:34:43 +0200	[thread overview]
Message-ID: <efbe9e36-5d56-c3d5-5260-7ffda9d00ab2@suse.com> (raw)
In-Reply-To: <20180806155852.7jvudjpzzq6fdp33@mac>

On 06/08/18 17:58, Roger Pau Monné wrote:
> On Mon, Aug 06, 2018 at 01:33:59PM +0200, Juergen Gross wrote:
>> Persistent grants are allocated until a per-ring threshold is
>> reached. Those grants are not freed until the ring is destroyed,
>> meaning resources are kept busy even when they are no longer in use.
>>
>> Instead of freeing persistent grants only after the threshold is
>> reached, add a timestamp and remove all persistent grants that have
>> not been in use for a minute.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>  drivers/block/xen-blkback/blkback.c | 77 +++++++++++++++++++++++--------------
>>  drivers/block/xen-blkback/common.h  |  1 +
>>  2 files changed, 50 insertions(+), 28 deletions(-)
> 
> You should document this new parameter in
> Documentation/ABI/testing/sysfs-driver-xen-blkback.

Yes.
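I'll add an entry along these lines (Date/KernelVersion are my guesses
and still to be confirmed):

```
What:		/sys/module/xen_blkback/parameters/persistent_grant_unused_seconds
Date:		August 2018
KernelVersion:	4.19
Contact:	Juergen Gross <jgross@suse.com>
Description:
		How long a persistent grant is allowed to remain
		allocated without being in use. The time is in
		seconds, 0 means indefinitely long.
```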

> 
>>
>> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
>> index b55b245e8052..485e3ecab144 100644
>> --- a/drivers/block/xen-blkback/blkback.c
>> +++ b/drivers/block/xen-blkback/blkback.c
>> @@ -84,6 +84,18 @@ MODULE_PARM_DESC(max_persistent_grants,
>>                   "Maximum number of grants to map persistently");
>>  
>>  /*
>> + * How long a persistent grant is allowed to remain allocated without being in
>> + * use. The time is in seconds, 0 means indefinitely long.
>> + */
>> +
>> +unsigned int xen_blkif_pgrant_timeout = 60;
>> +module_param_named(persistent_grant_unused_seconds, xen_blkif_pgrant_timeout,
>> +		   uint, 0644);
>> +MODULE_PARM_DESC(persistent_grant_unused_seconds,
>> +		 "Time in seconds an unused persistent grant is allowed to "
>> +		 "remain allocated. Default is 60, 0 means unlimited.");
>> +
>> +/*
>>   * Maximum number of rings/queues blkback supports, allow as many queues as there
>>   * are CPUs if user has not specified a value.
>>   */
>> @@ -123,6 +135,13 @@ module_param(log_stats, int, 0644);
>>  /* Number of free pages to remove on each call to gnttab_free_pages */
>>  #define NUM_BATCH_FREE_PAGES 10
>>  
>> +static inline bool persistent_gnt_timeout(struct persistent_gnt *persistent_gnt)
>> +{
>> +	return xen_blkif_pgrant_timeout &&
>> +	       (jiffies - persistent_gnt->last_used >=
>> +		HZ * xen_blkif_pgrant_timeout);
>> +}
>> +
>>  static inline int get_free_page(struct xen_blkif_ring *ring, struct page **page)
>>  {
>>  	unsigned long flags;
>> @@ -278,6 +297,7 @@ static void put_persistent_gnt(struct xen_blkif_ring *ring,
>>  {
>>  	if(!test_bit(PERSISTENT_GNT_ACTIVE, persistent_gnt->flags))
>>  		pr_alert_ratelimited("freeing a grant already unused\n");
>> +	persistent_gnt->last_used = jiffies;
>>  	set_bit(PERSISTENT_GNT_WAS_ACTIVE, persistent_gnt->flags);
>>  	clear_bit(PERSISTENT_GNT_ACTIVE, persistent_gnt->flags);
>>  	atomic_dec(&ring->persistent_gnt_in_use);
>> @@ -374,23 +394,23 @@ static void purge_persistent_gnt(struct xen_blkif_ring *ring)
>>  	bool scan_used = false, clean_used = false;
>>  	struct rb_root *root;
>>  
>> -	if (ring->persistent_gnt_c < xen_blkif_max_pgrants ||
>> -	    (ring->persistent_gnt_c == xen_blkif_max_pgrants &&
>> -	    !ring->blkif->vbd.overflow_max_grants)) {
>> -		goto out;
>> -	}
>> -
>>  	if (work_busy(&ring->persistent_purge_work)) {
>>  		pr_alert_ratelimited("Scheduled work from previous purge is still busy, cannot purge list\n");
>>  		goto out;
>>  	}
>>  
>> -	num_clean = (xen_blkif_max_pgrants / 100) * LRU_PERCENT_CLEAN;
>> -	num_clean = ring->persistent_gnt_c - xen_blkif_max_pgrants + num_clean;
>> -	num_clean = min(ring->persistent_gnt_c, num_clean);
>> -	if ((num_clean == 0) ||
>> -	    (num_clean > (ring->persistent_gnt_c - atomic_read(&ring->persistent_gnt_in_use))))
>> -		goto out;
>> +	if (ring->persistent_gnt_c < xen_blkif_max_pgrants ||
>> +	    (ring->persistent_gnt_c == xen_blkif_max_pgrants &&
>> +	    !ring->blkif->vbd.overflow_max_grants)) {
>> +		num_clean = 0;
>> +	} else {
>> +		num_clean = (xen_blkif_max_pgrants / 100) * LRU_PERCENT_CLEAN;
>> +		num_clean = ring->persistent_gnt_c - xen_blkif_max_pgrants +
>> +			    num_clean;
>> +		num_clean = min(ring->persistent_gnt_c, num_clean);
>> +		pr_debug("Going to purge at least %u persistent grants\n",
>> +			 num_clean);
>> +	}
>>  
>>  	/*
>>  	 * At this point, we can assure that there will be no calls
>> @@ -401,9 +421,7 @@ static void purge_persistent_gnt(struct xen_blkif_ring *ring)
>>           * number of grants.
>>  	 */
>>  
>> -	total = num_clean;
>> -
>> -	pr_debug("Going to purge %u persistent grants\n", num_clean);
>> +	total = 0;
>>  
>>  	BUG_ON(!list_empty(&ring->persistent_purge_list));
>>  	root = &ring->persistent_gnts;
>> @@ -419,39 +437,42 @@ static void purge_persistent_gnt(struct xen_blkif_ring *ring)
>>  
>>  		if (test_bit(PERSISTENT_GNT_ACTIVE, persistent_gnt->flags))
>>  			continue;
>> -		if (!scan_used &&
>> +		if (!scan_used && !persistent_gnt_timeout(persistent_gnt) &&
>>  		    (test_bit(PERSISTENT_GNT_WAS_ACTIVE, persistent_gnt->flags)))
> 
> If you store the jiffies of the time when the grant was last used it
> seems like we could get rid of the PERSISTENT_GNT_WAS_ACTIVE flag and
> instead use the per-grant jiffies and the jiffies from the last scan
> in order to decide which grants to remove?

True. This might make the control flow a little bit easier to
understand.


Juergen


Thread overview: 16+ messages
2018-08-06 11:33 [PATCH 0/4] xen/blk: persistent grant rework Juergen Gross
2018-08-06 11:33 ` [PATCH 1/4] xen/blkback: don't keep persistent grants too long Juergen Gross
2018-08-06 15:58   ` Roger Pau Monné
2018-08-07  6:34     ` Juergen Gross [this message]
2018-08-06 11:34 ` [PATCH] xen/blkfront: remove unused macros Juergen Gross
2018-08-06 11:36   ` Juergen Gross
2018-08-06 11:34 ` [PATCH 2/4] xen/blkfront: cleanup stale persistent grants Juergen Gross
2018-08-06 16:16   ` Roger Pau Monné
2018-08-07  6:31     ` Juergen Gross
2018-08-07 14:14       ` Roger Pau Monné
2018-08-07 15:56         ` Juergen Gross
2018-08-08  8:27           ` Roger Pau Monné
2018-08-06 11:34 ` [PATCH 3/4] xen/blkfront: reorder tests in xlblk_init() Juergen Gross
2018-08-06 16:18   ` Roger Pau Monné
2018-08-06 11:34 ` [PATCH 4/4] xen/blkback: remove unused pers_gnts_lock from struct xen_blkif_ring Juergen Gross
2018-08-06 16:20   ` Roger Pau Monné
