From: Chris Metcalf <cmetcalf@tilera.com>
To: Tejun Heo <tj@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	<linux-kernel@vger.kernel.org>, <linux-mm@kvack.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Frederic Weisbecker <fweisbec@gmail.com>,
	Cody P Schafer <cody@linux.vnet.ibm.com>
Subject: Re: [PATCH v7 2/2] mm: make lru_add_drain_all() selective
Date: Tue, 13 Aug 2013 19:44:55 -0400	[thread overview]
Message-ID: <520AC4F7.9090604@tilera.com> (raw)
In-Reply-To: <20130813232904.GJ28996@mtj.dyndns.org>

On 8/13/2013 7:29 PM, Tejun Heo wrote:
> It won't nest and doing it simultaneously won't buy anything, right?
> Wouldn't it be better to protect it with a mutex and define all
> necessary resources statically (yeah, cpumask is pain in the ass and I
> think we should un-deprecate cpumask_t for static use cases)?  Then,
> there'd be no allocation to worry about on the path.

Here's what lru_add_drain_all() looks like with a guarding mutex.
It's pretty much the same code complexity as the version that has to
allocate the cpumask, and the locking doesn't raise any real issues:
if we fail to get the lock, a drain is already in progress elsewhere,
so we can assume all is well and return immediately.

int lru_add_drain_all(void)
{
        static struct cpumask mask;
        static DEFINE_MUTEX(lock);
        int cpu, rc;

        if (!mutex_trylock(&lock))
                return 0;  /* already ongoing elsewhere */

        cpumask_clear(&mask);
        get_online_cpus();

        /*
         * Figure out which cpus need flushing.  It's OK if we race
         * with changes to the per-cpu lru pvecs, since it's no worse
         * than if we flushed all cpus, since a cpu could still end
         * up putting pages back on its pvec before we returned.
         * And this avoids interrupting other cpus unnecessarily.
         */
        for_each_online_cpu(cpu) {
                if (pagevec_count(&per_cpu(lru_add_pvec, cpu)) ||
                    pagevec_count(&per_cpu(lru_rotate_pvecs, cpu)) ||
                    pagevec_count(&per_cpu(lru_deactivate_pvecs, cpu)) ||
                    need_activate_page_drain(cpu))
                        cpumask_set_cpu(cpu, &mask);
        }

        rc = schedule_on_cpu_mask(lru_add_drain_per_cpu, &mask);

        put_online_cpus();
        mutex_unlock(&lock);
        return rc;
}
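For reference, schedule_on_cpu_mask() is the mask-taking variant of
schedule_on_each_cpu() proposed earlier in this series, and
lru_add_drain_per_cpu() is the existing mm/swap.c worker that just runs
lru_add_drain() on the local cpu.  A minimal sketch of what the helper
could look like, assuming it simply mirrors schedule_on_each_cpu()
restricted to the given mask (illustrative only, not the actual patch
text):

/*
 * Sketch only: assumes the helper behaves like schedule_on_each_cpu()
 * limited to @mask.  The caller (lru_add_drain_all() above) already
 * holds get_online_cpus(), so no hotplug locking is taken here.
 */
int schedule_on_cpu_mask(work_func_t func, const struct cpumask *mask)
{
        struct work_struct __percpu *works;
        int cpu;

        works = alloc_percpu(struct work_struct);
        if (!works)
                return -ENOMEM;

        /* Queue one work item on each requested cpu... */
        for_each_cpu(cpu, mask) {
                struct work_struct *work = per_cpu_ptr(works, cpu);

                INIT_WORK(work, func);
                schedule_work_on(cpu, work);
        }

        /* ...and wait for all of them to finish. */
        for_each_cpu(cpu, mask)
                flush_work(per_cpu_ptr(works, cpu));

        free_percpu(works);
        return 0;
}

With that shape, the static mutex above only serializes use of the
shared cpumask; the draining itself still runs on the selected cpus via
the workqueue.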

-- 
Chris Metcalf, Tilera Corp.
http://www.tilera.com

