From: Vlastimil Babka <vbabka@suse.cz>
To: David Rientjes <rientjes@google.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Greg Thelen <gthelen@google.com>,
	Minchan Kim <minchan@kernel.org>, Mel Gorman <mgorman@suse.de>,
	Michal Nazarewicz <mina86@mina86.com>,
	Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>,
	Christoph Lameter <cl@linux.com>, Rik van Riel <riel@redhat.com>
Subject: Re: [PATCH 03/10] mm, compaction: periodically drop lock and restore IRQs in scanners
Date: Tue, 10 Jun 2014 09:15:22 +0200	[thread overview]
Message-ID: <5396B08A.6090900@suse.cz> (raw)
In-Reply-To: <alpine.DEB.2.02.1406091656340.17705@chino.kir.corp.google.com>

On 06/10/2014 01:58 AM, David Rientjes wrote:
> On Mon, 9 Jun 2014, Vlastimil Babka wrote:
>
>> diff --git a/mm/compaction.c b/mm/compaction.c
>> index d37f4a8..e1a4283 100644
>> --- a/mm/compaction.c
>> +++ b/mm/compaction.c
>> @@ -185,54 +185,77 @@ static void update_pageblock_skip(struct compact_control *cc,
>>   }
>>   #endif /* CONFIG_COMPACTION */
>>
>> -enum compact_contended should_release_lock(spinlock_t *lock)
>> +/*
>> + * Compaction requires the taking of some coarse locks that are potentially
>> + * very heavily contended. For async compaction, back out if the lock cannot
>> + * be taken immediately. For sync compaction, spin on the lock if needed.
>> + *
>> + * Returns true if the lock is held
>> + * Returns false if the lock is not held and compaction should abort
>> + */
>> +static bool compact_trylock_irqsave(spinlock_t *lock,
>> +			unsigned long *flags, struct compact_control *cc)
>>   {
>> -	if (need_resched())
>> -		return COMPACT_CONTENDED_SCHED;
>> -	else if (spin_is_contended(lock))
>> -		return COMPACT_CONTENDED_LOCK;
>> -	else
>> -		return COMPACT_CONTENDED_NONE;
>> +	if (cc->mode == MIGRATE_ASYNC) {
>> +		if (!spin_trylock_irqsave(lock, *flags)) {
>> +			cc->contended = COMPACT_CONTENDED_LOCK;
>> +			return false;
>> +		}
>> +	} else {
>> +		spin_lock_irqsave(lock, *flags);
>> +	}
>> +
>> +	return true;
>>   }
>>
>>   /*
>>    * Compaction requires the taking of some coarse locks that are potentially
>> - * very heavily contended. Check if the process needs to be scheduled or
>> - * if the lock is contended. For async compaction, back out in the event
>> - * if contention is severe. For sync compaction, schedule.
>> + * very heavily contended. The lock should be periodically unlocked to avoid
>> + * having disabled IRQs for a long time, even when there is nobody waiting on
>> + * the lock. It might also be that allowing the IRQs will result in
>> + * need_resched() becoming true. If scheduling is needed, or somebody else
>> + * has taken the lock, async compaction aborts. Sync compaction schedules.
>> + * Either compaction type will also abort if a fatal signal is pending.
>> + * In either case if the lock was locked, it is dropped and not regained.
>>    *
>> - * Returns true if the lock is held.
>> - * Returns false if the lock is released and compaction should abort
>> + * Returns true if compaction should abort due to fatal signal pending, or
>> + *		async compaction due to lock contention or need to schedule
>> + * Returns false when compaction can continue (sync compaction might have
>> + *		scheduled)
>>    */
>> -static bool compact_checklock_irqsave(spinlock_t *lock, unsigned long *flags,
>> -				      bool locked, struct compact_control *cc)
>> +static bool compact_unlock_should_abort(spinlock_t *lock,
>> +		unsigned long flags, bool *locked, struct compact_control *cc)
>>   {
>> -	enum compact_contended contended = should_release_lock(lock);
>> +	if (*locked) {
>> +		spin_unlock_irqrestore(lock, flags);
>> +		*locked = false;
>> +	}
>>
>> -	if (contended) {
>> -		if (locked) {
>> -			spin_unlock_irqrestore(lock, *flags);
>> -			locked = false;
>> -		}
>> +	if (fatal_signal_pending(current)) {
>> +		cc->contended = COMPACT_CONTENDED_SCHED;
>> +		return true;
>> +	}
>>
>> -		/* async aborts if taking too long or contended */
>> -		if (cc->mode == MIGRATE_ASYNC) {
>> -			cc->contended = contended;
>> -			return false;
>> +	if (cc->mode == MIGRATE_ASYNC) {
>> +		if (need_resched()) {
>> +			cc->contended = COMPACT_CONTENDED_SCHED;
>> +			return true;
>>   		}
>> -
>> +		if (spin_is_locked(lock)) {
>> +			cc->contended = COMPACT_CONTENDED_LOCK;
>> +			return true;
>> +		}
>
> Any reason to abort here?  If we need to do compact_trylock_irqsave() on
> this lock again then we'll abort when we come to that point, but it seems
> pointless to abort early if the lock isn't actually needed anymore or it
> is dropped before trying to acquire it again.

If spin_is_locked() is true, somebody was most probably waiting for us to
unlock, so maybe we should back off. But I'm not sure whether that check
can actually succeed so early after the unlock.

>> +	} else {
>>   		cond_resched();
>>   	}
>>
>> -	if (!locked)
>> -		spin_lock_irqsave(lock, *flags);
>> -	return true;
>> +	return false;
>>   }
>>
>>   /*
>>    * Aside from avoiding lock contention, compaction also periodically checks
>>    * need_resched() and either schedules in sync compaction or aborts async
>> - * compaction. This is similar to what compact_checklock_irqsave() does, but
>> + * compaction. This is similar to what compact_unlock_should_abort() does, but
>>    * is used where no lock is concerned.
>>    *
>>    * Returns false when no scheduling was needed, or sync compaction scheduled.
>> @@ -291,6 +314,16 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
>>   		int isolated, i;
>>   		struct page *page = cursor;
>>
>> +		/*
>> +		 * Periodically drop the lock (if held) regardless of its
>> +		 * contention, to give chance to IRQs. Abort async compaction
>> +		 * if contended.
>> +		 */
>> +		if (!(blockpfn % SWAP_CLUSTER_MAX)
>> +		    && compact_unlock_should_abort(&cc->zone->lock, flags,
>> +								&locked, cc))
>> +			break;
>> +
>>   		nr_scanned++;
>>   		if (!pfn_valid_within(blockpfn))
>>   			goto isolate_fail;
>> @@ -308,8 +341,9 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
>>   		 * spin on the lock and we acquire the lock as late as
>>   		 * possible.
>>   		 */
>> -		locked = compact_checklock_irqsave(&cc->zone->lock, &flags,
>> -								locked, cc);
>> +		if (!locked)
>> +			locked = compact_trylock_irqsave(&cc->zone->lock,
>> +								&flags, cc);
>>   		if (!locked)
>>   			break;
>>
>> @@ -514,13 +548,15 @@ isolate_migratepages_range(struct zone *zone, struct compact_control *cc,
>>
>>   	/* Time to isolate some pages for migration */
>>   	for (; low_pfn < end_pfn; low_pfn++) {
>> -		/* give a chance to irqs before checking need_resched() */
>> -		if (locked && !(low_pfn % SWAP_CLUSTER_MAX)) {
>> -			if (should_release_lock(&zone->lru_lock)) {
>> -				spin_unlock_irqrestore(&zone->lru_lock, flags);
>> -				locked = false;
>> -			}
>> -		}
>> +		/*
>> +		 * Periodically drop the lock (if held) regardless of its
>> +		 * contention, to give chance to IRQs. Abort async compaction
>> +		 * if contended.
>> +		 */
>> +		if (!(low_pfn % SWAP_CLUSTER_MAX)
>> +		    && compact_unlock_should_abort(&zone->lru_lock, flags,
>> +								&locked, cc))
>> +			break;
>>
>>   		/*
>>   		 * migrate_pfn does not necessarily start aligned to a
>> @@ -622,10 +658,11 @@ isolate_migratepages_range(struct zone *zone, struct compact_control *cc,
>>   		    page_count(page) > page_mapcount(page))
>>   			continue;
>>
>> -		/* Check if it is ok to still hold the lock */
>> -		locked = compact_checklock_irqsave(&zone->lru_lock, &flags,
>> -								locked, cc);
>> -		if (!locked || fatal_signal_pending(current))
>> +		/* If the lock is not held, try to take it */
>> +		if (!locked)
>> +			locked = compact_trylock_irqsave(&zone->lru_lock,
>> +								&flags, cc);
>> +		if (!locked)
>>   			break;
>>
>>   		/* Recheck PageLRU and PageTransHuge under lock */

