From: Boqun Feng <boqun.feng@gmail.com>
To: Byungchul Park <byungchul.park@lge.com>
Cc: linux-kernel@vger.kernel.org, torvalds@linux-foundation.org,
	damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org,
	adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org,
	mingo@redhat.com, peterz@infradead.org, will@kernel.org,
	tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org,
	sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com,
	johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu,
	willy@infradead.org, david@fromorbit.com, amir73il@gmail.com,
	gregkh@linuxfoundation.org, kernel-team@lge.com,
	linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org,
	minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com,
	sj@kernel.org, jglisse@redhat.com, dennis@kernel.org,
	cl@linux.com, penberg@kernel.org, rientjes@google.com,
	vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org,
	paolo.valente@linaro.org, josef@toxicpanda.com,
	linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk,
	jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com,
	hch@infradead.org, djwong@kernel.org,
	dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com,
	melissa.srw@gmail.com, hamohammed.sa@gmail.com,
	42.hyeyoo@gmail.com, chris.p.wilson@intel.com,
	gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com,
	longman@redhat.com
Subject: Re: [PATCH RFC v7 00/23] DEPT(Dependency Tracker)
Date: Thu, 19 Jan 2023 18:23:49 -0800	[thread overview]
Message-ID: <Y8n7NdFl9WEbGXH1@boqun-archlinux> (raw)
In-Reply-To: <1674179505-26987-1-git-send-email-byungchul.park@lge.com>

On Fri, Jan 20, 2023 at 10:51:45AM +0900, Byungchul Park wrote:
> Boqun wrote:
> > On Thu, Jan 19, 2023 at 01:33:58PM +0000, Matthew Wilcox wrote:
> > > On Thu, Jan 19, 2023 at 03:23:08PM +0900, Byungchul Park wrote:
> > > > Boqun wrote:
> > > > > * Looks like the DEPT dependency graph doesn't handle
> > > > > fair/unfair readers as lockdep currently does, which brings up
> > > > > the next question.
> > > > 
> > > > No. DEPT handles unfair reads better. It works based on waits and
> > > > events, so read_lock() is considered a potential wait on
> > > > write_unlock(), while write_lock() is considered a potential wait on
> > > > either write_unlock() or read_unlock(). DEPT handles that case
> > > > perfectly.
> > > > 
> > > > For fair reads (maybe you meant queued read locks), I think the case
> > > > should be handled the same way as a normal lock. I might be getting
> > > > it wrong; please let me know if I'm missing something.
> > > 
> > > From the lockdep/DEPT point of view, the question is whether:
> > > 
> > >	read_lock(A)
> > >	read_lock(A)
> > > 
> > > can deadlock if a writer comes in between the two acquisitions and
> > > sleeps waiting on A to be released.  A fair lock will block new
> > > readers when a writer is waiting, while an unfair lock will allow
> > > new readers even while a writer is waiting.
> > > 
> > 
> > To be more accurate, a fair reader will wait if there is a writer
> > waiting for another reader (fair or not) to unlock, while an unfair
> > reader won't.
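
Concretely, the recursive-read case Matthew described looks like this (a
hypothetical two-CPU interleaving with a made-up lock "l"; whether the
second read_lock() blocks is exactly the fair vs. unfair distinction):

	CPU0			CPU1
	--			--
	read_lock(&l);
				write_lock(&l);	/* queues, waits for CPU0 */
	read_lock(&l);

A fair reader on CPU0 waits behind the queued writer, which in turn waits
for CPU0's first reader: deadlock. An unfair reader takes the lock despite
the waiting writer, so no deadlock.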
> 
> What kind guys, both of you! Thanks.
> 
> I asked in order to check whether there are other subtle things besides
> this. Fortunately, I already understand what you both shared.
> 
> > In kernel there are read/write locks that can have both fair and unfair
> > readers (e.g. queued rwlock). Regarding deadlocks,
> > 
> > 	T0		T1		T2
> > 	--		--		--
> > 	fair_read_lock(A);
> > 			write_lock(B);
> > 					write_lock(A);
> > 	write_lock(B);
> > 			unfair_read_lock(A);
> 
> From DEPT's point of view (let me re-write the scenario):
> 
> 	T0		T1		T2
> 	--		--		--
> 	fair_read_lock(A);
> 			write_lock(B);
> 					write_lock(A);
> 	write_lock(B);
> 			unfair_read_lock(A);
> 	write_unlock(B);
> 	read_unlock(A);
> 			read_unlock(A);
> 			write_unlock(B);
> 					write_unlock(A);
> 
> T0: read_unlock(A) cannot happen if write_lock(B) is stuck because an
>     owner of B never does write_unlock(B) or read_unlock(B). In other
>     words:
> 
>       1. read_unlock(A) happening depends on write_unlock(B) happening.
>       2. read_unlock(A) happening depends on read_unlock(B) happening.
> 
> T1: write_unlock(B) cannot happen if unfair_read_lock(A) is stuck because
>     an owner of A never does write_unlock(A). In other words:
> 
>       3. write_unlock(B) happening depends on write_unlock(A) happening.
> 
> 1, 2 and 3 give the following dependencies:
> 
>     1. read_unlock(A) -> write_unlock(B)
>     2. read_unlock(A) -> read_unlock(B)
>     3. write_unlock(B) -> write_unlock(A)
> 
> There's no circular dependency, so it's safe. DEPT doesn't report this.
> 
> > the above is not a deadlock, since T1's unfair reader can "steal" the
> > lock. However, the following is a deadlock:
> > 
> > 	T0		T1		T2
> > 	--		--		--
> > 	unfair_read_lock(A);
> > 			write_lock(B);
> > 					write_lock(A);
> > 	write_lock(B);
> > 			fair_read_lock(A);
> > 
> > , since T1's fair reader will wait.
> 
> From DEPT's point of view (let me re-write the scenario):
> 
> 	T0		T1		T2
> 	--		--		--
> 	unfair_read_lock(A);
> 			write_lock(B);
> 					write_lock(A);
> 	write_lock(B);
> 			fair_read_lock(A);
> 	write_unlock(B);
> 	read_unlock(A);
> 			read_unlock(A);
> 			write_unlock(B);
> 					write_unlock(A);
> 
> T0: read_unlock(A) cannot happen if write_lock(B) is stuck because an
>     owner of B never does write_unlock(B) or read_unlock(B). In other
>     words:
> 
>       1. read_unlock(A) happening depends on write_unlock(B) happening.
>       2. read_unlock(A) happening depends on read_unlock(B) happening.
> 
> T1: write_unlock(B) cannot happen if fair_read_lock(A) is stuck because
>     an owner of A never does write_unlock(A) or read_unlock(A). In other
>     words:
> 
>       3. write_unlock(B) happening depends on write_unlock(A) happening.
>       4. write_unlock(B) happening depends on read_unlock(A) happening.
> 
> 1, 2, 3 and 4 give the following dependencies:
> 
>     1. read_unlock(A) -> write_unlock(B)
>     2. read_unlock(A) -> read_unlock(B)
>     3. write_unlock(B) -> write_unlock(A)
>     4. write_unlock(B) -> read_unlock(A)
> 
> With 1 and 4, there's a circular dependency, so DEPT definitely reports
> this as a problem.
> 
> REMIND: DEPT focuses on waits and events.
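
FWIW, the dependency reasoning above can be checked mechanically. Below is
a minimal userspace sketch (not DEPT's code; the node names are just the
four unlock events above) that encodes the "X depends on Y" edges and
searches for a cycle. Keeping edge 4 models the fair_read_lock() scenario
and finds a cycle; dropping it models the unfair_read_lock() scenario and
finds none.

	/* Build the dependency edges listed above and DFS for a cycle. */
	#include <stdio.h>

	enum node { RU_A, WU_A, RU_B, WU_B, NR_NODES };

	static const char *name[NR_NODES] = {
		"read_unlock(A)", "write_unlock(A)",
		"read_unlock(B)", "write_unlock(B)",
	};

	static int edge[NR_NODES][NR_NODES];	/* edge[x][y]: x depends on y */
	static int state[NR_NODES];		/* 0 = new, 1 = on stack, 2 = done */

	static int has_cycle(int n)
	{
		if (state[n] == 1)
			return 1;		/* back edge: cycle */
		if (state[n] == 2)
			return 0;
		state[n] = 1;
		for (int m = 0; m < NR_NODES; m++)
			if (edge[n][m] && has_cycle(m))
				return 1;
		state[n] = 2;
		return 0;
	}

	int main(void)
	{
		edge[RU_A][WU_B] = 1;	/* 1. read_unlock(A) -> write_unlock(B)  */
		edge[RU_A][RU_B] = 1;	/* 2. read_unlock(A) -> read_unlock(B)   */
		edge[WU_B][WU_A] = 1;	/* 3. write_unlock(B) -> write_unlock(A) */
		edge[WU_B][RU_A] = 1;	/* 4. write_unlock(B) -> read_unlock(A),
					 *    only in the fair_read_lock() case  */

		for (int n = 0; n < NR_NODES; n++) {
			if (has_cycle(n)) {
				printf("cycle via %s: deadlock\n", name[n]);
				return 0;
			}
		}
		printf("no cycle: no deadlock\n");
		return 0;
	}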

Do you have the test cases showing DEPT can detect this?
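
Something along the lines of the kthread sketch below could be a starting
point. It only mirrors the acquisition order of the second scenario; the
lock and thread names and the delays are made up, and whether T0's reader
actually takes the unfair path while T1's takes the fair one depends on
the context the readers run in, so treat it as the shape of a test rather
than a guaranteed reproducer.

	#include <linux/module.h>
	#include <linux/kthread.h>
	#include <linux/spinlock.h>
	#include <linux/delay.h>

	static DEFINE_RWLOCK(lock_a);
	static DEFINE_RWLOCK(lock_b);

	static int t0_fn(void *unused)
	{
		read_lock(&lock_a);	/* unfair_read_lock(A) in the scenario */
		mdelay(100);		/* give T1/T2 time to take their locks */
		write_lock(&lock_b);	/* spins: B is write-held by T1 */
		write_unlock(&lock_b);
		read_unlock(&lock_a);
		return 0;
	}

	static int t1_fn(void *unused)
	{
		write_lock(&lock_b);	/* write_lock(B) */
		mdelay(200);		/* let T2's writer queue on A first */
		read_lock(&lock_a);	/* fair_read_lock(A): waits behind T2 */
		read_unlock(&lock_a);
		write_unlock(&lock_b);
		return 0;
	}

	static int t2_fn(void *unused)
	{
		mdelay(100);
		write_lock(&lock_a);	/* spins: A is read-held by T0 */
		write_unlock(&lock_a);
		return 0;
	}

	static int __init rwlock_abba_demo_init(void)
	{
		kthread_run(t0_fn, NULL, "abba-demo-t0");
		kthread_run(t1_fn, NULL, "abba-demo-t1");
		kthread_run(t2_fn, NULL, "abba-demo-t2");
		return 0;
	}
	module_init(rwlock_abba_demo_init);
	MODULE_LICENSE("GPL");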

Regards,
Boqun

> 
> > FWIW, lockdep has been able to catch this (figuring out which is a
> > deadlock and which is not) for two years now, along with other trivial
> > deadlock detection for read/write locks. Needless to say, if
> > lib/locking-selftest.c were given a try, one could find this out on
> > one's own.
> > 
> > Regards,
> > Boqun
> > 
