From: Linus Torvalds <torvalds@linux-foundation.org>
To: Catalin Marinas <catalin.marinas@arm.com>
Cc: Andreas Gruenbacher <agruenba@redhat.com>,
	Paul Mackerras <paulus@ozlabs.org>,
	Alexander Viro <viro@zeniv.linux.org.uk>,
	Christoph Hellwig <hch@infradead.org>,
	"Darrick J. Wong" <djwong@kernel.org>, Jan Kara <jack@suse.cz>,
	Matthew Wilcox <willy@infradead.org>,
	cluster-devel <cluster-devel@redhat.com>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	ocfs2-devel@oss.oracle.com, kvm-ppc@vger.kernel.org,
	linux-btrfs <linux-btrfs@vger.kernel.org>,
	Tony Luck <tony.luck@intel.com>,
	Andy Lutomirski <luto@kernel.org>
Subject: Re: [PATCH v8 00/17] gfs2: Fix mmap + page fault deadlocks
Date: Fri, 29 Oct 2021 11:47:33 -0700	[thread overview]
Message-ID: <CAHk-=wgNV5Ka0yTssic0JbZEcO3wvoTC65budK88k4D-34v0xA@mail.gmail.com> (raw)
In-Reply-To: <YXw0a9n+/PLAcObB@arm.com>

On Fri, Oct 29, 2021 at 10:50 AM Catalin Marinas
<catalin.marinas@arm.com> wrote:
>
> First of all, a uaccess in interrupt should not force such signal as it
> had nothing to do with the interrupted context. I guess we can do an
> in_task() check in the fault handler.

Yeah. It ends up being similar to the thread flag in that you still
end up having to protect against NMI and other users of asynchronous
page faults.

So the suggestion was more of a "mindset" difference and modified
version of the task flag rather than anything fundamentally different.

> Second, is there a chance that we enter the fault-in loop with a SIGSEGV
> already pending? Maybe it's not a problem, we just bail out of the loop
> early and deliver the signal, though unrelated to the actual uaccess in
> the loop.

If we ever run in user space with a pending per-thread SIGSEGV, that
would already be a fairly bad bug. The intent of "force_sig()" is not
only to make sure you can't block the signal, but also that it targets
the particular thread that caused the problem: unlike other random
"send signal to process", a SIGSEGV caused by a bad memory access is
really local to that _thread_, not the signal thread group.

So somebody else sending a SIGSEGV asynchronously is actually very
different - it goes to the thread group (although you can specify
individual threads too - but once you do that you're already outside
of POSIX).

That said, the more I look at it, the more I think I was wrong. I
think the "we have a SIGSEGV pending" could act as the per-thread
flag, but the complexity of the signal handling is probably an
argument against it.

Not because a SIGSEGV could already be pending, but because so many
other situations could be pending.

In particular, the signal code won't send new signals to a thread if
that thread group is already exiting. So another thread may have
already started the exit and core dump sequence, and is in the process
of killing the shared signal threads, and if one of those threads is
now in the kernel and goes through the copy_from_user() dance, that
whole "thread group is exiting" will mean that the signal code won't
add a new SIGSEGV to the queue.

So the signal could conceptually be used as the flag to stop looping,
but it ends up being such a complicated flag that I think it's
probably not worth it after all. Even if it semantically would be
fairly nice to use pre-existing machinery.

Could it be worked around? Sure. That kernel loop probably has to
check for fatal_signal_pending() anyway, so it would all work even in
the presence of the above kinds of issues. But just the fact that I
went and looked at just how exciting the signal code is made me think
"ok, conceptually nice, but we take a lot of locks and we do a lot of
special things even in the 'simple' force_sig() case".

> Third is the sigcontext.pc presented to the signal handler. Normally for
> SIGSEGV it points to the address of a load/store instruction and a
> handler could disable MTE and restart from that point. With a syscall we
> don't want it to point to the syscall place as it shouldn't be restarted
> in case it copied something.

I think this is actually independent of the whole "how to return
errors". We'll still need to return an error from the system call,
even if we also have a signal pending.

                  Linus
