* [git pull] device mapper changes for 4.18
@ 2018-06-04 15:32 Mike Snitzer
  2018-06-04 18:54 ` Linus Torvalds
  0 siblings, 1 reply; 16+ messages in thread
From: Mike Snitzer @ 2018-06-04 15:32 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: peterz, dm-devel, Mikulas Patocka, Alasdair G Kergon

Hi Linus,

I've based this pull request on linux-block because the new dm-writecache
target included in it needed to be updated to use the new
bioset_init/mempool_init interfaces.

The following changes since commit 04c4950d5b373ba712d928592e05e73510785bca:

  block: fixup bioset_integrity_create() call (2018-05-30 18:51:21 -0600)

are available in the Git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git tags/for-4.18/dm-changes

for you to fetch changes up to 1be8d9c3da01a08d1d92064abb91e4610115db11:

  dm: add writecache target (2018-05-31 08:11:42 -0400)

Please pull, thanks!
Mike

----------------------------------------------------------------
- Export 2 swait symbols for use by the new DM writecache target.

- Add DM writecache target that offers writeback caching to persistent
  memory or SSD.

- Small DM core change to give context in error message for why a DM
  table type transition wasn't allowed.

----------------------------------------------------------------
Mike Snitzer (1):
      dm: report which conflicting type caused error during table_load()

Mikulas Patocka (2):
      swait: export symbols __prepare_to_swait and __finish_swait
      dm: add writecache target

 Documentation/device-mapper/writecache.txt |   68 +
 drivers/md/Kconfig                         |   11 +
 drivers/md/Makefile                        |    1 +
 drivers/md/dm-ioctl.c                      |    3 +-
 drivers/md/dm-writecache.c                 | 2285 ++++++++++++++++++++++++++++
 kernel/sched/swait.c                       |    2 +
 6 files changed, 2369 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/device-mapper/writecache.txt
 create mode 100644 drivers/md/dm-writecache.c

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [git pull] device mapper changes for 4.18
  2018-06-04 15:32 [git pull] device mapper changes for 4.18 Mike Snitzer
@ 2018-06-04 18:54 ` Linus Torvalds
  2018-06-04 19:09   ` Mike Snitzer
  2018-06-04 19:09   ` [git pull] " Linus Torvalds
  0 siblings, 2 replies; 16+ messages in thread
From: Linus Torvalds @ 2018-06-04 18:54 UTC (permalink / raw)
  To: Mike Snitzer; +Cc: Peter Zijlstra, dm-devel, Mikulas Patocka, Alasdair G Kergon

On Mon, Jun 4, 2018 at 8:32 AM Mike Snitzer <snitzer@redhat.com> wrote:
>
> - Export 2 swait symbols for use by the new DM writecache target.

I am *EXTREMELY* unhappy with this.

The swait interface is pure and utter garbage, and I thought we
already agreed to just have a big comment telling people not to use
them.

That seems to not have happened.

The swait() interfaces are pure and utter garbage because they have
absolutely horrid semantics that are very different from normal
wait-queues, and there has never been any sign that the swait users
are actually valid.

In particular, existing users were using swait because of complete
garbage reasons, like the alleged "win" for KVM, which was just
because there was only ever one waiter anyway.

Is the new writecache usage really worth it?

Is it even *correct*?

As mentioned, swait semantics are completely buggy, with "swake_up()"
not at all approximating a normal wake-up. It only wakes *one* user,
so it's more like an exclusive waiter, except it ends up always
assuming that every waiter is exclusive without any actual marker for
that.

End result: the interface has some very subtle races, and I'm not at
all convinced that the new writecache code is aware of this.

In particular, I see what appears to be a bug in writecache_map(): it
does a writecache_wait_on_freelist(), but it doesn't actually
guarantee that it will then use the slot that it was woken up for (it
just does a "continue", which might instead do a
writecache_find_entry()).

So *another* thread that was waiting for a slot will now not be woken
up, and the thread that *was* woken up didn't actually use the
freelist entry that it was waiting for.

This is *EXACTLY* the kind of code that should not use swait lists.
It's buggy, and broken. And it probably works in 99% of all cases by
pure luck, so it's hard to debug too.

In other words: I'm not pulling this shit. I want people to be *VERY*
aware of how broken swait queues are. You are *not* to use them unless
you understand them, and I have yet to find a single person who does.

No way in hell are we exporting that shit.

                   Linus


* Re: [git pull] device mapper changes for 4.18
  2018-06-04 18:54 ` Linus Torvalds
@ 2018-06-04 19:09   ` Mike Snitzer
  2018-06-04 19:29     ` Linus Torvalds
  2018-06-11 19:41     ` [git pull v2] " Mike Snitzer
  2018-06-04 19:09   ` [git pull] " Linus Torvalds
  1 sibling, 2 replies; 16+ messages in thread
From: Mike Snitzer @ 2018-06-04 19:09 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Peter Zijlstra, dm-devel, Mikulas Patocka, Alasdair G Kergon

On Mon, Jun 04 2018 at  2:54pm -0400,
Linus Torvalds <torvalds@linux-foundation.org> wrote:

> On Mon, Jun 4, 2018 at 8:32 AM Mike Snitzer <snitzer@redhat.com> wrote:
> >
> > - Export 2 swait symbols for use by the new DM writecache target.
> 
> I am *EXTREMELY* unhappy with this.
> 
> The swait interface is pure and utter garbage, and I thought we
> already agreed to just have a big comment telling people not to use
> them.
> 
> That seems to not have happened.
> 
> The swait() interfaces are pure and utter garbage because they have
> absolutely horrid semantics that are very different from normal
> wait-queues, and there has never been any sign that the swait users
> are actually valid.
> 
> In particular, existing users were using swait because of complete
> garbage reasons, like the alleged "win" for KVM, which was just
> because there was only ever one waiter anyway.
> 
> Is the new writecache usage really worth it?
> 
> Is it even *correct*?
> 
> As mentioned, swait semantics are completely buggy, with "swake_up()"
> not at all approximating a normal wake-up. It only wakes *one* user,
> so it's more like an exclusive waiter, except it ends up always
> assuming that every waiter is exclusive without any actual marker for
> that.
> 
> End result: the interface has some very subtle races, and I'm not at
> all convinced that the new writecache code is aware of this.
> 
> In particular, I see what appears to be a bug in writecache_map(): it
> does a writecache_wait_on_freelist(), but it doesn't actually
> guarantee that it will then use the slot that it was woken up for (it
> just does a "continue", which might instead do a
> writecache_find_entry()).
> 
> So *another* thread that was waiting for a slot will now not be woken
> up, and the thread that *was* woken up didn't actually use the
> freelist entry that it was waiting for.
> 
> This is *EXACTLY* the kind of code that should not use swait lists.
> It's buggy, and broken. And it probably works in 99% of all cases by
> pure luck, so it's hard to debug too.
> 
> In other words: I'm not pulling this shit. I want people to be *VERY*
> aware of how broken swait queues are. You are *not* to use them unless
> you understand them, and I have yet to find a single person who does.
> 
> No way in hell are we exporting that shit.

Fair enough, we'll get it fixed up to use normal waitqueues for next
merge window.

Mikulas elected to use swait because of the very low latency nature of
layering on top of persistent memory.  Use of "simple waitqueues"
_seemed_ logical to me.

Apologies for not being aware of just how nasty swait is.

Wish there had been more notice that the code is _that_ subtle... obviously I
wouldn't have pestered Peter to try to prop up dm-writecache's (ab)use
of swait.

Mike


* Re: [git pull] device mapper changes for 4.18
  2018-06-04 18:54 ` Linus Torvalds
  2018-06-04 19:09   ` Mike Snitzer
@ 2018-06-04 19:09   ` Linus Torvalds
  1 sibling, 0 replies; 16+ messages in thread
From: Linus Torvalds @ 2018-06-04 19:09 UTC (permalink / raw)
  To: Mike Snitzer; +Cc: Peter Zijlstra, dm-devel, Mikulas Patocka, Alasdair G Kergon

On Mon, Jun 4, 2018 at 11:54 AM Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> The swait interface is pure and utter garbage, and I thought we
> already agreed to just have a big comment telling people not to use
> them.
>
> That seems to not have happened.

.. oh, actually it did, but it was apparently too subtle.

See commit 88796e7e5c45 ("sched/swait: Document it clearly that the
swait facilities are special and shouldn't be used"), which has pretty
strong language in the commit message, but the strong language never
really made it into the comments, which are much more subtle.

I added more strongly worded comments. Maybe somebody will even read them.

                Linus


* Re: [git pull] device mapper changes for 4.18
  2018-06-04 19:09   ` Mike Snitzer
@ 2018-06-04 19:29     ` Linus Torvalds
  2018-06-04 19:36       ` Peter Zijlstra
  2018-06-11 19:41     ` [git pull v2] " Mike Snitzer
  1 sibling, 1 reply; 16+ messages in thread
From: Linus Torvalds @ 2018-06-04 19:29 UTC (permalink / raw)
  To: Mike Snitzer; +Cc: Peter Zijlstra, dm-devel, Mikulas Patocka, Alasdair G Kergon

On Mon, Jun 4, 2018 at 12:09 PM Mike Snitzer <snitzer@redhat.com> wrote:
>
> Mikulas elected to use swait because of the very low latency nature of
> layering on top of persistent memory.  Use of "simple waitqueues"
> _seemed_ logical to me.

I know. It's actually the main reason I have an almost irrational
hatred of those interfaces. They _look_ so simple and obvious, and
they are very tempting to use. And then they have that very subtle
issue that the default wakeup is exclusive.

I've actually wanted to remove them entirely, but there are two
existing users (kvm and rcu), and the RCU one actually is a good user.
The kvm one is completely pointless, but I haven't had the energy to
just change it to use a direct task pointer, and I was hoping the kvm
people would do that themselves (because it should be both faster and
simpler than swait).

One option might be to rename them to be less tempting. Instead of
"swait" where the "s" stands for "simple" (which it isn't, because the
complexity is in the subtle semantics), we could perhaps write it out
as "specialized_wait". Make people actually write that "specialized"
word out, and maybe they'd have to be aware of just how subtle the
differences are to normal wait-queues.

Because those functions *are* smaller and can definitely be faster and
have lower latencies. So in *theory* they are perfectly fine, it's
just that they need a *lot* of careful thinking about before you use
them.

So the rules with swake lists are that you either have to

 (a) use "swake_up_all()" to wake up everybody

 (b) be *very* careful and guarantee that every single place that
sleeps on an swait queue will actually consume the resource that it
was waiting on - or wake up the next sleeper.

and usually people absolutely don't want to do (a), and then they get (b) wrong.

And when you get (b) wrong, you can end up with processes stuck
waiting on things even after they got released. But in *practice* it
almost never actually happens, particularly if you have some array of
resources - like that freelist - where once somebody gets a resource,
they'll do another wakeup when they release it, so if you have lots of
threads that fight for the resource, you'll also end up with lots of
wakeups. Even if some thread ends up being blocked when there are free
resources, _another_ thread will come in, pick up one of those free
resources, and then wake up the incorrectly blocked one when it is
done.

So it's actually really hard to see the bug in practice. You have to
have really bad luck to first hit that "don't wake up the next waiter,
because the waiter that you _did_ wake didn't need the resource after
all", and then you also have to stop allocating (and freeing) other
copies of that resource.

So the common case is that you never really see the problem as a
deadlock, but you _can_ see it as an odd blip that basically is "stop
handling requests for one thread, until another thread comes in and
starts doing requests, which then restarts the first thread".

And don't get me wrong - you can get the exact same problem with
regular wait-queues too, but then you have to explicitly say "I'm an
exclusive waiter" and violate the rules for exclusivity.  We've had
that, but then I blame the user, not the wait-queue interface itself.

             Linus


* Re: [git pull] device mapper changes for 4.18
  2018-06-04 19:29     ` Linus Torvalds
@ 2018-06-04 19:36       ` Peter Zijlstra
  2018-06-04 19:39         ` Linus Torvalds
  0 siblings, 1 reply; 16+ messages in thread
From: Peter Zijlstra @ 2018-06-04 19:36 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: dm-devel, Mikulas Patocka, Alasdair G Kergon, Mike Snitzer

On Mon, Jun 04, 2018 at 12:29:21PM -0700, Linus Torvalds wrote:
> On Mon, Jun 4, 2018 at 12:09 PM Mike Snitzer <snitzer@redhat.com> wrote:
> >
> > Mikulas elected to use swait because of the very low latency nature of
> > layering on top of persistent memory.  Use of "simple waitqueues"
> > _seemed_ logical to me.
> 
> I know. It's actually the main reason I have an almost irrational
> hatred of those interfaces. They _look_ so simple and obvious, and
> they are very tempting to use. And then they have that very subtle
> issue that the default wakeup is exclusive.

Would it help if we did s/swake_up/swake_up_one/g ?

Then there would not be an swake_up() to cause confusion.


* Re: [git pull] device mapper changes for 4.18
  2018-06-04 19:36       ` Peter Zijlstra
@ 2018-06-04 19:39         ` Linus Torvalds
  2018-06-04 19:58           ` Peter Zijlstra
  0 siblings, 1 reply; 16+ messages in thread
From: Linus Torvalds @ 2018-06-04 19:39 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: dm-devel, Mikulas Patocka, Alasdair G Kergon, Mike Snitzer

On Mon, Jun 4, 2018 at 12:37 PM Peter Zijlstra <peterz@infradead.org> wrote:
>
> Would it help if we did s/swake_up/swake_up_one/g ?
>
> Then there would not be an swake_up() to cause confusion.

Yes, I think that would already be a big improvement, forcing people
to be aware of the exclusive nature.

           Linus


* Re: [git pull] device mapper changes for 4.18
  2018-06-04 19:39         ` Linus Torvalds
@ 2018-06-04 19:58           ` Peter Zijlstra
  2018-06-04 20:40             ` Linus Torvalds
  2018-06-04 21:13             ` Mike Snitzer
  0 siblings, 2 replies; 16+ messages in thread
From: Peter Zijlstra @ 2018-06-04 19:58 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: dm-devel, Mikulas Patocka, Alasdair G Kergon, Mike Snitzer

On Mon, Jun 04, 2018 at 12:39:11PM -0700, Linus Torvalds wrote:
> On Mon, Jun 4, 2018 at 12:37 PM Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > Would it help if we did s/swake_up/swake_up_one/g ?
> >
> > Then there would not be an swake_up() to cause confusion.
> 
> Yes, I think that would already be a big improvement, forcing people
> to be aware of the exclusive nature.

The below will of course conflict with the merge request under
discussion. Also completely untested.

---
Subject: sched/swait: Unconfuse swake_up() vs wake_up()

Linus hates on swait because swake_up() behaves distinctly different
from wake_up(). Where the latter will wake all !exclusive waiters and a
single exclusive waiter, the former will wake but one waiter (everything
is exclusive).

To avoid easy confusion, rename swake_up() to swake_up_one():

  git grep -l "\<swake_up\>" | while read file;
  do
    sed -ie 's/\<swake_up\>/&_one/g' $file;
  done

Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/mips/kvm/mips.c         | 4 ++--
 arch/powerpc/kvm/book3s_hv.c | 4 ++--
 arch/s390/kvm/interrupt.c    | 2 +-
 arch/x86/kernel/kvm.c        | 2 +-
 arch/x86/kvm/lapic.c         | 2 +-
 include/linux/swait.h        | 2 +-
 kernel/rcu/srcutiny.c        | 2 +-
 kernel/rcu/tree.c            | 2 +-
 kernel/rcu/tree_exp.h        | 2 +-
 kernel/rcu/tree_plugin.h     | 6 +++---
 kernel/sched/swait.c         | 4 ++--
 virt/kvm/arm/arm.c           | 2 +-
 virt/kvm/arm/psci.c          | 2 +-
 virt/kvm/async_pf.c          | 2 +-
 virt/kvm/kvm_main.c          | 2 +-
 15 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index 0f725e9cee8f..612bc713c4a1 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -515,7 +515,7 @@ int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu,
 	dvcpu->arch.wait = 0;
 
 	if (swq_has_sleeper(&dvcpu->wq))
-		swake_up(&dvcpu->wq);
+		swake_up_one(&dvcpu->wq);
 
 	return 0;
 }
@@ -1204,7 +1204,7 @@ static void kvm_mips_comparecount_func(unsigned long data)
 
 	vcpu->arch.wait = 0;
 	if (swq_has_sleeper(&vcpu->wq))
-		swake_up(&vcpu->wq);
+		swake_up_one(&vcpu->wq);
 }
 
 /* low level hrtimer wake routine */
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 4d07fca5121c..f1bcf1875171 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -190,7 +190,7 @@ static void kvmppc_fast_vcpu_kick_hv(struct kvm_vcpu *vcpu)
 
 	wqp = kvm_arch_vcpu_wq(vcpu);
 	if (swq_has_sleeper(wqp)) {
-		swake_up(wqp);
+		swake_up_one(wqp);
 		++vcpu->stat.halt_wakeup;
 	}
 
@@ -3224,7 +3224,7 @@ static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 			kvmppc_start_thread(vcpu, vc);
 			trace_kvm_guest_enter(vcpu);
 		} else if (vc->vcore_state == VCORE_SLEEPING) {
-			swake_up(&vc->wq);
+			swake_up_one(&vc->wq);
 		}
 
 	}
diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
index 37d06e022238..f58113cbee4d 100644
--- a/arch/s390/kvm/interrupt.c
+++ b/arch/s390/kvm/interrupt.c
@@ -1145,7 +1145,7 @@ void kvm_s390_vcpu_wakeup(struct kvm_vcpu *vcpu)
 		 * yield-candidate.
 		 */
 		vcpu->preempted = true;
-		swake_up(&vcpu->wq);
+		swake_up_one(&vcpu->wq);
 		vcpu->stat.halt_wakeup++;
 	}
 	/*
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 5b2300b818af..db6ebe48d991 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -188,7 +188,7 @@ static void apf_task_wake_one(struct kvm_task_sleep_node *n)
 	if (n->halted)
 		smp_send_reschedule(n->cpu);
 	else if (swq_has_sleeper(&n->wq))
-		swake_up(&n->wq);
+		swake_up_one(&n->wq);
 }
 
 static void apf_task_wake_all(void)
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index b74c9c1405b9..a2f8c4c76d33 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -1379,7 +1379,7 @@ static void apic_timer_expired(struct kvm_lapic *apic)
 	 * using swait_active() is safe.
 	 */
 	if (swait_active(q))
-		swake_up(q);
+		swake_up_one(q);
 
 	if (apic_lvtt_tscdeadline(apic))
 		ktimer->expired_tscdeadline = ktimer->tscdeadline;
diff --git a/include/linux/swait.h b/include/linux/swait.h
index 84f9745365ff..6224bbb08cf5 100644
--- a/include/linux/swait.h
+++ b/include/linux/swait.h
@@ -145,7 +145,7 @@ static inline bool swq_has_sleeper(struct swait_queue_head *wq)
 	return swait_active(wq);
 }
 
-extern void swake_up(struct swait_queue_head *q);
+extern void swake_up_one(struct swait_queue_head *q);
 extern void swake_up_all(struct swait_queue_head *q);
 extern void swake_up_locked(struct swait_queue_head *q);
 
diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
index 622792abe41a..790876228824 100644
--- a/kernel/rcu/srcutiny.c
+++ b/kernel/rcu/srcutiny.c
@@ -110,7 +110,7 @@ void __srcu_read_unlock(struct srcu_struct *sp, int idx)
 
 	WRITE_ONCE(sp->srcu_lock_nesting[idx], newval);
 	if (!newval && READ_ONCE(sp->srcu_gp_waiting))
-		swake_up(&sp->srcu_wq);
+		swake_up_one(&sp->srcu_wq);
 }
 EXPORT_SYMBOL_GPL(__srcu_read_unlock);
 
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 4fccdfa25716..c5fbd74c8af0 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1727,7 +1727,7 @@ static void rcu_gp_kthread_wake(struct rcu_state *rsp)
 	    !READ_ONCE(rsp->gp_flags) ||
 	    !rsp->gp_kthread)
 		return;
-	swake_up(&rsp->gp_wq);
+	swake_up_one(&rsp->gp_wq);
 }
 
 /*
diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
index d40708e8c5d6..027a432a167d 100644
--- a/kernel/rcu/tree_exp.h
+++ b/kernel/rcu/tree_exp.h
@@ -212,7 +212,7 @@ static void __rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp,
 			raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 			if (wake) {
 				smp_mb(); /* EGP done before wake_up(). */
-				swake_up(&rsp->expedited_wq);
+				swake_up_one(&rsp->expedited_wq);
 			}
 			break;
 		}
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 7fd12039e512..1245610f689d 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1854,8 +1854,8 @@ static void __wake_nocb_leader(struct rcu_data *rdp, bool force,
 		WRITE_ONCE(rdp_leader->nocb_leader_sleep, false);
 		del_timer(&rdp->nocb_timer);
 		raw_spin_unlock_irqrestore(&rdp->nocb_lock, flags);
-		smp_mb(); /* ->nocb_leader_sleep before swake_up(). */
-		swake_up(&rdp_leader->nocb_wq);
+		smp_mb(); /* ->nocb_leader_sleep before swake_up_one(). */
+		swake_up_one(&rdp_leader->nocb_wq);
 	} else {
 		raw_spin_unlock_irqrestore(&rdp->nocb_lock, flags);
 	}
@@ -2176,7 +2176,7 @@ static void nocb_leader_wait(struct rcu_data *my_rdp)
 		raw_spin_unlock_irqrestore(&rdp->nocb_lock, flags);
 		if (rdp != my_rdp && tail == &rdp->nocb_follower_head) {
 			/* List was empty, so wake up the follower.  */
-			swake_up(&rdp->nocb_wq);
+			swake_up_one(&rdp->nocb_wq);
 		}
 	}
 
diff --git a/kernel/sched/swait.c b/kernel/sched/swait.c
index b6fb2c3b3ff7..e471a58f565c 100644
--- a/kernel/sched/swait.c
+++ b/kernel/sched/swait.c
@@ -32,7 +32,7 @@ void swake_up_locked(struct swait_queue_head *q)
 }
 EXPORT_SYMBOL(swake_up_locked);
 
-void swake_up(struct swait_queue_head *q)
+void swake_up_one(struct swait_queue_head *q)
 {
 	unsigned long flags;
 
@@ -40,7 +40,7 @@ void swake_up(struct swait_queue_head *q)
 	swake_up_locked(q);
 	raw_spin_unlock_irqrestore(&q->lock, flags);
 }
-EXPORT_SYMBOL(swake_up);
+EXPORT_SYMBOL(swake_up_one);
 
 /*
  * Does not allow usage from IRQ disabled, since we must be able to
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index a4c1b76240df..172e82b75e3f 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -586,7 +586,7 @@ void kvm_arm_resume_guest(struct kvm *kvm)
 
 	kvm_for_each_vcpu(i, vcpu, kvm) {
 		vcpu->arch.pause = false;
-		swake_up(kvm_arch_vcpu_wq(vcpu));
+		swake_up_one(kvm_arch_vcpu_wq(vcpu));
 	}
 }
 
diff --git a/virt/kvm/arm/psci.c b/virt/kvm/arm/psci.c
index c4762bef13c6..18effcb166ad 100644
--- a/virt/kvm/arm/psci.c
+++ b/virt/kvm/arm/psci.c
@@ -155,7 +155,7 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
 	smp_mb();		/* Make sure the above is visible */
 
 	wq = kvm_arch_vcpu_wq(vcpu);
-	swake_up(wq);
+	swake_up_one(wq);
 
 	return PSCI_RET_SUCCESS;
 }
diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
index 57bcb27dcf30..23c2519c5b32 100644
--- a/virt/kvm/async_pf.c
+++ b/virt/kvm/async_pf.c
@@ -107,7 +107,7 @@ static void async_pf_execute(struct work_struct *work)
 	trace_kvm_async_pf_completed(addr, gva);
 
 	if (swq_has_sleeper(&vcpu->wq))
-		swake_up(&vcpu->wq);
+		swake_up_one(&vcpu->wq);
 
 	mmput(mm);
 	kvm_put_kvm(vcpu->kvm);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index c7b2e927f699..8cbf1276ed2e 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2197,7 +2197,7 @@ bool kvm_vcpu_wake_up(struct kvm_vcpu *vcpu)
 
 	wqp = kvm_arch_vcpu_wq(vcpu);
 	if (swq_has_sleeper(wqp)) {
-		swake_up(wqp);
+		swake_up_one(wqp);
 		++vcpu->stat.halt_wakeup;
 		return true;
 	}


* Re: [git pull] device mapper changes for 4.18
  2018-06-04 19:58           ` Peter Zijlstra
@ 2018-06-04 20:40             ` Linus Torvalds
  2018-06-04 21:13             ` Mike Snitzer
  1 sibling, 0 replies; 16+ messages in thread
From: Linus Torvalds @ 2018-06-04 20:40 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: dm-devel, Mikulas Patocka, Alasdair G Kergon, Mike Snitzer

On Mon, Jun 4, 2018 at 12:59 PM Peter Zijlstra <peterz@infradead.org> wrote:
>
>
> The below will of course conflict with the merge request under
> discussion. Also completely untested.

Side note: it's not just swake_up() itself.

It's very much the "prepare_to_swait()" logic and the event waiting friends.

For a regular non-exclusive wait-queue, this is all reasonably simple.
Want to wait on an event? Go right ahead. You waiting on it doesn't
affect anybody else waiting on it.

For an swait()  queue, though? What are the semantics of two different
users doing something like

    swait_event_interruptible(wq, <event>);

when coupled with "swake_up_one()"?

The whole notion of "wait until this is true" is not an exclusive wait
per se. And being "interruptible" means that we definitely can be in
the situation that we added ourselves to the wait-queue, but then a
signal happened, so we're going to exit. But what if we were the *one*
thread who got woken up? We can't just return with an error code,
because then we've lost a wakeup.

We do actually have this issue for regular wait-queues too, and I
think we should try to clarify it there as well:

     wait_event_interruptible_exclusive()

is simply a pretty dangerous thing to do. But at least (again) it has
that "exclusive()" in the name, so _hopefully_ users think about it.
The danger is pretty explicit.

The rule for exclusive waits has to be that even if you're
interruptible, you have to first check the event you're waiting for -
because you *have* to take it if it's there.

Alternatively, if you're returning with -EINTR or something, and you
were woken up, you need to do wake_up_one() on the queue you were
waiting on, but now it's already hard to tell whether you were woken
up by the event, or by you being interruptible, so that's already more
complicated (but it's always safe - but might be bad for performance -
to wake up too much).
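
In rough kernel-style pseudocode (illustrative only; claim_resource()
is a placeholder, and a real implementation must check and claim the
resource under the queue's lock), the two safe shapes are:

```c
/* Pattern 1: even if interrupted, take the resource when it is there,
 * because we may have eaten the only wakeup. */
ret = wait_event_interruptible_exclusive(wq, claim_resource());
if (ret == -ERESTARTSYS && claim_resource())
	ret = 0;	/* keep the resource; report success */

/* Pattern 2: give the wakeup back on the error path.  We cannot tell
 * whether the event or the signal woke us, so this may over-wake,
 * which is safe but costs performance. */
ret = wait_event_interruptible_exclusive(wq, claim_resource());
if (ret == -ERESTARTSYS)
	wake_up(&wq);
```

Pattern 1 keeps the wakeup by consuming the resource; pattern 2 hands
it back and accepts the occasional spurious wake.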

So I really wish that all the swait interfaces had more of a red flag.
Because right now they all look simple, and they really aren't. They
are all equivalent to the really *complex* cases of the regular
wait-queues.

Again, the "normal" wait queue interfaces that look simple really
_are_ simple. It's simply safe to call

        wait_event{_interruptible}(wq, <event>)

and there are absolutely no subtle gotchas.  You're not affecting any
other waiters, and it's doing exactly what you think it's doing from
reading the code.

In contrast,

        swait_event(wq, <event>)

is much subtler. You're not just waiting on the event, you're also
potentially blocking some other waiter, because only one of you will
be woken up.

So maybe along with "swake_up()" -> "swake_up_one()", we should also
make all the "swait_event*()" functions be renamed as
"swait_event*_exclusive()". Same for the "prepare_to_swait()" etc
cases, add the "_exclusive()" there at the end to mark their special
semantics.

Then the "swait" functions would actually have the same semantics and
gotcha's as the (very specialized) subset of regular wait-queue event
functions.

I think that would cover my worries. Hmm?

                   Linus


* Re: [git pull] device mapper changes for 4.18
  2018-06-04 19:58           ` Peter Zijlstra
  2018-06-04 20:40             ` Linus Torvalds
@ 2018-06-04 21:13             ` Mike Snitzer
  2018-06-04 21:22               ` Linus Torvalds
  1 sibling, 1 reply; 16+ messages in thread
From: Mike Snitzer @ 2018-06-04 21:13 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Mikulas Patocka, dm-devel, Linus Torvalds, Alasdair G Kergon

On Mon, Jun 04 2018 at  3:58pm -0400,
Peter Zijlstra <peterz@infradead.org> wrote:

> On Mon, Jun 04, 2018 at 12:39:11PM -0700, Linus Torvalds wrote:
> > On Mon, Jun 4, 2018 at 12:37 PM Peter Zijlstra <peterz@infradead.org> wrote:
> > >
> > > Would it help if we did s/swake_up/swake_up_one/g ?
> > >
> > > Then there would not be an swake_up() to cause confusion.
> > 
> > Yes, I think that would already be a big improvement, forcing people
> > to be aware of the exclusive nature.
> 
> The below will of course conflict with the merge request under
> discussion. Also completely untested.

No worries there since I'll be resubmitting dm-writecache for 4.19.

(Mikulas would like to still use swait for the dm-writecache's endio
thread, since endio_thread_wait only has a single waiter.  I told him to
convert the other 2, benchmark it with swait still in the endio path,
then convert the one used in endio, benchmark with all converted, and
we'd revisit if there is a compelling performance difference.  But even then
I'm not sure I want DM on the list of swait consumers... to be
continued)


* Re: [git pull] device mapper changes for 4.18
  2018-06-04 21:13             ` Mike Snitzer
@ 2018-06-04 21:22               ` Linus Torvalds
  2018-06-04 21:53                 ` Mikulas Patocka
  0 siblings, 1 reply; 16+ messages in thread
From: Linus Torvalds @ 2018-06-04 21:22 UTC (permalink / raw)
  To: Mike Snitzer; +Cc: Peter Zijlstra, dm-devel, Mikulas Patocka, Alasdair G Kergon

On Mon, Jun 4, 2018 at 2:13 PM Mike Snitzer <snitzer@redhat.com> wrote:
>
> (Mikulas would like to still use swait for the dm-writecache's endio
> thread, since endio_thread_wait only has a single waiter.

If you already know it has a single waiter, please don't use a queue at all.

Just have the "struct task_struct *" in the waiter field, and use
"wake_up_process()". Use NULL for "no process".

That's *much* simpler than swait(), and is a well-accepted traditional
wake interface. It's also really really obvious.
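
That single-waiter pattern looks roughly like this (kernel-style
pseudocode sketch, not buildable on its own; endio_waiter and
endio_done() are made-up names, and real code needs care about the
memory ordering between publishing the task pointer and checking the
condition):

```c
/* One waiter, no queue: publish the task pointer, wake it directly. */
static struct task_struct *endio_waiter;	/* NULL: nobody waiting */

/* waiter side */
WRITE_ONCE(endio_waiter, current);
set_current_state(TASK_UNINTERRUPTIBLE);
while (!endio_done()) {
	schedule();
	set_current_state(TASK_UNINTERRUPTIBLE);
}
__set_current_state(TASK_RUNNING);
WRITE_ONCE(endio_waiter, NULL);

/* waker side, e.g. from the endio path */
struct task_struct *p = READ_ONCE(endio_waiter);
if (p)
	wake_up_process(p);	/* a spurious wakeup is harmless:
				 * the waiter loop rechecks the condition */
```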

The "there is only a single waiter" is *NOT* an excuse for using
swait. Quite the reverse. Using swait is stupid, slow, and complex.
And it generates code that is harder to understand.

And yes, the fact that KVM also made that completely idiotic choice in
their apic code is not an excuse either. I have no idea why they did
it either. It's stupid. In the kvm case, I think what happened was
that they had a historical wait-queue model, and they just didn't
realize that they always had just one waiter, so then they converted
a waitqueue to a swait queue.

But if you already know there is only ever one waiter, please don't do
that. Just avoid queues entirely.

                  Linus


* Re: [git pull] device mapper changes for 4.18
  2018-06-04 21:22               ` Linus Torvalds
@ 2018-06-04 21:53                 ` Mikulas Patocka
  2018-06-04 22:16                   ` Linus Torvalds
  0 siblings, 1 reply; 16+ messages in thread
From: Mikulas Patocka @ 2018-06-04 21:53 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Peter Zijlstra, dm-devel, Alasdair G Kergon, Mike Snitzer



On Mon, 4 Jun 2018, Linus Torvalds wrote:

> On Mon, Jun 4, 2018 at 2:13 PM Mike Snitzer <snitzer@redhat.com> wrote:
> >
> > (Mikulas would like to still use swait for the dm-writecache's endio
> > thread, since endio_thread_wait only has a single waiter.
> 
> If you already know it has a single waiter, please don't use a queue at all.
> 
> Just have the "struct task_struct *" in the waiter field, and use
> "wake_up_process()". Use NULL for "no process".

I'd be interested - does the kernel deal properly with spurious wake-up? - 
i.e. suppose that the kernel thread that I created is doing something else 
in a completely different subsystem - can I call wake_up_process on it? 
Could it confuse some unrelated code?

The commonly used synchronization primitives recheck the condition after 
wake-up, but it's hard to verify that the whole kernel does it.

> That's *much* simpler than swait(), and is a well-accepted traditional
> wake interface. It's also really really obvious.
> 
> The "there is only a single waiter" is *NOT* an excuse for using
> swait. Quite the reverse. Using swait is stupid, slow, and complex.
> And it generates code that is harder to understand.

It looked to me like the standard wait-queues suffer from feature creep 
(three flags, a high number of functions and macros, it even uses an 
indirect call to wake something up) - that's why I used swait.

> And yes, the fact that KVM also made that completely idiotic choice in
> their apic code is not an excuse either. I have no idea why they did
> it either. It's stupid. In the kvm case, I think what happened was
> that they had a historical wait-queue model, and they just didn't
> realize that they always had just one waiter, so then they converted
> a waitqueue to a swait queue.
> 
> But if you already know there is only ever one waiter, please don't do
> that. Just avoid queues entirely.
> 
>                   Linus

Mikulas


* Re: [git pull] device mapper changes for 4.18
  2018-06-04 21:53                 ` Mikulas Patocka
@ 2018-06-04 22:16                   ` Linus Torvalds
  2018-06-05  8:59                     ` Peter Zijlstra
  0 siblings, 1 reply; 16+ messages in thread
From: Linus Torvalds @ 2018-06-04 22:16 UTC (permalink / raw)
  To: Mikulas Patocka; +Cc: Peter Zijlstra, dm-devel, Alasdair G Kergon, Mike Snitzer

On Mon, Jun 4, 2018 at 2:53 PM Mikulas Patocka <mpatocka@redhat.com> wrote:
>
> I'd be interested - does the kernel deal properly with spurious wake-up? -
> i.e. suppose that the kernel thread that I created is doing something else
> in a completely different subsystem - can I call wake_up_process on it?
> Could it confuse some unrelated code?

We've always had that issue, and yes, we should handle it fine. Code
that doesn't handle it fine is broken, but I don't think we've ever
had that situation.

For example, a common case of "spurious" wakeups is when somebody adds
itself to a wait list, but then ends up doing other things (including
taking page faults because of user access etc). The wait-list is still
active, and events on the wait list will still wake people up, even if
they are sleeping on some *other* list too.

In fact, an example of spurious wakeups comes from just using regular
futexes. We send those locklessly, and you actually can get a futex
wakeup *after* you thought you removed yourself from the futex queue.

But that's actually only an example of the much more generic issue -
we've always supported having multiple sources of wakeups, so
"spurious" wakeups have always been a thing.

People are probably not so aware of it, because they've never been an
actual _problem_.

Why? Our sleep/wake model has never been that "I woke up, so what I
waited on must be done". Our sleep/wake model has always been one
where being woken up just means that you go back and repeat the
checks.

The whole "wait_event()" loop being the most core example of that
model, but that's actually not the *traditional* model. Our really
traditional model of waiting for something actually predates
wait_event(), and is an explicit loop like

    add_wait_queue(..);
    for (;;) {
        set_current_state(TASK_INTERRUPTIBLE);
        .. see if we need to sleep, exit if ok ..
        schedule();
    }
    remove_wait_queue(..);

so even pretty much from day #1, the whole notion of "spurious wake
events" is a non-issue.

(We did have a legacy "sleep_on()" interface back in the dark ages,
but even that was supposed to be used in a loop).

> The commonly used synchronization primitives recheck the condition after
> wake-up, but it's hard to verify that the whole kernel does it.

See above. We have those spurious wakeups already.

> It looked to me like the standard wait-queues suffer from feature creep
> (three flags, a high number of functions and macros, it even uses an
> indirect call to wake something up) - that's why I used swait.

I agree that the standard wait-queues have gotten much more complex
over the years. But apart from the wait entries being a bit big, they
actually should not perform badly.

The real problem with wait-queues is that because of their semantics,
you *can* end up walking the whole queue, waking up hundreds (or
thousands) of processes. That can be a latency issue for RT.

But the answer to that tends to be "don't do that then". If you have
wait-queues that can have thousands of entries, there's likely
something seriously wrong somewhere. We've had it, but it's very very
rare.

                        Linus


* Re: [git pull] device mapper changes for 4.18
  2018-06-04 22:16                   ` Linus Torvalds
@ 2018-06-05  8:59                     ` Peter Zijlstra
  2018-06-05 15:53                       ` Linus Torvalds
  0 siblings, 1 reply; 16+ messages in thread
From: Peter Zijlstra @ 2018-06-05  8:59 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: dm-devel, Mikulas Patocka, Alasdair G Kergon, Mike Snitzer

On Mon, Jun 04, 2018 at 03:16:31PM -0700, Linus Torvalds wrote:
> We've always had that issue, and yes, we should handle it fine. Code
> that doesn't handle it fine is broken, but I don't think we've ever
> had that situation.

We've had a whole bunch of broken. We fixed a pile of them a few
years back but I'm sure that if you look hard you can still find a few.

The one commit I can readily find is:

  91f65facba5a ("iommu/amd: Fix amd_iommu_free_device()")

But I'm fairly sure there was more..


* Re: [git pull] device mapper changes for 4.18
  2018-06-05  8:59                     ` Peter Zijlstra
@ 2018-06-05 15:53                       ` Linus Torvalds
  0 siblings, 0 replies; 16+ messages in thread
From: Linus Torvalds @ 2018-06-05 15:53 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: dm-devel, Mikulas Patocka, Alasdair G Kergon, Mike Snitzer

On Tue, Jun 5, 2018 at 1:59 AM Peter Zijlstra <peterz@infradead.org> wrote:
>
> We've had a whole bunch of broken. We fixed a pile of them a few
> years back but I'm sure that if you look hard you can still find a few.
>
> The one commit I can readily find is:
>
>   91f65facba5a ("iommu/amd: Fix amd_iommu_free_device()")

Ugh, ok, I stand corrected.

These things definitely are bugs and they aren't even because of some
"old model", because we've pretty much always had the "loop to wait"
rule.

Even going back to linux-0.01, when we had that "sleep_on()" primitive
that is really bad (because of fundamental races), we had code like
this:

    static inline void wait_on_buffer(struct buffer_head * bh)
    {
        cli();
        while (bh->b_lock)
                sleep_on(&bh->b_wait);
        sti();
    }

and the above wasn't some fluke that I searched for, they're all that
way (and by "all" I mean "the first three I looked at", which probably
means that if I had looked at a fourth, I'd have found somebody
violating the "loop on condition" rule ;)

               Linus


* [git pull v2] device mapper changes for 4.18
  2018-06-04 19:09   ` Mike Snitzer
  2018-06-04 19:29     ` Linus Torvalds
@ 2018-06-11 19:41     ` Mike Snitzer
  1 sibling, 0 replies; 16+ messages in thread
From: Mike Snitzer @ 2018-06-11 19:41 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Peter Zijlstra, dm-devel, Mikulas Patocka, Alasdair G Kergon

On Mon, Jun 04 2018 at  3:09pm -0400,
Mike Snitzer <snitzer@redhat.com> wrote:

> On Mon, Jun 04 2018 at  2:54pm -0400,
> Linus Torvalds <torvalds@linux-foundation.org> wrote:
> 
> > On Mon, Jun 4, 2018 at 8:32 AM Mike Snitzer <snitzer@redhat.com> wrote:
> > >
> > > - Export 2 swait symbols for use by the new DM writecache target.
> > 
> > I am *EXTREMELY* unhappy with this.
> > 
> > The swait interface is pure and utter garbage, and I thought we
> > already agreed to just have a big comment telling people not to use
> > them.
> > 
> > That seems to not have happened.
> > 
> > The swait() interfaces are pure and utter garbage because they have
> > absolutely horrid semantics that are very different from normal
> > wait-queues, and there has never been any sign that the swait users
> > are actually valid.
> > 
> > In particular, existing users were using swait because of complete
> > garbage reasons, like the alleged "win" for KVM, which was just
> > because there was only ever one waiter anyway.
> > 
> > Is the new writecache usage really worth it?
> > 
> > Is it even *correct*?
> > 
> > As mentioned, swait semantics are completely buggy, with "swake_up()"
> > not at all approximating a normal wake-up. It only wakes *one* user,
> > so it's more like an exclusive waiter, except it ends up always
> > assuming that every waiter is exclusive without any actual marker for
> > that.
> > 
> > End result: the interface has some very subtle races, and I'm not at
> > all convinced that the new writecache code is aware of this.
> > 
> > In particular, I see what appears to be a bug in writecache_map(): it
> > does a writecache_wait_on_freelist(), but it doesn't actually
> > guarantee that it will then use the slot that it was woken up for (it
> > just does a "continue", which might instead do a
> > writecache_find_entry().
> > 
> > So *another* thread that was waiting for a slot will now not be woken
> > up, and the thread that *was* woken up didn't actually use the
> > freelist entry that it was waiting for.
> > 
> > This is *EXACTLY* the kind of code that should not use swait lists.
> > It's buggy, and broken. And it probably works in 99% of all cases by
> > pure luck, so it's hard to debug too.
> > 
> > In other words: I'm not pulling this shit. I want people to be *VERY*
> > aware of how broken swait queues are. You are *not* to use them unless
> > you understand them, and I have yet to find a single person who does.
> > 
> > No way in hell are we exporting that shit.
> 
> Fair enough, we'll get it fixed up to use normal waitqueues for next
> merge window.

Hi Linus,

Mikulas refactored the dm-writecache target to use normal waitqueues
instead of swait.  He also switched to using wake_up_process() based on
your suggestion for the endio usecase.

As such dm-writecache is no longer using swait and has no need for the
2 swait symbols the original submission was looking to export.

I did say I'd resubmit for the next merge window but given how quickly
Mikulas was able to respond to your feedback I'd appreciate it if you'd
consider pulling these DM changes for 4.18.

To ease your review, it should be noted that the split-out patches from
Mikulas (with some tweaks from me) are available here to show the
topmost 3 incremental changes that were folded in since the last
swait-based submission:
https://git.kernel.org/pub/scm/linux/kernel/git/snitzer/linux.git/log/?h=dm-4.18-writecache-splitout

As you can see, I also rebased ontop of Jens' "for-linus" (which you
already pulled) because commit 2a2a4c510b7 ("dm: use
bioset_init_from_src() to copy bio_set") is critical for DM's stability
with 4.18.

All said, please pull, thanks!
Mike

The following changes since commit 2a2a4c510b761e800098992cac61281c86527e5d:

  dm: use bioset_init_from_src() to copy bio_set (2018-06-08 07:06:29 -0600)

are available in the Git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git tags/for-4.18/dm-changes-v2

for you to fetch changes up to 48debafe4f2feabcc99f8e2659e80557e3ca6b39:

  dm: add writecache target (2018-06-08 11:59:51 -0400)

----------------------------------------------------------------
- Adjust various DM structure members to improve alignment relative to
  4.18 block's mempool_t and bioset changes.

- Add DM writecache target that offers writeback caching to persistent
  memory or SSD.

- Small DM core error message change to give context for why a DM table
  type transition wasn't allowed.

----------------------------------------------------------------
Mike Snitzer (2):
      dm: report which conflicting type caused error during table_load()
      dm: adjust structure members to improve alignment

Mikulas Patocka (1):
      dm: add writecache target

 Documentation/device-mapper/writecache.txt |   68 +
 drivers/md/Kconfig                         |   11 +
 drivers/md/Makefile                        |    1 +
 drivers/md/dm-bio-prison-v1.c              |    2 +-
 drivers/md/dm-bio-prison-v2.c              |    2 +-
 drivers/md/dm-cache-target.c               |   61 +-
 drivers/md/dm-core.h                       |   38 +-
 drivers/md/dm-crypt.c                      |   26 +-
 drivers/md/dm-ioctl.c                      |    3 +-
 drivers/md/dm-kcopyd.c                     |    3 +-
 drivers/md/dm-region-hash.c                |   13 +-
 drivers/md/dm-thin.c                       |    5 +-
 drivers/md/dm-writecache.c                 | 2305 ++++++++++++++++++++++++++++
 drivers/md/dm-zoned-target.c               |    2 +-
 14 files changed, 2466 insertions(+), 74 deletions(-)
 create mode 100644 Documentation/device-mapper/writecache.txt
 create mode 100644 drivers/md/dm-writecache.c


end of thread, other threads:[~2018-06-11 19:41 UTC | newest]

Thread overview: 16+ messages
2018-06-04 15:32 [git pull] device mapper changes for 4.18 Mike Snitzer
2018-06-04 18:54 ` Linus Torvalds
2018-06-04 19:09   ` Mike Snitzer
2018-06-04 19:29     ` Linus Torvalds
2018-06-04 19:36       ` Peter Zijlstra
2018-06-04 19:39         ` Linus Torvalds
2018-06-04 19:58           ` Peter Zijlstra
2018-06-04 20:40             ` Linus Torvalds
2018-06-04 21:13             ` Mike Snitzer
2018-06-04 21:22               ` Linus Torvalds
2018-06-04 21:53                 ` Mikulas Patocka
2018-06-04 22:16                   ` Linus Torvalds
2018-06-05  8:59                     ` Peter Zijlstra
2018-06-05 15:53                       ` Linus Torvalds
2018-06-11 19:41     ` [git pull v2] " Mike Snitzer
2018-06-04 19:09   ` [git pull] " Linus Torvalds
