From: Steven Rostedt <rostedt@goodmis.org>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: peterz@infradead.org,
	Linus Torvalds <torvalds@linux-foundation.org>,
	LKML <linux-kernel@vger.kernel.org>,
	linux-arch <linux-arch@vger.kernel.org>,
	Paul McKenney <paulmck@kernel.org>,
	the arch/x86 maintainers <x86@kernel.org>,
	Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	Daniel Bristot de Oliveira <bristot@redhat.com>,
	Will Deacon <will@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linux-MM <linux-mm@kvack.org>,
	Russell King <linux@armlinux.org.uk>,
	Linux ARM <linux-arm-kernel@lists.infradead.org>,
	Chris Zankel <chris@zankel.net>,
	Max Filippov <jcmvbkbc@gmail.com>,
	linux-xtensa@linux-xtensa.org,
	Jani Nikula <jani.nikula@linux.intel.com>,
	Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>,
	David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>,
	intel-gfx <intel-gfx@lists.freedesktop.org>,
	dri-devel <dri-devel@lists.freedesktop.org>,
	Ard Biesheuvel <ardb@kernel.org>,
	Herbert Xu <herbert@gondor.apana.org.au>,
	Vineet Gupta <vgupta@synopsys.com>,
	"open list\:SYNOPSYS ARC ARCHITECTURE" 
	<linux-snps-arc@lists.infradead.org>,
	Arnd Bergmann <arnd@arndb.de>, Guo Ren <guoren@kernel.org>,
	linux-csky@vger.kernel.org, Michal Simek <monstr@monstr.eu>,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	linux-mips@vger.kernel.org, Nick Hu <nickhu@andestech.com>,
	Greentime Hu <green.hu@gmail.com>,
	Vincent Chen <deanbo422@gmail.com>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	Paul Mackerras <paulus@samba.org>,
	linuxppc-dev <linuxppc-dev@lists.ozlabs.org>,
	"David S. Miller" <davem@davemloft.net>,
	linux-sparc <sparclinux@vger.kernel.org>
Subject: Re: [patch RFC 00/15] mm/highmem: Provide a preemptible variant of kmap_atomic & friends
Date: Thu, 24 Sep 2020 08:32:41 -0400	[thread overview]
Message-ID: <20200924083241.314f2102@gandalf.local.home> (raw)
In-Reply-To: <871riracgf.fsf@nanos.tec.linutronix.de>

On Thu, 24 Sep 2020 08:57:52 +0200
Thomas Gleixner <tglx@linutronix.de> wrote:

> > Now as for migration disabled nesting, at least now we would have
> > groupings of this, and perhaps the theorists can handle that. I mean,
> > how is this much different that having a bunch of tasks blocked on a
> > mutex with the owner is pinned on a CPU?
> >
> > migrate_disable() is a BKL of pinning affinity.  
> 
> No. That's just wrong. preempt disable is a concurrency control,

I think you totally misunderstood what I was saying. The above wasn't about
comparing preempt_disable() to migrate_disable(). It was comparing
migrate_disable() to a chain of tasks blocked on mutexes where the top owner
runs with preempt_disable() set. Either way, you still have a bunch of tasks
that can't move to other CPUs.


> > If we only have local_lock() available (even on !RT), then it makes
> > the blocking in groups. At least this way you could grep for all the
> > different local_locks in the system and plug that into the algorithm
> > for WCS, just like one would with a bunch of mutexes.  
> 
> You cannot do that on RT at all where migrate disable is substituting
> preempt disable in spin and rw locks. The result would be the same as
> with a !RT kernel just with horribly bad performance.

Note that spinlocks and rwlocks already have a lock associated with them, so
why would it be any different on RT? I wasn't suggesting adding another lock
inside a spinlock (why would I recommend that?), nor was I recommending
blindly replacing migrate_disable() with local_lock(). I just meant: expose
local_lock(), but not migrate_disable().
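
To be concrete, this is the kind of usage I have in mind when I say "expose
local_lock()". Just a sketch; the structure and field names below are invented
for the example:

#include <linux/local_lock.h>
#include <linux/percpu.h>

/* Invented example data; only the local_lock() usage pattern matters. */
struct foo_pcpu {
	local_lock_t	lock;
	unsigned long	count;
};

static DEFINE_PER_CPU(struct foo_pcpu, foo_pcpu) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

static void foo_count_event(void)
{
	/*
	 * On !RT this disables preemption; on RT it takes the per-CPU
	 * lock, which only disables migration.  Either way the pinned
	 * section is tied to a named lock that can be grepped for.
	 */
	local_lock(&foo_pcpu.lock);
	this_cpu_inc(foo_pcpu.count);
	local_unlock(&foo_pcpu.lock);
}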

> 
> That means the stacking problem has to be solved anyway.
> 
> So why on earth do you want to create yet another special duct tape case
> for kamp_local() which proliferates inconsistency instead of aiming for
> consistency accross all preemption models?

The idea was to help with the scheduling issue.

Anyway, instead of blocking, what about keeping a per-CPU counter of
migrate-disabled tasks? When a task calls migrate_disable() while another task
on that CPU already has migration disabled, and the current task's affinity
allows more than one CPU, it first tries to migrate to another CPU.

This way migrate_disable() is less likely to end up with a bunch of tasks
piled up on one CPU, serialized by each task exiting its migrate_disable()
section.

Yes, there's more overhead, but it only kicks in when multiple tasks are in a
migrate-disabled section on the same CPU.
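
Roughly, the migrate_disable() side of that would look something like this.
Only a sketch to show the idea, not against any real tree; the
nr_migrate_disabled counter and the try_push_away() helper are made-up names:

#include <linux/percpu.h>
#include <linux/preempt.h>
#include <linux/sched.h>

/*
 * Sketch only: a per-CPU count of tasks currently inside a
 * migrate-disabled section.
 */
static DEFINE_PER_CPU(unsigned int, nr_migrate_disabled);

/* Hypothetical helper: ask the scheduler to move us elsewhere. */
static void try_push_away(struct task_struct *p);

void migrate_disable(void)
{
	preempt_disable();

	/*
	 * If another task on this CPU already has migration disabled
	 * and our affinity mask allows more than one CPU, try to move
	 * somewhere else before pinning ourselves here as well.
	 * (Nested migrate_disable() calls by the same task would have
	 * to be filtered out here.)
	 */
	if (this_cpu_read(nr_migrate_disabled) &&
	    current->nr_cpus_allowed > 1) {
		preempt_enable();
		try_push_away(current);
		preempt_disable();
	}

	this_cpu_inc(nr_migrate_disabled);
	/* ... the existing migrate_disable() bookkeeping ... */
	preempt_enable();
}

void migrate_enable(void)
{
	preempt_disable();
	/* ... the existing migrate_enable() bookkeeping ... */
	this_cpu_dec(nr_migrate_disabled);
	preempt_enable();
}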

-- Steve


Thread overview: 428+ messages
2020-09-19  9:17 [patch RFC 00/15] mm/highmem: Provide a preemptible variant of kmap_atomic & friends Thomas Gleixner
2020-09-19  9:17 ` [patch RFC 01/15] mm/highmem: Un-EXPORT __kmap_atomic_idx() Thomas Gleixner
2020-09-21  6:23   ` Christoph Hellwig
2020-09-19  9:17 ` [patch RFC 02/15] highmem: Provide generic variant of kmap_atomic* Thomas Gleixner
2020-09-21  6:28   ` Christoph Hellwig
2020-09-19  9:17 ` [patch RFC 03/15] x86/mm/highmem: Use generic kmap atomic implementation Thomas Gleixner
2020-09-19  9:17 ` [patch RFC 04/15] arc/mm/highmem: " Thomas Gleixner
2020-09-19  9:17 ` [patch RFC 05/15] ARM: highmem: Switch to generic kmap atomic Thomas Gleixner
2020-09-19  9:17 ` [patch RFC 06/15] csky/mm/highmem: " Thomas Gleixner
2020-09-23  0:05   ` Guo Ren
2020-09-19  9:17 ` [patch RFC 07/15] microblaze/mm/highmem: " Thomas Gleixner
2020-09-19  9:17 ` [patch RFC 08/15] mips/mm/highmem: " Thomas Gleixner
2020-09-19  9:18 ` [patch RFC 09/15] nds32/mm/highmem: " Thomas Gleixner
2020-09-19  9:18 ` [patch RFC 10/15] powerpc/mm/highmem: " Thomas Gleixner
2020-09-19  9:18 ` [patch RFC 11/15] sparc/mm/highmem: " Thomas Gleixner
2020-09-19  9:18 ` [patch RFC 12/15] xtensa/mm/highmem: " Thomas Gleixner
2020-09-19  9:18 ` [patch RFC 13/15] mm/highmem: Remove the old kmap_atomic cruft Thomas Gleixner
2020-09-19  9:18 ` [patch RFC 14/15] sched: highmem: Store temporary kmaps in task struct Thomas Gleixner
2020-09-19  9:18 ` [patch RFC 15/15] mm/highmem: Provide kmap_temporary* Thomas Gleixner
2020-09-19 10:03 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for mm/highmem: Provide a preemptible variant of kmap_atomic & friends Patchwork
2020-09-19 10:05 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2020-09-19 10:24 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2020-09-19 10:35 ` [patch RFC 00/15] " Daniel Vetter
2020-09-19 10:37   ` Daniel Vetter
2020-09-20  6:23     ` Thomas Gleixner
2020-09-20  8:23       ` Daniel Vetter
2020-09-20 17:24         ` Thomas Gleixner
2020-09-19 11:34 ` [Intel-gfx] ✓ Fi.CI.IGT: success for " Patchwork
2020-09-19 17:18 ` [patch RFC 00/15] " Linus Torvalds
2020-09-19 17:39   ` Matthew Wilcox
2020-09-19 19:13     ` Linus Torvalds
2020-09-21 19:58     ` Ira Weiny
2020-09-20  6:41   ` Thomas Gleixner
2020-09-20  8:49     ` Thomas Gleixner
2020-09-20 16:57       ` Linus Torvalds
2020-09-20 17:40         ` Thomas Gleixner
2020-09-20 17:42           ` Linus Torvalds
2020-09-20 17:58             ` Linus Torvalds
2020-09-21  7:39             ` Thomas Gleixner
2020-09-21 16:24               ` Linus Torvalds
2020-09-21 19:27                 ` Thomas Gleixner
2020-09-23  8:40                   ` peterz
2020-09-23 13:35                     ` Thomas Gleixner
2020-09-23 15:52                     ` Steven Rostedt
2020-09-23 20:55                       ` Thomas Gleixner
2020-09-23 21:12                         ` Steven Rostedt
2020-09-24  6:57                           ` Thomas Gleixner
2020-09-24 12:32                             ` Steven Rostedt [this message]
2020-09-24 12:42                               ` Peter Zijlstra
2020-09-24 13:51                                 ` Steven Rostedt
2020-09-24 13:58                                   ` Peter Zijlstra
2020-09-24 17:55                               ` Thomas Gleixner
2020-09-24 18:58                                 ` Steven Rostedt
2020-09-24  8:27                       ` peterz
2020-09-24 19:36                         ` Daniel Bristot de Oliveira
2020-09-23 10:19                   ` peterz
2020-09-23 12:33                     ` Thomas Gleixner
2020-09-23 14:33                   ` Thomas Gleixner
