* Remove last BKL leftover: lock_depth
@ 2011-05-06 21:05 Andi Kleen
  2011-05-06 21:05 ` [PATCH 1/4] BKL: Remove BKL references in mutex code Andi Kleen
                   ` (4 more replies)
  0 siblings, 5 replies; 8+ messages in thread
From: Andi Kleen @ 2011-05-06 21:05 UTC (permalink / raw)
  To: linux-kernel; +Cc: peterz, arnd, akpm

I noticed the BKL task->lock_depth field was still there, with various users.
None of them need it anymore because the BKL is gone.
Fix up all the users and then finally remove the field.

This patchkit only removes code.

-Andi


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH 1/4] BKL: Remove BKL references in mutex code
  2011-05-06 21:05 Remove last BKL leftover: lock_depth Andi Kleen
@ 2011-05-06 21:05 ` Andi Kleen
  2011-05-06 21:05 ` [PATCH 2/4] SCHED: Remove BKL accounting from schedstats Andi Kleen
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 8+ messages in thread
From: Andi Kleen @ 2011-05-06 21:05 UTC (permalink / raw)
  To: linux-kernel; +Cc: peterz, arnd, akpm, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

The mutex code still checks for the BKL, which is already gone.
Remove that check.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 kernel/mutex.c |    7 -------
 1 files changed, 0 insertions(+), 7 deletions(-)

diff --git a/kernel/mutex.c b/kernel/mutex.c
index c4195fa..62d0a06 100644
--- a/kernel/mutex.c
+++ b/kernel/mutex.c
@@ -163,13 +163,6 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		struct thread_info *owner;
 
 		/*
-		 * If we own the BKL, then don't spin. The owner of
-		 * the mutex might be waiting on us to release the BKL.
-		 */
-		if (unlikely(current->lock_depth >= 0))
-			break;
-
-		/*
 		 * If there's an owner, wait for it to either
 		 * release the lock or go to sleep.
 		 */
-- 
1.7.4.4


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [PATCH 2/4] SCHED: Remove BKL accounting from schedstats
  2011-05-06 21:05 Remove last BKL leftover: lock_depth Andi Kleen
  2011-05-06 21:05 ` [PATCH 1/4] BKL: Remove BKL references in mutex code Andi Kleen
@ 2011-05-06 21:05 ` Andi Kleen
  2011-05-07  8:16   ` Ingo Molnar
  2011-05-06 21:05 ` [PATCH 3/4] SCHED: Don't use BKL count for idle preempt count initialization Andi Kleen
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 8+ messages in thread
From: Andi Kleen @ 2011-05-06 21:05 UTC (permalink / raw)
  To: linux-kernel; +Cc: peterz, arnd, akpm, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Remove the BKL accounting from schedstats.

I removed the field from the debug file too; in theory
it could be kept as 0 for compatibility.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 include/linux/sched.h |    4 ----
 kernel/sched.c        |    6 ------
 kernel/sched_debug.c  |    3 ---
 3 files changed, 0 insertions(+), 13 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 18d63ce..53373bc 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -731,10 +731,6 @@ struct sched_info {
 	/* timestamps */
 	unsigned long long last_arrival,/* when we last ran on a cpu */
 			   last_queued;	/* when we were last queued to run */
-#ifdef CONFIG_SCHEDSTATS
-	/* BKL stats */
-	unsigned int bkl_count;
-#endif
 };
 #endif /* defined(CONFIG_SCHEDSTATS) || defined(CONFIG_TASK_DELAY_ACCT) */
 
diff --git a/kernel/sched.c b/kernel/sched.c
index 312f8b9..7fa5334 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -4025,12 +4025,6 @@ static inline void schedule_debug(struct task_struct *prev)
 	profile_hit(SCHED_PROFILING, __builtin_return_address(0));
 
 	schedstat_inc(this_rq(), sched_count);
-#ifdef CONFIG_SCHEDSTATS
-	if (unlikely(prev->lock_depth >= 0)) {
-		schedstat_inc(this_rq(), rq_sched_info.bkl_count);
-		schedstat_inc(prev, sched_info.bkl_count);
-	}
-#endif
 }
 
 static void put_prev_task(struct rq *rq, struct task_struct *prev)
diff --git a/kernel/sched_debug.c b/kernel/sched_debug.c
index 7bacd83..a14e399 100644
--- a/kernel/sched_debug.c
+++ b/kernel/sched_debug.c
@@ -296,9 +296,6 @@ static void print_cpu(struct seq_file *m, int cpu)
 	P(ttwu_count);
 	P(ttwu_local);
 
-	SEQ_printf(m, "  .%-30s: %d\n", "bkl_count",
-				rq->rq_sched_info.bkl_count);
-
 #undef P
 #undef P64
 #endif
-- 
1.7.4.4


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [PATCH 3/4] SCHED: Don't use BKL count for idle preempt count initialization
  2011-05-06 21:05 Remove last BKL leftover: lock_depth Andi Kleen
  2011-05-06 21:05 ` [PATCH 1/4] BKL: Remove BKL references in mutex code Andi Kleen
  2011-05-06 21:05 ` [PATCH 2/4] SCHED: Remove BKL accounting from schedstats Andi Kleen
@ 2011-05-06 21:05 ` Andi Kleen
  2011-05-06 21:05 ` [PATCH 4/4] Remove last BKL leftover: lock_depth Andi Kleen
  2011-05-06 21:11 ` Andrew Morton
  4 siblings, 0 replies; 8+ messages in thread
From: Andi Kleen @ 2011-05-06 21:05 UTC (permalink / raw)
  To: linux-kernel; +Cc: peterz, arnd, akpm, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

No need for that ever, as far as I can see. Just set it to 0.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 kernel/sched.c |    5 +----
 1 files changed, 1 insertions(+), 4 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 7fa5334..f70b2cc 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -5776,11 +5776,8 @@ void __cpuinit init_idle(struct task_struct *idle, int cpu)
 	raw_spin_unlock_irqrestore(&rq->lock, flags);
 
 	/* Set the preempt count _outside_ the spinlocks! */
-#if defined(CONFIG_PREEMPT)
-	task_thread_info(idle)->preempt_count = (idle->lock_depth >= 0);
-#else
 	task_thread_info(idle)->preempt_count = 0;
-#endif
+
 	/*
 	 * The idle tasks have their own, simple scheduling class:
 	 */
-- 
1.7.4.4


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [PATCH 4/4] Remove last BKL leftover: lock_depth
  2011-05-06 21:05 Remove last BKL leftover: lock_depth Andi Kleen
                   ` (2 preceding siblings ...)
  2011-05-06 21:05 ` [PATCH 3/4] SCHED: Don't use BKL count for idle preempt count initialization Andi Kleen
@ 2011-05-06 21:05 ` Andi Kleen
  2011-05-06 21:11 ` Andrew Morton
  4 siblings, 0 replies; 8+ messages in thread
From: Andi Kleen @ 2011-05-06 21:05 UTC (permalink / raw)
  To: linux-kernel; +Cc: peterz, arnd, akpm, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

This was the (hopefully) last BKL leftover: the per-task lock_depth
field.

Now that all users are gone, remove the task_struct lock_depth field.
Depending on the configuration this will unfortunately not actually
shrink task_struct, due to padding. It helps a bit on 32-bit, and on
64-bit it merely makes room for new bloat.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 include/linux/init_task.h |    1 -
 include/linux/sched.h     |    2 --
 kernel/fork.c             |    1 -
 3 files changed, 0 insertions(+), 4 deletions(-)

diff --git a/include/linux/init_task.h b/include/linux/init_task.h
index caa151f..689496b 100644
--- a/include/linux/init_task.h
+++ b/include/linux/init_task.h
@@ -134,7 +134,6 @@ extern struct cred init_cred;
 	.stack		= &init_thread_info,				\
 	.usage		= ATOMIC_INIT(2),				\
 	.flags		= PF_KTHREAD,					\
-	.lock_depth	= -1,						\
 	.prio		= MAX_PRIO-20,					\
 	.static_prio	= MAX_PRIO-20,					\
 	.normal_prio	= MAX_PRIO-20,					\
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 53373bc..006c31f 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1193,8 +1193,6 @@ struct task_struct {
 	unsigned int flags;	/* per process flags, defined below */
 	unsigned int ptrace;
 
-	int lock_depth;		/* BKL lock depth */
-
 #ifdef CONFIG_SMP
 #ifdef __ARCH_WANT_UNLOCKED_CTXSW
 	int oncpu;
diff --git a/kernel/fork.c b/kernel/fork.c
index e7548de..aca6287 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1103,7 +1103,6 @@ static struct task_struct *copy_process(unsigned long clone_flags,
 
 	posix_cpu_timers_init(p);
 
-	p->lock_depth = -1;		/* -1 = no lock */
 	do_posix_clock_monotonic_gettime(&p->start_time);
 	p->real_start_time = p->start_time;
 	monotonic_to_bootbased(&p->real_start_time);
-- 
1.7.4.4


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: Remove last BKL leftover: lock_depth
  2011-05-06 21:05 Remove last BKL leftover: lock_depth Andi Kleen
                   ` (3 preceding siblings ...)
  2011-05-06 21:05 ` [PATCH 4/4] Remove last BKL leftover: lock_depth Andi Kleen
@ 2011-05-06 21:11 ` Andrew Morton
  2011-05-06 21:19   ` Andi Kleen
  4 siblings, 1 reply; 8+ messages in thread
From: Andrew Morton @ 2011-05-06 21:11 UTC (permalink / raw)
  To: Andi Kleen; +Cc: linux-kernel, peterz, arnd

On Fri,  6 May 2011 14:05:24 -0700
Andi Kleen <andi@firstfloor.org> wrote:

> I noticed the BKL task->lock_depth field was still there, with various users.
> None of them need it anymore because the BKL is gone.
> Fix up all the users and then finally remove the field.
> 
> This patchkit only removes code.

Patches in Ingo's tree from Jon Corbet have done this removal.

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Remove last BKL leftover: lock_depth
  2011-05-06 21:11 ` Andrew Morton
@ 2011-05-06 21:19   ` Andi Kleen
  0 siblings, 0 replies; 8+ messages in thread
From: Andi Kleen @ 2011-05-06 21:19 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Andi Kleen, linux-kernel, peterz, arnd

On Fri, May 06, 2011 at 02:11:14PM -0700, Andrew Morton wrote:
> On Fri,  6 May 2011 14:05:24 -0700
> Andi Kleen <andi@firstfloor.org> wrote:
> 
> > I noticed the BKL task->lock_depth field was still there, with various users.
> > None of them need it anymore because the BKL is gone.
> > Fix up all the users and then finally remove the field.
> > 
> > This patchkit only removes code.
> 
> Patches in Ingo's tree from Jon Corbet have done this removal.

Ok, thanks.  I was wondering why no one had done it.

-Andi
-- 
ak@linux.intel.com -- Speaking for myself only.

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH 2/4] SCHED: Remove BKL accounting from schedstats
  2011-05-06 21:05 ` [PATCH 2/4] SCHED: Remove BKL accounting from schedstats Andi Kleen
@ 2011-05-07  8:16   ` Ingo Molnar
  0 siblings, 0 replies; 8+ messages in thread
From: Ingo Molnar @ 2011-05-07  8:16 UTC (permalink / raw)
  To: Andi Kleen; +Cc: linux-kernel, peterz, arnd, akpm, Andi Kleen, Thomas Gleixner


* Andi Kleen <andi@firstfloor.org> wrote:

> From: Andi Kleen <ak@linux.intel.com>
> 
> Remove the BKL accounting from schedstats.
> 
> I removed the field from the debug file too; in theory
> it could be kept as 0 for compatibility.
> 
> Signed-off-by: Andi Kleen <ak@linux.intel.com>
> ---
>  include/linux/sched.h |    4 ----
>  kernel/sched.c        |    6 ------
>  kernel/sched_debug.c  |    3 ---
>  3 files changed, 0 insertions(+), 13 deletions(-)

The thing is, and this is rather amazing: not a *single* patch in your patch 
series gets the patch title right ...

Andi, how hard is it for you to type 'git log kernel/sched.c' (and 'git log 
kernel/mutex.c', etc.) and see what the commit title convention the affected 
kernel subsystem has, and follow that convention, so that the maintainer does 
not have to edit every single patch of your series?

You are not a kernel newbie; you've been a kernel developer for how many years?

You are showing an amazing level of inattention, which makes me very reluctant 
to even look at your patches, all of which touch tricky pieces of code.

You badly need to improve the quality of your patches. You could start by
feeding them to a third party who knows how to submit patches upstream and
who can sign off on them and forward them to the right maintainers.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 8+ messages in thread

