* [PATCH v5 1/1] blktrace: switch trace spinlock to a raw spinlock
@ 2021-12-20 19:28 Wander Lairson Costa
  2021-12-20 19:38 ` Jens Axboe
  2021-12-20 20:50 ` Jens Axboe
From: Wander Lairson Costa @ 2021-12-20 19:28 UTC (permalink / raw)
  To: linux-kernel, Jens Axboe, Steven Rostedt, Ingo Molnar,
	open list:BLOCK LAYER
  Cc: Thomas Gleixner, Sebastian Andrzej Siewior, Wander Lairson Costa

The running_trace_lock protects running_trace_list and is acquired
within the tracepoint, which implies disabled preemption. A
spinlock_t-typed lock cannot be acquired with preemption disabled on
PREEMPT_RT because it becomes a sleeping lock.
The runtime of the tracepoint depends on the number of entries in
running_trace_list and has no upper limit. The blk-tracer is considered
debug code, and higher latencies here are okay.

Make running_trace_lock a raw_spinlock_t.

Signed-off-by: Wander Lairson Costa <wander@redhat.com>
---
 kernel/trace/blktrace.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
index 1183c88634aa..a86e022f7155 100644
--- a/kernel/trace/blktrace.c
+++ b/kernel/trace/blktrace.c
@@ -34,7 +34,7 @@ static struct trace_array *blk_tr;
 static bool blk_tracer_enabled __read_mostly;
 
 static LIST_HEAD(running_trace_list);
-static __cacheline_aligned_in_smp DEFINE_SPINLOCK(running_trace_lock);
+static __cacheline_aligned_in_smp DEFINE_RAW_SPINLOCK(running_trace_lock);
 
 /* Select an alternative, minimalistic output than the original one */
 #define TRACE_BLK_OPT_CLASSIC	0x1
@@ -121,12 +121,12 @@ static void trace_note_tsk(struct task_struct *tsk)
 	struct blk_trace *bt;
 
 	tsk->btrace_seq = blktrace_seq;
-	spin_lock_irqsave(&running_trace_lock, flags);
+	raw_spin_lock_irqsave(&running_trace_lock, flags);
 	list_for_each_entry(bt, &running_trace_list, running_list) {
 		trace_note(bt, tsk->pid, BLK_TN_PROCESS, tsk->comm,
 			   sizeof(tsk->comm), 0);
 	}
-	spin_unlock_irqrestore(&running_trace_lock, flags);
+	raw_spin_unlock_irqrestore(&running_trace_lock, flags);
 }
 
 static void trace_note_time(struct blk_trace *bt)
@@ -666,9 +666,9 @@ static int __blk_trace_startstop(struct request_queue *q, int start)
 			blktrace_seq++;
 			smp_mb();
 			bt->trace_state = Blktrace_running;
-			spin_lock_irq(&running_trace_lock);
+			raw_spin_lock_irq(&running_trace_lock);
 			list_add(&bt->running_list, &running_trace_list);
-			spin_unlock_irq(&running_trace_lock);
+			raw_spin_unlock_irq(&running_trace_lock);
 
 			trace_note_time(bt);
 			ret = 0;
@@ -676,9 +676,9 @@ static int __blk_trace_startstop(struct request_queue *q, int start)
 	} else {
 		if (bt->trace_state == Blktrace_running) {
 			bt->trace_state = Blktrace_stopped;
-			spin_lock_irq(&running_trace_lock);
+			raw_spin_lock_irq(&running_trace_lock);
 			list_del_init(&bt->running_list);
-			spin_unlock_irq(&running_trace_lock);
+			raw_spin_unlock_irq(&running_trace_lock);
 			relay_flush(bt->rchan);
 			ret = 0;
 		}
@@ -1608,9 +1608,9 @@ static int blk_trace_remove_queue(struct request_queue *q)
 
 	if (bt->trace_state == Blktrace_running) {
 		bt->trace_state = Blktrace_stopped;
-		spin_lock_irq(&running_trace_lock);
+		raw_spin_lock_irq(&running_trace_lock);
 		list_del_init(&bt->running_list);
-		spin_unlock_irq(&running_trace_lock);
+		raw_spin_unlock_irq(&running_trace_lock);
 		relay_flush(bt->rchan);
 	}
 
-- 
2.27.0
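
For readers less familiar with PREEMPT_RT locking, here is a minimal,
illustrative sketch of the pattern the patch relies on. Only the locking
primitives are real kernel API (the same ones the patch uses); the list,
lock name, and function are made up for illustration:

/*
 * Illustrative only, not part of the patch: taking a lock from
 * tracepoint context, where preemption is already disabled.
 */
#include <linux/list.h>
#include <linux/spinlock.h>

static LIST_HEAD(example_list);
static DEFINE_RAW_SPINLOCK(example_lock);  /* spins on RT and !RT alike */

/* Runs in tracepoint context, i.e. with preemption disabled. */
static void example_trace_hook(void)
{
        unsigned long flags;

        /*
         * Legal even on PREEMPT_RT, because example_lock is a
         * raw_spinlock_t. Were it a spinlock_t, it would be backed by an
         * rt_mutex on RT, and acquiring it here could sleep in atomic
         * context ("BUG: sleeping function called from invalid context").
         */
        raw_spin_lock_irqsave(&example_lock, flags);
        /* walk example_list; runtime bounded only by the list length */
        raw_spin_unlock_irqrestore(&example_lock, flags);
}

On a non-RT kernel, spinlock_t is a thin wrapper around raw_spinlock_t,
so the conversion does not change behavior there.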



* Re: [PATCH v5 1/1] blktrace: switch trace spinlock to a raw spinlock
  2021-12-20 19:28 [PATCH v5 1/1] blktrace: switch trace spinlock to a raw spinlock Wander Lairson Costa
@ 2021-12-20 19:38 ` Jens Axboe
  2021-12-20 19:49   ` Wander Costa
  2021-12-20 20:50 ` Jens Axboe
From: Jens Axboe @ 2021-12-20 19:38 UTC (permalink / raw)
  To: Wander Lairson Costa, linux-kernel, Steven Rostedt, Ingo Molnar,
	open list:BLOCK LAYER
  Cc: Thomas Gleixner, Sebastian Andrzej Siewior

On 12/20/21 12:28 PM, Wander Lairson Costa wrote:
> The running_trace_lock protects running_trace_list and is acquired
> within the tracepoint which implies disabled preemption. The spinlock_t
> typed lock can not be acquired with disabled preemption on PREEMPT_RT
> because it becomes a sleeping lock.
> The runtime of the tracepoint depends on the number of entries in
> running_trace_list and has no limit. The blk-tracer is considered debug
> code and higher latencies here are okay.

You didn't put a changelog in here. Was this one actually compiled? Was
it runtime tested?

-- 
Jens Axboe



* Re: [PATCH v5 1/1] blktrace: switch trace spinlock to a raw spinlock
  2021-12-20 19:38 ` Jens Axboe
@ 2021-12-20 19:49   ` Wander Costa
  2021-12-20 20:24     ` Jens Axboe
From: Wander Costa @ 2021-12-20 19:49 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Wander Lairson Costa, open list, Steven Rostedt, Ingo Molnar,
	open list:BLOCK LAYER, Thomas Gleixner,
	Sebastian Andrzej Siewior

On Mon, Dec 20, 2021 at 4:38 PM Jens Axboe <axboe@kernel.dk> wrote:
>
> On 12/20/21 12:28 PM, Wander Lairson Costa wrote:
> > The running_trace_lock protects running_trace_list and is acquired
> > within the tracepoint which implies disabled preemption. The spinlock_t
> > typed lock can not be acquired with disabled preemption on PREEMPT_RT
> > because it becomes a sleeping lock.
> > The runtime of the tracepoint depends on the number of entries in
> > running_trace_list and has no limit. The blk-tracer is considered debug
> > code and higher latencies here are okay.
>
> You didn't put a changelog in here. Was this one actually compiled? Was
> it runtime tested?

It looks like the changelog reached inboxes after the patch (at least
it did in mine). Would you like me to send a v6 in the hope that things
arrive in order?



* Re: [PATCH v5 1/1] blktrace: switch trace spinlock to a raw spinlock
  2021-12-20 19:49   ` Wander Costa
@ 2021-12-20 20:24     ` Jens Axboe
  2021-12-20 20:34       ` Wander Costa
From: Jens Axboe @ 2021-12-20 20:24 UTC (permalink / raw)
  To: Wander Costa
  Cc: Wander Lairson Costa, open list, Steven Rostedt, Ingo Molnar,
	open list:BLOCK LAYER, Thomas Gleixner,
	Sebastian Andrzej Siewior

On 12/20/21 12:49 PM, Wander Costa wrote:
> On Mon, Dec 20, 2021 at 4:38 PM Jens Axboe <axboe@kernel.dk> wrote:
>>
>> On 12/20/21 12:28 PM, Wander Lairson Costa wrote:
>>> The running_trace_lock protects running_trace_list and is acquired
>>> within the tracepoint which implies disabled preemption. The spinlock_t
>>> typed lock can not be acquired with disabled preemption on PREEMPT_RT
>>> because it becomes a sleeping lock.
>>> The runtime of the tracepoint depends on the number of entries in
>>> running_trace_list and has no limit. The blk-tracer is considered debug
>>> code and higher latencies here are okay.
>>
>> You didn't put a changelog in here. Was this one actually compiled? Was
>> it runtime tested?
> 
> It feels like the changelog reached the inboxes after patch (at least
> mine was so). Would you like that I send a v6 in the hope things
> arrive in order?

Not sure how you are sending them, but they don't appear to thread
properly. But the changelog in the cover letter isn't really a
changelog; it doesn't say what changed.

-- 
Jens Axboe



* Re: [PATCH v5 1/1] blktrace: switch trace spinlock to a raw spinlock
  2021-12-20 20:24     ` Jens Axboe
@ 2021-12-20 20:34       ` Wander Costa
  2021-12-20 20:37         ` Jens Axboe
From: Wander Costa @ 2021-12-20 20:34 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Wander Lairson Costa, open list, Steven Rostedt, Ingo Molnar,
	open list:BLOCK LAYER, Thomas Gleixner,
	Sebastian Andrzej Siewior

On Mon, Dec 20, 2021 at 5:24 PM Jens Axboe <axboe@kernel.dk> wrote:
>
> On 12/20/21 12:49 PM, Wander Costa wrote:
> > On Mon, Dec 20, 2021 at 4:38 PM Jens Axboe <axboe@kernel.dk> wrote:
> >>
> >> On 12/20/21 12:28 PM, Wander Lairson Costa wrote:
> >>> The running_trace_lock protects running_trace_list and is acquired
> >>> within the tracepoint which implies disabled preemption. The spinlock_t
> >>> typed lock can not be acquired with disabled preemption on PREEMPT_RT
> >>> because it becomes a sleeping lock.
> >>> The runtime of the tracepoint depends on the number of entries in
> >>> running_trace_list and has no limit. The blk-tracer is considered debug
> >>> code and higher latencies here are okay.
> >>
> >> You didn't put a changelog in here. Was this one actually compiled? Was
> >> it runtime tested?
> >
> > It feels like the changelog reached the inboxes after patch (at least
> > mine was so). Would you like that I send a v6 in the hope things
> > arrive in order?
>
> Not sure how you are sending them, but they don't appear to thread
> properly. But the changelog in the cover letter isn't really a
> changelog, it doesn't say what changed.
>

Sorry, I think I was too brief in my explanation. I am backporting
this patch to the RHEL 9 kernel (which runs kernel 5.14). I mistakenly
generated the v4 patch from that tree, so it lacks this hunk:
@@ -1608,9 +1608,9 @@ static int blk_trace_remove_queue(struct request_queue *q)

        if (bt->trace_state == Blktrace_running) {
                bt->trace_state = Blktrace_stopped;
-               spin_lock_irq(&running_trace_lock);
+               raw_spin_lock_irq(&running_trace_lock);
                list_del_init(&bt->running_list);
-               spin_unlock_irq(&running_trace_lock);
+               raw_spin_unlock_irq(&running_trace_lock);
                relay_flush(bt->rchan);
        }

That missing hunk caused the build error; v5 adds it. Sorry again for
the confusion.

> --
> Jens Axboe
>



* Re: [PATCH v5 1/1] blktrace: switch trace spinlock to a raw spinlock
  2021-12-20 20:34       ` Wander Costa
@ 2021-12-20 20:37         ` Jens Axboe
  2021-12-20 20:43           ` Wander Costa
From: Jens Axboe @ 2021-12-20 20:37 UTC (permalink / raw)
  To: Wander Costa
  Cc: Wander Lairson Costa, open list, Steven Rostedt, Ingo Molnar,
	open list:BLOCK LAYER, Thomas Gleixner,
	Sebastian Andrzej Siewior

On 12/20/21 1:34 PM, Wander Costa wrote:
> On Mon, Dec 20, 2021 at 5:24 PM Jens Axboe <axboe@kernel.dk> wrote:
>>
>> On 12/20/21 12:49 PM, Wander Costa wrote:
>>> On Mon, Dec 20, 2021 at 4:38 PM Jens Axboe <axboe@kernel.dk> wrote:
>>>>
>>>> On 12/20/21 12:28 PM, Wander Lairson Costa wrote:
>>>>> The running_trace_lock protects running_trace_list and is acquired
>>>>> within the tracepoint which implies disabled preemption. The spinlock_t
>>>>> typed lock can not be acquired with disabled preemption on PREEMPT_RT
>>>>> because it becomes a sleeping lock.
>>>>> The runtime of the tracepoint depends on the number of entries in
>>>>> running_trace_list and has no limit. The blk-tracer is considered debug
>>>>> code and higher latencies here are okay.
>>>>
>>>> You didn't put a changelog in here. Was this one actually compiled? Was
>>>> it runtime tested?
>>>
>>> It feels like the changelog reached the inboxes after patch (at least
>>> mine was so). Would you like that I send a v6 in the hope things
>>> arrive in order?
>>
>> Not sure how you are sending them, but they don't appear to thread
>> properly. But the changelog in the cover letter isn't really a
>> changelog, it doesn't say what changed.
>>
> 
> Sorry, I think I was too brief in my explanation. I am backporting
> this patch to the RHEL 9 kernel (which runs kernel 5.14). I mistakenly
> generated the v4 patch from that tree, but it lacks this piece
> 
> @@ -1608,9 +1608,9 @@ static int blk_trace_remove_queue(struct request_queue *q)
> 
>         if (bt->trace_state == Blktrace_running) {
>                 bt->trace_state = Blktrace_stopped;
> -               spin_lock_irq(&running_trace_lock);
> +               raw_spin_lock_irq(&running_trace_lock);
>                 list_del_init(&bt->running_list);
> -               spin_unlock_irq(&running_trace_lock);
> +               raw_spin_unlock_irq(&running_trace_lock);
>                 relay_flush(bt->rchan);
>         }
> 
> Causing the build error. v5 adds that. Sorry again for the confusion.

Right, that's why I asked if a) you had even built this patch, and b) if
you had tested it as well.

-- 
Jens Axboe



* Re: [PATCH v5 1/1] blktrace: switch trace spinlock to a raw spinlock
  2021-12-20 20:37         ` Jens Axboe
@ 2021-12-20 20:43           ` Wander Costa
  2021-12-20 20:49             ` Jens Axboe
From: Wander Costa @ 2021-12-20 20:43 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Wander Lairson Costa, open list, Steven Rostedt, Ingo Molnar,
	open list:BLOCK LAYER, Thomas Gleixner,
	Sebastian Andrzej Siewior

On Mon, Dec 20, 2021 at 5:37 PM Jens Axboe <axboe@kernel.dk> wrote:
>
> On 12/20/21 1:34 PM, Wander Costa wrote:
> > On Mon, Dec 20, 2021 at 5:24 PM Jens Axboe <axboe@kernel.dk> wrote:
> >>
> >> On 12/20/21 12:49 PM, Wander Costa wrote:
> >>> On Mon, Dec 20, 2021 at 4:38 PM Jens Axboe <axboe@kernel.dk> wrote:
> >>>>
> >>>> On 12/20/21 12:28 PM, Wander Lairson Costa wrote:
> >>>>> The running_trace_lock protects running_trace_list and is acquired
> >>>>> within the tracepoint which implies disabled preemption. The spinlock_t
> >>>>> typed lock can not be acquired with disabled preemption on PREEMPT_RT
> >>>>> because it becomes a sleeping lock.
> >>>>> The runtime of the tracepoint depends on the number of entries in
> >>>>> running_trace_list and has no limit. The blk-tracer is considered debug
> >>>>> code and higher latencies here are okay.
> >>>>
> >>>> You didn't put a changelog in here. Was this one actually compiled? Was
> >>>> it runtime tested?
> >>>
> >>> It feels like the changelog reached the inboxes after patch (at least
> >>> mine was so). Would you like that I send a v6 in the hope things
> >>> arrive in order?
> >>
> >> Not sure how you are sending them, but they don't appear to thread
> >> properly. But the changelog in the cover letter isn't really a
> >> changelog, it doesn't say what changed.
> >>
> >
> > Sorry, I think I was too brief in my explanation. I am backporting
> > this patch to the RHEL 9 kernel (which runs kernel 5.14). I mistakenly
> > generated the v4 patch from that tree, but it lacks this piece
> >
> > @@ -1608,9 +1608,9 @@ static int blk_trace_remove_queue(struct request_queue *q)
> >
> >         if (bt->trace_state == Blktrace_running) {
> >                 bt->trace_state = Blktrace_stopped;
> > -               spin_lock_irq(&running_trace_lock);
> > +               raw_spin_lock_irq(&running_trace_lock);
> >                 list_del_init(&bt->running_list);
> > -               spin_unlock_irq(&running_trace_lock);
> > +               raw_spin_unlock_irq(&running_trace_lock);
> >                 relay_flush(bt->rchan);
> >         }
> >
> > Causing the build error. v5 adds that. Sorry again for the confusion.
>
> Right, that's why I asked if a) you had even built this patch, and b) if
> you had tested it as well.
>

Yes, I had. But I had two versions of it: one for RHEL and one for
torvalds/master. I just picked the wrong branch when generating the
patch. I apologize for the mess once more.

> --
> Jens Axboe
>



* Re: [PATCH v5 1/1] blktrace: switch trace spinlock to a raw spinlock
  2021-12-20 20:43           ` Wander Costa
@ 2021-12-20 20:49             ` Jens Axboe
  2021-12-20 23:30               ` Steven Rostedt
From: Jens Axboe @ 2021-12-20 20:49 UTC (permalink / raw)
  To: Wander Costa
  Cc: Wander Lairson Costa, open list, Steven Rostedt, Ingo Molnar,
	open list:BLOCK LAYER, Thomas Gleixner,
	Sebastian Andrzej Siewior

On 12/20/21 1:43 PM, Wander Costa wrote:
> On Mon, Dec 20, 2021 at 5:37 PM Jens Axboe <axboe@kernel.dk> wrote:
>>
>> On 12/20/21 1:34 PM, Wander Costa wrote:
>>> On Mon, Dec 20, 2021 at 5:24 PM Jens Axboe <axboe@kernel.dk> wrote:
>>>>
>>>> On 12/20/21 12:49 PM, Wander Costa wrote:
>>>>> On Mon, Dec 20, 2021 at 4:38 PM Jens Axboe <axboe@kernel.dk> wrote:
>>>>>>
>>>>>> On 12/20/21 12:28 PM, Wander Lairson Costa wrote:
>>>>>>> The running_trace_lock protects running_trace_list and is acquired
>>>>>>> within the tracepoint which implies disabled preemption. The spinlock_t
>>>>>>> typed lock can not be acquired with disabled preemption on PREEMPT_RT
>>>>>>> because it becomes a sleeping lock.
>>>>>>> The runtime of the tracepoint depends on the number of entries in
>>>>>>> running_trace_list and has no limit. The blk-tracer is considered debug
>>>>>>> code and higher latencies here are okay.
>>>>>>
>>>>>> You didn't put a changelog in here. Was this one actually compiled? Was
>>>>>> it runtime tested?
>>>>>
>>>>> It feels like the changelog reached the inboxes after patch (at least
>>>>> mine was so). Would you like that I send a v6 in the hope things
>>>>> arrive in order?
>>>>
>>>> Not sure how you are sending them, but they don't appear to thread
>>>> properly. But the changelog in the cover letter isn't really a
>>>> changelog, it doesn't say what changed.
>>>>
>>>
>>> Sorry, I think I was too brief in my explanation. I am backporting
>>> this patch to the RHEL 9 kernel (which runs kernel 5.14). I mistakenly
>>> generated the v4 patch from that tree, but it lacks this piece
>>>
>>> @@ -1608,9 +1608,9 @@ static int blk_trace_remove_queue(struct request_queue *q)
>>>
>>>         if (bt->trace_state == Blktrace_running) {
>>>                 bt->trace_state = Blktrace_stopped;
>>> -               spin_lock_irq(&running_trace_lock);
>>> +               raw_spin_lock_irq(&running_trace_lock);
>>>                 list_del_init(&bt->running_list);
>>> -               spin_unlock_irq(&running_trace_lock);
>>> +               raw_spin_unlock_irq(&running_trace_lock);
>>>                 relay_flush(bt->rchan);
>>>         }
>>>
>>> Causing the build error. v5 adds that. Sorry again for the confusion.
>>
>> Right, that's why I asked if a) you had even built this patch, and b) if
>> you had tested it as well.
>>
> 
> Yes, I had. But I had two versions of it. One for RHEL and one for
> torvalds/master. I just picked the wrong branch when generating it.
> I apologize for the mess once more.

Alright, fair enough, mistakes happen. I think the patch looks fine; my
main dislike is that it's Yet Another Thing that needs special RT
handling. But I guess that's how it is...

-- 
Jens Axboe



* Re: [PATCH v5 1/1] blktrace: switch trace spinlock to a raw spinlock
  2021-12-20 19:28 [PATCH v5 1/1] blktrace: switch trace spinlock to a raw spinlock Wander Lairson Costa
  2021-12-20 19:38 ` Jens Axboe
@ 2021-12-20 20:50 ` Jens Axboe
From: Jens Axboe @ 2021-12-20 20:50 UTC (permalink / raw)
  To: Steven Rostedt, Ingo Molnar, linux-kernel, Wander Lairson Costa,
	open list:BLOCK LAYER
  Cc: Thomas Gleixner, Sebastian Andrzej Siewior

On Mon, 20 Dec 2021 16:28:27 -0300, Wander Lairson Costa wrote:
> The running_trace_lock protects running_trace_list and is acquired
> within the tracepoint which implies disabled preemption. The spinlock_t
> typed lock can not be acquired with disabled preemption on PREEMPT_RT
> because it becomes a sleeping lock.
> The runtime of the tracepoint depends on the number of entries in
> running_trace_list and has no limit. The blk-tracer is considered debug
> code and higher latencies here are okay.
> 
> [...]

Applied, thanks!

[1/1] blktrace: switch trace spinlock to a raw spinlock
      commit: 361c81dbc58c8aa230e1f2d556045fa7bc3eb4a3

Best regards,
-- 
Jens Axboe




* Re: [PATCH v5 1/1] blktrace: switch trace spinlock to a raw spinlock
  2021-12-20 20:49             ` Jens Axboe
@ 2021-12-20 23:30               ` Steven Rostedt
  2021-12-20 23:45                 ` Jens Axboe
From: Steven Rostedt @ 2021-12-20 23:30 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Wander Costa, Wander Lairson Costa, open list, Ingo Molnar,
	open list:BLOCK LAYER, Thomas Gleixner,
	Sebastian Andrzej Siewior

On Mon, 20 Dec 2021 13:49:47 -0700
Jens Axboe <axboe@kernel.dk> wrote:

> Alright, fair enough, mistakes happen. I think the patch looks fine, my
> main dislike is that it's Yet Another things that needs special RT
> handling. But I guess that's how it is...

It's not really "yet another thing". Tracing in general needs special
RT handling (and a lot of other special handling, not just RT).
blktrace falls under the tracing category, and this is just one more
thing needed to make tracing work with RT.

-- Steve


* Re: [PATCH v5 1/1] blktrace: switch trace spinlock to a raw spinlock
  2021-12-20 23:30               ` Steven Rostedt
@ 2021-12-20 23:45                 ` Jens Axboe
From: Jens Axboe @ 2021-12-20 23:45 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Wander Costa, Wander Lairson Costa, open list, Ingo Molnar,
	open list:BLOCK LAYER, Thomas Gleixner,
	Sebastian Andrzej Siewior

On 12/20/21 4:30 PM, Steven Rostedt wrote:
> On Mon, 20 Dec 2021 13:49:47 -0700
> Jens Axboe <axboe@kernel.dk> wrote:
> 
>> Alright, fair enough, mistakes happen. I think the patch looks fine, my
>> main dislike is that it's Yet Another things that needs special RT
>> handling. But I guess that's how it is...
> 
> It's not really "yet another thing". That's because in general tracing
> needs special RT handling (and a lot of other special handling, not
> just RT). blktrace falls under the tracing category, and this just
> falls under one more thing to make tracing work with RT.

This isn't the first patch like this I've applied. I'm not saying this
is an issue with tracing, just that it's another one of these
"spinlocks are now preemptible/sleeping with RT" cases, and that
necessitates changes like this.

-- 
Jens Axboe
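
To make "preemptible/sleeping with RT" concrete, the two lock types have
roughly these shapes on a CONFIG_PREEMPT_RT kernel. This is a simplified
sketch of linux/spinlock_types.h from that era (debug fields omitted),
not a verbatim copy:

/* Simplified; see linux/spinlock_types.h for the real definitions. */
typedef struct raw_spinlock {
        arch_spinlock_t raw_lock;       /* true spinning lock on every config */
} raw_spinlock_t;

#ifdef CONFIG_PREEMPT_RT
typedef struct spinlock {
        struct rt_mutex_base lock;      /* sleeping rt_mutex substitution */
} spinlock_t;
#else
typedef struct spinlock {
        struct raw_spinlock rlock;      /* same behavior as raw_spinlock_t */
} spinlock_t;
#endif

This is why only raw_spinlock_t may be taken in contexts that cannot
sleep on PREEMPT_RT, such as the tracepoint callback this patch fixes.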


