linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* Work queue questions
@ 2012-09-21 17:35 Dinky Verma
  2012-09-21 17:49 ` Tejun Heo
  0 siblings, 1 reply; 22+ messages in thread
From: Dinky Verma @ 2012-09-21 17:35 UTC (permalink / raw)
  To: linux-kernel

Hi,

I have one question regarding concurrency managed workqueue. In the
previous kernel versions, I was using
create_singlethread_workqueue("driver_wq") e.g workqueue name is
driver_wq. In my device driver with the latest kernel version, I am
doing the same to have a support in my device driver for previous
kernel versions and new kernel version, I started using
alloc_workqueue (in intention to create single threaded workqueue)
e.g.

wq = alloc_workqueue("driver_wq", WQ_UNBOUND,1);

create_singlethread_workqueue (Depricated) and alloc_workqueue creates
work queue both work on the newer kernel versions.

I have created 3 single threaded workqueues. when I do ps on linux
console, I see the workqueue thread with process id. When I am queuing
the work simultaneously on these worker threads, I found that threads
named with Kworker/X.Y will process the work from the work queue not
the one that had been created create_singlethread_workqueue.

When I schedule the three works at the same time, I saw sometimes one
Kworker/X.Y thread processes all work items.

The question is why the main worker thread that I created does not
process the work that is intended for it why instead kworker will
process it? I have queued the work using queue_work(wq,
worker_struct).

Regards,
Deepa

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Work queue questions
  2012-09-21 17:35 Work queue questions Dinky Verma
@ 2012-09-21 17:49 ` Tejun Heo
  2012-09-21 18:30   ` Deepawali Verma
  0 siblings, 1 reply; 22+ messages in thread
From: Tejun Heo @ 2012-09-21 17:49 UTC (permalink / raw)
  To: Dinky Verma; +Cc: linux-kernel

On Fri, Sep 21, 2012 at 06:35:25PM +0100, Dinky Verma wrote:
> I have one question regarding concurrency managed workqueue. In the
> previous kernel versions, I was using
> create_singlethread_workqueue("driver_wq") e.g workqueue name is
> driver_wq. In my device driver with the latest kernel version, I am
> doing the same to have a support in my device driver for previous
> kernel versions and new kernel version, I started using
> alloc_workqueue (in intention to create single threaded workqueue)
> e.g.
> 
> wq = alloc_workqueue("driver_wq", WQ_UNBOUND,1);
> 
> create_singlethread_workqueue (Depricated) and alloc_workqueue creates
> work queue both work on the newer kernel versions.

You can use alloc_ordered_workqueue() instead but do you really need
strict ordering among different work items?  If not, it's likely that
you don't need to create separate workqueues at all.

> I have created 3 single threaded workqueues. when I do ps on linux
> console, I see the workqueue thread with process id. When I am queuing
> the work simultaneously on these worker threads, I found that threads
> named with Kworker/X.Y will process the work from the work queue not
> the one that had been created create_singlethread_workqueue.
> 
> When I schedule the three works at the same time, I saw sometimes one
> Kworker/X.Y thread processes all work items.
> 
> The question is why the main worker thread that I created does not
> process the work that is intended for it why instead kworker will
> process it? I have queued the work using queue_work(wq,
> worker_struct).

The kthread named after the workqueue is the rescuer which kicks in
iff work execution can't make forward progress due to memory pressure.
Normally all work items are served by worker threads in shared worker
pool.  What kind of driver is it?  Does it sit in memory reclaim path?

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Work queue questions
  2012-09-21 17:49 ` Tejun Heo
@ 2012-09-21 18:30   ` Deepawali Verma
  2012-09-21 18:35     ` Tejun Heo
  0 siblings, 1 reply; 22+ messages in thread
From: Deepawali Verma @ 2012-09-21 18:30 UTC (permalink / raw)
  To: Tejun Heo; +Cc: linux-kernel

Hi Tejun,

Actually I want to make parallelization of one task into three tasks.
Therefore I created three single threaded work queues means divide the
task into three tasks. You are right that I can use one work queue as
well. But when I am doing three times schedule on different work
queues, I am seeing only one worker thread is processing the three
times schedule though I created three different workqueues and I
believe from previous kernel versions that there is one worker thread
associated with one queue. If one thread does this task then there is
no difference between doing the same task in one thread and using
three threads.

If we create different work queues, why always one worker thread is
processing the all tasks instead I want another two threads also work
in parallel?

Regards,
Deepa

On Fri, Sep 21, 2012 at 6:49 PM, Tejun Heo <tj@kernel.org> wrote:
> On Fri, Sep 21, 2012 at 06:35:25PM +0100, Dinky Verma wrote:
>> I have one question regarding concurrency managed workqueue. In the
>> previous kernel versions, I was using
>> create_singlethread_workqueue("driver_wq") e.g workqueue name is
>> driver_wq. In my device driver with the latest kernel version, I am
>> doing the same to have a support in my device driver for previous
>> kernel versions and new kernel version, I started using
>> alloc_workqueue (in intention to create single threaded workqueue)
>> e.g.
>>
>> wq = alloc_workqueue("driver_wq", WQ_UNBOUND,1);
>>
>> create_singlethread_workqueue (Depricated) and alloc_workqueue creates
>> work queue both work on the newer kernel versions.
>
> You can use alloc_ordered_workqueue() instead but do you really need
> strict ordering among different work items?  If not, it's likely that
> you don't need to create separate workqueues at all.
>
>> I have created 3 single threaded workqueues. when I do ps on linux
>> console, I see the workqueue thread with process id. When I am queuing
>> the work simultaneously on these worker threads, I found that threads
>> named with Kworker/X.Y will process the work from the work queue not
>> the one that had been created create_singlethread_workqueue.
>>
>> When I schedule the three works at the same time, I saw sometimes one
>> Kworker/X.Y thread processes all work items.
>>
>> The question is why the main worker thread that I created does not
>> process the work that is intended for it why instead kworker will
>> process it? I have queued the work using queue_work(wq,
>> worker_struct).
>
> The kthread named after the workqueue is the rescuer which kicks in
> iff work execution can't make forward progress due to memory pressure.
> Normally all work items are served by worker threads in shared worker
> pool.  What kind of driver is it?  Does it sit in memory reclaim path?
>
> Thanks.
>
> --
> tejun

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Work queue questions
  2012-09-21 18:30   ` Deepawali Verma
@ 2012-09-21 18:35     ` Tejun Heo
  2012-09-21 19:26       ` Deepawali Verma
  0 siblings, 1 reply; 22+ messages in thread
From: Tejun Heo @ 2012-09-21 18:35 UTC (permalink / raw)
  To: Deepawali Verma; +Cc: linux-kernel

Hello,

On Fri, Sep 21, 2012 at 07:30:21PM +0100, Deepawali Verma wrote:
> Actually I want to make parallelization of one task into three tasks.
> Therefore I created three single threaded work queues means divide the
> task into three tasks. You are right that I can use one work queue as
> well. But when I am doing three times schedule on different work
> queues, I am seeing only one worker thread is processing the three
> times schedule though I created three different workqueues and I
> believe from previous kernel versions that there is one worker thread
> associated with one queue. If one thread does this task then there is
> no difference between doing the same task in one thread and using
> three threads.
> 
> If we create different work queues, why always one worker thread is
> processing the all tasks instead I want another two threads also work
> in parallel?

Well, that was the whole point of concurrency managed workqueue.  You
don't need to worry about the number of workers.  Concurrency is
automatically managed.  If you queue three work items on, say,
system_wq and none of them sleeps, a single worker will execute them
back to back.  If a work item sleeps, another worker will kick in.
So, in most cases, there's no need to worry about concurrency - just
use system_wq.

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Work queue questions
  2012-09-21 18:35     ` Tejun Heo
@ 2012-09-21 19:26       ` Deepawali Verma
  2012-09-21 19:27         ` Tejun Heo
  0 siblings, 1 reply; 22+ messages in thread
From: Deepawali Verma @ 2012-09-21 19:26 UTC (permalink / raw)
  To: Tejun Heo; +Cc: linux-kernel

Hi Tejun,

I have put the ftrace markers in my code:

     kworker/u:1-21    [000]   110.964895: task_event: MYTASKJOB2381 XStarted
     kworker/u:1-21    [000]   110.964909: task_event: MYTASKJOB2381 Xstopped
     kworker/u:1-21    [000]   110.965137: task_event: MYTASKJOB2382 XStarted
     kworker/u:1-21    [000]   110.965154: task_event: MYTASKJOB2382 Xstopped
     kworker/u:5-3724  [000]   110.965311: task_event: MYTASKJOB2383 XStarted
     kworker/u:5-3724  [000]   110.965325: task_event: MYTASKJOB2383 Xstopped

I have this one big task to whom I divided into small sub tasks. These
are numbered 2381, 2382 and 2383, what was I expecting that task 2381,
2382, 2383 run in parallel. I have put start and stop markers here so
that I can see how this concurrency managed work queue is distributing
the load.

I found that task no 2381 is started first and exited before starting
task 2382 and so on. What I expected that it should start the three
sub tasks in parallel, not one by one.

Where is concurrency here?

Regards,
Deepa



On Fri, Sep 21, 2012 at 7:35 PM, Tejun Heo <tj@kernel.org> wrote:
> Hello,
>
> On Fri, Sep 21, 2012 at 07:30:21PM +0100, Deepawali Verma wrote:
>> Actually I want to make parallelization of one task into three tasks.
>> Therefore I created three single threaded work queues means divide the
>> task into three tasks. You are right that I can use one work queue as
>> well. But when I am doing three times schedule on different work
>> queues, I am seeing only one worker thread is processing the three
>> times schedule though I created three different workqueues and I
>> believe from previous kernel versions that there is one worker thread
>> associated with one queue. If one thread does this task then there is
>> no difference between doing the same task in one thread and using
>> three threads.
>>
>> If we create different work queues, why always one worker thread is
>> processing the all tasks instead I want another two threads also work
>> in parallel?
>
> Well, that was the whole point of concurrency managed workqueue.  You
> don't need to worry about the number of workers.  Concurrency is
> automatically managed.  If you queue three work items on, say,
> system_wq and none of them sleeps, a single worker will execute them
> back to back.  If a work item sleeps, another worker will kick in.
> So, in most cases, there's no need to worry about concurrency - just
> use system_wq.
>
> Thanks.
>
> --
> tejun

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Work queue questions
  2012-09-21 19:26       ` Deepawali Verma
@ 2012-09-21 19:27         ` Tejun Heo
  2012-09-21 19:35           ` Deepawali Verma
  0 siblings, 1 reply; 22+ messages in thread
From: Tejun Heo @ 2012-09-21 19:27 UTC (permalink / raw)
  To: Deepawali Verma; +Cc: linux-kernel

On Fri, Sep 21, 2012 at 08:26:01PM +0100, Deepawali Verma wrote:
>      kworker/u:1-21    [000]   110.964895: task_event: MYTASKJOB2381 XStarted
>      kworker/u:1-21    [000]   110.964909: task_event: MYTASKJOB2381 Xstopped
>      kworker/u:1-21    [000]   110.965137: task_event: MYTASKJOB2382 XStarted
>      kworker/u:1-21    [000]   110.965154: task_event: MYTASKJOB2382 Xstopped
>      kworker/u:5-3724  [000]   110.965311: task_event: MYTASKJOB2383 XStarted
>      kworker/u:5-3724  [000]   110.965325: task_event: MYTASKJOB2383 Xstopped
> 
> I have this one big task to whom I divided into small sub tasks. These
> are numbered 2381, 2382 and 2383, what was I expecting that task 2381,
> 2382, 2383 run in parallel. I have put start and stop markers here so
> that I can see how this concurrency managed work queue is distributing
> the load.
> 
> I found that task no 2381 is started first and exited before starting
> task 2382 and so on. What I expected that it should start the three
> sub tasks in parallel, not one by one.
> 
> Where is concurrency here?

If none of them blocks, there isn't much point in throwing more
threads at them.  What are those thread doing?

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Work queue questions
  2012-09-21 19:27         ` Tejun Heo
@ 2012-09-21 19:35           ` Deepawali Verma
  2012-09-21 19:40             ` Tejun Heo
  2012-09-22  4:24             ` anish singh
  0 siblings, 2 replies; 22+ messages in thread
From: Deepawali Verma @ 2012-09-21 19:35 UTC (permalink / raw)
  To: Tejun Heo; +Cc: linux-kernel

Hi Tajun,

These three tasks are writing the three chunks of data in parallel. I
am not getting improvement here otherwise what is difference between
writing these chunks one by one in single thread instead of trying to
write the data by scheduling the work on three different workqueues
means 3 worker threads?

Regards,
Deepa

On Fri, Sep 21, 2012 at 8:27 PM, Tejun Heo <tj@kernel.org> wrote:
> On Fri, Sep 21, 2012 at 08:26:01PM +0100, Deepawali Verma wrote:
>>      kworker/u:1-21    [000]   110.964895: task_event: MYTASKJOB2381 XStarted
>>      kworker/u:1-21    [000]   110.964909: task_event: MYTASKJOB2381 Xstopped
>>      kworker/u:1-21    [000]   110.965137: task_event: MYTASKJOB2382 XStarted
>>      kworker/u:1-21    [000]   110.965154: task_event: MYTASKJOB2382 Xstopped
>>      kworker/u:5-3724  [000]   110.965311: task_event: MYTASKJOB2383 XStarted
>>      kworker/u:5-3724  [000]   110.965325: task_event: MYTASKJOB2383 Xstopped
>>
>> I have this one big task to whom I divided into small sub tasks. These
>> are numbered 2381, 2382 and 2383, what was I expecting that task 2381,
>> 2382, 2383 run in parallel. I have put start and stop markers here so
>> that I can see how this concurrency managed work queue is distributing
>> the load.
>>
>> I found that task no 2381 is started first and exited before starting
>> task 2382 and so on. What I expected that it should start the three
>> sub tasks in parallel, not one by one.
>>
>> Where is concurrency here?
>
> If none of them blocks, there isn't much point in throwing more
> threads at them.  What are those thread doing?
>
> Thanks.
>
> --
> tejun

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Work queue questions
  2012-09-21 19:35           ` Deepawali Verma
@ 2012-09-21 19:40             ` Tejun Heo
  2012-09-22  4:24             ` anish singh
  1 sibling, 0 replies; 22+ messages in thread
From: Tejun Heo @ 2012-09-21 19:40 UTC (permalink / raw)
  To: Deepawali Verma; +Cc: linux-kernel

Hello, Deepawali.

On Fri, Sep 21, 2012 at 08:35:13PM +0100, Deepawali Verma wrote:
> These three tasks are writing the three chunks of data in parallel. I
> am not getting improvement here otherwise what is difference between
> writing these chunks one by one in single thread instead of trying to
> write the data by scheduling the work on three different workqueues
> means 3 worker threads?

Workqueue is designed to supply sufficient concurrency for such use
cases and it has been doing so for all other in-kernel users for quite
some time now.  If you're not getting concurrency in the above
scenario, either you've found a bug in workqueue or you did something
wrong.  If you have a scenario not working for you, please post the
code.

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Work queue questions
  2012-09-21 19:35           ` Deepawali Verma
  2012-09-21 19:40             ` Tejun Heo
@ 2012-09-22  4:24             ` anish singh
  2012-09-22  5:27               ` Daniel Taylor
  1 sibling, 1 reply; 22+ messages in thread
From: anish singh @ 2012-09-22  4:24 UTC (permalink / raw)
  To: Deepawali Verma; +Cc: Tejun Heo, linux-kernel

On Sat, Sep 22, 2012 at 1:05 AM, Deepawali Verma <dverma249@gmail.com> wrote:
> Hi Tajun,
>
> These three tasks are writing the three chunks of data in parallel. I
> am not getting improvement here otherwise what is difference between
> writing these chunks one by one in single thread instead of trying to
> write the data by scheduling the work on three different workqueues
> means 3 worker threads?
You should have carefully read "If none of them blocks, there
isn't much point in throwing more threads at them.  What are those
thread doing?" what Tejun said.

I think what he means is that concurrency is the concept of keeping the
system busy.
If you see the below logs:
kworker/u:1-21    [000]   110.964895: task_event: MYTASKJOB2381 XStarted
kworker/u:1-21    [000]   110.964909: task_event: MYTASKJOB2381 Xstopped
Here your first worker thread blocked.

So the system will try to get other workqueue started which is:
kworker/u:1-21    [000]   110.965137: task_event: MYTASKJOB2382 XStarted
kworker/u:1-21    [000]   110.965154: task_event: MYTASKJOB2382 Xstopped
Here again your second worker thread blocked.

So on so forth.
Anyway how can you write chunks of data in parallel when already some worker
thread is writing i.e. the system is busy.
Analogy: Suppose you are ambidextrous and you are eating.Can you eat with
both of your hands at a time?So worker thread are like your hands and keeping
you fed all the time is the concept of concurrency.

I am not an expert on this but from Tejun's reply I could make out this.
Please correct me If I have wrongly understood the concept based on this mail
chain.
>
> Regards,
> Deepa
>
> On Fri, Sep 21, 2012 at 8:27 PM, Tejun Heo <tj@kernel.org> wrote:
>> On Fri, Sep 21, 2012 at 08:26:01PM +0100, Deepawali Verma wrote:
>>>      kworker/u:1-21    [000]   110.964895: task_event: MYTASKJOB2381 XStarted
>>>      kworker/u:1-21    [000]   110.964909: task_event: MYTASKJOB2381 Xstopped
>>>      kworker/u:1-21    [000]   110.965137: task_event: MYTASKJOB2382 XStarted
>>>      kworker/u:1-21    [000]   110.965154: task_event: MYTASKJOB2382 Xstopped
>>>      kworker/u:5-3724  [000]   110.965311: task_event: MYTASKJOB2383 XStarted
>>>      kworker/u:5-3724  [000]   110.965325: task_event: MYTASKJOB2383 Xstopped
>>>
>>> I have this one big task to whom I divided into small sub tasks. These
>>> are numbered 2381, 2382 and 2383, what was I expecting that task 2381,
>>> 2382, 2383 run in parallel. I have put start and stop markers here so
>>> that I can see how this concurrency managed work queue is distributing
>>> the load.
>>>
>>> I found that task no 2381 is started first and exited before starting
>>> task 2382 and so on. What I expected that it should start the three
>>> sub tasks in parallel, not one by one.
>>>
>>> Where is concurrency here?
>>
>> If none of them blocks, there isn't much point in throwing more
>> threads at them.  What are those thread doing?
>>
>> Thanks.
>>
>> --
>> tejun
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 22+ messages in thread

* RE: Work queue questions
  2012-09-22  4:24             ` anish singh
@ 2012-09-22  5:27               ` Daniel Taylor
  2012-09-22  6:05                 ` anish singh
  0 siblings, 1 reply; 22+ messages in thread
From: Daniel Taylor @ 2012-09-22  5:27 UTC (permalink / raw)
  To: 'anish singh', Deepawali Verma; +Cc: Tejun Heo, linux-kernel

 

> -----Original Message-----
> From: linux-kernel-owner@vger.kernel.org 
> [mailto:linux-kernel-owner@vger.kernel.org] On Behalf Of anish singh
> Sent: Friday, September 21, 2012 9:25 PM
> To: Deepawali Verma
> Cc: Tejun Heo; linux-kernel@vger.kernel.org
> Subject: Re: Work queue questions
> 
> On Sat, Sep 22, 2012 at 1:05 AM, Deepawali Verma 
> <dverma249@gmail.com> wrote:
> > Hi Tajun,
> >
> > These three tasks are writing the three chunks of data in 
> parallel. I
> > am not getting improvement here otherwise what is difference between
> > writing these chunks one by one in single thread instead of 
> trying to
> > write the data by scheduling the work on three different workqueues
> > means 3 worker threads?
> You should have carefully read "If none of them blocks, there
> isn't much point in throwing more threads at them.  What are those
> thread doing?" what Tejun said.
> 
> I think what he means is that concurrency is the concept of 
> keeping the
> system busy.
> If you see the below logs:
> kworker/u:1-21    [000]   110.964895: task_event: 
> MYTASKJOB2381 XStarted
> kworker/u:1-21    [000]   110.964909: task_event: 
> MYTASKJOB2381 Xstopped
> Here your first worker thread blocked.
> 
> So the system will try to get other workqueue started which is:
> kworker/u:1-21    [000]   110.965137: task_event: 
> MYTASKJOB2382 XStarted
> kworker/u:1-21    [000]   110.965154: task_event: 
> MYTASKJOB2382 Xstopped
> Here again your second worker thread blocked.
> 
> So on so forth.
> Anyway how can you write chunks of data in parallel when 
> already some worker
> thread is writing i.e. the system is busy.
> Analogy: Suppose you are ambidextrous and you are eating.Can 
> you eat with
> both of your hands at a time?So worker thread are like your 
> hands and keeping
> you fed all the time is the concept of concurrency.
> 
> I am not an expert on this but from Tejun's reply I could 
> make out this.
> Please correct me If I have wrongly understood the concept 
> based on this mail

I don't know how many cores are in the CPU Deepawali's using, but if I have four,
for example, I could do something simplistic like copy pages A-G with one, pages
H-O with another, and pages Q-Z with a third.  There are memory and cache bottlenecks
(like the mouth, in your example), but all three copies could be running concurrently.

Copying, of course, is a silly, trivial example, and I hope there's a better reason
than that for the concurrency, but, if, for example, your needed to byte-swap, XOR,
or checksum, as core functionality of an embedded system, and the processing units were
available to do these things in parallel, then interleaving those operations with memory
accesses could provide higher throughput.

I think what he's asking is why there's no apparent concurrency, presuming that NONE
of his threads has a real reason to block.  With examining his code, I cannot tell,
but it looks like, from the messages, that the kernel did not attempt concurrency.
Perhaps he needs to pass additional state to the scheduler?


> chain.
> >
> > Regards,
> > Deepa
> >
> > On Fri, Sep 21, 2012 at 8:27 PM, Tejun Heo <tj@kernel.org> wrote:
> >> On Fri, Sep 21, 2012 at 08:26:01PM +0100, Deepawali Verma wrote:
> >>>      kworker/u:1-21    [000]   110.964895: task_event: 
> MYTASKJOB2381 XStarted
> >>>      kworker/u:1-21    [000]   110.964909: task_event: 
> MYTASKJOB2381 Xstopped
> >>>      kworker/u:1-21    [000]   110.965137: task_event: 
> MYTASKJOB2382 XStarted
> >>>      kworker/u:1-21    [000]   110.965154: task_event: 
> MYTASKJOB2382 Xstopped
> >>>      kworker/u:5-3724  [000]   110.965311: task_event: 
> MYTASKJOB2383 XStarted
> >>>      kworker/u:5-3724  [000]   110.965325: task_event: 
> MYTASKJOB2383 Xstopped
> >>>
> >>> I have this one big task to whom I divided into small sub 
> tasks. These
> >>> are numbered 2381, 2382 and 2383, what was I expecting 
> that task 2381,
> >>> 2382, 2383 run in parallel. I have put start and stop 
> markers here so
> >>> that I can see how this concurrency managed work queue is 
> distributing
> >>> the load.
> >>>
> >>> I found that task no 2381 is started first and exited 
> before starting
> >>> task 2382 and so on. What I expected that it should start 
> the three
> >>> sub tasks in parallel, not one by one.
> >>>
> >>> Where is concurrency here?
> >>
> >> If none of them blocks, there isn't much point in throwing more
> >> threads at them.  What are those thread doing?
> >>
> >> Thanks.
> >>
> >> --
> >> tejun
> > --
> > To unsubscribe from this list: send the line "unsubscribe 
> linux-kernel" in
> > the body of a message to majordomo@vger.kernel.org
> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> > Please read the FAQ at  http://www.tux.org/lkml/
> --
> To unsubscribe from this list: send the line "unsubscribe 
> linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
> 

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Work queue questions
  2012-09-22  5:27               ` Daniel Taylor
@ 2012-09-22  6:05                 ` anish singh
  2012-09-22  6:12                   ` Tejun Heo
  2012-09-22  6:18                   ` Daniel Taylor
  0 siblings, 2 replies; 22+ messages in thread
From: anish singh @ 2012-09-22  6:05 UTC (permalink / raw)
  To: Daniel Taylor; +Cc: Deepawali Verma, Tejun Heo, linux-kernel

On Sat, Sep 22, 2012 at 10:57 AM, Daniel Taylor <Daniel.Taylor@wdc.com> wrote:
>
>
>> -----Original Message-----
>> From: linux-kernel-owner@vger.kernel.org
>> [mailto:linux-kernel-owner@vger.kernel.org] On Behalf Of anish singh
>> Sent: Friday, September 21, 2012 9:25 PM
>> To: Deepawali Verma
>> Cc: Tejun Heo; linux-kernel@vger.kernel.org
>> Subject: Re: Work queue questions
>>
>> On Sat, Sep 22, 2012 at 1:05 AM, Deepawali Verma
>> <dverma249@gmail.com> wrote:
>> > Hi Tajun,
>> >
>> > These three tasks are writing the three chunks of data in
>> parallel. I
>> > am not getting improvement here otherwise what is difference between
>> > writing these chunks one by one in single thread instead of
>> trying to
>> > write the data by scheduling the work on three different workqueues
>> > means 3 worker threads?
>> You should have carefully read "If none of them blocks, there
>> isn't much point in throwing more threads at them.  What are those
>> thread doing?" what Tejun said.
>>
>> I think what he means is that concurrency is the concept of
>> keeping the
>> system busy.
>> If you see the below logs:
>> kworker/u:1-21    [000]   110.964895: task_event:
>> MYTASKJOB2381 XStarted
>> kworker/u:1-21    [000]   110.964909: task_event:
>> MYTASKJOB2381 Xstopped
>> Here your first worker thread blocked.
>>
>> So the system will try to get other workqueue started which is:
>> kworker/u:1-21    [000]   110.965137: task_event:
>> MYTASKJOB2382 XStarted
>> kworker/u:1-21    [000]   110.965154: task_event:
>> MYTASKJOB2382 Xstopped
>> Here again your second worker thread blocked.
>>
>> So on so forth.
>> Anyway how can you write chunks of data in parallel when
>> already some worker
>> thread is writing i.e. the system is busy.
>> Analogy: Suppose you are ambidextrous and you are eating.Can
>> you eat with
>> both of your hands at a time?So worker thread are like your
>> hands and keeping
>> you fed all the time is the concept of concurrency.
>>
>> I am not an expert on this but from Tejun's reply I could
>> make out this.
>> Please correct me If I have wrongly understood the concept
>> based on this mail
>
> I don't know how many cores are in the CPU Deepawali's using, but if I have four,
Assuming single core,Is my explanation correct about concurrency?
> for example, I could do something simplistic like copy pages A-G with one, pages
> H-O with another, and pages Q-Z with a third.  There are memory and cache bottlenecks
> (like the mouth, in your example), but all three copies could be running concurrently.
>
> Copying, of course, is a silly, trivial example, and I hope there's a better reason
> than that for the concurrency, but, if, for example, your needed to byte-swap, XOR,
> or checksum, as core functionality of an embedded system, and the processing units were
> available to do these things in parallel, then interleaving those operations with memory
> accesses could provide higher throughput.
>
> I think what he's asking is why there's no apparent concurrency, presuming that NONE
> of his threads has a real reason to block.  With examining his code, I cannot tell,
> but it looks like, from the messages, that the kernel did not attempt concurrency.
> Perhaps he needs to pass additional state to the scheduler?
>
>
>> chain.
>> >
>> > Regards,
>> > Deepa
>> >
>> > On Fri, Sep 21, 2012 at 8:27 PM, Tejun Heo <tj@kernel.org> wrote:
>> >> On Fri, Sep 21, 2012 at 08:26:01PM +0100, Deepawali Verma wrote:
>> >>>      kworker/u:1-21    [000]   110.964895: task_event:
>> MYTASKJOB2381 XStarted
>> >>>      kworker/u:1-21    [000]   110.964909: task_event:
>> MYTASKJOB2381 Xstopped
>> >>>      kworker/u:1-21    [000]   110.965137: task_event:
>> MYTASKJOB2382 XStarted
>> >>>      kworker/u:1-21    [000]   110.965154: task_event:
>> MYTASKJOB2382 Xstopped
>> >>>      kworker/u:5-3724  [000]   110.965311: task_event:
>> MYTASKJOB2383 XStarted
>> >>>      kworker/u:5-3724  [000]   110.965325: task_event:
>> MYTASKJOB2383 Xstopped
>> >>>
>> >>> I have this one big task to whom I divided into small sub
>> tasks. These
>> >>> are numbered 2381, 2382 and 2383, what was I expecting
>> that task 2381,
>> >>> 2382, 2383 run in parallel. I have put start and stop
>> markers here so
>> >>> that I can see how this concurrency managed work queue is
>> distributing
>> >>> the load.
>> >>>
>> >>> I found that task no 2381 is started first and exited
>> before starting
>> >>> task 2382 and so on. What I expected that it should start
>> the three
>> >>> sub tasks in parallel, not one by one.
>> >>>
>> >>> Where is concurrency here?
>> >>
>> >> If none of them blocks, there isn't much point in throwing more
>> >> threads at them.  What are those thread doing?
>> >>
>> >> Thanks.
>> >>
>> >> --
>> >> tejun
>> > --
>> > To unsubscribe from this list: send the line "unsubscribe
>> linux-kernel" in
>> > the body of a message to majordomo@vger.kernel.org
>> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
>> > Please read the FAQ at  http://www.tux.org/lkml/
>> --
>> To unsubscribe from this list: send the line "unsubscribe
>> linux-kernel" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>> Please read the FAQ at  http://www.tux.org/lkml/
>>

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Work queue questions
  2012-09-22  6:05                 ` anish singh
@ 2012-09-22  6:12                   ` Tejun Heo
  2012-09-22  6:18                   ` Daniel Taylor
  1 sibling, 0 replies; 22+ messages in thread
From: Tejun Heo @ 2012-09-22  6:12 UTC (permalink / raw)
  To: anish singh; +Cc: Daniel Taylor, Deepawali Verma, linux-kernel

Hello,

On Fri, Sep 21, 2012 at 11:05 PM, anish singh
<anish198519851985@gmail.com> wrote:
> Assuming single core,Is my explanation correct about concurrency?

Yes, for bound workqueues, that's correct. Concurrency management
doesn't apply to unbound ones tho. Didn't notice Deepawali's test case
either just didn't take long enough for the scheduler to interleave
the workers or is using a workqueue w/ max_active == 1. I'm afraid
this won't be a particularly productive discussion without the source
code. For more details, please read Documentation/workqueue.txt.

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 22+ messages in thread

* RE: Work queue questions
  2012-09-22  6:05                 ` anish singh
  2012-09-22  6:12                   ` Tejun Heo
@ 2012-09-22  6:18                   ` Daniel Taylor
  2012-09-24  7:25                     ` Deepawali Verma
  1 sibling, 1 reply; 22+ messages in thread
From: Daniel Taylor @ 2012-09-22  6:18 UTC (permalink / raw)
  To: 'anish singh'; +Cc: Deepawali Verma, Tejun Heo, linux-kernel


...

> >> So on so forth.
> >> Anyway how can you write chunks of data in parallel when
> >> already some worker
> >> thread is writing i.e. the system is busy.
> >> Analogy: Suppose you are ambidextrous and you are eating.Can
> >> you eat with
> >> both of your hands at a time?So worker thread are like your
> >> hands and keeping
> >> you fed all the time is the concept of concurrency.
> >>
> >> I am not an expert on this but from Tejun's reply I could
> >> make out this.
> >> Please correct me If I have wrongly understood the concept
> >> based on this mail
> >
> > I don't know how many cores are in the CPU Deepawali's 
> using, but if I have four,
> Assuming single core,Is my explanation correct about concurrency?

It is possible for his tasks to complete before scheduling occurs
again.  Consuming all of the CPU and having no blocking action,
yes, then the tasks will run consecutively.

...

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Work queue questions
  2012-09-22  6:18                   ` Daniel Taylor
@ 2012-09-24  7:25                     ` Deepawali Verma
       [not found]                       ` <CAK-9PRB7KvPNgcsXiNG08-7OdrkkNc2ushusXh9rVm93J0xcHA@mail.gmail.com>
  0 siblings, 1 reply; 22+ messages in thread
From: Deepawali Verma @ 2012-09-24  7:25 UTC (permalink / raw)
  To: Daniel Taylor; +Cc: anish singh, Tejun Heo, linux-kernel

Hi Tejun,

Here are some code snippets from my device driver:

#defind NUMBER_OF_SUBTASKS 3

struct my_driver_object
{
        struct workqueue_struct *sub_task_wq;
        struct work_struct sub_task_work;
        char my_obj_wq_name[80];
        int task_id;
};

struct my_driver_object obj[3];


void my_driver_init(void)
{
   int i =0;
   memset(my_obj_wq_name,0,80);
  --------------------------------------
  for (i =0; i<3; i++)
  {
      snprintf(obj[i].my_obj_wq_name,80, "Task-wq:%d",i);
      obj.sub_task_wq = alloc_workqueue(obj[i].my_obj_wq_name,WQ_UNBOUND,1);
      INIT_WORK(&obj[i].sub_task_work, sub_task_work_handler);
  }

  --------------------------------------
}

void start_sub_tasks()
{
   int i =0;
   for (i =0; i<3; i++)
   {
        queue_work(obj[i].sub_task_wq, &obj[i].sub_task_work);

   }


}

static void sub_task_work_handler(struct work_struct work)
{
    Ftrace marker start;

    Ftrace marker end
}

Ideally I was expecting when work is queued to three different work
queues, it should run in parallel but it is not doing as per expected.
Let me know about this.

Regards,
Deepa







On Sat, Sep 22, 2012 at 7:18 AM, Daniel Taylor <Daniel.Taylor@wdc.com> wrote:
>
> ...
>
>> >> So on so forth.
>> >> Anyway how can you write chunks of data in parallel when
>> >> already some worker
>> >> thread is writing i.e. the system is busy.
>> >> Analogy: Suppose you are ambidextrous and you are eating.Can
>> >> you eat with
>> >> both of your hands at a time?So worker thread are like your
>> >> hands and keeping
>> >> you fed all the time is the concept of concurrency.
>> >>
>> >> I am not an expert on this but from Tejun's reply I could
>> >> make out this.
>> >> Please correct me If I have wrongly understood the concept
>> >> based on this mail
>> >
>> > I don't know how many cores are in the CPU Deepawali's
>> using, but if I have four,
>> Assuming single core,Is my explanation correct about concurrency?
>
> It is possible for his tasks to complete before scheduling occurs
> again.  Consuming all of the CPU and having no blocking action,
> yes, then the tasks will run consecutively.
>
> ...

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Work queue questions
       [not found]                         ` <CAHCeSFqmeOkKySxMUXgtnev+HL-NC6MdmeuDYONymYaNczb7RA@mail.gmail.com>
@ 2012-09-24 16:56                           ` Deepawali Verma
  2012-09-24 18:10                             ` Tejun Heo
  2012-09-24 17:07                           ` Chinmay V S
  1 sibling, 1 reply; 22+ messages in thread
From: Deepawali Verma @ 2012-09-24 16:56 UTC (permalink / raw)
  To: Chinmay V S; +Cc: Daniel Taylor, anish singh, Tejun Heo, linux-kernel

Hi,

This is sample code snippet as I cannot post my project code. In
reality here, this work handler is copying the big chunks of data that
code is
 here in my driver. This is running on quad core cortex A9 Thats why I
asked. If there are 4 cpu cores, then there must be parallelism. Now
Tajun, what do you say?

 Regards,
 Deepa


> On Monday, September 24, 2012, Chinmay V S wrote:
>>
>> There is nothing in the sub_task_work_handler() to keep the CPU occupied.
>> Try adding a significant amount of work in it to keep it occupied. Also
>> are
>> you running on a SMP(multicore) system?...
>>
>> On Mon, Sep 24, 2012 at 12:55 PM, Deepawali Verma <dverma249@gmail.com>
>> wrote:
>>>
>>> Hi Tejun,
>>>
>>> Here are some code snippets from my device driver:
>>>
>>> #defind NUMBER_OF_SUBTASKS 3
>>>
>>> struct my_driver_object
>>> {
>>>         struct workqueue_struct *sub_task_wq;
>>>         struct work_struct sub_task_work;
>>>         char my_obj_wq_name[80];
>>>         int task_id;
>>> };
>>>
>>> struct my_driver_object obj[3];
>>>
>>>
>>> void my_driver_init(void)
>>> {
>>>    int i =0;
>>>    memset(my_obj_wq_name,0,80);
>>>   --------------------------------------
>>>   for (i =0; i<3; i++)
>>>   {
>>>       snprintf(obj[i].my_obj_wq_name,80, "Task-wq:%d",i);
>>>       obj.sub_task_wq =
>>> alloc_workqueue(obj[i].my_obj_wq_name,WQ_UNBOUND,1);
>>>       INIT_WORK(&obj[i].sub_task_work, sub_task_work_handler);
>>>   }
>>>
>>>   --------------------------------------
>>> }
>>>
>>> void start_sub_tasks()
>>> {
>>>    int i =0;
>>>    for (i =0; i<3; i++)
>>>    {
>>>         queue_work(obj[i].sub_task_wq, &obj[i].sub_task_work);
>>>
>>>    }
>>>
>>>
>>> }
>>>
>>> static void sub_task_work_handler(struct work_struct work)
>>> {
>>>     Ftrace marker start;
>>>
>>>     Ftrace marker end
>>> }
>>>
>>> Ideally I was expecting when work is queued to three different work
>>> queues, it should run in parallel but it is not doing as per expected.
>>> Let me know about this.
>>>
>>> Regards,
>>> Deepa
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Sat, Sep 22, 2012 at 7:18 AM, Daniel Taylor <Daniel.Taylor@wdc.com>
>>> wrote:
>>> >
>>> > ...
>>> >
>>> >> >> So on so forth.
>>> >> >> Anyway how can you write chunks of data in parallel when
>>> >> >> already some worker
>>> >> >> thread is writing i.e. the system is busy.
>>> >> >> Analogy: Suppose you are ambidextrous and you are eating.Can
>>> >> >> you eat with
>>> >> >> both of your hands at a time?So worker thread are like your
>>> >> >> hands and keeping
>>> >> >> you fed all the time is the concept of concurrency.
>>> >> >>
>>> >> >> I am not an expert on this but from Tejun's reply I could
>>> >> >> make out this.
>>> >> >> Please correct me If I have wrongly understood the concept
>>> >> >> based on this mail
>>> >> >
>>> >> > I don't know how many cores are in the CPU Deepawali's
>>> >> using, but if I have four,
>>> >> Assuming single core,Is my explanation correct about concurrency?
>>> >
>>> > It is possible for his tasks to complete before scheduling occurs
>>> > again.  Consuming all of the CPU and having no blocking action,
>>> > yes, then the tasks will run consecutively.
>>> >
>>> > ...
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe linux-kernel"
>>> in
>>> the body of a message to majordomo@vger.kernel.org
>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>> Please read the FAQ at  http://www.tux.org/lkml/
>>
>>
>

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Work queue questions
       [not found]                         ` <CAHCeSFqmeOkKySxMUXgtnev+HL-NC6MdmeuDYONymYaNczb7RA@mail.gmail.com>
  2012-09-24 16:56                           ` Deepawali Verma
@ 2012-09-24 17:07                           ` Chinmay V S
  1 sibling, 0 replies; 22+ messages in thread
From: Chinmay V S @ 2012-09-24 17:07 UTC (permalink / raw)
  To: Deepawali Verma; +Cc: Daniel Taylor, anish singh, Tejun Heo, linux-kernel

Hi,

Looking at the timestamps in your previous logs(copied below for reference),

     kworker/u:1-21    [000]   110.964895: task_event: MYTASKJOB2381 XStarted
     kworker/u:1-21    [000]   110.964909: task_event: MYTASKJOB2381 Xstopped
     kworker/u:1-21    [000]   110.965137: task_event: MYTASKJOB2382 XStarted
     kworker/u:1-21    [000]   110.965154: task_event: MYTASKJOB2382 Xstopped
     kworker/u:5-3724  [000]   110.965311: task_event: MYTASKJOB2383 XStarted
     kworker/u:5-3724  [000]   110.965325: task_event: MYTASKJOB2383 Xstopped

110.964895 to 110.964909 is 0.014ms. The supposedly "large" amount of
copying that you assume is NOT that large. Hence the kworker thread is
able to execute your work task quickly and is again available when the
next work task is ready to be scheduled.

The "large" amount of copying that you are doing is probably too small
or being run asynchronously(DMA?) Hence the work tasks finish quickly.

On Mon, Sep 24, 2012 at 10:21 PM, Deepawali Verma <dverma249@gmail.com> wrote:
>
> Hi,
>
> This is sample code snippet as I cannot post my project code. In reality here, this work handler is copying the big chunks of data that code is here in my driver. This is running on quad core cortex A9 Thats why I have point. If there are 4 cpu cores, then there must be parallelism. Now Tajun, what do you say?
>
> Regards,
> Deepa
>
> On Monday, September 24, 2012, Chinmay V S wrote:
>>
>> There is nothing in the sub_task_work_handler() to keep the CPU occupied. Try adding a significant amount of work in it to keep it occupied. Also are you running on a SMP(multicore) system?...
>>
>> On Mon, Sep 24, 2012 at 12:55 PM, Deepawali Verma <dverma249@gmail.com> wrote:
>>>
>>> Hi Tejun,
>>>
>>> Here are some code snippets from my device driver:
>>>
>>> #defind NUMBER_OF_SUBTASKS 3
>>>
>>> struct my_driver_object
>>> {
>>>         struct workqueue_struct *sub_task_wq;
>>>         struct work_struct sub_task_work;
>>>         char my_obj_wq_name[80];
>>>         int task_id;
>>> };
>>>
>>> struct my_driver_object obj[3];
>>>
>>>
>>> void my_driver_init(void)
>>> {
>>>    int i =0;
>>>    memset(my_obj_wq_name,0,80);
>>>   --------------------------------------
>>>   for (i =0; i<3; i++)
>>>   {
>>>       snprintf(obj[i].my_obj_wq_name,80, "Task-wq:%d",i);
>>>       obj.sub_task_wq = alloc_workqueue(obj[i].my_obj_wq_name,WQ_UNBOUND,1);
>>>       INIT_WORK(&obj[i].sub_task_work, sub_task_work_handler);
>>>   }
>>>
>>>   --------------------------------------
>>> }
>>>
>>> void start_sub_tasks()
>>> {
>>>    int i =0;
>>>    for (i =0; i<3; i++)
>>>    {
>>>         queue_work(obj[i].sub_task_wq, &obj[i].sub_task_work);
>>>
>>>    }
>>>
>>>
>>> }
>>>
>>> static void sub_task_work_handler(struct work_struct work)
>>> {
>>>     Ftrace marker start;
>>>
>>>     Ftrace marker end
>>> }
>>>
>>> Ideally I was expecting when work is queued to three different work
>>> queues, it should run in parallel but it is not doing as per expected.
>>> Let me know about this.
>>>
>>> Regards,
>>> Deepa
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Sat, Sep 22, 2012 at 7:18 AM, Daniel Taylor <Daniel.Taylor@wdc.com> wrote:
>>> >
>>> > ...
>>> >
>>> >> >> So on so forth.
>>> >> >> Anyway how can you write chunks of data in parallel when
>>> >> >> already some worker
>>> >> >> thread is writing i.e. the system is busy.
>>> >> >> Analogy: Suppose you are ambidextrous and you are eating.Can
>>> >> >> you eat with
>>> >> >> both of your hands at a time?So worker thread are like your
>>> >> >> hands and keeping
>>> >> >> you fed all the time is the concept of concurrency.
>>> >> >>
>>> >> >> I am not an expert on this but from Tejun's reply I could
>>> >> >> make out this.
>>> >> >> Please correct me If I have wrongly understood the concept
>>> >> >> based on this mail
>>> >> >
>>> >> > I don't know how many cores are in the CPU Deepawali's
>>> >> using, but if I have four,
>>> >> Assuming single core,Is my explanation correct about concurrency?
>>> >
>>> > It is possible for his tasks to complete before scheduling occurs
>>> > again.  Consuming all of the CPU and having no blocking action,
>>> > yes, then the tasks will run consecutively.
>>> >
>>> > ...
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
>>> the body of a message to majordomo@vger.kernel.org
>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>> Please read the FAQ at  http://www.tux.org/lkml/
>>
>>

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Work queue questions
  2012-09-24 16:56                           ` Deepawali Verma
@ 2012-09-24 18:10                             ` Tejun Heo
  2012-09-24 19:57                               ` Deepawali Verma
  0 siblings, 1 reply; 22+ messages in thread
From: Tejun Heo @ 2012-09-24 18:10 UTC (permalink / raw)
  To: Deepawali Verma; +Cc: Chinmay V S, Daniel Taylor, anish singh, linux-kernel

On Mon, Sep 24, 2012 at 05:56:10PM +0100, Deepawali Verma wrote:
> Hi,
> 
> This is sample code snippet as I cannot post my project code. In
> reality here, this work handler is copying the big chunks of data that
> code is
>  here in my driver. This is running on quad core cortex A9 Thats why I
> asked. If there are 4 cpu cores, then there must be parallelism. Now
> Tajun, what do you say?

My name is Tejun and please lose the frigging attitude when you're
asking things.

> >>> alloc_workqueue(obj[i].my_obj_wq_name,WQ_UNBOUND,1);

Especially if you're not properly reading any of the documentation,
function comment and my explicit response mentioning @max_active. :(

-- 
tejun

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Work queue questions
  2012-09-24 18:10                             ` Tejun Heo
@ 2012-09-24 19:57                               ` Deepawali Verma
  2012-09-24 20:08                                 ` Tejun Heo
  0 siblings, 1 reply; 22+ messages in thread
From: Deepawali Verma @ 2012-09-24 19:57 UTC (permalink / raw)
  To: Tejun Heo; +Cc: Chinmay V S, Daniel Taylor, anish singh, linux-kernel

Hi Tejun,

May be I misunderstood, I read in the documentation about max_active.
In this case, max_active is 1, but I created three workqueues, do you
mean to say for this case, single thread can process three requests
queued up in the three different workqueues.

Sorry, if I misunderstood.

Regards,
Deepa



On Mon, Sep 24, 2012 at 7:10 PM, Tejun Heo <tj@kernel.org> wrote:
> On Mon, Sep 24, 2012 at 05:56:10PM +0100, Deepawali Verma wrote:
>> Hi,
>>
>> This is sample code snippet as I cannot post my project code. In
>> reality here, this work handler is copying the big chunks of data that
>> code is
>>  here in my driver. This is running on quad core cortex A9 Thats why I
>> asked. If there are 4 cpu cores, then there must be parallelism. Now
>> Tajun, what do you say?
>
> My name is Tejun and please lose the frigging attitude when you're
> asking things.
>
>> >>> alloc_workqueue(obj[i].my_obj_wq_name,WQ_UNBOUND,1);
>
> Especially if you're not properly reading any of the documentation,
> function comment and my explicit response mentioning @max_active. :(
>
> --
> tejun

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Work queue questions
  2012-09-24 19:57                               ` Deepawali Verma
@ 2012-09-24 20:08                                 ` Tejun Heo
  2012-09-24 20:52                                   ` Deepawali Verma
  2012-09-25  3:05                                   ` anish singh
  0 siblings, 2 replies; 22+ messages in thread
From: Tejun Heo @ 2012-09-24 20:08 UTC (permalink / raw)
  To: Deepawali Verma; +Cc: Chinmay V S, Daniel Taylor, anish singh, linux-kernel

Hello,

On Mon, Sep 24, 2012 at 08:57:40PM +0100, Deepawali Verma wrote:
> May be I misunderstood, I read in the documentation about max_active.
> In this case, max_active is 1, but I created three workqueues, do you

I see.  Why are you doing that?  Is there ordering requirement?  Why
not just use system_unbound_wq?

> mean to say for this case, single thread can process three requests
> queued up in the three different workqueues.

In the following execution log you posted,

  kworker/u:1-21    [000]   110.964895: task_event: MYTASKJOB2381 XStarted
  kworker/u:1-21    [000]   110.964909: task_event: MYTASKJOB2381 Xstopped
  kworker/u:1-21    [000]   110.965137: task_event: MYTASKJOB2382 XStarted
  kworker/u:1-21    [000]   110.965154: task_event: MYTASKJOB2382 Xstopped
  kworker/u:5-3724  [000]   110.965311: task_event: MYTASKJOB2383 XStarted
  kworker/u:5-3724  [000]   110.965325: task_event: MYTASKJOB2383 Xstopped

The first two got executed on the same worker thread but the third one
is on a different one.  It really looks like you just don't have large
enough work for scheduler to interleave them or migrate workers to
different CPUs.

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Work queue questions
  2012-09-24 20:08                                 ` Tejun Heo
@ 2012-09-24 20:52                                   ` Deepawali Verma
  2012-09-24 20:54                                     ` Tejun Heo
  2012-09-25  3:05                                   ` anish singh
  1 sibling, 1 reply; 22+ messages in thread
From: Deepawali Verma @ 2012-09-24 20:52 UTC (permalink / raw)
  To: Tejun Heo; +Cc: Chinmay V S, Daniel Taylor, anish singh, linux-kernel

Hi Tejun,

I do not have ordering as requirement. I can use system work queue as
well. what is max_active by default for system wq per cpu?

Regards,
Deepa

On Mon, Sep 24, 2012 at 9:08 PM, Tejun Heo <tj@kernel.org> wrote:
> Hello,
>
> On Mon, Sep 24, 2012 at 08:57:40PM +0100, Deepawali Verma wrote:
>> May be I misunderstood, I read in the documentation about max_active.
>> In this case, max_active is 1, but I created three workqueues, do you
>
> I see.  Why are you doing that?  Is there ordering requirement?  Why
> not just use system_unbound_wq?
>
>> mean to say for this case, single thread can process three requests
>> queued up in the three different workqueues.
>
> In the following execution log you posted,
>
>   kworker/u:1-21    [000]   110.964895: task_event: MYTASKJOB2381 XStarted
>   kworker/u:1-21    [000]   110.964909: task_event: MYTASKJOB2381 Xstopped
>   kworker/u:1-21    [000]   110.965137: task_event: MYTASKJOB2382 XStarted
>   kworker/u:1-21    [000]   110.965154: task_event: MYTASKJOB2382 Xstopped
>   kworker/u:5-3724  [000]   110.965311: task_event: MYTASKJOB2383 XStarted
>   kworker/u:5-3724  [000]   110.965325: task_event: MYTASKJOB2383 Xstopped
>
> The first two got executed on the same worker thread but the third one
> is on a different one.  It really looks like you just don't have large
> enough work for scheduler to interleave them or migrate workers to
> different CPUs.
>
> Thanks.
>
> --
> tejun

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Work queue questions
  2012-09-24 20:52                                   ` Deepawali Verma
@ 2012-09-24 20:54                                     ` Tejun Heo
  0 siblings, 0 replies; 22+ messages in thread
From: Tejun Heo @ 2012-09-24 20:54 UTC (permalink / raw)
  To: Deepawali Verma; +Cc: Chinmay V S, Daniel Taylor, anish singh, linux-kernel

On Mon, Sep 24, 2012 at 09:52:14PM +0100, Deepawali Verma wrote:
> I do not have ordering as requirement. I can use system work queue as
> well. what is max_active by default for system wq per cpu?

For system_unbound_wq, it's the larger one of 512 and 4 * #cpus.

-- 
tejun

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Work queue questions
  2012-09-24 20:08                                 ` Tejun Heo
  2012-09-24 20:52                                   ` Deepawali Verma
@ 2012-09-25  3:05                                   ` anish singh
  1 sibling, 0 replies; 22+ messages in thread
From: anish singh @ 2012-09-25  3:05 UTC (permalink / raw)
  To: Tejun Heo; +Cc: Deepawali Verma, Chinmay V S, Daniel Taylor, linux-kernel

On Tue, Sep 25, 2012 at 1:38 AM, Tejun Heo <tj@kernel.org> wrote:
> Hello,
>
> On Mon, Sep 24, 2012 at 08:57:40PM +0100, Deepawali Verma wrote:
>> May be I misunderstood, I read in the documentation about max_active.
>> In this case, max_active is 1, but I created three workqueues, do you
>
> I see.  Why are you doing that?  Is there ordering requirement?  Why
> not just use system_unbound_wq?
>
>> mean to say for this case, single thread can process three requests
>> queued up in the three different workqueues.
>
> In the following execution log you posted,
>
>   kworker/u:1-21    [000]   110.964895: task_event: MYTASKJOB2381 XStarted
>   kworker/u:1-21    [000]   110.964909: task_event: MYTASKJOB2381 Xstopped
>   kworker/u:1-21    [000]   110.965137: task_event: MYTASKJOB2382 XStarted
>   kworker/u:1-21    [000]   110.965154: task_event: MYTASKJOB2382 Xstopped
>   kworker/u:5-3724  [000]   110.965311: task_event: MYTASKJOB2383 XStarted
>   kworker/u:5-3724  [000]   110.965325: task_event: MYTASKJOB2383 Xstopped
>
> The first two got executed on the same worker thread but the third one
> is on a different one.  It really looks like you just don't have large
> enough work for scheduler to interleave them or migrate workers to
> different CPUs.
Tejun, It is also very well mentioned in the documentation
Documentation/workqueue.txt
Quoting from it:
"Some users depend on the strict execution ordering of ST wq.  The
combination of @max_active of 1 and WQ_UNBOUND is used to achieve this
behavior.  Work items on such wq are always queued to the unbound gcwq
and only one work item can be active at any given time thus achieving
the same ordering property as ST wq."
So what Deepawali Verma is observing is the same.
>
> Thanks.
>
> --
> tejun

^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2012-09-25  3:06 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-09-21 17:35 Work queue questions Dinky Verma
2012-09-21 17:49 ` Tejun Heo
2012-09-21 18:30   ` Deepawali Verma
2012-09-21 18:35     ` Tejun Heo
2012-09-21 19:26       ` Deepawali Verma
2012-09-21 19:27         ` Tejun Heo
2012-09-21 19:35           ` Deepawali Verma
2012-09-21 19:40             ` Tejun Heo
2012-09-22  4:24             ` anish singh
2012-09-22  5:27               ` Daniel Taylor
2012-09-22  6:05                 ` anish singh
2012-09-22  6:12                   ` Tejun Heo
2012-09-22  6:18                   ` Daniel Taylor
2012-09-24  7:25                     ` Deepawali Verma
     [not found]                       ` <CAK-9PRB7KvPNgcsXiNG08-7OdrkkNc2ushusXh9rVm93J0xcHA@mail.gmail.com>
     [not found]                         ` <CAHCeSFqmeOkKySxMUXgtnev+HL-NC6MdmeuDYONymYaNczb7RA@mail.gmail.com>
2012-09-24 16:56                           ` Deepawali Verma
2012-09-24 18:10                             ` Tejun Heo
2012-09-24 19:57                               ` Deepawali Verma
2012-09-24 20:08                                 ` Tejun Heo
2012-09-24 20:52                                   ` Deepawali Verma
2012-09-24 20:54                                     ` Tejun Heo
2012-09-25  3:05                                   ` anish singh
2012-09-24 17:07                           ` Chinmay V S

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).