Date: Tue, 5 Sep 2017 09:38:45 +0900
From: Byungchul Park <byungchul.park@lge.com>
To: Peter Zijlstra
Cc: Byungchul Park, Ingo Molnar, Tejun Heo, Boqun Feng, david@fromorbit.com,
    Johannes Berg, oleg@redhat.com, linux-kernel@vger.kernel.org,
    kernel-team@lge.com
Subject: Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
Message-ID: <20170905003844.GO3240@X58A-UD3R>
In-Reply-To: <20170904114248.kls4jv2ggsv46mli@hirez.programming.kicks-ass.net>
References: <20170831081501.GJ3240@X58A-UD3R>
 <20170831083453.5tfjofzk7idthsof@hirez.programming.kicks-ass.net>
 <20170901020512.GK3240@X58A-UD3R>
 <20170901094747.iv6s532ccuuzpry2@hirez.programming.kicks-ass.net>
 <20170901101629.GL3240@X58A-UD3R>
 <20170901123856.p2trpebau57yxftc@hirez.programming.kicks-ass.net>
 <20170901163852.ckslrgldsalqmg3c@hirez.programming.kicks-ass.net>
 <20170904013031.GM3240@X58A-UD3R>
 <20170904114248.kls4jv2ggsv46mli@hirez.programming.kicks-ass.net>

On Mon, Sep 04, 2017 at 01:42:48PM +0200, Peter Zijlstra wrote:
> On Mon, Sep 04, 2017 at 10:30:32AM +0900, Byungchul Park wrote:
> > On Fri, Sep 01, 2017 at 06:38:52PM +0200, Peter Zijlstra wrote:
> > > And get tangled up with the workqueue annotation again, no thanks.
> > > Having the first few works see the thread setup isn't worth it.
> > >
> > > And your work_id annotation had the same problem.
> >
> > I keep asking you for an example because I really understand you.
> >
> > Fix my problematic example with your patches,
> >
> > or,
> >
> > Show me a problematic scenario with my original code, you expect.
> >
> > Whatever, it would be helpful to understand you.
>
> I _really_ don't understand what you're worried about. Is it the kthread
> create and workqueue init or the pool->lock that is released/acquired in
> process_one_work()?

s/in process_one_work()/in all worker code including setup code/

The original code was already designed to handle real dependencies well,
but you invalidated it _w/o_ any reason; that's why I don't agree with
your patches. Your patches only avoid the wq issue we are focusing on
now. Look at:

   worker thread                          another context
   -------------                          ---------------
                                          wait_for_completion()
         |
         | (1)
         v
    +---------+
    |  Work A | (2)
    +---------+
         |
         | (3)
         v
    +---------+
    |  Work B | (4)
    +---------+
         |
         | (5)
         v
    +---------+
    |  Work C | (6)
    +---------+
         |
         v

We have to consider the whole context of the worker to build dependencies
with a crosslock, e.g. wait_for_completion(). The only thing we have to
care about here is making all works, e.g. (2), (4) and (6), independent of
each other, because workqueue does _concurrency control_. As I said last
year at the very beginning, for works the control is not applied to, e.g.
max_active == 1, we don't need that isolation. I said it's a future work.
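To make the max_active == 1 case concrete, here is a minimal, hypothetical
sketch of my own (not code from the patches): on an ordered workqueue the
works really are serialized, so an ordering like Work A -> Work B above is
a real dependency, and the chain through wait_for_completion() is exactly
what lockdep should be able to see.

/*
 * Hypothetical example: on an ordered workqueue (max_active == 1),
 * work_b is serialized behind work_a.  If work_a waits for a completion
 * that only work_b can signal, the single worker can never make
 * progress -- a real deadlock built from the chain
 * wait_for_completion() -> Work A -> Work B.
 */
#include <linux/workqueue.h>
#include <linux/completion.h>

static DECLARE_COMPLETION(done);

static void work_a_fn(struct work_struct *w)
{
	wait_for_completion(&done);	/* blocks the only worker */
}

static void work_b_fn(struct work_struct *w)
{
	complete(&done);		/* never runs: queued behind work_a */
}

static DECLARE_WORK(work_a, work_a_fn);
static DECLARE_WORK(work_b, work_b_fn);

static void trigger(void)
{
	struct workqueue_struct *wq;

	wq = alloc_ordered_workqueue("example", 0);	/* max_active == 1 */
	if (!wq)
		return;

	queue_work(wq, &work_a);
	queue_work(wq, &work_b);	/* really depends on work_a finishing */
}

With max_active > 1, another worker could pick up work_b and signal the
completion, so the same ordering would not be a real dependency there;
that is why such works must be isolated from each other, and why the
max_active == 1 case can be left as a future work.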
It would have been much easier to communicate with each other if you had
*tried* to understand my examples, as you are doing now, or had at least
*tried* to give me one example. You didn't even *try*. The only thing I
want to ask of you is to *try* to understand my opinions where we
conflict. Now do you understand what I intended? Is it still insufficient?