Date: Thu, 18 Jul 2019 22:49:50 +0800
From: Herbert Xu
To: Daniel Jordan
Cc: Steffen Klassert, Andrea Parri, Boqun Feng, "Paul E. McKenney",
	Peter Zijlstra, linux-arch@vger.kernel.org,
	linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org,
	Mathias Krause
Subject: Re: [PATCH] padata: Replace delayed timer with immediate workqueue in padata_reorder
Message-ID: <20190718144950.yc6sambgdsz7vrvq@gondor.apana.org.au>
References: <20190716163253.24377-1-daniel.m.jordan@oracle.com>
	<20190717111147.t776zlyhdqyl5dhc@gondor.apana.org.au>
	<20190717232136.pboms73sqf6fdzic@ca-dmjordan1.us.oracle.com>
	<20190718033008.wle67s7esg27mrtz@gondor.apana.org.au>
	<20190718142515.teinr4da3gps5r7a@ca-dmjordan1.us.oracle.com>
In-Reply-To: <20190718142515.teinr4da3gps5r7a@ca-dmjordan1.us.oracle.com>

On Thu, Jul 18, 2019 at 10:25:15AM -0400, Daniel Jordan wrote:
>
> Which memory barrier do you mean? I think you're referring to the one
> that atomic_inc might provide? If so, the memory model maintainers can
> correct me here, but my understanding is that RMW atomic ops that
> don't return values are unordered, so switching the lines has no
> effect.
>
> Besides, the smp_mb__after_atomic is what orders the list insertion
> with the trylock of pd->lock.

The primitive smp_mb__after_atomic only provides a barrier when used
in conjunction with atomic_inc (and similar atomic ops).  The actual
barrier may live in either smp_mb__after_atomic or the atomic op
itself (the latter is the case on x86).  Since we need the barrier to
take effect after the list insertion, we must move both of them after
the list_add_tail.

Cheers,
-- 
Email: Herbert Xu
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
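
[Archive note: the change Herbert is asking for would look roughly like
the hunk below.  The surrounding code in padata_do_serial is assumed
from the thread rather than quoted in it (the discussion implies the
increment currently precedes the insertion), so treat this as a sketch
of the suggestion, not the final patch.]

```
-	atomic_inc(&pd->reorder_objects);
-	smp_mb__after_atomic();
 	list_add_tail(&padata->list, &pqueue->reorder.list);
+	atomic_inc(&pd->reorder_objects);
+	/* Pairs with the barrier on the reader side; orders the
+	 * list_add_tail above before the trylock of pd->lock. */
+	smp_mb__after_atomic();
```

With this ordering, the barrier (whether it comes from atomic_inc
itself, as on x86, or from smp_mb__after_atomic) is guaranteed to sit
after the list insertion, which is what the trylock side relies on.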