Subject: Re: [PATCH v2 2/2] writeback: Rework wb_[dec|inc]_stat family of functions
To: Tejun Heo
Cc: jbacik@fb.com, linux-kernel@vger.kernel.org, mgorman@techsingularity.net
From: Nikolay Borisov
Date: Tue, 20 Jun 2017 23:32:49 +0300
In-Reply-To: <20170620203049.GH21326@htj.duckdns.org>

On 20.06.2017 23:30, Tejun Heo wrote:
> Hello,
>
> On Tue, Jun 20, 2017 at 11:28:30PM +0300, Nikolay Borisov wrote:
>>> Heh, looks like I was confused. __percpu_counter_add() is not
>>> irq-safe. It disables preemption and uses __this_cpu_read(), so
>>> there's no protection against irq. If writeback statistics want
>>> irq-safe operations, and they do, they would need these separate
>>> operations. Am I missing something?
>>
>> So looking at the history of the commit, initially there was
>> preempt_disable + this_cpu_ptr, which was later changed in:
>>
>> 819a72af8d66 ("percpucounter: Optimize __percpu_counter_add a bit
>> through the use of this_cpu() options.")
>>
>> I believe that having __this_cpu_read ensures we get an atomic
>> snapshot of the variable, but when we do the actual write, e.g. the
>> else {} branch, we actually use this_cpu_add, which ought to be
>> preempt- and irq-safe, meaning we won't get a torn write. In essence
>> we have atomic reads by merit of __this_cpu_read + atomic writes by
>> merit of using raw_spin_lock_irqsave in the if () branch and
>> this_cpu_add in the else {} branch.
>
> Ah, you're right. The initial read is speculative. The slow path is
> protected by the irq spinlock. The fast path is this_cpu_add(), which
> is irq-safe. We really need to document these functions.
>
> Can I bother you with adding documentation to them while you're at it?

Sure, I will likely resend with a fresh head on my shoulders.

>
> Thanks.
>