From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
        id S1751883AbdFTLgz (ORCPT );
        Tue, 20 Jun 2017 07:36:55 -0400
Received: from mx2.suse.de ([195.135.220.15]:54041 "EHLO mx1.suse.de"
        rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP
        id S1751021AbdFTLgn (ORCPT );
        Tue, 20 Jun 2017 07:36:43 -0400
From: Nikolay Borisov 
To: tj@kernel.org
Cc: jbacik@fb.com, jack@suse.cz, linux-kernel@vger.kernel.org,
        hannes@cmpxchg.org, mgorman@techsingularity.net,
        Nikolay Borisov 
Subject: [PATCH 2/2] writeback: Rework wb_[dec|inc]_stat family of functions
Date: Tue, 20 Jun 2017 14:36:30 +0300
Message-Id: <1497958590-6639-2-git-send-email-nborisov@suse.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1497958590-6639-1-git-send-email-nborisov@suse.com>
References: <1497958590-6639-1-git-send-email-nborisov@suse.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

Currently the writeback statistics code uses percpu counters to hold
various statistics. As such we have two families of functions - those
which disable local irqs and those which don't, whose names begin with a
double underscore. However, both families end up calling __add_wb_stat,
which in turn calls percpu_counter_add_batch, which is already SMP-safe.
Exploiting this fact allows the __wb_* functions to be eliminated, since
they in fact call an SMP-safe primitive. Furthermore, refactor the wb_*
functions to call __add_wb_stat directly, without the irq-disabling
dance. This will likely result in better runtime for code which modifies
the stat counters.

Signed-off-by: Nikolay Borisov 
---
Hello Tejun,

This patch resulted from me reading your feedback on Josef's memory
throttling prep patch https://patchwork.kernel.org/patch/9395219/ . If
these changes are merged, his series can drop the irq clustering and use
a straight __add_wb_stat call. I'd like to see his series merged sooner
rather than later, hence this cleanup.
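For reference only (not part of the change), below is a rough sketch of
what percpu_counter_add_batch() in lib/percpu_counter.c does, as I read
the current code; the per-cpu fast path plus the spinlocked fold-over
into the global count is what makes the extra local_irq_save/restore
around __add_wb_stat redundant:

void percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount,
			      s32 batch)
{
	s64 count;

	preempt_disable();
	count = __this_cpu_read(*fbc->counters) + amount;
	if (count >= batch || count <= -batch) {
		unsigned long flags;

		/* Slow path: fold the local delta into fbc->count under the lock. */
		raw_spin_lock_irqsave(&fbc->lock, flags);
		fbc->count += count;
		__this_cpu_sub(*fbc->counters, count - amount);
		raw_spin_unlock_irqrestore(&fbc->lock, flags);
	} else {
		/* Fast path: only this CPU's counter is touched. */
		this_cpu_add(*fbc->counters, amount);
	}
	preempt_enable();
}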
 include/linux/backing-dev.h | 24 ++----------------------
 mm/page-writeback.c         | 10 +++++-----
 2 files changed, 7 insertions(+), 27 deletions(-)

diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index ace73f96eb1e..e9c967b86054 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -69,34 +69,14 @@ static inline void __add_wb_stat(struct bdi_writeback *wb,
 	percpu_counter_add_batch(&wb->stat[item], amount, WB_STAT_BATCH);
 }
 
-static inline void __inc_wb_stat(struct bdi_writeback *wb,
-				 enum wb_stat_item item)
-{
-	__add_wb_stat(wb, item, 1);
-}
-
 static inline void inc_wb_stat(struct bdi_writeback *wb, enum wb_stat_item item)
 {
-	unsigned long flags;
-
-	local_irq_save(flags);
-	__inc_wb_stat(wb, item);
-	local_irq_restore(flags);
-}
-
-static inline void __dec_wb_stat(struct bdi_writeback *wb,
-				 enum wb_stat_item item)
-{
-	__add_wb_stat(wb, item, -1);
+	__add_wb_stat(wb, item, 1);
 }
 
 static inline void dec_wb_stat(struct bdi_writeback *wb, enum wb_stat_item item)
 {
-	unsigned long flags;
-
-	local_irq_save(flags);
-	__dec_wb_stat(wb, item);
-	local_irq_restore(flags);
+	__add_wb_stat(wb, item, -1);
 }
 
 static inline s64 wb_stat(struct bdi_writeback *wb, enum wb_stat_item item)
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 143c1c25d680..b7451891959a 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -601,7 +601,7 @@ static inline void __wb_writeout_inc(struct bdi_writeback *wb)
 {
 	struct wb_domain *cgdom;
 
-	__inc_wb_stat(wb, WB_WRITTEN);
+	inc_wb_stat(wb, WB_WRITTEN);
 	wb_domain_writeout_inc(&global_wb_domain, &wb->completions,
 			       wb->bdi->max_prop_frac);
 
@@ -2437,8 +2437,8 @@ void account_page_dirtied(struct page *page, struct address_space *mapping)
 		__inc_node_page_state(page, NR_FILE_DIRTY);
 		__inc_zone_page_state(page, NR_ZONE_WRITE_PENDING);
 		__inc_node_page_state(page, NR_DIRTIED);
-		__inc_wb_stat(wb, WB_RECLAIMABLE);
-		__inc_wb_stat(wb, WB_DIRTIED);
+		inc_wb_stat(wb, WB_RECLAIMABLE);
+		inc_wb_stat(wb, WB_DIRTIED);
 		task_io_account_write(PAGE_SIZE);
 		current->nr_dirtied++;
 		this_cpu_inc(bdp_ratelimits);
@@ -2745,7 +2745,7 @@ int test_clear_page_writeback(struct page *page)
 			if (bdi_cap_account_writeback(bdi)) {
 				struct bdi_writeback *wb = inode_to_wb(inode);
 
-				__dec_wb_stat(wb, WB_WRITEBACK);
+				dec_wb_stat(wb, WB_WRITEBACK);
 				__wb_writeout_inc(wb);
 			}
 		}
@@ -2791,7 +2791,7 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
 						page_index(page),
 						PAGECACHE_TAG_WRITEBACK);
 			if (bdi_cap_account_writeback(bdi))
-				__inc_wb_stat(inode_to_wb(inode), WB_WRITEBACK);
+				inc_wb_stat(inode_to_wb(inode), WB_WRITEBACK);
 
 			/*
 			 * We can come through here when swapping anonymous
-- 
2.7.4