From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1756315AbcA3DgS (ORCPT );
	Fri, 29 Jan 2016 22:36:18 -0500
Received: from shelob.surriel.com ([74.92.59.67]:51558 "EHLO shelob.surriel.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1753836AbcA3DgL (ORCPT );
	Fri, 29 Jan 2016 22:36:11 -0500
From: riel@redhat.com
To: linux-kernel@vger.kernel.org
Cc: tglx@linutronix.de, mingo@kernel.org, luto@amacapital.net,
	fweisbec@gmail.com, peterz@infradead.org, clark@redhat.com
Subject: [PATCH 0/2] sched,time: reduce nohz_full syscall overhead 40%
Date: Fri, 29 Jan 2016 22:36:01 -0500
Message-Id: <1454124965-13974-1-git-send-email-riel@redhat.com>
X-Mailer: git-send-email 2.5.0
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

Running with nohz_full introduces a fair amount of overhead. Specifically,
various things that are usually done from the timer interrupt are now done
at syscall, irq, and guest entry and exit times. However, some of the code
that is called every single time has only ever worked at jiffy resolution.

The code in __acct_update_integrals was also doing some unnecessary
calculations.

Getting rid of the unnecessary calculations, without changing any of the
functionality in __acct_update_integrals, gets us about an 11% win.

Not calling the time statistics updating code more than once per jiffy,
as is done on housekeeping CPUs and on all the CPUs of a non-nohz_full
system, shaves off a further 30%.

I tested this series with a microbenchmark calling an invalid syscall
number ten million times in a row, on a nohz_full CPU.

Run times for the microbenchmark:

4.4				3.8 seconds
4.5-rc1				3.7 seconds
4.5-rc1 + first patch		3.3 seconds
4.5-rc1 + first 3 patches	3.1 seconds
4.5-rc1 + all patches		2.3 seconds