From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 8 Jul 2013 13:54:01 -0700
From: Andi Kleen
To: Dave Hansen
Cc: Stephane Eranian, Dave Hansen, LKML, Peter Zijlstra, "mingo@elte.hu", dave.hansen@linux.jf.intel.com, Jiri Olsa
Subject: Re: [PATCH] perf: fix interrupt handler timing harness
Message-ID: <20130708205401.GG5643@tassilo.jf.intel.com>
References: <20130704223010.GA30625@quad> <51DACE08.5030109@intel.com> <51DB1B75.8060303@intel.com> <51DB2271.1010009@sr71.net>
In-Reply-To: <51DB2271.1010009@sr71.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Mailing-List: linux-kernel@vger.kernel.org

> I did quite a bit of ftracing to look for spots inside the handler
> which were taking large amounts of time. There were none. The
> execution time was spread very evenly over the entire NMI handler. It
> didn't appear to be any individual hot cachelines or anything silly
> like sitting in a loop handling lots of PMU events.

In some cases callgraphs seemed to be quite expensive (a large fraction
of the handler time).

-Andi
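[Editorial note: the kind of ftrace measurement Dave describes can be sketched with the function_graph tracer via the standard tracefs interface. This is an illustrative fragment, not the exact commands used in the thread; it assumes tracefs mounted at /sys/kernel/tracing, root privileges, and a kernel with CONFIG_FUNCTION_GRAPH_TRACER. The symbol name `perf_event_nmi_handler` is the x86 perf NMI entry point of that era.]

```shell
# Sketch: time spent inside the perf NMI handler with function_graph.
# Assumes tracefs at /sys/kernel/tracing and root; adjust the symbol
# name if your kernel names the handler differently.
cd /sys/kernel/tracing

# Trace only the NMI handler and its callees.
echo perf_event_nmi_handler > set_graph_function
echo function_graph > current_tracer

# Collect a few seconds of samples while perf interrupts fire.
echo 1 > tracing_on
sleep 5
echo 0 > tracing_on

# Each entry shows per-call duration, so hot spots (or the absence of
# any single hot spot, as Dave reports) are visible directly.
grep -A2 perf_event_nmi_handler trace | head -40

# Clean up.
echo nop > current_tracer
echo > set_graph_function
```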