From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1755385AbbFPTEb (ORCPT );
	Tue, 16 Jun 2015 15:04:31 -0400
Received: from mga09.intel.com ([134.134.136.24]:35084 "EHLO mga09.intel.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1754746AbbFPTEY (ORCPT );
	Tue, 16 Jun 2015 15:04:24 -0400
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.13,627,1427785200"; d="scan'208";a="509342249"
Date: Tue, 16 Jun 2015 12:01:19 -0700 (PDT)
From: Vikas Shivappa
X-X-Sender: vikas@vshiva-Udesk
To: Thomas Gleixner
cc: Vikas Shivappa ,
	linux-kernel@vger.kernel.org, vikas.shivappa@intel.com, x86@kernel.org,
	hpa@zytor.com, mingo@kernel.org, peterz@infradead.org,
	matt.fleming@intel.com, will.auld@intel.com, linux-rdt@eclists.intel.com
Subject: Re: [PATCH 09/10] x86/intel_rdt: Hot cpu support for Cache Allocation
In-Reply-To: 
Message-ID: 
References: <1434133037-25189-1-git-send-email-vikas.shivappa@linux.intel.com>
	<1434133037-25189-10-git-send-email-vikas.shivappa@linux.intel.com>
User-Agent: Alpine 2.10 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; format=flowed; charset=US-ASCII
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, 16 Jun 2015, Thomas Gleixner wrote:

> On Fri, 12 Jun 2015, Vikas Shivappa wrote:
>> +static inline void intel_rdt_cpu_start(int cpu)
>> +{
>> +	struct intel_pqr_state *state = &per_cpu(pqr_state, cpu);
>> +
>> +	state->closid = 0;
>> +	mutex_lock(&rdt_group_mutex);
>
> This is called from CPU_STARTING, which runs on the starting cpu with
> interrupts disabled. Clearly never tested with any of the mandatory
> debug configs enabled.

But can this race with cbm_update_all calling on_each_cpu_mask? Or, in
other words, the lock keeps on_each_cpu_mask from racing with the hot cpu
code updating rdt_cpumask, since on_each_cpu_mask is also always called
with the lock held.
It's tested on the 0-day build, which should include the debug configs. Will
add a tested tag.

Thanks,
Vikas

>
> Thanks
>
> 	tglx
>