From mboxrd@z Thu Jan  1 00:00:00 1970
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751633AbeEUKkP (ORCPT );
	Mon, 21 May 2018 06:40:15 -0400
Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:46792 "EHLO
	foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751056AbeEUKkN (ORCPT );
	Mon, 21 May 2018 06:40:13 -0400
Date: Mon, 21 May 2018 11:40:08 +0100
From: Mark Rutland <mark.rutland@arm.com>
To: Ganapatrao Kulkarni
Cc: Ganapatrao Kulkarni, linux-doc@vger.kernel.org, LKML,
	linux-arm-kernel@lists.infradead.org, Will Deacon,
	jnair@caviumnetworks.com, Robert Richter, Vadim.Lomovtsev@cavium.com,
	Jan.Glauber@cavium.com
Subject: Re: [PATCH v4 2/2] ThunderX2: Add Cavium ThunderX2 SoC UNCORE PMU driver
Message-ID: <20180521104008.z6ei5zjve7u5iwho@lakrids.cambridge.arm.com>
References: <20180425090047.6485-1-ganapatrao.kulkarni@cavium.com>
	<20180425090047.6485-3-ganapatrao.kulkarni@cavium.com>
	<20180426105938.y6unpt36lisb7kbr@lakrids.cambridge.arm.com>
	<20180521103712.gofbrjdtghfwolmd@lakrids.cambridge.arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20180521103712.gofbrjdtghfwolmd@lakrids.cambridge.arm.com>
User-Agent: NeoMutt/20170113 (1.7.2)
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, May 21, 2018 at 11:37:12AM +0100, Mark Rutland wrote:
> Hi Ganapat,
>
> Sorry for the delay in replying; I was away most of last week.
>
> On Tue, May 15, 2018 at 04:03:19PM +0530, Ganapatrao Kulkarni wrote:
> > On Sat, May 5, 2018 at 12:16 AM, Ganapatrao Kulkarni wrote:
> > > On Thu, Apr 26, 2018 at 4:29 PM, Mark Rutland wrote:
> > >> On Wed, Apr 25, 2018 at 02:30:47PM +0530, Ganapatrao Kulkarni wrote:
>
> > >>> +static int alloc_counter(struct thunderx2_pmu_uncore_channel *pmu_uncore)
> > >>> +{
> > >>> +	int counter;
> > >>> +
> > >>> +	raw_spin_lock(&pmu_uncore->lock);
> > >>> +	counter = find_first_zero_bit(pmu_uncore->counter_mask,
> > >>> +			pmu_uncore->uncore_dev->max_counters);
> > >>> +	if (counter == pmu_uncore->uncore_dev->max_counters) {
> > >>> +		raw_spin_unlock(&pmu_uncore->lock);
> > >>> +		return -ENOSPC;
> > >>> +	}
> > >>> +	set_bit(counter, pmu_uncore->counter_mask);
> > >>> +	raw_spin_unlock(&pmu_uncore->lock);
> > >>> +	return counter;
> > >>> +}
> > >>> +
> > >>> +static void free_counter(struct thunderx2_pmu_uncore_channel *pmu_uncore,
> > >>> +			 int counter)
> > >>> +{
> > >>> +	raw_spin_lock(&pmu_uncore->lock);
> > >>> +	clear_bit(counter, pmu_uncore->counter_mask);
> > >>> +	raw_spin_unlock(&pmu_uncore->lock);
> > >>> +}
> > >>
> > >> I don't believe that locking is required in either of these, as the perf
> > >> core serializes pmu::add() and pmu::del(), where these get called.
> >
> > without this locking, i am seeing "BUG: scheduling while atomic" when
> > i run perf with more events together than the maximum counters
> > supported
>
> Did you manage to get to the bottom of this?
>
> Do you have a backtrace?
>
> It looks like in your latest posting you reserve counters through the
> userspace ABI, which doesn't seem right to me, and I'd like to
> understand the problem.

Looks like I misunderstood -- those are still allocated kernel-side.

I'll follow that up in the v5 posting.

Thanks,
Mark.
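A minimal sketch of the lock-free variant implied by the review comment
above -- assuming, as Mark states, that these helpers are only reached via
pmu::add() and pmu::del(), which the perf core serializes for a given PMU,
and reusing the structure and field names from the quoted v4 patch. This is
an illustration of the suggestion, not code from the thread:

/*
 * Illustrative lock-free variants (assumption: the perf core's
 * serialization of pmu::add()/pmu::del() is the only caller context,
 * so no two CPUs can race on counter_mask for the same channel).
 */
static int alloc_counter(struct thunderx2_pmu_uncore_channel *pmu_uncore)
{
	int counter;

	/* Find a free counter in this channel's bitmap. */
	counter = find_first_zero_bit(pmu_uncore->counter_mask,
			pmu_uncore->uncore_dev->max_counters);
	if (counter == pmu_uncore->uncore_dev->max_counters)
		return -ENOSPC;

	set_bit(counter, pmu_uncore->counter_mask);
	return counter;
}

static void free_counter(struct thunderx2_pmu_uncore_channel *pmu_uncore,
			 int counter)
{
	clear_bit(counter, pmu_uncore->counter_mask);
}

Note that set_bit() and clear_bit() are atomic bitops in any case; the
spinlock's only contribution was mutual exclusion around the find/set
pair, which the perf core's serialization already provides under the
stated assumption.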