From: Reinette Chatre <reinette.chatre@intel.com>
To: tglx@linutronix.de, fenghua.yu@intel.com, bp@alien8.de, tony.luck@intel.com
Cc: kuo-lang.tseng@intel.com, mingo@redhat.com, hpa@zytor.com, x86@kernel.org,
	linux-kernel@vger.kernel.org, Reinette Chatre <reinette.chatre@intel.com>
Subject: [PATCH V2 06/10] x86/resctrl: Introduce utility to return pseudo-locked cache portion
Date: Tue, 30 Jul 2019 10:29:40 -0700
Message-Id: <12976aa4707da37b0fcc9ea196eed53eaa253d07.1564504901.git.reinette.chatre@intel.com>
X-Mailer: git-send-email 2.17.2

To prevent eviction of pseudo-locked memory, no other resource group may use any portion of a cache instance that is in use by a cache pseudo-locked region.

Introduce a utility that returns a Capacity BitMask (CBM) indicating all portions of a provided cache instance that are in use for cache pseudo-locking. This CBM can be used in overlap checking as well as cache usage reporting.
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 arch/x86/kernel/cpu/resctrl/internal.h    |  1 +
 arch/x86/kernel/cpu/resctrl/pseudo_lock.c | 23 +++++++++++++++++++++++
 2 files changed, 24 insertions(+)

diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index 65f558a2e806..f17633cf4776 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -568,6 +568,7 @@ int rdtgroup_tasks_assigned(struct rdtgroup *r);
 int rdtgroup_locksetup_enter(struct rdtgroup *rdtgrp);
 int rdtgroup_locksetup_exit(struct rdtgroup *rdtgrp);
 bool rdtgroup_cbm_overlaps_pseudo_locked(struct rdt_domain *d, unsigned long cbm);
+u32 rdtgroup_pseudo_locked_bits(struct rdt_resource *r, struct rdt_domain *d);
 bool rdtgroup_pseudo_locked_in_hierarchy(struct rdt_domain *d);
 int rdt_pseudo_lock_init(void);
 void rdt_pseudo_lock_release(void);
diff --git a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
index e7d1fdd76161..f16702a076a3 100644
--- a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
+++ b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
@@ -1626,3 +1626,26 @@ void rdt_pseudo_lock_release(void)
 	unregister_chrdev(pseudo_lock_major, "pseudo_lock");
 	pseudo_lock_major = 0;
 }
+
+/**
+ * rdtgroup_pseudo_locked_bits - Portions of cache instance used for pseudo-locking
+ * @r: RDT resource to which cache instance belongs
+ * @d: Cache instance
+ *
+ * Return: bits in CBM of @d that are used for cache pseudo-locking
+ */
+u32 rdtgroup_pseudo_locked_bits(struct rdt_resource *r, struct rdt_domain *d)
+{
+	struct rdtgroup *rdtgrp;
+	u32 pseudo_locked = 0;
+
+	list_for_each_entry(rdtgrp, &rdt_all_groups, rdtgroup_list) {
+		if (!rdtgrp->plr)
+			continue;
+		if (rdtgrp->plr->r && rdtgrp->plr->r->rid == r->rid &&
+		    rdtgrp->plr->d_id == d->id)
+			pseudo_locked |= rdtgrp->plr->cbm;
+	}
+
+	return pseudo_locked;
+}
-- 
2.17.2
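
For context, a minimal sketch (not part of this patch) of how a caller such as an overlap check or a bit-usage report might consume the mask returned by the new helper. The wrapper name cbm_overlaps_pseudo_locked and the plain bitwise test are illustrative assumptions, not code from this series:

	/*
	 * Illustrative caller sketch only: a candidate CBM conflicts with
	 * cache pseudo-locking if it shares any bit with the mask returned
	 * by rdtgroup_pseudo_locked_bits() for the same cache instance.
	 */
	static bool cbm_overlaps_pseudo_locked(struct rdt_resource *r,
					       struct rdt_domain *d,
					       unsigned long cbm)
	{
		unsigned long pseudo_locked = rdtgroup_pseudo_locked_bits(r, d);

		/* Any shared bit means the CBM touches a pseudo-locked portion. */
		return (cbm & pseudo_locked) != 0;
	}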