From: "Luck, Tony"
To: Marcelo Tosatti
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", Tejun Heo, Borislav Petkov, Stephane Eranian, Peter Zijlstra, David Carrillo-Cisneros, Ravi V Shankar, Vikas Shivappa, Sai Prakhya, linux-kernel, x86
Subject: Re: [PATCH 04/32] x86/intel_rdt: Add L3 cache capacity bitmask management
Date: Fri, 22 Jul 2016 14:43:23 -0700
Message-ID: <20160722214322.GA938@intel.com>
References: <1468371785-53231-1-git-send-email-fenghua.yu@intel.com> <1468371785-53231-5-git-send-email-fenghua.yu@intel.com> <20160722071203.GA18422@amt.cnet>
In-Reply-To: <20160722071203.GA18422@amt.cnet>

On Fri, Jul 22, 2016 at 04:12:04AM -0300, Marcelo Tosatti wrote:
> How does this patchset handle the following condition:
>
> 6) Create reservations in such a way that the sum is larger than
> total amount of cache, and CPU pinning (example from Karen Noel):
>
> VM-1 on socket-1 with 80% of reservation.
> VM-2 on socket-2 with 80% of reservation.
> VM-1 pinned to socket-1.
> VM-2 pinned to socket-2.

That's legal, but perhaps we need a description of overlapping cache
reservations.

Hardware tells you how finely you can divide the cache (and this
information is shown in /sys/fs/resctrl/info/l3/max_cbm_len to save
you from digging in CPUID leaves). E.g. on Broadwell the value is 20,
so you can control cache allocations in 5% slices.

A bitmask defines which slices you can use (and h/w has the
restriction that you must have contiguous '1' bits in any mask). So
you can pick your 80% using 0x0ffff, 0x1fffe, 0x3fffc, 0x7fff8 or
0xffff0.

There is no requirement that masks be exclusive of each other. So you
might pick the two extremes, 0x0ffff and 0xffff0, for your two VMs in
this example. Each would be allowed to allocate up to 80%, but with a
big overlap in the middle. Each has 20% exclusive, but there is a 60%
range in the middle that they would compete for.

Is this specific case useful? Possibly not. I think the more common
overlap cases might be between processes that you know have shared
code/data. Also the case where some rdtgroup has access to allocate in
the entire cache (mask 0xfffff on Broadwell) and some other rdtgroups
have limited cache allocation with fewer bits in the mask.

-Tony
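
For reference, here is a small user-space sketch (not part of the
patchset) of the bitmask arithmetic above. It assumes the Broadwell
figure quoted in the mail (max_cbm_len = 20) and uses the example
masks 0x0ffff and 0xffff0; it only reproduces the percentages being
discussed and does not touch the resctrl interface.

#include <stdio.h>
#include <stdint.h>

#define MAX_CBM_LEN 20	/* bits in the capacity bitmask, per info/l3/max_cbm_len */

/* Number of cache slices a capacity bitmask grants (gcc/clang builtin). */
static int slices(uint32_t mask)
{
	return __builtin_popcount(mask);
}

/* Share of the cache, in percent, that a mask allows. */
static int pct(uint32_t mask)
{
	return slices(mask) * 100 / MAX_CBM_LEN;
}

int main(void)
{
	uint32_t full = (1u << MAX_CBM_LEN) - 1;	/* 0xfffff: whole cache */
	uint32_t vm1 = 0x0ffff;				/* low 16 slices */
	uint32_t vm2 = 0xffff0;				/* high 16 slices */

	printf("slice granularity: %d%%\n", 100 / MAX_CBM_LEN);
	printf("whole cache     %#07x -> %3d%%\n", full, pct(full));
	printf("VM-1 mask       %#07x -> %3d%%\n", vm1, pct(vm1));
	printf("VM-2 mask       %#07x -> %3d%%\n", vm2, pct(vm2));
	printf("contended       %#07x -> %3d%%\n", vm1 & vm2, pct(vm1 & vm2));
	printf("VM-1 exclusive  %#07x -> %3d%%\n", vm1 & ~vm2, pct(vm1 & ~vm2));
	printf("VM-2 exclusive  %#07x -> %3d%%\n", vm2 & ~vm1, pct(vm2 & ~vm1));
	return 0;
}

Built with gcc or clang, this prints 80% for each VM's mask, 60% for
the contended middle and 20% exclusive apiece, matching the figures in
the mail.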