Date: Mon, 17 Oct 2016 23:27:59 +0200 (CEST)
From: Thomas Gleixner
To: Fenghua Yu
Cc: "H. Peter Anvin", Ingo Molnar, Tony Luck, Peter Zijlstra,
    Stephane Eranian, Borislav Petkov, Dave Hansen, Nilay Vaish,
    Shaohua Li, David Carrillo-Cisneros, Ravi V Shankar, Sai Prakhya,
    Vikas Shivappa, linux-kernel, x86
Subject: Re: [PATCH v4 14/18] x86/intel_rdt: Add cpus file
In-Reply-To: <1476497548-11169-15-git-send-email-fenghua.yu@intel.com>
References: <1476497548-11169-1-git-send-email-fenghua.yu@intel.com>
 <1476497548-11169-15-git-send-email-fenghua.yu@intel.com>

On Fri, 14 Oct 2016, Fenghua Yu wrote:
>  static int intel_rdt_offline_cpu(unsigned int cpu)
>  {
>  	struct rdt_resource *r;
> +	struct rdtgroup *rdtgrp;
> +	struct list_head *l;
>
>  	mutex_lock(&rdtgroup_mutex);
>  	for_each_rdt_resource(r)
>  		update_domain(cpu, r, 0);
> +	list_for_each(l, &rdt_all_groups) {

list_for_each_entry ...

> +		rdtgrp = list_entry(l, struct rdtgroup, rdtgroup_list);
> +		if (cpumask_test_and_clear_cpu(cpu, &rdtgrp->cpu_mask))
> +			break;

> +static ssize_t rdtgroup_cpus_write(struct kernfs_open_file *of,
> +				   char *buf, size_t nbytes, loff_t off)
> +{

....

> +	/* Are trying to drop some cpus from this group? */

  /* Check whether cpus are dropped from this group */

> +	cpumask_andnot(tmpmask, &rdtgrp->cpu_mask, newmask);
> +	if (cpumask_weight(tmpmask)) {
> +		/* Can't drop from default group */
> +		if (rdtgrp == &rdtgroup_default) {
> +			ret = -EINVAL;
> +			goto end;
> +		}
> +		/* Give any dropped cpus to rdtgroup_default */
> +		cpumask_or(&rdtgroup_default.cpu_mask,
> +			   &rdtgroup_default.cpu_mask, tmpmask);
> +		for_each_cpu(cpu, tmpmask)
> +			per_cpu(cpu_closid, cpu) = 0;
> +	}
> +
> +	/*
> +	 * If we added cpus, remove them from previous group that owned them
> +	 * and update per-cpu rdtgroup pointers to refer to us

s/per-cpu rdtgroup pointers to refer to us/the per-cpu closid/

> +	 */
> +	cpumask_andnot(tmpmask, newmask, &rdtgrp->cpu_mask);
> +	if (cpumask_weight(tmpmask)) {
> +		struct list_head *l;
> +
> +		list_for_each(l, &rdt_all_groups) {
> +			r = list_entry(l, struct rdtgroup, rdtgroup_list);

Once more: list_for_each_entry()

> @@ -582,6 +698,10 @@ static int rdtgroup_rmdir(struct kernfs_node *kn)
>  		return -EPERM;
>  	}
>
> +	/* Give any CPUs back to the default group */
> +	cpumask_or(&rdtgroup_default.cpu_mask,
> +		   &rdtgroup_default.cpu_mask, &rdtgrp->cpu_mask);

What resets the per-cpu closid to 0?

Thanks,

	tglx