From: Thomas Gleixner <tglx@linutronix.de>
To: Vikas Shivappa <vikas.shivappa@intel.com>
Cc: Vikas Shivappa <vikas.shivappa@linux.intel.com>,
	x86@kernel.org, linux-kernel@vger.kernel.org, hpa@zytor.com,
	mingo@kernel.org, tj@kernel.org, peterz@infradead.org,
	Matt Fleming <matt.fleming@intel.com>,
	"Auld, Will" <will.auld@intel.com>,
	peter.zijlstra@intel.com, h.peter.anvin@intel.com, "Juvva,
	Kanaka D" <kanaka.d.juvva@intel.com>,
	mtosatti@redhat.com
Subject: Re: [PATCH 2/7] x86/intel_rdt: Adds support for Class of service management
Date: Mon, 18 May 2015 20:41:53 +0200 (CEST)	[thread overview]
Message-ID: <alpine.DEB.2.11.1505182020030.4225@nanos> (raw)
In-Reply-To: <alpine.DEB.2.10.1505151508420.14506@vshiva-Udesk>

On Mon, 18 May 2015, Vikas Shivappa wrote:
> > > +static void __clos_get(unsigned int closid)
> > 
> > What's the purpose of these double underscores?
> > 
> 
> Similar to __get_rmid.

Errm. We use double underscore for such cases:

__bla()
{
	do_stuff();
}
	
bla()
{
	lock();
	__bla();
	unlock();
}

So you have two sorts of callers: locked and unlocked. But I don't see
anything like that here. It's just random underscores for no value.
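Spelled out as a compilable userspace sketch (pthread mutex standing in for the kernel mutex; names and the counter are illustrative, not from the patch), the convention looks like this:

```c
#include <pthread.h>

/* Sketch of the kernel convention: __bla() assumes the lock is
 * already held, bla() is the variant that takes the lock itself. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int counter;

static void __bla(void)
{
	/* caller must hold 'lock' */
	counter++;
}

static void bla(void)
{
	pthread_mutex_lock(&lock);
	__bla();
	pthread_mutex_unlock(&lock);
}
```

The double underscore is a promise to the reader about locking context, not decoration.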

> > > +{
> > > +	struct clos_cbm_map *ccm = &ccmap[closid];
> > > +
> > > +	lockdep_assert_held(&rdt_group_mutex);
> > 
> > Do we really need all these lockdep asserts for a handfull of call
> > sites?
> 
> Well, we still have some calls, some of which may be frequent depending on the
> usage.. should the assert be decided based on the number of times it's called?

We add these things for complex locking scenarios, but for single
call sites where the locking is obvious it's not really valuable. But that's
the least of my worries.
 
> > > +static struct cgroup_subsys_state *
> > > +intel_rdt_css_alloc(struct cgroup_subsys_state *parent_css)
> > > +{
> > > +	struct intel_rdt *parent = css_rdt(parent_css);
> > > +	struct intel_rdt *ir;
> > > +
> > > +	/*
> > > +	 * Cannot return failure on systems with no Cache Allocation
> > > +	 * as the cgroup_init does not handle failures gracefully.
> > 
> > This comment is blatantly wrong. 5 lines further down you return
> > -ENOMEM. Now what?
> > 
> 
> Well, this is for cgroup_init called with parent=null, which is when it's
> initializing the cgroup subsystem. I can't return an error in this case, but I
> can otherwise.

And then you run into the same BUG_ON ...

> static void __init cgroup_init_subsys(struct cgroup_subsys *ss, bool early)
> {
> 
> 	/* Create the root cgroup state for this subsystem */
> 	ss->root = &cgrp_dfl_root;
> 	css = ss->css_alloc(cgroup_css(&cgrp_dfl_root.cgrp, ss));
> 	/* We don't handle early failures gracefully */
> 	BUG_ON(IS_ERR(css));

I know. But still the comment is misleading.

  	/*
	 * cgroup_init() expects a valid css pointer. Return
	 * the rdt_root_group.css instead of a failure code.
	 */

Can you see the difference?
 
> > > +	 */
> > > +	if (!parent)
> > > +		return &rdt_root_group.css;
> > > +
> > > +	ir = kzalloc(sizeof(struct intel_rdt), GFP_KERNEL);
> > > +	if (!ir)
> > > +		return ERR_PTR(-ENOMEM);

And why can't you return something useful here instead of letting the
thing run into a BUG?

> > >  	maxid = c->x86_rdt_max_closid;
> > >  	max_cbm_len = c->x86_rdt_max_cbm_len;
> > > 
> > > +	sizeb = BITS_TO_LONGS(maxid) * sizeof(long);
> > > +	rdtss_info.closmap = kzalloc(sizeb, GFP_KERNEL);
> > 
> > What's the point of this bitmap? In later patches you have a
> > restriction to 64bits and the SDM says that CBM_LEN is limited to 32.
> > 
> > So where is the point for allocating a bitmap?
> 
> The cache bitmask is really a bitmask where every bit represents a cache way,
> so it's a way-based mapping. Although currently the max is 32 bits, we still
> need to treat it as a bitmask.
> 
> In the first patch version you were the one who suggested using all the bitmap
> functions to check the presence of contiguous bits (although I was already
> using the bitmaps, I had not used them in some cases). So I made the change,
> and other similar code was changed to use the bitmap/bit manipulation APIs.
> There are scenarios where we need to check for subsets of bitmasks and other
> cases, but I agree they can all be done without the bitmap APIs as well. But
> doesn't your comment contradict the old one now?

Oh well. You can still use bitmap functions on an u64 where
appropriate.
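For instance, the "contiguous bits" check the discussion mentions needs no separately allocated bitmap at all when the mask fits in a u64. A plain-C sketch (the helper name is mine, not from the patch; the kernel would use its own bitmap helpers):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helper: return true if 'cbm' is a single contiguous
 * run of set bits, as Cache Allocation requires of a cache bitmask.
 * Operates directly on a u64; no kzalloc'd bitmap needed. */
static bool cbm_is_contiguous(uint64_t cbm)
{
	uint64_t above_run;

	if (!cbm)
		return false;

	/* Adding the lowest set bit clears the lowest run of ones.
	 * The result is a power of two (or zero on overflow) iff
	 * that run was the only one in the mask. */
	above_run = cbm + (cbm & -cbm);
	return (above_run & (above_run - 1)) == 0;
}
```

The same u64 can still be handed to bitmap functions via a cast where a bitmap API is genuinely the clearer tool.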
 
> > 
> > > +	if (!rdtss_info.closmap) {
> > > +		err = -ENOMEM;
> > > +		goto out_err;
> > > +	}
> > > +
> > > +	sizeb = maxid * sizeof(struct clos_cbm_map);
> > > +	ccmap = kzalloc(sizeb, GFP_KERNEL);
> > > +	if (!ccmap) {
> > > +		kfree(rdtss_info.closmap);
> > > +		err = -ENOMEM;
> > > +		goto out_err;
> > 
> > What's wrong with return -ENOMEM? We only use goto if we have to
> > cleanup stuff, certainly not for just returning err.
> 
> This was due to PeterZ's feedback that the code should not have too many exit
> points or return from the middle of the code in a lot of places..
> Don't the two comments contradict each other now?

There are different opinions about that. But again, that's the least
of my worries.
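The usual compromise between the two opinions: a goto label only once there is shared cleanup to unwind, a plain return before that. A userspace sketch (function and variable names are illustrative, loosely modeled on the patch's two allocations):

```c
#include <stdlib.h>

/* Illustrative sketch: the first failure has nothing to unwind, so a
 * bare return is fine; the second must free the first allocation, and
 * only there does a cleanup label earn its keep. */
static int alloc_maps(size_t closids, void **closmap, void **ccmap)
{
	*closmap = calloc(closids, sizeof(unsigned long));
	if (!*closmap)
		return -1;		/* nothing allocated yet */

	*ccmap = calloc(closids, sizeof(unsigned long));
	if (!*ccmap)
		goto out_free_closmap;	/* shared cleanup justifies the goto */

	return 0;

out_free_closmap:
	free(*closmap);
	*closmap = NULL;
	return -1;
}
```

A `goto out_err` label whose body is only `return err` adds an exit hop without removing one.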
 
> > 
> > > +	}
> > > +
> > > +	set_bit(0, rdtss_info.closmap);
> > > +	rdt_root_group.clos = 0;
> > > +	ccm = &ccmap[0];
> > > +	bitmap_set(&ccm->cache_mask, 0, max_cbm_len);
> > > +	ccm->clos_refcnt = 1;
> > > +
> > >  	pr_info("Max bitmask length:%u,Max ClosIds: %u\n", max_cbm_len,
> > > maxid);
> > 
> > We surely do not want to sprinkle these all over dmesg.
> 
> This is just printed once! How is that sprinkled all over? - we have a dmesg
> print for Cache monitoring as well, when cqm is enabled.

Sorry, mapped that to the wrong function. Though the message itself is
horrible. 

	  "Max bitmask length:32,Max ClosIds: 16"	

With some taste and formatting applied this would read:

     	  "Max. bitmask length: 32, max. closids: 16"

Can you spot the difference?

Thanks,

	tglx

