linux-kernel.vger.kernel.org archive mirror
From: Shivappa Vikas <vikas.shivappa@intel.com>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: Vikas Shivappa <vikas.shivappa@linux.intel.com>,
	vikas.shivappa@intel.com, davidcc@google.com, eranian@google.com,
	linux-kernel@vger.kernel.org, x86@kernel.org, hpa@zytor.com,
	mingo@kernel.org, peterz@infradead.org, ravi.v.shankar@intel.com,
	tony.luck@intel.com, fenghua.yu@intel.com, andi.kleen@intel.com,
	h.peter.anvin@intel.com
Subject: Re: [PATCH 09/12] x86/cqm: Add RMID reuse
Date: Tue, 17 Jan 2017 16:26:58 -0800 (PST)	[thread overview]
Message-ID: <alpine.DEB.2.10.1701171557120.15892@vshiva-Udesk> (raw)
In-Reply-To: <alpine.DEB.2.20.1701171714440.3495@nanos>



On Tue, 17 Jan 2017, Thomas Gleixner wrote:

> On Fri, 6 Jan 2017, Vikas Shivappa wrote:
>> +static void cqm_schedule_rmidwork(int domain);
>
> This forward declaration is required because all callers of that function
> are coming _after_ the function implementation, right?
>
>> +static inline bool is_first_cqmwork(int domain)
>> +{
>> +	return (!atomic_cmpxchg(&cqm_pkgs_data[domain]->reuse_scheduled, 0, 1));
>
> What's the purpose of these outer brackets? Enhanced readability?

Will fix the coding style, and will reorder the function definitions so the forward declaration is not needed.

>
>> +}
>> +
>>  static void __put_rmid(u32 rmid, int domain)
>>  {
>>  	struct cqm_rmid_entry *entry;
>> @@ -294,6 +301,93 @@ static void cqm_mask_call(struct rmid_read *rr)
>>  static unsigned int __intel_cqm_threshold;
>>  static unsigned int __intel_cqm_max_threshold;
>>
>> +/*
>> + * Test whether an RMID has a zero occupancy value on this cpu.
>
> This tests whether the occupancy value is less than
> __intel_cqm_threshold. Unless I'm missing something the value can be set by
> user space and therefor is not necessarily zero.
>
> Your commentry is really useful: It's either wrong or superflous or non
> existent.

Will fix; yes, it's the user-configurable threshold value, not necessarily zero.

>
>> + */
>> +static void intel_cqm_stable(void)
>> +{
>> +	struct cqm_rmid_entry *entry;
>> +	struct list_head *llist;
>> +
>> +	llist = &cqm_pkgs_data[pkg_id]->cqm_rmid_limbo_lru;
>> +	list_for_each_entry(entry, llist, list) {
>> +
>> +		if (__rmid_read(entry->rmid) < __intel_cqm_threshold)
>> +			entry->state = RMID_AVAILABLE;
>> +	}
>> +}
>> +
>> +static void __intel_cqm_rmid_reuse(void)
>> +{
>> +	struct cqm_rmid_entry *entry, *tmp;
>> +	struct list_head *llist, *flist;
>> +	struct pkg_data *pdata;
>> +	unsigned long flags;
>> +
>> +	raw_spin_lock_irqsave(&cache_lock, flags);
>> +	pdata = cqm_pkgs_data[pkg_id];
>> +	llist = &pdata->cqm_rmid_limbo_lru;
>> +	flist = &pdata->cqm_rmid_free_lru;
>> +
>> +	if (list_empty(llist))
>> +		goto end;
>> +	/*
>> +	 * Test whether an RMID is free
>> +	 */
>> +	intel_cqm_stable();
>> +
>> +	list_for_each_entry_safe(entry, tmp, llist, list) {
>> +
>> +		if (entry->state == RMID_DIRTY)
>> +			continue;
>> +		/*
>> +		 * Otherwise remove from limbo and place it onto the free list.
>> +		 */
>> +		list_del(&entry->list);
>> +		list_add_tail(&entry->list, flist);
>
> This is really a performance optimized implementation. Iterate the limbo
> list first and check the occupancy value, conditionally update the state
> and then iterate the same list and conditionally move the entries which got
> just updated to the free list. Of course all of this happens with
> interrupts disabled and a global lock held to enhance it further.

Will fix this. This can use the per-package lock instead, since the global 
cache_lock is protecting the event list. Also yes, we don't need two loops 
now that the limbo list is per package.

>
>> +	}
>> +
>> +end:
>> +	raw_spin_unlock_irqrestore(&cache_lock, flags);
>> +}
>> +
>> +static bool reschedule_cqm_work(void)
>> +{
>> +	unsigned long flags;
>> +	bool nwork = false;
>> +
>> +	raw_spin_lock_irqsave(&cache_lock, flags);
>> +
>> +	if (!list_empty(&cqm_pkgs_data[pkg_id]->cqm_rmid_limbo_lru))
>> +		nwork = true;
>> +	else
>> +		atomic_set(&cqm_pkgs_data[pkg_id]->reuse_scheduled, 0U);
>
> 0U because the 'val' argument of atomic_set() is 'int', right?

Will fix.

>
>> +	raw_spin_unlock_irqrestore(&cache_lock, flags);
>> +
>> +	return nwork;
>> +}
>> +
>> +static void cqm_schedule_rmidwork(int domain)
>> +{
>> +	struct delayed_work *dwork;
>> +	unsigned long delay;
>> +
>> +	dwork = &cqm_pkgs_data[domain]->intel_cqm_rmid_work;
>> +	delay = msecs_to_jiffies(RMID_DEFAULT_QUEUE_TIME);
>> +
>> +	schedule_delayed_work_on(cqm_pkgs_data[domain]->rmid_work_cpu,
>> +			     dwork, delay);
>> +}
>> +
>> +static void intel_cqm_rmid_reuse(struct work_struct *work)
>
> Your naming conventions are really random. cqm_* intel_cqm_* and then
> totaly random function names. Is there any deeper thought involved or is it
> just what it looks like: random?
>
>> +{
>> +	__intel_cqm_rmid_reuse();
>> +
>> +	if (reschedule_cqm_work())
>> +		cqm_schedule_rmidwork(pkg_id);
>
> Great stuff. You first try to move the limbo RMIDs to the free list and
> then you reevaluate the limbo list again. Thereby acquiring and dropping
> the global cache lock several times.
>
> Dammit, the stupid reuse function already knows whether the list is empty
> or not. But returning that information would make too much sense.

Will fix. The global lock isn't needed here either, and the reuse function can return whether limbo entries remain so we don't reevaluate the list.

>
>> +}
>> +
>>  static struct pmu intel_cqm_pmu;
>>
>>  static u64 update_sample(unsigned int rmid, u32 evt_type, int first)
>> @@ -548,6 +642,8 @@ static int intel_cqm_setup_event(struct perf_event *event,
>>  	if (!event->hw.cqm_rmid)
>>  		return -ENOMEM;
>>
>> +	cqm_assign_rmid(event, event->hw.cqm_rmid);
>
> Oh, so now there is a second user which calls cqm_assign_rmid(). Finally it
> might actually do something. Whether that something is useful and
> functional, I seriously doubt it.
>
>> +
>>  	return 0;
>>  }
>>
>> @@ -863,6 +959,7 @@ static void intel_cqm_event_terminate(struct perf_event *event)
>>  {
>>  	struct perf_event *group_other = NULL;
>>  	unsigned long flags;
>> +	int d;
>>
>>  	mutex_lock(&cache_mutex);
>>  	/*
>> @@ -905,6 +1002,13 @@ static void intel_cqm_event_terminate(struct perf_event *event)
>>  		mbm_stop_timers();
>>
>>  	mutex_unlock(&cache_mutex);
>> +
>> +	for (d = 0; d < cqm_socket_max; d++) {
>> +
>> +		if (cqm_pkgs_data[d] != NULL && is_first_cqmwork(d)) {
>> +			cqm_schedule_rmidwork(d);
>> +		}
>
> Let's look at the logic of this.
>
> When the event terminates, then you unconditionally schedule the rmid work
> on ALL packages whether the event was freed or not. Really brilliant.
>
> There is no reason to schedule anything if the event was never using any
> rmid on a given package or if the RMIDs are not freed because there is a
> new group leader.

Will fix so that the work is scheduled only on packages where the event actually used an RMID.

>
> The NOHZ FULL people will love that extra work activity for nothing.
>
> Also the detection whether the work is already scheduled is intuitive as
> usual: is_first_cqmwork() ? What???? !cqmwork_scheduled() would be too
> simple to understand, right?
>
> Oh well.
>
>> +	}
>>  }
>>
>>  static int intel_cqm_event_init(struct perf_event *event)
>> @@ -1322,6 +1426,9 @@ static int pkg_data_init_cpu(int cpu)
>>  	mutex_init(&pkg_data->pkg_data_mutex);
>>  	raw_spin_lock_init(&pkg_data->pkg_data_lock);
>>
>> +	INIT_DEFERRABLE_WORK(
>> +		&pkg_data->intel_cqm_rmid_work, intel_cqm_rmid_reuse);
>
> Stop this crappy formatting.
>
> 	INIT_DEFERRABLE_WORK(&pkg_data->intel_cqm_rmid_work,
> 			     intel_cqm_rmid_reuse);
>
> is the canonical way to do line breaks. Is it really that hard?

Will fix the formatting.

Thanks,
Vikas

>
> Thanks,
>
> 	tglx
>
