* [PATCH 0/1] perf: Add CPU hotplug support for events
@ 2018-02-15 23:01 Raghavendra Rao Ananta
  2018-02-15 23:01 ` [PATCH 1/1] " Raghavendra Rao Ananta
From: Raghavendra Rao Ananta @ 2018-02-15 23:01 UTC
  To: alexander.shishkin, jolsa, namhyung
  Cc: peterz, mingo, cme, linux-kernel, psodagud, tsoni, rananta, skannan

The embedded world, specifically Android mobile SoCs, relies on CPU
hotplug to manage power and thermal constraints. These hotplug events
can happen at a very rapid pace. In addition, these SoCs rely on many
perf event counters for that management. Therefore, there is a need
to preserve these events across hotplugs.

In such a scenario, a perf client (kernel or user-space) can create
events even when the CPU is offline. If the CPU comes online during
the lifetime of the event, the registered event can start counting
spontaneously. As an extension to this, the events' count can also
be preserved across CPU hotplugs. This takes the burden of monitoring
the CPU's state off the clients.
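
For illustration, a minimal in-kernel client under this proposal could
look like the sketch below. This is a hypothetical example, not part of
the patch: create_cycles_event() and the attr setup are made up for the
illustration; only perf_event_create_kernel_counter() is existing API.

/*
 * Hedged sketch, assuming this series is applied: the call is expected
 * to succeed even if @cpu is offline, with counting starting once the
 * CPU comes online. (Without the series, the same call fails with
 * -ENODEV for an offline CPU.)
 */
#include <linux/err.h>
#include <linux/perf_event.h>

static struct perf_event *cycles_event;

static int create_cycles_event(int cpu)
{
        struct perf_event_attr attr = {
                .type   = PERF_TYPE_HARDWARE,
                .config = PERF_COUNT_HW_CPU_CYCLES,
                .size   = sizeof(attr),
        };

        /* CPU-bound event: @task is NULL, no overflow handler. */
        cycles_event = perf_event_create_kernel_counter(&attr, cpu, NULL,
                                                        NULL, NULL);
        if (IS_ERR(cycles_event))
                return PTR_ERR(cycles_event);

        return 0;
}

A user-space client gets the same behaviour through perf_event_open(),
which is what the perf-stat run below exercises.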

The tests were conducted on an x86_64 PC. Sample results:
/* CPU-1 is offline: Event created when CPU is offline */
# ./perf stat -C 1 -e cycles -v -I 1000
Using CPUID GenuineIntel-6-5E
cycles: 0 0 0
#           time             counts unit events
     1.000143727      <not counted>      cycles
cycles: 0 0 0
     2.000309394      <not counted>      cycles
cycles: 0 0 0
     3.000466412      <not counted>      cycles
cycles: 0 0 0
     4.000599885      <not counted>      cycles
cycles: 0 0 0
     5.000750095      <not counted>      cycles
cycles: 0 0 0
     6.000901789      <not counted>      cycles
cycles: 0 0 0
     7.001035315      <not counted>      cycles
cycles: 0 0 0
     8.001195270      <not counted>      cycles

/* CPU-1 made online: Event started counting */

cycles: 21046256 719464572 719464572
     9.001379820         21,046,256      cycles
cycles: 19537953 1000222653 1000222653
    10.001634052         19,537,953      cycles
cycles: 27480135 1000095468 1000095468
    11.001764556         27,480,135      cycles
cycles: 27723233 1000108008 1000108008
    12.001909310         27,723,233      cycles
cycles: 19137349 1000133462 1000133462
    13.002079041         19,137,349      cycles
cycles: 48913285 1000101268 1000101268
    14.002218928         48,913,285      cycles
cycles: 27259199 1000128972 1000128972
    15.002388700         27,259,199      cycles
cycles: 19249055 1000095789 1000095789
    16.002514765         19,249,055      cycles
cycles: 27530051 1000246860 1000246860
    17.002801610         27,530,051      cycles
cycles: 34348072 1000272874 1000272874
    18.003110100         34,348,072      cycles
cycles: 11526457 107984264 107984264
    19.003435811         11,526,457      cycles

/* CPU-1 made offline */

cycles: 0 0 0
    20.003582803      <not counted>      cycles
cycles: 0 0 0
    21.003896484      <not counted>      cycles
cycles: 0 0 0
    22.004212989      <not counted>      cycles
cycles: 0 0 0
    23.004346689      <not counted>      cycles
cycles: 0 0 0
    24.004668259      <not counted>      cycles
cycles: 0 0 0
    25.004983504      <not counted>      cycles
cycles: 0 0 0
#           time             counts unit events
    26.005315741      <not counted>      cycles

/* CPU-1 made online: Event preserved across hotplug */

cycles: 27210082 933493459 933493459
    27.005652287         27,210,082      cycles
cycles: 41950431 1000112865 1000112865
    28.005805475         41,950,431      cycles
cycles: 35075124 1000141146 1000141146
    29.005974101         35,075,124      cycles
cycles: 45240055 1000132743 1000132743
    30.006140008         45,240,055      cycles
cycles: 43426180 1000077828 1000077828
    31.006253035         43,426,180      cycles
cycles: 34593167 1000315835 1000315835
    32.006605393         34,593,167      cycles
cycles: 105078270 1000136171 1000136171
    33.006773971        105,078,270      cycles

Raghavendra Rao Ananta (1):
  perf: Add CPU hotplug support for events

 include/linux/perf_event.h |   7 +++
 kernel/events/core.c       | 123 +++++++++++++++++++++++++++++++++------------
 2 files changed, 97 insertions(+), 33 deletions(-)

--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


* [PATCH 1/1] perf: Add CPU hotplug support for events
  2018-02-15 23:01 [PATCH 0/1] perf: Add CPU hotplug support for events Raghavendra Rao Ananta
@ 2018-02-15 23:01 ` Raghavendra Rao Ananta
  2018-02-16  8:21   ` Peter Zijlstra
From: Raghavendra Rao Ananta @ 2018-02-15 23:01 UTC
  To: alexander.shishkin, jolsa, namhyung
  Cc: peterz, mingo, cme, linux-kernel, psodagud, tsoni, rananta, skannan

The perf framework doesn't allow preserving CPU events across
CPU hotplugs. The events are scheduled out when the CPU goes
offline. Moreover, the framework also doesn't allow clients to
create events on an offline CPU. As a result, clients have to
keep monitoring the CPU state until it comes back online.

Therefore, extend the perf framework to support creating and
preserving (CPU) events for offline CPUs. With this, the CPU's
online state becomes transparent to the client, which no longer
has to monitor the CPU's state. Success is returned to the
client even when the event is created on an offline CPU. If the
CPU goes offline during the lifetime of the event, the event is
preserved and continues to count as soon as (and if) the CPU
comes back online.

Signed-off-by: Raghavendra Rao Ananta <rananta@codeaurora.org>
---
 include/linux/perf_event.h |   7 +++
 kernel/events/core.c       | 123 +++++++++++++++++++++++++++++++++------------
 2 files changed, 97 insertions(+), 33 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 7546822..bc07f16 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -489,6 +489,7 @@ struct perf_addr_filters_head {
  * enum perf_event_state - the states of a event
  */
 enum perf_event_state {
+	PERF_EVENT_STATE_DORMANT	= -5,
 	PERF_EVENT_STATE_DEAD		= -4,
 	PERF_EVENT_STATE_EXIT		= -3,
 	PERF_EVENT_STATE_ERROR		= -2,
@@ -687,6 +688,12 @@ struct perf_event {
 #endif
 
 	struct list_head		sb_list;
+
+	/* Entry into the list that holds the events whose CPUs
+	 * are offline. These events will be removed from the
+	 * list and installed once the CPU wakes up.
+	 */
+	struct list_head		dormant_entry;
 #endif /* CONFIG_PERF_EVENTS */
 };
 
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 96db9ae..5d0a155 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -2329,6 +2329,30 @@ static int  __perf_install_in_context(void *info)
 	return ret;
 }
 
+#if defined CONFIG_HOTPLUG_CPU || defined CONFIG_KEXEC_CORE
+static DEFINE_PER_CPU(struct list_head, dormant_event_list);
+static DEFINE_PER_CPU(spinlock_t, dormant_event_list_lock);
+
+static void perf_prepare_install_in_context(struct perf_event *event)
+{
+	int cpu = event->cpu;
+	bool prepare_hp_sched = !READ_ONCE(event->ctx->task);
+
+	if (!prepare_hp_sched)
+		return;
+
+	spin_lock(&per_cpu(dormant_event_list_lock, cpu));
+	if (event->state == PERF_EVENT_STATE_DORMANT)
+		goto out;
+
+	event->state = PERF_EVENT_STATE_DORMANT;
+	list_add_tail(&event->dormant_entry,
+			&per_cpu(dormant_event_list, cpu));
+out:
+	spin_unlock(&per_cpu(dormant_event_list_lock, cpu));
+}
+#endif
+
 /*
  * Attach a performance event to a context.
  *
@@ -2353,6 +2377,15 @@ static int  __perf_install_in_context(void *info)
 	smp_store_release(&event->ctx, ctx);
 
 	if (!task) {
+		struct perf_cpu_context *cpuctx =
+			container_of(ctx, struct perf_cpu_context, ctx);
+
+#if defined CONFIG_HOTPLUG_CPU || defined CONFIG_KEXEC_CORE
+		if (!cpuctx->online) {
+			perf_prepare_install_in_context(event);
+			return;
+		}
+#endif
 		cpu_function_call(cpu, __perf_install_in_context, event);
 		return;
 	}
@@ -2421,6 +2454,43 @@ static int  __perf_install_in_context(void *info)
 	raw_spin_unlock_irq(&ctx->lock);
 }
 
+#if defined CONFIG_HOTPLUG_CPU || defined CONFIG_KEXEC_CORE
+static void perf_deferred_install_in_context(int cpu)
+{
+	struct perf_event *event, *tmp;
+	struct perf_event_context *ctx;
+
+	/* This function is called twice while coming online. Once for
+	 * CPUHP_PERF_PREPARE and the other for CPUHP_AP_PERF_ONLINE.
+	 * Only during the CPUHP_AP_PERF_ONLINE state, we can confirm
+	 * that CPU PMU is ready and can be installed to.
+	 */
+	if (!cpu_online(cpu))
+		return;
+
+	spin_lock(&per_cpu(dormant_event_list_lock, cpu));
+	list_for_each_entry_safe(event, tmp,
+			&per_cpu(dormant_event_list, cpu), dormant_entry) {
+		if (cpu != event->cpu)
+			continue;
+
+		list_del(&event->dormant_entry);
+		event->state = PERF_EVENT_STATE_INACTIVE;
+		spin_unlock(&per_cpu(dormant_event_list_lock, cpu));
+
+		ctx = event->ctx;
+		perf_event_set_state(event, PERF_EVENT_STATE_INACTIVE);
+
+		mutex_lock(&ctx->mutex);
+		perf_install_in_context(ctx, event, cpu);
+		mutex_unlock(&ctx->mutex);
+
+		spin_lock(&per_cpu(dormant_event_list_lock, cpu));
+	}
+	spin_unlock(&per_cpu(dormant_event_list_lock, cpu));
+}
+#endif
+
 /*
  * Cross CPU call to enable a performance event
  */
@@ -4202,6 +4272,15 @@ int perf_event_release_kernel(struct perf_event *event)
 	struct perf_event *child, *tmp;
 	LIST_HEAD(free_list);
 
+#if defined CONFIG_HOTPLUG_CPU || defined CONFIG_KEXEC_CORE
+	if (!READ_ONCE(ctx->task)) {
+		spin_lock(&per_cpu(dormant_event_list_lock, event->cpu));
+		if (event->state == PERF_EVENT_STATE_DORMANT)
+			list_del(&event->dormant_entry);
+		spin_unlock(&per_cpu(dormant_event_list_lock, event->cpu));
+	}
+#endif
+
 	/*
 	 * If we got here through err_file: fput(event_file); we will not have
 	 * attached to a context yet.
@@ -10161,23 +10240,6 @@ static int perf_event_set_clock(struct perf_event *event, clockid_t clk_id)
 		goto err_locked;
 	}
 
-	if (!task) {
-		/*
-		 * Check if the @cpu we're creating an event for is online.
-		 *
-		 * We use the perf_cpu_context::ctx::mutex to serialize against
-		 * the hotplug notifiers. See perf_event_{init,exit}_cpu().
-		 */
-		struct perf_cpu_context *cpuctx =
-			container_of(ctx, struct perf_cpu_context, ctx);
-
-		if (!cpuctx->online) {
-			err = -ENODEV;
-			goto err_locked;
-		}
-	}
-
-
 	/*
 	 * Must be under the same ctx::mutex as perf_install_in_context(),
 	 * because we need to serialize with concurrent event creation.
@@ -10354,21 +10416,6 @@ struct perf_event *
 		goto err_unlock;
 	}
 
-	if (!task) {
-		/*
-		 * Check if the @cpu we're creating an event for is online.
-		 *
-		 * We use the perf_cpu_context::ctx::mutex to serialize against
-		 * the hotplug notifiers. See perf_event_{init,exit}_cpu().
-		 */
-		struct perf_cpu_context *cpuctx =
-			container_of(ctx, struct perf_cpu_context, ctx);
-		if (!cpuctx->online) {
-			err = -ENODEV;
-			goto err_unlock;
-		}
-	}
-
 	if (!exclusive_event_installable(event, ctx)) {
 		err = -EBUSY;
 		goto err_unlock;
@@ -11064,6 +11111,10 @@ static void __init perf_event_init_all_cpus(void)
 		INIT_LIST_HEAD(&per_cpu(cgrp_cpuctx_list, cpu));
 #endif
 		INIT_LIST_HEAD(&per_cpu(sched_cb_list, cpu));
+#if defined CONFIG_HOTPLUG_CPU || defined CONFIG_KEXEC_CORE
+		spin_lock_init(&per_cpu(dormant_event_list_lock, cpu));
+		INIT_LIST_HEAD(&per_cpu(dormant_event_list, cpu));
+#endif
 	}
 }
 
@@ -11091,8 +11142,10 @@ static void __perf_event_exit_context(void *__info)
 
 	raw_spin_lock(&ctx->lock);
 	ctx_sched_out(ctx, cpuctx, EVENT_TIME);
-	list_for_each_entry(event, &ctx->event_list, event_entry)
+	list_for_each_entry(event, &ctx->event_list, event_entry) {
 		__perf_remove_from_context(event, cpuctx, ctx, (void *)DETACH_GROUP);
+		perf_prepare_install_in_context(event);
+	}
 	raw_spin_unlock(&ctx->lock);
 }
 
@@ -11141,6 +11194,10 @@ int perf_event_init_cpu(unsigned int cpu)
 	}
 	mutex_unlock(&pmus_lock);
 
+#if defined CONFIG_HOTPLUG_CPU || defined CONFIG_KEXEC_CORE
+	perf_deferred_install_in_context(cpu);
+#endif
+
 	return 0;
 }
 
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


* Re: [PATCH 1/1] perf: Add CPU hotplug support for events
  2018-02-15 23:01 ` [PATCH 1/1] " Raghavendra Rao Ananta
@ 2018-02-16  8:21   ` Peter Zijlstra
  2018-02-16 18:06     ` Raghavendra Rao Ananta
From: Peter Zijlstra @ 2018-02-16  8:21 UTC
  To: Raghavendra Rao Ananta
  Cc: alexander.shishkin, jolsa, namhyung, mingo, cme, linux-kernel,
	psodagud, tsoni, skannan

On Thu, Feb 15, 2018 at 03:01:41PM -0800, Raghavendra Rao Ananta wrote:
> The perf framework doesn't allow preserving CPU events across
> CPU hotplugs. The events are scheduled out when the CPU goes
> offline. Moreover, the framework also doesn't allow clients to
> create events on an offline CPU. As a result, clients have to
> keep monitoring the CPU state until it comes back online.
> 
> Therefore, extend the perf framework to support creating and
> preserving (CPU) events for offline CPUs. With this, the CPU's
> online state becomes transparent to the client, which no longer
> has to monitor the CPU's state. Success is returned to the
> client even when the event is created on an offline CPU. If the
> CPU goes offline during the lifetime of the event, the event is
> preserved and continues to count as soon as (and if) the CPU
> comes back online.
> 
> Signed-off-by: Raghavendra Rao Ananta <rananta@codeaurora.org>
> ---
>  include/linux/perf_event.h |   7 +++
>  kernel/events/core.c       | 123 +++++++++++++++++++++++++++++++++------------
>  2 files changed, 97 insertions(+), 33 deletions(-)
> 
> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> index 7546822..bc07f16 100644
> --- a/include/linux/perf_event.h
> +++ b/include/linux/perf_event.h
> @@ -489,6 +489,7 @@ struct perf_addr_filters_head {
>   * enum perf_event_state - the states of a event
>   */
>  enum perf_event_state {
> +	PERF_EVENT_STATE_DORMANT	= -5,
>  	PERF_EVENT_STATE_DEAD		= -4,
>  	PERF_EVENT_STATE_EXIT		= -3,
>  	PERF_EVENT_STATE_ERROR		= -2,
> @@ -687,6 +688,12 @@ struct perf_event {
>  #endif
>  
>  	struct list_head		sb_list;
> +
> +	/* Entry into the list that holds the events whose CPUs
> +	 * are offline. These events will be removed from the
> +	 * list and installed once the CPU wakes up.
> +	 */
> +	struct list_head		dormant_entry;

No, this is absolutely disgusting. You can simply keep the events in the
dead CPU's context. It's really not that hard.

Also, you _still_ don't explain why you care about dead CPUs.


* Re: [PATCH 1/1] perf: Add CPU hotplug support for events
  2018-02-16  8:21   ` Peter Zijlstra
@ 2018-02-16 18:06     ` Raghavendra Rao Ananta
  2018-02-16 20:39       ` Peter Zijlstra
From: Raghavendra Rao Ananta @ 2018-02-16 18:06 UTC
  To: Peter Zijlstra
  Cc: alexander.shishkin, jolsa, namhyung, mingo, cme, linux-kernel,
	psodagud, tsoni, skannan



On 02/16/2018 12:21 AM, Peter Zijlstra wrote:
> On Thu, Feb 15, 2018 at 03:01:41PM -0800, Raghavendra Rao Ananta wrote:
>> The perf framework doesn't allow preserving CPU events across
>> CPU hotplugs. The events are scheduled out when the CPU goes
>> offline. Moreover, the framework also doesn't allow clients to
>> create events on an offline CPU. As a result, clients have to
>> keep monitoring the CPU state until it comes back online.
>>
>> Therefore, extend the perf framework to support creating and
>> preserving (CPU) events for offline CPUs. With this, the CPU's
>> online state becomes transparent to the client, which no longer
>> has to monitor the CPU's state. Success is returned to the
>> client even when the event is created on an offline CPU. If the
>> CPU goes offline during the lifetime of the event, the event is
>> preserved and continues to count as soon as (and if) the CPU
>> comes back online.
>>
>> Signed-off-by: Raghavendra Rao Ananta <rananta@codeaurora.org>
>> ---
>>   include/linux/perf_event.h |   7 +++
>>   kernel/events/core.c       | 123 +++++++++++++++++++++++++++++++++------------
>>   2 files changed, 97 insertions(+), 33 deletions(-)
>>
>> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
>> index 7546822..bc07f16 100644
>> --- a/include/linux/perf_event.h
>> +++ b/include/linux/perf_event.h
>> @@ -489,6 +489,7 @@ struct perf_addr_filters_head {
>>    * enum perf_event_state - the states of a event
>>    */
>>   enum perf_event_state {
>> +	PERF_EVENT_STATE_DORMANT	= -5,
>>   	PERF_EVENT_STATE_DEAD		= -4,
>>   	PERF_EVENT_STATE_EXIT		= -3,
>>   	PERF_EVENT_STATE_ERROR		= -2,
>> @@ -687,6 +688,12 @@ struct perf_event {
>>   #endif
>>   
>>   	struct list_head		sb_list;
>> +
>> +	/* Entry into the list that holds the events whose CPUs
>> +	 * are offline. These events will be removed from the
>> +	 * list and installed once the CPU wakes up.
>> +	 */
>> +	struct list_head		dormant_entry;
> 
> No, this is absolutely disgusting. You can simply keep the events in the
> dead CPU's context. It's really not that hard.
Keeping the events in the dead CPU's context was also an idea that we
had. However, detaching that event from the PMU when the CPU is offline
would be a pain. Consider the scenario in which an event is about to be
destroyed when the CPU is offline (yet still attached to the CPU).
During its destruction, a cross-CPU call is made (from
perf_remove_from_context()) to the offlined CPU to detach the event from
the CPU's PMU. As the CPU is offline, that is not possible, and separate
logic has to be written for cleaning up the events whose CPUs are
offlined. Hence, I thought this would be a cleaner way to maintain the
events.
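
To make the path concrete, the destruction sequence described above is
roughly the following (a simplified sketch of the existing mainline
call chain, not of this patch):

  perf_event_release_kernel(event)
    perf_remove_from_context(event, DETACH_GROUP)
      event_function_call(event, __perf_remove_from_context, ...)
        cpu_function_call(event->cpu, event_function, &efs)
          smp_call_function_single(event->cpu, ...)
            /* fails with -ENXIO when event->cpu is offline */
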
> 
> Also, you _still_ don't explain why you care about dead CPUs.
> 

It's not just about dead CPUs. It's the fact that CPUs can go offline
and come back online. The embedded world, specifically Android mobile
SoCs, relies on CPU hotplug to manage power and thermal constraints.
These hotplug events can happen at a very rapid pace. In addition,
these SoCs rely on many perf event counters for that management.
Therefore, there is a need to preserve these events across hotplugs.
In such a scenario, a perf client (kernel or user-space) can create
events even when the CPU is offline. If the CPU comes online during
the lifetime of the event, the registered event can start counting
spontaneously. As an extension to this, the events' count can also
be preserved across CPU hotplugs. This takes the burden of monitoring
the CPU's state off the clients.

-- Raghavendra

-- 
Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


* Re: [PATCH 1/1] perf: Add CPU hotplug support for events
  2018-02-16 18:06     ` Raghavendra Rao Ananta
@ 2018-02-16 20:39       ` Peter Zijlstra
  2018-02-17  1:48         ` Raghavendra Rao Ananta
From: Peter Zijlstra @ 2018-02-16 20:39 UTC
  To: Raghavendra Rao Ananta
  Cc: alexander.shishkin, jolsa, namhyung, mingo, cme, linux-kernel,
	psodagud, tsoni, skannan

On Fri, Feb 16, 2018 at 10:06:29AM -0800, Raghavendra Rao Ananta wrote:
> > No, this is absolutely disgusting. You can simply keep the events in the
> > dead CPU's context. It's really not that hard.
> Keeping the events in the dead CPU's context was also an idea that we had.
> However, detaching that event from the PMU when the CPU is offline would be
> a pain. Consider the scenario in which an event is about to be destroyed
> when the CPU is offline (yet still attached to the CPU). During its
> destruction, a cross-CPU call is made (from perf_remove_from_context()) to
> the offlined CPU to detach the event from the CPU's PMU. As the CPU is
> offline, that is not possible, and separate logic has to be written for
> cleaning up the events whose CPUs are offlined.

That is actually really simple to deal with. The real problems are with
semantics: is an event enabled when the CPU is dead? Can you
disable/enable an event on a dead CPU?

The below patch (_completely_ untested) should do most of it, but needs
help with the details. I suspect we want to allow enable/disable on
events that are on a dead CPU, and equally I think we want the time an
enabled event spends on a dead CPU to count towards the 'enabled'
bucket.
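
For concreteness, the offset arithmetic in the sketch below works out
as follows (my reading of it; the enable/disable semantics remain the
open question above):

  /* offlining: __perf_event_exit_context() shifts each state down
   * (add_event_to_ctx() applies the same shift when an event is
   * created while the CPU is already offline):
   */
  INACTIVE (0)  + HOTPLUG_OFFSET (-32)  ->  -32
  OFF      (-1) + HOTPLUG_OFFSET (-32)  ->  -33
  /* ACTIVE events have already been moved to INACTIVE by the
   * preceding ctx_sched_out(..., EVENT_ALL).
   */

  /* onlining: __perf_event_init_cpu_context() shifts them back up */
  -32 - (-32)  ->  INACTIVE (0)
  -33 - (-32)  ->  OFF      (-1)

Since group_sched_in() now bails out for any state <= OFF, a shifted
event can never be scheduled in while its CPU is down.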

> > Also, you _still_ don't explain why you care about dead CPUs.
> > 
> 
> It's not just about dead CPUs. It's the fact that CPUs can go offline and
> come back online. The embedded world, specifically Android mobile SoCs,
> relies on CPU hotplug to manage power and thermal constraints. These
> hotplug events can happen at a very rapid pace.

That's batshit insane... and that sounds like the primary thing you
should be fixing.


---
 include/linux/perf_event.h |   2 +
 kernel/events/core.c       | 109 ++++++++++++++++++++++++++++-----------------
 2 files changed, 69 insertions(+), 42 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 7546822a1d74..c7a50fe26672 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -495,6 +495,8 @@ enum perf_event_state {
 	PERF_EVENT_STATE_OFF		= -1,
 	PERF_EVENT_STATE_INACTIVE	=  0,
 	PERF_EVENT_STATE_ACTIVE		=  1,
+
+	PERF_EVENT_STATE_HOTPLUG_OFFSET	= -32,
 };
 
 struct file;
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 57898102847f..26dae8afe57d 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -265,17 +265,17 @@ static void event_function_call(struct perf_event *event, event_f func, void *da
 		lockdep_assert_held(&ctx->mutex);
 	}
 
-	if (!task) {
-		cpu_function_call(event->cpu, event_function, &efs);
-		return;
-	}
-
 	if (task == TASK_TOMBSTONE)
 		return;
 
 again:
-	if (!task_function_call(task, event_function, &efs))
-		return;
+	if (task) {
+		if (!task_function_call(task, event_function, &efs))
+			return;
+	} else {
+		if (!cpu_function_call(event->cpu, event_function, &efs))
+			return;
+	}
 
 	raw_spin_lock_irq(&ctx->lock);
 	/*
@@ -2110,7 +2110,7 @@ group_sched_in(struct perf_event *group_event,
 	struct perf_event *event, *partial_group = NULL;
 	struct pmu *pmu = ctx->pmu;
 
-	if (group_event->state == PERF_EVENT_STATE_OFF)
+	if (group_event->state <= PERF_EVENT_STATE_OFF)
 		return 0;
 
 	pmu->start_txn(pmu, PERF_PMU_TXN_ADD);
@@ -2189,6 +2189,14 @@ static int group_can_go_on(struct perf_event *event,
 static void add_event_to_ctx(struct perf_event *event,
 			       struct perf_event_context *ctx)
 {
+	if (!ctx->task) {
+		struct perf_cpu_context *cpuctx =
+			container_of(ctx, struct perf_cpu_context, ctx);
+
+		if (!cpuctx->online)
+			event->state += PERF_EVENT_STATE_HOTPLUG_OFFSET;
+	}
+
 	list_add_event(event, ctx);
 	perf_group_attach(event);
 }
@@ -2352,11 +2360,6 @@ perf_install_in_context(struct perf_event_context *ctx,
 	 */
 	smp_store_release(&event->ctx, ctx);
 
-	if (!task) {
-		cpu_function_call(cpu, __perf_install_in_context, event);
-		return;
-	}
-
 	/*
 	 * Should not happen, we validate the ctx is still alive before calling.
 	 */
@@ -2395,8 +2398,14 @@ perf_install_in_context(struct perf_event_context *ctx,
 	 */
 	smp_mb();
 again:
-	if (!task_function_call(task, __perf_install_in_context, event))
-		return;
+	if (task) {
+		if (!task_function_call(task, __perf_install_in_context, event))
+			return;
+	} else {
+		if (!cpu_function_call(cpu, __perf_install_in_context, event))
+			return;
+	}
+
 
 	raw_spin_lock_irq(&ctx->lock);
 	task = ctx->task;
@@ -10280,16 +10289,7 @@ SYSCALL_DEFINE5(perf_event_open,
 	}
 
 	if (!task) {
-		/*
-		 * Check if the @cpu we're creating an event for is online.
-		 *
-		 * We use the perf_cpu_context::ctx::mutex to serialize against
-		 * the hotplug notifiers. See perf_event_{init,exit}_cpu().
-		 */
-		struct perf_cpu_context *cpuctx =
-			container_of(ctx, struct perf_cpu_context, ctx);
-
-		if (!cpuctx->online) {
+		if (!cpu_possible(cpu)) {
 			err = -ENODEV;
 			goto err_locked;
 		}
@@ -10473,15 +10473,7 @@ perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu,
 	}
 
 	if (!task) {
-		/*
-		 * Check if the @cpu we're creating an event for is online.
-		 *
-		 * We use the perf_cpu_context::ctx::mutex to serialize against
-		 * the hotplug notifiers. See perf_event_{init,exit}_cpu().
-		 */
-		struct perf_cpu_context *cpuctx =
-			container_of(ctx, struct perf_cpu_context, ctx);
-		if (!cpuctx->online) {
+		if (!cpu_possible(cpu)) {
 			err = -ENODEV;
 			goto err_unlock;
 		}
@@ -11201,17 +11193,48 @@ void perf_swevent_init_cpu(unsigned int cpu)
 }
 
 #if defined CONFIG_HOTPLUG_CPU || defined CONFIG_KEXEC_CORE
+
+static void __perf_event_init_cpu_context(void *__info)
+{
+	struct perf_cpu_context *cpuctx = __info;
+	struct perf_event_context *ctx = &cpuctx->ctx;
+	struct perf_event_context *task_ctx = cpuctx->task_ctx;
+	struct perf_event *event;
+
+	perf_ctx_lock(cpuctx, task_ctx);
+	ctx_sched_out(ctx, cpuctx, EVENT_ALL);
+	if (task_ctx)
+		ctx_sched_out(task_ctx, cpuctx, EVENT_ALL);
+
+	list_for_each_entry_rcu(event, &ctx->event_list, event_entry)
+		perf_event_set_state(event, event->state - PERF_EVENT_STATE_HOTPLUG_OFFSET);
+
+	perf_event_sched_in(cpuctx, task_ctx, current);
+	perf_ctx_unlock(cpuctx, task_ctx);
+}
+
+static void _perf_event_init_cpu_context(int cpu, struct perf_cpu_context *cpuctx)
+{
+	smp_call_function_single(cpu, __perf_event_init_cpu_context, cpuctx, 1);
+}
+
 static void __perf_event_exit_context(void *__info)
 {
-	struct perf_event_context *ctx = __info;
-	struct perf_cpu_context *cpuctx = __get_cpu_context(ctx);
+	struct perf_cpu_context *cpuctx = __info;
+	struct perf_event_context *ctx = &cpuctx->ctx;
+	struct perf_event_context *task_ctx = cpuctx->task_ctx;
 	struct perf_event *event;
 
-	raw_spin_lock(&ctx->lock);
-	ctx_sched_out(ctx, cpuctx, EVENT_TIME);
-	list_for_each_entry(event, &ctx->event_list, event_entry)
-		__perf_remove_from_context(event, cpuctx, ctx, (void *)DETACH_GROUP);
-	raw_spin_unlock(&ctx->lock);
+	perf_ctx_lock(cpuctx, task_ctx);
+	ctx_sched_out(ctx, cpuctx, EVENT_ALL);
+	if (task_ctx)
+		ctx_sched_out(task_ctx, cpuctx, EVENT_ALL);
+
+	list_for_each_entry_rcu(event, &ctx->event_list, event_entry)
+		perf_event_set_state(event, event->state + PERF_EVENT_STATE_HOTPLUG_OFFSET);
+
+	perf_event_sched_in(cpuctx, task_ctx, current);
+	perf_ctx_unlock(cpuctx, task_ctx);
 }
 
 static void perf_event_exit_cpu_context(int cpu)
@@ -11226,7 +11249,7 @@ static void perf_event_exit_cpu_context(int cpu)
 		ctx = &cpuctx->ctx;
 
 		mutex_lock(&ctx->mutex);
-		smp_call_function_single(cpu, __perf_event_exit_context, ctx, 1);
+		smp_call_function_single(cpu, __perf_event_exit_context, cpuctx, 1);
 		cpuctx->online = 0;
 		mutex_unlock(&ctx->mutex);
 	}
@@ -11235,6 +11258,7 @@ static void perf_event_exit_cpu_context(int cpu)
 }
 #else
 
+static void _perf_event_init_cpu_context(int cpu, struct perf_cpu_context *cpuctx) { }
 static void perf_event_exit_cpu_context(int cpu) { }
 
 #endif
@@ -11255,6 +11279,7 @@ int perf_event_init_cpu(unsigned int cpu)
 
 		mutex_lock(&ctx->mutex);
 		cpuctx->online = 1;
+		_perf_event_init_cpu_context(cpu, cpuctx);
 		mutex_unlock(&ctx->mutex);
 	}
 	mutex_unlock(&pmus_lock);


* Re: [PATCH 1/1] perf: Add CPU hotplug support for events
  2018-02-16 20:39       ` Peter Zijlstra
@ 2018-02-17  1:48         ` Raghavendra Rao Ananta
  2018-02-19  9:11           ` Peter Zijlstra
From: Raghavendra Rao Ananta @ 2018-02-17  1:48 UTC
  To: Peter Zijlstra
  Cc: alexander.shishkin, jolsa, namhyung, mingo, linux-kernel,
	psodagud, tsoni, skannan



On 02/16/2018 12:39 PM, Peter Zijlstra wrote:
> On Fri, Feb 16, 2018 at 10:06:29AM -0800, Raghavendra Rao Ananta wrote:
>>> No, this is absolutely disgusting. You can simply keep the events in the
>>> dead CPU's context. It's really not that hard.
>> Keeping the events in the dead CPU's context was also an idea that we had.
>> However, detaching that event from the PMU when the CPU is offline would be
>> a pain. Consider the scenario in which an event is about to be destroyed
>> when the CPU is offline (yet still attached to the CPU). During its
>> destruction, a cross-CPU call is made (from perf_remove_from_context()) to
>> the offlined CPU to detach the event from the CPU's PMU. As the CPU is
>> offline, that is not possible, and separate logic has to be written for
>> cleaning up the events whose CPUs are offlined.
> 
> That is actually really simple to deal with. The real problems are with
> semantics, is an event enabled when the CPU is dead? Can you
> disable/enable an event on a dead CPU.
> 
> The below patch (_completely_ untested) should do most of it, but needs
> help with the details. I suspect we want to allow enable/disable on
> events that are on a dead CPU, and equally I think we want to account
> the time an enabled event spends on a dead CPU to go towards the
> 'enabled' bucket.
I've gone through your diff, and it has a texture similar to what we
are trying to do (except for maintaining an offline-event list).
Nevertheless, I tried to test your patch. I created a hw event and
tried to offline the CPU in parallel, and I immediately hit a watchdog
soft-lockup bug! I tried the same thing by first switching off the CPU
(without any event created), and hit a similar issue. I am sure we can
fix it, but apart from the "why are we doing hotplug?" question, was
there specifically any issue with our patch?

> 
>>> Also, you _still_ don't explain why you care about dead CPUs.
>>>
I wanted to understand: if we no longer care about hotplugging of CPUs,
then why do we still have exported symbols such as cpu_up() and
cpu_down()? Moreover, we also have the hotplug interface exposed to
user-space (through sysfs). As long as these interfaces exist, there's
always a chance of the CPU being brought up/down. Can you please clear
this up for me?

-- Raghavendra

-- 
Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


* Re: [PATCH 1/1] perf: Add CPU hotplug support for events
  2018-02-17  1:48         ` Raghavendra Rao Ananta
@ 2018-02-19  9:11           ` Peter Zijlstra
From: Peter Zijlstra @ 2018-02-19  9:11 UTC
  To: Raghavendra Rao Ananta
  Cc: alexander.shishkin, jolsa, namhyung, mingo, linux-kernel,
	psodagud, tsoni, skannan

On Fri, Feb 16, 2018 at 05:48:21PM -0800, Raghavendra Rao Ananta wrote:
> I am sure we can fix it, but apart
> from the "why are we doing hotplug?" question, was there specifically
> any issue with our patch?

Yes, the extra list is crazy. We don't keep events in extra lists when a
task isn't currently running either. A CPU being offline shouldn't be
(much) different from that.

Thinking more, we should also get rid of that HOTPLUG offset thing.

> > > > Also, you _still_ don't explain why you care about dead CPUs.
> > > > 
> I wanted to understand: if we no longer care about hotplugging of CPUs, then
> why do we still have exported symbols such as cpu_up() and cpu_down()?

Not sure why we have them, legacy probably. The rcu/lock torture module
is the only legitimate user of them.

But if having those exports gives people the impression it's a sane thing
to do hotplug from modules, we should just take it out.

> Moreover, we also have the hotplug interface exposed to user-space
> (through sysfs). As long as these interfaces exist, there's always a
> chance of the CPU being brought up/down. Can you please clear this up
> for me?

Hotplug is an absolute utter slow path. We do our absolute best to put
the entire burden on the hotplug path such that we don't perturb normal
things.

Its primary existence is for physical CPU hotplug, not resource
management. Although there seems to be a misguided 'I have this hammer,
everything is a nail' thing going on.

I suppose these 'once' things like changing the topology of the machine
-- eg. 'unplug' all but one of the SMT threads, are OK as well.

And RAS things that take a CPU down when there's 'trouble' is also fine.

But anything that does hotplug semi-regularly is batshit insane. One of
the first things hotplug does is synchronize_rcu(), which can take a
_long_ time.

