From: Nathan Fontenot <nfont@linux.vnet.ibm.com>
To: Michael Bringmann <mwb@linux.vnet.ibm.com>,
	linuxppc-dev@lists.ozlabs.org
Cc: John Allen <jallen@linux.vnet.ibm.com>,
	Thomas Falcon <tlfalcon@linux.vnet.ibm.com>,
	Tyrel Datwyler <tyreld@linux.vnet.ibm.com>
Subject: Re: [RFC v2 1/3] hotplug/mobility: Apply assoc updates for Post Migration Topo
Date: Thu, 26 Apr 2018 13:31:59 -0500	[thread overview]
Message-ID: <4ac7df0f-354a-8079-66df-cd118522ca61@linux.vnet.ibm.com> (raw)
In-Reply-To: <2b3e4948-a21b-423b-fa74-f4ae3ebd5978@linux.vnet.ibm.com>

On 04/24/2018 04:33 PM, Michael Bringmann wrote:
> See comments below:
> 
> On 04/24/2018 11:56 AM, Nathan Fontenot wrote:
>> On 02/26/2018 02:52 PM, Michael Bringmann wrote:
>>> hotplug/mobility: Recognize more changes to the associativity of
>>> memory blocks described by the 'ibm,dynamic-memory' and 'cpu'
>>> properties when processing the topology of LPARs in Post Migration
>>> events.  Previous efforts only recognized whether a memory block's
>>> assignment had changed in the property.  Changes here include:
>>>
>>> * Checking the aa_index values of the old/new properties and 'readd'
>>>   any block for which the setting has changed.
>>> * Checking for changes in cpu associativity and making 'readd' calls
>>>   when differences are observed.
>>
>> As part of the post-migration updates do you need to hold a lock
>> so that we don't attempt to process any of the cpu/memory changes
>> while the device tree is being updated?
>>
>> You may be able to grab the device hotplug lock for this.
> 
> The CPU Re-add process reuses the dlpar_cpu_remove / dlpar_cpu_add
> code for POWERPC.  These functions end up invoking device_online() /
> device_offline() which in turn end up invoking the 'cpus_write_lock/unlock'
> around every kernel change to the CPU structures.  It was modeled
> on the Memory Re-add process as we discussed a while back, which
> also uses device_online and a corresponding write lock for each
> LMB processed.
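> 
> Roughly, the chain on the CPU side looks like this (a sketch of the
> call path as I read it, not verbatim):
> 
> 	dlpar_cpu_readd_by_index()
> 	  -> dlpar_cpu_remove_by_index() / dlpar_cpu_add()
> 	    -> device_offline() / device_online()
> 	      -> cpu_down() / cpu_up()
> 	        -> cpus_write_lock() ... cpus_write_unlock()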
> 
> Do you see a need for a coarser granularity of locking around
> all or a group of the cpu/memory changes?  The data structures
> that the kernel outside of powerpc uses for CPUs and LMBs seem
> to be quite well isolated from the device-tree properties.

My thinking was for memory and CPU updates; the idea being that all
updates are queued up until after the post-LPM device tree update happens.
Grabbing the device_hotplug lock while updating the device tree would
prevent any of the queued CPU/memory updates from running.
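
Something along these lines in the post-mobility path; this is only a
rough, untested sketch (post_mobility_fixup() and
pseries_devicetree_update() are the existing mobility.c entry points,
the lock placement is the part I'm suggesting):

	/* in post_mobility_fixup() */
	lock_device_hotplug();

	/* Queued CPU/memory hotplug requests block on the device
	 * hotplug lock instead of racing with the tree update.
	 */
	rc = pseries_devicetree_update(MIGRATION_SCOPE);

	unlock_device_hotplug();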

> 
>>
>>>
>>> Signed-off-by: Michael Bringmann <mwb@linux.vnet.ibm.com>
>>> ---
>>> Changes in RFC:
>>>   -- Simplify code to update CPU nodes during mobility checks.
>>>      Remove functions to generate extra HP_ELOG messages in favor
>>>      of direct function calls to dlpar_cpu_readd_by_index.
>>>   -- Move check for "cpu" node type from pseries_update_cpu to
>>>      pseries_smp_notifier in 'hotplug-cpu.c'
>>>   -- Remove functions 'pseries_memory_readd_by_index' and
>>>      'pseries_cpu_readd_by_index' as no longer needed outside of
>>>      'mobility.c'.
>>> ---
>>>  arch/powerpc/platforms/pseries/hotplug-cpu.c    |   69 +++++++++++++++++++++++
>>>  arch/powerpc/platforms/pseries/hotplug-memory.c |    6 ++
>>>  2 files changed, 75 insertions(+)
>>>
>>> diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c b/arch/powerpc/platforms/pseries/hotplug-cpu.c
>>> index a7d14aa7..91ef22a 100644
>>> --- a/arch/powerpc/platforms/pseries/hotplug-cpu.c
>>> +++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c
>>> @@ -636,6 +636,27 @@ static int dlpar_cpu_remove_by_index(u32 drc_index)
>>>  	return rc;
>>>  }
>>>
>>> +static int dlpar_cpu_readd_by_index(u32 drc_index)
>>> +{
>>> +	int rc = 0;
>>> +
>>> +	pr_info("Attempting to update CPU, drc index %x\n", drc_index);
>>
>> Should make this say we are re-adding the CPU, it's a bit more specific as
>> to what is really happening.
> 
> Okay.  I will update the notice from dlpar_memory_readd_by_index, as well.

Looks like your current message mirrors what the memory readd routine has;
let's just keep the message as is.

-Nathan

>>
>>> +
>>> +	if (dlpar_cpu_remove_by_index(drc_index))
>>> +		rc = -EINVAL;
>>> +	else if (dlpar_cpu_add(drc_index))
>>> +		rc = -EINVAL;
>>> +
>>> +	if (rc)
>>> +		pr_info("Failed to update CPU at drc_index %x\n",
>>> +				drc_index);
>>> +	else
>>> +		pr_info("CPU at drc_index %x was updated\n",
>>> +				drc_index);
>>> +
>>> +	return rc;
>>> +}
>>> +
>>>  static int find_dlpar_cpus_to_remove(u32 *cpu_drcs, int cpus_to_remove)
>>>  {
>>>  	struct device_node *dn;
>>> @@ -826,6 +847,9 @@ int dlpar_cpu(struct pseries_hp_errorlog *hp_elog)
>>>  		else
>>>  			rc = -EINVAL;
>>>  		break;
>>> +	case PSERIES_HP_ELOG_ACTION_READD:
>>> +		rc = dlpar_cpu_readd_by_index(drc_index);
>>> +		break;
>>>  	default:
>>>  		pr_err("Invalid action (%d) specified\n", hp_elog->action);
>>>  		rc = -EINVAL;
>>> @@ -876,12 +900,53 @@ static ssize_t dlpar_cpu_release(const char *buf, size_t count)
>>>
>>>  #endif /* CONFIG_ARCH_CPU_PROBE_RELEASE */
>>>
>>> +static int pseries_update_cpu(struct of_reconfig_data *pr)
>>> +{
>>> +	u32 old_entries, new_entries;
>>> +	__be32 *p, *old_assoc, *new_assoc;
>>> +	int rc = 0;
>>> +
>>> +	/*
>>> +	 * So far we only handle the 'ibm,associativity' property here.
>>> +	 * The first cell of the property is the number of domains
>>> +	 * described; it is followed by an array of level values.
>>> +	 */
>>> +	p = (__be32 *) pr->old_prop->value;
>>> +	if (!p)
>>> +		return -EINVAL;
>>> +	old_entries = be32_to_cpu(*p++);
>>> +	old_assoc = p;
>>> +
>>> +	p = (__be32 *)pr->prop->value;
>>> +	if (!p)
>>> +		return -EINVAL;
>>> +	new_entries = be32_to_cpu(*p++);
>>> +	new_assoc = p;
>>> +
>>> +	if (old_entries == new_entries) {
>>> +		int sz = old_entries * sizeof(__be32);
>>> +
>>> +		if (memcmp(old_assoc, new_assoc, sz))
>>> +			rc = dlpar_cpu_readd_by_index(
>>> +					be32_to_cpu(pr->dn->phandle));
>>> +
>>> +	} else {
>>> +		rc = dlpar_cpu_readd_by_index(
>>> +					be32_to_cpu(pr->dn->phandle));
>>> +	}
>>> +
>>> +	return rc;
>>> +}
>>
>> Do we need to do the full compare of the new vs. the old affinity property?
>>
>> I would think we would only get an updated property if the property changes.
>> We don't care what changes in the property at this point, only that it changed.
>> You could just call dlpar_cpu_readd_by_index() directly.
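>>
>> Something like this, for example (untested, reusing your names and
>> the phandle-based index from above):
>>
>> static int pseries_update_cpu(struct of_reconfig_data *pr)
>> {
>> 	/* An OF_RECONFIG_UPDATE_PROPERTY notification for
>> 	 * "ibm,associativity" already tells us the property
>> 	 * changed, so re-add unconditionally.
>> 	 */
>> 	return dlpar_cpu_readd_by_index(be32_to_cpu(pr->dn->phandle));
>> }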
> 
> Okay.
> 
>>
>> -Nathan
> 
> Thanks.
> Michael
> 
>>
>>> +
>>>  static int pseries_smp_notifier(struct notifier_block *nb,
>>>  				unsigned long action, void *data)
>>>  {
>>>  	struct of_reconfig_data *rd = data;
>>>  	int err = 0;
>>>
>>> +	if (strcmp(rd->dn->type, "cpu"))
>>> +		return notifier_from_errno(err);
>>> +
>>>  	switch (action) {
>>>  	case OF_RECONFIG_ATTACH_NODE:
>>>  		err = pseries_add_processor(rd->dn);
>>> @@ -889,6 +954,10 @@ static int pseries_smp_notifier(struct notifier_block *nb,
>>>  	case OF_RECONFIG_DETACH_NODE:
>>>  		pseries_remove_processor(rd->dn);
>>>  		break;
>>> +	case OF_RECONFIG_UPDATE_PROPERTY:
>>> +		if (!strcmp(rd->prop->name, "ibm,associativity"))
>>> +			err = pseries_update_cpu(rd);
>>> +		break;
>>>  	}
>>>  	return notifier_from_errno(err);
>>>  }
>>> diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c
>>> index c1578f5..2341eae 100644
>>> --- a/arch/powerpc/platforms/pseries/hotplug-memory.c
>>> +++ b/arch/powerpc/platforms/pseries/hotplug-memory.c
>>> @@ -1040,6 +1040,12 @@ static int pseries_update_drconf_memory(struct of_reconfig_data *pr)
>>>  					  memblock_size);
>>>  			rc = (rc < 0) ? -EINVAL : 0;
>>>  			break;
>>> +		} else if ((be32_to_cpu(old_drmem[i].aa_index) !=
>>> +					be32_to_cpu(new_drmem[i].aa_index)) &&
>>> +				(be32_to_cpu(new_drmem[i].flags) &
>>> +					DRCONF_MEM_ASSIGNED)) {
>>> +			rc = dlpar_memory_readd_by_index(
>>> +				be32_to_cpu(new_drmem[i].drc_index));
>>>  		}
>>>  	}
>>>  	return rc;
>>>
>>
> 
