From: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
To: Gautham R Shenoy <ego@linux.vnet.ibm.com>
Cc: Nathan Lynch <nathanl@linux.ibm.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Daniel Henrique Barboza <danielhb413@gmail.com>,
	Valentin Schneider <valentin.schneider@arm.com>,
	qemu-ppc@nongnu.org, Cedric Le Goater <clg@kaod.org>,
	linuxppc-dev <linuxppc-dev@lists.ozlabs.org>,
	Ingo Molnar <mingo@kernel.org>,
	David Gibson <david@gibson.dropbear.id.au>
Subject: Re: [PATCH 3/3] powerpc/smp: Cache CPU to chip lookup
Date: Fri, 16 Apr 2021 22:27:14 +0530
Message-ID: <20210416165714.GG2633526@linux.vnet.ibm.com>
In-Reply-To: <20210416155748.GA26496@in.ibm.com>

* Gautham R Shenoy <ego@linux.vnet.ibm.com> [2021-04-16 21:27:48]:

> On Thu, Apr 15, 2021 at 11:21:10PM +0530, Srikar Dronamraju wrote:
> > * Gautham R Shenoy <ego@linux.vnet.ibm.com> [2021-04-15 22:49:21]:
> > 
> > > > 
> > > > +int *chip_id_lookup_table;
> > > > +
> > > >  #ifdef CONFIG_PPC64
> > > >  int __initdata iommu_is_off;
> > > >  int __initdata iommu_force_on;
> > > > @@ -914,13 +916,22 @@ EXPORT_SYMBOL(of_get_ibm_chip_id);
> > > >  int cpu_to_chip_id(int cpu)
> > > >  {
> > > >  	struct device_node *np;
> > > > +	int ret = -1, idx;
> > > > +
> > > > +	idx = cpu / threads_per_core;
> > > > +	if (chip_id_lookup_table && chip_id_lookup_table[idx] != -1)
> > > 
> > 
> > > The value -1 is ambiguous, since we won't be able to determine
> > > whether it is because we haven't yet made an of_get_ibm_chip_id()
> > > call, or because the call was made and returned -1.
> > > 
> > 
> > We don't allocate chip_id_lookup_table unless cpu_to_chip_id()
> > returns a non -1 value for the boot CPU id, which ensures we don't
> > allocate chip_id_lookup_table unnecessarily. I also check for
> > chip_id_lookup_table before calling cpu_to_chip_id() for the other
> > CPUs, which avoids the overhead of calling cpu_to_chip_id() on
> > platforms that don't support it. Also, if chip_id_lookup_table is
> > initialized, then the of_get_ibm_chip_id() call would most likely
> > return a valid value.
> > 
> > Also, below we only populate the lookup table when of_get_cpu_node()
> > returns a valid node.
> > 
> > So I don't see any drawbacks of initializing it to -1. Do you see
> > any?
> 
> 
> Only if other callers of cpu_to_chip_id() don't check whether
> chip_id_lookup_table has been allocated. From a code readability
> point of view, it is easier to have that check inside
> cpu_to_chip_id() than to require all its callers to make it.
> 

I didn't understand your comment, so let me reiterate what I said
earlier. We don't have control over who calls cpu_to_chip_id() and
when. cpu_to_chip_id() might be called for a non-present CPU, in which
case it will return -1. Should we cache that result or not?

If we cache it, we will return a wrong value once the CPU turns out to
be present. If we cache it but retry anyway, then having one value for
"uninitialized" and another for "invalid" amounts to the same thing as
having a single value for both; we only end up adding more confusing
code. At least to me, the code isn't more readable if it has to retry
for both -1 and -2. A few years from now, we ourselves will wonder why
we kept two values when we check for them and act on them identically.
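To make the scheme concrete, here is a rough sketch of the single
sentinel approach (reconstructed around the hunk quoted above; the
tail of the function and the comments are my paraphrase of this
thread, not the literal patch):

/* Sketch: -1 doubles as "not cached yet" and "no chip-id found". */
int *chip_id_lookup_table;

int cpu_to_chip_id(int cpu)
{
	struct device_node *np;
	int ret = -1, idx;

	idx = cpu / threads_per_core;
	/* Fast path: return a previously cached, valid chip-id. */
	if (chip_id_lookup_table && chip_id_lookup_table[idx] != -1)
		return chip_id_lookup_table[idx];

	np = of_get_cpu_node(cpu, NULL);
	if (np) {
		ret = of_get_ibm_chip_id(np);
		of_node_put(np);

		/*
		 * Populate the table only when the CPU node exists.
		 * A stored -1 is indistinguishable from "not cached
		 * yet", so a later call for a CPU that has since
		 * become present simply retries the lookup.
		 */
		if (chip_id_lookup_table)
			chip_id_lookup_table[idx] = ret;
	}
	return ret;
}

The table itself is only allocated once the boot CPU's lookup
succeeds, so platforms without chip-id information never pay for it.
With a second sentinel such as -2 or S32_MIN, the fast path above
would have to treat both values as "retry" anyway, which is the
redundancy I mean.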

> > 
> > > Thus, perhaps we can initialize chip_id_lookup_table[idx] with a
> > > different, unique negative value. How about S32_MIN? And then
> > > check here that chip_id_lookup_table[idx] differs from it?
> > > 
> > 
> > I had initially initialized it to -2, but then I thought we were
> > adding more confusion than necessary, and it was not solving any
> > issue.
> > 

-- 
Thanks and Regards
Srikar Dronamraju


Thread overview: 16+ messages
2021-04-15 12:09 [PATCH 0/3] Reintroduce cpu_core_mask Srikar Dronamraju
2021-04-15 12:09 ` [PATCH 1/3] powerpc/smp: " Srikar Dronamraju
2021-04-15 17:11   ` Gautham R Shenoy
2021-04-15 17:36     ` Srikar Dronamraju
2021-04-16  3:21   ` David Gibson
2021-04-16  5:45     ` Srikar Dronamraju
2021-04-19  1:17       ` David Gibson
2021-04-15 12:09 ` [PATCH 2/3] Revert "powerpc/topology: Update topology_core_cpumask" Srikar Dronamraju
2021-04-15 12:09 ` [PATCH 3/3] powerpc/smp: Cache CPU to chip lookup Srikar Dronamraju
2021-04-15 17:19   ` Gautham R Shenoy
2021-04-15 17:51     ` Srikar Dronamraju
2021-04-16 15:57       ` Gautham R Shenoy
2021-04-16 16:57         ` Srikar Dronamraju [this message]
2021-04-19  1:19         ` David Gibson
2021-04-15 12:17 ` [PATCH 0/3] Reintroduce cpu_core_mask Daniel Henrique Barboza
2021-04-19  4:00 ` Michael Ellerman
