From: Frederic Weisbecker <frederic@kernel.org>
To: Nitesh Narayan Lal <nitesh@redhat.com>
Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
	linux-pci@vger.kernel.org, mtosatti@redhat.com,
	sassmann@redhat.com, jeffrey.t.kirsher@intel.com,
	jacob.e.keller@intel.com, jlelli@redhat.com, hch@infradead.org,
	bhelgaas@google.com, mike.marciniszyn@intel.com,
	dennis.dalessandro@intel.com, thomas.lendacky@amd.com,
	jerinj@marvell.com, mathias.nyman@intel.com, jiri@nvidia.com
Subject: Re: [RFC][Patch v1 1/3] sched/isolation: API to get num of housekeeping CPUs
Date: Tue, 22 Sep 2020 12:08:17 +0200
Message-ID: <20200922100817.GB5217@lenoir>
In-Reply-To: <fd48e554-6a19-f799-b273-e814e5389db9@redhat.com>

On Mon, Sep 21, 2020 at 11:16:51PM -0400, Nitesh Narayan Lal wrote:
> 
> On 9/21/20 7:40 PM, Frederic Weisbecker wrote:
> > On Wed, Sep 09, 2020 at 11:08:16AM -0400, Nitesh Narayan Lal wrote:
> >> +/*
> >> + * num_housekeeping_cpus() - Read the number of housekeeping CPUs.
> >> + *
> >> + * This function returns the number of available housekeeping CPUs
> >> + * based on __num_housekeeping_cpus, which is an atomic_t
> >> + * initialized at housekeeping setup time.
> >> + */
> >> +unsigned int num_housekeeping_cpus(void)
> >> +{
> >> +	unsigned int cpus;
> >> +
> >> +	if (static_branch_unlikely(&housekeeping_overridden)) {
> >> +		cpus = atomic_read(&__num_housekeeping_cpus);
> >> +		/* We should always have at least one housekeeping CPU */
> >> +		BUG_ON(!cpus);
> >> +		return cpus;
> >> +	}
> >> +	return num_online_cpus();
> >> +}
> >> +EXPORT_SYMBOL_GPL(num_housekeeping_cpus);
> >> +
> >>  int housekeeping_any_cpu(enum hk_flags flags)
> >>  {
> >>  	int cpu;
> >> @@ -131,6 +153,7 @@ static int __init housekeeping_setup(char *str, enum hk_flags flags)
> >>  
> >>  	housekeeping_flags |= flags;
> >>  
> >> +	atomic_set(&__num_housekeeping_cpus, cpumask_weight(housekeeping_mask));
> > So the problem here is that it takes the weight of the whole housekeeping
> > cpumask, but you're only interested in the housekeepers that handle the
> > managed irq duties, I guess (HK_FLAG_MANAGED_IRQ?).
> 
> IMHO we should also consider the case where we only have nohz_full.
> Otherwise, we may run into the same situation on those setups, do you agree?

I guess it's up to the user to gather the tick and managed irq housekeeping
together?
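
For instance (an illustrative boot line; the CPU ranges are made up), putting
the same CPUs under both nohz_full= and isolcpus=managed_irq leaves CPUs 0-1
as housekeepers for the tick and for managed irqs alike:

    isolcpus=managed_irq,domain,2-7 nohz_full=2-7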

Of course that makes the implementation more complicated. But if this is
called only during driver initialization for now, this could be just a function
that does:

cpumask_weight(cpu_online_mask & housekeeping_cpumask(HK_FLAG_MANAGED_IRQ))

And then can we rename it to housekeeping_num_online()?
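
Roughly like the following sketch (the name, the temporary-mask allocation and
the intersection with cpu_online_mask are assumptions, not a settled API;
cpumasks can't be combined with plain bitwise operators, hence cpumask_and()):

unsigned int housekeeping_num_online(void)
{
	cpumask_var_t mask;
	unsigned int cnt;

	/* Conservative fallback if the temporary mask can't be allocated */
	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
		return num_online_cpus();

	/* Count only the online CPUs that also handle managed irq duties */
	cpumask_and(mask, cpu_online_mask,
		    housekeeping_cpumask(HK_FLAG_MANAGED_IRQ));
	cnt = cpumask_weight(mask);
	free_cpumask_var(mask);

	return cnt;
}

A driver could then clamp its MSI-X vector count with something like
min_t(unsigned int, nvec, housekeeping_num_online()).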

Thanks.

> >
> >>  	free_bootmem_cpumask_var(non_housekeeping_mask);
> >>  
> >>  	return 1;
> >> -- 
> >> 2.27.0
> >>
> -- 
> Thanks
> Nitesh
> 


Thread overview: 30+ messages
2020-09-09 15:08 [RFC] [PATCH v1 0/3] isolation: limit msix vectors based on housekeeping CPUs Nitesh Narayan Lal
2020-09-09 15:08 ` [RFC][Patch v1 1/3] sched/isolation: API to get num of housekeeping CPUs Nitesh Narayan Lal
2020-09-17 18:18   ` Jesse Brandeburg
2020-09-17 18:43     ` Nitesh Narayan Lal
2020-09-17 20:11   ` Bjorn Helgaas
2020-09-17 21:48     ` Jacob Keller
2020-09-17 22:09     ` Nitesh Narayan Lal
2020-09-21 23:40   ` Frederic Weisbecker
2020-09-22  3:16     ` Nitesh Narayan Lal
2020-09-22 10:08       ` Frederic Weisbecker [this message]
2020-09-22 13:50         ` Nitesh Narayan Lal
2020-09-22 20:58           ` Frederic Weisbecker
2020-09-22 21:15             ` Nitesh Narayan Lal
2020-09-22 21:26             ` Andrew Lunn
2020-09-22 22:20               ` Nitesh Narayan Lal
2020-09-09 15:08 ` [RFC][Patch v1 2/3] i40e: limit msix vectors based on housekeeping CPUs Nitesh Narayan Lal
2020-09-11 15:23   ` Marcelo Tosatti
2020-09-17 18:23   ` Jesse Brandeburg
2020-09-17 18:31     ` Nitesh Narayan Lal
2020-09-21 22:58     ` Frederic Weisbecker
2020-09-22  3:08       ` Nitesh Narayan Lal
2020-09-22  9:54         ` Frederic Weisbecker
2020-09-22 13:34           ` Nitesh Narayan Lal
2020-09-22 20:44             ` Frederic Weisbecker
2020-09-22 21:05               ` Nitesh Narayan Lal
2020-09-09 15:08 ` [RFC][Patch v1 3/3] PCI: Limit pci_alloc_irq_vectors as per " Nitesh Narayan Lal
2020-09-10 19:22   ` Marcelo Tosatti
2020-09-10 19:31     ` Nitesh Narayan Lal
2020-09-22 13:54       ` Nitesh Narayan Lal
2020-09-22 21:08         ` Frederic Weisbecker
