From: Bjorn Helgaas <helgaas@kernel.org>
To: Ming Lei <ming.lei@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>,
Thomas Gleixner <tglx@linutronix.de>,
Jens Axboe <axboe@kernel.dk>,
linux-block@vger.kernel.org, Sagi Grimberg <sagi@grimberg.me>,
linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org,
linux-pci@vger.kernel.org
Subject: Re: [PATCH 1/5] genirq/affinity: move allocation of 'node_to_cpumask' to irq_build_affinity_masks
Date: Thu, 7 Feb 2019 16:02:05 -0600 [thread overview]
Message-ID: <20190207220204.GP7268@google.com> (raw)
In-Reply-To: <20190125095347.17950-2-ming.lei@redhat.com>

On Fri, Jan 25, 2019 at 05:53:43PM +0800, Ming Lei wrote:
> 'node_to_cpumask' is just one temparay variable for irq_build_affinity_masks(),
> so move it into irq_build_affinity_masks().
>
> No functioanl change.
s/temparay/temporary/
s/functioanl/functional/
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
Nice patch, this is much cleaner.
Reviewed-by: Bjorn Helgaas <bhelgaas@google.com>
> ---
> kernel/irq/affinity.c | 27 +++++++++++++--------------
> 1 file changed, 13 insertions(+), 14 deletions(-)
>
> diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
> index 45b68b4ea48b..118b66d64a53 100644
> --- a/kernel/irq/affinity.c
> +++ b/kernel/irq/affinity.c
> @@ -175,18 +175,22 @@ static int __irq_build_affinity_masks(const struct irq_affinity *affd,
> */
> static int irq_build_affinity_masks(const struct irq_affinity *affd,
> int startvec, int numvecs, int firstvec,
> - cpumask_var_t *node_to_cpumask,
> struct irq_affinity_desc *masks)
> {
> int curvec = startvec, nr_present, nr_others;
> int ret = -ENOMEM;
> cpumask_var_t nmsk, npresmsk;
> + cpumask_var_t *node_to_cpumask;
>
> if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL))
> return ret;
>
> if (!zalloc_cpumask_var(&npresmsk, GFP_KERNEL))
> - goto fail;
> + goto fail_nmsk;
> +
> + node_to_cpumask = alloc_node_to_cpumask();
> + if (!node_to_cpumask)
> + goto fail_npresmsk;
>
> ret = 0;
> /* Stabilize the cpumasks */
> @@ -217,9 +221,12 @@ static int irq_build_affinity_masks(const struct irq_affinity *affd,
> if (nr_present < numvecs)
> WARN_ON(nr_present + nr_others < numvecs);
>
> + free_node_to_cpumask(node_to_cpumask);
> +
> + fail_npresmsk:
> free_cpumask_var(npresmsk);
>
> - fail:
> + fail_nmsk:
> free_cpumask_var(nmsk);
> return ret;
> }
> @@ -236,7 +243,6 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
> {
> int affvecs = nvecs - affd->pre_vectors - affd->post_vectors;
> int curvec, usedvecs;
> - cpumask_var_t *node_to_cpumask;
> struct irq_affinity_desc *masks = NULL;
> int i, nr_sets;
>
> @@ -247,13 +253,9 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
> if (nvecs == affd->pre_vectors + affd->post_vectors)
> return NULL;
>
> - node_to_cpumask = alloc_node_to_cpumask();
> - if (!node_to_cpumask)
> - return NULL;
> -
> masks = kcalloc(nvecs, sizeof(*masks), GFP_KERNEL);
> if (!masks)
> - goto outnodemsk;
> + return NULL;
>
> /* Fill out vectors at the beginning that don't need affinity */
> for (curvec = 0; curvec < affd->pre_vectors; curvec++)
> @@ -271,11 +273,10 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
> int ret;
>
> ret = irq_build_affinity_masks(affd, curvec, this_vecs,
> - curvec, node_to_cpumask, masks);
> + curvec, masks);
> if (ret) {
> kfree(masks);
> - masks = NULL;
> - goto outnodemsk;
> + return NULL;
> }
> curvec += this_vecs;
> usedvecs += this_vecs;
> @@ -293,8 +294,6 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
> for (i = affd->pre_vectors; i < nvecs - affd->post_vectors; i++)
> masks[i].is_managed = 1;
>
> -outnodemsk:
> - free_node_to_cpumask(node_to_cpumask);
> return masks;
> }
>
> --
> 2.9.5
>
Thread overview: 20+ messages
2019-01-25 9:53 [PATCH 0/5] genirq/affinity: introduce .setup_affinity to support allocating interrupt sets Ming Lei
2019-01-25 9:53 ` [PATCH 1/5] genirq/affinity: move allocation of 'node_to_cpumask' to irq_build_affinity_masks Ming Lei
2019-02-07 22:02 ` Bjorn Helgaas [this message]
2019-01-25 9:53 ` [PATCH 2/5] genirq/affinity: allow driver to setup managed IRQ's affinity Ming Lei
2019-02-07 22:21 ` Bjorn Helgaas
2019-02-10 9:22 ` Ming Lei
2019-02-10 16:30 ` Thomas Gleixner
2019-02-11 3:54 ` Ming Lei
2019-02-11 14:39 ` Bjorn Helgaas
2019-02-11 22:38 ` Thomas Gleixner
2019-02-12 11:17 ` Ming Lei
2019-01-25 9:53 ` [PATCH 3/5] genirq/affinity: introduce irq_build_affinity() Ming Lei
2019-01-25 9:53 ` [PATCH 4/5] nvme-pci: simplify nvme_setup_irqs() via .setup_affinity callback Ming Lei
2019-02-10 16:39 ` Thomas Gleixner
2019-02-11 3:58 ` Ming Lei
2019-02-10 18:49 ` Thomas Gleixner
2019-02-11 4:09 ` Ming Lei
2019-01-25 9:53 ` [PATCH 5/5] genirq/affinity: remove support for allocating interrupt sets Ming Lei
2019-02-07 22:22 ` Bjorn Helgaas
2019-01-25 9:56 ` [PATCH 0/5] genirq/affinity: introduce .setup_affinity to support " Ming Lei