From: Jiasheng Jiang <jiasheng@iscas.ac.cn>
To: agk@redhat.com, snitzer@kernel.org
Cc: dm-devel@redhat.com, Jiasheng Jiang <jiasheng@iscas.ac.cn>,
	linux-kernel@vger.kernel.org
Subject: [dm-devel] [PATCH] dm stats: Add missing check for alloc_percpu
Date: Thu, 16 Mar 2023 14:55:06 +0800	[thread overview]
Message-ID: <20230316065506.17821-1-jiasheng@iscas.ac.cn> (raw)

Add a check for the return value of alloc_percpu() and return -ENOMEM if
the allocation fails.
To propagate the error, change dm_stats_init() to return an int and check
its return value in its caller, alloc_dev().

Fixes: fd2ed4d25270 ("dm: add statistics support")
Signed-off-by: Jiasheng Jiang <jiasheng@iscas.ac.cn>
---
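For context, here is a minimal, self-contained sketch of the error-handling
pattern this patch applies (the names example_state, example_init and
example_cleanup are purely illustrative and not part of dm-stats): allocate
the per-CPU data, fail with -ENOMEM when alloc_percpu() returns NULL, and
let the caller unwind.

#include <linux/percpu.h>
#include <linux/errno.h>

struct example_state {
	unsigned long counter;	/* illustrative per-CPU field */
};

static struct example_state __percpu *example_pcpu;

static int example_init(void)
{
	int cpu;

	/* alloc_percpu() may fail under memory pressure; never assume success. */
	example_pcpu = alloc_percpu(struct example_state);
	if (!example_pcpu)
		return -ENOMEM;	/* caller must check this and bail out */

	for_each_possible_cpu(cpu)
		per_cpu_ptr(example_pcpu, cpu)->counter = 0;

	return 0;
}

static void example_cleanup(void)
{
	free_percpu(example_pcpu);
}
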
 drivers/md/dm-stats.c | 7 ++++++-
 drivers/md/dm-stats.h | 2 +-
 drivers/md/dm.c       | 4 +++-
 3 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/md/dm-stats.c b/drivers/md/dm-stats.c
index c21a19ab73f7..db2d997a6c18 100644
--- a/drivers/md/dm-stats.c
+++ b/drivers/md/dm-stats.c
@@ -188,7 +188,7 @@ static int dm_stat_in_flight(struct dm_stat_shared *shared)
 	       atomic_read(&shared->in_flight[WRITE]);
 }
 
-void dm_stats_init(struct dm_stats *stats)
+int dm_stats_init(struct dm_stats *stats)
 {
 	int cpu;
 	struct dm_stats_last_position *last;
@@ -197,11 +197,16 @@ void dm_stats_init(struct dm_stats *stats)
 	INIT_LIST_HEAD(&stats->list);
 	stats->precise_timestamps = false;
 	stats->last = alloc_percpu(struct dm_stats_last_position);
+	if (!stats->last)
+		return -ENOMEM;
+
 	for_each_possible_cpu(cpu) {
 		last = per_cpu_ptr(stats->last, cpu);
 		last->last_sector = (sector_t)ULLONG_MAX;
 		last->last_rw = UINT_MAX;
 	}
+
+	return 0;
 }
 
 void dm_stats_cleanup(struct dm_stats *stats)
diff --git a/drivers/md/dm-stats.h b/drivers/md/dm-stats.h
index 0bc152c8e4f3..c6728c8b4159 100644
--- a/drivers/md/dm-stats.h
+++ b/drivers/md/dm-stats.h
@@ -21,7 +21,7 @@ struct dm_stats_aux {
 	unsigned long long duration_ns;
 };
 
-void dm_stats_init(struct dm_stats *st);
+int dm_stats_init(struct dm_stats *st);
 void dm_stats_cleanup(struct dm_stats *st);
 
 struct mapped_device;
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index eace45a18d45..b6ace995b9ca 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -2097,7 +2097,9 @@ static struct mapped_device *alloc_dev(int minor)
 	if (!md->pending_io)
 		goto bad;
 
-	dm_stats_init(&md->stats);
+	r = dm_stats_init(&md->stats);
+	if (r < 0)
+		goto bad;
 
 	/* Populate the mapping, nobody knows we exist yet */
 	spin_lock(&_minor_lock);
-- 
2.25.1

--
dm-devel mailing list
dm-devel@redhat.com
https://listman.redhat.com/mailman/listinfo/dm-devel

