From: Klaus Jensen <its@irrelevant.dk>
To: Gollu Appalanaidu <anaidu.gollu@samsung.com>
Cc: fam@euphon.net, kwolf@redhat.com, qemu-block@nongnu.org,
qemu-devel@nongnu.org, mreitz@redhat.com, stefanha@redhat.com,
kbusch@kernel.org
Subject: Re: [PATCH 1/2] hw/block/nvme: consider metadata read aio return value in compare
Date: Fri, 16 Apr 2021 14:06:42 +0200 [thread overview]
Message-ID: <YHl90ktSdfhCOdYZ@apples.localdomain> (raw)
In-Reply-To: <20210416072234.25732-1-anaidu.gollu@samsung.com>
On Apr 16 12:52, Gollu Appalanaidu wrote:
>Currently, in the compare command, the return value of the metadata
>blk_aio_preadv read is ignored. Check it and complete the block
>accounting accordingly.
>
>Signed-off-by: Gollu Appalanaidu <anaidu.gollu@samsung.com>
>---
> hw/block/nvme.c | 11 +++++++++++
> 1 file changed, 11 insertions(+)
>
>diff --git a/hw/block/nvme.c b/hw/block/nvme.c
>index 624a1431d0..c2727540f1 100644
>--- a/hw/block/nvme.c
>+++ b/hw/block/nvme.c
>@@ -2369,10 +2369,19 @@ static void nvme_compare_mdata_cb(void *opaque, int ret)
> uint32_t reftag = le32_to_cpu(rw->reftag);
> struct nvme_compare_ctx *ctx = req->opaque;
> g_autofree uint8_t *buf = NULL;
>+ BlockBackend *blk = ns->blkconf.blk;
>+ BlockAcctCookie *acct = &req->acct;
>+ BlockAcctStats *stats = blk_get_stats(blk);
> uint16_t status = NVME_SUCCESS;
>
> trace_pci_nvme_compare_mdata_cb(nvme_cid(req));
>
>+ if (ret) {
>+ block_acct_failed(stats, acct);
>+ nvme_aio_err(req, ret);
>+ goto out;
>+ }
>+
> buf = g_malloc(ctx->mdata.iov.size);
>
> status = nvme_bounce_mdata(n, buf, ctx->mdata.iov.size,
>@@ -2421,6 +2430,8 @@ static void nvme_compare_mdata_cb(void *opaque, int ret)
> goto out;
> }
>
>+ block_acct_done(stats, acct);
>+
> out:
> qemu_iovec_destroy(&ctx->data.iov);
> g_free(ctx->data.bounce);
>--
>2.17.1
>
>
Good fix, thanks! Since there is no crash, data corruption, or other
"bad" behavior, this isn't critical for v6.0.
It might be worth including in a potential stable release though, so
I'll add a Fixes: tag and queue it up.
Reviewed-by: Klaus Jensen <k.jensen@samsung.com>