From: Jason Gunthorpe <jgg@nvidia.com>
To: Cornelia Huck <cohuck@redhat.com>, kvm@vger.kernel.org
Cc: Alex Williamson <alex.williamson@redhat.com>,
Yishai Hadas <yishaih@nvidia.com>
Subject: [PATCH v3] vfio/pci: Make vfio_pci_regops->rw() return ssize_t
Date: Wed, 21 Jul 2021 10:05:48 -0300
Message-ID: <0-v3-5db12d1bf576+c910-vfio_rw_jgg@nvidia.com>
From: Yishai Hadas <yishaih@nvidia.com>
The only implementation of this, in the IGD code, returns -ERRNO, which
is implicitly converted to a size_t and then cast back and returned as
an ssize_t in vfio_pci_rw().

Fix the vfio_pci_regops->rw() return type to be ssize_t so everything
is consistent.
Fixes: 28541d41c9e0 ("vfio/pci: Add infrastructure for additional device specific regions")
Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/vfio/pci/vfio_pci_igd.c | 10 +++++-----
drivers/vfio/pci/vfio_pci_private.h | 2 +-
2 files changed, 6 insertions(+), 6 deletions(-)
v3:
- Fix commit subject and add missing Signed-off-by
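
As an aside (not part of the patch itself), a minimal user-space sketch
of the size_t -> ssize_t round trip described in the commit message; the
helper name and value are made up for illustration, and only show that a
negative errno survives the trip through the unsigned type by
implementation-defined conversion rather than by design:

#include <errno.h>
#include <stdio.h>
#include <sys/types.h>

/* Hypothetical callback with the old, unsigned return type. */
static size_t old_style_rw(void)
{
	return -EINVAL;		/* -22 wraps to a huge unsigned value */
}

int main(void)
{
	size_t raw = old_style_rw();
	ssize_t ret = (ssize_t)raw;	/* what vfio_pci_rw() effectively does */

	printf("as size_t : %zu\n", raw);	/* 18446744073709551594 on 64-bit */
	printf("as ssize_t: %zd\n", ret);	/* -22 again, via implementation-defined conversion */

	return 0;
}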
diff --git a/drivers/vfio/pci/vfio_pci_igd.c b/drivers/vfio/pci/vfio_pci_igd.c
index 228df565e9bc40..aa0a29fd276285 100644
--- a/drivers/vfio/pci/vfio_pci_igd.c
+++ b/drivers/vfio/pci/vfio_pci_igd.c
@@ -25,8 +25,8 @@
#define OPREGION_RVDS 0x3c2
#define OPREGION_VERSION 0x16
-static size_t vfio_pci_igd_rw(struct vfio_pci_device *vdev, char __user *buf,
- size_t count, loff_t *ppos, bool iswrite)
+static ssize_t vfio_pci_igd_rw(struct vfio_pci_device *vdev, char __user *buf,
+ size_t count, loff_t *ppos, bool iswrite)
{
unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) - VFIO_PCI_NUM_REGIONS;
void *base = vdev->region[i].data;
@@ -160,9 +160,9 @@ static int vfio_pci_igd_opregion_init(struct vfio_pci_device *vdev)
return ret;
}
-static size_t vfio_pci_igd_cfg_rw(struct vfio_pci_device *vdev,
- char __user *buf, size_t count, loff_t *ppos,
- bool iswrite)
+static ssize_t vfio_pci_igd_cfg_rw(struct vfio_pci_device *vdev,
+ char __user *buf, size_t count, loff_t *ppos,
+ bool iswrite)
{
unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) - VFIO_PCI_NUM_REGIONS;
struct pci_dev *pdev = vdev->region[i].data;
diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
index 5a36272cecbf94..bbc56c857ef081 100644
--- a/drivers/vfio/pci/vfio_pci_private.h
+++ b/drivers/vfio/pci/vfio_pci_private.h
@@ -56,7 +56,7 @@ struct vfio_pci_device;
struct vfio_pci_region;
struct vfio_pci_regops {
- size_t (*rw)(struct vfio_pci_device *vdev, char __user *buf,
+ ssize_t (*rw)(struct vfio_pci_device *vdev, char __user *buf,
size_t count, loff_t *ppos, bool iswrite);
void (*release)(struct vfio_pci_device *vdev,
struct vfio_pci_region *region);
--
2.32.0