From: Peter Xu <peterx@redhat.com>
To: Alex Williamson <alex.williamson@redhat.com>
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [RFC PATCH] vfio/pci: Add ioeventfd support
Date: Fri, 9 Feb 2018 15:05:11 +0800	[thread overview]
Message-ID: <20180209070511.GD2783@xz-mi> (raw)
In-Reply-To: <20180207000731.32764.95992.stgit@gimli.home>

On Tue, Feb 06, 2018 at 05:08:14PM -0700, Alex Williamson wrote:

[...]

> +long vfio_pci_ioeventfd(struct vfio_pci_device *vdev, loff_t offset,
> +			uint64_t data, int count, int fd)
> +{
> +	struct pci_dev *pdev = vdev->pdev;
> +	loff_t pos = offset & VFIO_PCI_OFFSET_MASK;
> +	int ret, bar = VFIO_PCI_OFFSET_TO_INDEX(offset);
> +	struct vfio_pci_ioeventfd *ioeventfd;
> +	int (*handler)(void *, void *);
> +	unsigned long val;
> +
> +	/* Only support ioeventfds into BARs */
> +	if (bar > VFIO_PCI_BAR5_REGION_INDEX)
> +		return -EINVAL;
> +
> +	if (pos + count > pci_resource_len(pdev, bar))
> +		return -EINVAL;
> +
> +	/* Disallow ioeventfds working around MSI-X table writes */
> +	if (bar == vdev->msix_bar &&
> +	    !(pos + count <= vdev->msix_offset ||
> +	      pos >= vdev->msix_offset + vdev->msix_size))
> +		return -EINVAL;
> +
> +	switch (count) {
> +	case 1:
> +		handler = &vfio_pci_ioeventfd_handler8;
> +		val = data;
> +		break;
> +	case 2:
> +		handler = &vfio_pci_ioeventfd_handler16;
> +		val = le16_to_cpu(data);
> +		break;
> +	case 4:
> +		handler = &vfio_pci_ioeventfd_handler32;
> +		val = le32_to_cpu(data);
> +		break;
> +#ifdef iowrite64
> +	case 8:
> +		handler = &vfio_pci_ioeventfd_handler64;
> +		val = le64_to_cpu(data);
> +		break;
> +#endif
> +	default:
> +		return -EINVAL;
> +	}
> +
> +	ret = vfio_pci_setup_barmap(vdev, bar);
> +	if (ret)
> +		return ret;
> +
> +	mutex_lock(&vdev->ioeventfds_lock);
> +
> +	list_for_each_entry(ioeventfd, &vdev->ioeventfds_list, next) {
> +		if (ioeventfd->pos == pos && ioeventfd->bar == bar &&
> +		    ioeventfd->data == data && ioeventfd->count == count) {
> +			if (fd == -1) {
> +				vfio_virqfd_disable(&ioeventfd->virqfd);
> +				list_del(&ioeventfd->next);
> +				kfree(ioeventfd);
> +				ret = 0;
> +			} else
> +				ret = -EEXIST;
> +
> +			goto out_unlock;
> +		}
> +	}
> +
> +	if (fd < 0) {
> +		ret = -ENODEV;
> +		goto out_unlock;
> +	}
> +
> +	ioeventfd = kzalloc(sizeof(*ioeventfd), GFP_KERNEL);
> +	if (!ioeventfd) {
> +		ret = -ENOMEM;
> +		goto out_unlock;
> +	}
> +
> +	ioeventfd->pos = pos;
> +	ioeventfd->bar = bar;
> +	ioeventfd->data = data;
> +	ioeventfd->count = count;
> +
> +	ret = vfio_virqfd_enable(vdev->barmap[ioeventfd->bar] + ioeventfd->pos,
> +				 handler, NULL, (void *)val,
> +				 &ioeventfd->virqfd, fd);
> +	if (ret) {
> +		kfree(ioeventfd);
> +		goto out_unlock;
> +	}
> +
> +	list_add(&ioeventfd->next, &vdev->ioeventfds_list);

Is there a limit on how many ioeventfds can be created?

IIUC we'll create an ioeventfd "automatically" once an MMIO addr/data
pair has been triggered N=10 times in a row.  Would it be safer to
have a cap on the maximum number of ioeventfds?  Otherwise I'm not
sure whether a malicious guest could exhaust host memory by sending:

- addr1/data1, 10 times
- addr2/data2, 10 times
- ...

to create unlimited ioeventfds?  Thanks,

-- 
Peter Xu

