linux-um.lists.infradead.org archive mirror
From: Johannes Berg <johannes@sipsolutions.net>
To: Vincent Whitchurch <vincent.whitchurch@axis.com>,
	Richard Weinberger <richard@nod.at>,
	Anton Ivanov <anton.ivanov@cambridgegreys.com>
Cc: robh@kernel.org, devicetree@lists.infradead.org,
	 linux-um@lists.infradead.org, linux-kernel@vger.kernel.org,
	kernel@axis.com
Subject: Re: [PATCH] virt-pci: add platform bus support
Date: Mon, 13 Feb 2023 18:54:49 +0100	[thread overview]
Message-ID: <d3f6d627290bb1a6a1fcfdfd5fad915578453e02.camel@sipsolutions.net> (raw)
In-Reply-To: <20230127-uml-pci-platform-v1-1-ec6b45d2829f@axis.com>

On Fri, 2023-01-27 at 15:30 +0100, Vincent Whitchurch wrote:
> This driver registers PCI busses, but the underlying virtio protocol
> could just as easily be used to provide a platform bus instead.  If the
> virtio device node in the devicetree indicates that it's compatible with
> simple-bus, register platform devices instead of handling it as a PCI
> bus.
> 
> Only one platform bus is allowed and the logical MMIO region for the
> platform bus is placed at an arbitrarily-chosen address away from the
> PCI region.

So ... hm. I'm not sure I _like_ this so much. It feels decidedly
strange to put it this way.

But it looks like Richard already applied it, so I suppose we can treat
this as a kind of retroactive information gathering. :)


> Signed-off-by: Vincent Whitchurch <vincent.whitchurch@axis.com>
> ---
> My first approach to getting platform drivers working on UML was by
> adding a minimal PCI-to-platform bridge driver, which worked without
> modifications to virt-pci, but that got shot down:
> 
>  https://lore.kernel.org/lkml/20230120-simple-mfd-pci-v1-1-c46b3d6601ef@axis.com/

Reading through that ... OK that isn't fun either :-)

Sounds like there's a use case for something else though, but the PCI
IDs issue also makes that thorny.

> @@ -48,6 +51,7 @@ struct um_pci_device_reg {
>  
>  static struct pci_host_bridge *bridge;
>  static DEFINE_MUTEX(um_pci_mtx);
> +static struct um_pci_device *um_pci_platform_device;
>  static struct um_pci_device_reg um_pci_devices[MAX_DEVICES];
>  static struct fwnode_handle *um_pci_fwnode;
>  static struct irq_domain *um_pci_inner_domain;
> @@ -480,6 +484,9 @@ static void um_pci_handle_irq_message(struct virtqueue *vq,
>  	struct virtio_device *vdev = vq->vdev;
>  	struct um_pci_device *dev = vdev->priv;
>  
> +	if (!dev->irq)
> +		return;
> 

Does that mean platform devices don't have interrupts, or does that mean
not all of them must have interrupts?

I'll note that this would also allow the device to send an MSI, which
feels a bit wrong? But I guess it doesn't really matter.


So let me ask this: Conceptually, wouldn't the "right" way to handle
this be a new virtio device and protocol and everything, with a new
driver to handle it? I realise that would likely lead to quite a bit of
code duplication; for now I just want to understand the concept here a
bit better.

Such a driver would then define its own messages, requiring (I guess)
the equivalents of
 * VIRTIO_PCIDEV_OP_INT,
 * VIRTIO_PCIDEV_OP_MMIO_READ,
 * VIRTIO_PCIDEV_OP_MMIO_WRITE, and
 * (maybe) VIRTIO_PCIDEV_OP_MMIO_MEMSET,

but not
 * VIRTIO_PCIDEV_OP_CFG_READ,
 * VIRTIO_PCIDEV_OP_CFG_WRITE,
 * VIRTIO_PCIDEV_OP_MSI and
 * VIRTIO_PCIDEV_OP_PME,

right?
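
For what it's worth, the split above could be sketched as a hypothetical
op set for such a platform-only protocol. None of this exists anywhere;
the names and values below are purely illustrative, just mirroring the
subset of the virt-pci ops that would seem to carry over:

```c
/*
 * Hypothetical message ops for a virtio platform-device protocol.
 * Illustrative only -- not part of any existing header. The idea is
 * that interrupt delivery and MMIO access survive, while the config
 * space, MSI, and PME ops (which only make sense for PCI) are dropped.
 */
enum virtio_platdev_ops {
	VIRTIO_PLATDEV_OP_RESERVED = 0,
	VIRTIO_PLATDEV_OP_INT,		/* ~ VIRTIO_PCIDEV_OP_INT */
	VIRTIO_PLATDEV_OP_MMIO_READ,	/* ~ VIRTIO_PCIDEV_OP_MMIO_READ */
	VIRTIO_PLATDEV_OP_MMIO_WRITE,	/* ~ VIRTIO_PCIDEV_OP_MMIO_WRITE */
	VIRTIO_PLATDEV_OP_MMIO_MEMSET,	/* ~ VIRTIO_PCIDEV_OP_MMIO_MEMSET */
	/* no CFG_READ/CFG_WRITE/MSI/PME equivalents */
};
```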

How much code would we actually duplicate? Most of virt-pci is dedicated
to the mess of PCI MSI domains, bridges, etc.

The limitation to a single device feels odd, and the fact that you have
to put it into DT under the PCI node also seems odd. OTOH, the platform
device has to live _somewhere_, right?

I guess I'm not really sure how to use it, but I suppose that's OK too
:)

Thanks,
johannes


Thread overview: 6+ messages
2023-01-27 14:30 [PATCH] virt-pci: add platform bus support Vincent Whitchurch
2023-02-13 17:54 ` Johannes Berg [this message]
2023-02-13 18:09   ` Richard Weinberger
2023-02-14 12:12   ` Vincent Whitchurch
2023-02-28 17:39     ` Johannes Berg
2023-03-07  0:31     ` Rob Herring
