From: Jeremy Fitzhardinge <jeremy@goop.org>
To: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Ingo Molnar <mingo@redhat.com>, Thomas Gleixner <tglx@linutronix.de>, "H. Peter Anvin" <hpa@zytor.com>, the arch/x86 maintainers <x86@kernel.org>, Linux Kernel Mailing List <linux-kernel@vger.kernel.org>, Xen-devel <xen-devel@lists.xensource.com>, Keir Fraser <keir.fraser@eu.citrix.com>
Subject: Re: [PATCH RFC] x86/acpi: don't ignore I/O APICs just because there's no local APIC
Date: Thu, 18 Jun 2009 14:09:17 -0700
Message-ID: <4A3AACFD.5020805@goop.org>
In-Reply-To: <m1ab45i8vs.fsf@fess.ebiederm.org>

On 06/18/09 13:28, Eric W. Biederman wrote:
>>> How does Xen handle domU with hardware directly mapped?
>>>
>> We call that "pci passthrough".  Dom0 will bind the gsi to a pirq as
>> usual, and then pass the pirq through to the domU.  The domU will bind
>> the pirq to an event channel, which gets mapped to a Linux irq and
>> handled as usual.
>
> Interesting.  How does domU find out the pirq -> pci device mapping?

Hm, I haven't looked at it closely, but conventionally it would be via
xenbus (which is how all the split frontend-backend drivers
communicate).

>> It is already; once the pirq is prepared, the process is the same in
>> both cases.
>
> I 3/4 believe that.  map_domain_pirq appears to set up a per-domain
> mapping between the hardware vector and the irq name it is known as.
> So I don't see how that works for other domains.
>
> msi is set up on a per-domain basis.

Ah, OK.  The pirq is set up for a specific domain rather than being
global (otherwise it would need some kind of "which domain can access
which pirq" table).  dom0 can either create a pirq for itself or for
someone else, and the final user of the pirq binds it to a domain-local
evtchn.  I think.  I really haven't looked into the pci-passthrough
parts very closely yet.

    J