From: Benjamin Herrenschmidt <benh@kernel.crashing.org>
To: "David S. Miller" <davem@redhat.com>
Cc: Jeff Garzik <jgarzik@mandrakesoft.com>,
	<linux-kernel@vger.kernel.org>,
	"Albert D. Cahalan" <acahalan@cs.uml.edu>,
	Tom Gall <tom_gall@vnet.ibm.com>
Subject: Re: Going beyond 256 PCI buses
Date: Thu, 14 Jun 2001 23:30:21 +0200
Message-ID: <20010614213021.3814@smtp.wanadoo.fr>
In-Reply-To: <15145.6960.267459.725096@pizda.ninka.net>

>
>Bus 0 is controller 0, of whatever bus type that happens to be.
>If we want to do something special we could create something
>like /proc/bus/root or whatever, but I feel this unnecessary.

<old rant>

Mostly, except for one thing: legacy devices expecting ISA-like
ops on a given domain, which currently need some way to know
which PCI bus holds the ISA bus.

While we are at it, I'd be really glad if we could agree on a
way to abstract the current PIO scheme so it accounts for the
fact that any domain can have "legacy ISA-like" devices.

One example is that any domain can have a VGA controller that
requires a bit of legacy PIO & ISA-mem stuff. In the same vein,
any domain can have an ISA bridge used to wire up 16-bit devices.

Another example is an embedded system which could use the
domain abstraction to represent different I/O buses on which
old-style 16-bit chips are wired.

I believe there will always be a need for some platform-specific
hacking at probe time to handle those, but we can at least make
the inx/outx functions/macros compatible with such a scheme,
possibly by requiring an ioremap equivalent to be done first so
that we stop passing them real PIO addresses and instead pass a
cookie obtained in various platform-specific ways.
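
Just to illustrate the shape of it (all the names below are made
up for illustration, nothing of the sort exists today, so take it
as a rough sketch rather than a proposed interface):

/* Opaque cookie; its contents are platform specific: it could be
 * a virtual address on machines where PIO is memory mapped, or a
 * pointer to a small structure elsewhere. */
struct pio_cookie;

/* The platform maps ports [port, port + len) of the legacy I/O
 * space of the given domain and hands back a cookie, much like
 * ioremap() does for MMIO. */
struct pio_cookie *pio_map(int domain, unsigned long port,
			   unsigned long len);
void pio_unmap(struct pio_cookie *cookie);

/* The inx/outx equivalents then take the cookie plus an offset
 * instead of a raw port number. */
unsigned char cookie_inb(struct pio_cookie *cookie, unsigned long off);
void cookie_outb(unsigned char val, struct pio_cookie *cookie,
		 unsigned long off);

A VGA driver sitting on, say, domain 2 would then do something like:

	/* Map the 0x3C0-0x3DF VGA register block of domain 2. */
	struct pio_cookie *vga = pio_map(2, 0x3c0, 0x20);

	cookie_outb(0x0c, vga, 0x3d4 - 0x3c0);	/* CRTC index register */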

For the case of PCI (which would handle both the VGA case and
the multiple PCI<->ISA bridge case), one possibility is to
provide a function returning resources for the "legacy" PIO
and MMIO regions, if any, on a given domain. This matters
especially for ISA memory (used mostly for VGA), as host
controllers on non-x86 platforms usually have a special window
somewhere in the bus space for generating <1MB memory cycles on
the PCI bus.
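
One way it could look (again just a sketch, the name and exact
shape are made up and not an existing interface):

#include <linux/ioport.h>	/* struct resource */

/* Hypothetical helper filled in by the host bridge code: reports
 * the windows the given domain uses to generate legacy cycles
 * (the I/O window for low port accesses and the memory window
 * covering the VGA/ISA memory range), or -ENODEV if the domain
 * cannot generate legacy cycles at all. */
int pci_domain_legacy_ranges(int domain,
			     struct resource *io,   /* legacy I/O window */
			     struct resource *mem); /* legacy mem window */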

</old rant>

Ben.



Thread overview: 44+ messages
2001-06-13 10:02 Going beyond 256 PCI buses Tom Gall
2001-06-13 17:17 ` Albert D. Cahalan
2001-06-13 18:29   ` Tom Gall
2001-06-14 14:14 ` Jeff Garzik
2001-06-14 15:15   ` David S. Miller
2001-06-14 17:59   ` Jonathan Lundell
2001-06-14 20:50     ` Jonathan Lundell
2001-06-14 14:24 ` David S. Miller
2001-06-14 14:32   ` Jeff Garzik
2001-06-14 14:42   ` David S. Miller
2001-06-14 15:29     ` Jeff Garzik
2001-06-14 15:33       ` Jeff Garzik
2001-06-14 18:01   ` Albert D. Cahalan
2001-06-14 18:47   ` David S. Miller
2001-06-14 19:04     ` Albert D. Cahalan
2001-06-14 19:12     ` David S. Miller
2001-06-14 19:41       ` Jeff Garzik
2001-06-14 19:57       ` David S. Miller
2001-06-14 20:08         ` Jeff Garzik
2001-06-14 20:14         ` David S. Miller
2001-06-14 21:30           ` Benjamin Herrenschmidt [this message]
2001-06-14 21:46             ` Jeff Garzik
2001-06-14 21:48             ` David S. Miller
2001-06-14 21:57               ` Benjamin Herrenschmidt
2001-06-14 22:12               ` David S. Miller
2001-06-14 22:29                 ` Benjamin Herrenschmidt
2001-06-14 22:49                 ` David S. Miller
2001-06-14 23:35                   ` Benjamin Herrenschmidt
2001-06-14 23:35                 ` VGA handling was [Re: Going beyond 256 PCI buses] James Simmons
2001-06-14 23:42                 ` David S. Miller
2001-06-14 23:55                   ` James Simmons
2001-06-15 15:14                     ` Pavel Machek
2001-06-15  2:06                   ` Albert D. Cahalan
2001-06-15  8:52                   ` Matan Ziv-Av
2001-06-14 21:35           ` Going beyond 256 PCI buses David S. Miller
2001-06-14 21:46             ` Benjamin Herrenschmidt
2001-06-16 21:32           ` Jeff Garzik
2001-06-16 23:29             ` Benjamin Herrenschmidt
2001-06-15  8:42       ` Geert Uytterhoeven
2001-06-15 15:38       ` David S. Miller
2001-06-14 19:03   ` David S. Miller
2001-06-14 20:56     ` David S. Miller
2001-06-14 15:13 ` Jonathan Lundell
2001-06-14 15:17   ` Jeff Garzik
