linux-kernel.vger.kernel.org archive mirror
* Re: PCI and MWI
       [not found]         ` <20030226160403.A15729@jurassic.park.msu.ru>
@ 2003-03-01  0:44           ` David Brownell
  2003-03-02 16:22             ` Ivan Kokshaysky
  0 siblings, 1 reply; 3+ messages in thread
From: David Brownell @ 2003-03-01  0:44 UTC (permalink / raw)
  To: Ivan Kokshaysky, Jeff Garzik; +Cc: linux-kernel

Ivan Kokshaysky wrote:
> 
> Well, here's some updated bits for 2.5. It's an attempt to fix
> x86 PCI cacheline size. Completely untested, someone with better
> knowledge of x86 CPUs ought to verify this...

It worked for me on K7/Tbred and P6/Coppermine.  No more messages
at driver init about nassty stupd BIOS, or new problems.

But I think the arch code is wrong for CPUs like i386 and i486;
I recall at least one of them having 16 byte cachelines.  PCI
gets used with those, yes?  I wonder if it might not be best to
have cpuinfo_x86 store that value; people don't really expect
to see cpu-specific logic in the pci code.

One minor curiosity:  a multifunction device seemed to share
PCI_CACHE_LINE_SIZE between the enabled/active function and ones
without a driver.  Makes sense, the values can never legally
differ, but some more troublesome devices don't do that...

Re Jeff's suggestion to merge this to 2.5 ASAP, sounds right
to me if all the arch code gets worked out up front.  I have
no problem with the idea of enabling it as done here (when
the device is enabled) rather than waiting to enable DMA,
though I'd certainly pay attention to people who know about
devices broken enough to get indigestion that way.

- Dave



> Ivan.
> 
> --- 2.5.63/drivers/pci/pci.c	Mon Feb 24 22:05:14 2003
> +++ linux/drivers/pci/pci.c	Wed Feb 26 15:08:34 2003
> @@ -327,6 +327,8 @@ pci_restore_state(struct pci_dev *dev, u
>  	return 0;
>  }
>  
> +u8 pci_cache_line_size = L1_CACHE_BYTES >> 2;
> +
>  /**
>   * pci_enable_device_bars - Initialize some of a device for use
>   * @dev: PCI device to be initialized
> @@ -343,6 +345,9 @@ pci_enable_device_bars(struct pci_dev *d
>  	int err;
>  
>  	pci_set_power_state(dev, 0);
> +
> +	pci_write_config_byte(dev, PCI_CACHE_LINE_SIZE, pci_cache_line_size);
> +
>  	if ((err = pcibios_enable_device(dev, bars)) < 0)
>  		return err;
>  	return 0;
> @@ -597,7 +602,6 @@ pci_set_master(struct pci_dev *dev)
>  static int
>  pci_generic_prep_mwi(struct pci_dev *dev)
>  {
> -	int rc = 0;
>  	u8 cache_size;
>  
>  	/*
> @@ -607,22 +611,13 @@ pci_generic_prep_mwi(struct pci_dev *dev
>  	 * line set at boot time, the other will not.
>  	 */
>  	pci_read_config_byte(dev, PCI_CACHE_LINE_SIZE, &cache_size);
> -	cache_size <<= 2;
> -	if (cache_size != SMP_CACHE_BYTES) {
> -		printk(KERN_WARNING "PCI: %s PCI cache line size set "
> -		       "incorrectly (%i bytes) by BIOS/FW, ",
> -		       dev->slot_name, cache_size);
> -		if (cache_size > SMP_CACHE_BYTES) {
> -			printk("expecting %i\n", SMP_CACHE_BYTES);
> -			rc = -EINVAL;
> -		} else {
> -			printk("correcting to %i\n", SMP_CACHE_BYTES);
> -			pci_write_config_byte(dev, PCI_CACHE_LINE_SIZE,
> -					      SMP_CACHE_BYTES >> 2);
> -		}
> -	}
> +	if (cache_size == pci_cache_line_size)
> +		return 0;
> +
> +	printk(KERN_WARNING "PCI: cache line size of %d is not supported "
> +	       "by device %s\n", pci_cache_line_size << 2, dev->slot_name);
>  
> -	return rc;
> +	return -EINVAL;
>  }
>  #endif /* !HAVE_ARCH_PCI_MWI */
>  
> --- 2.5.63/arch/i386/pci/common.c	Mon Feb 24 22:05:16 2003
> +++ linux/arch/i386/pci/common.c	Wed Feb 26 15:41:51 2003
> @@ -125,12 +125,22 @@ struct pci_bus * __devinit pcibios_scan_
>  	return pci_scan_bus(busnum, pci_root_ops, NULL);
>  }
>  
> +extern u8 pci_cache_line_size;
> +
>  static int __init pcibios_init(void)
>  {
> +	struct cpuinfo_x86 *c = &boot_cpu_data;
> +
>  	if (!pci_root_ops) {
>  		printk("PCI: System does not support PCI\n");
>  		return 0;
>  	}
> +
> +	pci_cache_line_size = 32 >> 2;
> +	if (c->x86 >= 6 && c->x86_vendor == X86_VENDOR_AMD)
> +		pci_cache_line_size = 64 >> 2;	/* K7 & K8 */
> +	else if (c->x86 > 6)
> +		pci_cache_line_size = 128 >> 2;	/* P4 */
>  
>  	pcibios_resource_survey();
>  
> 





* Re: PCI and MWI
  2003-03-01  0:44           ` PCI and MWI David Brownell
@ 2003-03-02 16:22             ` Ivan Kokshaysky
  2003-03-04 18:15               ` David Brownell
  0 siblings, 1 reply; 3+ messages in thread
From: Ivan Kokshaysky @ 2003-03-02 16:22 UTC (permalink / raw)
  To: David Brownell; +Cc: Ivan Kokshaysky, Jeff Garzik, linux-kernel

On Fri, Feb 28, 2003 at 04:44:49PM -0800, David Brownell wrote:
> It worked for me on K7/Tbred and P6/Coppermine.  No more messages
> at driver init about nassty stupd BIOS, or new problems.

Fine. :-)

> But I think the arch code is wrong for CPUs like i386 and i486;

No.

> I recall at least one of them having 16 byte cachelines.  PCI
> gets used with those, yes?

Right, but a 32-byte PCI cacheline is fine for those. It's perfectly ok
for the PCI cacheline size to be a multiple of the CPU one, but not vice
versa. Actually, we could use 128 for all x86s (to respect the P4), but
that would be obvious overkill.

>  I wonder if it might not be best to
> have cpuinfo_x86 store that value; people don't really expect
> to see cpu-specific logic in the pci code.

Don't know. cpuinfo_x86 is a per-CPU thing, while pci_cache_line_size
is definitely system-wide.

> One minor curiousity:  a multifunction device seemed to share
> PCI_CACHE_LINE_SIZE between the enabled/active function and ones
> without a driver.  Makes sense, the values can never legally
> differ, but some more troublesome devices don't do that...

Hmm, we treat each function as an independent PCI device, as per PCI
spec. Sharing the config space between functions sounds like a severe
hardware bug. Do you have any examples?

> Re Jeff's suggestion to merge this to 2.5 ASAP, sounds right
> to me if all the arch code gets worked out up front.  I have
> no problem with the idea of enabling it as done here (when
> the device is enabled) rather than waiting to enable DMA,
> though I'd certainly pay attention to people who know about
> devices broken enough to get indigestion that way.

Well, in 2.4 on Alpha and ARM we still have the pdev_enable_device() thing,
which is a mostly __init-only variant of pci_enable_device(),
but it also forces correct cacheline size and reasonable (more or less)
latency timer for *all* devices. Nobody had problems with it over the last
2 years, so I believe that setting cacheline size in pci_enable_device()
rather than in pci_set_master() is the right thing (and agrees with the
spec better).

Ivan.


* Re: PCI and MWI
  2003-03-02 16:22             ` Ivan Kokshaysky
@ 2003-03-04 18:15               ` David Brownell
  0 siblings, 0 replies; 3+ messages in thread
From: David Brownell @ 2003-03-04 18:15 UTC (permalink / raw)
  To: Ivan Kokshaysky; +Cc: Jeff Garzik, linux-kernel

Ivan Kokshaysky wrote:
> David Brownell wrote:
> 
>> I wonder if it might not be best to
>>have cpuinfo_x86 store that value; people don't really expect
>>to see cpu-specific logic in the pci code.
> 
> 
> Don't know. cpuinfo_x86 is a per-CPU thing, while pci_cache_line_size
> is definitely system-wide.

So pci_cache_line_size = max (all L1 cacheline sizes in the system)
with some possible fudging (that i486 issue, etc.).  But your patch
would seem to handle most archs correctly already.


>>One minor curiosity:  a multifunction device seemed to share
>>PCI_CACHE_LINE_SIZE between the enabled/active function and ones
>>without a driver.  Makes sense, the values can never legally
>>differ, but some more troublesome devices don't do that...
> 
> 
> Hmm, we treat each function as an independent PCI device, as per PCI
> spec. Sharing the config space between functions sounds like a severe
> hardware bug. Do you have any examples?

I just happened to notice _this specific case_, which, as I mentioned,
sure doesn't feel like a hardware bug to me!  The specific device
was a Philips ISP 1561 USB 2.0 controller (two OHCI, one EHCI), and
the two more troublesome (less forgiving) devices were from VIA.

So that machine had quite a few high speed USB controllers (including
a NetChip 2280 :) running Linux, all using MWI and no particular
problems being visible ... and no messages about broken BIOS setup.


>>Re Jeff's suggestion to merge this to 2.5 ASAP, sounds right
>>to me if all the arch code gets worked out up front.  I have
>>no problem with the idea of enabling it as done here (when
>>the device is enabled) rather than waiting to enable DMA,
>>though I'd certainly pay attention to people who know about
>>devices broken enough to get indigestion that way.
> 
> 
> Well, in 2.4 on Alpha and ARM we still have the pdev_enable_device() thing,
> which is a mostly __init-only variant of pci_enable_device(),
> but it also forces correct cacheline size and reasonable (more or less)
> latency timer for *all* devices. Nobody had problems with it over the last
> 2 years, so I believe that setting cacheline size in pci_enable_device()
> rather than in pci_set_master() is the right thing (and agrees with the
> spec better).

Sounds good to me then.

- Dave




