From: "Krzysztof Wilczyński" <kw@linux.com>
To: Paul Menzel <pmenzel@molgen.mpg.de>
Cc: "Jörg Rödel" <joro@8bytes.org>,
	"Suravee Suthikulpanit" <suravee.suthikulpanit@amd.com>,
	"Bjorn Helgaas" <bhelgaas@google.com>,
	iommu@lists.linux-foundation.org,
	"Thomas Gleixner" <tglx@linutronix.de>,
	"Ingo Molnar" <mingo@redhat.com>,
	"Borislav Petkov" <bp@alien8.de>,
	x86@kernel.org, LKML <linux-kernel@vger.kernel.org>,
	linux-pci@vger.kernel.org
Subject: Re: How to reduce PCI initialization from 5 s (1.5 s adding them to IOMMU groups)
Date: Mon, 8 Nov 2021 18:18:19 +0100	[thread overview]
Message-ID: <YYlb2w1UVaiVYigW@rocinante> (raw)
In-Reply-To: <de6706b2-4ea5-ce68-6b72-02090b98630f@molgen.mpg.de>

Hi Paul,

> On a PowerEdge T440/021KCD, BIOS 2.11.2 04/22/2021, Linux 5.10.70 takes
> almost five seconds to initialize PCI. According to the timestamps, 1.5 s
> are from assigning the PCI devices to the 142 IOMMU groups.
[...]
> Is there anything that could be done to reduce the time?

I am curious - why is this a problem?  Are you power-cycling your servers
so often that the cumulative time spent enumerating PCI devices and adding
them to IOMMU groups becomes a problem?

I am simply wondering why you singled out PCI enumeration in particular as
slow, especially given that large server hardware tends to have (most of
the time, in my experience) rather long initialisation times, whether
powering on from off or after a power cycle.  It can take a while before
the actual operating system itself even starts.

Bjorn and I talked about this briefly, and there might be an option to add
some caching, as we suspect that the culprit here is the PCI configuration
space reads done for each device, which can be slow on some platforms.
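
Just to illustrate the kind of caching I have in mind, here is a rough
user-space sketch (not kernel code; slow_config_read() is a made-up
stand-in for the real accessor): each config dword is fetched from the
device at most once, and every later read of the same offset is served
from memory:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative sketch of caching config space reads (user space, not a
 * patch).  Each dword is read from the "device" at most once; repeated
 * reads of the same offset are served from the cached copy.
 */

#define CFG_SPACE_SIZE	4096
#define CFG_DWORDS	(CFG_SPACE_SIZE / 4)

struct cfg_cache {
	uint32_t dword[CFG_DWORDS];
	uint8_t valid[CFG_DWORDS / 8];	/* one bit per cached dword */
};

/* Stand-in for the real (slow) config space accessor. */
static uint32_t slow_config_read(uint16_t bdf, uint16_t offset)
{
	(void)bdf;
	return 0x10008086u + offset;	/* dummy data for the demo */
}

static uint32_t cached_config_read(struct cfg_cache *c, uint16_t bdf,
				   uint16_t offset)
{
	uint16_t dw = offset / 4;

	if (!(c->valid[dw / 8] & (1u << (dw % 8)))) {
		c->dword[dw] = slow_config_read(bdf, dw * 4);
		c->valid[dw / 8] |= 1u << (dw % 8);
	}
	return c->dword[dw];
}

int main(void)
{
	struct cfg_cache cache = { 0 };

	/* Only the first read reaches the device; the second is cached. */
	printf("dword at 0x00: %08" PRIx32 "\n", cached_config_read(&cache, 0, 0));
	printf("dword at 0x00: %08" PRIx32 "\n", cached_config_read(&cache, 0, 0));
	return 0;
}

In the kernel, the lifetime and invalidation of such a cache would of
course need more thought (config space is not read-only), so please take
this purely as an illustration of the idea.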

However, we would need to profile this to get some quantitative data to see
whether doing anything would even be worthwhile.  It would definitely help
us understand better where the bottlenecks really are and of what magnitude.
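
As a very first pass, even the timestamps already present in dmesg could be
turned into numbers.  The little sketch below (illustrative only; it
assumes the usual "[   12.345678]" timestamp prefix and that running dmesg
is permitted) prints the gap between consecutive kernel messages, so the
slow stretches of the boot - PCI enumeration, the IOMMU group setup -
should stand out:

#include <stdio.h>

/*
 * Quick-and-dirty profiling aid (sketch): report the time between
 * consecutive dmesg lines so that the expensive stretches of the boot
 * stand out.  Only gaps larger than 10 ms are printed.
 */
int main(void)
{
	FILE *f = popen("dmesg", "r");
	char line[1024];
	double prev = -1.0, ts;

	if (!f) {
		perror("dmesg");
		return 1;
	}

	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "[%lf]", &ts) != 1)
			continue;	/* no timestamp on this line */

		if (prev >= 0.0 && ts - prev > 0.010)
			printf("%9.3f ms before: %s", (ts - prev) * 1000.0, line);
		prev = ts;
	}

	pclose(f);
	return 0;
}

Proper tracing (ftrace, initcall_debug, and friends) would give a much
finer-grained picture, but even something this crude would already tell us
which phase of the 5 s dominates.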

I personally don't have access to hardware as large as yours, so I was
wondering whether you would have some time, and be willing, to profile
this for us on the hardware you have.

Let me know what you think.

	Krzysztof

Thread overview: 10 messages

2021-11-05 11:56 How to reduce PCI initialization from 5 s (1.5 s adding them to IOMMU groups) Paul Menzel
2021-11-05 12:04 ` Paul Menzel
2021-11-05 18:53 ` Bjorn Helgaas
2021-11-06 10:42   ` Paul Menzel
2021-11-09 15:31     ` Robin Murphy
2021-11-09 20:32       ` Paul Menzel
2021-11-08 17:18 ` Krzysztof Wilczyński [this message]
2021-11-09 20:25   ` Paul Menzel
2021-11-09 23:10     ` Krzysztof Wilczyński
2021-11-19 14:43       ` Paul Menzel
